
Efficient detection of spurious inputs for improving the robustness of MLP networks in practical applications

Neural Computing & Applications

Abstract

The problem of the rejection of patterns not belonging to identified training classes is investigated with respect to Multilayer Perceptron Networks (MLP). The reason for the inherent unreliability of the standard MLP in this respect is explained, and some mechanisms for the enhancement of its rejection performance are considered. Two network configurations are presented as candidates for a more reliable structure, and are compared to the so-called ‘negative training’ approach. The first configuration is an MLP which uses a Gaussian as its activation function, and the second is an MLP with direct connections from the input to the output layer of the network. The networks are examined and evaluated both through the technique of network inversion, and through practical experiments in a pattern classification application. Finally, the model of Radial Basis Function (RBF) networks is also considered in this respect, and its performance is compared to that obtained with the other networks described.
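The core point of the abstract can be illustrated with a minimal numerical sketch. The code below is not the paper's implementation; it is a toy example, with made-up centres and widths, of why a standard sigmoid MLP is inherently unreliable at rejecting spurious inputs while a Gaussian activation supports rejection by thresholding: a sigmoid unit saturates to a confident value arbitrarily far from the training data, whereas a Gaussian unit's response decays with distance from it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in 2-D: class 0 clustered near (-1, 0), class 1 near (+1, 0).
X = np.vstack([rng.normal([-1.0, 0.0], 0.2, (50, 2)),
               rng.normal([+1.0, 0.0], 0.2, (50, 2))])

def gaussian_hidden_output(x, centres, width=0.5):
    """Gaussian hidden units: response decays to ~0 away from the training data."""
    d2 = ((x - centres) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def sigmoid_hidden_output(x, weights, bias):
    """Sigmoid hidden unit: saturates at 0 or 1, even for inputs far from any class."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

centres = np.array([[-1.0, 0.0], [1.0, 0.0]])  # hypothetical class centres
near = np.array([0.9, 0.1])    # an input close to class 1
far = np.array([50.0, 50.0])   # a spurious input, far from all training classes

g_near = gaussian_hidden_output(near, centres).max()
g_far = gaussian_hidden_output(far, centres).max()

# The Gaussian response supports rejection by a simple threshold:
reject_far = g_far < 0.5   # True: spurious input rejected
accept_near = g_near > 0.5 # True: genuine input accepted

# The sigmoid unit, by contrast, reports near-maximal activation for the
# spurious input, so no output threshold can distinguish it from a valid one:
s_far = sigmoid_hidden_output(far, np.array([1.0, 0.0]), 0.0)  # ~1.0
```

The same decaying-response property is what makes the RBF networks considered in the paper natural candidates for this task, since their hidden layer is built from such localised units.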




Corresponding author

Correspondence to M. C. Fairhurst.

Cite this article

Vasconcelos, G.C., Fairhurst, M.C. & Bisset, D.L. Efficient detection of spurious inputs for improving the robustness of MLP networks in practical applications. Neural Comput & Applic 3, 202–212 (1995). https://doi.org/10.1007/BF01414645
