Abstract
In this two-part study, we present a new design methodology for neural classifiers. The design procedure uses a multiclass vector quantization (MVQ) algorithm to extract information from the training set. The extracted information suffices to specify the hidden layer in a canonical neural network architecture; it also leads to neuron inhibition rules and, subsequently, to the design of the hidden-layer-to-output map. Part I of this study focused on the MVQ algorithm, how it extracts information from a training set, and how that information directly specifies the hidden layer. In Part II, we consider the design of the hidden-layer-to-output map, which is not straightforward. We note that the MVQ algorithm, as it extracts information, decomposes the design set into disjoint neighborhoods. For each neighborhood, we identify subsets of hidden-layer neurons that are significant sensors for that neighborhood, and for each such subset we construct an output map. Inhibition rules ensure that the proper output map is activated. In benchmark simulations, the overall design performs so well that we are hard pressed to identify bounds on its performance.
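The pipeline described in the abstract (quantize the training set into class-labeled neighborhoods, treat the codebook vectors as hidden-layer sensors, and let inhibition select a neighborhood's output map) can be sketched roughly as follows. This is a hypothetical simplification, not the authors' MVQ algorithm: the `quantize` stand-in uses per-class k-means, and the inhibition rule is reduced to winner-take-all over sensor responses.

```python
import numpy as np

def quantize(X, y, n_codes_per_class=2, n_iter=20, seed=0):
    """Crude per-class k-means, a stand-in for the MVQ algorithm.
    Returns centroids (hidden-layer 'sensors') and their class labels."""
    rng = np.random.default_rng(seed)
    centroids, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        idx = rng.choice(len(Xc), n_codes_per_class, replace=False)
        C = Xc[idx].astype(float)
        for _ in range(n_iter):
            # assign each point of class c to its nearest centroid, then update
            assign = np.argmin(((Xc[:, None] - C[None]) ** 2).sum(-1), axis=1)
            for k in range(len(C)):
                if np.any(assign == k):
                    C[k] = Xc[assign == k].mean(axis=0)
        centroids.append(C)
        labels.extend([c] * len(C))
    return np.vstack(centroids), np.array(labels)

def hidden_layer(X, centroids):
    """Each hidden neuron responds to distance from its centroid (RBF-like)."""
    d2 = ((X[:, None] - centroids[None]) ** 2).sum(-1)
    return np.exp(-d2)

def classify(x, centroids, code_labels):
    """Simplified inhibition rule: the neighborhood whose sensor responds most
    strongly inhibits all others; here each neighborhood's 'output map' is
    reduced to emitting its class label."""
    h = hidden_layer(x[None], centroids)[0]
    winner = np.argmax(h)        # strongest sensor wins
    return code_labels[winner]   # its neighborhood's output map fires
```

In the paper, each neighborhood carries its own trained output map over its significant sensors; the winner-take-all step above merely illustrates how inhibition can route an input to the appropriate map.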
References
Bishop, C. M., Neural Networks for Pattern Recognition, Clarendon Press, Oxford, 1995.
Chen, S., C. F. N. Cowan, and P. M. Grant, Orthogonal least squares learning algorithm for radial basis function networks, IEEE Trans. Neural Networks, vol. 2, March 1991.
Porter, W. A., and A. H. Abouali, On neural network design, Part I: Using the MVQ algorithm, Circuits, Systems, and Signal Processing, vol. 17, no. 2, pp. 196–218, 1998.
Porter, W. A., and A. H. Abouali, Function emulation using the MVQ neural network, to appear.
Porter, W. A., and A. H. Abouali, Vector quantization for multiple classes, J. Information Sciences, vol. 105, pp. 151–171, 1998.
Porter, W. A., and W. Liu, Training the higher order moment neural array, IEEE Trans. Signal Processing, vol. 42, no. 7, July 1994.
Porter, W. A., and W. Liu, Object recognition by a massively parallel 2-D neural architecture, Multidimensional Systems and Signal Processing, vol. 5, pp. 179–201, 1994.
Porter, W. A., and W. Liu, Steering higher order moment calculations from lower dimensional spaces, Int. J. Information Sciences, vol. 80, no. 3, pp. 181–194, Sept. 1994.
Porter, W. A., and W. Liu, Auxiliary computations for perceptron networks, Circuits, Systems, and Signal Processing, vol. 15, no. 1, pp. 51–69, 1996.
Porter, W. A., and W. Liu, Neural network training enhancement, Circuits, Systems, and Signal Processing, vol. 15, no. 4, pp. 467–480, 1996.
Cite this article
Porter, W.A., Abouali, A.H. On neural network design part II: Inhibition and the output map. Circuits Systems and Signal Process 17, 613–635 (1998). https://doi.org/10.1007/BF01203108