
The Neural Solids: For optimization problems


Abstract

The neural solids are novel neural networks devised for solving optimization problems. They are dual to Hopfield networks, but have a quartic energy function. These solids are open architectures, in the sense that different choices of the basic elements and of their interfacing solve different optimization problems. The basic element is the neural resonator (a triangle in the three-dimensional case), composed of resonant neurons governed by self-organizing learning. This module can solve elementary optimization problems, such as finding the nearest orthonormal matrix to a given one. An example of a more complex solid, the neural decomposer, whose architecture consists of neural resonators and their mutual connections, is then given. This solid can solve more complex optimization problems, such as the decomposition of the essential matrix, an important technique in computer vision.
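For reference, the following is a minimal NumPy sketch of the classical closed-form, SVD-based solutions to the two optimization problems named in the abstract: the nearest orthonormal matrix (the orthogonal Procrustes problem) and the decomposition of an essential matrix into candidate rotations and a translation direction. This is not the neural-solid algorithm of the paper; the function names and the toy example are illustrative assumptions, and the sketch only serves as a baseline against which a neural implementation such as the resonator or the decomposer could be checked.

```python
import numpy as np

def nearest_orthonormal(A):
    """Orthogonal Procrustes baseline: the orthonormal Q minimizing
    ||A - Q||_F is U @ Vt, where A = U @ diag(s) @ Vt (SVD)."""
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

def decompose_essential(E):
    """Classical SVD-based decomposition of an essential matrix E into
    the two candidate rotations and the translation direction (up to sign)."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (determinant +1); E is defined up to sign anyway.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1 = U @ W @ Vt        # first candidate rotation
    R2 = U @ W.T @ Vt      # second candidate rotation
    t = U[:, 2]            # translation direction, up to sign
    return (R1, R2), t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    Q = nearest_orthonormal(A)
    print("Q^T Q ≈ I:", np.allclose(Q.T @ Q, np.eye(3), atol=1e-10))
```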




Cite this article

Cirrincione, G. and Cirrincione, M.: The Neural Solids: For optimization problems, Neural Processing Letters 13 (2001), 1–15. https://doi.org/10.1023/A:1009660910503
