
Dealing with degeneracy in reduced gradient algorithms

Published in: Mathematical Programming 31 (1985) 357–363

Abstract

Many algorithms for linearly constrained optimization problems proceed by solving a sequence of subproblems. In these subproblems, the number of variables is implicitly reduced by using the linear constraints to express certain ‘basic’ variables in terms of other variables. Difficulties may arise, however, if degeneracy is present; that is, if one or more basic variables are at lower or upper bounds. In this situation, arbitrarily small movements along a feasible search direction in the reduced problem may result in infeasibilities for basic variables in the original problem. For such cases, the search direction is typically discarded, a new reduced problem is formed and a new search direction is computed. Such a process may be extremely costly, particularly in large-scale optimization where degeneracy is likely and good search directions can be expensive to compute. This paper is concerned with a practical method for ensuring that directions that are computed in the reduced space are actually feasible in the original problem. It is based on a generalization of the ‘maximal basis’ result first introduced by Dembo and Klincewicz for large nonlinear network optimization problems.
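The difficulty is easy to see in a small example. The sketch below is our own illustration in Python, not code from the paper; the helper `max_feasible_step` and the toy data are hypothetical. It performs the standard ratio test for a step x + αd under bounds l ≤ x ≤ u: when a basic variable sits exactly at a bound and the induced basic direction d_B = −B⁻¹N d_N pushes it outside, the largest feasible step is zero, which is exactly the situation that forces a reduced gradient method to discard its search direction.

```python
import numpy as np

def max_feasible_step(x, d, lower, upper, tol=1e-10):
    """Largest alpha >= 0 with lower <= x + alpha * d <= upper (ratio test)."""
    alpha = np.inf
    for xi, di, li, ui in zip(x, d, lower, upper):
        if di > tol:                        # component moving toward its upper bound
            alpha = min(alpha, (ui - xi) / di)
        elif di < -tol:                     # component moving toward its lower bound
            alpha = min(alpha, (li - xi) / di)
    return max(0.0, alpha)                  # a blocked (degenerate) component yields 0

# Toy problem: basis B = I, one superbasic variable, bounds 0 <= x_B <= 5.
B = np.eye(2)
N = np.array([[1.0], [-1.0]])
x_B = np.array([0.0, 2.0])          # first basic variable is degenerate: at its lower bound
lower_B, upper_B = np.zeros(2), np.full(2, 5.0)

d_N = np.array([1.0])               # a feasible direction in the reduced space
d_B = -np.linalg.solve(B, N @ d_N)  # induced movement of the basic variables: [-1, 1]

print(max_feasible_step(x_B, d_B, lower_B, upper_B))  # -> 0.0: no movement is possible
```

Roughly speaking, the ‘maximal basis’ remedy of Dembo and Klincewicz [4] chooses the basis so that variables at their bounds stay out of it whenever possible, so that the ratio test above cannot return zero for a direction that is feasible in the reduced space.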


References

  1. J. Abadie and J. Carpentier, “Generalization of the Wolfe reduced gradient method to the case of nonlinear constraints”, in: R. Fletcher, ed., Optimization (Academic Press, New York, 1969) pp. 37–47.


  2. R.G. Bland, “New finite pivoting rules for the simplex method”, Mathematics of Operations Research 2 (1977) 103–107.


  3. R.S. Dembo, “A primal truncated-Newton algorithm with application to large-scale nonlinear network optimization”, Working Paper No. 72, Series B, School of Organization and Management, Yale University (New Haven, CT, 1983).


  4. R.S. Dembo and J.G. Klincewicz, “A scaled reduced gradient algorithm for network flow problems with convex separable costs”, Mathematical Programming Study 15 (1981) 125–147.


  5. J.G. Klincewicz, “A Newton method for convex separable network flow problems”, Networks 13 (1983) 427–442.


  6. E.L. Lawler, Combinatorial Optimization: Networks and Matroids (Holt, Rinehart & Winston, New York, 1976).


  7. C.E. Lemke, “The constrained gradient method of linear programming”, Journal of the SIAM 9 (1961) 1–17.


  8. B.A. Murtagh and M.A. Saunders, “Large-scale linearly constrained optimization”, Mathematical Programming 14 (1978) 41–72.


  9. D.F. Shanno and R.E. Marsten, “Conjugate gradient methods for linearly constrained nonlinear programming”, Mathematical Programming Study 16 (1983) 149–161.




Additional information

Research supported in part by NSF Grant ECS-8119513 and DOT Research Grant CT-06-0011.


About this article

Cite this article

Dembo, R.S., Klincewicz, J.G. Dealing with degeneracy in reduced gradient algorithms. Mathematical Programming 31, 357–363 (1985). https://doi.org/10.1007/BF02591957
