A penalty algorithm for solving convex separable knapsack problems

https://doi.org/10.1016/j.amc.2019.124855

Abstract

In this paper, we propose a penalized gradient projection algorithm for solving the continuous convex separable knapsack problem, which is simpler than existing methods and competitive in practice. The algorithm only performs function and gradient evaluations, sums, and updates of parameters. Its relatively complex task, minimizing a function over a compact set, is solved by a closed formula. A convergence analysis of the algorithm is presented. Moreover, to demonstrate its efficiency, illustrative computational results are reported for medium-sized problems.

Introduction

We are interested in solving the continuous convex separable knapsack problem:

(CSKP): \quad \min f(z) := \sum_{i=1}^{n} f_i(z_i), \quad \text{s.t.} \quad \sum_{i=1}^{n} b_i z_i = c, \quad l_i \le z_i \le u_i, \quad i = 1, 2, \ldots, n,

where the f_i : \mathbb{R} \to \mathbb{R} are differentiable convex functions, b_i > 0 and l_i < u_i for all i = 1, 2, \ldots, n, and c > 0 satisfies \langle b, l \rangle \le c \le \langle b, u \rangle; see, for example, [1]. In our approach, we are interested in problems where c - \langle b, l \rangle > 0.
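To make the problem data concrete, the following minimal Python sketch (our own toy instance, not from the paper; all names and values are hypothetical) builds a small CSKP instance and checks the feasibility condition \langle b, l \rangle \le c \le \langle b, u \rangle and the regime c - \langle b, l \rangle > 0 assumed above:

```python
# Toy CSKP instance (hypothetical data, for illustration only):
# minimize sum_i f_i(z_i)  subject to  <b, z> = c,  l_i <= z_i <= u_i.
b = [2.0, 1.0, 3.0]   # weights, b_i > 0
l = [0.0, 1.0, 0.5]   # lower bounds
u = [4.0, 3.0, 2.0]   # upper bounds, l_i < u_i

def dot(x, y):
    """Inner product <x, y>."""
    return sum(xi * yi for xi, yi in zip(x, y))

bl, bu = dot(b, l), dot(b, u)   # <b, l> = 2.5 and <b, u> = 17.0
c = 0.5 * (bl + bu)             # any c in [<b,l>, <b,u>] makes (1) feasible

def f(z):
    """A separable convex objective, here f_i(z_i) = (z_i - 1)^2."""
    return sum((zi - 1.0) ** 2 for zi in z)

assert bl <= c <= bu    # feasibility condition from the problem statement
assert c - bl > 0       # the regime c - <b,l> > 0 assumed in the paper
```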

The knapsack problem has been thoroughly studied in the literature. A survey of algorithms and applications for the nonlinear knapsack problem, also known as the nonlinear resource allocation problem, is presented in [1]. The problem considered by Bretthauer and Shetty [1] is somewhat more general than problem (1) and, as indicated by the authors and references therein, the knapsack problem has a variety of applications, among them financial models, production and inventory management, stratified sampling, optimal design of queuing network models in manufacturing, computer systems, subgradient optimization, and health care.

The paper [2] surveys the history and applications of the problem of minimizing a separable, convex, and differentiable function over a convex set defined by bounds on the variables and an explicit constraint described by a separable convex function, as well as algorithmic approaches to its solution. The authors found that the most common techniques are based on finding the optimal value of the Lagrange multiplier for the explicit constraint, most often through a type of line search procedure. They analyze the most relevant references, especially regarding their originality and numerical findings, closing with remarks on possible extensions and future research. More recently, [3] provides an up-to-date extension of this survey of the literature, complementing [2] with more than 20 books and articles, totaling over one hundred references methodically analyzed. In addition, the authors contribute an improvement of the pegging (that is, variable fixing) process in the relaxation algorithm and an improved means to evaluate sub-solutions. Finally, they provide a rigorous numerical evaluation of several relaxation (primal) and breakpoint (dual) algorithms, incorporating a variety of pegging strategies, as well as a quasi-Newton method.

With regard to the continuous knapsack problem, which is the object of study of this paper, certain well-known methods deserve significant attention. With few variations in formulation, the best known are multiplier search methods and variable pegging methods; the latter are also called variable fixing techniques.

A substantial portion of the literature focuses on a particular case of problem (1) in which the objective function is quadratic, with continuous or integer variables. This formulation is called the quadratic knapsack problem and includes, among other applications, the training of support vector machines. In this case, the primary techniques used for solving the problem are quasi-Newton and Newton methods, the projected gradient method, branch and bound, and relaxation techniques, among others; see, for example, [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14].

For the non-quadratic case there are relatively fewer research papers than for the quadratic case; some of those found in the literature are [1], [3], [15], [16], [17].

In this paper, we present an algorithm for the non-quadratic continuous convex separable knapsack problem, which nevertheless covers a particular case of the quadratic problem. We start by reformulating problem (1) as a penalized, upper-unbounded problem restricted to the unit simplex. Then, as proposed by Beck and Teboulle [18], [19], we use a Bregman distance to define a penalized algorithm with explicit projections. Convergence analysis and numerical results are also presented.

The paper is organized as follows. In Section 2, we present the theoretical basis of our work, i.e., a change of variables and explicit projections onto the unit simplex. Section 3 is devoted to introducing our algorithm and its convergence analysis. In Section 4, we report our numerical experiments. Finally, we draw overall conclusions in Section 5.

Section snippets

Preliminaries

This section provides the background material necessary for the presentation of the paper. We start by proposing a change of variables that transforms problem (1) into an equivalent problem whose constraints are suitable to our approach, specifically

x_i = \frac{b_i (z_i - l_i)}{c - \langle b, l \rangle} \iff z_i = \frac{c - \langle b, l \rangle}{b_i} x_i + l_i, \quad i = 1, \ldots, n.

Hence, we obtain

\sum_{i=1}^{n} b_i z_i - c = \sum_{i=1}^{n} \left[ (c - \langle b, l \rangle) x_i + b_i l_i \right] - c = (c - \langle b, l \rangle) \sum_{i=1}^{n} x_i + \left[ \langle b, l \rangle - c \right] = (c - \langle b, l \rangle) \left[ \sum_{i=1}^{n} x_i - 1 \right],

and, for each i = 1, 2, \ldots, n, we have

l_i \le z_i \le u_i \iff 0 \le x_i \le \bar{u}_i, \quad \text{where } \bar{u}_i = \frac{b_i (u_i - l_i)}{c - \langle b, l \rangle}.

Hence, using (2)–(4), we can
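As a sanity check on the change of variables, the sketch below (our own, with hypothetical data, not the authors' code) builds a feasible point z of problem (1), maps it to x via (2), and verifies that x lies on the unit simplex within the box [0, \bar{u}] and that the inverse map recovers z:

```python
b = [2.0, 1.0, 3.0]
l = [0.0, 1.0, 0.5]
u = [4.0, 3.0, 2.0]
c = 10.0   # chosen with <b,l> < c < <b,u>

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

bl, bu = dot(b, l), dot(b, u)
d = c - bl                      # d = c - <b,l> > 0

# A feasible z: move from l toward u so that <b, z> = c.
t = (c - bl) / (bu - bl)
z = [li + t * (ui - li) for li, ui in zip(l, u)]
assert abs(dot(b, z) - c) < 1e-12

# Forward map x_i = b_i (z_i - l_i) / (c - <b,l>)            ... (2)
x = [bi * (zi - li) / d for bi, zi, li in zip(b, z, l)]
ubar = [bi * (ui - li) / d for bi, ui, li in zip(b, u, l)]   # ... (4)

assert abs(sum(x) - 1.0) < 1e-12                # unit-simplex constraint (3)
assert all(0.0 <= xi <= ub for xi, ub in zip(x, ubar))

# Inverse map z_i = (c - <b,l>) x_i / b_i + l_i recovers z.
z_back = [d * xi / bi + li for xi, bi, li in zip(x, b, l)]
assert max(abs(a - bb) for a, bb in zip(z, z_back)) < 1e-12
```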

Statement of algorithm and convergence results

This section is divided into two parts. The first describes the proposed algorithm, and the second discusses its convergence, which is based on the Bregman distance.

Consider \rho > 0 and two positive sequences \{\rho_k\} and \{\beta_k\} satisfying the following conditions:

\sum_{k=1}^{+\infty} \beta_k = +\infty, \quad \sum_{k=1}^{+\infty} \beta_k^2 < \infty, \quad 0 < \rho < \rho_k \ \forall k \in \mathbb{N}, \quad \lim_{k \to \infty} \rho_k \beta_k = 0.
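These conditions are easy to satisfy; for instance (our own choice, not one prescribed in the paper), \beta_k = 1/k gives a divergent sum with convergent squared sum, and \rho_k = \rho + \sqrt{k} stays above \rho while \rho_k \beta_k = (\rho + \sqrt{k})/k \to 0:

```python
import math

rho = 1.0

def beta(k):
    # sum 1/k diverges while sum 1/k^2 converges
    return 1.0 / k

def rho_k(k):
    # rho_k > rho for every k, and rho_k * beta(k) -> 0
    return rho + math.sqrt(k)

products = [rho_k(k) * beta(k) for k in (10, 100, 10_000, 1_000_000)]
assert all(a > b for a, b in zip(products, products[1:]))  # decreasing
assert products[-1] < 1e-2                                 # tending to 0
```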

Numerical experiments

In this section, we illustrate the performance of the proposed algorithm by running three numerical tests. The algorithm is written in C and run on a 2.67 GHz Intel Core i7 920 desktop with 12 GB of RAM under 64-bit Ubuntu 18.04. For all tests, the stopping criterion is defined by

\max\{x^k - \bar{u}, 0\} < \epsilon_1 \quad \text{and} \quad \|x^{k+1} - x^k\| < \epsilon_2,

where \max\{v, 0\} = (\max\{v_1, 0\}, \ldots, \max\{v_n, 0\})^T, \epsilon_1 = 10^{-3}, and \epsilon_2 = 10^{-6}.
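A minimal sketch of the stopping test (in Python rather than the authors' C implementation; function and variable names are ours) reads:

```python
def stop(x_new, x_old, ubar, eps1=1e-3, eps2=1e-6):
    """Return True when both parts of the stopping criterion hold."""
    # largest componentwise violation of the upper bounds u_bar
    viol = max(max(xi - ui, 0.0) for xi, ui in zip(x_new, ubar))
    # Euclidean distance between consecutive iterates
    step = sum((a - b) ** 2 for a, b in zip(x_new, x_old)) ** 0.5
    return viol < eps1 and step < eps2

# Example: a stationary, feasible iterate triggers the stop...
assert stop([0.2, 0.8], [0.2, 0.8], [0.5, 1.0])
# ...while an iterate violating the bounds does not.
assert not stop([0.9, 0.8], [0.2, 0.8], [0.5, 1.0])
```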

A set of 100 randomly generated problems was considered, with 5 different dimensions: 5000, 10,000, 50,000, 100,000, and 200,000,

Conclusions

In this paper, we have developed an algorithm based on penalized gradient projection for solving the continuous convex separable knapsack problem. The most challenging step of the algorithm is obtaining a solution of a minimization problem over a compact set, for which a closed formula is available. Moreover, at each step of the algorithm, only basic operations involving function evaluations and sums are performed. This provides us with a fast and promising algorithm. Based on the

Acknowledgments

We would like to thank two anonymous referees whose comments and suggestions greatly improved this work. The third author was partially supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

