
Fractional 0–1 programs: links between mixed-integer linear and conic quadratic formulations


Abstract

This paper focuses on methods that improve the performance of solution approaches for multiple-ratio fractional 0–1 programs (FPs) in their general structure. In particular, we explore the links between equivalent mixed-integer linear programming and conic quadratic programming reformulations of FPs. We show that integrating the ideas behind these two types of reformulations allows us to push the limits of the current state-of-the-art results further and to tackle larger problem instances. We perform extensive computational experiments to compare the proposed approaches against the current reformulations from the literature.



Notes

  1. The second domination statement holds only if all inequalities (6) are added. Nonetheless, \({{{\mathbf {\mathtt{{LF}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) is able to achieve excellent convex relaxations with a modest number of cuts.

  2. For MILP formulations, \(z^{{\texttt {Rlx}}}\leqslant z^{{\texttt {Ron}}}\) as additional constraints are added at the root node. For MICQP formulations, this is not necessarily the case: \(z^{{\texttt {Rlx}}}\) is found via interior point methods, while \(z^{{\texttt {Ron}}}\) is obtained after solving a linear outer approximation—which may have a weaker continuous relaxation.

References

  1. Adams, W.P., Forrester, R.J.: A simple recipe for concise mixed 0–1 linearizations. Oper. Res. Lett. 33(1), 55–61 (2005)
  2. Amaldi, E., Bosio, S., Malucelli, F.: Hyperbolic set covering problems with competing ground-set elements. Math. Program. 134(2), 323–348 (2012)
  3. Amaldi, E., Bosio, S., Malucelli, F., Yuan, D.: Solving nonlinear covering problems arising in WLAN design. Oper. Res. 59(1), 173–187 (2011)
  4. Atamtürk, A., Gómez, A.: Submodularity in conic quadratic mixed 0–1 optimization (2018). arXiv preprint arXiv:1705.05918
  5. Atamtürk, A., Narayanan, V.: Polymatroids and mean-risk minimization in discrete optimization. Oper. Res. Lett. 36(5), 618–622 (2008)
  6. Ben-Tal, A., Nemirovski, A.: On polyhedral approximations of the second-order cone. Math. Oper. Res. 26(2), 193–205 (2001)
  7. Bertsimas, D., Korolko, N., Weinstein, A.M.: Identifying exceptional responders in randomized trials: an optimization approach. INFORMS J. Optim.
  8. Boros, E., Hammer, P.: Pseudo-Boolean optimization. Discrete Appl. Math. 123(1), 155–225 (2002)
  9. Borrero, J.S., Gillen, C., Prokopyev, O.A.: A simple technique to improve linearized reformulations of fractional (hyperbolic) 0–1 programming problems. Oper. Res. Lett. 44(4), 479–486 (2016)
  10. Borrero, J.S., Gillen, C., Prokopyev, O.A.: Fractional 0–1 programming: applications and algorithms. J. Global Optim. 69(1), 255–282 (2017)
  11. Bront, J.J.M., Méndez-Díaz, I., Vulcano, G.: A column generation algorithm for choice-based network revenue management. Oper. Res. 57(3), 769–784 (2009)
  12. Busygin, S., Prokopyev, O.A., Pardalos, P.M.: Feature selection for consistent biclustering via fractional 0–1 programming. J. Comb. Optim. 10(1), 7–21 (2005)
  13. Davis, J.M., Gallego, G., Topaloglu, H.: Assortment optimization under variants of the nested logit model. Oper. Res. 62(2), 250–273 (2014)
  14. Edmonds, J.: Submodular functions, matroids, and certain polyhedra. Comb. Struct. Their Appl. 58, 69–87 (1970)
  15. Elhedhli, S.: Exact solution of a class of nonlinear knapsack problems. Oper. Res. Lett. 33(6), 615–624 (2005)
  16. Feldman, J., Topaloglu, H.: Bounding optimal expected revenues for assortment optimization under mixtures of multinomial logits. Prod. Oper. Manag. 24(10), 1598–1620 (2015)
  17. Gómez, A., Prokopyev, O.A.: A mixed-integer fractional optimization approach to best subset selection. Optimization Online (2018)
  18. Gurobi: Gurobi Optimizer Reference Manual, v. 8 (2018). http://www.gurobi.com
  19. Hansen, P., de Aragão, M.V.P., Ribeiro, C.C.: Hyperbolic 0–1 programming and query optimization in information retrieval. Math. Program. 52(1–3), 255–263 (1991)
  20. Hansen, P., Poggi de Aragão, M.V., Ribeiro, C.C.: Boolean query optimization and the 0–1 hyperbolic sum problem. Ann. Math. Artif. Intell. 1(1), 97–109 (1990)
  21. IBM: ILOG CPLEX Optimizer v. 12.7.1 (2017). http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/
  22. Li, H.-L.: A global approach for general 0–1 fractional programming. Eur. J. Oper. Res. 73(3), 590–596 (1994)
  23. Lovász, L.: Submodular functions and convexity. In: Mathematical Programming: The State of the Art, pp. 235–257. Springer, Berlin (1983)
  24. Mehmanchi, E., Gillen, C.P., Gómez, A., Prokopyev, O.A.: On robust fractional 0–1 programming. INFORMS J. Optim. (2019) (accepted)
  25. Méndez-Díaz, I., Miranda-Bront, J.J., Vulcano, G., Zabala, P.: A branch-and-cut algorithm for the latent-class logit assortment problem. Discrete Appl. Math. 164, 246–263 (2014)
  26. Nguyen, H.T., Franke, K., Petrovic, S.: Towards a generic feature-selection measure for intrusion detection. In: 20th International Conference on Pattern Recognition (ICPR), 2010, pp. 1529–1532. IEEE (2010)
  27. Prokopyev, O.A.: Fractional zero-one programming. In: Floudas, C.A., Pardalos, P.M. (eds.) Encyclopedia of Optimization, pp. 1091–1094. Springer, Berlin (2008)
  28. Prokopyev, O.A., Huang, H.-X., Pardalos, P.M.: On complexity of unconstrained hyperbolic 0–1 programming problems. Oper. Res. Lett. 33(3), 312–318 (2005a)
  29. Prokopyev, O.A., Meneses, C., Oliveira, C.A., Pardalos, P.M.: On multiple-ratio hyperbolic 0–1 programming problems. Pac. J. Optim. 1(2), 327–345 (2005b)
  30. Rusmevichientong, P., Shen, Z.-J.M., Shmoys, D.B.: Dynamic assortment optimization with a multinomial logit choice model and capacity constraint. Oper. Res. 58(6), 1666–1680 (2010)
  31. Saipe, A.L.: Solving a (0, 1) hyperbolic program by branch and bound. Naval Res. Logist. 22(3), 497–515 (1975)
  32. Schaible, S., Ibaraki, T.: Fractional programming. Eur. J. Oper. Res. 12(4), 325–338 (1983)
  33. Şen, A., Atamtürk, A., Kaminsky, P.: A conic integer optimization approach to the constrained assortment problem under the mixed multinomial logit model. Oper. Res. 66(4), 994–1003 (2018)
  34. Stancu-Minasian, I.: A sixth bibliography of fractional programming. Optimization 55(4), 405–428 (2006)
  35. Subramanian, S., Sherali, H.D.: A fractional programming approach for retail category price optimization. J. Global Optim. 48(2), 263–277 (2010)
  36. Tawarmalani, M., Ahmed, S., Sahinidis, N.V.: Global optimization of 0–1 hyperbolic programs. J. Global Optim. 24(4), 385–416 (2002)
  37. Trapp, A., Prokopyev, O.A., Busygin, S.: Finding checkerboard patterns via fractional 0–1 programming. J. Comb. Optim. 20(1), 1–26 (2010)
  38. Trapp, A.C., Konrad, R.A.: Finding diverse optima and near-optima to binary integer programs. IIE Trans. 47(11), 1300–1312 (2015)
  39. Vielma, J.P., Ahmed, S., Nemhauser, G.L.: A lifted linear programming branch-and-bound algorithm for mixed-integer conic quadratic programs. INFORMS J. Comput. 20(3), 438–450 (2008)
  40. Vielma, J.P., Dunning, I., Huchette, J., Lubin, M.: Extended formulations in mixed integer conic quadratic programming. Math. Program. Comput. 9(3), 369–418 (2017)
  41. Williams, H.P.: Experiments in the Formulation of Integer Programming Problems. Springer, Berlin (1974)
  42. Wu, T.-H.: A note on a global approach for general 0–1 fractional programming. Eur. J. Oper. Res. 101(1), 220–223 (1997)


Acknowledgements

The authors thank the Review Team for their insightful and constructive comments. This paper is based upon work supported by the National Science Foundation under Grant No. 1818700.

Author information


Corresponding author

Correspondence to Oleg A. Prokopyev.


Appendices

A Assumption justifications

We make the following assumptions in the paper.

Assumption 1

All data are integers, i.e., \(a_{ij},b_{ij}\in \mathbb {Z}\) for all \(i\in I,j\in J\cup \{0\}\).

Assumption 2

All data are non-negative, i.e., \(a_{ij},b_{ij}\geqslant 0\) for all \(i\in I\) and \(j\in J\cup \{0\}\).

The first assumption is without loss of generality, since rational coefficients can otherwise be scaled to integers. Assumption 2 is naturally satisfied in most application settings, as the data typically represent probabilities, prices, weights, utilities, etc.; see, e.g., [10] and the applications described therein.

Nonetheless, Assumption 2 is without loss of generality provided that the weaker assumption commonly made in the FP literature (see, e.g., [8, 9, 19]), namely \(b_{i0}+\sum _{j\in J}b_{ij}x_j> 0\) for all \(x\in \mathbb {B}^n\), holds. In each ratio \(i\in I\), for every \(j\in J\) such that \(b_{ij}<0\), and for every \(j\) such that \(b_{ij}=0\) and \(a_{ij}<0\), replace \(x_j\) with \(\bar{x}_j=1-x_j\); the resulting problem satisfies \(b_{ij}\geqslant 0\) (possibly at the cost of at most n additional variables and constraints). Then observe that for any \(k_i\in \mathbb {R}\)

$$\begin{aligned} \frac{a_{i0} + \sum _{j \in J} a_{ij} x_j}{b_{i0} + \sum _{j \in J} b_{ij} x_j}=\frac{(a_{i0}+k_ib_{i0}) + \sum _{j \in J} (a_{ij}+k_ib_{ij}) x_j}{b_{i0} + \sum _{j \in J} b_{ij} x_j}-k_i. \end{aligned}$$
(26)

Thus, by choosing \(k_i\) sufficiently large for each \(i\in I\), we obtain a problem in which all coefficients are non-negative.

Finally, note that if a fractional program is in maximization form and satisfies \(b_{i0}+\sum _{j\in J}b_{ij}x_j> 0\) for all \(x\in \mathbb {B}^n\), then it can be transformed into an equivalent problem in minimization form by negating all coefficients \(a_{i0}\) and \(a_{ij}\), after which the process above yields a problem satisfying Assumption 2.
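As a concrete illustration, the following sketch applies the two steps above to a single ratio: variables with a negative denominator coefficient (or a zero denominator coefficient and a negative numerator coefficient) are complemented, and then the shift in (26) is applied with the smallest \(k_i\) that makes all numerator coefficients non-negative. After solving the transformed problem, the original objective value is recovered by subtracting \(k_i\) for each ratio, per (26). The function and variable names below are ours and purely illustrative; this is a sketch, not code from the paper.

```python
import numpy as np

def make_nonnegative(a0, a, b0, b):
    """Transform one ratio (a0 + a@x) / (b0 + b@x) so that all coefficients are
    non-negative, following the two steps above.  Returns the new coefficients,
    the mask of complemented variables, and the shift k; the transformed ratio
    equals the original ratio plus k (cf. Eq. (26)).  Illustrative sketch only."""
    a, b = np.array(a, dtype=float), np.array(b, dtype=float)
    # Step 1: complement x_j -> 1 - x_j whenever b_j < 0, or b_j = 0 and a_j < 0.
    flip = (b < 0) | ((b == 0) & (a < 0))
    a0 += a[flip].sum(); a[flip] *= -1.0   # a0 + a_j x_j = (a0 + a_j) - a_j (1 - x_j)
    b0 += b[flip].sum(); b[flip] *= -1.0
    assert b0 > 0, "positivity of the denominator over the binary box is required"
    # Step 2: shift by k (Eq. (26)) so that every numerator coefficient becomes >= 0.
    # After Step 1, any j with a_j < 0 has b_j > 0, so a finite shift suffices.
    k = max(0.0, -a0 / b0)
    if (a < 0).any():
        k = max(k, (-a[a < 0] / b[a < 0]).max())
    a0 += k * b0
    a += k * b
    return a0, a, b0, b, flip, k

# Toy ratio (-3 + 2 x1 - x2) / (4 - 2 x1 + 3 x2): x1 is complemented and k = 1.
print(make_nonnegative(-3.0, [2.0, -1.0], 4.0, [-2.0, 3.0]))
```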

B Additional computational results

In this appendix, we compare the performance of the formulations presented in the paper (beyond those discussed in Sects. 4.2 and 4.3 and reported in Tables 3 and 4, as well as their extended versions in Tables 5 and 6) in order to evaluate the individual and combined effects of the enhancements. To ease comparison, the results for some formulations are repeated across subsections.

First, in Appendix B.1, we compare the basic MILP and the basic MICQP formulations without additional enhancements. Then, in Appendix B.2, we focus on the effect of the binary-expansion technique on the basic formulations. Next, in Appendix B.3, we examine the impact of polymatroid cuts. Finally, in Appendix B.4, we test the formulations that integrate the binary-expansion technique with the polymatroid cuts. Recall that, in the following tables, the “\(\dagger \)” symbol is used if CPLEX is unable to fully process the root node of the branch-and-bound tree within the time limit for a given formulation.

B.1 Linear versus conic formulations

Here, we evaluate the basic MILP (LF, LEF) and the basic MICQP (CF, CEF) reformulations; see Tables 7 and 8. Observe that, in most cases, LEF, CF, and CEF are stronger than LF, i.e., they have better Rlx-gaps. Additionally, as expected, the extended formulations LEF and CEF are stronger than the compact formulations LF and CF, respectively. The extended formulations also achieve better running times and end gaps than their compact counterparts. In general, CEF performs better than LEF for low values of the parameter \(\kappa \), while LEF is comparatively better for high values of \(\kappa \). Moreover, none of the formulations except CF (which performs very poorly) is able to scale to \(n=1000\) for all instances. These results justify the development of enhanced formulations for the medium- and large-size instances.

B.2 Binary expansion

Here, we explore the individual impact of the binary-expansion technique on the performance and size of the basic formulations. Specifically, we compare LF and CEF against their binarized versions, i.e., \({\texttt {LF}}_{\texttt {log}}\) and \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}\), respectively. We do not consider binarized versions of LEF and CF, as discussed in Remarks 4 and 6, respectively.

In Tables 9 and 10, we observe that LF performs poorly even for \(n=100\). In contrast, its binarized version achieves noteworthy improvements due to the reduction in formulation size. These results are consistent with previous findings in the literature that \({\texttt {LF}}_{\texttt {log}}\) outperforms LF and \({\texttt {LEF}}_{\texttt {log}}\); see [9, 24].

On the other hand, for \(n\leqslant 500\) formulation CEF outperforms \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}\) with respect to either running time or the considered gaps; e.g., for \(n=500\) and \(\kappa =10\%\cdot n\) in Table 9, CEF reports a 0.2% average End-gap, compared to 5.1% for \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}\). Nonetheless, \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}\) is able to scale to problems with \(n\geqslant 1000\), while CEF is not. Additionally, for the instances with \(n\geqslant 2000\), we observe that in most cases \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}\) also outperforms \({\texttt {LF}}_{\texttt {log}}\) (the superior MILP formulation).

Tables 11 and 12 show the impact of binarization on the numbers of continuous variables and of linear and rotated cone constraints for the assortment and the uniformly generated data sets, respectively. The binary-expansion technique substantially reduces the number of (continuous) variables and constraints at the cost of a slight increase in the number of binary variables, and the relative reduction grows with n. For example, in Table 11 for \(n=1000\), \({\texttt {LF}}_{\texttt {log}}\) and \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}\) have at least 97,900 and 391,500 fewer continuous variables and linear constraints, respectively, than LF and CEF, at the cost of at most 2100 additional binary variables. The binary-expansion technique also reduces the number of rotated cone constraints of CEF by 97,900.
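To give a rough sense of where these savings come from, the following sketch counts the bilinear products that must be linearized per ratio: roughly one per variable \(x_j\) in the basic formulations versus one per binary digit of the integer-valued denominator in the binarized ones. The instance dimensions and names below are made up for illustration and do not reproduce the exact bookkeeping of LF, CEF, and their binarized versions.

```python
import math
import numpy as np

def linearized_products(b0, B):
    """Count the bilinear products to be linearized:
    - basic:      one product per pair (ratio i, variable x_j), i.e., m * n;
    - binarized:  one product per pair (ratio i, binary digit u_il), where the
                  integer denominator b_i0 + B[i] @ x is written as sum_l 2^l u_il.
    Illustrative counting only; the exact variable/constraint counts reported in
    Tables 11-12 involve additional bookkeeping."""
    m, n = B.shape
    basic = m * n
    binarized = sum(math.ceil(math.log2(b0[i] + B[i].sum() + 1)) for i in range(m))
    return basic, binarized

rng = np.random.default_rng(0)
m, n = 100, 1000                       # made-up instance dimensions
b0 = rng.integers(1, 100, size=m)
B = rng.integers(0, 100, size=(m, n))
print(linearized_products(b0, B))      # e.g., 100000 basic vs. roughly 1600 binarized
```

With these hypothetical dimensions, the count drops from 100,000 products to roughly 1,600, which is loosely consistent with the order of magnitude of the reductions reported in Table 11.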

B.3 Polymatroid cuts

Next, we explore the individual impact of polymatroid cuts on the basic formulations, namely, LF, LEF, CF, and CEF. Notably, for \(n\leqslant 500\) in Tables 13 and 14, we observe that polymatroid cuts significantly improve the performance (time and End-gap) of the compact formulations LF and CF. However, the cuts are less effective for LEF and CEF: these extended formulations are already much stronger, so the cuts provide only a marginal improvement in relaxation quality while increasing the size of the formulations.

Additionally, for \(n\geqslant 1000\) polymatroid cuts are not beneficial and employing them worsens the results; see, e.g., Table 13 for \(n=1000\), where the End-gap of LEF increases from 13.9% to 81% after the cuts are added. The reason is that CPLEX spends the allotted time solely on managing the cuts and processing the root node.
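For readers unfamiliar with how such cuts are generated, the following sketch implements the standard greedy separation of extended-polymatroid inequalities for a generic submodular function (Edmonds' greedy algorithm, equivalently a subgradient of the Lovász extension; see, e.g., [5, 14, 23]). The specific submodular functions from which the cuts for FPs are derived are not reproduced here, and all data in the toy example are made up.

```python
import numpy as np

def polymatroid_cut(f, x_star):
    """Greedy separation of an extended-polymatroid inequality  sum_j pi_j x_j <= z
    for the epigraph of a submodular set function f with f(emptyset) = 0.
    Sorting x_star in non-increasing order yields the inequality of this family
    that is most violated at x_star.  Generic sketch; not the paper's implementation."""
    x_star = np.asarray(x_star, dtype=float)
    order = np.argsort(-x_star)                  # indices by decreasing x*_j
    pi = np.zeros(len(x_star))
    S, prev = [], 0.0
    for j in order:
        S.append(j)
        val = f(frozenset(S))
        pi[j] = val - prev                       # marginal gain f(S) - f(S \ {j})
        prev = val
    return pi, float(pi @ x_star)                # cut coefficients and LHS at x_star

# Toy usage with a coverage-type submodular function (made-up data).
ground_sets = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]
f = lambda S: float(len(set().union(*(ground_sets[j] for j in S)))) if S else 0.0
pi, lhs = polymatroid_cut(f, [0.9, 0.1, 0.6, 0.4])
print(pi, lhs)   # the cut pi @ x <= z is violated at (x*, z*) whenever lhs > z*
```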

Table 5 Computational results to evaluate the best existing methods in the literature against the standout formulations for the assortment data set [33]
Table 6 Computational results to evaluate the best existing methods in the literature against the standout formulations for the uniformly generated data set [9]
Table 7 Computational results to compare basic MILP and MICQP formulations for the assortment data set [33]
Table 8 Computational results to compare basic MILP and MICQP formulations for the uniformly generated data set [9]
Table 9 Computational results to compare binary-expansion formulations with their basic counterparts for the assortment data set [33]
Table 10 Computational results to compare binary-expansion formulations with their basic counterparts for the uniformly generated data set [9]
Table 11 The size of selected formulations versus their binary-expansion versions for the assortment data set [33]
Table 12 The size of selected formulations versus their binary-expansion versions for the uniformly generated data set [9]
Table 13 Computational results to evaluate the impact of the polymatroid cuts on the basic formulations for the assortment data set [33]
Table 14 Computational results to evaluate the impact of the polymatroid cuts on the basic formulations for the uniformly generated data set [9]
Table 15 Computational results to evaluate the combined effect of binarization and polymatroid cuts on the performance of selected basic MILP and MICQP for the assortment data set [33]
Table 16 Computational results to evaluate the combined effect of binarization and polymatroid cuts on the performance of selected basic MILP and MICQP formulations for the uniformly generated data set [9]

B.4 Integration of binary expansion and polymatroid cuts

Here, we explore the effect of using both techniques simultaneously, i.e., the impact of incorporating polymatroid cuts together with binary expansion into LF and CEF. Tables 15 and 16 present the results, and we make the following observations. Formulation \({{{\mathbf {\mathtt{{LF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) either outperforms LF, \({{{\mathbf {\mathtt{{LF}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\), and \({\texttt {LF}}_{\texttt {log}}\) or (in a few cases) is competitive with \({{{\mathbf {\mathtt{{LF}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\). On the other hand, for the small- and medium-size instances CEF and \({{{\mathbf {\mathtt{{CEF}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) are competitive and perform better than \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}\) and \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\). However, for large instances \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) outperforms CEF, \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}\), and \({{{\mathbf {\mathtt{{CEF}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\). These observations imply that, especially for large instances, integrating binarization and polymatroid cuts in both MILPs and MICQPs leads to superior formulations. Specifically, \({{{\mathbf {\mathtt{{LF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) and \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) perform better than the corresponding basic formulations and than the enhanced formulations that use only one of the improving techniques.

Additionally, it appears that for instances with up to 500 variables, \({\texttt {CEF}}\) and \({{{\mathbf {\mathtt{{CEF}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) are, in general, the most efficient formulations. For instances with \(n\geqslant 1000\), \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) and \({{{\mathbf {\mathtt{{LF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) outperform the others. Finally, we observe that, in general, \({{{\mathbf {\mathtt{{CEF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) performs better on the constrained instances, while \({{{\mathbf {\mathtt{{LF}}}}}}_{{{\mathbf {\mathtt{{log}}}}}}^{{{\mathbf {\mathtt{{P}}}}}}\) is superior on the unconstrained instances.


Cite this article

Mehmanchi, E., Gómez, A. & Prokopyev, O.A. Fractional 0–1 programs: links between mixed-integer linear and conic quadratic formulations. J Glob Optim 75, 273–339 (2019). https://doi.org/10.1007/s10898-019-00817-7
