Abstract
The United States Army can benefit from effectively utilizing cargo unmanned aerial vehicles (CUAVs) to perform resupply operations in combat environments to reduce the use of manned (ground and aerial) resupply that incurs risk to personnel. We formulate a Markov decision process (MDP) model of an inventory routing problem (IRP) with vehicle loss and direct delivery, which we label the military IRP (MILIRP). The objective of the MILIRP is to determine CUAV dispatching and routing policies for the resupply of geographically dispersed units operating in an austere, combat environment. The large size of the problem instance motivating this research renders exact dynamic programming algorithms computationally intractable, so we utilize approximate dynamic programming (ADP) methods to attain improved policies (relative to a benchmark policy) via an approximate policy iteration algorithmic strategy utilizing least squares temporal differencing for policy evaluation. We examine a representative problem instance motivated by resupply operations experienced by the United States Army in Afghanistan both to demonstrate the applicability of our MDP model and to examine the efficacy of our proposed ADP solution methodology. A designed computational experiment enables the examination of selected problem features and algorithmic features vis-à-vis the quality of solutions attained by our ADP policies. Results indicate that a 4-crew, 8-CUAV unit is able to resupply 57% of the demand from an 800-person organization over a 3-month time horizon when using the ADP policy, a notable improvement over the 18% attained using a benchmark policy. Such results inform the development of procedures governing the design, development, and utilization of CUAV assets for the resupply of dispersed ground combat forces.
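The policy-evaluation step named in the abstract, least squares temporal differencing (LSTD), fits a linear value-function approximation from sample transitions generated under a fixed policy by solving a single linear system. The sketch below is a minimal, generic illustration of that technique, not the authors' implementation: the feature mapping, discount factor, and transition format are assumptions for the example.

```python
import numpy as np

def lstd_policy_evaluation(transitions, gamma=0.95):
    """Least squares temporal differencing (LSTD) for evaluating a fixed policy.

    transitions: iterable of (phi_s, reward, phi_s_next) tuples, where phi_s and
    phi_s_next are feature vectors of the pre- and post-transition states.
    Returns weights w such that V(s) is approximated by phi(s) @ w.
    """
    transitions = [(np.asarray(p, dtype=float), float(r), np.asarray(q, dtype=float))
                   for p, r, q in transitions]
    k = transitions[0][0].shape[0]
    A = np.zeros((k, k))
    b = np.zeros(k)
    for phi_s, r, phi_next in transitions:
        # Accumulate the LSTD system: A w = b, with
        # A = sum phi(s) (phi(s) - gamma * phi(s'))^T and b = sum phi(s) * r.
        A += np.outer(phi_s, phi_s - gamma * phi_next)
        b += phi_s * r
    # Least-squares solve guards against a singular A on short sample paths.
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

On a two-state deterministic chain with identity features and discount 0.5, a single observation of each transition recovers the exact values (4/3 and 2/3), since the linear system is then fully determined; in practice the policy iteration loop would re-estimate `w` from fresh sample paths after each policy improvement step.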
Acknowledgements
The views expressed in this paper are those of the authors and do not reflect the official policy or position of the United States Air Force, United States Army, Department of Defense, or United States Government.
Cite this article
McKenna, R.S., Robbins, M.J., Lunday, B.J. et al. Approximate dynamic programming for the military inventory routing problem. Ann Oper Res 288, 391–416 (2020). https://doi.org/10.1007/s10479-019-03469-8