Optimally controlling nutrition and propulsion force in a long distance running race

Introduction: Runners competing in races look to optimize their performance. In this paper, a runner's performance in a race, such as a marathon, is formulated as an optimal control problem in which the controls are the nutrition intake throughout the race and the propulsion force of the runner. As nutrition is an integral part of successfully running long-distance races, it needs to be included in models of running strategies.

Methods: We formulate a system of ordinary differential equations to represent the velocity, fat energy, glycogen energy, and nutrition of a runner competing in a long-distance race. The energy compartments represent the energy sources available in the runner's body, and the source from which the runner draws is allocated based on how fast the runner is moving. The food consumed during the race is a source term in the nutrition differential equation. With our model, we investigate strategies for managing nutrition and propulsion force so as to minimize the running time over a fixed distance. This requires the solution of a nontrivial singular control problem.

Results: Because the goal of an optimal control model is to determine the optimal strategy, comparing our results against real data presents a challenge; nevertheless, our result differs from the world record for the marathon by only 0.4% (31 seconds). For each additional gel consumed, the runner is able to run 0.5 to 0.7 kilometers farther in the same amount of time, amounting to a 7.75% increase in distance when taking five 100-calorie gels versus no nutrition.

Discussion: Our results confirm the belief that the most effective way to run a race is to run at approximately the same pace for the entire race, without letting one's energy stores hit zero, by consuming in-race nutrition. While this model does not take all factors into account, we consider it a building block for future models, given its novel energy representation and inclusion of in-race nutrition.

1 SUPPLEMENTARY DATA

1.1 Existence of the optimal control

THEOREM 1. Given a nutrition strategy $s_i$, there exists an optimal control $f^*$ that solves the optimal control problem of maximizing $J_i(f) = \int_0^T V(t)\,dt$, with corresponding state equations, over the set of admissible controls.

PROOF. First, the admissible control set is nonempty, since the control $f = 0$ satisfies the conditions. Our objective functional $J(f)$ is uniformly bounded due to bounds on the states and controls. There exists a maximizing sequence of control functions $f^n$ such that $\lim_{n \to \infty} J(f^n) = \sup_f J(f)$, with a corresponding sequence of states $V^n, E_F^n, E_G^n, N^n$. Note that the control sequence and the state sequences are uniformly bounded. By the Banach-Alaoglu theorem (2), there exists a control $f^*$ such that, on a subsequence, $f^n \rightharpoonup f^*$ weakly in $L^2(0,T)$. From the state differential equations we see that the derivative sequences are also uniformly bounded. Thus $V^n, E_F^n, E_G^n, N^n$ are uniformly Lipschitz, and hence equicontinuous. Therefore, by the Arzelà-Ascoli theorem, on a subsequence we have uniform convergence of the corresponding state functions to limits $V^*, E_F^*, E_G^*, N^*$.

We need to show that $f^*$ is an optimal control and that $V^*, E_F^*, E_G^*, N^*$ are the corresponding optimal states. To show that these states correspond to the control $f^*$, we pass to the limit in the integral form of each state equation, using the weak convergence of the controls and the uniform convergence of the states; we illustrate this with the $E_G$ differential equation. It follows that all four states correspond to the control $f^*$. Finally, $f^*$ is an optimal control since $J(f^*) = \lim_{n \to \infty} J(f^n) = \sup_f J(f)$.

Now that we have proven an optimal control exists, we solve the optimal control problem using Pontryagin's Maximum Principle (PMP) (6) by determining the set of necessary conditions, using the Hamiltonian with state constraints. We solve the maximum distance problem, obtaining an $f^*$ for each nutrition strategy $s_i$, and maximize over our finite set of strategies $S$.
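The final step described above, maximizing over the finite set of nutrition strategies $S$, can be sketched numerically. Everything inside `solve_max_distance` below is a hypothetical stand-in: a Keller-type velocity equation $V' = f - V/\tau$ and a single lumped energy budget replace the paper's four-state model, and all constants (`tau`, `f_max`, the drain coefficient) are illustrative assumptions, not fitted values.

```python
def solve_max_distance(strategy_calories, T=7200.0, dt=1.0,
                       f_max=1.2, tau=1.0, e0=2000.0):
    """Distance covered in time T under a crude stand-in model:
    run hard while energy lasts, then drop to a slower 'fat-only' pace.
    All dynamics and constants here are illustrative assumptions."""
    v, d, e = 0.0, 0.0, e0 + strategy_calories
    t = 0.0
    while t < T:
        f = f_max if e > 0.0 else 0.3 * f_max   # slow down once depleted
        v += (f - v / tau) * dt                 # Keller-type dynamics (assumed)
        d += v * dt
        e -= 0.5 * f * v * dt                   # hypothetical energy cost
        t += dt
    return d

def best_strategy(S):
    """Outer loop: pick the nutrition strategy in the finite set S
    that yields the greatest distance."""
    return max(S, key=solve_max_distance)
```

Because extra in-race calories delay depletion in this toy model, `best_strategy([0, 100, 500])` selects the largest intake; in the paper, the inner solve is instead the PMP-based computation of $f^*$ for each strategy $s_i$.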

Necessary Conditions for the Maximum Distance Problem
We determine the necessary conditions by using PMP (6) to form the Lagrangian, which is the Hamiltonian with the state constraint appended, where the penalty function $\eta(t) \ge 0$ is a Lagrange multiplier attached to $E_G$ as a state-constraint penalty. Note that we do not need a state-constraint penalty for $E_F$, since $E_F$ does not approach 0 over the time interval. The function $\eta$ satisfies $\eta \equiv 0$ where $E_G^* > 0$, and $\eta \ge 0$ where the state constraint is tight (i.e., $E_G = 0$). Since we seek to maximize $L$ with respect to $f(t)$, a state variable violating the constraint would decrease $L$ and could not be optimal, as then $\eta E_G < 0$.
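In standard PMP form, and writing $g_V, g_{E_F}, g_{E_G}, g_N$ as placeholders for the right-hand sides of the four state equations (which are not reproduced in this excerpt), the Lagrangian described above would take the shape

```latex
L \;=\; V \;+\; \lambda_1\, g_V \;+\; \lambda_2\, g_{E_F} \;+\; \lambda_3\, g_{E_G} \;+\; \lambda_4\, g_N \;+\; \eta(t)\, E_G(t),
```

where the running cost $V$ comes from the objective $J_i(f) = \int_0^T V(t)\,dt$ and the final term appends the state constraint $E_G \ge 0$ with multiplier $\eta(t) \ge 0$. This is a sketch of the standard construction, not the paper's exact expression.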
with transversality conditions $\lambda_1(T) = \lambda_2(T) = \lambda_3(T) = \lambda_4(T) = 0$. For $t \in [0,T]$, due to its initial condition, $E_F$ stays positive for any choice of $f(t)$ in the admissible control set, and thus $\lambda_2 \equiv 0$ on $[0,T]$. Since our objective functional and state equations are linear in the control, we consider the different signs of the derivative of $L$ with respect to $f(t)$, which is the switching function $\varphi(t)$.
At time $t$, for $\varphi(t) < 0$ the maximum is attained at $f^*(t) = 0$, whereas for $\varphi(t) > 0$ it is attained at $f^*(t) = f_{max}$. However, PMP (6) does not tell us what happens when $\varphi(t) = 0$; more information about the control $f$ than $\partial L / \partial f = 0$ is needed when $\varphi(t) = 0$ on a subinterval. If $\varphi(t) = 0$ only at a finite number of time points, we have a bang-bang control, and those points are the switching times. On the other hand, if $\varphi(t) = 0$ on a subinterval of time, we have a singular control.
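The three cases above can be collected into a simple control law. The function below is an illustrative sketch: the tolerance `tol` and the argument `f_singular` (the value supplied on a vanishing switching function, by the singular-arc formula or by $f_{s,b}$ on a boundary arc) are expository assumptions.

```python
def optimal_force(phi, f_max, f_singular, tol=1e-9):
    """Select the control from the sign of the switching function phi:
    bang-bang at the bounds, singular value when phi vanishes."""
    if phi > tol:         # switching function positive -> maximum force
        return f_max
    if phi < -tol:        # switching function negative -> zero force
        return 0.0
    return f_singular     # phi = 0 on a subinterval -> singular control
```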
For optimal control problems with state constraints, there is the possibility of boundary singular sub-arcs as well as interior singular sub-arcs. A boundary singular sub-arc occurs when the constraint is tight, i.e., when $E_G(t) = 0$. If $E_G(t) = 0$, then $E_G' = 0$, and solving the glycogen energy differential equation for the control yields $f_{s,b}$, provided that $V \cdot glyc(V) \ne 0$. Note that since $glyc(V)$ gives the percentage of fuel usage drawn from glycogen, and some percentage of fuel usage always comes from glycogen, this function is never 0. Moreover, due to the structure of the differential equation for $V$, as soon as $f$ is positive, $V$ is also positive. Hence $V \cdot glyc(V) \ne 0$ when the boundary singular arc is active.
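To make the boundary sub-arc computation concrete, suppose for illustration that the glycogen equation had the form $E_G' = \sigma N - c\, f\, V\, glyc(V)$, where $\sigma$ and $c$ are hypothetical constants (this form is an assumption; the paper's actual equation is not reproduced in this excerpt). The boundary condition $E_G' = 0$ would then give

```latex
f_{s,b} \;=\; \frac{\sigma N}{c\, V\, glyc(V)},
```

which is well defined precisely when $V \cdot glyc(V) \ne 0$, matching the proviso above.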
If the constraint is not tight, we may instead have an interior singular sub-arc, which we obtain by differentiating the switching function twice with respect to time, noting that $\eta(t) = 0$ on an interior singular sub-arc, and solving for $f_{singular}$. When the constraint on $E_G(t)$ is not tight, $\eta(t) = 0$; when it is tight, we solve for $\eta(t)$ using the fact that, over the small time interval where the glycogen energy constraint is tight, the switching function $\varphi(t)$ is 0 on a boundary sub-arc, and thus $\varphi'(t) = 0$ as well. Solving this equation for $\eta(t)$ gives the complete characterization of $\eta(t)$.
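Schematically, the interior singular sub-arc computation described above is:

```latex
\varphi(t) = 0 \;\Rightarrow\; \varphi'(t) = 0 \;\Rightarrow\; \varphi''(t) = 0
\qquad \text{on the sub-arc, with } \eta \equiv 0.
```

For a singular arc of order one, the control $f$ first appears explicitly in $\varphi''(t)$, so setting $\varphi''(t) = 0$ and solving for the control yields $f_{singular}$. On a boundary sub-arc, the conditions $\varphi(t) = 0$ and $\varphi'(t) = 0$ instead serve as the equation that determines $\eta(t)$.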
From the above, our optimal control (the force) has the following structure:

$$f^*(t) = \begin{cases} f_{max}, & \varphi(t) > 0, \\ 0, & \varphi(t) < 0, \\ f_{singular} \text{ or } f_{s,b}, & \varphi(t) = 0 \text{ on a subinterval.} \end{cases}$$

This $f^*(t)$ represents the force profile to which a runner should adhere in order to run the optimal race. We have shown that our problem meets all of the necessary conditions for the existence of a singular interior sub-arc, including the GLC condition (3) (not shown), which suggests that the optimal trajectory could include a singular component. To summarize the control trajectory: the optimal control begins with a maximum force sub-arc; for a race with large enough $T$, an optimal control must consist of more than just a maximum force arc; a singular boundary force sub-arc exists (likely at the end of the time interval); a singular interior sub-arc is likely; and, intuitively, a zero-force arc is unlikely to occur, as it would not be optimal.
We believe that, for our system, the control is singular for the majority of the event: it is comprised of a maximum force sub-arc, followed by a singular interior sub-arc, and finishes with a singular boundary sub-arc.