Increased accuracy of planning tools for optimization of dynamic multileaf collimator delivery of radiotherapy through reformulated objective functions

The purpose of this study is to examine, in a clinical setting, a novel formulation of objective functions for intensity-modulated radiotherapy treatment plan multicriteria optimization (MCO) that we suggested in a recent study. The proposed objective functions are extended with dynamic multileaf collimator (DMLC) delivery constraints from the literature, and a tailored interior point method is described that efficiently solves the resulting optimization formulation. In a numerical planning study involving three patient cases, DMLC plans Pareto optimal to the MCO formulation with the proposed objective functions are generated. Evaluated with respect to pre-defined plan quality indices, these DMLC plans are compared to conventionally generated DMLC plans; comparable or superior plan quality is observed. Supported by these results, the proposed objective functions are argued to have the potential to streamline the planning process, since they are designed to overcome the methodological shortcomings of the conventional penalty-based objective functions that are assumed to cause the current need for time-consuming trial-and-error parameter tuning. In particular, the increased accuracy of the planning tools offered by the proposed objective functions has the potential to make the planning process less complicated. These conclusions position the proposed formulation as an alternative to existing methods for automated planning.


Introduction
Optimization of intensity-modulated radiation therapy (IMRT) treatment plans using the widely used penalty-based objective functions results in plans of high quality in a majority of cases (see, e.g., [4,22] for references on the conventional penalty functions, and [5] for a general discussion of IMRT treatment planning based on these functions). The path towards satisfactory clinical quality, however, is not always straight. Because the relationship between the conventional objective functions and measures of plan quality is implicit and often unclear, several re-optimizations with manually revised parameters are often needed, if not to improve the plan quality, then to explore local variations in order to become convinced of optimality.
Limitations of the current planning tools are also reflected in the existence of literature on methods for detection and/or post-processing of suboptimal plans. For example, Appenzoller et al. [2] suggest a method to predict achievable dose-volume histograms (DVHs) to be used as a reference, and Fredriksson [10] describes a technique to reduce the dose delivered to healthy tissue subsequent to optimization. Fredriksson successfully improves in that sense an already optimized plan while maintaining target coverage.
The large amount of resources spent on treatment planning has motivated the development of strategies to automate the planning process. Automating strategies include machine learning, with the aim of predicting planning-related parameters based on a database of previously treated patients; and computerization of planning schemes, often with the aim of mimicking the planning steps of the human planner by enclosing the optimization in an outer automated fine-tuning loop. For example, McIntosh and Purdie [13] and Shiraishi and Moore [20] present methods to predict an achievable dose distribution to possibly be input to a subsequent automated plan reconstruction phase, and Wu et al. [25] and Appenzoller et al. [2] suggest methods to predict parts of or entire achievable DVHs to support the planner's decisions on planning parameters. Some automating strategies have already been implemented into clinical practice through commercial treatment planning systems (TPSs). Yuan et al. [26] describe the machine-learning concepts of one commercially available DVH prediction method, which has later been evaluated, e.g., in the study by Tol et al. [21]. Gintz et al. [11] describe and evaluate a clinically available computerization strategy to mimic a planning scheme that can handle a general patient case.
The common approach of many automating strategies is to remove or largely reduce human interaction within the planning process. A risk associated with such an approach is "automation surprises", causing the user to feel out of control and leading to unnecessary cognitive load [3]. In our research, we take a different perspective on automated treatment planning. We argue that the demand for automation is partly due to methodological shortcomings in the conventional penalty-based formulation, and that resolving these issues by reformulating the objective functions has a similar potential as other automating strategies to streamline the treatment planning process. A qualified formulation of objective functions has to offer a stronger connection to plan quality measures and lead to tractable (e.g., convex) optimization problems, yet without compromising performance: the plan quality achieved must be at least comparable to that achieved by conventional planning. A streamlining effect would then be expected as a result of eliminated or reduced trial and error, owing to the increased accuracy of the planning tools provided in the TPS.
In a recent fluence-based study [9], we suggest a novel formulation of objective functions that possesses the above-mentioned properties to streamline the planning process. We abandon the implicit penalty-function based paradigm to obtain a stronger connection to commonly used plan quality measures, and use approximations in order to achieve a convex optimization problem. The results of our numerical planning study indicate an ability of the proposed formulation to perform on par with or better than the conventional formulation (in terms of the given plan quality measures) in the given fluence-based setting. The purpose of the study presented in this manuscript is to strengthen our conclusions regarding the streamlining potential of the proposed formulation by evaluation in a more clinical context. To this end, deliverability constraints from the literature are added to handle delivery with the dynamic multileaf collimator (DMLC) technique. This extension enables clinically accurate dose computations, and thus comparison with DMLC plans conventionally generated using clinical TPSs.

Method
The method of this study is an extension of the objective functions introduced in our previous fluence-based study [9] with the linear DMLC deliverability constraints suggested by Papp and Unkelbach [16], together with the design of a tailored interior point method to efficiently solve the resulting MCO formulation. In Section 2.1, we state and motivate the proposed objective functions, and review the deliverability constraints by Papp and Unkelbach in Section 2.2. Finally, in Section 2.3, we discuss the matter of solving the entire MCO formulation. In particular, we demonstrate the linear-algebraic steps towards achieving the tailored interior point method, which are analogous to the derivation of the method described in [9] for the deliverability-relaxed MCO formulation.

Formulation of MCO objective functions
The plan quality indices that we consider are those which can be deduced from the dose-volume histograms (DVHs) of the dose distribution. Specifically, a dose-at-volume index here refers to the dose level at a given volume fraction in the DVH of a region of interest (ROI), such as a planning target volume (PTV) or an organ at risk (OAR), thus corresponding to the minimum dose received by this fraction of the particular ROI. In treatment plan optimization, doses-at-volume for possibly several volume fractions of the ROIs are to be either minimized or maximized, and/or kept below or above given dose bounds, to attain the highest possible (quantitative) plan quality and fulfil clinical requirements. This definition of the general planning goal is in line with the "Plan Quality Metric" designed by Nelms et al. [15] to unambiguously quantify and compare the plan quality of different plans. The conventional way of mathematically formulating the planning goal is by objective functions that penalize any deviation of the DVH curves from user-specified reference points (see, e.g., [4,22]). Since the connection between such penalties and the actual dose-at-volume is implicit and not always clear, the optimization might behave unexpectedly, causing a need for re-optimization and tedious parameter tuning. By contrast, the ideal way of formulating the planning goal is, naturally, by objective functions that explicitly quantify the doses-at-volume in question. A consequence of such a choice, however, would be an intractable optimization problem which cannot be solved to proven optimality: the function quantifying dose-at-volume is both non-differentiable and non-convex. The idea behind our proposed formulation is to adopt the ideal explicit structure, by which the penalty-function paradigm is abandoned, while replacing the dose-at-volume objective functions by convex over- and concave underestimates. We use the mean-tail-dose functions introduced by Romeijn et al. [19] to obtain such estimates. A brief review of the proposed MCO formulation follows here; the full formulation is stated in Appendix A and a detailed description is given in our previous paper [9].
Let d be a distribution of dose inside the patient geometry. Denote by D = D(d; v, s) the function quantifying the dose-at-volume associated with the volume fraction v of the ROI s, and by D⁺ = D⁺(d; v, s) and D⁻ = D⁻(d; v, s) the functions quantifying the upper and lower mean-tail-dose, respectively. The proposed MCO formulation takes D⁺ as objective function to approximate the minimization of D, and analogously the negative of D⁻ to approximate the maximization. With K doses-at-volume out of which q should be minimized and K − q maximized, the formulation is

    minimize over ξ_k, d:  (ξ_1, …, ξ_q, −ξ_{q+1}, …, −ξ_K)
    subject to   D⁺(d; v_k, s_k) ≤ ξ_k,   k = 1, …, q,
                 D⁻(d; v_k, s_k) ≥ ξ_k,   k = q+1, …, K,
                 l̂_k ≤ ξ_k ≤ u_k,         k = 1, …, q,
                 l_k ≤ ξ_k ≤ û_k,         k = q+1, …, K,
                 d deliverable,                                  (2.1)

where auxiliary variables ξ_k are introduced for clarity. Hard upper and lower dose bounds u_k and l_k are imposed on the mean-tail-doses via their auxiliary variables, and should be used to exclude clinically irrelevant dose distributions. Lower and upper dose bounds l̂_k and û_k are imposed only on the auxiliary variables, giving the effect of removed incentives to further minimize or maximize the mean-tail-doses; thus, l̂_k and û_k should be used to specify clinically satisfactory (or utopian) dose levels. Alternatively, excluding the auxiliary variable from the multicriteria objective removes all optimizing incentive and leaves only the hard dose bound for that mean-tail-dose. Notice that the ideal formulation is obtained by simply replacing D⁺ and D⁻ with D. Since the upper and lower mean-tail-doses over- and underestimate the dose-at-volume, i.e., D⁻ ≤ D ≤ D⁺, the dose-at-volume is properly controlled and guaranteed to satisfy the hard dose bounds imposed on the mean-tail-doses. The conservativeness of using the clinical dose bounds aimed for D as upper and lower dose bounds for D⁺ and D⁻ is briefly discussed in Section 4.
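To make the over-/underestimation relation concrete, the following sketch (illustrative Python, not part of the formulation; equal voxel volumes and one simple discrete convention for the tail fractions are assumed) computes a dose-at-volume together with its two mean-tail-dose estimates:

```python
import numpy as np

def dose_at_volume(d, v):
    """Dose-at-volume D(d; v): minimum dose received by the hottest
    fraction v of the (equal-volume) voxels."""
    s = np.sort(d)[::-1]                  # doses, hottest first
    k = int(np.ceil(v * len(d)))          # voxels in the hottest fraction
    return s[k - 1]

def upper_mtd(d, v):
    """Upper mean-tail-dose D+(d; v): mean dose of the hottest fraction v."""
    s = np.sort(d)[::-1]
    k = int(np.ceil(v * len(d)))
    return s[:k].mean()

def lower_mtd(d, v):
    """Lower mean-tail-dose D-(d; v): mean dose of the coldest complementary
    fraction (one discrete convention among several possible)."""
    s = np.sort(d)                        # doses, coldest first
    m = len(d) - int(np.ceil(v * len(d))) + 1
    return s[:m].mean()

rng = np.random.default_rng(0)
d = rng.uniform(40.0, 80.0, size=1000)    # a synthetic dose distribution [Gy]
v = 0.1
assert lower_mtd(d, v) <= dose_at_volume(d, v) <= upper_mtd(d, v)
```

For any v, the sandwich relation between the three quantities is what allows hard dose bounds imposed on the mean-tail-doses to control the dose-at-volume conservatively.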
By convexity of D⁺ and concavity of D⁻, the formulation in (2.1) is a convex MCO formulation provided that the intended set of deliverable dose distributions is convex (with respect to the delivery parameters). Without compromising convexity, (2.1) can be extended with objective functions to minimize the maximum dose, maximize the minimum dose, or optimize the average dose in either direction. These extensions are included in the full formulation given in Appendix A.

Modelling of DMLC deliverability
We use the formulation by Papp and Unkelbach [16] to mathematically model the deliverability requirement in (2.1) for "sliding window" DMLC delivery. DMLC delivery allows the collimator leaves to move during irradiation. By restricting to unidirectional sweeps (sliding windows), Papp and Unkelbach were able to express the resultant dose distribution using only linear transformations. This was achieved by representing the sweeps as sequences of arrival and departure times of the leaves at fixed positions (bixels) along the sweeping direction, rather than using control-point based representations in which the trajectories are given by the sequences of leaf positions at fixed points in time. Control-point based representations in general require a non-convex mapping to transform into a dose distribution (see, e.g., [23]). A brief review of the formulation by Papp and Unkelbach, with some supplementary assumptions, follows here.
Denote by B the number of beam angles, by N the number of leaf pairs of the MLC, and by J the number of bixels swept by each leaf pair, so that the fluence map for each beam is defined on a rectangular N × J grid of bixels. The trajectory of the nth leaf pair in the bth beam is defined as the sequences of departure times l_{b,n,j} and r_{b,n,j}, for all bixels j, when the trailing (WLOG, the left) and the leading (right) leaf begin traversing each bixel. Assuming that all leaves require a constant time Δt to traverse a bixel, the conditions for a feasible trajectory become

    l_{b,n,j+1} ≥ l_{b,n,j} + Δt,   j = 1, …, J−1,    (2.2a)
    r_{b,n,j+1} ≥ r_{b,n,j} + Δt,   j = 1, …, J−1,    (2.2b)
    r_{b,n,j} ≤ l_{b,n,j},          j = 1, …, J,      (2.2c)

for all beams b and leaf pairs n, where (2.2a) and (2.2b) prevent a leaf from departing from bixel j+1 before arriving there, and (2.2c) prevents a trailing leaf from passing a leading leaf. The effect of assuming constant traversal time is a halved number of DMLC variables as compared to the original formulation by Papp and Unkelbach, who also keep track of arrival times. The assumption reduces the set of feasible trajectories but, most importantly, has no impact on the set of deliverable dose distributions. Further restrictions to the trajectories can be made depending on limitations of the delivering treatment machine or other resources. For example, Papp and Unkelbach outline how to handle interdigitation constraints. The machine used in this study allows interdigitation, but requires the leading and trailing leaves to always be separated by a minimum gap corresponding to a fraction ρ ∈ (0, 1) of the bixel width. The leading leaf must thus depart ρΔt time units before the trailing leaf arrives,

    r_{b,n,j} + ρΔt ≤ l_{b,n,j−1} + Δt,   j = 2, …, J,    (2.3a)
    r_{b,n,1} + ρΔt ≤ l_{b,n,1}.                          (2.3b)

Here, (2.3b) handles the case with the left-most bixel. Notice that the minimum gap constraints (2.3) make (2.2c) redundant. We also require the entire delivery to be completed within some maximum treatment time T_max. Since the delivery of beam b is completed when all trailing leaves have traversed the right-most bixel J, this requirement is formulated by the linear constraints

    T_b ≥ l_{b,n,J} + Δt,   n = 1, …, N,    (2.4a)
    Σ_b T_b ≤ T_max,                         (2.4b)

where T_b will be regarded as the beam-on time for beam b.
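As a sketch, the traversal-time, minimum-gap, and treatment-time conditions above can be checked for a single leaf pair as follows (illustrative Python; the function name, tolerance handling, and sample numbers are our own, and the inequalities mirror our reading of the feasibility, minimum-gap, and treatment-time conditions):

```python
import numpy as np

def trajectory_feasible(l, r, dt, rho, T_b, tol=1e-9):
    """Check one leaf pair's sweep (departure times l[j], r[j], j = 0..J-1):
    each leaf needs time dt to traverse a bixel, the leading (right) leaf must
    clear a bixel rho*dt before the trailing (left) leaf arrives, and the
    trailing leaf must have traversed the last bixel by the beam-on time T_b."""
    l, r = np.asarray(l, float), np.asarray(r, float)
    ok = True
    ok &= bool(np.all(np.diff(l) >= dt - tol))   # no departure before arrival (trailing)
    ok &= bool(np.all(np.diff(r) >= dt - tol))   # same for the leading leaf
    ok &= bool(r[0] + rho * dt <= l[0] + tol)    # minimum gap at the left-most bixel
    ok &= bool(np.all(r[1:] + rho * dt <= l[:-1] + dt + tol))  # minimum gap elsewhere
    ok &= bool(l[-1] + dt <= T_b + tol)          # delivery done within beam-on time
    return ok

dt, rho = 0.5, 0.2
l = np.array([2.0, 2.6, 3.2, 3.8])   # trailing-leaf departure times per bixel [s]
r = np.array([0.5, 1.1, 1.7, 2.3])   # leading-leaf departure times per bixel [s]
assert trajectory_feasible(l, r, dt, rho, T_b=5.0)
```

Note that the minimum-gap checks imply the no-passing condition, in line with the redundancy remark above.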
The dose distribution is determined by the time of exposure to radiation of the bixels. As observed by Papp and Unkelbach, the exposure can be expressed linearly: the full exposure of the bixels traversed by leaf pair n in beam b equals the time intervals between the departures of the trailing and leading leaves, l_{b,n,j} − r_{b,n,j}. We slightly extend this model by accounting for partial exposure due to MLC leakage during the remaining portion of the beam-on time T_b, where τ ∈ (0, 1) denotes the transmission factor. Assuming a constant dose rate δ, the beamlet weights of the fluence map are then given by

    δ [ (l_{b,n,j} − r_{b,n,j}) + τ (T_b − (l_{b,n,j} − r_{b,n,j})) ],   j = 1, …, J,

and the dose distribution d (now discretized into voxels) is obtained from multiplication with the dose deposition matrix P,

    d = δ P [ (l − r) + τ (T − (l − r)) ],    (2.5)

where l and r are vectorizations of l_{b,n,j} and r_{b,n,j}, and the vector T is the appropriate vectorization of the beam-on times (i.e., the vector of N J repetitions of each T_b).
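A minimal numerical illustration of the linearity of this dose model follows (illustrative Python; the dose deposition matrix, dose rate, transmission factor, and trajectory data are synthetic, not clinical values):

```python
import numpy as np

# Illustrative sizes; in practice P comes from a dose engine.
rng = np.random.default_rng(1)
n_bixels, n_voxels = 8, 20
P = rng.uniform(0.0, 0.1, size=(n_voxels, n_bixels))   # dose deposition matrix
delta, tau = 1.0, 0.02                                  # dose rate, transmission factor
T_b = 10.0                                              # beam-on time
l = rng.uniform(5.0, 9.0, size=n_bixels)                # trailing-leaf departure times
r = l - rng.uniform(1.0, 3.0, size=n_bixels)            # leading leaf first: l - r > 0

exposure = l - r
w = delta * (exposure + tau * (T_b - exposure))         # beamlet weights with leakage
d = P @ w                                               # voxel doses

# Equivalent affine form: d = delta * P @ ((1 - tau)*(l - r) + tau*T),
# making the linearity in (l, r, T) explicit.
d2 = delta * P @ ((1 - tau) * (l - r) + tau * T_b * np.ones(n_bixels))
assert np.allclose(d, d2)
```

The affine dependence of d on the leaf departure times and beam-on times is what keeps the deliverability-extended formulation convex.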

On solving the proposed MCO formulation
A Pareto optimal solution to (2.1) is obtained as the optimal solution to a weighted-sum instance, in which the multiple objective functions are scaled by weighting factors and accumulated into one; different weighting factors lead to different Pareto optimal solutions. Rockafellar and Uryasev [18] have shown that the conditional value-at-risk measure, the financial counterpart of mean-tail-dose, can be expressed as a linear program by the introduction of artificial variables. This result allows us to expand the weighted-sum instances of the DMLC deliverability version of (2.1) into linear programs, for which both extensive theory and efficient methods are available, such as the simplex method and interior point methods (see, e.g., [12] and [24]). The full formulation given in Appendix A takes this expanded form. A drawback of the expansion is a largely increased problem size, caused by variables and constraints with voxelwise components (usually of the order of 10^5).
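The Rockafellar-Uryasev result can be illustrated numerically: the upper mean-tail-dose satisfies D⁺(d; v) = min over α of { α + (1/v)·mean((d − α)₊) }, and since this objective is piecewise linear in α, the minimum is attained at one of the voxel doses. The sketch below (illustrative Python; equal voxel volumes and v·n integer are assumed) verifies this against the direct tail mean:

```python
import numpy as np

rng = np.random.default_rng(2)
n, v = 200, 0.05                       # 200 voxels, hottest 5% tail
d = rng.normal(60.0, 5.0, size=n)      # synthetic voxel doses [Gy]
k = int(round(v * n))                  # v*n chosen integer here

# Direct upper mean-tail-dose: mean of the k hottest voxels.
direct = np.sort(d)[::-1][:k].mean()

# Rockafellar-Uryasev: D+ = min over alpha of alpha + mean((d - alpha)_+)/v.
# The objective is piecewise linear in alpha, so its minimum is attained at
# a data point and searching the voxel doses themselves suffices.
ru = min(a + np.maximum(d - a, 0.0).mean() / v for a in d)

assert abs(direct - ru) < 1e-9
```

In the LP expansion, the inner minimization over α is handled by the constraints, at the cost of one artificial variable per voxel per mean-tail-dose, which is exactly the problem-size drawback noted above.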
In our previous study [9], however, we demonstrate how to utilize an interior point method to efficiently solve the deliverability-relaxed version of (2.1). The key is to reduce the system of linear equations associated with the interior point method to the order of the number of bixels, usually implying a reduction in size by several orders of magnitude. Fortunately, we can obtain an analogous reduction for the DMLC deliverability version, since adding the constraints (2.2)-(2.5) does not change the problem structure. The essential steps to reduce the system of linear equations are summarized below; for linear-algebraic details, we refer to Section 4.A in our previous study. The constraint matrix of the linear program expansion is partitioned according to voxelwise dependence; let it be denoted by

    A = [ A_11  A_12 ]
        [ A_21  A_22 ],

where A_22 takes the coefficients for all variables and constraints with voxelwise components. The size of A_11 mainly depends on the number of bixels, and is thus in general several orders of magnitude smaller than that of A_22. The system of linear equations to be solved at each iteration of the interior point method involves this partition of A together with positive definite diagonal matrices D_1, D_2, D_3 and D_4 containing the so-called complementarity products (the unknowns Δx and right-hand sides r_RHS are generalized accordingly). By a simple rearrangement, the voxelwise components are concentrated in the bottom right quadrant of the system. The structure of this quadrant gives inexpensive access not only to multiplication with the quadrant itself but also to its inverse, which is identically structured and can be formed explicitly (see Section 4.A in [9]). Both solving and multiplying using matrices with this structure are inexpensive operations that merely amount to scaling and adding two vectors. Therefore, the entire system can be solved in two relatively inexpensive steps: one solve using the Schur complement of the voxelwise quadrant, and one subsequent operation using the inverse of the voxelwise quadrant.
The first step is inexpensive since the Schur complement is considerably smaller than the entire system; its size is identical to that of the top left quadrant, i.e., is of the order of the number of bixels.
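The two-step solve can be illustrated on a generic block system whose large bottom-right quadrant is trivially invertible (here simply diagonal, as a stand-in for the voxelwise quadrant; the actual systems in [9] carry additional structure):

```python
import numpy as np

rng = np.random.default_rng(3)
n_small, n_big = 5, 50                 # "bixel-sized" and "voxel-sized" blocks

# Generic block system [[M11, M12], [M21, M22]] x = b in which the large
# bottom-right quadrant M22 is diagonal, hence cheap to invert.
M11 = rng.standard_normal((n_small, n_small)) + 10 * np.eye(n_small)
M12 = rng.standard_normal((n_small, n_big))
M21 = rng.standard_normal((n_big, n_small))
M22d = rng.uniform(1.0, 2.0, size=n_big)              # diagonal of M22
b1, b2 = rng.standard_normal(n_small), rng.standard_normal(n_big)

# Step 1: solve with the (small) Schur complement M11 - M12 M22^{-1} M21.
S = M11 - (M12 / M22d) @ M21
x1 = np.linalg.solve(S, b1 - (M12 / M22d) @ b2)
# Step 2: back-substitute through the cheap bottom-right quadrant.
x2 = (b2 - M21 @ x1) / M22d

# Agreement with a dense solve of the full system.
M = np.block([[M11, M12], [M21, np.diag(M22d)]])
assert np.allclose(np.concatenate([x1, x2]),
                   np.linalg.solve(M, np.concatenate([b1, b2])))
```

Only the small Schur complement is ever factorized; the large quadrant is touched through diagonal scaling alone, which is the source of the claimed reduction to the order of the number of bixels.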
Applying an interior point method to solve the linear program expansion of (2.1) has an advantage over a simplex method regarding the interpretation and choice of stopping criteria. Once they are feasible, the iterates of the interior point method provide information about the proximity to the optimal value, and the method is usually terminated when the distance is "sufficiently small". In the case of a weighted-sum instance of (2.1), this stopping criterion is directly interpretable as a dose tolerance (owing to the proposed objective functions being interpretable as dose) and can thus be chosen as such. In the planning study presented in Section 3, the interior point method is terminated when the objective function value is within 1 cGy of the optimal value. While this is inexact in a mathematical sense (the stopping criterion normally relating to machine precision), pushing the method much further is not expected to be meaningful in our application.

Results
To examine the ability of the proposed objective functions to produce high-quality DMLC plans, we have studied the plan quality indices obtained among DMLC plans Pareto optimal to (2.1). DMLC plans generated using the MCO module in RayStation [17] are used as a reference for conventional planning.
Three patient cases are considered: a prostate, a lung, and a head-and-neck (HN) case. Their respective planning goal amounts to optimizing the three plan quality measures shown in Table 3.1, subject to fulfilment of several PTV and OAR dose requirements expressed in terms of dose-at-volume, average, minimum, and maximum dose bounds. Lists of all PTV and OAR requirements are given in Appendix B. The restriction to three plan quality indices is to enable visualization of the obtained values in a three-dimensional coordinate system; in general, any number of indices can be chosen. Our translation of the planning goals into tricriteria instances of (2.1) is straightforward: doses-at-volume are replaced with upper and lower mean-tail-doses (average, minimum, and maximum doses are exact, see Appendix A), with their dose bounds left unaltered; and since no optimizing incentive is needed for the PTV and OAR requirements in Appendix B, their auxiliary variables are excluded from the objective as described in Section 2. As to the analogous set-up in RayStation of the tricriteria instances for conventional planning, besides constraints to handle the dose bounds, three penalty-based objective functions are used with the utopian levels l̂_k and û_k as reference points. Except for the initial set-up according to the planning goals and constraints given in the tables, no further human interaction took place.

Table 3.1: Plan quality indices for the three studied patient cases. Type states the type of index, here either the dose-at-volume (d-a-v) at the v_k volume fraction or the average dose (avg). For the PTVs, a homogeneity index HI is constructed as the difference between a low- (L) and a high-percentage (H) dose-at-volume. Aim indicates, for clarity, whether the index is subjected to minimization or to maximization. Parameters l_k and u_k are hard dose bounds [Gy], while l̂_k and û_k are utopian levels [Gy] at which optimizing incentives can be removed.

Structure | Type | v_k | Aim | l_k, û_k | l̂_k, u_k

A total of 34 DMLC plans Pareto optimal to (2.1) are generated for the prostate and lung case, and a total of 13 for the computationally more expensive HN case. The underlying objective function weights are normalized and equidistantly distributed on the triangle with corners in (1,0,0), (0,1,0), and (0,0,1). As to the reference DMLC plans Pareto optimal to the conventional formulation, a total of 55 are generated for each case using the MCO module in RayStation. Independently of how they were generated, all plans undergo final dose computation of clinical accuracy and are then evaluated based on the three targeted plan quality indices of Table 3.1. The outcome of this evaluation is shown in Figures 3.1-3.3, and DVHs are shown in Figure 3.4 to provide a broader perspective on the plans with regard to dose distribution. Figure 3.5 shows the plan quality indices in juxtaposition with the Pareto optimal objective function values of (2.1). In Figures 3.1-3.3, the plans are represented by their plan quality indices, i.e., not by their respective Pareto optimal objective function values. As expected from the weak connection between the penalty-based objective functions and the plan quality measures, the plans Pareto optimal to the conventional formulation (red dots) appear arbitrarily scattered in this representation. In contrast, the plans Pareto optimal to (2.1) (blue dots) conform to a non-dominated surface in a similar way as Pareto optimal values do to a Pareto surface (we refer to a point as dominating another point if it dominates in all coordinates). This conformity is indicated by the fact that almost all of these plans lie on the convex hull (blue surface). In fact, as can be observed in Figure 3.5, the convex hull of the plan quality indices appears only rigidly shifted relative to the convex hull of the corresponding Pareto optimal values, rather than deformed in shape.
These indications support the assumed stronger correlation between the proposed objective functions and the plan quality measures.
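The dominance notion used when reading the results can be made explicit in a few lines (illustrative Python; for simplicity all three indices are taken to be minimized here, whereas in Table 3.1 some indices are maximized, and the plan-quality triples are hypothetical):

```python
import numpy as np

def non_dominated(points):
    """Indices of points not dominated by any other point, where point a
    dominates point b if a <= b in every coordinate and a < b in at least
    one (all coordinates to be minimized)."""
    pts = np.asarray(points, float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical plan-quality triples (all to be minimized).
plans = [(1.0, 2.0, 3.0), (2.0, 1.0, 3.0), (2.0, 2.0, 4.0), (1.0, 2.0, 2.5)]
assert non_dominated(plans) == [1, 3]
```

A set of plans that conforms to a non-dominated surface, as the plans Pareto optimal to (2.1) do, passes through such a filter largely intact, whereas an arbitrarily scattered set does not.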
As to the values obtained in Figures 3.1-3.3, neither of the two sets of points appears to clearly dominate the other. Dominance is particularly unclear for the prostate case shown in Figure 3.1. However, the situation changes when violation of constraints is taken into account; violation of PTV constraints by more than 1 % is marked in the figures by unfilled dots. A dominance in favor of the proposed objective functions is revealed when violating plans are ignored. For example, for the lung case shown in Figure 3.2, the violating plans are exactly those of the conventionally generated plans that are not dominated. Although minor violation is somewhat expected for all plans due to the final accurate dose computation, the violation among plans generated using the conventional objective functions is more systematic and of a larger extent. This inherent tendency of the conventional approach to give solutions that sometimes fail to satisfy constraints is addressed and explained by Fredriksson [10], who suggests using smoother penalty functions as a possible remedy. Implications of the results are further discussed in Section 4.

Discussion
The goal of this study was to examine the streamlining potential, with respect to the planning process, of the objective functions suggested in our previous study [9] and formulated in (2.1), when stress-tested with DMLC deliverability constraints and accurate dose computations. The proposed objective functions are designed to offer a stronger connection to measures of plan quality than the conventional penalty-based objective functions, and to give tractable (convex) optimization problems. The proposed functions are argued to have a streamlining potential if they, in addition, are able to produce plans of at least equivalent quality as the conventional functions. The outcome of our planning study indeed indicates such an ability. Surprisingly, the results presented in Figures 3.1-3.3 and commented on in Section 3 were obtained despite several handicaps for the proposed formulation. One such handicap is the already addressed unfair advantage given to the conventional formulation by its systematically violated constraints: violated constraints theoretically give more room to optimize the objective function values (here, to minimize the penalties), yet the plan quality indices obtained are comparable. Another handicap is the fact that the dose bounds aimed for PTV and OAR doses-at-volume are imposed unaltered on the mean-tail-doses when optimizing using the proposed formulation. Since upper and lower mean-tail-doses over- and underestimate dose-at-volume, respectively, the unaltered bounds overly restrict (i.e., narrow) the feasible region of the optimization problem. A smaller feasible region gives less room to optimize the objective function values, yet again, neither of the two sets of plan quality indices dominates the other. It seems indeed that the characterizing properties of the proposed formulation, consistency with respect to plan quality measures and convexity, are able to compensate for these handicaps.
As mentioned in Section 3, no human-to-TPS interaction was involved in our planning study except for initial set-up. The results are the outcome of optimizing an MCO set-up based on a straightforward and systematic translation of the clinical goals and requirements into objective functions and constraints. Re-optimizing each of the conventionally generated plans with revised parameters (i.e., revised reference points) could improve their quality and feasibility. The extent of this potential improvement has not been investigated, but what can be concluded is an existing need to re-optimize all three studied patient cases in order to meet the outcome from using (2.1).
The positive effects of explicitness and convexity of the proposed formulation (2.1) do not come for free. In abandoning the penalty-function paradigm in favor of these aspects, compromises have been made with regard to computational complexity. The problem size of (2.1) in its expanded form is proportional to the number of voxels, which can be contrasted with the problem size of the conventional formulation, proportional to the usually substantially smaller number of bixels. Although fast optimization possibly becomes less important with more accurate planning tools, running times must be reasonable to justify practical use. We have demonstrated in Section 2.3 how to exploit the structure of (2.1) to significantly reduce the computational cost associated with solving this formulation using a tailored interior point method. Combined with the techniques developed by Colombo and Gondzio [6] to further reduce this cost, running times of the order of 5, 10, and 60 minutes per weighted-sum instance of the prostate, lung, and HN case, respectively, were required to meet the desired optimization tolerance with our MATLAB (MathWorks, Natick, Massachusetts) implementation on an Intel Core i7 2.80 GHz computer with four cores. We anticipate a decrease in running times if GPUs are used.

Figure 3.4: DVHs for the three patient cases and the three respective structures for which plan quality measures are optimized. The blue band covers the area spanned by the plans Pareto optimal to (2.1), and the red band covers that spanned by the plans Pareto optimal to the conventional formulation. The highlighted blue histograms correspond to the plan with balanced objective function weights (i.e., with all weights equal to one third).
Treatment planning is a complex task. To what extent it is complicated, however, depends on the planning tools available to support the planner in completing that task. This distinction between the complexity of a process and complicated tools to handle the process is in line with the theories and experiences presented by Andersson et al. [1]. While the latter is a consequence of shortcomings in tool design and can (and should) be avoided, complexity is inherent and needs to be accepted. In our research, we embrace the complexity of treatment planning while aiming at making the planning process less complicated, specifically by offering more accurate tools, i.e., by reformulating the underlying objective functions into more comprehensive alternatives. Another example of reducing complication while accepting the complexity of treatment planning is the successful clinical introduction [8] of a posteriori MCO techniques, made possible through rigorous and extensive research (see, e.g., [4,7,14]). With these techniques at hand, the otherwise tedious manual task of finding weighting factors that give a desired objective function trade-off can be accomplished in a formalized and structured manner without hiding degrees of freedom; the weighting factors indeed introduce a flexibility to the planning process that should be available for the planner to explore. We believe that combining MCO techniques with the proposed objective functions can provide the skilled planner with the necessary tools to control the planning process in an uncomplicated fashion. Future studies should aim at further developing practical aspects of using the proposed formulation, in particular with regard to compliance with other treatment delivery techniques and with other types of plan quality indices.

Conclusion
We have investigated the ability of our previously suggested formulation of objective functions for treatment plan MCO to produce DMLC plans of quality comparable to that of DMLC plans generated using the conventional penalty-based objective functions. In a numerical planning study involving three patient cases, DMLC plans Pareto optimal to the proposed formulation were generated and compared to the DMLC plans resulting from the penalty-based MCO module in RayStation. Comparable plan quality, or better when accounting for the systematic violation of constraints among conventionally generated plans, was observed when evaluating the plans based on three pre-defined plan quality indices. Supported by these results, the proposed objective functions are argued to have the potential to streamline the planning process, since they are also designed to overcome methodological shortcomings in the conventional objective functions that cause a need for trial and error and time-consuming re-optimizations in the current planning process. These conclusions position the proposed formulation as an alternative to existing methods for automated planning. In particular, we believe that the increased accuracy of the planning tools offered by the proposed objective functions is essential in making the planning process less complicated.
The full proposed formulation is then given by

    minimize over α_k, ξ_k, η_k, d:  (ξ_1, …, ξ_q, −ξ_{q+1}, …, −ξ_K)
    subject to
        ξ_k ≥ α_k + (1/v_k) Σ_{i∈V_{s_k}} Δ_i^{s_k} η_{k,i},    k = 1, …, q,        (A.1a)
        η_{k,i} ≥ d_i − α_k,    i ∈ V_{s_k},  k = 1, …, q,                          (A.1b)
        η_{k,i} ≥ 0,            i ∈ V_{s_k},  k = 1, …, q,                          (A.1c)
        l̂_k ≤ ξ_k ≤ u_k,        k = 1, …, q,                                        (A.1d)
        ξ_k ≤ α_k − (1/(1 − v_k)) Σ_{i∈V_{s_k}} Δ_i^{s_k} η_{k,i},  k = q+1, …, K,  (A.1e)
        η_{k,i} ≥ α_k − d_i,    i ∈ V_{s_k},  k = q+1, …, K,                        (A.1f)
        η_{k,i} ≥ 0,            i ∈ V_{s_k},  k = q+1, …, K,                        (A.1g)
        l_k ≤ ξ_k ≤ û_k,        k = q+1, …, K,                                      (A.1h)
        d deliverable,                                                              (A.1i)

where α_k and η_k are artificial variables used to linearly handle the mean-tail-dose functions. The dose distribution d, auxiliary variables ξ_k, volume fractions v_k, and bounds l_k, l̂_k, u_k, û_k are as defined in Section 2. Moreover, V_s collects the voxels of ROI s, and Δ_i^s is the relative volume of s located in voxel i, such that Σ_{i∈V_s} Δ_i^s = 1. Notice that the constraints (A.1b), (A.1c), (A.1f), and (A.1g) and the variable η_k have voxelwise components, as addressed in Section 2.3.

Table B.1 shows the PTV and OAR requirements for the prostate, lung, and HN cases introduced in Section 3. In addition to these constraints, dose fall-off is required in terms of average and maximum dose bounds for several ring structures at different distances from the PTVs.