Paper (Open access)

Efficiency enhancements of a Monte Carlo beamlet based treatment planning process: implementation and parameter study


Published 13 February 2023 © 2023 The Author(s). Published on behalf of Institute of Physics and Engineering in Medicine by IOP Publishing Ltd
Citation: S Mueller et al 2023 Phys. Med. Biol. 68 044003. DOI: 10.1088/1361-6560/acb480


Abstract

Objective. The computational effort to perform beamlet calculation, plan optimization and final dose calculation of a treatment planning process (TPP) generating intensity modulated treatment plans is enormous, especially if Monte Carlo (MC) simulations are used for dose calculation. The goal of this work is to improve the computational efficiency of a fully MC based TPP for static and dynamic photon, electron and mixed photon-electron treatment techniques by implementing multiple methods and studying the influence of their parameters. Approach. A framework is implemented calculating MC beamlets efficiently in parallel on each available CPU core. The user can specify the desired statistical uncertainty of the beamlets, a fractional sparse dose threshold to save beamlets in a sparse format and minimal distances to the PTV surface from which 2 × 2 × 2 = 8 (medium) or even 4 × 4 × 4 = 64 (large) voxels are merged. The compromise between final plan quality and computational efficiency of beamlet calculation and optimization is studied for several parameter values to find a reasonable trade-off. For this purpose, four clinical and one academic case are considered with different treatment techniques. Main results. Setting the statistical uncertainty to 5% (photon beamlets) and 15% (electron beamlets), the fractional sparse dose threshold relative to the maximal beamlet dose to 0.1% and minimal distances for medium and large voxels to the PTV to 1 cm and 2 cm, respectively, does not lead to substantial degradation in final plan quality compared to using 2.5% (photon beamlets) and 5% (electron beamlets) statistical uncertainty and no sparse format nor voxel merging. Only OAR sparing is slightly degraded. Furthermore, computation times are reduced by about 58% (photon beamlets), 88% (electron beamlets) and 96% (optimization). Significance. Several methods are implemented improving computational efficiency of beamlet calculation and plan optimization of a fully MC based TPP without substantial degradation in final plan quality.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Treatment plans for photon intensity modulated radiotherapy (IMRT) achieve a highly conformal dose distribution to the target volume. IMRT was enabled by the introduction of the multileaf collimator (MLC) (Convery and Rosenbloom 1992) and inverse planning strategies (Bortfeld 2006). To follow an inverse planning strategy for photon IMRT, the user sets up multiple treatment fields. Each field is discretized into hundreds of small beams (beamlets) and their dose distributions are calculated. These beamlets are described by a dose-influence matrix dij, where j is the beamlet index and i is the voxel index. A large-scale optimization follows to optimize the large number of parameters, such as MLC leaf positions or intensities, based on the information in the dij. Finally, the dose distribution of the optimized plan is re-calculated to accurately predict the dose delivered to the patient with algorithms that also consider the impact of the MLC, such as Monte Carlo (MC) simulations, solvers of the linear Boltzmann transport equation or convolution/superposition models. Final plan quality is degraded less from the optimized to the final dose distribution if the same accurate dose calculation algorithm is used for beamlet calculation as for final dose calculation (Jeraj 2002, Dogan et al 2006, Li et al 2015). However, the computation time is increased.

It is apparent that following an inverse planning strategy in the treatment planning process (TPP), with its beamlet calculations, optimization and final dose calculation, leads to an enormous computational effort. For the development of novel treatment techniques and optimization strategies beyond standard IMRT, computational demands tend to increase even further:

  • Dynamic treatment techniques such as volumetric modulated arc therapy (VMAT) (Yu 1995, Otto 2008) or dynamic trajectory radiotherapy (DTRT) (Smyth et al 2019) extending VMAT by dynamic table and collimator rotation are described by dynamic paths discretized into control points. The beamlets are calculated for each of these control points.
  • Treatment techniques using multiple beam energies such as photon MLC based modulated electron radiotherapy (MERT) (Klein et al 2008, Salguero et al 2009, 2010, Henzen et al 2014a, Joosten et al 2018, Kaluarachchi et al 2020) and intensity modulated proton therapy (IMPT) (Paganetti 2017) or even multiple particle types (Unkelbach et al 2022) such as mixed photon-electron beam radiotherapy (MBRT) (Palma et al 2012, Míguez et al 2017, Mueller et al 2017, 2018, Renaud et al 2019, Heath et al 2021, Heng et al 2021) or combined proton-photon therapy (CPPT) (Unkelbach et al 2018b, Gao 2019, Fabiano et al 2020a, 2020b, Kueng et al 2021, Marc et al 2021, Amstutz et al 2022) need to calculate the beamlets for each different beam energy and particle type.
  • Robust optimization needs to calculate beamlets for each error scenario to be considered (Unkelbach et al 2018a).
  • Optimization strategies such as multicriteria optimization and auto-planning run numerous optimizations with different objectives, each dealing with a huge number of beamlets.
  • Many beam angle optimization strategies need to calculate the beamlets for every beam angle under consideration and need to run multiple optimizations to find a suitable set of beam angles (Haas 2003, Aleman 2007).

Thus, computation time is a crucial aspect for the clinical introduction of novel techniques. However, there is usually a compromise between final plan quality and computational efficiency, as already pointed out with the example of algorithm selection for beamlet calculation. There are multiple approaches to reduce the computation time of the TPP; adaptations in the beamlet calculation are especially relevant:

  • Adaptations and suitable parameter selection of the beamlet calculation algorithm itself. In case of MC simulations, examples are variance reduction techniques (Fippel 2016) and the choice of the desired statistical uncertainty (Jeraj et al 2000, Ma et al 2005).
  • Reduction of beamlet data leading to a faster plan optimization. An example is using a larger voxel size (De Smedt et al 2005).
  • Performing beamlet calculation on faster hardware and especially using parallel execution (Neph et al 2019), since each beamlet can be calculated independently of the others.

The techniques in the first two categories have in common that they often lead to less accurate or less precise beamlets and therefore to a degradation in final plan quality. Thus, the impact of such solutions needs to be studied systematically to find a reasonable trade-off.

The goal of this work is to improve the computational efficiency of a fully MC based TPP. For this purpose, multiple approaches enhancing the efficiency of the beamlet calculation itself or reducing the beamlet data, and thereby the optimization times, are implemented, and their impact on final plan quality and computational efficiency is studied. Several intensity modulated photon, electron and mixed photon-electron beam treatment techniques are considered for this study.

2. Methods

In this work, whenever the term statistical uncertainty is used, for both beamlet and final dose distributions, it refers to the mean statistical uncertainty (one standard deviation) over the voxels with dose values higher than 50% of the maximum dose. The statistical uncertainty is always determined with the history-by-history method (Walters et al 2002).
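As an illustration of this metric, the following minimal sketch (not the authors' code) computes the mean relative uncertainty from per-voxel history-by-history sums, assuming these sums have already been accumulated during particle transport:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative sketch: mean relative statistical uncertainty (one standard
// deviation) over all voxels receiving more than 50% of the maximum dose.
// sum_d[i] and sum_d2[i] are the per-voxel sums of dose and squared dose
// over N independent histories, as accumulated by the history-by-history
// method (Walters et al 2002).
double meanRelativeUncertainty(const std::vector<double>& sum_d,
                               const std::vector<double>& sum_d2,
                               long long N) {
    double dmax = 0.0;
    for (double s : sum_d) dmax = std::max(dmax, s / N);

    double acc = 0.0;
    long long count = 0;
    for (std::size_t i = 0; i < sum_d.size(); ++i) {
        const double mean = sum_d[i] / N;              // mean dose per history
        if (mean <= 0.5 * dmax) continue;              // keep voxels > 50% of Dmax
        const double var = (sum_d2[i] / N - mean * mean) / (N - 1); // variance of the mean
        acc += std::sqrt(var) / mean;                  // relative 1-sigma uncertainty
        ++count;
    }
    return count > 0 ? acc / count : 0.0;
}
```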

2.1. Treatment planning process

In recent work (Henzen et al 2014a, Mueller et al 2017, 2022, Guyer et al 2022), a fully MC based TPP as illustrated in figure 1(a) was developed to create treatment plans for IMRT, MERT, MBRT, VMAT, DTRT and dynamic mixed beam radiotherapy (DYMBER) (Mueller et al 2018), which is a combination of MERT and DTRT. These treatment plans are all deliverable in the developer mode of a TrueBeam system (Varian Medical Systems, Palo Alto, CA) equipped with a Millennium 120 MLC, with a source-axis distance (SAD) of 100 cm. Plans created by this TPP for MERT, MBRT and DYMBER collimate the electron beams with the photon MLC installed on the TrueBeam. Thus, neither a cut-out nor an applicator is used; instead, the patient is moved to a reduced source-to-surface distance (SSD) to reduce the in-air scattering of electrons.


Figure 1. (a) Treatment planning process used in this work to create treatment plans for various treatment techniques: IMRT, VMAT, MERT, MBRT and DYMBER. The third subprocess 'Beamlet dose calculation' is illustrated in more detail in (b). The inner loop iterating over all beamlets of a single field (orange background) is performed in parallel with as many threads as CPU cores available. Beamlet specific tasks are colored in magenta, tasks specific to a beamlet collection (static field or control point of a dynamic field) in purple and common tasks for the whole plan in dark red.


This TPP starts by importing the CT data into a research version of the Eclipse treatment planning system (TPS) (Varian Medical Systems, Palo Alto, CA) and contouring the structures, including the PTV, OARs and normal tissue. Afterwards, the fields are defined in the Eclipse TPS; this step is treatment technique dependent. All photon and electron fields to be delivered with intensity modulated step-and-shoot apertures from static beam directions (called static fields) are defined manually as single fields in the Eclipse TPS. Similarly, VMAT arcs are defined manually in the Eclipse TPS. To set up photon dynamic trajectories in the Eclipse TPS, the external path finding procedure described in Fix et al (2018) is used. In this work, conventional arcs and dynamic trajectories are called dynamic fields.

The next step is to calculate the beamlets for each beamlet collection (static field or control point of a dynamic field) within the framework of the Eclipse-interfaced Swiss Monte Carlo Plan (SMCP) (Fix et al 2007, Manser et al 2018). For the purpose of this work, the part of the SMCP managing beamlet calculations was re-implemented in C++ to fulfill the following three functional requirements steering the computational performance of beamlet calculation and plan optimization:

  • The user can specify the desired statistical uncertainty of the beamlets.
  • The user can specify a sparse dose threshold setting dose values below this threshold to zero.
  • The user can specify distances from voxels to the PTV surface from which voxels are merged to larger voxels.

These parameters are described in more detail in sections 2.1.1–2.1.3. The beamlet calculation is designed as a single multi-threaded process calculating all beamlets of all fields assigned to a treatment plan. This makes it possible to perform tasks common to all beamlet collections, and tasks common to all beamlets of a specific beamlet collection, only once, reducing computational overhead. The workflow is illustrated in figure 1(b).

First, the HU values of the CT data are transformed to physical density and material composition according to a dose scoring grid and a CT calibration ramp. Next, as many photon and electron dose calculation engines are prepared as there are CPU cores available. Each engine is an instance of the Voxel Monte Carlo (VMC++) (Kawrakow and Fippel 2000) algorithm for photons or the electron Macro Monte Carlo (eMC) (Neuenschwander and Born 1992, Neuenschwander et al 1995, Fix et al 2013) algorithm for electrons. Afterwards, the contours of the structures are loaded and it is identified which voxels are merged together.

A first loop follows, iterating over all beamlet collections of the fields assigned to the treatment plan, starting with the preparation of a beamlet collection. This includes the definition of the rectangular beamlet grid in the middle plane of the MLC. The beamlet grid is conformal to the PTV in beams eye view (static field) or to the smallest possible opening encompassing all control point specific conformal openings (dynamic field). An additional beam modality and SSD dependent margin is added to the beamlet grid size to account for the beam penumbra. The beamlet size is 0.5 × 0.5 cm2 for the inner 0.5 cm wide leaves and 0.5 × 1.0 cm2 for the outer 1.0 cm wide leaves, specified in the plane of the isocenter. The preparation of the beamlet collection also calculates the transformation of the beamlet collection from the beam coordinate system to the patient coordinate system. This is needed because the beamlet sources are patient-independent pre-calculated phase-spaces located at the treatment head exit plane and provided in the beam coordinate system, while the dose calculation engines simulate the particle transport in the patient coordinate system.

Then a second loop follows, iterating over all beamlets of the prepared beamlet collection in parallel using OpenMP with as many threads as CPU cores available. For each beamlet, first the number of particles to be simulated to reach the user-defined desired statistical uncertainty is estimated. Afterwards, the phase-space is loaded and a third loop (not explicitly depicted as a loop in figure 1(b)) iterating over the particles is executed using a prepared dose calculation engine. The engine transports the particles and scores the dose values in the dose scoring grid (original voxel grid). Afterwards, the sparse dose threshold is applied, and the dose is transformed to the prepared voxel merged format.
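The parallelization pattern of this second loop can be sketched as follows. This is a compilable toy example with hypothetical stand-ins (Beamlet, DoseEngine, processBeamlet) for the non-public SMCP types, collapsing the per-beamlet tasks into a single function:

```cpp
#include <omp.h>
#include <cstdio>
#include <vector>

// Toy stand-ins for the (non-public) SMCP types.
struct Beamlet { std::vector<float> dose; };
struct DoseEngine { /* one prepared engine instance per CPU core */ };

// Hypothetical per-beamlet work: estimate particle number, load phase-space,
// transport particles, apply sparse threshold, merge voxels.
void processBeamlet(Beamlet& b, DoseEngine& e) { (void)b; (void)e; }

int main() {
    const int cores = omp_get_max_threads();
    std::vector<DoseEngine> engines(cores);  // prepared once, reused by all beamlets
    std::vector<Beamlet> beamlets(400);      // beamlets of one collection

    // Dynamic scheduling balances the load, since particle numbers differ per beamlet.
    #pragma omp parallel for schedule(dynamic)
    for (int j = 0; j < static_cast<int>(beamlets.size()); ++j)
        processBeamlet(beamlets[j], engines[omp_get_thread_num()]);

    std::printf("processed %zu beamlets on %d threads\n", beamlets.size(), cores);
    return 0;
}
```

Using one pre-prepared engine per thread avoids repeated engine setup, and dynamic scheduling balances the per-beamlet workload, which varies with the estimated particle numbers.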

After beamlet calculation is completed, the treatment plan is optimized using a hybrid column generation (Romeijn et al 2005) and simulated annealing (Shepard et al 2002) direct aperture optimization (H-DAO) (Mueller et al 2022), resulting in a deliverable plan. The same H-DAO algorithm is applied for all treatment techniques considered in this work. The H-DAO considers mechanical constraints such as the movement ranges of the MLC leaves and also the transmission of photon beams through the MLC (Mueller et al 2022), while other impacts of the MLC, such as particle scattering, are not considered. Thus, a final MC dose calculation is performed within the SMCP framework, re-calculating the dose distribution of each static aperture or each control point, depending on the field type (static or dynamic field). For the final dose calculation of photon beams, the source is a pre-simulated phase-space located above the secondary collimator jaws, and the particles are simulated through the secondary collimator jaws and the MLC using VMC++. For the electron beams, the multiple source model ebm70 (Henzen et al 2014b) is used. The same algorithms, VMC++ (photon beams) and eMC (electron beams), are used for the final dose calculation in the patient as for the beamlet calculation. Finally, the monitor units (MUs) of the static apertures and control points are re-optimized using a limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm with the same objectives as used for the H-DAO optimization. The purpose of this re-optimization is to mitigate the degradation from the optimized to the final dose distribution.

2.1.1. Statistical uncertainty

The user can specify the desired statistical uncertainty $u_p$ of the beamlets separately for photon and electron beamlets, where the subscript $p$ stands for the particle type of the beam, $x$ (photon) or $e$ (electron). For each specific beamlet, the number of particles $n$ to be simulated to reach the desired statistical uncertainty is estimated as a function of the actual beamlet size $b$, the actual source-to-surface distance $SSD$ and the actual voxel volume $V$ (derived for cubic voxels only). For this purpose, pre-simulations of beamlets are performed to determine the number of particles $n_{\mathrm{ref},b}$ needed to reach $u_{\mathrm{ref}}$ = 1% statistical uncertainty under defined reference conditions (see table 1), where $b$ stands for either small (0.5 × 0.5 cm2) or large (0.5 × 1.0 cm2) beamlets. The reference conditions are a beam at $SSD_{\mathrm{ref},p}$ impinging perpendicularly on a 30 × 30 × 30 cm3 water phantom discretized into voxels of volume $V_{\mathrm{ref}}$ = 0.25 × 0.25 × 0.25 cm3, with $SSD_{\mathrm{ref},x}$ = 95 cm for photon beams and $SSD_{\mathrm{ref},e}$ = 80 cm for electron beams. The number of particles $n$ for a specific beamlet is then estimated according to:

$$n = n_{\mathrm{ref},b}\left(\frac{u_{\mathrm{ref}}}{u_p}\right)^{2}\left(\frac{SSD}{SSD_{\mathrm{ref},p}}\right)^{2}\left(\frac{V_{\mathrm{ref}}}{V}\right)^{2/3}\qquad\text{(1)}$$

Table 1. Number of particles needed to reach 1% statistical uncertainty with small 0.5 × 0.5 cm2 (middle column) and large 0.5 × 1.0 cm2 (right column) beamlets in the reference conditions. These conditions are an SSD of 95 cm for photon beams and 80 cm for electron beams with the beam perpendicular to a 30 × 30 × 30 cm3 water phantom with 0.25 × 0.25 × 0.25 cm3 voxel size.

Beam | Number of particles $n_{\mathrm{ref},\mathrm{small}}$ | Number of particles $n_{\mathrm{ref},\mathrm{large}}$
6X | 102 000 | 183 000
6E | 4 592 000 | 4 455 000
9E | 2 840 000 | 2 457 000
12E | 2 439 000 | 1 894 000
15E | 2 535 000 | 1 895 000
18E | 3 073 000 | 2 288 000
22E | 3 349 000 | 2 278 000

The quadratic correction for the statistical uncertainty is motivated by the history-by-history approach, in which the statistics are taken over the number of independent histories (equal to $n$, as all sampled particles here are independent), so the uncertainty scales as $1/\sqrt{n}$. The quadratic correction for the SSD is motivated by the inverse square law, and the correction for the voxel size by the change in the number of histories impinging on a cubic voxel, which increases linearly with the area of the voxel surface orthogonal to the beam direction, i.e. with $V^{2/3}$.
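As a concrete illustration, a minimal sketch of this estimate is given below, assuming equation (1) exactly as reconstructed above; function and variable names are illustrative, and the photon reference SSD is used (the electron case only swaps in $SSD_{\mathrm{ref},e}$ = 80 cm):

```cpp
#include <cmath>

// Sketch (not the SMCP implementation) of the particle-number estimate in
// equation (1): quadratic scaling in statistical uncertainty and SSD, and a
// V^(2/3) scaling reflecting the beam-facing area of a cubic voxel.
long long estimateParticleNumber(long long n_ref_b,  // reference particles for beamlet size b (table 1)
                                 double u_p,         // desired uncertainty, e.g. 0.05 for 5%
                                 double ssd_cm,      // actual SSD in cm
                                 double voxel_cm3)   // actual (cubic) voxel volume in cm^3
{
    const double u_ref   = 0.01;                 // 1% reference uncertainty
    const double ssd_ref = 95.0;                 // photon reference SSD in cm
    const double v_ref   = 0.25 * 0.25 * 0.25;   // reference voxel volume in cm^3

    const double n = n_ref_b
        * std::pow(u_ref / u_p, 2.0)             // fewer particles for larger u_p
        * std::pow(ssd_cm / ssd_ref, 2.0)        // inverse square law
        * std::pow(v_ref / voxel_cm3, 2.0 / 3.0);// voxel cross-section area
    return std::llround(n);
}
```

For example, a small 6X beamlet at the reference SSD and voxel size with $u_x$ = 5% would require 102 000 · (1/5)2 ≈ 4100 particles.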

2.1.2. Sparse format

Storing the full beamlet data is very memory-inefficient, because many voxels receive zero or very low dose. The beamlet data can therefore be reduced enormously by neglecting dose values below a certain sparse dose threshold. In the implemented beamlet calculation framework, the user can specify a fractional sparse dose threshold t (e.g. 1%), which determines an absolute sparse dose threshold dt in cGy/MU according to the following formula

$$d_t = t\cdot d_{\mathrm{max}}\qquad\text{(2)}$$

where $d_{\mathrm{max}}$ is the maximal dose value of beamlet j over all voxels in the whole body. Dose values of beamlet j in voxel i below this threshold (dij < dt) are then set to zero. The optimization does not consider any dij = 0.0, leading to reduced beamlet data and reduced optimization time.
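A minimal sketch of how this sparse reduction might be realized is given below; the (voxel index, value) pair layout is an illustrative assumption rather than the storage format of the SMCP framework:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the sparse reduction of equation (2): dose values below
// d_t = t * d_max are dropped, the remainder is kept as index/value pairs.
struct SparseBeamlet {
    std::vector<std::uint32_t> voxel;  // voxel indices i with d_ij >= d_t
    std::vector<float>         dose;   // corresponding dose values in cGy/MU
};

SparseBeamlet toSparse(const std::vector<float>& dense, float t) {
    float dmax = 0.0f;
    for (float d : dense) dmax = std::max(dmax, d);
    const float dt = t * dmax;                    // absolute threshold, equation (2)

    SparseBeamlet s;
    for (std::size_t i = 0; i < dense.size(); ++i)
        if (dense[i] >= dt && dense[i] > 0.0f) {
            s.voxel.push_back(static_cast<std::uint32_t>(i));
            s.dose.push_back(dense[i]);
        }
    return s;
}
```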

2.1.3. Voxel merging

A larger voxel size leads to less beamlet data but also to less accurate beamlets. A promising compromise is to increase the voxel size further away from the PTV, because a small voxel size matters most inside and close to the PTV. This concept of distance-dependent voxel merging is implemented in the beamlet calculation framework. The user specifies two distances dm and dl. First, all original voxels are grouped into large voxels of 4 × 4 × 4 = 64 original voxels. Large voxels whose distance, determined from their center to the PTV surface, is larger than dl are kept as large voxels. All others are re-grouped into medium voxels of 2 × 2 × 2 = 8 original voxels. Medium voxels with a distance to the PTV surface larger than dm are kept as medium voxels, and all others as original voxels. There are two exceptions: the original voxel size is always used for structures smaller than 500 original voxels, so that small structures like the chiasm or the lenses are represented by a reasonably large number of voxels, and voxels overlapping the PTV contours are always of original voxel size. The concept of voxel merging is illustrated in figure 2.
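The distance rule itself is compact. The following sketch assigns a merge level per voxel block, assuming the distance from the block center to the PTV surface is precomputed and omitting the small-structure and PTV-overlap exceptions described above:

```cpp
// Sketch of the distance-dependent merge-level assignment.
enum class VoxelLevel { Original, Medium, Large };

VoxelLevel mergeLevel(double distToPTVSurface, double d_m, double d_l) {
    if (distToPTVSurface > d_l) return VoxelLevel::Large;   // 4 x 4 x 4 = 64 voxels
    if (distToPTVSurface > d_m) return VoxelLevel::Medium;  // 2 x 2 x 2 = 8 voxels
    return VoxelLevel::Original;
}
```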


Figure 2. Illustration of the voxel merging approach implemented and investigated in this work on a two-dimensional plane of the CT. There are three sizes of voxels with the smallest being the original voxel size for beamlet calculation. The size is determined depending on the distance from the target to be treated.


The plan optimization accounts for the different voxel sizes and for the fact that a voxel of any size may only partly overlap with a structure. During optimization, each voxel is weighted by its volumetric overlap with a specific structure for the calculation of the objective function (Mueller et al 2022). Furthermore, any dosimetric evaluation during optimization (e.g. mean dose to a structure) also considers these volumetric overlaps of voxels and structures.
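As a minimal sketch of this overlap weighting, using hypothetical names rather than the H-DAO API, a volume-weighted mean dose could be computed as follows:

```cpp
#include <vector>

// Each (possibly merged) voxel contributes to a structure's statistics with
// the volume of its overlap with that structure.
struct OverlapVoxel {
    double dose;     // current dose in the (possibly merged) voxel
    double overlap;  // overlap volume v_ks with the structure, in cm^3
};

// Volume-weighted mean dose over original and merged voxels alike.
double meanDose(const std::vector<OverlapVoxel>& vox) {
    double num = 0.0, den = 0.0;
    for (const auto& v : vox) { num += v.dose * v.overlap; den += v.overlap; }
    return den > 0.0 ? num / den : 0.0;
}
```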

2.2. Parameter study

The compromise between final treatment plan quality and computational efficiency is studied for several values of the parameters listed in table 2. Each parameter is tested separately by running the whole TPP with the different parameter values listed in table 2, while all other parameters keep the most conservative value listed in table 2 (e.g. when dm/dl is tested for several values, ux, ue and t always have the values 2.5%, 5% and 0.1%, respectively). All beamlet calculations and optimizations are performed on a single Intel Broadwell CPU with 2 × 10 cores.

Table 2. The parameters and studied values for which the dependency of several evaluation criteria, describing final treatment plan quality and computational efficiency, is investigated in this work. '-/-' under parameter dm/dl means that no voxel merging is applied.

Parameter | Description | Studied values
ux | Desired statistical uncertainty of photon beamlets | 2.5%, 5%, 7.5%, 10%, 15%, 20%, 30%
ue | Desired statistical uncertainty of electron beamlets | 5%, 7.5%, 10%, 15%, 20%, 30%, 50%
t | Fractional sparse dose threshold | 0.0%, 0.1%, 0.25%, 0.5%, 1%, 2.5%, 5%
dm/dl (cm/cm) | Minimal distances for medium/large voxels from the PTV | -/-, 8/16, 6/12, 4/8, 2/4, 1/2, 0/0

This parameter study is performed for five different treatment techniques, each planned for a different clinical or academic case of a different treatment site, as listed in table 3. The original voxel size is always 0.25 × 0.25 × 0.25 cm3. The academic case consists of a 30 × 30 × 30 cm3 water phantom including contours of a wedge-shaped PTV and a single OAR surrounding the PTV. The PTV is located 0.5 cm below the body surface and the maximal depth of the PTV from the closest body surface is 5.5 cm.

Table 3. The clinical and academic cases used for the investigations in this work. x and e stand for photons and electrons, respectively. The static beam directions are used for intensity modulated step-and-shoot fields with a user-defined number of apertures. Each static beam direction for electrons is prepared for six different beam energies (6, 9, 12, 15, 18 and 22 MeV).

Treatment site | Treatment technique | Prescribed median dose in the PTV | Static beam directions (x/e) | Static apertures | X arcs | X dynamic trajectories
Prostate | IMRT | 39 Gy = 13 fx × 3 Gy fx⁻¹ | 5/- | 50 | - | -
Lung | VMAT | 57 Gy = 19 fx × 3 Gy fx⁻¹ | -/- | - | 2 | -
Academic | MERT | 50 Gy = 25 fx × 2 Gy fx⁻¹ | -/1 | 25 | - | -
Breast | MBRT | 50 Gy = 25 fx × 2 Gy fx⁻¹ | 6/3 | 50 | - | -
Brain | DYMBER | 60 Gy = 30 fx × 2 Gy fx⁻¹ | -/2 | 25 | - | 2

The objective function, which is minimized by the H-DAO and by the MU re-optimization after final dose calculation, always consists of the same objectives for each case: a max-dose and a min-dose objective for the PTV, a normal tissue objective penalizing dose values depending on their distance from the PTV, and one generalized equivalent uniform dose (gEUD) (Niemierko 1999) objective for each OAR. The full formalism of the objective function value is described in Mueller et al (2022). The value of the tissue-specific factor used in the gEUD formula is selected per OAR based on the work of Luxton et al (2008). The weights and dose or gEUD values of all the objectives are identified by iterative adjustments by the human planner.

The dependency of the final treatment plan quality on the single parameter values is evaluated with the objective function value, the dose homogeneity index HI in the PTV, the gEUD value averaged over all OARs, ${\overline{gEUD}}_{OARs}$, using the same tissue-specific factors as used for the objectives, and the mean dose ${\bar{D}}_{NT}$ in normal tissue. All these quantities are evaluated after MU re-optimization of the final dose distribution, which always has a statistical uncertainty of 1% or lower. Furthermore, neither the sparse format nor voxel merging is ever applied to the final dose distribution, and its original voxel resolution is the same as for the beamlet calculation.

HI is expressed by

$$HI = \frac{D_{2\%} - D_{98\%}}{D_p}\qquad\text{(3)}$$

where $D_p$ is the prescribed dose, and $D_{2\%}$ and $D_{98\%}$ are the doses received by at least 2% and 98% of the PTV, respectively. ${\overline{gEUD}}_{OARs}$ is expressed by

$$\overline{gEUD}_{OARs} = \frac{1}{N_{str}}\sum_{s=1}^{N_{str}} gEUD(s),\qquad gEUD(s) = \left(\frac{1}{V_s}\sum_{k=1}^{M_s} v_{k,s}\,D_k^{\,a_s}\right)^{1/a_s}\qquad\text{(4)}$$

where $gEUD(s)$ is the gEUD value of OAR $s$ out of $N_{str}$ OARs in total, $a_s$ is its tissue-specific factor, $V_s$ is the volume and $M_s$ the number of voxels of OAR $s$, $k$ is a voxel index with volume $v_{k,s}$ overlapping with OAR $s$, and $D_k$ is the dose of the plan in voxel $k$.
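A sketch of equations (3) and (4) in code form is given below; the percentile lookup and the normalization by $V_s$ are illustrative assumptions, not the authors' implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

// Equation (3): PTV voxels are always of original (equal) size, so a plain
// descending sort with nearest-index percentile lookup is sufficient here.
double homogeneityIndex(std::vector<double> ptvDose, double Dp) {
    std::sort(ptvDose.begin(), ptvDose.end(), std::greater<double>());
    const double D2  = ptvDose[static_cast<std::size_t>(0.02 * ptvDose.size())];
    const double D98 = ptvDose[static_cast<std::size_t>(0.98 * ptvDose.size())];
    return (D2 - D98) / Dp;
}

// Per-OAR term of equation (4), with overlap-volume weights v_ks.
double gEUD(const std::vector<double>& dose,      // D_k per voxel overlapping OAR s
            const std::vector<double>& overlap,   // overlap volumes v_ks
            double a_s)                           // tissue-specific factor of OAR s
{
    double Vs = 0.0, acc = 0.0;
    for (std::size_t k = 0; k < dose.size(); ++k) {
        Vs  += overlap[k];
        acc += overlap[k] * std::pow(dose[k], a_s);
    }
    return std::pow(acc / Vs, 1.0 / a_s);
}
```

${\overline{gEUD}}_{OARs}$ is then simply the arithmetic mean of $gEUD(s)$ over the $N_{str}$ OARs.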

The dependency of the computational efficiency on the single parameter values is evaluated by the photon and electron beamlet computation times as well as the optimization time and memory usage of the H-DAO.

Additionally, the electron contribution is evaluated for the mixed-beam plans. It is calculated as the mean dose to the PTV delivered by the electron beams divided by the mean dose to the PTV of the whole plan.

2.3. Additional parameter value selection

One more treatment plan (the selection plan) is created on the same computer hardware for each case listed in table 3 with the following combined selection of parameter values: ux = 5%, ue = 15%, t = 0.1%, dm = 1 cm, dl = 2 cm. This plan is compared to the conservative plan using the most conservative values under consideration: ux = 2.5%, ue = 5%, t = 0.0% and no voxel merging. The parameter values of the selection plan aim at a reasonable compromise between degradation in final plan quality and computational efficiency. They are motivated by the results of the parameter study described in section 2.2.

3. Results

3.1. Parameter study

The dependencies of the evaluation quantities on the values of the investigated parameters, namely the statistical uncertainty of the photon beamlets and of the electron beamlets, the fractional sparse dose threshold and the distances for the voxel merging, are shown in figures 3–6, respectively.


Figure 3. The dependency of each evaluated quantity on the statistical uncertainty of the photon beamlets is shown in one of the eight subplots. All values are relative to the result of the plan using the most conservative parameter values. The connecting lines serve only for visual guidance and the legend at the bottom applies to all subplots.


Figure 4. The dependency of each evaluated quantity on the statistical uncertainty of the electron beamlets is shown in one of the eight subplots. All values are relative to the result of the conservative plan using the most conservative parameter values. The connecting lines serve only for visual guidance and the legend at the bottom applies to all subplots.


Figure 5. The dependency of each evaluated quantity on the fractional sparse dose threshold is shown in one of the eight subplots. All values are relative to the result of the conservative plan using the most conservative parameter values. The connecting lines serve only for visual guidance and the legend at the bottom applies to all subplots.


Figure 6. The dependency of each evaluated quantity on the voxel merging distances is shown in one of the eight subplots. All values are relative to the result of the conservative plan using the most conservative parameter values. The minimal distances for medium and large merged voxels are stated as a pair on the x-axis of each plot. The connecting lines serve only for visual guidance and the legend at the bottom applies to all subplots.


For each investigated case with photons, the statistical uncertainty of the photon beamlets has a large impact on the final treatment plan quality, as shown in figure 3. However, the degradation is low for 5% compared to 2.5% statistical uncertainty. The two cases studied with mixed beams are less affected; the increased electron contribution might explain the smaller impact, because electron beams can substitute for photon beams to a certain degree. Beamlet computation time for the photon beamlets is substantially reduced by increasing the statistical uncertainty from 2.5% up to 7.5%, but for higher statistical uncertainties the computation time converges quickly. More surprisingly, the optimization memory usage and optimization time are also substantially reduced with higher statistical uncertainty of the photon beamlets, except for the breast case, where these evaluations are probably dominated by the electron beamlets.

The statistical uncertainty of the electron beamlets has a smaller impact on the final treatment plan quality than that of the photon beamlets, as shown in figure 4. However, small discrepancies are still visible, especially in the normal tissue for the brain case, caused by the increased photon contribution. As for the photon beamlets, beamlet calculation time, optimization time and optimization memory usage are substantially reduced by increasing the statistical uncertainty of the electron beamlets.

All of the investigated fractional sparse dose thresholds lead to degradations in final treatment plan quality, especially in the sparing of OARs and normal tissue, as shown in figure 5. As expected, the beamlet calculation time is not substantially influenced, but the optimization time is reduced by at least 30% by any of the investigated fractional sparse dose thresholds. The memory consumption of the optimization is also substantially reduced, by at least 70%, except for the prostate case with only about 20%. However, increasing the fractional sparse dose threshold beyond 0.1% leads to only minor further reductions in optimization time and memory usage.

Surprisingly, none of the investigated voxel merging distances leads to substantial degradation in final plan quality (see figure 6). As expected, beamlet calculation time is also not substantially affected, but optimization time and memory usage are enormously reduced with every shortening of the voxel merging distances. These observations are consistent over all cases.

3.2. Additional parameter value selection

The evaluation quantities of the conservative and selection plans are compared in table 4, and DVHs of the two plans are compared in figure 7. According to the DVH comparison, dose homogeneity is nearly identical between the conservative and the selection plan for all cases, while the sparing of the OARs is slightly worse for the selection plan in some cases (${\overline{gEUD}}_{OARs}$ is at most 0.7 Gy higher). ${\bar{D}}_{NT}$ is similar between the conservative and the selection plan for all cases, except for the brain case, for which the selection plan has a 0.4 Gy higher value.

Table 4. Evaluated quantities of the conservative plan with the most conservative parameter values and the selection plan with selected parameter values to achieve a reasonable compromise between degradation in final plan quality and computational efficiency. The last column depicts the relative change from conservative to selection plan.

 | Conservative plan | Selection plan | Relative change
Prostate, IMRT | | |
Objective function value | 0.068 | 0.079 | +15.4%
HI | 0.087 | 0.094 | +8.2%
${\overline{gEUD}}_{OARs}$ | 27.4 Gy | 27.6 Gy | +0.8%
${\bar{D}}_{NT}$ | 1.4 Gy | 1.4 Gy | +1.0%
Photon beamlet calc. time | 0.4 min | 0.2 min | −52.1%
Optimization time | 19.3 min | 0.7 min | −96.4%
Optimization memory | 14.8 GB | 0.4 GB | −97.3%
Lung, VMAT | | |
Objective function value | 0.136 | 0.150 | +9.8%
HI | 0.088 | 0.082 | −6.8%
${\overline{gEUD}}_{OARs}$ | 4.4 Gy | 5.1 Gy | +17.4%
${\bar{D}}_{NT}$ | 3.6 Gy | 3.7 Gy | +2.7%
Photon beamlet calc. time | 20.5 min | 8.6 min | −58.1%
Optimization time | 345.1 min | 10.8 min | −96.9%
Optimization memory | 20.7 GB | 0.6 GB | −97.7%
Academic, MERT | | |
Objective function value | 0.271 | 0.276 | +1.7%
HI | 0.164 | 0.167 | +1.8%
${\overline{gEUD}}_{OARs}$ | 24.4 Gy | 24.3 Gy | −0.4%
${\bar{D}}_{NT}$ | 1.2 Gy | 1.2 Gy | +0.3%
Electron beamlet calc. time | 36.2 min | 4.5 min | −87.4%
Optimization time | 9.6 min | 0.3 min | −97.3%
Optimization memory | 41.2 GB | 0.8 GB | −98.0%
Breast, MBRT | | |
Objective function value | 0.252 | 0.260 | +3.4%
HI | 0.125 | 0.122 | −2.4%
${\overline{gEUD}}_{OARs}$ | 10.6 Gy | 10.7 Gy | +1.4%
${\bar{D}}_{NT}$ | 5.1 Gy | 5.2 Gy | +2.0%
Photon beamlet calc. time | 3.1 min | 1.3 min | −56.8%
Electron beamlet calc. time | 1120.1 min | 131.3 min | −88.3%
Optimization time | 47.8 min | 3.8 min | −92.0%
Optimization memory | 238.2 GB | 10.3 GB | −95.7%
Electron contribution | 18.8% | 20.2% | +7.3%
Brain, DYMBER | | |
Objective function value | 0.069 | 0.074 | +6.6%
HI | 0.086 | 0.086 | 0.0%
${\overline{gEUD}}_{OARs}$ | 18.0 Gy | 17.9 Gy | −0.7%
${\bar{D}}_{NT}$ | 7.0 Gy | 7.4 Gy | +5.7%
Photon beamlet calc. time | 8.4 min | 3.3 min | −60.7%
Electron beamlet calc. time | 151.6 min | 17.4 min | −88.5%
Optimization time | 98.6 min | 11.2 min | −88.7%
Optimization memory | 803.7 GB | 38.2 GB | −95.2%
Electron contribution | 39.7% | 26.9% | −32.2%

Figure 7. DVH comparisons, for each investigated case, between the conservative plan with the most conservative parameter values and the selection plan with parameter values selected to achieve a reasonable compromise between degradation in final plan quality and computational efficiency.


Performance improvements from the conservative to the selection plans are substantial and consistent over all investigated cases. The absolute numbers vary enormously between the different cases because of the different numbers of fields. In more detail, computation time is reduced by about 52% to 60% for photon beamlets and by about 88% for electron beamlets. For the selection plans, beamlet calculation time is between 0.2 and 8.6 min for photons and between 4.5 and 131.3 min for electrons. Optimization time and memory consumption are reduced even more for the selection plans, by about 88% to 97% and by about 95% to 98%, respectively. In absolute units, optimization time and memory consumption are 0.3 to 11.2 min and 0.4 to 38.2 GB, respectively.

4. Discussion

An MC beamlet calculation framework processing beamlets in parallel is successfully implemented, including the options to specify a desired statistical uncertainty for photon and electron beamlets, a fractional sparse dose threshold and minimal distances between voxels and the PTV surface from which voxels are merged. These settings have a substantial influence on the computational efficiency of the beamlet calculation itself, the plan optimization time, the optimization memory usage and the final plan quality. However, the parameter study in this work shows that it is possible to reduce computation time enormously without substantial degradation in final plan quality compared to using no sparse reduction or voxel merging and a conservatively low statistical uncertainty. The settings for the selection plans are demonstrated to fit many different treatment techniques. Furthermore, these parameters could be tuned specifically for treatment techniques or optimization strategies with a different end goal. An example is beam angle optimization, where the primary goal is not plan quality but finding a suitable set of beam directions; for this purpose, less conservative parameter values might be suitable.

The degradation in final plan quality when using 5% statistical uncertainty for photon beamlets is considered acceptable, and a higher statistical uncertainty does not lead to substantial improvements in computation time. This appears surprising, as substantially fewer particles need to be simulated for a higher statistical uncertainty. However, for such a low number of particles, the MC transport is no longer the most time consuming task; rather, it is the transformation of the dose into the sparse voxel-merged format, which also includes the change of unit from deposited energy per simulated particle to cGy/MU. For electron beamlets, a substantially higher statistical uncertainty of 15% seems acceptable. However, the number of particles to be simulated to reach 15% statistical uncertainty is still more than double the number needed to reach 5% for photon beamlets (see table 1). The reason the electron beamlets need many more particles than the photon beamlets to reach the same statistical uncertainty is their dose distribution, which spreads much more laterally to the beam direction. Thus, electrons generally deposit a lot of their energy outside of the high-dose volume (dose larger than 50% of the maximal dose) used to calculate the mean statistical uncertainty. However, when electron beamlets are added together during optimization, the statistical uncertainty of the summed dose distribution is lowered more drastically than for photon beamlets. A technique enabling a further increase of the statistical uncertainty is denoising, e.g. with a median filter or deep convolutional neural networks (Neph et al 2021). Denoising seems promising especially for electron beamlets, because the dose distribution of an electron beamlet is comparably smooth in all directions, i.e. there is no change of the dose within a short distance as there is for photon beamlets with their sharp penumbra.
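As a simple baseline illustration of such a denoising step (a plain median filter, not the deep-learning approach of Neph et al 2021), one could filter a dense beamlet dose grid as follows:

```cpp
#include <algorithm>
#include <vector>

// Illustrative 3x3x3 median filter over a dense dose grid of size
// nx*ny*nz, indexed as x + nx*(y + ny*z). Borders are left unfiltered
// for brevity.
std::vector<float> medianFilter3x3x3(const std::vector<float>& d,
                                     int nx, int ny, int nz) {
    std::vector<float> out = d;
    std::vector<float> window;
    window.reserve(27);
    for (int z = 1; z + 1 < nz; ++z)
        for (int y = 1; y + 1 < ny; ++y)
            for (int x = 1; x + 1 < nx; ++x) {
                window.clear();
                for (int dz = -1; dz <= 1; ++dz)
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            window.push_back(
                                d[(x + dx) + nx * ((y + dy) + ny * (z + dz))]);
                std::nth_element(window.begin(), window.begin() + 13, window.end());
                out[x + nx * (y + ny * z)] = window[13];  // median of 27 values
            }
    return out;
}
```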

The results showed that a higher statistical uncertainty leads to less beamlet data and therefore reduced optimization memory usage. This is explained by the fact that MC calculated beamlets account for secondary particles, leading to small dose values in a large volume of the patient, and that this volume grows with the number of simulated particles.

Summing beamlets together leads to a dose stacking effect, which is substantially distorted when a high sparse dose threshold is applied. Dosimetric characteristics of summed beamlets, such as the range in the depth dose curve, can change drastically. Hence, the sparse reduction is a critical method. The PTV is generally less sensitive than OARs and normal tissue (see figure 5), because in the PTV the beamlets have rather high dose values. Nevertheless, even a sparse format with a low fractional sparse dose threshold decreases beamlet data, and therefore optimization memory consumption, drastically. This is prominent for MC calculated beamlets with their small dose values in a large volume of the patient. Voxel merging, on the other hand, had nearly no impact on final treatment plan quality but a very large influence on the optimization time. However, this might also be case specific, because depending on the case, OARs have different sizes and especially different distances to the PTV. The results of the voxel merging are so promising that it might even be worth introducing a third level of merged voxel size (e.g. 8 × 8 × 8 = 512 voxels).

5. Conclusion

This work successfully demonstrates several approaches improving the computational efficiency of beamlet calculation and plan optimization of a fully MC based TPP without substantial degradation in final plan quality. The absolute computation times shown in this work indicate that such a TPP with highly accurate dose calculations is clinically feasible. This eases the introduction of fully MC based TPPs and of novel treatment techniques and optimization strategies into the clinical workflow.

Acknowledgments

This work was partly supported by Grant 200021_185366 of the Swiss National Science Foundation and partly by Varian Medical Systems. Calculations were performed on UBELIX (http://www.id.unibe.ch/hpc), the HPC cluster at the University of Bern.

Ethical statement

This work is part of a retrospective study with further use of health related data (Swiss Human Research Act—HRO). The study was approved by the cantonal ethics committee of Bern (project ID: 2019-01415).
