Active noise control at low frequencies for outdoor live music events using the conjugate gradient least square method

Sound field control can be applied to the problem of reducing noise emissions from outdoor live music events. One method employed in this type of application is pressure matching. Different approaches can be used to find a solution to this problem. Many of these methods can provide a reduction of more than 10 dB in the frequency range of a subwoofer, between 30 and 120 Hz, thus reducing the loudness to half the original. Such a performance is adequate, but it comes with drawbacks and/or practical limitations such as side lobes that can create new problems in new areas, computational cost, difficult parameter selection, etc. The method proposed here uses the conjugate gradient least squares algorithm to compute a solution while providing an easier way to find a suitable regularization and, at the same time, controlling the radiation pattern of the solution to reduce the possibility of side lobes. In addition, the use of an active set-type method allows the inclusion of explicit constraints on the amplitude of the solution to avoid amplification and non-linear behavior of the transducers. After introducing the theory, the performance is compared to other more established methods through simulations and outdoor measurements performed at a 2:1 scale to show the properties and practical aspects of the proposed method. These experiments show that an insertion loss of 10 dB is achieved over a broad frequency range, with peaks larger than 20 dB. We investigate the difference in performance between the different methods when using simulated versus measured transfer functions to derive the filters. We also analyze the numerical properties of the solutions provided by the different methods and relate them to the spatial properties of the corresponding sound fields. Furthermore, we present a convergence study to evaluate the effect that grids of different resolutions used in the simulations have on the insertion loss for different degrees of regularization.
Finally, we also present a sensitivity analysis of the proposed method to uncertainties in the speed of sound and show how the regularization directly affects the robustness of the method against such inaccuracies. © 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


Introduction
The effects that noise has on the health and well-being of individuals are clearer now than ever [1]. Legislation is also following suit and becoming more and more restrictive [2]. Outdoor live events present a special case of noise source that comes with particular challenges and characteristics. Sound at these events usually contains a strong low-frequency component (30 to 120 Hz [3,4]) that can travel over large distances with minimal attenuation from the atmosphere [5]. At the same time, these components are integral to the experience of the audience and cannot simply be turned down [6]. In recent years, the use of active noise control systems has been studied to weaken the noise emissions from live events using additional control sources placed behind the audience [7,8]. These methods apply ad hoc filters to the control sources to create an anti-field that, through destructive interference, reduces the noise emissions generated by the sources at the stage in a given area, usually referred to as the dark zone. A common way to derive these filters is by solving a pressure matching problem, which consists of minimizing $\|Hq + p\|_2^2$, where $H \in \mathbb{C}^{M \times N}$ is the transfer function matrix between $M$ receivers in the dark zone and $N$ control sources, $q \in \mathbb{C}^N$ is the vector with the complex amplitude coefficients for the $N$ control sources and, finally, $p \in \mathbb{C}^M$ is the vector of the primary pressure field at the $M$ receivers in the dark zone. This approach was first introduced in [9], which also provides a study of the influence of the secondary (control) array on the sound field in the audience area using a double layer of control sources or a single layer of cardioid subwoofers.
The performance of such a system was further investigated in [8], which presents results from different practical applications with different degrees of complexity: the presence of buildings and large time frames with changing weather conditions, which affect the transfer functions and thus the final result [10]. In both these works, the primary sound field p and the secondary transfer functions H necessary to compute the filters are measured. An alternative is provided in [7], where the transfer functions are modelled using a spherical harmonic expansion. The coefficient of each mode and the effective speed of sound, related to wind speed and temperature at the ground, are cast as stochastic variables. The propagation model is then combined with measured data to find the best estimates for such variables. With this approach the results generalize better than when using only measurements. Furthermore, it drastically reduces the number of measurements needed to compute the filters.
One of the challenges in this type of application is to avoid the sound field generated by the control sources spilling into the audience area; the control system should go undetected by the audience so as not to affect their experience. To do so, it is possible to use a double array: one array to reduce the level in the dark zone and one array to limit the effects in the audience area [11]. This approach was first applied to this problem in [12], where the problem is reformulated as a double-objective minimization problem allowing control of the balance between the reduction of the pressure levels in the dark zone and the spill of the control sources into the audience area. However, [9] showed that it is sufficient to use a single layer if the subwoofers have a cardioid directivity pattern.
The method should be computationally efficient and produce a solution for all frequencies in a short time: the weather conditions change with time and affect sound propagation, thus modifying the transfer functions [10]. A solution is most effective when the actual weather conditions match the ones encountered when the transfer functions were measured or simulated [7]. An additional challenge is to avoid high-gain filters that produce non-linearities in the loudspeakers, thus leading to distortion. The system here is assumed to be linear, and any deviation from this assumption will negatively impact the performance. In general, this is less of a problem if the secondary sources are the same as the primary sources and/or if the distance between them is large. A thorough discussion of this aspect is beyond the scope of this paper.
The pressure matching problem can be formulated and solved in different ways, each with its pros and cons.

Least squares with Tikhonov regularization: This method is computationally efficient [13] and has been used effectively in this type of application [7]. The main drawback is that the only way to avoid filters with high gain, or to control the radiation pattern of the secondary array, is through the regularization parameter. Automatic search methods like the l-curve [14], generalized cross validation (GCV) [15] or the normalized cumulative periodogram (NCP) [16] can find the best compromise between accuracy and amplitude of the solution. The filters generated by this compromise might still have a gain too large for this type of application and/or an undesirable radiation pattern. This makes it necessary, at least for this type of application, to manually adjust the gain of the solutions obtained from these methods or to manually search for the best regularization parameter. Furthermore, the regularization parameter is also the only way to control the radiation pattern of the secondary array which, if not taken into account, can lead to side lobes that can potentially increase noise emissions outside of the dark zone.
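As a point of reference, the Tikhonov-regularized pressure matching solution described above can be written in closed form as $q = -(H^H H + \lambda^2 I)^{-1} H^H p$. The following Python sketch (our illustration, not code from the paper; standard library only, toy-sized complex matrices stored as lists of rows) computes it directly:

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small complex system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tikhonov_filters(H, p, lam):
    """Pressure matching filters q = -(H^H H + lam^2 I)^{-1} H^H p,
    where lam is the Tikhonov regularization parameter."""
    Mrows, N = len(H), len(H[0])
    HhH = [[sum(H[m][i].conjugate() * H[m][j] for m in range(Mrows))
            + (lam ** 2 if i == j else 0) for j in range(N)] for i in range(N)]
    Hhp = [sum(H[m][i].conjugate() * p[m] for m in range(Mrows)) for i in range(N)]
    return solve(HhH, [-v for v in Hhp])
```

For realistic array sizes one would of course use a linear algebra library and reuse factorizations across frequencies; the sketch only illustrates the role of the single parameter $\lambda$, which here controls both the filter gain and, indirectly, the radiation pattern.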
Convex optimization: This method allows the inclusion of explicit constraints on the amplitude of the solutions [17]. This restricts the search for a solution to a feasible set that does not violate the constraints. The constraints are therefore applied from the beginning and not after finding a solution, which makes it potentially the most accurate option. The main drawbacks are its computational cost, the difficulty in controlling the radiation pattern and, when there are many control sources, the possibility that no feasible solution is found. In this case, the constraints on the amplitude of the solution have to be relaxed. A new constraint can then be defined over the array effort, but it does not strictly avoid high gains in some of the sources in the array.

Subspace/projection methods: This family of methods uses a lower-dimensional representation of H and then projects p onto it to find the solution [18]. The lower-dimensional representation is obtained through a decomposition of H using either the singular value decomposition, eigenvalue decomposition or principal component analysis. A subset of the basis vectors is then used to compute the solution. The main advantage of this method is the potential to control the radiation pattern by selecting basis vectors with the desired spatial properties. The drawbacks come from the need to perform the decomposition at each frequency, which can be computationally expensive for large-scale problems and is likely to introduce audible spectral artifacts, the lack of amplitude constraints on the solution, and the basis vector selection, which can be hard to automate and time consuming to perform manually.
In this paper we propose a method based on the conjugate gradient least squares (CGLS) algorithm as an alternative that can provide similar performance to the aforementioned approaches while avoiding some of their drawbacks, making it better suited to a practical application.
The paper is organised as follows. In Section 2 we show how the drawbacks of the subspace/projection methods can be avoided using the Krylov subspace, which can be efficiently accessed using the conjugate gradient least squares algorithm. A similar approach was used in [19], where the conjugate gradient (CG) method was used to find a basis for an acoustic contrast application as a less expensive alternative to other methods. It offered a large reduction in complexity at the cost of a reduction in performance. In this case, we use the conjugate gradient least squares algorithm to directly compute the solution at each frequency without necessarily worsening the performance. We also show a way to include explicit amplitude constraints without incurring the computational cost associated with a convex optimization problem, using an active set-type method. We also provide an alternative to the selection of a regularization parameter using problem-specific stopping criteria that can be easily implemented given the iterative nature of the algorithm. The use of stopping criteria additionally helps to control the radiation pattern, as explained in detail in Section 4.1. The theory presented here is then validated through measurements in Section 3, where we employed filters obtained using both measured and simulated transfer functions H and primary field p. Here, the results obtained from the proposed method are also compared to the ones obtained with Tikhonov-regularized least squares using automatic search methods and with constrained convex optimization. The methods are compared in terms of the insertion loss they provide, the spatial properties of the corresponding secondary sound fields, including the connection between spatial and numerical properties of the method, and their computation time.
In addition, we provide an analysis of the robustness of the method to uncertainties in the speed of sound in Section 4.2 and a convergence analysis using grids of receivers with different resolutions in Section 4.3.

Krylov subspace and CGLS
The use of subspace/projection methods is interesting for this application since they allow the selection of basis vectors with desirable spatial properties to control the radiation pattern of the secondary array. Thus, with these methods it is possible to limit radiation outside the dark zone. The drawback of having to perform a decomposition at each frequency can be mitigated by providing a set of basis vectors such as the ones in a discrete cosine transform (DCT), discrete Fourier transform (DFT), a plane wave decomposition (PWD) or a random matrix instead of computing the singular value decomposition (SVD) [13]. The main disadvantage of this approach is that such basis vectors are not adapted to the problem; in other words, it might be necessary to use many components of the basis to be able to properly model the problem. For example, a sound field created by an array with large spatial variation, either because it is in its near field or due to spatial aliasing, can require the superposition of many basis vectors from a DFT, with different frequencies, to model it accurately. We would like, instead, to find a basis where the first few vectors allow us to model the main features of such a sound field. A basis like this is provided by the Krylov subspace, which is defined as
$$\mathcal{K}_k(H^H H, H^H p) = \mathrm{span}\{H^H p, (H^H H)H^H p, \ldots, (H^H H)^{k-1}H^H p\}.$$
This subspace is built iteratively, with k being the number of iterations, and its components are based on increasing powers of the covariance matrix $H^H H$ and projections of the primary pressure field $H^H p$. The value of k also defines the dimension of the subspace, and it cannot exceed $\mathrm{rank}(H)$ since any additional iteration would add vectors that are not linearly independent.
The problem with the Krylov subspace is that increasing powers of the covariance matrix $H^H H$ result in vectors that are richer in the direction of the first right singular vector of H [13]; this is the vector corresponding to the largest singular value. The basis obtained in this way would have components that are not orthogonal. A modified Gram-Schmidt algorithm can be used to obtain a new basis whose vectors follow the directions of the right singular vectors of H, thus providing a better representation. This might seem more cumbersome than computing the SVD and selecting the basis vectors. However, the conjugate gradient (CG) algorithm [20] applied to the normal equations $H^H H q = H^H p$, associated with the unregularized least squares problem $\|Hq - p\|_2^2$, actually computes a solution that lies in the Krylov subspace without the need to orthonormalize and store all the basis vectors [13]. This is exactly what the conjugate gradient least squares (CGLS) algorithm does. This algorithm is more expensive than CG but is still very efficient, since it requires two matrix-vector products per iteration and its memory allocation is essentially independent of the number of iterations [21]. Furthermore, it allows solving problems where H is not necessarily square or positive definite without the need to explicitly compute $H^H H$.
The k-th iterate is the minimizer of the following problem:
$$q^{(k)} = \arg\min_q \|Hq - p\|_2^2 \quad \mathrm{s.t.} \quad q \in \mathcal{K}_k(H^H H, H^H p).$$
Since the solution $q^{(k)}$ provided by the CGLS algorithm consists of a linear combination of the basis vectors in the Krylov subspace [13], it can be written as
$$q^{(k)} = \sum_{i=1}^{k} c_i k_i,$$
where $c_i$ are weights scaling the corresponding i-th Krylov basis vector $k_i$. The SVD of the transfer function matrix is $H = U\Sigma V^H$, and the iterate can equivalently be expressed as
$$q^{(k)} = \sum_{i=1}^{\mathrm{rank}(H)} \phi_i^{(k)} \frac{u_i^H p}{\sigma_i} v_i,$$
which means that the CGLS solution is a solution to an inverse problem where the singular values are filtered by the coefficients $\phi_i^{(k)}$.
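A minimal CGLS iteration for this problem can be sketched as follows (Python, standard library only; a toy illustration of the standard algorithm, not the authors' implementation). Each iteration costs two matrix-vector products, one with $H$ and one with $H^H$, and the k-th iterate lies in the k-dimensional Krylov subspace:

```python
def matvec(A, x):
    # multiply an M x N complex matrix (list of rows) by a vector
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def herm_matvec(A, y):
    # multiply the conjugate transpose A^H by a vector
    n = len(A[0])
    return [sum(A[m][i].conjugate() * y[m] for m in range(len(A))) for i in range(n)]

def cgls(H, p, n_iter):
    """CGLS iterates for min ||H q - p||_2; the k-th iterate minimizes the
    residual over the Krylov subspace K_k(H^H H, H^H p)."""
    q = [0j] * len(H[0])
    r = list(p)                    # residual p - H q (q starts at 0)
    s = herm_matvec(H, r)          # H^H r
    d = list(s)
    gamma = sum(abs(si) ** 2 for si in s)
    for _ in range(n_iter):
        if gamma == 0:
            break                  # exact solution reached
        t = matvec(H, d)
        alpha = gamma / sum(abs(ti) ** 2 for ti in t)
        q = [qi + alpha * di for qi, di in zip(q, d)]
        r = [ri - alpha * ti for ri, ti in zip(r, t)]
        s = herm_matvec(H, r)
        gamma_new = sum(abs(si) ** 2 for si in s)
        beta = gamma_new / gamma
        d = [si + beta * di for si, di in zip(s, d)]
        gamma = gamma_new
    return q
```

For the pressure matching problem, which minimizes $\|Hq + p\|_2^2$, one would call it with $-p$ as the right-hand side; the number of iterations `n_iter` then plays the role of the regularization parameter discussed below.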
In this way it is possible to see how the CGLS algorithm applies a regularization to the problem that depends on the number of iterations. This is similar to a truncated singular value decomposition (TSVD, where the coefficients are 1 before truncation and 0 after) or a Tikhonov-regularized least squares solution (where the i-th coefficient is given by $\sigma_i^2/(\lambda^2 + \sigma_i^2)$). The main difference is that the role of the regularization parameter here is played by the number of iterations. The more iterations, the smaller the residual but, at the same time, the larger the energy in the solution:
$$\|q^{(k)}\|_2 \le \|q^{(k+1)}\|_2, \qquad \|Hq^{(k)} - p\| \ge \|Hq^{(k+1)} - p\|.$$
The main consequence is that it is not necessary to find a truncation order or the value of a regularization parameter; instead, it is possible to include stopping criteria suited to the problem, and the algorithm will stop when these criteria are met. This avoids the practical limitation encountered with a Tikhonov-regularized least squares solution, since the execution can be automated while fulfilling the amplitude or other requirements that a problem might present. Furthermore, these criteria can be more refined and not limited to looking for the best balance between the amplitude of the solution and the amplitude of the residual, as done by GCV, NCP and the l-curve. For instance, they can be used to easily control the radiation pattern of the secondary array, as with other subspace/projection methods and in contrast to the least squares solutions, as explained in Section 2.2. Even though it is beyond the scope of this paper, it is worth knowing that CGLS also allows the use of a Bayesian preconditioner to provide a priori knowledge of the solution [22]. This can be useful, for example, to control how the energy in the solution is distributed between the different sources to avoid over-driving a subset of them.
On the other hand, one has to consider the side effects of doing so, as it will also change the radiation pattern of the secondary array and possibly increase radiation towards the sides.

Number of iterations and spatial properties of the solution
The use of stopping criteria in the CGLS algorithm allows the inclusion of problem-specific requirements that can go beyond the magnitude of the solution or the amplitude of the residual. It was shown in [23] that the number of iterations of the CGLS algorithm can also be used to control the radiation pattern of the control array. In this application, the left and right singular vectors constitute pressure modes and source strength modes [24], respectively. The singular values encode the radiation efficiency of each of the modes or, once inverted, amplification factors [25]. The number of iterations controls the weight that each source strength mode has on the solution and, in turn, how strongly each of the pressure modes is excited. The larger the number of iterations, the larger the weights applied to higher-order modes. High-order pressure modes tend to have a higher spatial frequency, and more of their energy is directed towards the sides. Knowing this, it is possible to control radiation outside of the dark zone by controlling the number of iterations through a specific stopping criterion. For example, one can monitor the total pressure field at control points outside of the dark zone and stop the algorithm when an increase above a certain threshold is detected. This feature is very important for this application since it allows the avoidance of side lobes that could create problems outside of the dark zone. When using least squares with Tikhonov regularization, it would be necessary to compute the solution for multiple regularization parameters to obtain the same result. The range over which one performs this search and the size of the steps would depend on the condition number of the transfer function matrix, which is both setup and frequency dependent. The CGLS algorithm, on the other hand, searches for the solution in the Krylov subspace, which is molded by the transfer function matrix from the start.
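The control-point stopping criterion described above can be sketched as follows (Python, standard library; a hypothetical illustration in which the monitoring points, the 1 dB threshold and the sign conventions are our assumptions, not values from the paper). The loop runs CGLS on the anti-field problem $\min_q \|Hq + p\|_2^2$ and rejects an iterate as soon as the total field at the side control points rises too far above the primary field alone:

```python
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def herm_matvec(A, y):
    n = len(A[0])
    return [sum(A[m][i].conjugate() * y[m] for m in range(len(A))) for i in range(n)]

def cgls_side_stop(H, p, H_side, p_side, max_iter, max_rise_db=1.0):
    """CGLS for min ||H q + p||_2 (anti-field), stopped early when the
    total field at side control points rises more than max_rise_db above
    the primary field alone."""
    q = [0j] * len(H[0])
    r = [-pi for pi in p]          # right-hand side is -p
    s = herm_matvec(H, r)
    d = list(s)
    gamma = sum(abs(si) ** 2 for si in s)
    p_side_pow = sum(abs(pi) ** 2 for pi in p_side)
    for _ in range(max_iter):
        if gamma == 0:
            break
        t = matvec(H, d)
        alpha = gamma / sum(abs(ti) ** 2 for ti in t)
        q_new = [qi + alpha * di for qi, di in zip(q, d)]
        # total field at the side control points with the candidate iterate
        side = [pi + hi for pi, hi in zip(p_side, matvec(H_side, q_new))]
        rise = 10 * math.log10(sum(abs(si) ** 2 for si in side) / p_side_pow)
        if rise > max_rise_db:
            break                  # keep the previous iterate: too much side radiation
        q = q_new
        r = [ri - alpha * ti for ri, ti in zip(r, t)]
        s = herm_matvec(H, r)
        gamma_new = sum(abs(si) ** 2 for si in s)
        beta = gamma_new / gamma
        d = [si + beta * di for si, di in zip(s, d)]
        gamma = gamma_new
    return q
```

Because each candidate iterate is checked before being accepted, the returned filters are always the last iterate that kept the off-axis level increase below the chosen threshold.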

Active set-type method
The solution provided by Eq. (2) contains the complex filter coefficients to be applied to each secondary source at a given frequency. These filters should not amplify the driving signal to the point where non-linear effects from the transducers become significant. This means that the gain applied to each source should be smaller than or equal to a user-defined threshold (in this case set to 1, so $|q_i| \le 1$ or $20\log_{10}(|q_i|) \le 0$ dB). The stopping criteria allow, to some extent, control of the amplitude of the solution: one can stop the algorithm when any component of the solution becomes larger than the user-defined threshold. Even if this is an improvement over other methods, it does not provide explicit amplitude constraints and could prevent the use of a meaningful solution because it violates the amplitude requirements even by a small amount. The active set-type method introduced in [26] allows the inclusion of explicit amplitude constraints on the solution of the CGLS algorithm. So, once one obtains a CGLS solution complying with the user requirements but violating the amplitude constraints, the active set-type method can be employed to find a correction that allows the solution to fulfill the amplitude limits. The main idea is to fix the coefficients that are equal to the constraints and redistribute the energy from the coefficients that are larger than the threshold to the ones that are smaller. This is done by computing a correction $\tilde{y}$ to be applied to the solution, allowing it to fulfill the constraints. This correction must not affect the coefficients equal to the constraints, i.e. $\tilde{y}_i = 0$ for $|q_i| = 1$ in this case. The indices i of these coefficients are stored in the active set $\mathcal{A}_u(q)$. The original method applies box constraints to the solution and hence uses two sets, one for the lower bound and one for the upper bound. Since we are applying the constraints to the magnitude of the solution, we are interested only in the upper bound.
The coefficients violating the constraints are clipped to the threshold, providing an approximate solution $\tilde{q}$ which is used to compute the new residual
$$\tilde{r} = p - H\tilde{q}.$$
The method then introduces a matrix $D = \mathrm{diag}(d_1, d_2, \ldots, d_N)$ with entries $d_i = 0$ if $i \in \mathcal{A}_u(q)$ and $d_i = 1$ otherwise. This matrix ensures that the coefficients in the active set are not affected. The CGLS algorithm is then used again to find an approximate solution $z^{(k)}$ to the problem
$$\min_z \|HDz - \tilde{r}\|_2^2.$$
The sought-after correction is given by $\tilde{y} = Dz^{(k)}$, which can be used to obtain the new alternative solution $\tilde{q} + \tilde{y}$. This new solution could also violate the constraints, in which case the steps described here should be repeated. Since the residual vector $\tilde{r}$ does not decrease monotonically, the algorithm is not guaranteed to terminate. This could lead to cycling; however, in the current application such behavior has not been encountered so far. Furthermore, since this algorithm redistributes the energy between the coefficients of the solution, it tends to work better when the solution has many coefficients, since this offers more degrees of freedom. Solutions with few coefficients might not exploit the advantages offered by this method, and it might not offer any improvement over simply reducing the amplitude of the solution by an offset.
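The clipping and redistribution steps can be sketched as follows (Python, standard library; our toy illustration of the active set-type idea in [26], using the anti-field convention $\min_q \|Hq + p\|_2^2$ and solving the inner least squares problem directly instead of with CGLS, which is what one would use at realistic sizes):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small complex system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def active_set_correction(H, p, q, qmax=1.0, max_passes=10):
    """Active-set style correction for min ||H q + p||_2 s.t. |q_i| <= qmax:
    clip violating coefficients (keeping their phase), fix them, and
    redistribute the remaining error over the free coefficients."""
    Mr, N = len(H), len(H[0])
    q = list(q)
    for _ in range(max_passes):
        # clip magnitudes above qmax, keeping the phase (q~ in the text)
        q = [qi if abs(qi) <= qmax else qi / abs(qi) * qmax for qi in q]
        active = [i for i in range(N) if abs(q[i]) >= qmax - 1e-12]
        free = [i for i in range(N) if i not in active]
        if not free:
            return q
        # residual of the clipped solution (anti-field convention)
        Hq = [sum(H[m][j] * q[j] for j in range(N)) for m in range(Mr)]
        r = [-p[m] - Hq[m] for m in range(Mr)]
        # least squares over the free columns only (z in the text)
        nf = len(free)
        Hf = [[H[m][j] for j in free] for m in range(Mr)]
        A = [[sum(Hf[m][a].conjugate() * Hf[m][b] for m in range(Mr))
              for b in range(nf)] for a in range(nf)]
        rhs = [sum(Hf[m][a].conjugate() * r[m] for m in range(Mr)) for a in range(nf)]
        z = solve(A, rhs)
        for a, j in enumerate(free):
            q[j] += z[a]
        if all(abs(qi) <= qmax + 1e-9 for qi in q):
            return q
    return [qi if abs(qi) <= qmax else qi / abs(qi) * qmax for qi in q]
```

Each pass fixes the coefficients sitting at the bound and re-solves for the free ones; the loop repeats because the redistributed energy can push a previously feasible coefficient over the threshold, mirroring the repeated application described in the text.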

Experimental methods
The strategy proposed in Sections 2.1-2.3 has been tested with the experimental setup shown in Fig. 1. Due to space limitations, the experiment was performed at 2:1 scale, so the frequency range was shifted to the interval from 60 Hz to 240 Hz and the distances were halved to be consistent with the new scaling. The experiment was performed outdoors in semi-free-field conditions on a football field with a small hill on the left, a set of trees at the back and a hedge with a river behind it on the right, as can be seen in Fig. 1b.
The primary and secondary sources consisted of two arrays of 6 d&b audiotechnik Y10 loudspeakers each, which can be considered omnidirectional in the frequency range of interest. The loudspeakers were driven by D80 amplifiers from d&b audiotechnik. The spacing between the sources was 1 m, corresponding to 2 m at full scale, which is a realistic spacing for a real setup. The dark zone started 5 m from the control sources, which in turn were placed 20 m from the primary sources. To evaluate the performance in the dark zone (DZA from now on), 24 microphones were arranged in 8 rows of 3 with 0.5 m spacing between them. Furthermore, an array of 8 microphones (BA), also spaced 0.5 m, was placed 45 m from the primary sources to evaluate the level reduction beyond the dark zone, and two arrays of 16 microphones spaced 1 m were placed to the left (LA) and right (RA) of the main axis, at 10 m from it, to measure possible side lobes. All the microphones were Beyer Dynamics MM1 fitted with wind shields. The microphones were connected to four Yamaha Tio 1608-D interfaces that performed the analog-to-digital conversion and returned the signal through a Dante network. The reference signal fed to the loudspeakers and the signal measured at the microphones were processed in MATLAB and distributed using a Dante virtual sound card at a sampling frequency of 48 kHz. MATLAB was also used to compute the ideal filters from the different algorithms. These filters were then implemented as arbitrary-phase finite impulse response (FIR) filters and uploaded directly to the DSP integrated in the amplifiers. Both the sources and the microphones were placed close to the ground to minimize the interference from ground reflections in the frequency range of the experiment. The weather data was measured using a Davis Vantage Vue weather station set at a height of 2 meters plus an additional sensor on the ground.

Performance evaluation and comparison
The performance of the algorithm was evaluated using filters derived from a primary field p and secondary transfer functions H that were both measured and simulated. Both measurements and simulations used only the sensors in the DZA. The microphones were moved randomly within a circle of 15 cm diameter before evaluating the performance to avoid committing an inverse crime.
A summary of the weather conditions during these measurements is included in Table 1. Having both measurements and simulations allowed us to quantify the improvements that measurements can bring when there are reflections and effects not taken into account by the model. Details of the simulations can be found in Section 2.6.
The performance of the system has been evaluated by computing the insertion loss (IL) at each microphone,
$$IL_n = 20\log_{10}\frac{|p^p_n|}{|p^t_n|},$$
where $p^p_n$ is the complex pressure at microphone n within a given area produced by the main array alone and $p^t_n$ the pressure produced by the main and control arrays together. The insertion losses are then averaged over the different evaluation areas: dark zone (DZA), back array (BA), left array (LA) and right array (RA).
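As a concrete reading of this definition, the per-microphone insertion loss and its area average can be computed as follows (Python, standard library; our illustration — in particular, averaging the per-microphone values in dB is our assumption about how the area averages are formed):

```python
import math

def insertion_loss_db(p_primary, p_total):
    """Per-microphone insertion loss IL_n = 20 log10(|p^p_n| / |p^t_n|).
    Positive values mean the control array reduced the level."""
    return [20 * math.log10(abs(pp) / abs(pt))
            for pp, pt in zip(p_primary, p_total)]

def area_average_il(p_primary, p_total):
    # average the per-microphone IL (in dB) over one evaluation area
    il = insertion_loss_db(p_primary, p_total)
    return sum(il) / len(il)
```

Evaluating this separately for the DZA, BA, LA and RA microphones yields the four curves compared per method in the results.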
The algorithm proposed here has been evaluated using different numbers of iterations, ranging from 1 to 3. The active set-type method from [26] was used to limit the magnitude of the filters to 1 (0 dB), even though it was triggered only after three iterations of the CGLS algorithm. In addition, the proposed method has been compared to least squares with Tikhonov regularization, where the l-curve and GCV have been used to select the regularization parameter and the gain of the solutions has been adjusted when violating the amplitude constraint. The problem has also been formulated within a convex optimization framework and solved in MATLAB using the fmincon function. In this case we used two types of constraints, one on the amplitude of each individual source ($|q_i| \le 1$) and one on the array effort ($\|q\|_2^2 \le 0.5$). In each case the solution has been computed at 49 different frequencies in 1/24th octave bands. A summary of the methods can be found in Table 2. The table also shows the running time of each method. These values comprise all 49 runs needed to compute the solution.

Table 1: Weather conditions during the measurement of the primary field and the transfer functions of the secondary sources. The secondary/control sources are numbered from the rightmost with respect to the main axis (bottom in Fig. 1a) to the leftmost (top in Fig. 1a). The wind direction is relative to the main axis of the setup: 0° corresponds to wind blowing in the direction of propagation, 90° towards the right of the main axis and −90° to the left.


Simulations
The simulations were performed using the complex directivity point source method (CDPS) [27]. This model relies on free-field conditions, so it does not fully match the conditions of the experiment due to the obstacles highlighted in Fig. 1b. The simulations reproduced the setup shown in Fig. 1a, using the same loudspeakers, same sensitivity and directivity pattern, a temperature of 5 °C and no wind to match the weather conditions encountered when the same transfer functions were measured (see Table 1). In a first step, the simulations were used to obtain the primary field p and the secondary transfer functions H only within the dark zone (DZA). The point grid in the DZA in the simulation matched the positions of the microphones during the measurements. These data were used to generate the filters for the control sources using the methods in Table 2. In a second step, the points in the simulated DZA were moved by modifying their coordinates with a correction drawn from a normal distribution, mimicking the shift applied in the real experimental setup. Then, the primary field p and the secondary transfer functions H were computed again in the DZA and also at LA, RA and BA. Finally, the different methods were evaluated by applying the filters obtained in the first step to the newly simulated transfer functions and computing the insertion loss as defined in Eq. (9).
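For readers who want to reproduce the flavor of these simulations, a stripped-down free-field version can be sketched as below (Python, standard library). It uses plain monopole point sources, i.e. it omits the complex directivity term that gives the CDPS model its name, so it is a simplified assumption-laden sketch rather than the model of [27]:

```python
import cmath
import math

def transfer_matrix(sources, receivers, freq, c=343.0):
    """Free-field monopole transfer functions H[m][n] = e^{-jkr}/(4 pi r)
    between N 2-D source positions and M 2-D receiver positions."""
    k = 2 * math.pi * freq / c      # wavenumber
    H = []
    for rx, ry in receivers:
        row = []
        for sx, sy in sources:
            r = math.hypot(rx - sx, ry - sy)
            row.append(cmath.exp(-1j * k * r) / (4 * math.pi * r))
        H.append(row)
    return H
```

Stacking such rows for the control sources gives an H from which filters can be derived, and evaluating the same expression for the stage sources gives a primary field p; perturbing the receiver coordinates before re-evaluating mirrors the inverse-crime precaution described above.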

Results
The results presented in this section are all obtained from physical measurements. Simulations are used to generate the filters in Section 3.1 and, only in the case of Fig. 3, also to calculate the insertion loss.

Insertion loss
The insertion losses produced by the different algorithms using simulated transfer functions are shown in Fig. 2. The insertion losses are calculated from measured pressure fields obtained by applying the filters computed from simulated transfer functions. The insertion losses in the dark zone provided by the different algorithms are quite close to each other. According to simulations, the differences were supposed to be much larger, as shown in Fig. 3. The insertion losses presented in this figure are the only ones in this paper where simulations are used both to derive the filters and to calculate the insertion loss. Inaccuracies in the model, which fails to take into account reflections, inhomogeneities in the medium and wind, reduced the gap and the overall performance. Solutions obtained using larger regularization, such as the CGLS with one iteration, cgls k=1, and the l-curve, which were supposed to give the worst performance, have degraded much less than the others and are more robust to such inaccuracies (more in Section 4.2). On the other hand, fmincon was supposed to provide an insertion loss of more than 30 dB over a large frequency range according to the simulations described in Section 2.6, whose results are shown in Fig. 3. To achieve such large reductions, the magnitude and phase of the primary field need to be matched to an accuracy that is not achievable with all the uncertainties that can be encountered in a real setting. The insertion losses provided by the 3-iteration CGLS method, cgls k=3, between 100 and 150 Hz are much larger than for any other method. The real sound field in this frequency range presents a dip, probably due to interactions with the surrounding obstacles, and the model used for the simulations tends to overestimate the sound pressure level in the DZA.
The amplitude match between the primary and the secondary field is poor for most methods, leading to a decrease in insertion loss. cgls k=3 is the only solution where the active set-type method is active. This method redistributes the energy between the coefficients to comply with the constraints without any knowledge of the primary field. In doing so, it introduces a phase relationship between the sources in the middle of the array and the ones at the extremes that also produces a dip in amplitude in the dark zone. This proves to be a much closer match to the amplitude of the primary field, thus producing larger insertion losses.
The algorithms with smaller regularization show an improvement beyond the dark zone, at the back array. A possible explanation is that, with lower regularization, the secondary field matches the primary field in the dark zone more closely, while higher regularization matches only some of the largest spatial features of the primary field, not accurately enough to carry over to larger distances, so the performance progressively degrades as the mismatch grows.
It is very important to also see what is happening off-axis. The algorithms that on paper were supposed to provide the largest insertion loss are also the ones that increase the sound pressure level outside of the dark zone in Fig. 1 the most. It can be seen how, as the regularization decreases, the insertion loss become more and more negative, indicating an increased SPL in these positions. Solutions with weaker regularization excite higher order pressure modes more strongly, and these modes present strong radiation off-axis (more in Section 4). The resulting secondary field is more prone to present side lobes. In this instance though, the total sound field, produced by the superposition of the primary and secondary fields, does not present side lobes itself but a level increase where the primary field alone had low sound pressure levels. This is the main reason for the negative side lobes and it is a result of the choice of the regularization combined with a mismatch between the simulated and the real sound fields, as explained in the next section.

Primary and secondary sound fields
The negative insertion loss could be interpreted as side lobes. In this particular case though, the negative insertion loss are caused by dips in the primary field. In Fig. 4 we can see the measurements corresponding to cgls k=3 as a function of microphone position and frequency for the microphone array on the left of the dark zone (LA in Fig. 1a). The sound field that results from the interaction of primary and control sources does not present side lobes. It increases the overall level compared with the primary field alone, and does so at frequencies and points where the level of the primary field alone was quite low. This suggests that the problem it creates is not as bad as the insertion loss alone might suggest. It can still be a problem because the level increase is quite large and, if there is a building in such a direction, the difference will be very noticeable. The control point stopping criterion discussed in Section 2.2 and implemented in the CGLS algorithm takes such situations into account. The larger the regularization (see cgls k=1, cgls k=2 and l-curve), the closer to 0 the insertion loss. This is because, with a stronger regularization, the secondary sound field is more focused on the dark zone with limited radiation outside of it, resulting in a minimal level increase with respect to the primary field alone. The insertion loss at the left and right arrays (LA and RA) are similar, but the amplitudes and positions of the dips do not totally agree due to the asymmetry of the sound field produced by the different obstacles at the left and right sides of the domain, as can be seen in Fig. 1b.

Measured transfer functions
To study the effect that simulated or measured transfer functions have on the performance of the system, a new set of filters has been computed using the measured transfer functions from Table 1. The resulting insertion loss have been compared to the ones obtained by using simulated transfer functions. We omitted the graphics corresponding to the LA and RA for cgls k=1 and cgls k=2 for clarity, since the results using measured and simulated transfer functions match very closely. The results for cgls k=1 are shown in Fig. 5a. The difference is quite noticeable in the dark zone. The measured transfer functions provide a better performance on average, even though not much larger than with the simulated ones. Measuring the transfer functions provides an accurate description of the sound field in the dark zone. The main difference is located between 100 and 150 Hz due to a model mismatch. The amplitude mismatch results in the drop in performance when using simulated transfer functions. In cases such as this one, where reflections are involved, the interference pattern outside of the measured area will be quite different. Because of this difference, the performance in the dark zone does not generalize well to other areas. This is confirmed by the measurements at the array at the back (BA in Fig. 1a). Here the performance from the measured transfer functions degrades more than with simulations. Increasing the number of iterations to 2 improves the performance of both sets of filters in the dark zone, as can be seen in Fig. 5b. The set derived from the measured transfer functions provides very irregular performance over frequency, but it still presents an improvement over simulations. This solution can provide better results than the one iteration version, but it is more sensitive to inaccuracies. The weather conditions changed from the time when the transfer functions were measured to the time when the insertion loss were obtained.
While the temperature did not change, the direction of the wind changed by approximately 110 degrees while maintaining its speed (see Table 3, entry number 23 compared to Table 1). Considering this shift, the accuracy of the match between primary and secondary field can change rather quickly with frequency and/or space, since the wind does not only affect the direct field but also the reflections and thus the interference pattern. This can cause abrupt changes in the performance considering the high level of reduction reached here (up to 24 dB). However, even if there are large drops, the insertion loss are over 10 dB for most frequencies. The simulated transfer functions provide a smaller reduction but are not as irregular. This smoothness is due to the lack of reflections in the simulations. When there is a difference between the weather conditions used to compute the filters and the real ones, the impact on the results is not as big. This is because these filters match only the direct field, which does not change as much as the interference pattern. In addition, also here the largest difference is between 100 and 150 Hz. The reason is the same as with cgls k=1. This time the difference is even larger due to the increased number of iterations. We can see here that, when the propagation paths are accurately characterized, the increased accuracy of the algorithm leads to larger insertion loss. On the other hand, the algorithm is also more sensitive to errors, which causes the drop in performance due to the model mismatch described in Section 3.1.
Finally, the losses in the dark zone using 3 iterations are shown in Fig. 5c and they follow an overall trend similar to the previous ones. This solution further reduces the residual, providing larger insertion loss than cgls k=1 and cgls k=2. The performance from simulated transfer functions is now closer to the measured ones between 100 and 150 Hz. As explained in Section 3.1, the active set-type method introduces a phase relationship between the secondary sources that produces a dip in level in this frequency range. This counterbalances the model's overestimation of the amplitude of the sound field in this frequency range, providing a better match between the primary and secondary fields. The performance in this case is limited by the fact that the solution hits the amplitude constraints, and it is even more sensitive to uncertainties. We can see here that the insertion loss reach peaks of more than 25 dB and are above 10 dB over the entire frequency range. At the back, the performance of the measured transfer functions is similar to the one obtained with two iterations. When simulated transfer functions are used instead, the insertion loss are worse than with 2 iterations. cgls k=3 is even more sensitive to uncertainties due to the weaker regularization. The changes in the interference pattern outside of the dark zone introduce a larger drop in performance than in the previous case.
For all the algorithms tested, not only cgls k=1 and cgls k=2 but also the others, the insertion loss at LA and RA from measured transfer functions closely match the results from simulated transfer functions in Fig. 2. The only exception is cgls k=3, which is why we show these results only for this case. The filters derived from measured transfer functions do not produce a large increase of the sound pressure level off-axis. On the other hand, there is a large increase when the simulated transfer functions are used instead. The background noise in the measured transfer functions actually improves the conditioning of the corresponding transfer function matrix. The higher spatial frequency associated with the noise field increases the amplitude of the high order singular values. This results in a more stable solution, with a smaller amplitude, that does not hit the amplitude constraints. It behaves as a solution with a stronger regularization and weak off-axis radiation. The worse conditioning of the simulated secondary transfer function matrix leads to a solution with a larger amplitude that hits the constraints and produces a stronger radiation outside the dark zone. This discrepancy between measured and simulated transfer functions only occurs in this case because it is the one with the weakest regularization. All the other solutions based on inverse problems have a stronger regularization, and the better conditioning of the measured transfer function matrix does not produce such a striking difference. It does however produce a smaller level increase at LA and RA than the solutions from simulations do. The methods based on convex optimization are affected differently by the noise: the amplitude of the solution remains large, producing an increase in sound pressure level off-axis.
The effect of the regularization and the level increase seen when using the simulated transfer function is also related to a mismatch between the predicted and real primary pressure fields. Because of this, the secondary pressure field fails to match the primary field on the sides and it increases the level where the primary field had a small amplitude. This does not occur in the dark zone because it is closer to the main axis and well within the main lobe of both arrays.
To facilitate the comparison, the primary field from simulation and from measurements at the left microphone array are plotted in Fig. 6. Away from the main axis, the simulated primary field presents a dip that moves closer to the main axis as the frequency increases. In the measurements, this dip has a different extension in space and behaves differently in both space and frequency. The possible reasons for this difference are reflections and a mismatch in the speed of sound due to temperature and wind. The second and third plots in the figure both represent the measured primary field but at two different times with a temperature difference of approximately 8 °C (entries 10 and 31 in Table 3). They show how a difference in temperature can change the shape of the radiation pattern. Now looking at the secondary fields in Fig. 7, we see how, using filters generated by cgls k=3 from simulations, we obtain a good match with the simulated primary field but not with the measured one. The main difference is the large sound pressure level introduced by the secondary field at positions and frequencies where the level of the primary field was much lower. This results in the large dips observed in Fig. 5c. When we look at the secondary field generated with cgls k=3 using measured transfer functions, we see that it provides a closer match to the measured primary field with overall lower sound pressure levels.

Numerical properties of the solutions
In this section the focus is brought to the numerical properties of the secondary transfer functions and to the solutions provided by the different algorithms.
It is possible to analyze the solutions obtained using the CGLS or any other method in terms of which pressure modes they excite and to what extent. Using the SVD H = UΣV^H, one can express the secondary field p_s as p_s = Hq = UΣV^H q = Uw, where w = ΣV^H q is a vector of weights or amplification coefficients applied to the pressure modes (the columns of U). We can study why some solutions might be problematic outside of the dark zone by looking at the amplitude of the weights and the shape of the corresponding pressure modes. Fig. 8 shows the magnitude and phase of the left singular vectors, equivalent to pressure modes, of the simulated transfer functions H at 125 Hz. The domain used for the simulations was symmetric and, as a consequence, the odd order modes are also symmetric while the even ones are anti-symmetric. Another important observation regards the energy distribution within each mode. First, low order modes have a lower spatial frequency and, second, the energy is more focused at the center and moves towards the sides as the order increases. The different algorithms and regularizations combine these modes in different ways and with different weights, which determine the spatial characteristics of the solutions.
Table 3. Summary of the weather conditions encountered during the measurements displayed in this paper. The wind direction is relative to the main axis of the setup: 0 corresponds to wind blowing in the direction of propagation, 90 to wind blowing towards the right of the main axis and −90 to the left.
Fig. 6. Overview of the primary field at the left array: simulated (left) and measured at two different times (center corresponding to entry 10 and right to entry 31 in Table 3).
Fig. 7. Overview of the secondary field at the left array: simulated (left), measured applying the filters from simulations (center, entry 17 in Table 3) and measured applying the filters obtained from measurements (right, entry 38 in Table 3).
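The modal decomposition of the secondary field can be sketched numerically; the matrix dimensions below (6 sources, 20 control points) are hypothetical:

```python
import numpy as np

# Hypothetical complex secondary transfer matrix H (20 mics x 6 sources)
# and source coefficient vector q at one frequency.
rng = np.random.default_rng(0)
H = rng.standard_normal((20, 6)) + 1j * rng.standard_normal((20, 6))
q = rng.standard_normal(6) + 1j * rng.standard_normal(6)

# SVD of the secondary transfer matrix: H = U @ diag(s) @ Vh.
U, s, Vh = np.linalg.svd(H, full_matrices=False)

# Modal weights w = Sigma V^H q; the secondary field is p_s = U w,
# i.e. a weighted sum of the pressure modes (columns of U).
w = s * (Vh @ q)
p_s = U @ w
```

Inspecting the magnitude of `w` mode by mode is exactly the analysis carried out in Fig. 9: solutions with large weights on high order modes radiate more energy off-axis.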
The solutions from the different algorithms excite these modes with different weights that can be obtained applying Eq. (10). These weights are shown in Fig. 9 as a function of frequency. Considering that, in the simulations, both the primary field and the secondary transfer functions are symmetric, the even order modes are not excited at all and have been omitted for clarity. Odd order modes have weights that in general decrease with the order. The first mode has approximately the same amplitude in each solution. The amplitude of the third mode increases with frequency, starting at one order of magnitude lower than the first mode and reaching approximately half its amplitude at the top of the frequency range. This is approximately the same for every solution except for l-curve and cgls k=1. The l-curve presents a slightly smaller amplitude than the other methods. However, cgls k=1 has an amplitude several orders of magnitude smaller than the first mode except at high frequencies. This means that for this solution, only the first mode is relevant up to a frequency of approximately 200 Hz, so the third mode can be ignored for all practical purposes. The fifth mode is much smaller than the previous ones. It can be seen here that the solutions where the amplitude of this mode is larger are also the ones that in Fig. 2 had more energy radiated towards the sides. cgls k=1 is the one radiating the least amount of energy towards the sides because only the first mode has a large amplitude and it focuses the energy towards the main axis. The dip in the coefficients of the fifth mode is a numerical artifact due to how the modes are sorted in MATLAB.

Sensitivity analysis
The weather conditions, specifically temperature and wind, and as a consequence the speed of sound, affect the transfer functions between a source and a set of receivers as described in [10]. The performance of an outdoor active noise control system depends on how close the weather conditions are to the ones under which the transfer functions were measured/simulated, as explained in [7]. In this section, we present an analysis of the robustness of the cgls algorithm to inaccuracies in the speed of sound. The filters were computed using simulated transfer functions obtained using a speed of sound of 334.4 m/s, corresponding to a temperature of 5 °C as recorded when the transfer functions were measured during the experimental validation. Three different sets of filters were obtained by running the algorithm for 1, 2 and 3 iterations. An additional 512 realizations of the primary field and secondary transfer functions were computed using speeds of sound following a Gaussian distribution with mean 334.4 m/s and a standard deviation of approximately 2.5 m/s. The speed of sound was calculated using the dry air approximation c = 20.05 √(273.15 + T) [28], with T the temperature in °C. The distribution of the speed of sound is shown in Fig. 10. Each set of filters was then applied to each of the 512 realizations of the secondary transfer functions and the resulting insertion loss in the dark zone have been averaged over space and frequency. The relative frequency of the insertion loss for k = 1, k = 2 and k = 3 are shown in Fig. 11.
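The sampling of perturbed sound speeds described above can be sketched as follows (the random seed is illustrative; mean and standard deviation are the ones stated in the text):

```python
import numpy as np

rng = np.random.default_rng(42)

def speed_of_sound(T_celsius):
    # Dry-air approximation c = 20.05 * sqrt(273.15 + T) [m/s], T in degC.
    return 20.05 * np.sqrt(273.15 + np.asarray(T_celsius, dtype=float))

# Nominal condition used to compute the filters: 5 degC -> ~334.4 m/s.
c_nominal = speed_of_sound(5.0)

# 512 perturbed realizations of the medium: Gaussian around the nominal
# speed of sound with a standard deviation of approximately 2.5 m/s.
c_realizations = rng.normal(loc=c_nominal, scale=2.5, size=512)
```

Each realization would then be used to recompute the primary field and secondary transfer functions before applying the fixed filter sets.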
Although the distribution of the speed of sound is symmetric, the resulting distribution of the insertion loss is asymmetric. The reason is that as the estimation error of the speed of sound increases, regardless of whether it is due to underestimation or overestimation, the performance gets worse and the insertion loss decreases. Furthermore, the histograms for cgls k=1 and cgls k=2 present a sharp rise at high insertion loss. This is a consequence of using the mean speed of sound to also calculate the filters. It follows that the error in estimating the speed of sound follows a Gaussian distribution with mean 0 and the same standard deviation as the distribution of the speed of sound. This means that small errors occur more often, leading to larger relative frequencies for high insertion loss. Moreover, a sharper increase means that large insertion loss are achieved in more realizations, hinting at a larger robustness of the method due to a larger tolerance to errors. cgls k=3 is an exception since it does not show the same sharp rise at high insertion loss. Even when the error is small, in this case the performance is limited by the amplitude constraints and the active set-type method, which is triggered only in this case. The performance from cgls k=1 is worse both in absolute and average terms than the others. cgls k=2 can provide the largest insertion loss when the speed of sound used to compute the filters is accurate. cgls k=3 provides slightly worse performance in the best case scenario. This is again a result of the constraints. Since this limitation is intrinsic to the solution and independent from the accuracy of the sound speed, on average the losses tend to be worse than for cgls k=2. The shapes of the frequency distributions for cgls k=2 and cgls k=3 are rather similar and the center of gravity occurs at insertion loss between the mean and the mean minus one standard deviation.
The main difference is that cgls k=3 has a smaller tail at higher insertion loss and larger relative frequencies at the center of gravity. This is reflected in the lower average value and a smaller standard deviation. cgls k=1 is very different since the center of gravity is well distributed between plus/minus one standard deviation from the mean. Furthermore, the largest insertion loss have the largest relative frequencies. In this case, the mean is smaller than in the other two cases but the standard deviation is also considerably smaller, meaning that cgls k=1 is the most consistent in terms of performance.
The main take-away from this analysis is that cgls k=1 is more robust and delivers consistent performance across a larger range of speeds of sound, even though the mean and absolute insertion loss are smaller than in the other cases. cgls k=3 fails to deliver the improvement that one might expect from increasing the number of iterations due to the amplitude constraints. cgls k=2 should be chosen if one desires larger insertion loss than the ones cgls k=1 can provide.

Convergence analysis
In this section the focus is on the effect that grids of different resolutions used in the simulations have on the insertion loss. The filters computed at each resolution were applied to simulated transfer functions different from the ones used to compute the filters. In addition, the filters were also applied to the transfer functions that were actually measured on the field. In Fig. 12 the insertion loss averaged over the dark zone are shown for the different grid resolutions and numbers of iterations of the CGLS algorithm. When the filters obtained after 1 iteration are applied to the simulated transfer functions (Fig. 12a), the insertion loss increase as the resolution gets finer. The sound field is better captured with finer grids and the resulting anti-field matches the primary field more closely up to approximately 200 Hz, where the relation between resolution and performance is not as clear. The results seem to converge for grid resolutions smaller than 0.1 m. When the filters are applied to the measured transfer functions (Fig. 12b), the finer resolutions do not affect the insertion loss. This is due to the fact that for better performance, the tolerance to errors gets smaller and the real world uncertainties spoil the gain in performance provided by increasing the sampling resolution. As a matter of fact, at some frequencies the coarser grids perform better than the finer ones because they are not modelling finer details and thus do not suffer when such details do not exactly match reality. Similar conclusions can be drawn when the algorithm is run for 2 iterations. In this case the simulated insertion loss (Fig. 12c) are larger than with cgls k=1, as one might expect. On the other hand, when the filters are applied to real transfer functions (Fig. 12d), both the gain from a finer grid and the additional iteration fade and the performance is quite similar to the one shown in Fig. 12b. This is due to the fact that this solution is less robust and the real world uncertainties affect all the solutions regardless of the grid size used.
The situation is different when the algorithm is run for 3 iterations. When the filters are applied to simulated transfer functions (Fig. 12e) we see a big difference at low frequencies when grids with a resolution finer than 0.1 m are employed. At mid frequencies there is a trend similar to the previous cases, where the performance improves by making the grid finer, reaching convergence for a resolution of 0.1 m. At higher frequencies the effect of the grid resolution on the performance is not clear. When the filters are applied to measured transfer functions (Fig. 12f) we see a trend similar to the one seen when using simulations and, as it happened in the previous two cases, the improvements are not as big as one might expect, except at low frequencies. The reason why in this case there is such a big difference using different grid resolutions, and why this difference is also present with the measured transfer functions, can be found by inspecting the actual filters in Fig. 13.
In this particular case the solutions hit the amplitude constraints, forcing the active set-type method to kick in and find an alternative solution with a magnitude smaller than one. The solutions with grids finer than 0.1 m do not hit the constraints below 100 Hz. This is the reason why, up to this frequency, these solutions provide much larger insertion loss. Above 100 Hz, all solutions hit the constraints, the different grid resolutions do not matter as much anymore and the performance in terms of insertion loss drops dramatically. The high frequency inconsistencies might be due to how the active set-type method works. Instead of just clipping the solution, this method fixes the coefficients of the solution hitting the constraints and redistributes the energy between the coefficients above and below the constraints. In this way, it finds an alternative solution with no coefficient larger than the user-defined threshold. For this reason, it is not straightforward to find a relation between grid resolutions and insertion loss. Finer resolutions do not hit the constraints at low frequencies because the corresponding transfer function matrices have larger singular values. Even the high order singular values are close to unity. This means that when they are inverted to compute the solution, they do not boost the amplitude of the filters as much as the singular values corresponding to the coarser grids do, so the corresponding solutions are more stable. This can only be seen with cgls k=3 because in this case the smaller regularization leads to higher order modes having a larger weight in the solution, making the amplitude of the high order singular values more significant.
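The general active-set idea (fix the coefficients at the bound, then re-solve for the remaining free ones against the residual) can be illustrated with a schematic loop. This is a minimal sketch of the concept, not the authors' implementation from [26]:

```python
import numpy as np

def constrained_ls(H, p, qmax, max_outer=10):
    """Least squares ||H q - p|| subject to |q_i| <= qmax (schematic).

    Coefficients whose magnitude exceeds qmax are fixed at the bound
    (keeping their phase); the remaining free coefficients are then
    re-solved against the residual of the fixed part.
    """
    q = np.linalg.lstsq(H, p, rcond=None)[0]
    fixed = np.zeros(q.size, dtype=bool)
    for _ in range(max_outer):
        over = (np.abs(q) > qmax) & ~fixed
        if not over.any():
            break
        # Clamp violating coefficients to the bound, preserving phase.
        q[over] = qmax * q[over] / np.abs(q[over])
        fixed |= over
        free = ~fixed
        if not free.any():
            break
        # Re-solve the free coefficients against the remaining residual.
        r = p - H[:, fixed] @ q[fixed]
        q[free] = np.linalg.lstsq(H[:, free], r, rcond=None)[0]
    return q
```

Unlike plain clipping, the re-solve step lets the free coefficients absorb part of the energy removed from the clamped ones, which is the redistribution behavior described above.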

Discussion
In Section 2 it was explained how the number of sources determines rank(H), which in turn limits the number of iterations of the cgls algorithm. In the experiment presented in Section 2.4, we ran the algorithm for a maximum of 3 iterations because of the symmetric nature of the primary field and secondary transfer functions used in the simulations. This symmetry halved the degrees of freedom, and subsequent iterations would not have added any new information or improvement to the solutions. We can also see this in Section 4.1, where only three of the six pressure modes were active in all the solutions from the different algorithms.
The iterative nature of this algorithm can be both a strength and a limitation. It is a strength since it makes it easier to select the appropriate regularization by controlling the number of iterations and monitoring relevant performance criteria in between iterations. On the other hand, the amount of regularization changes in discrete steps, going from very large at the first iteration to almost no regularization. However, this limitation is lessened when the number of secondary sources increases. When more sources are used, the maximum number of iterations increases too. This results in more discrete steps and smoother changes in the regularization. More sources are also beneficial for the active set-type method. More sources mean more degrees of freedom, allowing this algorithm to find a better alternative solution. Few degrees of freedom and large violations of the constraints, as was the case with cgls k=3, can lead to solutions with discontinuities because the algorithm has to intervene more aggressively with fewer options for redistributing the energy between coefficients. Such discontinuities can make the implementation of the filters problematic; they can add signal artifacts and a drop in performance in a limited frequency range that could spoil the overall performance of the system. It was also noticed that the number of iterations should be kept constant across frequency. The reason for this is that a different number of iterations means different energy in the solution. When different numbers of iterations are used for adjacent frequencies, the solution can present jumps that would make it harder to implement the resulting filters. Furthermore, even if the number of iterations is kept constant, this method still provides the advantage of a frequency dependent regularization. As was seen in Section 2, the regularization provided depends on the filter coefficients, which in turn depend on H and p, both frequency dependent.
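The iteration-as-regularization mechanism can be made concrete with a textbook CGLS formulation (a standard sketch for min ||Hq − p||, not necessarily the exact implementation used here, which also adds stopping criteria and constraints):

```python
import numpy as np

def cgls(A, b, k):
    """Conjugate gradient least squares, stopped after k iterations.

    Early stopping acts as regularization: small k yields a strongly
    regularized solution dominated by low order modes; larger k yields
    a less regularized solution that also excites higher order modes.
    """
    x = np.zeros(A.shape[1], dtype=complex)
    r = b.astype(complex)
    s = A.conj().T @ r
    d = s.copy()
    gamma = np.vdot(s, s).real
    for _ in range(k):
        Ad = A @ d
        alpha = gamma / np.vdot(Ad, Ad).real
        x += alpha * d
        r -= alpha * Ad
        s = A.conj().T @ r
        gamma_new = np.vdot(s, s).real
        d = s + (gamma_new / gamma) * d
        gamma = gamma_new
    return x
```

Running the loop to completion (k equal to the rank of A) recovers the unregularized least squares solution, which is why the number of useful iterations is bounded by rank(H) as discussed above.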
It can be beneficial to use measured transfer functions instead of simulations when increasing the number of iterations, since inaccuracies in the model can be amplified and the noise reduction in the dark zone is then accompanied by an increase in the sound pressure level outside of it. The disadvantage is that during the day, weather conditions can drastically change, reducing the improvements. In addition, when there are reflections, solutions obtained using measured transfer functions do not generalize well beyond the dark zone, while with simulations, since they only compensate for the direct field, the performance does not degrade as much. This agrees with [7], since the interference pattern generated by reflections in the dark zone does not just propagate beyond it without substantial changes. On the other hand, the direct sound field can be extrapolated out of the dark zone and it still constitutes the most prominent component of the sound field.
In case it is not possible to measure the transfer functions and the primary field, it can be a better option to use simulations and cgls with a low number of iterations. The results obtained in this case are comparable to results obtained in [7] and in [8], for cases with a similar topological complexity. This means that it is possible to avoid measuring the transfer functions, thus reducing the practical limitations for the use of such a system in day-to-day applications. One can also use more advanced modelling tools reducing the gap between simulations and measurements. Furthermore, large insertion loss obtained in simulations are often not achieved in practice due to uncertainties in the modelling parameters. Solutions obtained with a small number of iterations are in general preferable since they tend to be more robust to limitations of the model or inaccurate medium parameters as described in Section 4.2 and therefore present a lower risk of increasing the sound pressure level outside the dark zone.
The experimental setup described in Section 2.4 was not designed to maximize the loss outside of the dark zone. The main purpose was to compare the different algorithms in terms of insertion loss and radiation patterns. Placing the dark zone further away from the secondary sources would have produced better results at the BA and at larger distances in general. A dark zone close to the secondary array presents a lower condition number than one further away [29]. This is due to the fact that there is larger spatial variation in the pressure field due to near field effects, which results in larger singular values for the higher order modes. At the same time, because near field effects are included in the solution, it tends not to generalize well when moving away from the dark zone. Furthermore, the rates of decay of the levels of the two sound fields can be quite different, leading to an increasing mismatch between primary and secondary field and thus performance that degrades with distance.
When the dark zone is placed far from the secondary sources, the far-field is being modelled instead, allowing for a better generalization. The improved performance in this case is also related to the wavefronts of the two fields becoming more plane and similar. The drawback is an increase of the condition number. The main consequence is that instability in the solution can occur with a smaller number of iterations.
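The near/far conditioning trade-off can be illustrated with free-field monopole transfer functions G = exp(−jkr)/r; the geometry, frequency and array sizes below are hypothetical, chosen only to show the effect:

```python
import numpy as np

def transfer_matrix(src_x, mic_x, mic_dist, k):
    """Free-field monopole transfer functions G = exp(-j k r) / r."""
    sx = np.asarray(src_x)[None, :]   # source positions along the array
    mx = np.asarray(mic_x)[:, None]   # microphone positions along the line
    r = np.hypot(mx - sx, mic_dist)   # distance source -> microphone
    return np.exp(-1j * k * r) / r

k = 2 * np.pi * 100 / 343            # wavenumber at 100 Hz, c = 343 m/s
src = np.linspace(-2.5, 2.5, 6)      # hypothetical 6-source line array
mic = np.linspace(-2.5, 2.5, 6)      # control line parallel to the array

# Condition number of H for a nearby vs a distant control line.
near = np.linalg.cond(transfer_matrix(src, mic, 2.0, k))
far = np.linalg.cond(transfer_matrix(src, mic, 20.0, k))
```

In this sketch the distant control line yields a much larger condition number, since the rows of H become nearly identical in the far field, which is the instability mechanism described above.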
The proposed method is computationally slightly heavier than regularized least squares, but still of the same order of magnitude. The increase in the number of iterations has a small effect on the total running time. The active set-type method produces a more noticeable increase but allows the inclusion of explicit amplitude constraints. The alternative is constrained convex optimization, although in this case the running time increases by two orders of magnitude compared to cgls k=3. However, the running times presented in Table 2 are all acceptable for this type of application since changes in the mean properties of the medium occur on a much larger time scale: from 10 min to approximately 1 h [30].
In this work, we did not include compensation for changing propagation conditions and thus for the corresponding variations in the transfer functions. Instead, we ran the measurements in small time windows of approximately 10 min, in which the mean properties of the medium can be considered quasi-static, to minimize the influence of such changes. The influence of the medium and the robustness of the proposed method have been analyzed in Section 4.2. However, at short distances, it is possible to correct variations in the transfer functions caused by changes in the propagation conditions as described in [31]. At larger distances the effects of the moving inhomogeneous medium are more complex and harder to compensate for.
Finally, all filters have been implemented as arbitrary-phase finite impulse response (FIR) filters. The implemented filters do not show pre-ringing or other time-domain artifacts. The filters obtained from simulations are very similar to each other, with small differences in time and amplitude that increase as the regularization decreases, as expected. The situation is different when using measurements. First, the filters are longer, since the reflections are also taken into account. fmincon produces solutions that differ from those of the other algorithms, with larger oscillations and longer responses. The difference between the two fmincon solutions using different constraints is minimal. The remaining algorithms resemble each other closely, except for small differences in amplitude and time.
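As an illustration of how a frequency-domain driving function can be turned into real FIR taps, the sketch below applies an inverse real FFT to a Hermitian-symmetric frequency response. The sample rate, band limits, and delay are hypothetical values chosen for the example, not those of the experiments.

```python
import numpy as np

fs = 1000            # sample rate in Hz (hypothetical)
n = 512              # FIR length in taps
f = np.fft.rfftfreq(n, 1 / fs)

# Hypothetical driving function: unit gain in the subwoofer band, pure delay
delay = 0.05         # seconds
H = np.where((f >= 30) & (f <= 120), 1.0, 0.0) * np.exp(-2j * np.pi * f * delay)

h = np.fft.irfft(H, n)                 # real-valued FIR taps
peak_time = np.argmax(np.abs(h)) / fs  # main energy lands at t = delay
print(peak_time)
```

This only demonstrates the conversion mechanism; the actual filter responses in the paper are derived from the computed driving functions per source and per frequency.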

Conclusions
In Section 3 it was shown that the CGLS-based method proposed here provides performance comparable to regularized least squares and constrained optimization in terms of insertion loss, and comparable to other studies of similar topological complexity. It was also shown how amplitude constraints can be included in the solution using an active set-type method [26] without drastically increasing the computational effort, in contrast with constrained convex optimization (Table 2). The advantages of these two methods and of their combination increase with the scale of the problem. In Section 2.2 it was shown how the directivity pattern of the secondary array can be controlled by controlling the number of iterations through application-specific stopping criteria. This is an efficient way to incorporate a feature that would otherwise require careful manual selection of a regularization parameter in a regularized least squares approach, mode selection in a subspace/projection method, or additional constraints in a convex optimization setting.
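The core mechanism, CGLS with the iteration count k acting as the regularization parameter, can be sketched as follows. The matrix below is a random ill-conditioned stand-in for a transfer matrix, not measured data:

```python
import numpy as np

def cgls(A, b, k):
    """Conjugate gradients on the normal equations A.T A x = A.T b,
    stopped after k iterations: early stopping acts as regularization."""
    x = np.zeros(A.shape[1])
    r = b.copy()                 # residual b - A x for the start x = 0
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(k):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Hypothetical ill-conditioned system (singular values from 1 down to 1e-6)
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = (U[:, :8] * np.logspace(0, -6, 8)) @ V.T
b = U[:, :8] @ rng.standard_normal(8)       # target in the column space of A

x3, x8 = cgls(A, b, 3), cgls(A, b, 8)
# More iterations: smaller residual but a larger, less regularized solution
print(np.linalg.norm(A @ x3 - b), np.linalg.norm(x3))
print(np.linalg.norm(A @ x8 - b), np.linalg.norm(x8))
```

Stopping early keeps the solution in a low-dimensional Krylov subspace dominated by the large singular values, which is what limits the driving amplitudes and, in the application, shapes the directivity of the array.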
In Section 4.1 it was found that the magnitude of the regularization controls not only the amplitude of the solution but also the directivity pattern of the control array. A sensitivity study in Section 4.2 also showed that regularization directly affects the robustness of the solution against model inaccuracies, noise in the measurements, and uncertainty in simulation parameters such as the speed of sound. In general, it was observed that stronger regularization focuses the energy of the solution into the dark zone, limiting the risk of a level increase outside of it and thus avoiding the creation of new problems and complaints in new areas. Even though, on paper, stronger regularization is associated with larger residuals, which translate to lower insertion loss, it was found that this is not necessarily the case in practice. A large insertion loss requires very small magnitude and phase errors between the primary and secondary fields, which are hard to achieve in a dynamic environment with changing weather conditions and possibly reflections from obstacles and from the ground. In general, solutions obtained with stronger regularization are recommended. Moreover, it was found that measured transfer functions can provide better insertion loss than simulated ones. However, the simulated transfer functions do better in complex topologies since they only model the direct field. The measurements include both the direct field and reflections, which are harder to model correctly; hence the tendency to present larger errors outside of the dark zone or under changing weather conditions.
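The trade-off between nominal residual and robustness can be reproduced in a small numerical sketch. The matrix sizes, singular-value spread, and model perturbation below are all hypothetical: a Tikhonov-regularized solution computed on a nominal matrix is evaluated on a slightly perturbed copy, mimicking a transfer-function mismatch.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares: minimize ||A x - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Hypothetical ill-conditioned system (singular values from 1 down to 1e-4)
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((30, 30)))
V, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = (U[:, :6] * np.logspace(0, -4, 6)) @ V.T
b = U[:, 0] + U[:, 5]          # target excites a strong and a weak mode
A_pert = A + 0.005 * rng.standard_normal(A.shape)   # small model mismatch

def achieved_residual(lam):
    """Residual when a solution designed on A is applied to the perturbed model."""
    x = tikhonov(A, b, lam)
    return np.linalg.norm(A_pert @ x - b)

weak, strong = achieved_residual(1e-6), achieved_residual(1e-2)
print(weak, strong)   # weak regularization degrades badly under mismatch
```

The weakly regularized solution inverts the small singular values, so its large driving amplitudes amplify even a small model error, whereas the strongly regularized solution trades a larger nominal residual for stability, in line with the observations above.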
Finally, a convergence study in Section 4.3 showed that grids with higher resolution can provide larger insertion loss due to the more accurate sampling of the sound field during simulations. In practical applications, though, these gains might be cancelled by a mismatch between the model and the dynamic properties of the propagation paths. Furthermore, it showed that an increased grid resolution results in better conditioning of the problem, allowing more stable solutions to be achieved even with a relatively large number of iterations.
Future work should focus on the application of this method in the time domain and/or in combination with an adaptive method to compensate for changes in the propagation conditions in real time.
CRediT authorship contribution statement

Data availability
The authors do not have permission to share data.

Declaration of Competing Interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Pierangelo Libianchi reports financial support, equipment, drugs, or supplies, and travel were provided by d&b audiotechnik GmbH & Co. KG. Pierangelo Libianchi reports a relationship with d&b audiotechnik GmbH & Co. KG that includes: employment.