The application of Buckingham π theorem to Lattice-Boltzmann modelling of sewage sludge digestion

Abstract For the first time, a set of Lattice-Boltzmann two-way-coupled pointwise Euler-Lagrange models is applied to gas mixing of sludge for anaerobic digestion. The set comprises a local model, a "first-neighbour" model (viz., back-coupling occurs to the voxel where a particle sits, plus its first neighbours) and a "smoothing-kernel" model (forward- and back-coupling occur through a smoothed-kernel averaging procedure). Laboratory-scale tests display grid-independence problems because the bubble diameter is larger than the voxel size, thereby breaking the pointwise Euler-Lagrange assumption of negligible particle size. To tackle this problem and thereby obtain grid-independent results, a novel data-scaling approach to pointwise Euler-Lagrange grid-independence evaluation, based on an application of the Buckingham π theorem, is proposed. Evaluation of laboratory-scale flow patterns and comparison to experimental data show only marginal differences between the models, and between numerical modelling and experimental data. Pilot-scale simulations show that all the models produce grid-independent, coherent data if the Euler-Lagrange assumption of negligible (or at least, small) particle size is recovered. In both cases, second-order convergence was achieved. A discussion follows on the opportunity of applying the proposed data-scaling approach rather than the smoothing-kernel model.


Introduction
Wastewater treatment is an energy-intensive operation. EU figures [8] for Member States show that wastewater treatment works' (WwTWs) energy consumption exceeds 23,800 GWh per annum, with a further increase of 60% expected in the next 10-15 years due to tightened regulation of effluent discharge (Water Framework Directive [14]). [46] forecasted a 30% increase in global water demand, and a 50% increase for food and energy, in the next decades; hence, it is urgent to address the water-energy nexus in order to tackle climate change.
UK WwTWs produce 1.5M tonnes of sludge per annum [43]. Sludge, the principal by-product of wastewater treatment, is usually treated through mesophilic gas-mixed anaerobic digestion: anaerobic bacteria degrade sludge at 22-41 °C into more stable compounds and methane-rich biogas, which in turn is harnessed as a renewable source of energy through combined heat and power technology.
* Corresponding author. E-mail address: ddapelo@bradford.ac.uk (D. Dapelo).
Mixing is key to a stable digestion process. Despite mixing being responsible for 17-73% of a digester's energy consumption [32], mixing is still far from optimal: the experimental work of [23] demonstrated that input mixing energy can be halved without impacting nutrient distribution, and hence, biogas yield. It is therefore imperative to orient mixing design and operation towards a more modern approach where stress is placed on the input mixing / output biogas energy balance, rather than merely on digestate quality.
Numerical modelling is an economical alternative to lengthy and expensive tracer experiments; furthermore, whilst tracer methods can only provide "black box" representations, numerical modelling allows detailed analysis of flow patterns. A considerable amount of numerical modelling has been devoted to improving mixing in anaerobic digestion [6,9,10,12,19,20,26,28,29,36,38,41,44,47]. Full-scale experiments are difficult to perform; on top of the above-mentioned difficulties, sludge is biologically and chemically hazardous; moreover,
it is opaque, which makes it difficult to perform imaging experiments. For these reasons, an approach previously followed in the literature towards numerical modelling validation is to validate a model in the laboratory-scale first, and then, to apply the validated model to full-scale scenarios [11,12] . Recently, the Lattice-Boltzmann numerical method has been used for the first time to reproduce laboratory-scale, gas-mixed anaerobic digestion [13] . The above-mentioned approach towards validation was followed, and in fact the work provided the preliminary step of laboratoryscale validation.
The Lattice-Boltzmann algorithm is time-dependent, and solves for the one-particle density function in phase space. Density and velocity fields are recovered from the one-particle density function and, for low Mach numbers (viz., Ma → 0), are proved to obey the incompressible Navier-Stokes equations. This approach presents important advantages over other CFD methods in terms of numerical efficiency [37]. First, it is fully explicit: no iterative loop is required to solve the velocity and pressure Navier-Stokes equations in a segregated way. Then, the non-linear and non-local parts of the Lattice-Boltzmann equation are decoupled: no differencing scheme is required, and thus parallel algorithms can be implemented for hundreds of cores with negligible loss of efficiency. Finally, a straightforward structure allows relatively simple extension towards a large variety of phenomena. [13] showed that a Lattice-Boltzmann model for laboratory-scale mixing in anaerobic digestion performed around 180 times better than finite-volume analogues.
The model proposed by [13] is an implementation of the Homogenized Lattice-Boltzmann Method (HLBM) [39], which is a non-pointwise Euler-Lagrangian method where the particle-fluid interface is resolved. This means that the lattice spacing is bounded to be at least 10 to 20 times smaller than the particle diameter [34]:

δx ≲ 2 R_K / 10, (1)

or:

R := 2 R_K / δx ≳ 10, (2)

where R_K is the radius of a generic ("the K-th") particle (or bubble) in the system, and δx is the lattice cell size. The number of biogas bubbles is expected to be "low" for gas-mixed anaerobic digestion (with "low" meaning "≲ 10" for laboratory-scale, Dapelo et al. [10,13], and "≲ 10^4" for full-scale, Dapelo and Bridgeman [11,12]). Under these conditions, the Euler-Lagrange model is in general preferable over other approaches, e.g., the Euler-Euler [1]. In particular, the laboratory case presented a high ratio of bubble diameter to computational domain dimension (viz., ~0.1), and therefore the close-to-interface fluid phase flow could have exerted a non-negligible effect; under these circumstances, the HLBM is preferable over non-interface-resolving methods for the obvious reason that it resolves the particle surface and reproduces the fluid phase flow around it in greater detail. However, this ratio is bound to fall considerably when the computational domain size grows, with the consequence that a precise reproduction of the close-to-interface fluid flow is no longer relevant. On the other hand, the requirement of 10 to 20 lattice sizes per particle diameter poses a burden in terms of computational expense: for instance, a pilot-scale model of 1 m³ size would require around 8 billion cells, in the face of around 10 million required for a laboratory-scale one.
This expense is likely to be unnecessary because of the low ratio of bubble diameter over computational domain in the pilot and full-scale (viz., 10 −2 ), and hence, it is important to develop more economic models in order to fully take advantage of Lattice-Boltzmann's potentialities.
In the pointwise Euler-Lagrange model, each particle is treated as a material point, and the particle-fluid interface is not resolved [1]; therefore, contrary to HLBM, the pointwise Euler-Lagrange model is free from constraints on the minimum number of lattice sizes per bubble diameter. As such, the Euler-Lagrangian model is potentially a much cheaper alternative to the HLBM. In other words, the hypothesis is that the particle dimension remains negligible [42]:

δx ≫ 2 R_K, (3)

or:

R := 2 R_K / δx ≪ 1. (4)

Eqs. (4) and (2) show the existence of a gap in modelling, which is not covered by either HLBM or the pointwise Euler-Lagrange model. In fact, the hypothesis of Eq. (4) was found to be violated in [10], as well as in the laboratory-scale simulations described in this article. In [10], this problem was shown to determine a loss of grid independence in the most refined meshes, but was partially solved by generating larger cells where bubbles were expected to pass, namely throughout the vertical axis above the nozzle. However, this solution cannot be extended to Lattice-Boltzmann modelling, as only uniform cubic grids are applicable, and the lattice size is bound to be constant throughout the computational domain. Two alternative solutions are proposed to resolve this specific problem and, in general, to fill the gap in modelling evidenced by Eqs. (4) and (2). The first consists of a smoothed-kernel Euler-Lagrangian model, similar to the one described in [15], which is specifically designed to simulate Lagrangian particles when the assumption in Eq. (4) does not hold. A further, novel, approach is described in this paper and consists of applying the Buckingham π theorem [7,40]. This approach consists of scaling the relevant dynamic parameters in order to keep R constant whilst maintaining the same physics.
The underlying assumption, which was tested in the work described within this article, was that the loss of grid independence is imputable to a variable systematic error depending on R: keeping the latter constant would thus yield the same systematic error in all the runs, and hence retrieve grid independence. An analysis of the grid independence in the scaled and unscaled cases for laboratory and pilot-scale setups of 1 m³ follows. The laboratory-scale runs are performed due to the necessity of validating the model against experimental data and comparing it to HLBM in terms of the balance of accuracy against numerical expense. The pilot-scale runs are executed to test the behaviour of the model for setups which are closer to the final goal of full-scale scenarios. A comparison with laboratory-scale data is traced, and the two approaches of smoothed-kernel model and dimensional scaling are then compared. All the simulations reported within this article were performed through the OpenLB open-source code, version 1.3-1 ( www.openlb.net ) [22]. This article is structured as follows. In Section 2, the scaling mechanism based on the Buckingham-π theorem is explained.
Then, the Euler-Lagrangian Lattice-Boltzmann model is described in Section 3. The specifications of the numerical simulations performed in the work described within this article are reported in Section 4. The presentation of the results follows in Section 5, and then a discussion is given in Section 6. Finally, conclusions are drawn in Section 7.

Buckingham-π dimensional scaling
The Buckingham theorem [7,40] states that it is possible to completely describe an arbitrary physical process through a set of opportunely-chosen dimensionless parameters (or "groups"; examples are the Reynolds number, geometrical ratios, etc.) in place of the usual (and generally larger) set of dimensional parameters (e.g., tank diameter, flow rate, etc.). As such, any scaling of the physical parameters will describe the same physical process, provided that the set of dimensionless groups remains unchanged.
As an application of this, we introduce a scaling procedure, illustrated below. A thorough description of the mechanism and of the justifications behind it is reported in the Supplementary Material.
1. Identify a set of linearly-independent dimensional parameters (q_1, ..., q_n);
2. Build a set of linearly-independent dimensionless groups π_1, ..., π_(n−k) through Rayleigh's dimensional analysis (k being the number of independent units of measure involved in the process);
3. Build another set of "physically-meaningful" dimensionless groups π^P_1, ..., π^P_t (meaning that their value is important to classify the underlying physics of the process; e.g., the Reynolds number classifies a flow as laminar or turbulent), independent from the original set of dimensionless groups π_1, ..., π_(n−k), not necessarily linearly-independent and not necessarily describing the physical process completely;
4. Alter a first subset of dimensionless groups (π_1, ..., π_s) ⊂ {π_1, ..., π_(n−k)} to the desired values (π̂_1, ..., π̂_s) in such a way that the numerical simulation of the process becomes easier, more stable or less computationally expensive;
5. Tune a second subset of dimensionless groups (π_(s+1), ..., π_(s+t)) ⊂ {π_1, ..., π_(n−k)} to π̂_(s+1), ..., π̂_(s+t) in such a way that the subset of physically-meaningful dimensionless groups π^P_1, ..., π^P_t remains unaltered;
6. Arbitrarily choose the new desired values of k dimensional parameters and compute the new values for the whole set π_1, ..., π_(n−k) of dimensionless groups.
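The procedure above can be sketched numerically. The following Python fragment is a minimal, hypothetical illustration (the parameter values are illustrative and not taken from the article): a bubble diameter is halved, as if to keep the lattice resolution R constant under grid refinement, and a viscosity-like parameter is then retuned so that a physically-meaningful group (here a Reynolds number) is preserved.

```python
def reynolds(rho, u, d, mu):
    """Reynolds number, an example of a physically-meaningful group."""
    return rho * u * d / mu

# Illustrative original dimensional parameters (not the article's values)
rho, u, d, mu = 1000.0, 0.2, 7.01e-3, 0.054
re_target = reynolds(rho, u, d, mu)

# Step 4: alter a group (here, halve the bubble diameter, e.g. to keep
# R = d/dx constant after a grid refinement that halves dx)
d_new = d / 2.0

# Step 5: tune another parameter (viscosity) so that the
# physically-meaningful group Re remains unaltered
mu_new = rho * u * d_new / re_target
```

By construction, `reynolds(rho, u, d_new, mu_new)` equals the original Reynolds number, so the scaled setup describes the same physics in the sense of the Buckingham theorem.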

Assumptions for modelling sludge
Sludge is made of water, organic and inorganic solid fragments and, for gas mixing, biogas bubbles. Solid fragments vary from molecular size (organic polymers) to some centimeters (grit or larger agricultural debris). As such, sludge is characterized by a variety of multiphase phenomena. Gas-liquid interaction gives rise to interphase (two-way) momentum transfer; gas-gas to bubble-bubble (four-way) transfer, bubble breakup and coalescence; liquid-solid to sedimentation, flotation and complex non-Newtonian rheology (e.g., pseudoplasticity, thixotropy, shear banding and yield stress).
Considering the complexity of the problem, a number of assumptions are needed in order to effectively model sludge. ( i ) Sedimentation and flotation are ignored because their typical timescales (months if not years) are some orders of magnitude larger than the mixing timescale (15 minutes every hour), which is relevant to the problem taken into consideration. ( ii ) As the solid phase is now considered in suspension in the liquid phase, both are modelled together as a single liquid phase, with the remaining liquid-solid interaction being modelled as non-Newtonian rheology, and with thixotropy being ignored because typical digestion timescales (15-30 days) are much larger than mixing timescales (see Section 3.1.1 below). ( iii ) Bubble coalescence, breakup and in general bubble-bubble interactions were not observed experimentally [10] nor found in previous numerical work [11,12], whilst liquid-gas interaction is fundamental, as momentum transfer from buoying bubbles to liquid constitutes the foundation of the mixing mechanism itself; as such, the gas phase is modelled as a Lagrangian phase dispersed inside a continuous Eulerian phase (Euler-Lagrange model), and two-way coupling is defined.

Liquid phase
A common and effective way to model sludge rheology is to adopt the pseudoplastic model [45]: apparent viscosity μ and shear rate magnitude |γ̇| obey a power-law relationship:

μ = K |γ̇|^(n−1), (5)

where n is the power-law coefficient, which obeys the constraint n < 1, and K is the consistency coefficient. Divergence at zero shear rate and unphysical near-zero apparent viscosity values at high shear rates are avoided through the introduction of μ_min and μ_max, respectively a minimum and a maximum bound for the apparent viscosity.

Table 1. Rheological properties of sludge at T = 35 °C. From [25].
The power-law coefficients of Eq. (5) depend on total solid content (TS) and temperature. [25] measured sludge power-law coefficients at different TS values, and the results are reported in Table 1.
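For concreteness, the bounded power-law of Eq. (5) can be written as a short function. This is a minimal sketch: K and n are the CMC values quoted later in the article, while the bounds μ_min and μ_max are assumed, illustrative values (the article does not state them here).

```python
def apparent_viscosity(gamma_dot, K=0.054, n=0.805,
                       mu_min=1e-4, mu_max=10.0):
    """Pseudoplastic apparent viscosity mu = K * |gamma_dot|**(n-1),
    clipped to [mu_min, mu_max] to avoid the divergence at zero shear
    and unphysical near-zero values at high shear.
    K, n: illustrative CMC values; mu_min, mu_max: assumed bounds."""
    if gamma_dot <= 0.0:
        return mu_max          # power law diverges at zero shear
    mu = K * gamma_dot ** (n - 1.0)
    return min(max(mu, mu_min), mu_max)
```

Since n < 1, the fluid is shear-thinning: the apparent viscosity decreases monotonically with the shear rate between the two bounds.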
Under mesophilic conditions, the temperature is rigorously kept constant at 35 °C, and hence the dependence on temperature can be ignored. As the deviation of sludge density from that of water at 35 °C (994 kg m−3) accounts for less than 1%, the value of 1000 kg m−3 is assumed for the sake of simplicity.

Lattice-Boltzmann description of the liquid phase
Whilst other CFD methods directly solve the observable fields of velocity, pressure and density, the Lattice-Boltzmann method solves for the one-particle density function f(x, c, t), which is evaluated at the point (x, c) of the phase space and describes the probability of finding one particle of fluid within the elementary phase-space volume around (x, c). As such, the Lattice-Boltzmann is a mesoscopic model. The observable fields (viz., density, velocity and shear stress) correspond to the particle density function's zeroth, first and second moments respectively [24]:

ρ(x, t) = ∫ f(x, c, t) dc, (6)

ρ(x, t) u(x, t) = ∫ c f(x, c, t) dc, (7)

with the shear stress following analogously from the second moment (8). The Boltzmann equation is solved:

∂f/∂t + c · ∇_x f = C. (9)

The equation above is nothing else than a continuity equation for f in the phase space, plus the contribution of inter-particle collisions as a source-sink term. In the diluted-gas hypothesis, the collision operator C models the effect of binary particle collisions. Despite being potentially extremely complicated, C can be drastically simplified following the Bhatnagar-Gross-Krook (BGK) hypothesis [3], which substitutes the detail of all the possible particle collisions with an isotropic average, relaxing towards the equilibrium particle density function f^(eq):

C = −(1/τ) ( f − f^(eq) ), (10)

where τ is the relaxation time, and f^(eq) is the Maxwell equilibrium distribution [24]:

f^(eq)(x, c, t) = ρ / (2 π c_s²)^(3/2) exp[ −(c − u)² / (2 c_s²) ], (11)

where c_s is the sound speed, and density and velocity are evaluated through Eqs. (6) and (7).
Space is discretized as a cubic lattice of lattice size δx, time is subdivided into timesteps of duration δt, and the three-dimensional velocity space is discretized as a finite set of vectors c_0, ..., c_(q−1) pointing to the zeroth, first, second and third neighbours of a generic lattice site, with moduli respectively √0, √1, √2 and √3 times δx/δt. A given discretization is conventionally identified by a tag DdQq, with d the dimensionality of the problem and q the dimension of the discrete velocity space. The one-particle distribution function is now defined over the discretized lattice and timesteps, and rewritten as a set f_i(x, t), each f_i indicating the probability of finding a particle with the corresponding discretized velocity c_i. The integrals in Eqs. (6), (7) and (8) are substituted with summations over the velocity set:

ρ = Σ_i f_i,   ρ u = Σ_i c_i f_i. (12)

The error arising from the discretization of the velocity set is cancelled by writing the Maxwell equilibrium function ( Eq. (11) ) in terms of orthonormal Hermite polynomials up to the second order:

f_i^(eq) = w_i ρ [ 1 + (c_i · u)/c_s² + (c_i · u)²/(2 c_s⁴) − u²/(2 c_s²) ], (13)

where the weights w_i and the sound speed are defined in a standard way for the specific DdQq lattice, with the latter usually being defined as:

c_s = (1/√3) δx/δt. (16)

Density and velocity are evaluated through the discretized version of Eqs. (6) and (7) [24]. In this way, the adiabatic dynamics, with a compressibility error of O(Ma²), Ma being the Mach number, is recovered. The Boltzmann Eq. (9), considering the BGK assumption ( Eq. (10) ), becomes the Lattice-Boltzmann Equation:

f_i(x + c_i δt, t + δt) − f_i(x, t) = −(δt/τ) [ f_i(x, t) − f_i^(eq)(x, t) ]. (17)

The updating process for each lattice site at a given timestep comprises two steps. First, a local, non-linear collision:

f_i*(x, t) = f_i(x, t) − (δt/τ) [ f_i(x, t) − f_i^(eq)(x, t) ]. (18)

Then, a linear, non-local streaming:

f_i(x + c_i δt, t + δt) = f_i*(x, t). (19)

In the limit Ma ≪ 1, the Lattice-Boltzmann Eq. (17) reproduces the incompressible Navier-Stokes equations [24], with pressure and kinematic viscosity defined as:

p = c_s² ρ,   ν = c_s² (τ − δt/2). (20)
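The collide-and-stream update can be condensed into a few lines of code. The sketch below is a minimal illustration on a two-dimensional D2Q9 lattice with periodic boundaries and lattice units δx = δt = 1 (the article's simulations are three-dimensional; D2Q9 is chosen here only for brevity). Mass is conserved exactly by construction of the BGK collision.

```python
import numpy as np

# D2Q9 velocity set and weights (lattice units: dx = dt = 1)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1/3  # squared lattice sound speed, cf. Eq. (16)

def equilibrium(rho, u):
    """Second-order Hermite equilibrium, cf. Eq. (13)."""
    cu = np.einsum('id,xyd->xyi', c, u)
    u2 = np.einsum('xyd,xyd->xy', u, u)
    return rho[..., None] * w * (1 + cu/cs2 + cu**2/(2*cs2**2)
                                 - u2[..., None]/(2*cs2))

def collide_stream(f, tau):
    """One Lattice-Boltzmann timestep: BGK collision, then streaming."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyi,id->xyd', f, c) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau      # local BGK collision
    for i in range(9):                           # periodic streaming
        f[..., i] = np.roll(f[..., i], (c[i, 0], c[i, 1]), axis=(0, 1))
    return f
```

Initializing `f` at equilibrium with a small sinusoidal shear wave and repeatedly calling `collide_stream` gives a decaying wave whose decay rate is governed by ν = c_s²(τ − 1/2) in lattice units.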

Non-dimensionalization and stability
While humans understand measurement units, computers deal only with dimensionless quantities. In Lattice-Boltzmann, the non-dimensionalization procedure should be performed accurately in order to strike a convenient balance between stability, accuracy and computational expense [24]. The standard way to perform non-dimensionalization in Lattice-Boltzmann codes is to use δx, δt and a reference value of density as the conversion factors for length, time and density respectively. Therefore, viscosity and reference velocity are converted to:

ν_LB = ν δt/δx²,   U_LB = U δt/δx. (21)

Combining Eqs. (16) and (21), we also find the following relation between Mach number and lattice velocity:

Ma = U/c_s = √3 U_LB, (22)

which shows that the determination of the Mach number depends on the choices of δx and δt through Eq. (21). Finally, Eqs. (20) and (21) allow the determination of the values of τ and τ_LB ≡ τ/δt from the value of ν and the choices of δx and δt:

τ_LB = 3 ν δt/δx² + 1/2. (23)

The value of τ_LB consequently decreases if ν decreases, until the limit case of τ_LB → 1/2 for ν → 0.
Furthermore, the BGK collision has been found to be unstable in under-resolved turbulent flow [31]; therefore, the choice of a too-large value of δx can negatively affect stability. Accuracy depends on grid resolution (therefore, δx) and compressibility error (which is ~Ma²). Therefore, the desired level of grid resolution (δx) must be tuned together with a choice of δt in order to prevent the compressibility error (viz., Ma or, equivalently, U_LB) from increasing, and to maintain stability (viz., preventing τ_LB from becoming too small, and U_LB from diverging from the recommendations of Eq. (25)). Of course, the computational expense grows as δx and δt decrease. Finally, an appropriate scaling of δt should be defined when the grid is coarsened or refined in order to control stability and the behaviour of the compressibility error. A common choice of scaling, which is followed in this work, is δt ~ δx², or "diffusive scaling". This ensures that τ_LB remains constant, thereby leaving the stability properties of the system unaltered. Furthermore, the spatial discretization error in Lattice-Boltzmann is ~δx²: the choice of diffusive scaling means that Ma ~ δx, thereby imposing the compressibility error to be ~δx², like the spatial discretization error. Therefore, the Lattice-Boltzmann algorithm is theoretically second-order accurate in space and first-order accurate in time when choosing diffusive scaling [24].
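A quick numerical check of the diffusive-scaling argument (the physical values below are illustrative, not the article's): halving δx while quartering δt leaves τ_LB unchanged, while the lattice velocity, and hence Ma, halves.

```python
def lattice_params(nu, u_ref, dx, dt):
    """Convert a physical viscosity and reference velocity to lattice
    units, cf. Eq. (21), and compute the lattice relaxation time
    tau_LB = 3*nu*dt/dx**2 + 1/2 (from Eqs. (20) and (21))."""
    nu_lb = nu * dt / dx**2
    u_lb = u_ref * dt / dx
    tau_lb = 3.0 * nu_lb + 0.5
    return u_lb, tau_lb

nu, u_ref = 1.0e-4, 0.2        # illustrative physical values
dx0, dt0 = 5.0e-3, 2.5e-5
results = []
for k in range(4):             # diffusive scaling: dt ~ dx**2
    results.append(lattice_params(nu, u_ref, dx0 / 2**k, dt0 / 4**k))
```

Each refinement step halves `u_lb` (so Ma ~ δx) while `tau_lb` stays fixed, which is exactly the stability-preserving behaviour described above.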

Lattice-Boltzmann rheology and turbulence modelling
Following [13], rheology is modelled as in [5], and turbulence through a Smagorinsky closure [18]. At each timestep, at each lattice node, before the collision step:

1. The rate-of-shear tensor S_αβ ≡ (1/2)(∂_α u_β + ∂_β u_α) is evaluated through its relationship with the second moment of the non-equilibrium term of the particle distribution function f_i^(1) [24]:

S_αβ = −1/(2 ρ c_s² τ) Σ_i c_iα c_iβ ( f_i − f_i^(eq) ), (26)

the shear rate magnitude is computed as:

|γ̇| = √(2 S_αβ S_αβ), (27)

and the dynamic viscosity μ_PL(x, t) is calculated through Eq. (5) with the substitution |γ̇| → |γ̇|(x, t);

2. The relaxation time is locally altered a first time, τ → τ_PL, through inverting the second of Eqs. (20) with ν_PL ≡ μ_PL/ρ in place of ν;

3. The rate-of-shear tensor S and shear rate magnitude |γ̇| are computed once again through re-applying Eq. (26) with τ_PL in place of τ, and Eq. (27) respectively, and the turbulent kinematic viscosity is computed through the Smagorinsky closure:

ν_t = (C_Smago δx)² |γ̇|, (28)

with the Smagorinsky constant C_Smago being set to 0.14;

4. The relaxation time is locally altered a second time, τ_PL → τ_eff, through inverting the second of Eqs. (20) with ν_eff ≡ ν_PL + ν_t in place of ν.
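The local viscosity update can be condensed into a short function. The sketch below is a simplified single-pass version: the article's algorithm re-evaluates the shear after the first relaxation-time update, and that second pass is omitted here for brevity. The rheological constants are the CMC values quoted later in the article; δx and δt are illustrative.

```python
def tau_eff_lb(gamma_dot, rho=1000.0, dx=5e-3, dt=2.5e-5,
               K=0.054, n=0.805, c_smago=0.14):
    """Effective lattice relaxation time combining power-law rheology
    (Eq. (5)) with a Smagorinsky eddy viscosity; single-pass
    simplification of the two-stage update described in the text."""
    nu_pl = K * gamma_dot ** (n - 1.0) / rho   # power-law kinematic viscosity
    nu_t = (c_smago * dx) ** 2 * gamma_dot     # Smagorinsky closure
    nu_eff = nu_pl + nu_t
    # invert the viscosity relation of Eq. (20) in lattice units
    return 3.0 * nu_eff * dt / dx**2 + 0.5
```

Because ν_eff > 0 for any positive shear rate, the returned relaxation time always stays above the stability limit of 1/2.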

Euler-Lagrange modelling of pointwise particles in Lattice-Boltzmann
Within the Euler-Lagrange models, the dispersed phase is treated as an ensemble of pointwise elements, which are conventionally called "particles" (in the case of the work reported in this article, each "particle" corresponds to a gas bubble) [42]. Each (spherical) particle P_K present inside the computational domain at a given time is formally represented as a tuple composed of coordinate X_K, velocity U_K, acceleration A_K, radius R_K and mass M_K:

P_K = ( X_K, U_K, A_K, R_K, M_K ). (29)

Further quantities may be added to the list depending on the complexity of the problem; for instance, moment of inertia, angular coordinate, velocity and acceleration may be added if rotational effects (e.g., lift) are important, or aspect ratios if significant deviations from sphericity are present. However, rotational effects were found to be negligible for the application taken into consideration in the work described here, and deviations from sphericity were ignored [10,13].
Each particle was set to evolve separately through Verlet integration of Newton's second law:

M_K A_K = F_K (30)

(no Einstein summation rule here!), where F_K is the resultant of the forces acting on P_K ( forward-coupling ). For the problem described in this article, we have:

F_K = F^b_K + F^a_K + F^d_K, (31)

where F^b_K is buoyancy:

F^b_K = (ρ_P − ρ) V_K g, (32)

with g being the acceleration of gravity and V_K = (4/3) π R_K³ the particle volume, and F^a_K and F^d_K are the added-mass and the drag force respectively. The last two are modelled according to the specific models described below (see Sections 3.3.1, 3.3.2 and 3.3.3). In general, they depend on both P_K and the observables ρ, u and ν from the liquid phase.
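The forward-coupled particle update can be sketched with a velocity-Verlet step. The snippet below is a simplified one-dimensional (vertical) illustration: the added mass is folded into an effective inertia with a coefficient of 1/2 (an assumption of this sketch), a Schiller-Naumann-type correlation is used as a stand-in for the drag law of [30] (which is not reproduced here), and the liquid is taken to be at rest. The viscosity value is illustrative.

```python
import math

RHO_L, RHO_P, G = 1000.0, 1.0, 9.81    # liquid/gas densities, gravity
R = 3.505e-3                            # bubble radius (d = 7.01 mm)
NU = 5.4e-5                             # illustrative kinematic viscosity
VOL = 4.0/3.0 * math.pi * R**3
M_EFF = RHO_P*VOL + 0.5*RHO_L*VOL       # added mass folded into inertia

def accel(v):
    """Net vertical acceleration: buoyancy minus weight, plus drag
    (liquid at rest; upward positive)."""
    f_b = (RHO_L - RHO_P) * VOL * G
    re = 2.0 * R * abs(v) / NU
    cd = 24.0/re * (1.0 + 0.15*re**0.687) if re > 0 else 0.0  # stand-in drag law
    f_d = -0.5 * RHO_L * cd * math.pi * R**2 * abs(v) * v
    return (f_b + f_d) / M_EFF

x, v, dt = 0.0, 0.0, 1e-4
for _ in range(20000):                  # velocity Verlet; a(t+dt) is
    a0 = accel(v)                       # predicted from v + a0*dt since
    x += v*dt + 0.5*a0*dt*dt            # drag depends on velocity
    v += 0.5 * (a0 + accel(v + a0*dt)) * dt
```

With these illustrative values the bubble accelerates from rest and settles at a terminal rise velocity of the order of 0.2 m s−1 within a fraction of a second.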
The effect of P_K on the surrounding liquid phase consists of a body force −F_K acting on the continuous phase ( back-coupling ). A body force can be included in the Lattice-Boltzmann framework in a standard way [17] through the addition of a source term in the Lattice-Boltzmann Eq. (17):

f_i(x + c_i δt, t + δt) − f_i(x, t) = −(δt/τ) [ f_i(x, t) − f_i^(eq)(x, t) ] + δt S_i, (33)

where the source term S_i is defined as:

S_i = (1 − δt/(2τ)) w_i [ (c_i − u)/c_s² + (c_i · u) c_i/c_s⁴ ] · Φ. (34)

The forcing term Φ ≡ Σ_K Φ_K (Φ_K being the specific contribution from particle P_K) depends on the specific model (see Sections 3.3.1, 3.3.2 and 3.3.3). The equilibrium velocity ( Eq. (13) ) is redefined as:

u^(eq) = u + Φ δt/(2ρ). (35)

The forcing model described above presents the advantage of avoiding spurious O(Ma²) terms in the macroscopic equations of motion whilst keeping a nominal order of convergence of two [16].

Local model
Within this model, forward-coupling (viz., the computation of F_K, Eq. (31)) occurs locally: liquid phase density ρ(X_K) and velocity u(X_K) are interpolated linearly across the cells surrounding P_K's position X_K, while the kinematic viscosity ν(X_K^next) is computed at the nearest lattice node coordinate X_K^next for the sake of simplicity. The added mass in Eq. (31) is modelled as:

F^a_K = (1/2) ρ V_K ( Du/Dt − A_K ), (36)

and the drag force as:

F^d_K = (1/2) ρ C_d π R_K² | u − U_K | ( u − U_K ). (37)

The drag coefficient C_d is modelled according to [30]. The particle Reynolds number Re_p is evaluated as:

Re_p = 2 R_K | u − U_K | / ν_PL, (39)

where ν_PL is computed as per Section 3.2.2.
Back-coupling occurs through applying −F_K to the cell nearest to P_K. This is done through defining the forcing Φ_K in Eq. (34) as (see Krüger et al. [24]):

Φ_K(x) = −F_K/δx³ if x = X_K^next, and 0 otherwise. (40)

First-neighbour model
Within this model, forward-coupling occurs exactly as in the local model ( Section 3.3.1 ). Back-coupling occurs through applying −F_K to the cell nearest to P_K and to the first-neighbour cells of the latter, with weights normalized to 1, as in [27]:

Φ_K(x) = −(F_K/δx³) δ_δx(x − X_K), (41)

where δ_δx is a discrete delta function of width δx (Peskin [33], Eq. (6.28)).
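The normalized first-neighbour weights can be illustrated in one dimension (three-dimensional weights follow as products of one-dimensional ones). The "hat" function below is a simple stand-in for the Peskin discrete delta referenced above, not the function used in the article; it is used here only to show the normalization, with the spacing taken as δx = 1.

```python
def hat_delta(r):
    """Linear 'hat' discrete delta of unit support width (illustrative
    stand-in for the Peskin delta; not the article's choice)."""
    r = abs(r)
    return 1.0 - r if r < 1.0 else 0.0

def neighbour_weights(xp):
    """Back-coupling weights over the nearest node and its first
    neighbours for a particle at fractional 1-D position xp,
    normalized to 1 as in the first-neighbour model."""
    i0 = round(xp)
    nodes = (i0 - 1, i0, i0 + 1)
    raw = [hat_delta(xp - i) for i in nodes]
    s = sum(raw)
    return {i: r / s for i, r in zip(nodes, raw) if r > 0.0}
```

A particle sitting exactly on a node receives weight 1 on that node; otherwise, the force is split between the two (or three, for wider deltas) nearest nodes, always summing to 1.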

Smoothing-kernel model
The presence of Lagrangian particles of non-negligible size was accounted for by [15] in a finite-volume CFD model of an atomization process through a smoothing-kernel procedure, whereby: ( i ) the field equations for the continuous phase (viz., the Navier-Stokes) were weighted over the volumetric phase fraction; ( ii ) a modified formula for the drag coefficient, dependent on the volumetric phase fraction, was used in place of the one of Eq. (37); and ( iii ) back-coupling occurred through smooth transfer of momentum from the particle to the cells inside one smoothing-kernel length from the particle's centre of mass. Conversely, forward-coupling remained as in the local case (see Section 3.3.1 ).
The approach described within this article is a translation of the model presented by [15] from the finite-volume to the Lattice-Boltzmann framework, with the following differences: ( i ) the field equations for the continuous phase (viz., the Lattice-Boltzmann Eq. (17)) are not weighted over the volumetric phase fraction, because such an arrangement was not yet available for the Lattice-Boltzmann at the time the work was carried out; and ( ii ) the discretized smoothing procedure is different because, in [15], the back-coupling momentum term becomes unphysically negligible and dependent on lattice size for R < 1 after smoothing.
The smoothing-kernel model used in this work is defined as follows. A smoothing-kernel function is defined for every P_K as a compact-support polynomial normalized to 1, as in [15] apart from the bespoke normalizing factor:

g_K(x) = C_K W( |x − X_K| / L_K ), (42)

where L_K is the kernel radius (in practice, a multiple of R_K), C_K the normalization factor, to be recalculated at every timestep in order to keep g_K normalized to 1, and W(ξ) is the Wendland function:

W(ξ) = (1 − ξ)⁴ (4 ξ + 1) for 0 ≤ ξ ≤ 1, and 0 otherwise. (43)

The choice of adopting two-way coupling was underpinned by the fact that bubbles were observed to be well-distanced from each other [10]; the observed spacing between the bubbles also justifies the following hypothesis: bubbles' centres of mass never come closer to each other than the sum of the respective kernel lengths at any time. Under this hypothesis, the volumetric phase fraction at any lattice point is determined by the presence (or absence) of one particle at most. As such, the (smoothed) liquid phase fraction at P_K's location, ε_K, is computed as the summation of contributions e_K coming from each node x, weighted by g_K:

ε_K = Σ_x g_K(x) e_K(x). (44)

The idea behind the computation of each contribution e_K is to consider a number of points within the cube I³(x) of side δx centered at x, and determine the value of e_K depending on how many points fall within one particle radius R_K from X_K. In order to do so, we proceed as follows. Let B_K be the bounding cube around the sphere centered at X_K of radius L_K; then, the parallelepiped I³_K(x) ≡ I³(x) ∩ B_K and its bounds are defined accordingly ( Fig. 1 ).

Fig. 2. Distributing N points throughout I³_K.

I³_K is filled regularly and isotropically with N³ points, N per coordinate direction. As shown in Fig. 2, there are two ways to do so, which are labelled as "(a)" and "(b)" respectively. As such, let the integer vector n be defined as n := (i, j, k) with i, j, k ∈ 1, ..., N.
Then, two sets of points, p^(a)_K(n, x) and p^(b)_K(n, x), are defined according to the arrangements "(a)" and "(b)" of Fig. 2 respectively. Then, after defining a colour function (equal to 1 inside the particle and 0 outside), e_K is defined as the fraction of the sampled points lying outside the particle. The drag force is then defined as in [15], with the drag coefficient depending on the liquid phase fraction ε_K. The added-mass force is defined as in the previous models, as in Eq. (36). Back-coupling occurs by applying −F_K to the cells inside P_K's kernel length:

Φ_K(x) = −(F_K/δx³) g_K(x). (53)
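The point-sampling estimate of e_K can be illustrated as follows. This sketch uses a single regular arrangement of N³ points per cell (the article uses the two interleaved arrangements "(a)" and "(b)"; only one is shown here) and returns the liquid fraction, i.e. the fraction of sample points falling outside the particle.

```python
import itertools

def liquid_fraction(cell_center, particle_center, r_k, dx=1.0, n=4):
    """Fraction of the n**3 regularly-spaced sample points of a cubic
    cell of side dx lying outside the sphere of radius r_k centered at
    particle_center (a single-arrangement analogue of e_K)."""
    outside = 0
    for idx in itertools.product(range(n), repeat=3):
        # regular, isotropic placement of n**3 points inside the cell
        p = tuple(cell_center[d] - 0.5*dx + (idx[d] + 0.5) * dx / n
                  for d in range(3))
        d2 = sum((p[d] - particle_center[d])**2 for d in range(3))
        if d2 > r_k**2:
            outside += 1
    return outside / n**3
```

A cell fully engulfed by the particle returns 0, a cell far from it returns 1, and a cell cut by the particle surface returns an intermediate value, which is what makes the smoothed phase fraction vary continuously as the particle moves.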

Transparent sludge substitute
The employment of a transparent sludge substitute has been a common strategy to reproduce sludge's hydrodynamic behaviour while maintaining transparency, and thus the ability to perform optical measurements [45]. Accordingly, in [10], sludge was modelled through a 2 g l−1 water solution of Sigma-Aldrich 419338 sodium carboxymethyl cellulose (CMC) with an average molar weight of 700,000. The power-law coefficient and consistency index were measured through a rheometer within the shear rate interval 100-500 s−1, and were found to be 0.805 and 0.054 Pa s^n respectively. As such, the transparent sludge's rheological characteristics were found to be intermediate between those of 2.5 and 5.4% TS sludge (see Table 1).
Following the conclusions of Section 3.1.1, the sludge density was set to ρ_L = 1000 kg m−3, while the bubble density was set to the density of air at atmospheric pressure, ρ_P = 1 kg m−3.

Laboratory-scale simulations
The experimental apparatus used in [10] and reproduced in the work presented within this article consisted of a 4 l cylindrical tank of diameter D = 20 cm, filled with CMC solution up to a height of H = 13 cm. Bubbles of diameter d = 7.01 mm were created one at a time, with the centre of mass being h_noz = 1 cm from the centre of the domain's bottom face, at time intervals chosen to match a gas inlet flow rate of q = 5.30 ml s−1 (see Eq. (54)). The bubbles reaching the top face were removed from the system. A schematic of the computational domain is reported in Fig. 3.
The boundary conditions were defined as follows: zero-velocity Bouzidi [4] at the bottom and the lateral walls; free-slip [2,35] at the top. The initial conditions were set to zero velocity and constant density throughout the computational domain. For the smoothed-kernel runs, the kernel length was set to L = 1.5 d. The dimensionless velocity ULB was set according to diffusive scaling: with U0LB = 0.3 and N0 = 40. Grids with a number of lattice nodes N along the diameter spanning between 40 and 200, corresponding to 36,936 and 4,515,701 overall nodes respectively, were generated. The simulations were run up to a simulated time of 21 s. The numerical work was performed on the same hardware of [13], viz., 18-core (36-thread) Intel Xeon E5-2695 v4 "Broadwell" (2.20 GHz clock, 8 GT/s QPI bus) processors, but on two instead of eight processors.
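Under diffusive scaling, δt ∝ δx², so the lattice velocity shrinks linearly as the grid is refined. A sketch of the rule implied by the quoted U0LB and N0 (the exact expression is not legible in this extraction, so this is our reconstruction):

```python
def lattice_velocity(N, U0_LB=0.3, N0=40):
    """Diffusive scaling: dt ~ dx**2 implies U_LB(N) = U0_LB * N0 / N
    (a reconstruction of the scaling rule from U0_LB and N0)."""
    return U0_LB * N0 / N
```

At the coarsest grid (N = 40) this returns 0.3; at the finest (N = 200) it returns 0.06, keeping compressibility errors down as resolution grows.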

Pilot-scale simulations
The setup was the same as in the laboratory-scale case (Section 4.2 and Fig. 3), with the following differences: D = 1 m, H = 1 m, hnoz = 10 cm, U0LB = 0.1, maximum value for N set to 120, corresponding to 976,835 cells, and a maximum simulated time of 121 s.

Current setting of the scaling mechanism
As emerged in Sections 4.2 and 4.3, the relevant dimensional parameters were ten: while the units of measure involved were three: (57) Accordingly, following Rayleigh's method of dimensional analysis (Point 2 in Section 2; also see Section A.2 in the Supplementary Material), the following repeating variables were chosen: and the following seven dimensionless groups were defined: According to Point 3 of Section 2, the following physically-meaningful dimensionless groups were defined: (68) ⟨u⟩ is the characteristic velocity, in this case the approximated average velocity, which was computed by approximating the velocity flow patterns as a constant rising velocity along a central column of diameter d, and zero elsewhere: where ud is the approximated asymptotic liquid-phase drag velocity, computed by solving numerically Eq. (30) for AK = 0.
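The counting behind the seven groups can be checked mechanically: the Buckingham π theorem yields (number of parameters) minus rank(dimension matrix) independent dimensionless groups. The parameter list itself is not legible in this extraction, so the ten columns below are an assumed, plausible set (four lengths, a flow rate, two densities, the consistency index, gravity, and a reference velocity); the count 10 − 3 = 7 is what matters.

```python
import numpy as np

n_ = 0.805  # measured power-law coefficient (dimensionless)

# Rows: exponents of m, kg, s.  Columns (assumed, illustrative):
#   D  H  d  h_noz  q  rho_L rho_P  k (Pa s^n)  g   u
dim_matrix = np.array([
    [1, 1, 1, 1,  3, -3, -3, -1,        1,  1],   # m
    [0, 0, 0, 0,  0,  1,  1,  1,        0,  0],   # kg
    [0, 0, 0, 0, -1,  0,  0,  n_ - 2.0, -2, -1],  # s
])

n_params = dim_matrix.shape[1]            # 10 dimensional parameters
rank = np.linalg.matrix_rank(dim_matrix)  # 3 independent units
n_groups = n_params - rank                # 7 dimensionless groups
```

Whatever the exact parameter list, as long as all three base units appear independently the matrix has rank 3, and seven groups follow.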
⟨ν⟩ is the characteristic kinematic viscosity, which was computed by substituting the characteristic shear rate |γ̇| into Eq. (5), with the former being defined as: Pd is the total drag power dissipated at a given timestep, which was computed as: where Nbubbles is the total number of bubbles present within the computational domain at a given timestep, evaluated as: Point 4 of Section 2 was implemented as follows. In order to keep R constant throughout the different runs, d was adjusted in the following way: while the other repeating variables (Eq. (58)) were kept unaltered.
At the same time, the dimensionless group π4 (Eq. (65)) was altered as: Finally (Points 5 and 6 of Section 2), the map χ of Eq. (A.7) was applied in ten possible ways, with the related systems of equations being solved numerically, as illustrated in Table 2.
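Point 4 (keeping R = d/δx constant across grids) can be sketched as follows; R = 1.588 is the constant value quoted for the scaled laboratory-scale runs and D is the tank diameter, but the functional form below is our sketch of the adjustment, not necessarily the paper's exact expression.

```python
def scaled_bubble_diameter(N, R=1.588, D=0.20):
    """Numerical bubble diameter that keeps R = d/dx constant as the
    grid is refined: with dx = D/N, this returns d(N) = R * D / N
    (a sketch of the adjustment rule)."""
    dx = D / N
    return R * dx
```

The physical bubble diameter is unchanged; only the numerical d fed to the Lagrangian model shrinks with the voxel size, so the pointwise assumption is preserved at every resolution.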

Numerical setting
All the simulations were performed on a D3Q27 lattice.
The value of ν LB was computed from < ν > .
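The conversion from the characteristic viscosity ⟨ν⟩ to lattice units presumably follows the standard LBM rule; a minimal sketch (the paper does not spell the formula out here, so this is an assumption):

```python
def nu_lattice(nu_phys, dx, dt):
    """Standard lattice-unit conversion of a kinematic viscosity:
    nu_LB = nu_phys * dt / dx**2 (the usual LBM rule, applied here
    to the characteristic viscosity <nu>)."""
    return nu_phys * dt / dx ** 2
```

Under diffusive scaling dt shrinks as dx², so ν_LB stays constant across grid refinements, which is precisely what makes that scaling convenient.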

Pilot-scale runs
In Fig. 4, example results of the unscaled pilot-scale runs are reported for N = 100. All three Lagrangian models described in Section 3.3 are tested. The flow patterns displayed very little dependence on the Lagrangian model, apart from a very slight increase in intensity for the smoothed-kernel model. The computational expense was found to vary little, lying between 2700 and 3400 CPUs. The fact that the smoothed-kernel model required slightly less computational expense than the other two models was surprising: the contrary would have been expected, because the smoothed-kernel model requires an additional cycle per particle over the cells comprised within the smoothing-kernel length. However, for R ≤ 1 the number of cells comprised within the smoothed-kernel length tends to be around unity, and hence the additional contribution of such cycles to the computational expense tends to become negligible. A possible untackled optimization problem in the code is probably responsible for the increased computational expense of the local and first-neighbour models. Fig. 5 reports the results of the grid independence test under diffusive scaling for the pilot-scale runs, comprising all the Lagrangian models. The x coordinate of the vortex, which is evident in all the plots of Fig. 4, was plotted against N. All the models displayed a similar behaviour, and convergence is evidently achieved for N ≥ 80. This was expected, as R < 1 for all the runs. Remarkably, no data points are reported from the smoothed-kernel runs for N < 49: this happened because, for d < δx, configurations can occur where no grid points are comprised within one smoothed-kernel length of a bubble centre, thus raising an irresolvable divide-by-zero error. Variations in the vortex's x coordinate between contiguous values of N resembled oscillations, especially for the first-neighbour and the smoothed-kernel models.
It was not possible to ascertain what caused them.

Laboratory-scale runs: comparison of different Lagrangian models
In Fig. 6, example results of the laboratory-scale runs are reported for N = 120. All three Lagrangian models are tested. On the left, scaled flow patterns following the "0" method are reported; on the right, the unscaled. Experimental flow patterns from [10] are reported for comparison. The flow patterns reproduced the experimental data well, apart from a mild overall underestimation of the intensity, which is more pronounced in the scaled runs. A slight inwards horizontal displacement of the vortex position for the unscaled runs, and outwards for the scaled runs, was also present. The computational expense lay between 850 and 8500 CPUs, with the smoothing-kernel model being remarkably more expensive (up to ten times) than the other models. This was due to the fact that the number of cells comprised within the smoothing-kernel length grows as N³ for R > 1, and the same holds for the computational expense related to the smoothing-kernel internal cycles. Fig. 7 reports the results of the grid independence test under diffusive scaling for the laboratory-scale runs, comprising all the Lagrangian models. On the right, scaled flow patterns following the "0" method are reported; on the left, the unscaled. The local unscaled model clearly does not converge, while the judgement is more difficult for the other unscaled models. This was expected, as we had R > 1. On the contrary, convergence was evident in all the scaled runs, and grid independence could be considered to be achieved for N = 120, at the cost, however, of a more pronounced discrepancy from the experimental data (12-26% against 8-12%). Oscillations are clearly visible.

Laboratory-scale runs: comparison of different types of scaling
In Fig. 8, example results of the laboratory-scale runs for all the scaling types reported in Table 2 are provided for N = 120. Only the local Lagrangian model was used. Experimental flow patterns from [10] are reported for comparison. As expected, not all the scaling types produced flow patterns compatible with the experimental data: only the "0", "2PNu" and "12PNu-Q" types did so. Fig. 9 reports the results of the grid independence test under diffusive scaling for the laboratory-scale runs, comprising all the scaling types reported in Table 2, and the local Lagrangian model. Convergence was found only under the scaling types "0" and "12PNu-Q", despite the presence of some off-trend values in "12PNu-Q" following a quasi-periodical recurrence. The latter was probably due to an unidentified error in a floating-point-to-integer conversion; performing a large number of runs helped identify the entries suffering from this problem and exclude them. A comparison with Fig. 8 shows that these scaling types produce grid-independent results which are compatible with the experimental data, with "12PNu-Q" being the best choice.

Convergence and scaling
A correlation between the value and variation of R, and convergent (or non-convergent) behaviour, is evident. In the pilot-scale simulations (Section 5.1), we had 0.3176 ≤ R ≤ 0.9528 < 1 and convergent behaviour; in the laboratory-scale, unscaled simulations (Section 5.2), we had 1 < 1.588 ≤ R ≤ 7.94 and non-convergent behaviour; finally, in the laboratory-scale, scaled simulations (Sections 5.2 and 5.3), we had 1 < 1.588 ≡ R and convergent behaviour (depending on the choice of scaling type). This allowed us to conclude that: 1. For R < 1, convergence occurs and is unaffected by the value of R; 2. For R > 1, the value of R does affect convergence, and in particular: (a) for variable R, non-convergent behaviour occurs; (b) for constant R, it is possible to retrieve convergent behaviour.
In summary, convergence can occur if R < 1 or R = const. The introduction of a scaling mechanism has the effect of constraining R to a constant value, hence enabling convergence depending on the choice of the physically-meaningful dimensionless parameter(s). On the other hand, however, it introduces a systematic error, which may affect the model's predictive power.
In both cases, the model was verified (see Fig. 10) to be second-order accurate, like the general Lattice-Boltzmann framework under diffusive scaling, and like the HLBM [21]. The error was defined as: where the superior cutoff value for N was chosen as 120 for the pilot-scale runs (because it was the highest value of N tested at pilot scale), and 128 for the laboratory-scale runs (because it is close to the value N = 120, for which convergence was found to be achieved in Section 5.3).
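Second-order accuracy can be checked from the errors on two successive grids; a minimal sketch of the observed-order computation (this is the standard grid-refinement estimate, not necessarily the paper's exact error definition):

```python
import math

def observed_order(err_coarse, err_fine, r=2.0):
    """Observed order of accuracy from errors on two grids whose
    resolutions differ by a refinement ratio r:
    p = log(err_coarse / err_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(r)
```

A second-order scheme quarters its error when the resolution is doubled, so an error pair such as (0.04, 0.01) at refinement ratio 2 yields p = 2.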

Choosing a simulation strategy
The considerations above allow us to trace a two-step strategy for Lagrangian modelling in the R > 1 case.
1. Grid independence should be evaluated through scaling, and the best compromise between grid independence and computational expense should be determined. Different scaling types should be tested in order to find the most appropriate physically-meaningful dimensionless parameter(s). For the specific example of the laboratory-scale computational work reported within this article (Sections 5.2 and 5.3), grid-independent behaviour was shown, and the best compromise between grid independence and computational expense was evaluated to be N = 120.
2. The best grid determined in the point above should be used to run unscaled simulations: as these are free from the systematic error affecting the scaled runs (see Section 6.1), better physical results will be obtained.

Comparison with HLBM [13]
All the Lagrangian models apart from the smoothed-kernel one display strong similarities as regards computational expense. The latter is generally several times more expensive because of the cycles over the cells comprised within the smoothing-kernel length, as explained in Section 5.2. It was also shown that this additional computational expense vanishes for R ∼ 1.
Compared to the HLBM [13], the advantage of the Lagrangian methods is evident. The applications reported here and in [13] share the same structure, and in both cases computational expense was found to depend on the number of cells. The HLBM constraint, reported in Section 1, of having 10-20 cells per particle diameter meant that HLBM laboratory-scale simulations were found to converge only for N ≥ 400: as the Lagrangian laboratory-scale simulations were found to converge for N ≥ 120 in Section 5.2, the Lagrangian model requires around 37 times fewer resources than the HLBM to reproduce the laboratory-scale problem described in Section 4.2. The same constraint means that hypothetical HLBM simulations of the pilot-scale problem (Section 4.3) would be expected to converge only for N ≥ 2000 (in Section 5.1 it was found that pilot-scale Lagrangian simulations converge for N ≥ 100), which would mean around 8000 times the resources needed for the Lagrangian runs.
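The quoted factors of 37 and 8000 follow from the assumption that expense scales with the number of cells, i.e. as N³ for a fixed-shape domain; a minimal sketch:

```python
def resource_ratio(N_hlbm, N_lagrangian):
    """Cost ratio between two methods whose computational expense
    scales with the number of cells (~ N**3 for a fixed-shape
    domain), given the coarsest converged resolution of each."""
    return (N_hlbm / N_lagrangian) ** 3
```

resource_ratio(400, 120) ≈ 37 for the laboratory scale, and resource_ratio(2000, 100) = 8000 for the hypothetical pilot-scale comparison.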
Qualitatively, both the Lagrangian and the HLBM models reproduced the experimental flow patterns, with the former slightly underpredicting them, and the latter equally overpredicting. Quantitatively, the Lagrangian model overpredicted the vortex's x position by around 19%. This is considerably larger than the HLBM's error on the same quantity, which fell under 1%. However, the HLBM's good performance was obtained thanks to the expedient of introducing partial-slip boundary conditions for the top liquid surface. This expedient was not adopted in the Lagrangian model presented within this paper. The HLBM's error in the case of free-slip boundary conditions peaked at around 30%.

Conclusions
Euler-Lagrange Lattice-Boltzmann models were applied for the first time to gas-mixed anaerobic digestion. Laboratory-scale validation was performed, and pilot-scale scenarios were studied. The advantage of these models over previously-adopted solutions (viz., the HLBM) was evaluated as between 37 and 8000 times fewer resources being necessary.
The convergence behaviour of the Euler-Lagrange models was found to be dependent on the ratio R between particle diameter and lattice size. In particular, convergence did not naturally occur for R > 1 , corresponding to the violation of the Euler-Lagrange's fundamental assumption of negligible particle size.
A smoothed-kernel Euler-Lagrange model was introduced in order to retrieve convergence for R > 1, but a definitive answer could not be found. As an alternative, a novel scaling system, based on the Buckingham π theorem, was introduced. The scaling system was proved to be effective. Limitations of the scaling approach and the best ways to employ it were discussed.
The work described within this article will have applications in the modelling of complex flow patterns in laboratory, pilot and full scale geometries; in particular, it represents a significant step forward in advancing the topic of mixing optimization of gas-mixed anaerobic digesters.
This work is part of an ambitious project (see "Acknowledgements" below), which aims to fully couple biokinetics and hydrodynamics in full-scale anaerobic digesters. Next steps will include a separated validation of a simplified biokinetic model, the validation of a coupled model against laboratory experimental data, and the application to industrial digesters.

Declaration of Competing Interest
No interests to declare.