A Framework for Hyperbolic Approximation of Kinetic Equations Using Quadrature-Based Projection Methods

We derive hyperbolic PDE systems for the solution of the Boltzmann equation. First, the velocity is transformed in a non-linear way to obtain a Lagrangian velocity phase space description that allows for physical adaptivity. The unknown distribution function is then approximated by a series of basis functions. Standard continuous projection methods for this approach yield PDE systems for the basis coefficients that are in general not hyperbolic. To overcome this problem, we apply quadrature-based projection methods, which modify the structure of the system in the desired way so that we end up with a hyperbolic system of equations. With the help of a new abstract framework, we derive conditions under which the emerging system is hyperbolic and give a proof of hyperbolicity for Hermite ansatz functions in one dimension together with Gauss-Hermite quadrature.

AMS subject classification: 35Q20, 35Q35


Acknowledgment
The following thesis was written between April and September 2013 at MathCCES, RWTH Aachen University in partial fulfillment of the requirements for the degree Master of Science in Computational Engineering Science.
I would hereby like to thank all the people involved in the work on my thesis and during my studies helping me to achieve the goals I pursued and to finish on time.
First of all, I want to thank my supervisor Manuel Torrilhon for helpful advice during the past years as well as for his enduring support and encouragement from the proposal of the topic until the final version of this thesis. The many fruitful discussions and impulses helped a lot in focusing on the important parts of the work while still leaving the necessary freedom for my own ideas and developments.
Many thanks also go to my colleagues at MathCCES, especially to my advisor Roman Schaerer who always patiently listened to my questions and deliberations and replied with useful hints and ideas for further investigations.
Thanks to Claudia, Marc and Marcus for proofreading my thesis. It must have been a tough job.
I do not want to forget my fellow students in the course of Computational Engineering Science (CES) as well as my old and new friends in Aachen and elsewhere, who were a valuable support during the years of my studies and especially in the past few months.
Special thanks go to my parents and my family, who always encouraged me to go my own way and gave me a good deal of curiosity to take along, which helps me whichever topic I work on.
Last but not least, I convey grateful thanks to the Friedrich-Naumann Foundation for Freedom, which greatly supported me during my studies with a generous fellowship. Apart from the financial support, it opened up new possibilities for me that I had never thought of and gave me the freedom and independence I appreciate so much.

Motivation
Kinetic equations are the basis for many different applications and are widely used in industrial and scientific fields. Especially for rarefied flows they provide an accurate setting for numerical simulation. In Chapter 2 we will see that there are more or less distinct regimes of the flow in which the application of standard fluid dynamic models like the Euler or Navier-Stokes equations is not appropriate for a physical solution. One then has to apply more advanced kinetic models that are motivated directly by kinetic equations.
A standard method proposed by Grad in [10] derives equations for the macroscopic flow variables like density, velocity and temperature of the flow by expanding the unknown distribution function of the Boltzmann equation in a Hermite series. The drawback of this rather simple method is that the resulting system of partial differential equations (PDEs) can lose hyperbolicity for certain values of higher moments. The loss of global hyperbolicity is a serious problem, because hyperbolicity is needed for physical solutions and in particular for the stability of the solution. The admissible region of variables for hyperbolicity of the system in fact becomes smaller for higher accuracy of the methods, as shown by Cai in [5].
There are some methods for which it can be shown under certain conditions that they are hyperbolic in special cases like one-dimensional flows. One of those is based on multi-variate Pearson-IV-Distributions and was proposed by Torrilhon in [23]. Another method to achieve hyperbolic equations has been published by Levermore in [18], but this method is unfortunately not given in analytical form.
In [5] Cai et al. have successfully performed a regularization of Grad's moment system in one dimension that is globally hyperbolic. They essentially derived the characteristic polynomial of the corresponding matrix analytically and used this information to set certain variables or entries in the matrix to zero so that the new characteristic polynomial has real roots and the system becomes hyperbolic for all values of the variables involved.
The approach by Cai et al. gives only limited insight into the underlying theoretical foundations of this regularization, and it is not really clear how to generalize the procedure to similar problems. Another important question is the possibility of an efficient numerical simulation. The velocity can usually attain very large values, leading to the necessity of a very fine discretization in the velocity space with many unknowns. Recent developments by Kauf in [17] show a way to circumvent these problems at the expense of a more difficult PDE involving additional terms.
Our concrete question for this thesis is therefore: Is it possible to set up a general framework for the derivation of efficient, yet stable and hyperbolic systems of PDEs for the solution of kinetic equations such as the Boltzmann equation?

Aims of the Project
As specified above, the main part of this thesis is concerned with the setup of a general framework to derive hyperbolic PDEs for the solution of the Boltzmann equation. With the help of this framework it should be feasible to decide about the hyperbolicity of the emerging system a priori before inserting a special ansatz and performing projections of the equation, just by the specific choice of the ansatz and the projection method. We want to investigate the use of quadrature-based projection methods in particular and analyze the application of those methods with respect to the effects on the structure of the equations as well as on the eigenvalues of the system matrix, which is closely related to the hyperbolicity of the system.
The framework should include these quadrature-based projections and give concrete conditions under which the system will be hyperbolic.
After the framework has been set up, its application to some important classes of expansion functions together with related quadrature methods should yield resulting systems that are hyperbolic, and we explain the difference to exact projections. One-dimensional as well as multi-dimensional examples show the desirable properties of the emerging systems for the basis coefficients and help towards a better understanding and the developments in the next chapter.
The main part of this thesis is presented in Chapter 5, where we first derive the formulation of the Boltzmann equation under a non-linear transformation of the velocity variable that allows for efficient simulations. Next is the development of the conceptual framework for the derivation of conditions to achieve hyperbolicity. This part covers different kinetic equations and draws an analogy between the discrete velocity method and the quadrature-based projections. Using the mathematical properties and the framework developed before, we also give here a proof for hyperbolicity of the regularized system in the case of the transformed one-dimensional Boltzmann equation with Hermite ansatz and Gauss-Hermite quadrature.
We summarize the results in Chapter 6 and discuss future work on the project.

The Boltzmann Transport Equation
Before we turn our attention towards the detailed mathematical discussion and the different models that we want to develop, we should first explain the Boltzmann equation itself and the context it is used in. We therefore describe the kinetic setting, important properties of the Boltzmann equation and the most common solution methods together with their benefits and drawbacks.

Basics of Kinetic Theory
In standard fluid dynamics the fluid is modeled as a continuum, meaning that the atoms and molecules, or particles as we will call them from now on, are in constant contact with each other. This is obviously valid for a fluid, which usually also has a relatively large density. When it comes to rarefied gases, particles only interact rarely and are in free flight most of the time. This can be attributed to low density, for example at low ambient pressures. In this regime, the flow behavior is more and more determined by binary collisions of the particles. It is therefore required to model the interactions between individual particles in a different way than the standard continuum approach. In the following chapter we explain the viewpoint of kinetic theory and briefly discuss the most important properties and basic terms related to this point of view. For more information about kinetic theory, we recommend the textbook [22] by Struchtrup. An approach from the engineering viewpoint can be found in the book by Heinz [12].

Knudsen Number, Applications and Effects of Gas Rarefaction
Many flow problems can be characterized by small or moderate velocities and ambient conditions. However, advanced technical capabilities have made it possible to reach more extreme values for all parameters involved. In order to distinguish different flow regimes, the Knudsen number Kn was introduced. It is the quotient of the mean free path length λ, i.e. the average distance a particle travels between two collisions, and a characteristic length L of the flow problem, e.g. the size of a plane or the diameter of a pipe. The definition of the Knudsen number Kn thus reads

Kn = λ / L.

As a dimensionless flow parameter, the Knudsen number is an important quantity that influences the behavior of the flow. Standard models like the Navier-Stokes equations or the Euler equations are only valid for very small Knudsen numbers, because they rely on the assumption of a continuum in so-called equilibrium.
According to Kauf [17] and Struchtrup [22], the Knudsen number can be used to roughly divide the flow field into different regimes as follows:
• Kn ≤ 0.01: equilibrium or hydrodynamic regime, which is accurately described by the Navier-Stokes equations;
• 0.01 ≤ Kn ≤ 0.1: slip flow regime, where the Navier-Stokes equations need additional slip boundary conditions to remain valid;
• 0.1 ≤ Kn ≤ 1: transition regime, in which the Navier-Stokes equations are not valid and the Boltzmann equation or advanced models are needed;
• 1 ≤ Kn ≤ 10: kinetic regime, where the Boltzmann equation is also valid, but a direct simulation is expensive;
• 10 ≤ Kn: free flight regime, where direct simulations start to become efficient.
The specific flow regime therefore suggests a corresponding model for the flow and is closely related to the numerical approach used to solve the flow problem. For rarefied gases with Kn ≥ 0.1, one is interested in efficient and accurate methods to solve the Boltzmann equation, which is the topic of this thesis.
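The regime boundaries above can be encoded in a small helper; this is a hypothetical illustration (the function names and example values are not from the text):

```python
def knudsen_number(mean_free_path, char_length):
    """Kn = lambda / L, the ratio of mean free path to characteristic length."""
    return mean_free_path / char_length

def flow_regime(kn):
    """Map a Knudsen number to the (rough) regime names listed above."""
    if kn <= 0.01:
        return "hydrodynamic"
    elif kn <= 0.1:
        return "slip flow"
    elif kn <= 1.0:
        return "transition"
    elif kn <= 10.0:
        return "kinetic"
    return "free flight"

# e.g. air at ambient conditions in a meter-scale pipe: lambda ~ 1e-7 m
regime = flow_regime(knudsen_number(1e-7, 1.0))
```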
In the literature (see e.g. [17], [6]), there are many relevant applications that are covered by the kinetic regime of a rarefied gas. Among those are:
• Reentry flights of spacecraft at very high altitude: Gas pressure and density are very low, leading to large mean free paths and in turn to a large Knudsen number, even for large characteristic lengths like spacecraft. The correct prediction of the heat flux close to the thermal shield is crucial in this example.
• Shock waves at very high speed: The velocity jump from supersonic to subsonic yields sharp gradients over only a very small distance. The shocks also influence the behavior of the flow further downstream.
• Microscopic channel flows: At very small length scales, the Knudsen number will become large even for ambient conditions. Examples are porous media or ion channels in membranes.
When dealing with the applications above, it is absolutely necessary to use extended models and methods to correctly predict the various effects of gas rarefaction. The use of standard methods like the Navier-Stokes equations may lead to wrong predictions of the relevant macroscopic quantities: such methods fail to correctly reproduce the typical effects at large Knudsen numbers, the so-called kinetic effects (see also [17]), like the Knudsen paradox. The paradox states that the mass flow through a tube decreases with decreasing tube diameter only until the diameter reaches a certain value of the order of the mean free path λ; from then on, the mass flow increases again. This is not consistent with the Navier-Stokes equations and is only one example of the need for better models.
Another example that is often cited is the very small Knudsen pump (see [16]). It has no moving parts and is driven only by temperature differences along the wall, which cause the gas inside to move from the cold end to the warm end. This allows for a precise and reliable control of the gas flow.

Phase Space and Probability Density Function
When it comes to rarefied gases, one might think of a straightforward method that tracks the path of every single particle. The collisions between particles then couple the evolution of the particles' positions. The problem is that this procedure leads to a vast number of coupled equations of motion, as even a rarefied gas still consists of too many particles per volume. For each of those particles, one would have to determine the corresponding three-dimensional positions x and velocities c at every time t, leading to a seven-dimensional solution space, the so-called phase space.
In kinetic theory we instead introduce a probability density function (PDF) or distribution function f(x, c, t). The PDF f is related to the number of particles with velocity c at position x and time t: the number of particles with velocities in [c, c + dc] in the spatial interval [x, x + dx] at time t is given by

N = f(x, c, t) dx dc.
Instead of following each single particle, it is in principle enough to know the value of f at all times, positions as well as velocities to have complete knowledge about the state of the gas.

Macroscopic Quantities of the Flow Field
Assuming a given PDF f , it is important to recover the macroscopic quantities of the flow field, because we are usually interested in variables like the overall velocity of the gas or the temperature. These and other quantities are all computed with the help of so-called moments of f .
The mass density ρ is simply the mass m of one particle times the integral of the PDF f over the velocity space R^3:

ρ(x, t) = m ∫_{R^3} f(x, c, t) dc = m n(x, t),

with number density n := ∫_{R^3} f dc. The mean velocity v can be computed by means of the momentum density, in componentwise/tensor notation

ρ v_i(x, t) = m ∫_{R^3} c_i f(x, c, t) dc.   (2.3)

Higher moments can be defined using the peculiar velocities C_i := c_i − v_i. With this definition the thermal energy (sometimes also called internal energy) u is given as

ρ u = (m/2) ∫_{R^3} C_ii^2 f dc,

where the notation is C_ii^2 := C_1^2 + C_2^2 + C_3^2 in three dimensions. For an ideal gas, the temperature T is closely related to the internal energy u by u = (3/2) (k/m) T, where k is the Boltzmann constant. Writing the temperature in energy units, we define a new variable θ = (k/m) T. Lastly, we give the definitions of the pressure tensor p_ij and the heat flux q_i:

p_ij = m ∫_{R^3} C_i C_j f dc,   q_i = (m/2) ∫_{R^3} C_i C_jj^2 f dc.

Apart from these definitions, there are several thermodynamical laws connecting different variables and giving restrictions on parameters; these can be found in [14].
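The moment integrals above can be checked numerically. The following sketch works in one velocity dimension with particle mass m = 1 and illustrative values for ρ, v and θ (none of which come from the text), recovering the macroscopic quantities from a sampled distribution by simple quadrature:

```python
import numpy as np

rho_ex, v_ex, theta_ex = 1.2, 0.3, 0.8        # illustrative target values

c = np.linspace(-10.0, 10.0, 4001)            # velocity grid
dc = c[1] - c[0]
# sampled distribution: a Maxwellian with the target moments
f = rho_ex / np.sqrt(2 * np.pi * theta_ex) * np.exp(-(c - v_ex) ** 2 / (2 * theta_ex))

rho = np.sum(f) * dc                          # rho = m * int f dc  (m = 1)
v = np.sum(c * f) * dc / rho                  # mean velocity from momentum density
C = c - v                                     # peculiar velocity
theta = np.sum(C ** 2 * f) * dc / rho         # 1D analogue of the temperature theta
```

Because the tails of the Maxwellian are negligible on this grid, the simple Riemann sum reproduces the moments to high accuracy.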

Properties of the Boltzmann Transport Equation
In order to calculate the moments mentioned in the previous section, one has to solve the Boltzmann equation for the unknown PDF f. The Boltzmann equation is a partial integro-differential equation for f with usually seven independent variables t, x, c:

∂f/∂t + c_i ∂f/∂x_i + G_i ∂f/∂c_i = S(f).   (2.8)

Here, the first term denotes the change in time, the second term is due to the convective transport with velocity c_i, and the third term on the left-hand side denotes changes in velocity in the presence of external forces G_i, such as gravity. The operator S(f) on the right-hand side is the so-called collision operator that models collisions of particles with other particles. For most of this thesis, we will neglect external forces and consider only the convective part as the main difficulty.

Equilibrium Distribution
The right-hand side operator S(f) in Equation (2.8) forces the process towards its equilibrium state and is zero if equilibrium is achieved. At equilibrium, the density function f has the form of a local Maxwellian, which in d dimensions is defined as

f_M(x, c, t) = (ρ/m) (2πθ)^{−d/2} exp( −|c − v|^2 / (2θ) ).

As density, mean velocity and temperature may vary with t and x, a local Maxwellian is assigned to each point in time and space. Proper definitions of the collision operator S(f) have to ensure that S(f_M) = 0.
In the non-equilibrium case, the density function f differs from a Maxwellian. When we use a special ansatz for the form of the density function f later on, it is nevertheless important to have the Maxwellian in the solution space in order to give the right solution in the equilibrium case. This will later justify some particular choices for a basis of the ansatz space.

Collision Operator
The form of the collision operator has a huge impact on the behavior of the distribution function f . The specific choice of S(f ) is in fact already part of the model.
A well-known model for the collision operator was proposed by Boltzmann himself and was derived using the so-called Stosszahlansatz:

S^(Boltz)(f) = ∫_{R^3} ∫_{S^2} B(c, c_1, σ) ( f′ f_1′ − f f_1 ) dσ dc_1,

where B is the collision kernel and the primed notation f′ indicates the PDF evaluated at the post-collision velocity. In principle it is possible to use this approach for numerical simulations, but its high dimensionality makes already the evaluation of S^(Boltz) at discrete points in t and x very costly. For more information and details about this approach, see for example [6].
With some simple assumptions on the collisions (see e.g. [6]), it is possible to derive a linearized collision operator from the Boltzmann operator that is known as the BGK model [2]:

S^(BGK)(f) = −(1/τ) ( f − f_M ),

with relaxation time τ. There is also another approach, which assumes small velocity changes due to collisions and results in a Fokker-Planck operator for the collisions, with relaxation time τ_FP and sensible energy e_s = (3/2) (k/m) T in three spatial dimensions. The derivation of this operator is shown in detail in [6]. An application using the Fokker-Planck model is described in [13].
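A single explicit time step of the BGK relaxation df/dt = −(f − f_M)/τ at a fixed spatial point can be sketched as follows (grid sizes, τ and the initial distribution are illustrative assumptions, not values from the text):

```python
import numpy as np

tau, dt = 0.1, 0.01
c = np.linspace(-8.0, 8.0, 801)               # 1D velocity grid
dc = c[1] - c[0]

f = np.where(np.abs(c) < 2.0, 0.25, 0.0)      # non-equilibrium initial PDF (box)

def maxwellian(c, rho, v, theta):
    return rho / np.sqrt(2 * np.pi * theta) * np.exp(-(c - v) ** 2 / (2 * theta))

def bgk_step(f):
    # moments of the current distribution define the local Maxwellian
    rho = np.sum(f) * dc
    v = np.sum(c * f) * dc / rho
    theta = np.sum((c - v) ** 2 * f) * dc / rho
    f_M = maxwellian(c, rho, v, theta)
    return f + dt / tau * (f_M - f)           # explicit Euler relaxation step

f_new = bgk_step(f)
```

Since f_M carries the same density as f, the step conserves mass (up to quadrature error), and for dt ≤ τ the update is a convex combination of two non-negative functions, so positivity is preserved.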

Solution Methods
Numerical methods for the solution of the Boltzmann equation have been under development since the late 1960s. During the past fifty years, different methods have been successfully applied to various problems. We will now give a short overview of the most important classes of methods.

Direct Simulation Monte Carlo
It was Bird who proposed the first method to actually solve rarefied gas flows, the so-called Direct Simulation Monte Carlo method (DSMC) [3]. His method was later improved and used for many problems emerging from real-world applications, see also [4].
The key to the DSMC method is the different viewpoint: the behavior of the gas is not modeled by a PDE like the Boltzmann equation; instead, the gas is described by a system of particles. Each of the particles has a position and a velocity at every time. Note that in a real computation the number of numerical particles is substantially smaller than the real number of particles, which is of the order of 10^20. Usually some hundreds of thousands of particles are used to simulate the gas flow. The particles' positions and velocities then evolve according to the following steps (see [6] for more details):
(1) a proper initialization is done by sampling velocities and positions from initial values.
(2) in each time step, the particles first have a free-flight phase, where they are moved according to their assigned velocities for the time interval ∆t.
(3) the free-flight phase is followed by a collision phase, where particles undergo collisions that are modeled by a collision probability. Binary collisions then change the velocities of the involved particles.
The particles thus move and collide in every time step. The calculation is by definition unsteady and steady results are obtained by asymptotic limits in time. Macroscopic quantities can later be derived from the particles' velocities by averaging over small cells of the flow field.
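The steps above can be sketched in a toy setting: a single spatial cell with 2D velocities, where each accepted pair collision redistributes the relative velocity isotropically and thus conserves momentum and kinetic energy exactly. Particle count, time step and collision probability are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n_part, dt, p_coll = 2000, 0.01, 0.2

# (1) initialization: sample positions and velocities
x = rng.uniform(0.0, 1.0, n_part)                 # 1D positions, periodic box
v = rng.normal(0.0, 1.0, (n_part, 2))             # 2D velocities

def dsmc_step(x, v):
    # (2) free flight: move particles with their assigned velocities
    x = (x + dt * v[:, 0]) % 1.0
    # (3) collision phase: random pairs collide with probability p_coll;
    # the post-collision relative velocity direction is chosen isotropically
    idx = rng.permutation(n_part)
    for i, j in zip(idx[0::2], idx[1::2]):
        if rng.random() < p_coll:
            g = v[i] - v[j]                       # relative velocity
            vc = 0.5 * (v[i] + v[j])              # center-of-mass velocity
            phi = rng.uniform(0.0, 2 * np.pi)
            g_new = np.linalg.norm(g) * np.array([np.cos(phi), np.sin(phi)])
            v[i], v[j] = vc + 0.5 * g_new, vc - 0.5 * g_new
    return x, v

mom_before = v.sum(axis=0).copy()
e_before = (v ** 2).sum()
x, v = dsmc_step(x, v)
```

Macroscopic quantities would then be obtained by averaging the particle velocities over small spatial cells, as described above.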
The benefit of this type of method is clearly its simplicity. It is straightforward to implement, once the collision probabilities are modeled. On the other hand, the number of particles needs to be sufficiently high to enable accurate solutions. This was especially a problem during the development of the method. Furthermore, the number of required particles increases with the density of the gas, making the method less suitable for problems with moderate Knudsen numbers.

Method of Moments
A relatively new method is the so-called method of moments (MoM), in which equations for the macroscopic moments are directly derived from the Boltzmann equation. A good summary of the method and of the closure problem discussed below is given by Levermore in [18].
The general procedure can be demonstrated for a simple, one-dimensional equation

∂f/∂t + c ∂f/∂x = S(f).

Now the integral operator I_n(·) is applied to the PDE, where I_n multiplies the equation by c^n and integrates over the velocity space:

I_n(g) := ∫_R c^n g dc.   (2.14)

After the application of I_n(·), we can identify the so-called moments

M_n(t, x) := ∫_R c^n f(t, x, c) dc   (2.15)

in the equation. The lower moments have a direct physical meaning; for example, we have M_0 = ρ (for unit particle mass). The one-dimensional Boltzmann equation now transforms to

∂M_n/∂t + ∂M_{n+1}/∂x = I_n(S(f)).

By this simple trick, it is possible to eliminate the velocity dependence. We will essentially use the same procedure for our quadrature-based projection methods later on. Note that the convective term leads to the appearance of the higher moment M_{n+1}: one obtains equations for the moments up to M_n, but the last equation contains the additional unknown M_{n+1}. The difficult part is now to find a so-called closure, that is, an additional relation to close the system of equations.
The easiest way would be to simply set M n+1 = 0. Unfortunately, this simple approach does not yield satisfactory results. It can lead to negative values for the density, for example. There are more advanced methods to close the system that relate the highest moment to the lower moments. One of these approaches is the maximum entropy method. The value of the highest moment is chosen to maximize the mathematical entropy, leading to physical solutions. A disadvantage is that the relation can no longer be written down in closed form, but is the solution of an optimization procedure in every step of a numerical method.
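The defect of the naive closure can also be seen structurally. For free transport (S = 0) the closed system reads ∂_t M + A ∂_x M = 0, where the flux Jacobian A is the upper shift matrix with a zero last row (from M_{N+1} = 0). A small check (a sketch, not from the text) shows that all characteristic speeds of A collapse to zero and A is a nilpotent Jordan block, hence not diagonalizable and the closed system is not hyperbolic:

```python
import numpy as np

N = 5                                    # number of moments kept (illustrative)
# A[n, n+1] = 1 encodes d/dt M_n + d/dx M_{n+1} = 0; last row is zero
# because of the closure M_N = 0 for the highest flux.
A = np.diag(np.ones(N - 1), k=1)

eigvals = np.linalg.eigvals(A)           # all eigenvalues are zero
A_pow_N = np.linalg.matrix_power(A, N)   # A^N = 0: one nilpotent Jordan block,
                                         # so A has no full eigenbasis
```

This degeneracy is one structural reason why more careful closures, such as the maximum entropy closure mentioned above, are needed.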
Moment methods are also often used in kinetic models for radiative transfer, where the three-dimensional velocity of the particles is written in terms of direction and energy of the particles, see [9] for an example.

Lattice Boltzmann Method
The Lattice Boltzmann method (LBM) is actually a modification of another particle method, the Lattice Gas Automata method (more information can be found in [7]), in which particles are only allowed to travel along a discrete lattice through the flow field. As soon as two particles meet somewhere on the discrete lattice, a collision event takes place, changing the velocities similarly to the DSMC method. In the LBM, the single particles have been replaced by the particle density function in order to reduce statistical noise.
Most of the time the LBM uses a BGK model for the collisions, where a collision step is followed by a free-flight step just like in the DSMC method. The important difference is that the velocity space is discretized and only allows for particle velocities along the lattice to the neighboring lattice grid points. One possible velocity space discretization in two dimensions consists of the rest velocity (0, 0), the four axis-aligned velocities (±1, 0), (0, ±1), and the four diagonal velocities (±1, ±1). This is called the D2Q9 discretization, as it is two-dimensional and consists of 9 discrete values for the velocity lattice. Despite its simplicity, the LBM has limitations for flows with high Mach numbers, because the methods were originally developed for isothermal problems (see [7]).
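The D2Q9 velocity set can be written down explicitly; the weights below are the standard lattice-BGK values (they are not given in the text, but are standard for D2Q9) and satisfy the usual moment conditions:

```python
import numpy as np

# D2Q9: rest velocity, four axis neighbors, four diagonal neighbors
c_lattice = np.array([[0, 0],
                      [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])

# standard D2Q9 weights: 4/9 for rest, 1/9 for axes, 1/36 for diagonals
w_lattice = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

# zeroth, first and second moments of the weighted lattice velocities
w_sum = w_lattice.sum()
first_moment = (w_lattice[:, None] * c_lattice).sum(axis=0)
second_moment_xx = np.sum(w_lattice * c_lattice[:, 0] ** 2)   # = c_s^2 = 1/3
```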

Discrete Velocity Method
Another method that is in fact closely related to our quadrature-based projection methods is the Discrete Velocity Method (DVM). A good description of the method can be found in [6]. The DVM discretizes the Boltzmann equation in distinct points of the velocity space. The discretization points c_n are called discrete velocities. In the one-dimensional case, this means we end up with one equation for each of the discrete velocities:

∂f_n/∂t + c_n ∂f_n/∂x = S_n,   n = 1, …, N.

Note that this special discretization of the velocity space leads to unknowns f_n(t, x) := f(t, x, c_n) that only depend on t and x. The discretization of the right-hand side collision operator is more delicate, as the collision invariants and conservation properties impose some restrictions for a meaningful discretization.
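For the collisionless case (S_n = 0) the DVM system is just a set of decoupled advection equations, one per discrete velocity. A sketch with first-order upwind differences and periodic boundaries (grid sizes and velocities are illustrative):

```python
import numpy as np

nx, dt = 200, 0.001
dx = 1.0 / nx
x = np.arange(nx) * dx
c_n = np.array([-1.5, -0.5, 0.5, 1.5])               # discrete velocities

# same Gaussian initial profile for every discrete velocity
f = np.exp(-100 * (x - 0.5) ** 2)[None, :] * np.ones((len(c_n), 1))

def dvm_step(f):
    f_new = np.empty_like(f)
    for n, c in enumerate(c_n):
        if c >= 0:   # upwind: difference toward the side the flow comes from
            dfdx = (f[n] - np.roll(f[n], 1)) / dx
        else:
            dfdx = (np.roll(f[n], -1) - f[n]) / dx
        f_new[n] = f[n] - dt * c * dfdx
    return f_new

f1 = dvm_step(f)
```

With the CFL number max|c_n| dt/dx = 0.3 < 1 the scheme conserves the discrete mass of each f_n and preserves positivity.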
There is a relation between the DVM and the MoM. In the case of the MoM, the equation is multiplied by the function c^n and then integrated over the velocity space. The DVM evaluates the equation at certain velocities c_n, which is equivalent to a multiplication with the Dirac delta function δ(c − c_n) followed by integration over the velocity space. Thus, the DVM can be seen as another projection method, where the test functions (which are used for the multiplication of the equation) are simple Dirac delta functions.
In this sense the DVM can also be seen as a special ansatz for the distribution function. With the point evaluations f_k = f(t, x, c_k), this ansatz has the form

f(t, x, c) = Σ_k f_k(t, x) δ(c − c_k).

This chapter is about the necessary mathematical elements needed for the formulation and the expansion of the transformed Boltzmann equation as well as for the solution using quadrature-based projection methods. We first introduce the different types of basis functions along with their respective properties. These functions will later be used as ansatz or test functions in the various settings and will determine the structure of the emerging PDE system. We derive normalized versions of the respective functions and show important recursion results that will be used in Chapters 4 and 5.
The last part of this chapter is about quadrature methods and Gaussian quadrature in particular. Together with the different quadrature methods, we also give an introduction to non-classical quadrature and explain useful properties of quadrature approximations.

Hermite Polynomials
We explain the standard version of Hermite polynomials first, before defining an orthonormal set of Hermite functions that we will later use for the expansion of the unknown distribution function of the Boltzmann equation. The interested reader is referred to [1] for a more detailed summary of properties and formulas concerning Hermite polynomials.

Standard Definition
There are actually two different ways of defining the standard Hermite polynomials, each of them leading to a scaled version of the other one. We will stick to the so-called probabilists' Hermite polynomials H_n that are defined in the following way for n ≥ 0:

H_n(ξ) = (−1)^n e^{ξ^2/2} d^n/dξ^n e^{−ξ^2/2}.   (3.1)

The other type of definition leads to the so-called physicists' Hermite polynomials He_n:

He_n(ξ) = (−1)^n e^{ξ^2} d^n/dξ^n e^{−ξ^2}.

Note the missing factor 1/2 in the exponent, which leads to the conversion formula

He_n(ξ) = 2^{n/2} H_n(√2 ξ).

In the context of the Boltzmann equation, the probabilists' Hermite polynomials are usually considered, because they are closely related to the Maxwellian, the equilibrium distribution of the Boltzmann equation. This is the reason to use them as a set of basis functions for the expansion of the distribution function in one spatial dimension.
The first Hermite polynomials can be easily calculated as

H_0(ξ) = 1,   H_1(ξ) = ξ,   H_2(ξ) = ξ^2 − 1,   H_3(ξ) = ξ^3 − 3ξ.

The corresponding polynomials of higher degree can be derived using the following recursion formula that holds because of the definition (3.1):

H_{n+1}(ξ) = ξ H_n(ξ) − n H_{n−1}(ξ).   (3.5)

Furthermore, the derivative of a Hermite polynomial can be expressed in terms of the lower-order polynomial according to

H_n′(ξ) = n H_{n−1}(ξ).   (3.6)

It is easy to show that the Hermite polynomials are an orthogonal basis of the corresponding space of polynomials with respect to the weighted scalar product

⟨f, g⟩_w := ∫_R f(ξ) g(ξ) w(ξ) dξ

using the weighting function w(ξ) = (1/√(2π)) e^{−ξ^2/2}. Accordingly, we can compute for n, m ∈ N

⟨H_n, H_m⟩_w = n! δ_{nm},   (3.8)

which shows that the Hermite polynomials are in fact orthogonal but not normalized.
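The recursion (3.5) and the orthogonality relation (3.8) can be verified numerically with Gauss-Hermite quadrature for the probabilists' weight (a sketch; NumPy's hermite_e module uses exactly the weight e^{−ξ²/2}):

```python
import numpy as np

def hermite_all(xi, n_max):
    """Return [H_0(xi), ..., H_{n_max}(xi)] via H_{n+1} = xi*H_n - n*H_{n-1}."""
    H = [np.ones_like(xi), xi.copy()]
    for n in range(1, n_max):
        H.append(xi * H[n] - n * H[n - 1])
    return H

# Gauss-Hermite nodes/weights for the weight exp(-xi^2/2);
# dividing by sqrt(2*pi) gives the scalar product <.,.>_w from the text
xi_q, w_q = np.polynomial.hermite_e.hermegauss(20)
H = hermite_all(xi_q, 5)

def inner(f, g):
    return np.sum(w_q * f * g) / np.sqrt(2 * np.pi)
```

With 20 quadrature nodes the products of polynomials up to degree 5 are integrated exactly, so inner(H[n], H[m]) reproduces n! δ_{nm}.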

Orthonormal Hermite Polynomials
As we have seen in Equation (3.8), the standard Hermite polynomials are not normalized with respect to the weighted scalar product defined above. We therefore define normalized Hermite polynomials as follows:

H̃_n(ξ) := H_n(ξ) / √(n!).   (3.9)

Due to this definition, we directly conclude the orthonormality of these polynomials (compare Equation (3.8)):

⟨H̃_n, H̃_m⟩_w = δ_{nm}.

Using the definition (3.9) and the properties from Section 3.1.1, we can derive the derivative and a recurrence relation for the normalized Hermite polynomials:

H̃_n′(ξ) = √n H̃_{n−1}(ξ)   (3.11)

and

ξ H̃_n(ξ) = √(n+1) H̃_{n+1}(ξ) + √n H̃_{n−1}(ξ).

Another useful property is

w′(ξ) = −ξ w(ξ).   (3.13)

Combining (3.11) and (3.13) with the recurrence relation, we obtain the relation

( H̃_n(ξ) w(ξ) )′ = −√(n+1) H̃_{n+1}(ξ) w(ξ).   (3.14)

These relations will play a major role when we use a Hermite ansatz for our distribution function in the Boltzmann equation (see Section 5.2.2.3).
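The normalized recurrence ξ H̃_n = √(n+1) H̃_{n+1} + √n H̃_{n−1} follows from (3.5) by dividing through √(n!); it can be spot-checked numerically (the evaluation points and degree are arbitrary):

```python
import numpy as np
from math import factorial, sqrt

def hermite(xi, n):
    """Probabilists' Hermite polynomial H_n via the three-term recursion."""
    if n == 0:
        return np.ones_like(xi)
    h_prev, h = np.ones_like(xi), xi.copy()
    for k in range(1, n):
        h_prev, h = h, xi * h - k * h_prev
    return h

def hermite_norm(xi, n):
    """Normalized Hermite polynomial Ht_n = H_n / sqrt(n!)."""
    return hermite(xi, n) / sqrt(factorial(n))

xi = np.linspace(-3.0, 3.0, 7)
n = 4
lhs = xi * hermite_norm(xi, n)
rhs = sqrt(n + 1) * hermite_norm(xi, n + 1) + sqrt(n) * hermite_norm(xi, n - 1)
```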

Laguerre Polynomials
The Hermite polynomials are defined such that they fulfill an orthogonality relation when integrated over the whole real line ξ ∈ R. In special cases, however, it is necessary to have similar polynomials with different weights and integration intervals. One such alternative are the so-called Laguerre polynomials, which are defined as

L_n(ξ) = (e^ξ / n!) d^n/dξ^n ( e^{−ξ} ξ^n ).   (3.15)

The first Laguerre polynomials are

L_0(ξ) = 1,   L_1(ξ) = 1 − ξ,   L_2(ξ) = 1 − 2ξ + ξ^2/2.   (3.16)

The polynomials also follow a recursion rule for the computation of polynomials of higher degree (using the definition L_{−1}(ξ) := 0):

(n + 1) L_{n+1}(ξ) = (2n + 1 − ξ) L_n(ξ) − n L_{n−1}(ξ).

The recursion formula for the derivative looks rather different from the one for the Hermite polynomials (compare Equation (3.11)); derivatives can be calculated using

L_n′(ξ) = L_{n−1}′(ξ) − L_{n−1}(ξ).

The Laguerre polynomials as defined in (3.15) are already orthonormal with respect to the scalar product with weighting function w_L(ξ) = e^{−ξ} integrated over the positive domain ξ ∈ R_+:

⟨f, g⟩_{w_L} := ∫_0^∞ f(ξ) g(ξ) e^{−ξ} dξ.   (3.20)

Consequently, we have for n, m ∈ N

⟨L_n, L_m⟩_{w_L} = δ_{nm}.

The Laguerre polynomials therefore form a set of orthonormal basis functions. It is possible to use them for the expansion of the unknown distribution function of the three-dimensional Boltzmann equation in the radial velocity direction.
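The orthonormality of the Laguerre polynomials can be checked with Gauss-Laguerre quadrature (weight e^{−ξ} on [0, ∞)), building the polynomials from the three-term recursion:

```python
import numpy as np

def laguerre_all(xi, n_max):
    """Return [L_0(xi), ..., L_{n_max}(xi)] via
    (n+1) L_{n+1} = (2n+1-xi) L_n - n L_{n-1}."""
    L = [np.ones_like(xi), 1.0 - xi]
    for n in range(1, n_max):
        L.append(((2 * n + 1 - xi) * L[n] - n * L[n - 1]) / (n + 1))
    return L

# Gauss-Laguerre nodes/weights for the weight exp(-xi) on [0, inf)
xi_q, w_q = np.polynomial.laguerre.laggauss(15)
L = laguerre_all(xi_q, 4)

# Gram matrix <L_i, L_j>_{w_L}; should be the identity
gram = np.array([[np.sum(w_q * L[i] * L[j]) for j in range(5)]
                 for i in range(5)])
```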

Generalized Laguerre Polynomials
The Laguerre polynomials introduced in (3.15) are orthogonal with respect to the weighted standard scalar product (3.20). For a transformation of variables, however, additional terms appear in the integrals due to the Jacobian of the transformation rule. For spherical velocity coordinates (r, θ, φ), essentially the term r^2 sin(θ) has to be taken into account. In the radial velocity direction, we then have to compute integrals of the form

∫_0^∞ f(ξ) g(ξ) ξ^2 e^{−ξ} dξ.   (3.22)

We therefore use polynomials that are orthogonal in the proper sense. This can be achieved by the so-called generalized Laguerre polynomials L^α_n of degree n, defined as

L^α_n(ξ) = (ξ^{−α} e^ξ / n!) d^n/dξ^n ( e^{−ξ} ξ^{n+α} )

for a parameter α ∈ R.
The case α = 0 gives back the traditional version of Laguerre polynomials defined in (3.15).
The first three generalized Laguerre polynomials are

L^α_0(ξ) = 1,   L^α_1(ξ) = 1 + α − ξ,   L^α_2(ξ) = ξ^2/2 − (α + 2) ξ + (α + 1)(α + 2)/2.

Similar to the standard Laguerre polynomials, there exist some recursion rules and a formula for derivatives. One important formula is the following shift of the parameter α:

L^α_n(ξ) = L^{α+1}_n(ξ) − L^{α+1}_{n−1}(ξ).   (3.25)

The formula remains valid for n = 0 if we define L^α_n(ξ) := 0 for all n < 0. We will now concentrate on the important orthogonality result: the generalized Laguerre polynomials are orthogonal with respect to the scalar product with weighting function w_α(ξ) = ξ^α e^{−ξ},

∫_0^∞ L^α_n(ξ) L^α_m(ξ) ξ^α e^{−ξ} dξ = ( Γ(n + α + 1) / n! ) δ_{nm}   (3.27)

for the Gamma function Γ. Consequently, we can write down a normalized version

L̃^α_n(ξ) := √( n! / Γ(n + α + 1) ) L^α_n(ξ).   (3.28)

Corresponding to (3.25), we can also derive a recursion formula for the normalized functions using the definition (3.28):

L̃^α_n(ξ) = √(n + α + 1) L̃^{α+1}_n(ξ) − √n L̃^{α+1}_{n−1}(ξ).

This formula can be used for the analytical computation of integrals emerging from a special ansatz in three spatial dimensions.
Choosing α = 2 we can now compute integrals of the form (3.22).
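To make the orthogonality relation concrete, the following is a small numerical sketch in Python/NumPy (the helper `genlag` and all variable names are ours, not from the text). It evaluates L_n^α through the three-term recursion (n+1)L_{n+1}^α = (2n+1+α-ξ)L_n^α - (n+α)L_{n-1}^α and verifies the weighted orthogonality for the case α = 2 needed here; a standard Gauss-Laguerre rule suffices because ξ² L_m^α L_n^α remains a polynomial.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss
from math import factorial, gamma

def genlag(n, alpha, x):
    """Generalized Laguerre L_n^alpha via the three-term recursion
    (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}."""
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    for k in range(n):
        p_prev, p = p, ((2*k + 1 + alpha - x) * p - (k + alpha) * p_prev) / (k + 1)
    return p

alpha = 2           # the case needed for the radial weight r^2 e^{-r}
N = 8
x, w = laggauss(N)  # weight e^{-x}; x^alpha * L_m * L_n stays polynomial

# <L_m^a, L_n^a> = Gamma(n+alpha+1)/n! * delta_mn
for m in range(4):
    for n in range(4):
        val = np.sum(w * x**alpha * genlag(m, alpha, x) * genlag(n, alpha, x))
        exact = gamma(n + alpha + 1) / factorial(n) if m == n else 0.0
        assert np.isclose(val, exact, atol=1e-8)
print("orthogonality verified")
```

The rule with N = 8 points is exact up to degree 15, while the highest integrand degree above is 2 + 3 + 3 = 8, so the check involves no quadrature error.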

Spherical Harmonics
The Hermite as well as the Laguerre polynomials depend only on one variable, which makes them suitable for one-dimensional applications as well as, e.g., for a one-dimensional part of a more complex application. We aim at a fully three-dimensional setting of our simulations later in order to come close to real-world experiments. Assuming functions of Laguerre type for the radial part, we also need ansatz functions for the angular portion of the solution. This is where the so-called spherical harmonics (SH) come into play. The spherical harmonics are polynomials in spherical coordinates that can be evaluated for every point on the unit sphere and return a single value. We only consider real SH, but there are also complex-valued SH functions.
We will now introduce a normalized version of the spherical harmonics, such that the upcoming integrals are easy to calculate and the system matrix will become symmetric. First, we need the so-called normalized associated Legendre polynomials L_l^m of degree l ∈ N and order m ∈ N, which can be calculated from the associated Legendre polynomials P_l^m using the following formula:
L_l^m(x) := √((2l + 1)/(4π) · (l - m)!/(l + m)!) P_l^m(x), with P_l^m(x) := (-1)^m (1 - x²)^{m/2} (d^m/dx^m) P_l(x).
Together with the setting L_l^m := 0 for m > l, the normalized associated Legendre polynomials satisfy a set of recursion relations that are very useful for computations with the spherical harmonics later. Now the real SH function of degree l and order m (-l ≤ m ≤ l) is defined as
Y_l^m(θ, φ) := √2 L_l^m(cos θ) cos(mφ) for m > 0, Y_l^0(θ, φ) := L_l^0(cos θ), Y_l^m(θ, φ) := √2 L_l^{|m|}(cos θ) sin(|m|φ) for m < 0.
Note that Y_l^0 does not depend on φ and that all the SH functions are polynomials in trigonometric functions of θ and φ. Furthermore, we have 2l + 1 functions for each degree l.
The first few SHs are
Y_0^0 = √(1/(4π)), Y_1^{-1} = √(3/(4π)) sin θ sin φ, Y_1^0 = √(3/(4π)) cos θ, Y_1^1 = √(3/(4π)) sin θ cos φ. (3.34)
In order to get an impression of what a spherical harmonic looks like, it is possible to plot Y_l^m in the following way: we scale each vector that points from the origin to the unit sphere by the absolute value of Y_l^m for the particular choice of θ and φ corresponding to the direction in which the vector points. The results are shown in Figure 3.1. By construction, the SH functions are orthonormal with respect to the scalar product
<f, g> := ∫_0^{2π} ∫_0^π f(θ, φ) g(θ, φ) sin(θ) dθ dφ, (3.35)
meaning that we simply have
<Y_l^m, Y_{l'}^{m'}> = δ_{ll'} δ_{mm'}.
The set of spherical harmonics is therefore an orthonormal basis for all functions defined on the unit sphere. It is possible to use them as part of the ansatz functions in a full three-dimensional setting.

Cartesian Spherical Harmonics
As we have seen in the previous section, the spherical harmonics are part of a basis of all polynomial functions in x, y, z in three-dimensional space. But they are naturally formulated in the spherical coordinates r, θ, φ, together with a radial part that depends solely on the radius r. Consistently, one would also have to calculate the emerging integrals using spherical integration. Depending on the context, this can be inefficient or inflexible.
In [15] a cartesian version of the solid spherical harmonics is defined. The solid spherical harmonics N_l^m(r, θ, φ) are related to the spherical harmonics Y_l^m(θ, φ) by an additional factor r^l:
N_l^m(r, θ, φ) := r^l Y_l^m(θ, φ).
This factor r^l enables the representation of the solid spherical harmonics in cartesian coordinates x, y, z. The cartesian solid spherical harmonics are first defined up to normalization; they already span the same space as the orthonormal solid spherical harmonics r^l Y_l^m(θ, φ), and the definition can be normalized using a normalization factor (3.40). According to [15], this version of the solid spherical harmonics can easily be written in the cartesian coordinates (x, y, z) = (r sin(θ) cos(φ), r sin(θ) sin(φ), r cos(θ)) using a recursion formula (3.41). The cartesian version of the SSH is by definition orthogonal with respect to the scalar product defined in (3.35), because each cartesian SSH is a scalar multiple of the corresponding orthogonal function r^l Y_l^m. The definition (3.41) now allows for a representation of the solid spherical harmonics in the cartesian basis. As the corresponding integrals are very easy to solve (in fact, even a simple cartesian quadrature rule of sufficient order of exactness gives exact results), this can be a possibility to speed up computations. On the other hand, we can now easily identify the basis functions at the different levels l, m of the spherical harmonics with simple cartesian polynomials and see the differences to a full tensor product with a polynomial basis, for example.

Jacobi Matrix
In this context, we will briefly mention the so-called Jacobi matrix, a tridiagonal matrix containing the coefficients from the recursion of orthonormal polynomials (e.g. (3.11)). Note that the Jacobi matrix is not to be confused with the Jacobian matrix, i.e. the first derivative appearing in numerical or analytical flux calculations. We will later see this matrix when we show examples for the computation of eigenvalues of the system matrix. It turns out that the eigenvalues of the (n + 1) × (n + 1) Jacobi matrix are just the zeros of the (n + 1)-st orthonormal basis function it corresponds to (see also [24]).
Our sets of orthonormal polynomials Φ_n satisfy a recursion rule like
ξ Φ_n(ξ) = b_{n+1} Φ_{n+1}(ξ) + a_n Φ_n(ξ) + b_n Φ_{n-1}(ξ).
In case of the orthonormal Hermite polynomials, for example, we have
ξ H_n(ξ) = √(n + 1) H_{n+1}(ξ) + √n H_{n-1}(ξ), (3.45)
i.e. a_n = 0 and b_n = √n. The Jacobi matrix J_n corresponding to a set of orthonormal polynomials is defined as the tridiagonal matrix with diagonal entries a_0, ..., a_n and off-diagonal entries b_1, ..., b_n. Its characteristic polynomial is actually some factor γ ∈ R times the (n + 1)-st orthonormal polynomial.
This leads to the fact that the eigenvalues (as roots of the characteristic polynomial) are the zeros of Φ n+1 , too. Note that the Jacobi matrix for Hermite polynomials is symmetric, due to the coefficients (3.45).
In Section 4.1, we will recognize the Jacobi matrix as the system matrix of the PDE system for the basis coefficients after the projection.
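The eigenvalue statement is easy to check numerically. The following is a small sketch in Python/NumPy (our own variable names, not the thesis's notation): it builds the Jacobi matrix of the orthonormal probabilists' Hermite recursion and compares its eigenvalues with the roots of He_{n+1} computed independently.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

n = 4  # highest basis degree; the Jacobi matrix is (n+1) x (n+1)

# Jacobi matrix of the orthonormal (probabilists') Hermite recursion
# xi*H_i = sqrt(i+1)*H_{i+1} + sqrt(i)*H_{i-1}: zero diagonal,
# off-diagonal entries sqrt(1), ..., sqrt(n).
off = np.sqrt(np.arange(1, n + 1))
J = np.diag(off, 1) + np.diag(off, -1)

eigs = np.sort(np.linalg.eigvalsh(J))

# Roots of the (n+1)-st Hermite polynomial He_{n+1}
roots = np.sort(He.hermeroots([0] * (n + 1) + [1]))

print(np.allclose(eigs, roots))  # -> True
```

Because the Jacobi matrix here is symmetric (the recursion coefficients of an orthonormal family), its eigenvalues are automatically real, which foreshadows the hyperbolicity discussion later.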

Gaussian Quadrature
Like every quadrature rule, Gaussian quadrature approximates integrals of a certain kind using evaluations of the integrand at discrete points. In the case of Gaussian quadrature the integrand consists of a weighted product of a function f and a weighting function w. Gaussian quadrature of order N ∈ N is performed according to
∫_a^b f(x) w(x) dx ≈ Σ_{i=1}^N w_i f(x_i), (3.48)
where for i = 1, ..., N the w_i are called weights and the x_i are the sampling points.
For a proper choice of the weights and the corresponding sampling points, the Gaussian quadrature rule is exact for all polynomials f up to degree 2N − 1.
We will further consider the special cases where the weighting function equals the weighting function of the Hermite or Laguerre polynomials.
It is well known that the sampling points are the roots of the N-th corresponding orthogonal basis polynomial p_N. For orthonormal polynomials, the weights w_i can be calculated according to the formula
w_i = a_N / (a_{N-1} p_{N-1}(x_i) p_N'(x_i)), (3.49)
where a_N is the coefficient in front of x^N in the respective polynomial p_N. It can be shown that the weights are all positive and all the sampling points lie inside the interval (a, b).

Gauss-Hermite Quadrature
If we choose the weighting function in Equation (3.48) to be the Hermite weight w(x) = e^{-x²/2} with a = -∞ and b = +∞, we obtain the Gauss-Hermite quadrature, which approximates integrals of the form
∫_{-∞}^{+∞} f(x) e^{-x²/2} dx ≈ Σ_{i=1}^N w_i f(x_i)
by a weighted sum of function evaluations. The sampling points x_i for the function evaluation are the roots of the N-th Hermite polynomial, which is given in (3.9) and here denoted as p_N.
According to the general formula (3.49), the weights for the Gauss-Hermite quadrature can be calculated to be
w_i = √(2π) / (N H_{N-1}(x_i)²) (3.51)
in terms of the orthonormal Hermite polynomials H.
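As a quick numerical sanity check (a sketch using NumPy's built-in rule for the weight e^{-x²/2}; variable names are ours), we can verify the characteristic exactness property: an N-point rule integrates every polynomial up to degree 2N - 1 exactly.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

N = 5
x, w = hermegauss(N)  # nodes = roots of He_5, weight exp(-x^2/2)

# Exact moments of the weight: sqrt(2*pi)*(k-1)!! for even k, 0 for odd k
def exact_moment(k):
    if k % 2 == 1:
        return 0.0
    return np.sqrt(2 * np.pi) * np.prod(np.arange(k - 1, 0, -2), dtype=float)

# The rule integrates all monomials up to degree 2N-1 = 9 exactly
ok = all(np.isclose(np.sum(w * x**k), exact_moment(k)) for k in range(2 * N))
print(ok)  # -> True
```

With degree 2N = 10 the check would fail, which is exactly the effect exploited by the quadrature-based projections later in this chapter.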

Gauss-Laguerre Quadrature
For another weighting function w(x) = e^{-x} and a = 0, b = +∞, we end up calculating integrals of the form
∫_0^∞ f(x) e^{-x} dx ≈ Σ_{i=1}^N w_i f(x_i)
by the so-called Gauss-Laguerre quadrature as a weighted sum of function evaluations. The sampling points x_i for the function evaluation are now the roots of the N-th Laguerre polynomial p_N.
Similar to the previous version (compare Section 3.5.1), the weights for the Gauss-Laguerre quadrature are
w_i = x_i / ((N + 1)² L_{N+1}(x_i)²). (3.53)
Thus, the Gauss-Laguerre quadrature is essentially performed in the very same way as the Gauss-Hermite quadrature, with the important difference of another weighting function as well as different interval bounds.
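The analogous exactness check for the half-line weight e^{-x} reads as follows (again a NumPy sketch with our own variable names); here the exact moments are simply the factorials ∫_0^∞ x^k e^{-x} dx = k!.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss
from math import factorial

N = 6
x, w = laggauss(N)  # nodes = roots of L_6, weight exp(-x) on (0, inf)

# int_0^inf x^k exp(-x) dx = k!, reproduced exactly for k <= 2N-1 = 11
ok = all(np.isclose(np.sum(w * x**k), factorial(k)) for k in range(2 * N))
print(ok)  # -> True
```

Note that all nodes returned lie in (0, ∞) and all weights are positive, as stated for general Gaussian quadrature above.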

Generalized Gauss-Laguerre Quadrature
For the weighting function w(x) = x^α e^{-x} and a = 0, b = +∞, we calculate integrals of the form
∫_0^∞ f(x) x^α e^{-x} dx ≈ Σ_{i=1}^N w_i f(x_i)
as a weighted sum of function evaluations. The sampling points x_i for the function evaluation are now the roots of the N-th generalized Laguerre polynomial p_N.
The weights for the generalized Gauss-Laguerre quadrature are
w_i = Γ(N + α + 1) x_i / (N! (N + 1)² L_{N+1}^α(x_i)²). (3.55)

Non-Classical Quadrature
There are quadrature formulas for various types of weighting functions in combination with the respective domains of integration and the corresponding orthogonal polynomials. The so-called classical quadrature formulas include Gauss-Hermite, Gauss-Laguerre and Gauss-Legendre quadrature, for example.
However, in special cases one might be confronted with a different weighting function, a so-called non-classical weight, or with domains of integration for which none of the formulas above is applicable. Under certain conditions, it is possible to find corresponding weights and quadrature points. As we are interested in a Gaussian quadrature, we also need a set of polynomials that are orthogonal with respect to the desired integral. We will explain the general procedure of finding the orthogonal polynomials and the weights in this section.
For our non-classical quadrature, we consider integrals of the type
∫_a^b f(x) w(x) dx (3.56)
and want to compute the exact value for polynomials f(x) up to a certain degree using a quadrature formula like in (3.48). First, we have to check whether a Gaussian quadrature rule with orthogonal polynomials exists. According to [19], this requires the following Hankel matrix of the moments μ_k := ∫_a^b x^k w(x) dx to be nonsingular:
H_N := (μ_{i+j})_{i,j=0,...,N}. (3.57)
Regularity of (3.57) ensures the existence of an orthonormal set of polynomials. The next step is to construct a basis for the space of polynomials that are orthogonal (or orthonormal) with respect to the given integral (3.48). There are several possibilities to do this, one of the easiest being to apply a Gram-Schmidt method to orthonormalize the monomials, ending up with a set of orthonormal polynomials p_j(x). Other possibilities are the method of moments, the Stieltjes procedure and the Lanczos algorithm; for more information, see [19].
As we have seen before (compare Section 3.5), the quadrature points x_i are now just the roots of the N-th orthonormal polynomial if we are interested in a formula that is exact up to degree 2N - 1. As the Gram-Schmidt method is already a numerical procedure, the calculation of the roots is usually also done numerically and is thus very efficient.
Next is the calculation of the corresponding weights. The weights are determined by the condition of exactness of the formula up to degree N - 1. In principle, one can use the general quadrature formula (3.48) for every monomial x^i, i = 0, ..., N - 1, and obtain a linear system of equations from the requirement that the quadrature reproduces the exact result. Alternatively, the condition is imposed for the orthonormal basis polynomials p_j(x). Thus, one has to solve the set of linear equations (see also [21])
Σ_{i=1}^N w_i p_j(x_i) = ∫_a^b p_j(x) w(x) dx, j = 0, ..., N - 1. (3.59)
Note that the right-hand side usually includes many zero entries, because the polynomials p_j(x) are orthogonal to the constant function p_0(x). This may not be the most efficient way to calculate accurate weights, because the solution of the linear system (3.59) can be unstable, especially for large N. It is also possible to use the general formula (3.49) from above; the reader is referred to [21] for more advanced methods. We will later again see matrices similar to the one on the left-hand side of Equation (3.59). If the polynomials p_i form an orthogonal basis and the x_i are the corresponding quadrature points, then this matrix is non-singular, because a set of non-zero weights w_i has to exist.
In the end, we have an orthonormal set of basis polynomials, the roots of the N-th polynomial and the corresponding weights for a Gaussian quadrature formula to calculate the integrals with respect to non-classical weighting functions.
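The whole pipeline (moments → Hankel matrix → recursion coefficients → Jacobi matrix → nodes and weights) can be sketched compactly via the Golub-Welsch approach. The code below is our own illustrative implementation, not taken from the thesis; as a sanity check it feeds in the moments of the classical weight e^{-x²/2} and compares against NumPy's Gauss-Hermite rule.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def quadrature_from_moments(mu, N):
    """N-node Gaussian rule from the moments mu_0..mu_{2N} of the weight:
    Hankel matrix -> Cholesky -> recursion coefficients -> Jacobi matrix
    -> nodes and weights (Golub-Welsch)."""
    M = np.array([[mu[i + j] for j in range(N + 1)] for i in range(N + 1)])
    R = np.linalg.cholesky(M).T          # M = R^T R, R upper triangular
    d = np.diag(R)
    alpha = np.empty(N)
    alpha[0] = R[0, 1] / d[0]
    for k in range(1, N):
        alpha[k] = R[k, k + 1] / d[k] - R[k - 1, k] / d[k - 1]
    beta = d[1:N] / d[:N - 1]            # off-diagonal of the Jacobi matrix
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)
    weights = mu[0] * V[0, :] ** 2       # mu_0 times squared first components
    return nodes, weights

# Sanity check against a classical weight: e^{-x^2/2} with known moments
N = 5
mu = [np.sqrt(2 * np.pi) * (np.prod(np.arange(k - 1, 0, -2, dtype=float))
      if k % 2 == 0 else 0.0) for k in range(2 * N + 1)]
x, w = quadrature_from_moments(mu, N)
xr, wr = hermegauss(N)
print(np.allclose(np.sort(x), np.sort(xr)), np.allclose(w, wr))
```

As noted in the text, working directly with the moment matrix becomes ill-conditioned for large N; for a genuinely non-classical weight one would prefer the Stieltjes or Lanczos procedures from [19] to obtain the recursion coefficients stably.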

Interpolation Property and Aliasing
With the help of Gaussian quadrature, it is possible to prove an interpolation property of an expansion of an arbitrary function f in terms of an orthonormal set of polynomials.
We therefore consider two expansions of the original function f up to a certain order n - 1 ∈ N as follows:
f̃(x) = Σ_{i=0}^{n-1} α_i Φ_i(x) w(x), f̂(x) = Σ_{i=0}^{n-1} α̂_i Φ_i(x) w(x),
with basis coefficients α_i, α̂_i, basis polynomials Φ_i and the corresponding weighting function w (e.g. w(x) = e^{-x} for Laguerre polynomials).
The basis coefficients are either computed by exact integration as
α_i = ∫ f(x) Φ_i(x) dx
or by employing a Gaussian quadrature approximation of the integral using the same set of orthonormal functions Φ_i:
α̂_i = Σ_{j=1}^n w_j (f(x_j)/w(x_j)) Φ_i(x_j).
Although the second method is an approximation to the exact integral, it is possible to derive an interesting property for the quadrature-based coefficients. Using these approximated values, we have
f̂(x_j) = f(x_j) for all sampling points x_j, (3.63)
which means that the value of the quadrature-based approximation f̂ and the value of the exact function f are the same at the sampling points of the quadrature rule. This is not necessarily the case for the expansion f̃ with coefficients computed by exact integration.
The reason lies in the so-called aliasing error, i.e. the difference between the exactly integrated and the quadrature-based coefficients. It is caused by high-frequency contributions in f, which spoil the point interpolation property of the exactly integrated expansion.
Equation (3.63) can be proven as follows. Assume that the interpolation property fails, i.e. f̂(x_j) ≠ f(x_j) for some sampling point x_j. Computing the quadrature-based coefficients of f̂ itself,
α̂_k(f̂) = Σ_j w_j (f̂(x_j)/w(x_j)) Φ_k(x_j),
and using the exactness of the quadrature formula up to degree 2n - 1, we obtain α̂_k(f̂) = α̂_k(f) for all k. Since the n coefficients uniquely determine the values at the n distinct sampling points, we deduce that our assumption was wrong and thus f̂(x_j) = f(x_j) for all j, which completes the proof. The approximation f̂ is of course still converging to the function f as n → ∞.
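The interpolation property is easy to observe numerically. The following sketch (Python/NumPy, with our own variable names and an arbitrarily chosen, non-polynomial test function) computes the quadrature-based coefficients for an orthonormal Hermite basis and checks that the reconstruction reproduces f exactly at the quadrature nodes.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

n = 6                      # number of basis functions Phi_0, ..., Phi_{n-1}
c, W = He.hermegauss(n)    # n-point Gauss-Hermite rule, weight e^{-c^2/2}

w = lambda x: np.exp(-x**2 / 2) / sqrt(2 * pi)   # weighting function
omega = W / sqrt(2 * pi)                          # weights of <.,.>_w

def Phi(i, x):
    """Orthonormal probabilists' Hermite polynomial He_i/sqrt(i!)."""
    coef = np.zeros(i + 1); coef[i] = 1.0 / sqrt(factorial(i))
    return He.hermeval(x, coef)

f = lambda x: np.exp(np.sin(x)) * w(x)   # arbitrary non-polynomial test case

# Quadrature-based coefficients: alpha_i = <f/w, Phi_i>_N
alpha = np.array([np.sum(omega * f(c) / w(c) * Phi(i, c)) for i in range(n)])

# The reconstruction interpolates f exactly at the quadrature nodes
f_hat = lambda x: sum(alpha[i] * Phi(i, x) for i in range(n)) * w(x)
print(np.allclose(f_hat(c), f(c)))  # -> True
```

Repeating the experiment with exactly integrated coefficients (e.g. via high-order quadrature) would generally not match f at the nodes, illustrating the aliasing discussion above.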
Prior to the development of an abstract framework as explained in Chapter 5, we want to show some tests using different kinetic equations and basis functions for the expansion in order to see how the emerging system for the basis coefficients looks like.
The experimental results of this chapter help us to understand the difference between exact and quadrature-based projections as we explain the derivation of the PDE system. We start with some small one-dimensional examples and consider different simple kinetic equations with generalized advection velocities that are closely related to the fully transformed Boltzmann equation of Chapter 5. We will observe that the use of recursion formulas for the basis functions allows an exact calculation of the system matrix.
Furthermore, we extend the examples to multiple spatial dimensions using a Hermite tensor ansatz. The results are consistent with the one-dimensional case and show that the properties of the system are closely related to the choice of the basis functions and the projection method.

One-Dimensional Cases
Before we cover kinetic equations in three spatial dimensions, we will first take a look at the simpler one-dimensional equation and explain the ideas and methods, which we will essentially also use for the more important case with three spatial dimensions.

Simple Kinetic Equation
In order to get a better understanding, we start with the standard kinetic equation, choose Hermite basis functions for test and ansatz space and apply either an exact projection or a quadrature-based projection to investigate possible differences.
We consider the following equation
∂_t f(t, x, c) + a(c) ∂_x f(t, x, c) = 0, (4.1)
which can be seen as the easiest case of a collision-free Boltzmann equation when we think of a(c) = c, or as a general kinetic equation for any a(c). We want to use an ansatz for the unknown distribution function f. We expand f around the equilibrium distribution (or weighting function) w(c) = (1/√(2π)) e^{-c²/2} by a series of polynomials Φ_i as follows:
f(t, x, c) = Σ_{i=0}^n α_i(t, x) Φ_i(c) w(c), (4.2)
which (using the Einstein sum convention) leads to
∂_t α_i Φ_i(c) w(c) + a(c) ∂_x α_i Φ_i(c) w(c) = 0.
Consider now the case a(c) = c. Exemplarily, we here choose normalized Hermite polynomials for Φ, so we have Φ_i(c) = H_i(c). Using the recursion formula from Equation (3.11), we can express c H_i(c) in terms of Hermite polynomials only:
c H_i(c) = √(i + 1) H_{i+1}(c) + √i H_{i-1}(c).
We now project the emerging equation using different projection methods. First, consider the continuous projection
P_j(f) = <f, H_j> = ∫_R f(c) H_j(c) dc, (4.6)
which means that we basically multiply with the j-th basis function and integrate over the whole velocity domain. The second projection method employs a quadrature rule for the calculation and thus reads
P_j^N(f) = Σ_{i=1}^N w_i (f(c_i)/w(c_i)) H_j(c_i), (4.7)
with quadrature weights w_i and sampling points c_i according to the specific quadrature rule. The subscript N = n + 1 is the number of sampling points or the order of the quadrature rule. Note that we have to multiply with the inverse of the weighting function to cancel out the weighting function inside f. Applying the continuous projection P_j(f) = <f, H_j>, we make use of the orthonormality property of the Hermite polynomials (see Equation (3.10)), i.e. <H_i, H_j>_w = δ_{ij}. For technical reasons, we set α_j = 0 for j < 0 and also for j > n. Thus for j = 0, ..., n
∂_t α_j + √j ∂_x α_{j-1} + √(j + 1) ∂_x α_{j+1} = 0.
We have now derived a system of equations for the unknown basis coefficients α_i.
It is possible to write the emerging system in compact matrix form using α = (α_0, ..., α_n)^T as
∂_t α + A ∂_x α = 0,
where A is the tridiagonal matrix with zero diagonal and off-diagonal entries √1, ..., √n. Interestingly, the eigenvalues of the matrix are exactly the roots of the (n + 1)-st normalized Hermite polynomial H_{n+1}:
λ ∈ {c ∈ R : H_{n+1}(c) = 0}. (4.13)
Going back to the previous Section 3.4, this becomes clear, as the system matrix is just the Jacobi matrix of the Hermite basis.
If we use the quadrature-based projection method (4.7), the result does not change at all: we just replace the standard scalar product <f, Φ_j>_w by the corresponding quadrature rule <f, Φ_j>_N, and we know from Section 3.5 that the quadrature rule is exact up to degree 2N - 1 = 2(n + 1) - 1 = 2n + 1 of the integrand. As the integral with the highest-order integrand is <H_{n+1}, H_n>_N, the integrand has a polynomial degree of 2n + 1 and is thus still integrated exactly.
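This coincidence of the two projections for a(c) = c can be reproduced numerically. The sketch below (Python/NumPy, our own variable names) assembles the quadrature-projected system matrix entry by entry and compares it with the tridiagonal Jacobi matrix obtained from the continuous projection.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

n = 4                          # basis Phi_0, ..., Phi_n
c, W = He.hermegauss(n + 1)    # quadrature of order N = n+1
omega = W / sqrt(2 * pi)       # weights of the normalized scalar product

def Phi(i, x):
    """Orthonormal probabilists' Hermite polynomial He_i/sqrt(i!)."""
    coef = np.zeros(i + 1); coef[i] = 1.0 / sqrt(factorial(i))
    return He.hermeval(x, coef)

P = np.array([[Phi(i, ck) for i in range(n + 1)] for ck in c])

# System matrix of the quadrature projection for a(c) = c:
# A_ji = <c Phi_i, Phi_j>_N
A = P.T @ (omega[:, None] * c[:, None] * P)

# ... identical to the tridiagonal Jacobi matrix of the Hermite recursion
off = np.sqrt(np.arange(1, n + 1))
J = np.diag(off, 1) + np.diag(off, -1)
print(np.allclose(A, J))  # -> True
```

All integrands here have degree at most 2n + 1 = 2N - 1, so the discrete scalar product agrees with the exact one entry by entry.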

Generalized Kinetic Equation c^2
Now we consider another kinetic equation with a(c) = c². The reason is that the transformed Equation (5.4) will later contain quadratic and mixed terms like ξ_i ξ_j, which we want to understand in an easier setting first. We can make use of the recursion relation (see Equation (3.11)) twice to transform the occurring term a(c) H_i(c) as follows:
c² H_i(c) = √((i + 1)(i + 2)) H_{i+2}(c) + (2i + 1) H_i(c) + √(i(i - 1)) H_{i-2}(c). (4.14)
After that, we again perform the continuous projection (4.6) first and use the orthonormality property (see Equation (3.10)) of the Hermite polynomials to arrive at
∂_t α_j + √((j - 1)j) ∂_x α_{j-2} + (2j + 1) ∂_x α_j + √((j + 1)(j + 2)) ∂_x α_{j+2} = 0
for j = 0, ..., n, which leads to a system of equations ∂_t α + A_{c²} ∂_x α = 0. In this case, the eigenvalues of the matrix A_{c²} are not exactly the squared zeros of the (n + 1)-st normalized Hermite polynomial H_{n+1}. The eigenvalues are actually a mixture of the squared zeros of the (n + 1)-st and the (n + 2)-nd Hermite polynomial. The relation to the Jacobi matrix is that we can take J_{n+1} · J_{n+1}, delete the last column and row, and end up with A_{c²}. This holds because the truncated product has the last diagonal entry (J_{n+1} · J_{n+1})_{n+1,n+1} = 2n + 1. Contrarily, we obtain (J_n · J_n)_{n+1,n+1} = n, so that the last value (A_{c²})_{n+1,n+1} = 2n + 1 differs from the last value of J_n · J_n, which leads to the change of the eigenvalues.
Like before, we now apply the quadrature-based projection (4.7) and notice that there is a difference in the emerging system of equations. The integral with the integrand of highest degree is now √((n + 2)(n + 1)) <H_{n+2}, H_n>_N, which is not integrated exactly, due to n + 2 + n = 2n + 2 = 2(n + 1) = 2N > 2N - 1. Exact integration would lead to √((n + 2)(n + 1)) <H_{n+2}, H_n>_w = 0 due to orthogonality, but we get a value of √((n + 2)(n + 1)) <H_{n+2}, H_n>_N = -(n + 1) = -N using quadrature. This changes the last diagonal entry of the matrix from 2n + 1 to n.
The new system matrix Ã_{c²} is then J_n · J_n. The small difference in the last entry of the matrix leads to a change in the eigenvalues: now the eigenvalues of Ã_{c²} are purely the squared roots of the (n + 1)-st Hermite polynomial,
λ ∈ {c² : H_{n+1}(c) = 0}.
The reason is that the system matrix now is Ã_{c²} = J_n · J_n, including the smaller entry (Ã_{c²})_{n+1,n+1} = n.
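The claim that the quadrature-projected system matrix for a(c) = c² has exactly the squared Hermite roots as eigenvalues can be verified directly (Python/NumPy sketch, our own variable names): assembling the matrix from the discrete scalar product automatically includes the inexactly integrated term discussed above.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

n = 4
c, W = He.hermegauss(n + 1)    # N = n+1 quadrature points
omega = W / sqrt(2 * pi)

def Phi(i, x):
    coef = np.zeros(i + 1); coef[i] = 1.0 / sqrt(factorial(i))
    return He.hermeval(x, coef)

P = np.array([[Phi(i, ck) for i in range(n + 1)] for ck in c])

# Quadrature-projected system matrix for a(c) = c^2
A = P.T @ (omega[:, None] * c[:, None]**2 * P)

# Its eigenvalues are exactly the squared roots of He_{n+1}
lam = np.sort(np.linalg.eigvalsh(A))
roots2 = np.sort(He.hermeroots([0] * (n + 1) + [1]) ** 2)
print(np.allclose(lam, roots2))  # -> True
```

Replacing the discrete sums by exact integrals (e.g. a much higher-order rule) changes the last diagonal entry and reproduces the mixed eigenvalue spectrum of the continuous projection instead.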

Generalized Kinetic Equation c + c^2
As we have seen in the previous case, the quadrature method is able to change the eigenvalues of the system to be simply the squared roots of a Hermite polynomial. For our shifted and scaled equation later on, we will also need another type of equation, which is very similar to equation (4.1) using a(c) = c + c 2 . With this setting, it is now possible to redo the same calculation from above and calculate the eigenvalues of the corresponding system. After insertion of the ansatz (4.2), we can perform a projection.
For the continuous projection (4.6) we get eigenvalues that do not correspond to the roots of a Hermite polynomial at all.
If we use the quadrature-based projection method (4.7), the eigenvalues change again and are in fact related to the roots of the (n + 1)-st Hermite polynomial in the following way:
λ ∈ {c + c² : H_{n+1}(c) = 0}.
It therefore seems apparent that for the equation with advection function a(c), the quadrature projection always changes the eigenvalues of the system to be a(λ) for the roots λ of the (n + 1)-st Hermite polynomial. We also did tests with different basis functions, for example Legendre polynomials or Laguerre polynomials, and the eigenvalues always behaved in the respective way.
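The pattern "eigenvalues = a(λ) at the Hermite roots λ" can be checked for a(c) = c + c² with the same machinery as before (Python/NumPy sketch, our own variable names):

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

n = 4
c, W = He.hermegauss(n + 1)   # Gauss-Hermite rule with N = n+1 points
omega = W / sqrt(2 * pi)      # weights of the normalized scalar product

def Phi(i, x):
    coef = np.zeros(i + 1); coef[i] = 1.0 / sqrt(factorial(i))
    return He.hermeval(x, coef)

P = np.array([[Phi(i, ck) for i in range(n + 1)] for ck in c])

# Quadrature-projected system matrix for a(c) = c + c^2
A = P.T @ (omega[:, None] * (c + c**2)[:, None] * P)

# Eigenvalues are a(lambda) = lambda + lambda^2 at the roots of He_{n+1}
r = He.hermeroots([0] * (n + 1) + [1])
lam = np.sort(np.linalg.eigvalsh(A))
expected = np.sort(r + r**2)
print(np.allclose(lam, expected))  # -> True
```

This is a direct consequence of the quadrature projection acting as a similarity transformation of the diagonal matrix diag(a(c_i)), as worked out in the following section on the discrete velocity method.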

Relation between Quadrature Projection and Discrete Velocity Method
In the previous Section 4.1, we have seen that simple quadrature-based projections suffice to change the system slightly and systematically obtain specific eigenvalues of the system matrix. This is closely related to the procedure of a discrete velocity method (DVM), which we have described in Section 2.3.4. Motivated by these examples, we will now take a closer look at the relation between the two methods and see that they are essentially the same for the simple kinetic equations we used in the previous sections.
Regarding the PDE (4.1) and a specific ansatz with arbitrary basis functions Φ_i and w(c) = 1, the distribution function reads
f(t, x, c) = Σ_{i=0}^n α_i(t, x) Φ_i(c). (4.25)
If we insert the ansatz (4.25) into Equation (4.1), we can evaluate the equation at discrete velocities c_j for j = 0, ..., n to obtain n + 1 different equations
Φ_i(c_j) ∂_t α_i + a(c_j) Φ_i(c_j) ∂_x α_i = 0. (4.26)
This system of equations can be written in matrix-vector form using the following definitions:
α = (α_0(t, x), ..., α_n(t, x))^T, (4.27)

B = (Φ_i(c_j))_{j,i=0,...,n}, (4.28)
A = diag(a(c_0), ..., a(c_n)). (4.29)
The system can then be written in the following very compact form:
B ∂_t α + A B ∂_x α = 0. (4.30)
Assuming B is regular, we can multiply by its inverse B^{-1} from the left to end up with the system
∂_t α + B^{-1} A B ∂_x α = 0. (4.31)
We can now directly see that the eigenvalues of the system are the diagonal entries of A, as the similarity transformation with the regular matrix B does not change the eigenvalues of A. The eigenvalues of the system are therefore point evaluations of the function a(c) at the discrete velocities.
Now we proceed differently, using the following quadrature formula (with respect to the weighting function w(c) = 1) for the projection of Equation (4.1) with the projection operator
P_j(f) = Σ_{i=0}^n w_i f(c_i) Φ_j(c_i). (4.32)
This leads to the PDE system (j = 0, ..., n)
Σ_i w_i Φ_k(c_i) Φ_j(c_i) ∂_t α_k + Σ_i w_i a(c_i) Φ_k(c_i) Φ_j(c_i) ∂_x α_k = 0.
Similar to the system before, we write this system in matrix-vector form using the additional definition W = diag(w_0, ..., w_n), which yields
B^T W B ∂_t α + B^T W A B ∂_x α = 0.
Assuming that B is regular and the weights are nonzero (which is usual for quadrature rules), we can multiply by (B^T W B)^{-1} = B^{-1} W^{-1} B^{-T} from the left and again obtain the same system as before,
∂_t α + B^{-1} A B ∂_x α = 0,
which means that the eigenvalues of the system are still the diagonal entries of the matrix A, i.e. point evaluations of the function a(c) at the distinct quadrature points.
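The similarity argument holds for any basis, any distinct nodes and any advection function. The sketch below (Python/NumPy; the chosen nodes, weights, monomial basis and test function a are arbitrary illustrations of ours) assembles the quadrature-projected system and confirms that its eigenvalues are the point evaluations a(c_i).

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([-1.7, -0.4, 0.3, 1.2, 2.5])   # pairwise distinct velocities
omega = rng.uniform(0.5, 1.5, c.size)        # arbitrary nonzero weights
a = lambda x: np.sin(x) + x**3               # arbitrary advection function

# Arbitrary basis Phi_i: here monomials, so B_ki = Phi_i(c_k) is regular
B = np.vander(c, increasing=True)

M = B.T @ (omega[:, None] * B)               # mass matrix of the projection
K = B.T @ (omega[:, None] * a(c)[:, None] * B)

# System matrix M^{-1} K is similar to diag(a(c_0), ..., a(c_n)):
lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
print(np.allclose(lam, np.sort(a(c))))  # -> True
```

The only structural requirements visible in the code are exactly those stated in the text: distinct nodes (so that B is invertible) and nonzero weights.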
Our observation did not rely on the specific choice of basis functions. The only requirement is that the matrix B is invertible, including the fact that the quadrature points or discrete velocities respectively have to be pairwise distinct.
The observation can be made because the quadrature method basically evaluates the equation at distinct points and combines the resulting equations in a linear way, ending up with a seemingly more complicated system than the discrete velocity method. But as we have seen, the systems and their properties are essentially the same.
If we now choose a standard projection method that computes exact integrals over the velocity space, this linear combination property is not given anymore. The appearance of the function a(c) inside the integral leads to the use of recursion formulas, as we have done in Chapter 3. But the result cannot be generalized to arbitrary functions a(c), as recursion formulas are usually only available for a limited class of functions, such as polynomials. Even for polynomial a(c), recursion formulas lead to different values than the quadrature case, as we have seen before (see Section 4.1.2), changing the behavior of the system as well as the eigenvalues.
This result is a first glimpse at the generalization of the projection procedure, here still for a very simple equation. We will later build up on the formulations and develop an abstract framework to understand the quadrature-based projections also in the transformed Boltzmann equation in Chapter 5.

Multi-Dimensional Cases
In more than one spatial dimension, there are different possible ways to propose an ansatz for the distribution function f. The simplest one is the consistent extension of the 1D example from the previous Section 4.1. We now describe a Hermite tensor ansatz in three dimensions by taking a basis that is a Hermite polynomial in every velocity direction.

Simple Kinetic Equation 3D
The first 3D kinetic equation we consider is again rather simple:
∂_t f(t, x, c) + a(c) · ∇_x f(t, x, c) = 0, (4.39)
where we first only cover the case a(c) = c and use the consistent extension of our one-dimensional ansatz from before.
But the basis function is now a Hermite polynomial in every direction:
f(t, x, c) = Σ_{i,j,k} α_{ijk}(t, x) H_i(c_x) H_j(c_y) H_k(c_z) w(c_x) w(c_y) w(c_z). (4.40)
The projections are either done by analytical computation of the integrals (4.42) or by a Gauss-Hermite quadrature formula of order N (4.43). The derivation of the system of equations can be carried out just like in the 1D case (see Section 4.1.1). The only difference in this 3D case is that we have more terms emerging from the expansion in each of the three velocity components. We will omit the derivation here and go directly to the results of the different projection methods.
Using the Hermite tensor ansatz from above (see Equation (4.40)), we cannot expect rotational symmetry of the problem, as we have used an expansion in cartesian coordinates along each single axis that is not invariant under rotational transformation. We can therefore only expect symmetry with respect to one of the coordinate axes for both projection methods.
The projection of Equation (4.39) with the operator (4.42) or (4.43) leads to a coupled system of PDEs in three spatial variables. The general form of the system is
A ∂_t α + A_x ∂_x α + A_y ∂_y α + A_z ∂_z α = 0, (4.44)
where we usually have A = I_n for orthonormal basis functions. In order to check hyperbolicity of the system and get some knowledge about its behavior, we take a look at the generalized system matrix
A_sys(β) = β_1 A_x + β_2 A_y + β_3 A_z
for a unit vector β = (β_1, β_2, β_3) of length one, so that ‖β‖ = 1. This is also a consistent extension of the one-dimensional case, because the system matrix is evaluated for every direction β, where the unit vector β can be seen as a vector pointing in the direction of interest. The system is said to be hyperbolic if A_sys has real eigenvalues for all directions β. For numerical calculations, it is possible to plot the eigenvalues of the system matrix depending on the direction β in a 3D plot for different n. We can see the same behavior for the case n = 2, but the high number of degrees of freedom (16) leads to a very dense plot. Thus we cut the plots at c_z = 0, showing values inside the x-y-plane in Figures 4.1b and 4.2b. There we can in principle observe analogous results for the 2D case: circles extending up to the coordinates of roots of the Hermite polynomials are combined in the plane.
For the sake of completeness, the roots of the involved polynomials are ±1 for H_2 and 0, ±√3 for H_3. As we have seen, the results are in perfect agreement with the one-dimensional case from the previous Section 4.1. But in addition, we now have eigenvalues of A_sys depending on the direction of the flow. Due to the Hermite tensor ansatz in cartesian coordinates, the system's behavior is no longer isotropic. We therefore have certain directions of the flow in which information is propagated faster than in other directions. This is usually not desired, because the discrete model loses its symmetry on the one hand and is no longer Galilean invariant on the other hand. An ansatz as simple as the Hermite tensor in 3D can obviously not overcome these problems. Note that the results of the projection are the same for both the continuous (4.42) and the quadrature projection (4.43), because the degree of exactness of the quadrature formula ensures exact integration of all the involved terms. This will not be the case for a(c)_i = c_i² in the following section.
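The directional eigenvalue structure of the tensor ansatz can be illustrated in a 2D slice (Python/NumPy sketch with our own variable names, assuming the standard Kronecker structure of a tensor-product basis): the directional system matrix is a combination of the 1D Jacobi matrices, and its eigenvalues are the direction-weighted combinations of the 1D Hermite roots.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

n = 2
off = np.sqrt(np.arange(1, n + 1))
J = np.diag(off, 1) + np.diag(off, -1)      # 1D system (Jacobi) matrix
I = np.eye(n + 1)

# 2D Hermite tensor ansatz, a(c) = c: A_sys(beta) = b1*(J x I) + b2*(I x J)
phi = 0.7
b1, b2 = np.cos(phi), np.sin(phi)
A_sys = b1 * np.kron(J, I) + b2 * np.kron(I, J)

# J x I and I x J commute, so the eigenvalues combine the 1D roots:
r = He.hermeroots([0] * (n + 1) + [1])      # roots of He_{n+1}
expected = np.sort([b1 * ri + b2 * rj for ri in r for rj in r])
print(np.allclose(np.sort(np.linalg.eigvalsh(A_sys)), expected))  # -> True
```

Since A_sys(β) is symmetric for every β, the eigenvalues are real for all directions, but their β-dependence shows the anisotropy discussed above: the spectral radius varies with the direction of the flow.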

Generalized Kinetic Equation c_i^2 in 2D
After assuring comparable behavior of the 3D case regarding the standard simple kinetic equation (4.39), we now turn our attention to the multi-dimensional version of the second test case. Replacing the convective velocities c_x, c_y, c_z by their squares, i.e. setting a(c)_i = c_i², we obtain the equation
∂_t f + c_i² ∂_{x_i} f = 0.
Using the exact same ansatz and projection methods (see Equations (4.40) and (4.42) or (4.43)), we can derive a new system of equations for the unknown basis coefficients α of the form (4.44).
The eigenvalues of the system matrix A_sys again depend on the directional vector β, but can be plotted for various values of β and n. We see (n + 1)² circles going through the marked points at their largest distance from the origin. These points' coordinates are combinations of all squared nonzero roots of the (n + 1)-st and the (n + 2)-nd Hermite basis polynomial. This corresponds to the 1D case, where the eigenvalues were also related to either the (n + 1)-st or the (n + 2)-nd Hermite polynomial.
The results for the quadrature-based projection method as defined in the general 3D setting in Equation (4.43) are very different from the continuous projection. The eigenvalues for a quadrature projection can be seen in Figures 4.3b, 4.4b and 4.5b.
For n = 1 (see Figure 4.3b), there is only one circle, because all four eigenvalues degenerate to one with algebraic multiplicity 4. This is in agreement with the 1D case, because the corresponding squared zero of the (n + 1)-st Hermite polynomial is also only 1, which is in fact the c x and c y coordinate of the circle's point with the largest distance from the origin.
Proceeding to the case n = 2, the relevant squared zeros of the third Hermite polynomial are 0 and 3, which can be combined to the coordinates of the circles' points with the largest extension from the origin, again. Note that this time, there is also a circle with radius zero, represented by the origin itself.
For n = 3, we see the same behavior regarding the squared zeros of the 4-th Hermite polynomial, which are 3 ± √6. Combinations of those values mark the specific points in the plots. The results show a different behavior of the quadrature-based projection methods compared with the exact projections. The emerging system of PDEs is always hyperbolic here: as there is no variable transformation involved, the PDEs are linear in the unknowns α_i with constant coefficients. As we will see in the next chapters, the hyperbolicity of the system depends on the variables themselves once we perform a transformation of the velocity variable.
This chapter is divided into two sections. The first section deals with a transformation of the velocity variable in the Boltzmann equation to allow for physical adaptivity of our discretizations. The result is a transformed Boltzmann equation that is coupled to the macroscopic flow variables by so-called compatibility conditions. In the second section we develop the abstract framework to understand and prove hyperbolicity of the emerging PDE system. After the derivation of hyperbolicity conditions, we show that these conditions hold in the case of a Hermite ansatz in one dimension.

Preliminaries
The construction of our conceptual framework relies on the fact that the solution of the Boltzmann transport equation still poses difficult challenges for modern methods, because the discretization of the velocity space adds to the complexity of the scheme. The velocity space is in general unbounded and, especially in real applications, high-dimensional due to a lack of symmetry. This leads to major drawbacks for standard discrete velocity methods (see also Section 2.3.4), because the number of discrete velocities needed to accurately resolve the physical domain makes the scheme computationally very expensive. Other methods like DSMC (compare Section 2.3.1) are only applicable for reasonably high Knudsen numbers.
Here we want to follow the philosophy of moment methods, where a small number of variables (the so-called moments) describes the behavior of the flow with sufficient accuracy. We are in principle deriving sets of closed partial differential equations which model rarefied gas flows. The set of equations includes the balance laws for mass, momentum and energy and the closure is done by a coupling of the heat flux to the rest of the equations.
We closely follow the approach proposed by Kauf in [17], but perform the derivations in a general d-dimensional setting and take a closer look at the emerging equations in order to examine stability properties of the PDE system. The central ideas of our work are summarized in the following sections.
First, we show how a transformation of variables combined with a flexible expansion allows for an efficient solution of the Boltzmann equation. We will also cover the problem of a closure of the emerging system by so-called compatibility conditions. Later, we want to establish the theoretical framework that enables us to apply certain quadrature-based projection methods and achieve hyperbolic equations.

Variable Transformation for Physical Adaptivity
One of the reasons why the Boltzmann equation is so difficult to solve is that a discretization requires a very large mesh, because the mean velocity of the flow can in principle attain very large values in magnitude and even for moderate mean velocities we can observe large microscopic velocity values. It is nevertheless possible to rescale the equation, such that the mean velocity is zero and the microscopic velocity is always of the order O(1). One then only has to consider a relatively small domain for the discretization of the velocity space and can still capture most of the relevant effects.
As in [17], the velocity is transformed in a highly non-linear way to allow for intrinsic physical adaptivity of the scheme. As a probability distribution, the density function f usually extends over a large domain in velocity space, but in some cases it can also exhibit a very small region of non-negligible values. The position and size of this region are connected to the mean and the variance of the distribution function, which correspond to the velocity v and the temperature θ. A reasonable transformation now shifts the microscopic velocity c by its macroscopic counterpart v and scales by the standard deviation √θ, such that the transformed density function is centered around a transformed mean of zero and has unit variance:

ξ := (c − v)/√θ. (5.1)
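To see the effect concretely, consider the one-dimensional Maxwellian from (2.9); the following brief worked example (written in a standard normalization, since the exact form of (2.9) is not reproduced in this excerpt) shows how it behaves under the shift-and-scale substitution ξ = (c − v)/√θ:

```latex
% 1D Maxwellian in the original velocity c:
f_M(c) = \frac{\rho}{\sqrt{2\pi\theta}}
         \exp\!\left(-\frac{(c-v)^2}{2\theta}\right).
% Evaluating at c = v + \sqrt{\theta}\,\xi gives
f_M = \frac{\rho}{\sqrt{2\pi\theta}}
      \exp\!\left(-\frac{\xi^2}{2}\right),
% i.e. a Gaussian profile centered at \xi = 0 with unit variance,
% regardless of the local values of v and \theta.
```

In the transformed variable, the equilibrium distribution thus always has the same standardized shape, which is precisely what makes the fixed quadrature domain below feasible.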
Now ξ can be seen as a shifted and scaled (or dimensionless) velocity variable. The distribution function f(ξ) is usually close to zero already for moderate values of ξ and is centered around ξ = 0 because of the velocity shift in the definition of ξ.
Note that the transformation is more complicated than it looks like, because the variables involved are in fact moments of the distribution function, see (2.3) and (2.6).
The transformation to ξ enables a better approximation quality, as we can see from a 1D example in Figures 5.1a and 5.1b.
Using the transformed variable ξ, the distribution function f is centered around zero and decays rapidly already for moderate values of ξ. Therefore, an accurate resolution only requires a limited number of point values for ξ inside the domain [−3, 3], for example. It is important to mention that the distribution function retains this general form during the calculation, while the velocity v and the temperature θ change.

Derivation of Transformed Boltzmann Equation
We will now see how the Boltzmann equation changes when we apply the transformation from above. As a first example, we consider the 1D version of (5.2) and perform the transformation as specified in (5.1). According to Kauf [17], the derivatives change because of the chain rule, and the Boltzmann equation turns into the slightly more complicated equation (5.3) (using the convective time derivative D_t := ∂_t + v∂_x).

The full multi-dimensional case is essentially a consistent extension of the simple 1D case from above. The d-dimensional Boltzmann equation (5.2) is transformed using the change of variables from above and results in (5.4).

Note the difference between the original version (5.2) and the new transformed versions (5.3) and (5.4): additional terms depending on v and θ appear due to the transformation to the velocity variable ξ. If we want to utilize the advantageous adaptive formulation from above, we have to deal with these additional terms. Note that the density ρ has not yet entered the equations; this happens only after the insertion of the ansatz specified in Subsection 5.1.3, as the ansatz also depends on ρ.
A simpler transformation takes only the velocity shift into account and does not use the temperature scaling; it thus reads ξ := c − v. This leads to a simplified version of (5.4) without any appearance of the temperature. The equation can be obtained by calculations analogous to those above, or by setting θ ≡ 1 in Equation (5.4); the result is the shifted Boltzmann equation (5.6). When it comes to the compatibility conditions (see Section 5.1.4), Equation (5.6) only needs the conditions for the velocity, because the density ρ and the temperature θ do not enter the equation. This equation is thus easier to analyze, but still exhibits the general problems we want to overcome.

Expansion Using Basis Functions
In addition to the transformation of the velocity variable, we propose an ansatz similar to the one Kauf used in [17]. The expansion of the unknown distribution function f is done around its equilibrium distribution, which is a Maxwellian (compare (2.9)). This is sensible, as in non-equilibrium we usually observe perturbations of the Maxwellian distribution. The specific form of the perturbations is modeled by the coefficients of a set of arbitrary basis functions Φ_i. The expansion in d dimensions thus reads

f(t, x, c) = ρ θ^{-d/2} Σ_i α_i(t, x) Φ_i(ξ), (5.7)

with the transformed velocity ξ from (5.1); Equation (5.8) denotes the same expansion in another notation. This type of expansion uses the information about the overall shape of the distribution function, which is close to a Maxwellian most of the time. Deviations from that shape should be efficiently expressed by the basis functions Φ_i.
The type of basis function is not yet specified here. The Φ_i can in principle be any functions, e.g. polynomials, spherical harmonics, wavelets or splines. This leaves the framework flexible in the later choice of basis.
Using the notation from Equation (5.8), the derivatives D_t f and ∂_{x_j} f expand by the chain rule into partial derivatives with respect to ρ, v, θ and the coefficients α_i. Inserting these expressions for D_t f and ∂_{x_j} f, we obtain the transformed equation in terms of the basis coefficients, where all derivatives can be seen as partial derivatives; the prefactor ρθ^{-d/2} has been factored out and we multiplied by its inverse to simplify the equation.

Compatibility Conditions
Since the macroscopic variables ρ, v and θ are already included in the ansatz (5.7), we need additional relations in order to end up with a closed system of equations later. We therefore insert the expansion (5.7) into the definition of the macroscopic variables from Section 2.1.3. We only consider dimensionless masses of the particles, i.e. m = 1.

1D Case
To begin with the 1D case, we have to compute integrals over the one-dimensional velocity space. For the transformation of the integrals, we use dξ = dc/√θ. For the density, this yields ρ = ∫ f dc = ρ ∫ Σ_i α_iΦ_i(ξ) dξ. The definition of the macroscopic velocity results in the second condition, and third, we use the energy to derive the last condition. Summarized, our three compatibility conditions for one-dimensional problems read

Σ_i α_i ∫ Φ_i(ξ) dξ = 1,  Σ_i α_i ∫ ξ Φ_i(ξ) dξ = 0,  Σ_i α_i ∫ ξ² Φ_i(ξ) dξ = 1. (5.12)

It is important to point out that these conditions are independent of the particular choice of the basis functions Φ_i. With the help of specific basis functions, the conditions (5.12) translate into conditions for the basis coefficients α_i(t, x).
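As a concrete check of this structure, the following sketch evaluates the three moment integrals for a probabilists' Hermite basis with the Gaussian weight folded into the basis functions (an illustrative normalization; the thesis' normalized polynomials may differ by constant factors), using Gauss-Hermite quadrature from NumPy:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss-Hermite nodes/weights for the weight exp(-xi^2/2);
# 8 points integrate polynomials up to degree 15 exactly.
xi, w = hermegauss(8)
w = w / np.sqrt(2 * np.pi)  # normalize the weight to a probability density

def moment(m, i):
    """Integral of xi^m * He_i(xi) against the standard Gaussian density."""
    coeff = np.zeros(i + 1)
    coeff[i] = 1.0  # select the i-th probabilists' Hermite polynomial
    return np.sum(w * xi**m * hermeval(xi, coeff))

# Rows: density, velocity and energy conditions applied to He_0 .. He_3.
Q = np.array([[moment(m, i) for i in range(4)] for m in range(3)])
```

Up to round-off, Q equals [[1,0,0,0],[0,1,0,0],[1,0,2,0]], so for this basis the conditions only constrain the lowest coefficients: α_0 = 1, α_1 = 0 and α_0 + 2α_2 = 1.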

3D Case
Similarly to the one-dimensional case, we need to derive compatibility conditions for the basis coefficients. The only difference is that we now need to perform three-dimensional integrals over the whole velocity space, as specified in the definition of the macroscopic quantities. For the transformation of the integrals, we now use dξ_i = dc_i/√θ. We start with the density and proceed analogously for the three velocity components. The definition of the energy gives the third condition (the index i uses the summation convention). We see that the five compatibility conditions are in fact a consistent extension of the one-dimensional conditions (5.12). For a specific choice of basis functions, these conditions will impose conditions on the basis coefficients of the expansion ansatz.

Coupling of Compatibility Conditions to PDE System
After the derivation of the compatibility conditions (5.13), we have a set of PDEs and a set of algebraic equations for the variables α_i as well as ρ, v and θ. The total set of equations is closed with respect to the variables involved, but one still has to consider how to incorporate the algebraic equations into the PDE system. There are several different approaches. Kauf [17] already mentioned a projection method that numerically projects the ansatz space onto the admissible space by minimizing the distance; for the resulting optimization problem, one could apply the various methods described in [8], for example. Another method proposed by Kauf is to construct a new basis by linear combinations of the old basis functions such that the new basis satisfies the compatibility conditions. Both approaches reduce the dimension of the ansatz space, so that the PDE system can be used solely with the new basis.
In contrast to our procedure, Kauf replaces the total derivatives of the macroscopic variables ρ, v and θ by the conservation laws and then adds the conservation laws to the set of equations, together with a coupling through the heat flux.
Instead, we aim to proceed in a different way: we use the algebraic compatibility conditions to eliminate a set of basis coefficients of the expansion ansatz and then add the macroscopic variables in their place. This leaves us with a reduced set of basis coefficients α_i and a closed system of, in general, n variables and n PDEs.
In the general case, we can incorporate the compatibility conditions into the PDE system by the following procedure, which is related to the treatment of constrained ODEs, the so-called differential-algebraic equations (DAEs, compare [11]): Consider a specific ansatz f(t, x, c) = Σ_{i=0}^n α_i(t, x)Φ_i(c) for the unknown distribution function and a given quadrature formula for the calculation of the integrals in the compatibility conditions (5.13). Assuming a total number of l conditions, the compatibility conditions form an underdetermined system of equations

Qα = g, with Q ∈ R^{l×(n+1)}, g ∈ R^l. (5.14)

Now we perform a decomposition of the matrix Q, for example a singular value decomposition, such that the matrix Q can be written as in (5.15). It is then possible to rewrite the discretized compatibility conditions in terms of transformed variables β_1 ∈ R^l and β_2 ∈ R^{n−l}, so that we can directly solve for l variables of β, namely the first l variables β_1.
As a decomposition as in (5.15) is always possible, we can always eliminate the necessary number of variables from our unknown vector α (or β, respectively) and thereby close the system of PDEs by inserting (5.19) for a proper splitting of the columns of T^{-1} = (T_1^{-1}, T_2^{-1}). With the help of the compatibility conditions, the set of n + 1 basis coefficients α is thus reduced to a set of n − l variables β_2, which leads to a closed PDE system with the same number of equations and unknowns.
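A minimal numerical sketch of this SVD-based elimination (with hypothetical sizes l = 3 conditions and six coefficients, and a random full-row-rank Q) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
l, m = 3, 6                        # l conditions, m = n + 1 coefficients
Q = rng.standard_normal((l, m))    # assumed to have full row rank
g = rng.standard_normal(l)

# SVD: Q = U @ diag(s) @ Vt. In the variables beta = Vt @ alpha the
# conditions read diag(s) @ beta_1 = U.T @ g, fixing the first l entries.
U, s, Vt = np.linalg.svd(Q)
beta1 = (U.T @ g) / s              # the l eliminated variables
V = Vt.T

def alpha(beta2):
    """Reassemble the full coefficient vector from the free variables beta2."""
    return V[:, :l] @ beta1 + V[:, l:] @ beta2

# Any choice of the remaining free variables satisfies the conditions.
beta2 = rng.standard_normal(m - l)
assert np.allclose(Q @ alpha(beta2), g)
```

The free part beta2 plays the role of the reduced coefficient vector β_2 that remains in the closed PDE system.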
The concrete compatibility conditions for 1D Hermite ansatz functions are used in Section 5.2.3.3 and an example for compatibility conditions in a three-dimensional setting with either spherical harmonics and Laguerre polynomials or Hermite ansatz functions can be found in the Appendix A.

Theoretical Concept
Using the ansatz in Equation (5.7) and the Boltzmann equation as written in Equation (5.3) together with the coupling compatibility conditions, we can in principle decide on a specific basis for the ansatz space and then project the equations using a second set of test functions. As we will see later, the choice of the projection method will have a major impact on the structure of the equations as well as on their stability properties. The result after the projection is a set of equations for the reduced basis coefficients α i and ρ, v, θ. Now we turn our attention towards the projection method and the systematic investigation of the emerging PDEs.
Our aim is to work towards a framework that enables the solution of the Boltzmann Transport Equation for real applications. We want to be able to choose from different types of basis functions for the ansatz and test space and to specify a projection method of choice in order to derive the PDE system for the coefficients with incorporated compatibility conditions. The question of hyperbolicity of the system should be answered a priori by the particular choices we have made. It is crucial to end up with a hyperbolic system, because a loss of hyperbolicity would lead to completely unphysical behavior of the solution.
Here is where the quadrature-based projection methods come into play. Unlike standard projection methods employing exact integration over the velocity space, these quadrature-based methods can modify the equations in exactly the right way to yield global hyperbolicity even for the transformed equation, as we will see. It is, however, not known a priori which specific quadrature method together with which specific basis functions leads to the desired hyperbolicity.
We therefore want to develop a theory that gives us necessary and sufficient conditions for hyperbolicity. The framework depends largely on the considered PDE, and we will start with a simple so-called generalized kinetic equation. The similarity of this equation to the transformed Boltzmann equation (5.3) allows us to obtain comparable results in a still rather straightforward setting.
For the case of the standard Boltzmann equation with BGK collision operator (without velocity shift or variable transformation), Shan and He [20] have already shown the equivalence of an LBM (which is very similar to a DVM) and an expansion using Hermite functions when evaluating the collision operator at the nodes of Hermite quadrature. We go much further by considering the transformed equation (5.4) and observe a comparable structure of the emerging system of equations with respect to the DVM.
As discrete velocity methods are hyperbolic by construction, compare Section 2.3.4, we will start with the DVM and then draw an analogy using quadrature-based projection methods.

Generalized Kinetic Equation
First, we develop the theory for the so-called generalized kinetic equation in d dimensions, which reads

∂_t f(t, x, c) + Σ_{i=1}^d a_i(c) ∂_{x_i} f(t, x, c) = 0 (5.20)

for d real-valued functions a_i(c), the so-called advection velocities.

Discrete Velocity Method
The following Lemma 5.2.1 shows that the DVM indeed leads to a hyperbolic system of equations of a very simple form.

Quadrature-based Projection
The major difference between exact and quadrature-based projection methods is that quadrature rules use point evaluations of the integrand. Therefore, the resulting system has to be related to the discrete velocity method somehow. The relation is stated by the following lemma: Let f(t, x, c) be a probability density function. Furthermore, let c_k ∈ R^d, k = 1, …, n, be discrete quadrature points with corresponding quadrature weights w_k ∈ R. Let Φ_i : R^d → R and Φ̃_i : R^d → R be the ansatz and test functions, respectively (i = 1, …, n). The expansion of f using the summation convention reads f(t, x, c) = α_j(t, x)Φ_j(c).

Proof. After insertion of the ansatz (5.27) into Equation (5.20) and using a discrete, quadrature-based projection with the quadrature formula specified by the c_l and w_l and the test functions Φ̃_k, the system reads (5.28) for k = 1, …, n. Now we define α := (α_1, …, α_n)^T ∈ R^n (5.29), the matrices B and B̃ of ansatz and test function evaluations at the quadrature points, and the diagonal weight matrix W. The system can then be written in the form (5.33). B and B̃ are invertible by assumption, and W is also invertible as a diagonal matrix of non-zero quadrature weights (quadrature weights are usually positive). Thus we can multiply by their inverses. The generalized system matrix for a direction β with ‖β‖_2 = 1 is then similar to a diagonal matrix and has real eigenvalues λ_k = β_i a_i(c_k), k = 1, …, n.

Remark. With the help of the matrix B, we can relate the evaluations f_i(t, x) of the DVM and the coefficients α_j(t, x) of the quadrature-based projection method by the simple equation f = Bα (5.38). There is thus a one-to-one relation between the two sets of variables if the matrix B is invertible and the discrete velocities are equal to the quadrature points c_k. The evaluation of the distribution function f at the discrete velocities c_k is simply a linear combination of the coefficients of the quadrature formulation.
If we indeed only multiply (5.33) by (B̃^T W)^{-1} and use the relation (5.38), we obtain Equation (5.24). This shows that the quadrature method is in fact equivalent to a pointwise evaluation of the equation.
Remark. It is important to note that the eigenvalues do not depend on the specific choice of basis functions for the ansatz and test space. As long as the matrices B and B̃ are invertible, the eigenvalues only depend on the evaluations of the advection functions a_i(c) at the quadrature points and the respective direction β.
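The lemma's conclusion can be checked numerically. The sketch below is an illustration (not the thesis' construction): it uses a monomial basis in 1D with identical ansatz and test space, so B is a Vandermonde matrix, and verifies that the projected system matrix has exactly the advection speeds a(c_k) as real eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
c = np.sort(rng.standard_normal(n))      # quadrature points = discrete velocities
B = np.vander(c, n, increasing=True)     # B[k, j] = c_k**j (monomial ansatz)
a = c                                    # advection velocity a(c) = c at the nodes

# System matrix of the projected equations: B^{-1} diag(a) B is similar
# to diag(a), hence it has real eigenvalues regardless of the basis.
A = np.linalg.solve(B, np.diag(a) @ B)
lam = np.linalg.eigvals(A)

assert np.allclose(lam.imag, 0, atol=1e-7)
assert np.allclose(np.sort(lam.real), a)
```

Swapping the monomials for any other invertible basis matrix leaves the spectrum unchanged, which is exactly the statement of the remark.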

Shifted Boltzmann Equation
The transformed Boltzmann equation (5.4), or (5.11) respectively, is considerably harder to analyze because of the coupling with the macroscopic variables ρ, v and θ. We therefore first consider the shifted Boltzmann equation (5.39) (compare with (5.6)) and perform essentially the same procedure as before (compare Section 5.2.1), with the notable difference of the additional v-dependence and some algebraic constraints. An extension to the fully transformed Equation (5.4) is then straightforward and only requires some technical arguments and definitions, as can be seen in Section 5.2.3. As we want to draw an analogy between discrete velocity methods and the quadrature-based projection methods, we proceed similarly to the previous Section 5.2.1 and first derive the complete system of equations (including the algebraic coupling with the compatibility conditions) for the DVM, followed by the quadrature-based methods in a second step.

Discrete Velocity Method
For a set of discrete velocities ξ_k ∈ R^d, k = 1, …, n, we consider pointwise evaluations of Equation (5.39). This leads to the relation (5.40) for the unknowns f_k (k = 1, …, n) (see also Equation (5.22)). Again, we define f := (f_1, …, f_n)^T ∈ R^n (5.41); the evaluation of the derivatives is for now summarized by the matrix Df (5.43). The unknowns v and f are combined into one vector u, which allows us to write system (5.40) in the compact notation (5.45). The system (5.45) states n equations for the n + d unknowns u. As additional relations, we have the compatibility conditions (5.46) for the velocity. In total, we then obtain n + d equations for n + d unknown variables, so the system seems sensible as a system of equations for the unknowns. However, it is unclear how to evaluate the derivative terms in the definition of Df (5.43): only point values of the distribution function f are available, and there is no natural way to define a derivative, or a point evaluation of this derivative, at the discrete velocities. In general, one could use simple finite differences on the point values, or reconstruct from the point values and evaluate the derivative of the reconstruction at the discrete velocities. We will get back to this problem later.
We will now make an attempt to incorporate the compatibility conditions (5.46) into the PDE system (5.45) by directly eliminating d unknowns from the system. This method can also be seen as an exact subspace projection, provided that such a solution exists.
Using the integral version of Equation (5.46), we are able to rewrite the system (5.45) with only n variables as in (5.51). As we can see, system (5.51) now includes only n variables and consists of n equations. Analogously to Lemma 5.2.2, hyperbolicity of the system is guaranteed if the matrix in front of (D/Dt)u is invertible. We do not want to discuss the calculation of Df in great detail for the moment; one could simply use a finite difference as an approximation of the derivatives, but the effect on the system remains to be investigated. The calculation of the corresponding term will be more straightforward in the quadrature-based case, which we cover now.

Quadrature-Based Projection
Again, we consider the shifted Equation (5.39) in d dimensions and want to make use of a quadrature-based projection to arrive at a similar system like (5.51) in the end.
First, we use the ansatz f(t, x, ξ) = α_j(t, x)Φ_j(ξ) (5.52) with basis coefficients α_j and ansatz functions Φ_j (summation convention). We then need a quadrature formula to compute integrals of an arbitrary function g,

∫ g(ξ) dξ ≈ Σ_{k=1}^N w_k g(ξ_k), (5.53)

with weights w_k and quadrature points ξ_k for k = 1, …, N. We will later only consider the case n = N, so that there is a one-to-one relation between the basis coefficients and the quadrature rule. Note that the weighting function used before is now hidden in the definition of the Φ_j or g, respectively. After insertion of (5.52) into Equation (5.39) and multiplication with a test function Φ̃_m, we can apply the quadrature-based projection according to Equation (5.53) and obtain the system of equations (5.54) for m = 1, …, n.
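For reference, an N-point Gaussian rule integrates polynomials against its weight exactly up to degree 2N − 1; a quick check with NumPy's Gauss-Hermite rule for the weight e^{−ξ²/2}:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

N = 4
xi, w = hermegauss(N)   # exact for polynomial degrees <= 2N - 1 = 7

# Even moments of the weight: the integral of xi^(2m) e^(-xi^2/2) dxi
# equals sqrt(2*pi) * (2m - 1)!! for m = 0, 1, 2, ...
for m, double_factorial in [(0, 1), (1, 1), (2, 3), (3, 15)]:
    assert np.isclose(np.sum(w * xi**(2 * m)),
                      np.sqrt(2 * np.pi) * double_factorial)
```

This exactness is what makes the discrete projection below equivalent to the exact one whenever the integrands are polynomials of sufficiently low degree.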
In order to write this system in matrix-vector form, we make the following definitions analogous to the non-shifted case: α := (α_1, …, α_n)^T ∈ R^n (5.55), the matrix of basis function evaluations B := (Φ_j(ξ_k))_{k=1,…,N, j=1,…,n} ∈ R^{N×n} (5.57), and the analogous test function matrix B̃. The evaluation of the derivatives with respect to ξ_i goes into the definition of the matrices D_iB. With the help of these definitions, we can write Equation (5.54) in matrix-vector form and obtain the full system (5.63) of n equations for the n + d unknowns u. Provided that the matrices B̃^T and W are invertible (this is only possible for N = n), we can reduce the system (5.63) to the simpler version (5.64), which does indeed look very similar to the DVM case (5.45). System (5.64) gives us n equations for n + d unknowns u. To close the system, we employ the velocity compatibility conditions (5.46) as before and obtain d additional relations.
In contrast to the DVM case, we now have a system in which we can in principle easily evaluate each expression and directly compute solutions via numerical methods for constrained PDEs, because the derivatives inside the matrix DB can be calculated, once a specific basis for the ansatz space is chosen.
Continuing in the fashion of the DVM approach, we directly want to build the constraints into the system in order to reduce the dimension of the unknown vector u and obtain a square system to analyze. Using the quadrature method proposed above, we can directly evaluate the integrals in (5.46) and obtain discrete relations between the basis coefficients in α. Similar to the DVM case, we write the emerging system as Qα = 0, i.e. g = 0, with Q ∈ R^{d×n}. Splitting up the variables as shown in Section 5.1.4.3, we have β_1 = 0, because β_1 = Q^{-1}S^{-1}g and g = 0. We then insert the new variables into the system (compare to (5.19)). Inserting this into the PDE system (5.64) yields, after the corresponding definitions, the closed system (5.69) of n equations for a total number of n unknown variables. Hyperbolicity is obtained if the matrix (−DB, B) in front of the time derivative is regular and thus invertible, because the system matrix then behaves just like in the simpler cases (see Section 5.2.1).
After these lengthy calculations, we can now summarize the conditions that have to be fulfilled to arrive at a hyperbolic PDE system using the quadrature-based projection method:

(1) regularity of the matrix B,
(2) regularity of the matrix W, or equivalently w_i ≠ 0 for all i = 1, …, N,
(3) regularity of the matrix (−DB, B), including rank DB = d and rank B = n − d. (5.70)

The conditions (5.70) show that it is sufficient to analyze the involved matrices for regularity in order to prove hyperbolicity. The choice of the ansatz functions together with the compatibility conditions are the necessary ingredients for the derivation.
A verification of these conditions (5.70) in the case of Hermite functions for the ansatz space in one dimension is given in the following Section 5.2.2.3.
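Once a basis and quadrature rule are fixed, the three regularity conditions lend themselves to a direct numerical check. A generic helper might look like this (function names and the tolerance are our own, not from the text):

```python
import numpy as np

def is_regular(M, tol=1e-10):
    """A square matrix is regular iff its smallest singular value is nonzero."""
    s = np.linalg.svd(M, compute_uv=False)
    return bool(s[-1] > tol)

def satisfies_5_70(B, w, DB_B):
    """Check conditions (1)-(3): regular B, nonzero weights, regular (-DB, B)."""
    return is_regular(B) and bool(np.all(np.abs(w) > 0)) and is_regular(DB_B)
```

For instance, passing identity matrices and unit weights returns True, while a rank-deficient third block (e.g. a matrix of all ones) fails condition (3).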

Hyperbolicity of Shifted BTE and Hermite Ansatz
The previous Section 5.2.2.2 allows us to construct hyperbolic PDE systems for the solution of the shifted Boltzmann equation. We want to use this rather abstract setting and show an example in which we can apply the framework to actually obtain the resulting hyperbolic system.
With the concrete conditions (5.70) at hand, we now want to check, whether the specific choice of Hermite functions for the ansatz and test space satisfies the conditions so that we arrive at a hyperbolic system of equations.
To shorten notation, we consider the one-dimensional case d = 1 (one-dimensional x and v). We use the expansion f(t, x, ξ) = α_i(t, x)Φ_i^H(ξ) of the unknown distribution function, where the Φ_i^H are the normalized Hermite polynomials from Section 3.1.2. The expansion can be interpreted by taking Φ_i = Φ_i^H in the setting of the previous calculations in Section 5.2.3.
To be consistent with this choice of basis functions, we use Gauss-Hermite quadrature with N = n, where the quadrature points ξ_k are the zeros of Φ^H_{N+1} and the w_k are the associated weights. Now we have to check the conditions (5.70) one after another. The matrix B can be written as a product (5.73) of two matrices: the matrix E is regular as a diagonal matrix with non-zero diagonal entries, and the other factor is actually the same matrix as in (3.59) for the case of Hermite polynomials, where it is used to compute the quadrature weights. As the quadrature weights w_k correspond to Gauss-Hermite quadrature, this matrix has to be invertible, and hence B is invertible as well.
For later use, we write B columnwise with b_j = (B_{i,j})_{i=0,…,n} (5.74). The weighting matrix W is also invertible, because its diagonal entries are the quadrature weights, and these are guaranteed to be positive for Gauss-Hermite quadrature.
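Both claims are easy to verify numerically for small n, here using NumPy's probabilists' Hermite routines as a stand-in for the normalized polynomials of Section 3.1.2:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

n = 6
xi, w = hermegauss(n)               # nodes are the zeros of He_n
assert np.all(w > 0)                # condition (2): W is regular

# B[k, j] = He_j(xi_k): Vandermonde-like in the distinct nodes, hence regular.
B = np.column_stack([hermeval(xi, np.eye(n)[j]) for j in range(n)])
assert np.linalg.matrix_rank(B) == n  # condition (1): B is regular
```

Since the He_j differ from the monomials by a triangular change of basis, the determinant of B equals a Vandermonde determinant in the distinct quadrature nodes, which is nonzero.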
The last and slightly more difficult part is to prove the regularity of the matrix (−DB, B). Here we first have to calculate the derivatives of the ansatz functions, where we use (3.13) and the definition of the basis functions (5.72). The matrix D_1B can then be derived explicitly, or in terms of the columns from (5.74). Note that the last column of D_1B, or b_{n+1}, is in fact zero, because the ξ_i are exactly the roots of Φ^H_{n+1}. Now we have to figure out which variable α_i we can solve for in order to reduce the number of variables of the system by one. In the case of Hermite polynomials with the respective additional weighting function, the compatibility condition for the velocity reduces to a very simple relation for the second basis coefficient:

α_1 = 0. (5.79)

In the framework of Section 5.1.4.3, we can write Q = (0, 1, 0, …, 0) for this special case of Hermite basis functions. As T is the identity here, the application of T_2^{-1} to a matrix is equal to deleting the second column of that matrix.
Thus, the other matrices B and D_1B can be written without their respective second column (remember that b_{n+1} = 0). The combined matrix DB is here actually a vector, and together this yields the matrix (−DB, B) as in (5.90). Regularity of the matrix from Equation (5.90) demands that it has column rank n, which holds if α_0 ≠ 0; otherwise, the first column is a linear combination of the last n − 1 columns. The condition α_0 ≠ 0 is only technical in this calculation, because in a fully shifted and scaled equation there would be additional compatibility conditions requiring, for example, α_0 = 1 in the Hermite case.
We can thus conclude that the system of PDEs is hyperbolic for non-vanishing α_0.

Fully Transformed Boltzmann Equation
The previous Section 5.2.2 shows that we achieve a hyperbolic system if we use Hermite polynomials for the expansion of the distribution function and perform the corresponding Gauss-Hermite quadrature for the projections, at least in the case of the shifted Boltzmann equation. Now we turn our attention to the fully transformed Boltzmann equation with the scaled ansatz specified in Equation (5.8). We therefore consider Equation (5.11) and again want to derive conditions for the hyperbolicity of the emerging system.
The PDE we analyze is Equation (5.91). We will first use a DVM approach and derive the corresponding system of equations, and later pass to the quadrature-based projections by some simple redefinitions. We will further drop the decoration on the distribution function and simply write f to ease notation.

Discrete Velocity Method
This section is somewhat analogous to Section 5.2.2.1 with the difference that we now consider the fully transformed Boltzmann equation (5.91) from above in d dimensions.
For a set of discrete velocities ξ_k ∈ R^d, k = 1, …, n, we consider pointwise evaluations of Equation (5.91). This leads to the relation (5.92) for the unknowns ρ, v_j, θ, f_k (k = 1, …, n). Again, we define the vector of point values f and, for the derivatives, the matrix DF, with D_jF := j-th column of DF (5.97), as well as a diagonal matrix Λ storing the scaling variables (5.98). This allows us to write the system emerging from (5.91) in the compact notation (5.99). The system (5.99) states n equations for the n + d + 2 unknowns u. As additional relations, we have the compatibility conditions for the density, velocity and temperature. In total, we then obtain n + d + 2 equations for n + d + 2 unknown variables.
As in the shifted case, it is again unclear how to evaluate the derivatives in the definition of DF, because no continuous or differentiable reconstruction of the distribution function is available for the DVM. This problem does not arise for the quadrature-based projections, because we have an expansion of f that is differentiable due to its basis representation.

Quadrature-Based Projection
Instead of deriving the whole system from the governing PDE (5.91) using the expansion (5.7), we proceed by redefining the matrices in the system (5.99).
We use the ansatz f(t, x, ξ) = α_j(t, x)Φ_j(ξ) with basis coefficients α_j and ansatz functions Φ_j. For the projections, we multiply by a test function Φ̃_j and integrate over the whole velocity space. We then need a quadrature formula to compute the integrals of an arbitrary function g, with weights w_k and quadrature points ξ_k for k = 1, …, n.
We first need to introduce some notation again: the matrix of basis function evaluations B := (Φ_j(ξ_k))_{k=1,…,n, j=1,…,n} ∈ R^{n×n} (5.105) and its analogues. The evaluation of the derivatives goes into the definition of the matrix DB. With the previous definitions, the DVM and the quadrature-based method are linked by the relations (5.108), which are easy to verify. Plugging the Equations (5.108) into (5.100), we obtain the new matrix (5.109) for the quadrature-based projections, and the whole system reads, for the unknowns

u := (ρ, v_1, …, v_d, θ, α_1, …, α_n)^T ∈ R^{n+d+2}, (5.111)

as stated above. Note that the multiplication with B̃^T W is due to the quadrature rule, which is employed after the multiplication with Φ̃_j.
We can see that the system is quite similar to the one obtained for the shifted Boltzmann equation in (5.69).
Next is the addition of the compatibility conditions into the system. As before, we aim at an unconstrained system of size n for n unknown variables rather than a system of n equations for n + d + 2 variables closed by d + 2 algebraic relations.
As in the shifted case, we evaluate the integrals in the compatibility conditions (5.46) and obtain d + 2 equations for the basis coefficients in α.
Writing the emerging system as Qα = g (compare (5.14)) with Q ∈ R^{(d+2)×n}, we can decompose the matrix Q by means of a singular value decomposition as described in Section 5.1.4.3. Splitting up the variables as shown above, we have β_1 = Q^{-1}S^{-1}g. We then insert our new variables into the system (compare to (5.19)). As β_1 is constant, we now only need to insert this into the PDE system (5.64) and obtain a closed set of n equations for the n variables u = (ρ, v^T, θ, β_2^T)^T. After the corresponding definitions, the matrix A_Q transforms accordingly, and we arrive at the final system. Again, the result is very similar to the previous case of the shifted Boltzmann equation in Section 5.2.2. Checking the regularity of the matrices in the conditions (5.120), we can prove hyperbolicity of the emerging PDE system. We only need to choose a specific basis for the ansatz space and use the compatibility conditions to reduce the number of unknowns in the described way.
In one dimension, a proof of hyperbolicity is given in the following Section 5.2.3.3 for the case of Hermite ansatz functions.

Hyperbolicity of Fully Transformed BTE and Hermite Ansatz
In Section 5.2.2.3, we have shown that the reduced PDE system is in fact hyperbolic for the specific choice of Hermite polynomials as ansatz and test functions together with suitable Hermite-Gauss quadrature as the projection method for the shifted Boltzmann equation. We now want to do the same for the one-dimensional (in x and v) fully transformed Boltzmann equation using the conditions and notation from the previous sections.
In the following, we thus check the conditions (5.120). We use essentially the same ansatz as before (but for f), where Φ_i^H are the normalized Hermite polynomials from Section 3.1.2; the expansion can be interpreted by taking (5.8) with this choice of basis. From the discussion above, we can already guarantee the regularity of B and W, as we are using Gauss-Hermite quadrature.
The matrix Λ is regular if and only if ρ ≠ 0 and θ ≠ 0. These conditions are natural, because the ansatz (5.8) would not make sense otherwise.
It remains to check the regularity of the matrix A, which we do in the following.
We proceed analogously to Section 5.2.2.3 up to the solution of the compatibility conditions. In one spatial dimension, we have three constraints to fulfill, from which we get three explicit values for α_0, α_1 and α_2. According to Section 5.1.4.3, we use (exact) Gauss-Hermite quadrature to evaluate the integrals in (5.123) and obtain a linear system of equations Qα = g. By application of the quadrature rule, we identify the corresponding terms; this is equivalent to setting (α_0, α_1, α_2) = (1, 0, 0). As T = I_n, applying T_2^{-1} to a matrix amounts to deleting its first three columns, and multiplying with T_1^{-1} amounts to extracting the first three columns.
This leads to the matrix D_1 B_β (note that b_{n+1} = 0). The term A_j D_j B_β also needs special attention; it can be derived using the recursions from Equations (3.13) and (3.14) (5.137). With their help, we can write the vector A_1 D_1 B_β as a sum of the z_i α_i. In the expression A_j D_j B_β, the first three α_i are replaced by 1, 0, 0, which leads to the equation for the missing vector. It remains to check linear independence. As has been discussed above, every column of A is a linear combination of some of the b_i. We can therefore prove linear independence by checking which columns b_i appear in which column of A: in total, no more than n columns b_i may be involved, and no column of A may be linearly dependent on another column of A.
In detail:
(1) B has columns b_3, ..., b_n.
(2) DB_β is a combination of the columns of B minus an additional b_1 and thus linearly independent of B.
(3) B_β is a combination of columns of B plus b_0 and thus linearly independent of the others.
(4) A_1 D_1 B_β is a combination of columns of B minus b_0 and √2 b_2 and thus linearly independent of the others.
The matrix A therefore has full rank and is regular. The PDE system is then hyperbolic for all values of the α_i. We see that the quadrature-based projection methods in fact lead to a globally hyperbolic system for the unknown basis coefficients and macroscopic variables.
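The structural reason behind this result can be illustrated numerically: using the Hermite recursion, multiplication by ξ acts on the normalized basis as a symmetric tridiagonal Jacobi matrix, whose eigenvalues are exactly the Gauss-Hermite nodes and hence real and distinct. A minimal check (our notation, not the thesis code):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

n = 6
# Jacobi matrix of multiplication by xi in the normalized physicists'
# Hermite basis: xi*phi_k = sqrt((k+1)/2) phi_{k+1} + sqrt(k/2) phi_{k-1}
off = np.sqrt(np.arange(1, n) / 2.0)
J = np.diag(off, 1) + np.diag(off, -1)

lam = np.sort(np.linalg.eigvalsh(J))   # real eigenvalues (symmetric matrix)
xi, _ = hermgauss(n)                   # Gauss-Hermite quadrature nodes
assert np.allclose(lam, np.sort(xi))   # characteristic speeds = quadrature nodes
```

This is the Golub-Welsch connection between orthogonal polynomials and Gauss quadrature; symmetry of J guarantees real eigenvalues, which matches the hyperbolicity statement above.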

Relation to the Conservation Laws
The compatibility conditions serve as a coupling of the macroscopic variables ρ, v, θ to the basis coefficients α_i, and the previous section showed how to insert these conditions directly into the PDE system to end up with a closed system of PDEs. It remains to clarify how the PDE system is related to the classical conservation laws and which closure we obtain. Using the framework from above, it is possible to compute all the involved matrices, for example for Hermite basis and test functions. We now show that the first three equations of the PDE system are in fact the conservation laws, including a special closure for the heat flux. With the help of the convective time derivative D_t := ∂_t + v∂_x and some modifications, we can transform the one-dimensional set of equations (5.149) into the conservative form of the conservation laws, which reads

∂_t ρ + ∂_x (ρv) = 0,
∂_t (ρv) + ∂_x (ρv² + ρθ) = 0,
∂_t (ρv² + ρθ) + ∂_x ((ρv² + ρθ)·v + 2ρθv + q) = 0,   (5.150)

where the closure for the heat flux q can be derived by a comparison of the corresponding terms. It is

q = θ^{3/2} ρ √6 α_3.   (5.151)

We therefore see a coupling of the conservation laws to the basis coefficients through the coefficient α_3. Instead of the full system (5.142), we can thus replace the first three equations by the conservation laws (5.150) and use the closure as specified in (5.151).
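As a sketch of the kind of manipulation involved, shown here only for the momentum balance and assuming it takes the standard convective form ρ D_t v + ∂_x(ρθ) = 0 (the exact display of (5.149) is not reproduced here), adding v times the continuity equation yields the conservative momentum equation of (5.150):

```latex
\begin{aligned}
0 &= \rho D_t v + \partial_x(\rho\theta)
     + v\,\bigl(\partial_t \rho + \partial_x(\rho v)\bigr) \\
  &= \rho\,\partial_t v + v\,\partial_t\rho
     + \rho v\,\partial_x v + v\,\partial_x(\rho v) + \partial_x(\rho\theta) \\
  &= \partial_t(\rho v) + \partial_x\bigl(\rho v^2 + \rho\theta\bigr).
\end{aligned}
```

The energy equation is treated analogously, which is where the heat flux q and hence the coefficient α_3 enter.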

Remark
At the end of this chapter, we want to add a remark about the formulation with the convective time derivative D_t := ∂_t + v_i ∂_{x_i}. If we use this shifted formulation, we also derive the correspondingly shifted eigenvalues. The actual eigenvalues, i.e. the characteristic speeds of the system, are obtained by adding v_i in the respective direction. As v_i is real, this has no effect on the hyperbolicity and is thus not a problem for our framework. Of course, one has to keep this shift in mind in actual computations.
Taking into account the eigenvalues λ_k^{A_i} = (ξ_k)_i of the matrix A_i, we obtain the eigenvalues of the complete system by the shift described above, which is a generalization of the results found by Cai in [5].
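The shift can be verified numerically. Assuming (our reading of the transformation, not a statement from the thesis) that the transformed one-dimensional Hermite system matrix is v·I plus √θ times the Jacobi matrix, the characteristic speeds come out as v + √θ ξ_k:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

n, v, theta = 6, 0.7, 2.0
off = np.sqrt(np.arange(1, n) / 2.0)
J = np.diag(off, 1) + np.diag(off, -1)   # multiplication by xi (Jacobi matrix)
A = v * np.eye(n) + np.sqrt(theta) * J   # assumed shifted, scaled system matrix

lam = np.sort(np.linalg.eigvalsh(A))     # real: A is symmetric
xi, _ = hermgauss(n)
assert np.allclose(lam, np.sort(v + np.sqrt(theta) * xi))
```

Since the shift by v and the scaling by √θ are real, the eigenvalues stay real, confirming that hyperbolicity is unaffected.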

Summary
After the motivation, the thematic introduction and the explanation of the mathematical ingredients in the first chapters, we have successfully developed a theoretical framework for the derivation of hyperbolic PDE systems for the solution of kinetic equations, such as the Boltzmann equation in an efficient transformed version.
The conceptual framework described in Chapter 5 not only demonstrates the general procedure for an analysis of the emerging PDE system, it also yields practical conditions that guarantee hyperbolicity of the system. The key to the results obtained in this chapter was the application of quadrature-based projection methods. With the help of an analogy to discrete velocity methods, it was possible to generalize hyperbolicity conditions for simple kinetic equations to more advanced cases like the transformed Boltzmann equation.
The conditions for hyperbolicity could be verified for Hermite ansatz functions in Sections 5.2.2.3 and 5.2.3.3. The one-dimensional Hermite example together with the compatibility conditions in fact yielded the well-known conservation laws with a coupling to the basis coefficients via the heat flux, which constitutes the closure of the system.
Together with the examples from Chapter 4, we gained more concrete insight into the application of quadrature-based projections: the results could be explained both from the point of view of the system matrix and via the proofs in Chapter 5. The application of quadrature-based projection methods can therefore be seen as a successful way to derive hyperbolic PDE systems for the efficient solution of the Boltzmann equation.

Future Work
Work on this topic will be continued in a subsequent PhD thesis. The conceptual work of this master's thesis opens up several areas for further research.
One important task would be the extension of the proof from Hermite ansatz functions to more general functions and quadrature formulas. The techniques of the proof could in principle be used for a generalization to orthogonal basis functions together with their respective Gauss quadrature formulas.
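A first indication that such a generalization is plausible (our example, not a result of the thesis): the Jacobi-matrix mechanism used in the Hermite proof works for any family of orthonormal polynomials, e.g. Legendre polynomials with Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

n = 7
k = np.arange(1, n)
off = k / np.sqrt(4.0 * k**2 - 1.0)      # recursion coefficients of the
J = np.diag(off, 1) + np.diag(off, -1)   # orthonormal Legendre polynomials

lam = np.sort(np.linalg.eigvalsh(J))     # real, distinct eigenvalues
x, _ = leggauss(n)                       # Gauss-Legendre nodes on [-1, 1]
assert np.allclose(lam, np.sort(x))
```

The symmetric tridiagonal structure, and with it the real spectrum, is shared by all classical orthogonal polynomial bases, which is exactly what a generalized hyperbolicity proof would exploit.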
It is also an open question whether one can deal with the evaluation of the derivatives in the DVM setting in such a way that the PDE system for the point evaluations becomes hyperbolic. There is possibly again a relation to the quadrature points that could be exploited to obtain meaningful spectral methods for the derivative term.
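One natural candidate for such a derivative treatment, sketched here purely as an illustration under our own assumptions, is a spectral differentiation matrix D := DB·B⁻¹ built from the evaluation matrices introduced above: it maps point values of any function in the ansatz space to the point values of its derivative exactly.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval, hermder

n = 6
xi, _ = hermgauss(n)

def coeffs(j):
    c = np.zeros(j + 1); c[j] = 1.0
    return c

# B_{ij} = H_j(xi_i), DB_{ij} = H_j'(xi_i)
B  = np.array([[hermval(x, coeffs(j)) for j in range(n)] for x in xi])
DB = np.array([[hermval(x, hermder(coeffs(j))) for j in range(n)] for x in xi])
D  = DB @ np.linalg.inv(B)            # spectral differentiation matrix

p  = xi**3 - 2.0 * xi                 # point values of p(x) = x^3 - 2x
dp = 3.0 * xi**2 - 2.0                # exact derivative at the nodes
assert np.allclose(D @ p, dp)         # exact on the polynomial ansatz space
```

Whether such a choice also preserves hyperbolicity of the resulting point-value system is precisely the open question raised above.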
A more practical problem is the choice of proper quadrature rules for the spherical harmonics expansion in three dimensions. Lebedev quadrature or spherical t-designs could potentially be useful for this, but it is not clear whether they fit into the developed framework.
The theoretical results should also be supplemented by numerical examples investigating the approximation quality of the quadrature-based methods. With regard to efficiency, sparse grids could be beneficial in three-dimensional applications as well.
In the end, the goal would be a computational tool that lets us choose an equation, a specific ansatz and a projection method, and that guarantees hyperbolicity of the equations, so that the system of equations for the unknowns can then be solved by a dedicated numerical method in a second step.

A.2 Spherical Harmonics Ansatz
If we use a spherical harmonics ansatz, the unknown distribution function is expanded as

f(t, x, ξ) = f(t, x, r, θ, φ) = ρ Σ_{l,m,k} α_{l,m}^k Ψ_{l,m}^k(r, θ, φ).

Note that the Laguerre function times r^l specifies the radial part of the basis function Ψ_{l,m}^k, while the spherical harmonic accounts for the angular part.
The functions Ψ k l,m form an orthonormal basis of polynomials in R 3 , as has been discussed in Sections 3.3 and 3.3.1.
If we use the transformation to spherical velocity coordinates, the compatibility conditions now again lead to 5 additional equations for the basis coefficients α_{l,m}^k. Using corresponding quadrature formulas, one can then proceed in the same fashion as in Sections 5.2.2.3 and 5.2.3.3 and show hyperbolicity of the corresponding equations with the help of the framework from Chapter 5.
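As a small indication of what "corresponding quadrature formulas" could mean here (our illustration, restricted to the zonal harmonics Y_l^0 for simplicity), a product rule with Gauss-Legendre nodes in cos θ reproduces the orthonormality of the angular part exactly; a Lebedev rule would be the analogous choice for general m:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Gauss-Legendre in mu = cos(theta) integrates the angular part exactly
nq = 8
mu, w = leggauss(nq)

def Y0(l, mu):
    # zonal spherical harmonic Y_l^0 as a function of cos(theta)
    c = np.zeros(l + 1); c[l] = 1.0
    return np.sqrt((2 * l + 1) / (4 * np.pi)) * legval(mu, c)

# integral over the sphere: factor 2*pi from phi, Gauss-Legendre in mu
G = np.array([[2 * np.pi * np.sum(w * Y0(l, mu) * Y0(lp, mu))
               for lp in range(5)] for l in range(5)])
assert np.allclose(G, np.eye(5))   # orthonormality recovered exactly
```

An analogous exactness property for the full basis Ψ_{l,m}^k would be the starting point for a hyperbolicity proof along the lines of Chapter 5.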