Thermonuclear supernovae: a multi-scale astrophysical problem challenging numerical simulations and visualization

The numerical modeling of type Ia supernovae is a demanding astrophysical task. Relevant physical processes take place on vastly different length- and timescales. This multi-scale character of the object poses challenges to the numerical approaches. We discuss an implementation that accounts for these problems by employing a large eddy simulation (LES) strategy for treating turbulence effects and a level-set technique to represent the thin thermonuclear flames. It is demonstrated that this approach works efficiently in simulations of the deflagration model and the delayed detonation model of type Ia supernovae. The resulting data reflect the multi-scale nature of the problem. Therefore, visualization has to be tackled with special techniques. We describe an approach that enables the interactive exploration of large datasets on commodity hardware. To this end, out-of-core methods are employed and the rendering of the data is achieved by a hybrid particle-based and texture-based volume-rendering technique.

Providing an understanding of the mechanism of type Ia supernovae (SNe Ia) is one of the major tasks of theoretical astrophysics. These bright transient events, which are for several days as luminous as an entire galaxy consisting of billions of stars, are the focus of many astronomical research programs. This is primarily due to the application of SNe Ia as distance indicators in observational cosmology. Although there is considerable scatter in the properties of SNe Ia, they form a class of objects that is remarkably uniform by astronomical standards. Applying empirical calibration methods (e.g. Phillips 1993, Phillips et al 1999), SNe Ia may be used as 'standard candles' to map out the geometry of the universe. With such measurements, the currently accelerating expansion of the universe has been discovered (Perlmutter et al 1999, Riess et al 1998), implying a new 'dark' energy component dominating the universe. The nature of dark energy may be constrained by determining its equation of state from SN Ia distance measurements (Astier et al 2006, Wood-Vasey et al 2007). This situation has led to large observational efforts which have accumulated a wealth of data awaiting theoretical interpretation and modeling. In particular, a goal of SN Ia modeling is to provide a theoretical basis for the calibration methods on which SN Ia cosmology rests. The astrophysical scenario of SN Ia explosions as described in section 2 features a pronounced multi-scale character. Relevant physical processes take place on length scales separated by 11 orders of magnitude (see section 2.2). This poses great challenges both to the numerical modeling approaches and to the visualization of the results of simulations. Techniques to deal with the former are described in section 3, whereas section 5 describes ways of visualizing the simulation results, which are of crucial importance in developing the numerical models as well as in analyzing the data.

Astrophysical model: thermonuclear explosions
The commonly accepted astrophysical scenario of SNe Ia is that of thermonuclear explosions of white dwarf (WD) stars (Hoyle and Fowler 1960), i.e. compact remnants of low- or intermediate-mass stars in which nuclear burning has ceased. These objects may consist of a mixture of carbon and oxygen and are supported against gravity by the Fermi pressure of a degenerate electron gas. This, in principle, stabilizes them for arbitrary times unless they are part of a binary system. In this case, they may accrete matter from the companion in a dynamical process that ultimately triggers the thermonuclear explosion by burning the carbon/oxygen material to heavier species and releasing huge amounts of nuclear energy. Depending on the nature of the companion star (see Livio (2000) for a review), the scenario is called double-degenerate (two WDs merging) or single-degenerate (a WD accreting matter from a 'normal' companion star). As for the first scenario it is not even clear that a thermonuclear explosion will result (Saio and Nomoto 1985), most recent work has focused on the second scenario. In the Chandrasekhar-mass model (for an alternative see, e.g. Fink et al (2007)), the matter accreted from the companion star is hydrostatically burned into carbon and oxygen at the surface of the WD such that its mass increases until it approaches the limit of stability, the Chandrasekhar mass (∼1.4 M☉). At this point, the density at the center of the WD reaches values sufficient to trigger nuclear reactions. Carbon fuses to heavier elements and, after about a century of convective carbon burning, a thermonuclear runaway forms a flame which subsequently propagates outward, burning the WD material and exploding the star.

Thermonuclear combustion in WD stars
Describing the flame propagation through the WD is the goal of thermonuclear supernova explosion modeling. Hydrodynamical conservation laws allow for two distinct modes of flame propagation: a subsonic deflagration mediated by heat conduction of the degenerate electron gas and a supersonic detonation driven by shock waves. These options give rise to various explosion scenarios. Only a prompt detonation is clearly inconsistent with the observations and thus ruled out as a model for SNe Ia (Arnett 1969). Therefore, the flame has to start out in the subsonic deflagration mode and is subject to instabilities. As it burns from the center of the WD outwards, it creates an inverse density stratification in the gravitational field of the star with dense fuel above hot and light nuclear ashes. Consequently, the Rayleigh-Taylor instability leads to the formation of bubbles of burning material rising into the fuel. At their interfaces, strong shear flows with Reynolds numbers of about 10^14 generate turbulent eddies due to the Kelvin-Helmholtz instability. A turbulent energy cascade forms which transports kinetic energy to smaller scales. The flame interacts with turbulent eddies of various sizes inside this turbulent cascade and thus deflagrations in thermonuclear supernovae are a problem of turbulent combustion. The deformation and wrinkling of the flame increases its surface, enhances the net burning rate, and hence accelerates the flame propagation significantly. Currently, four scenarios of explosion models are explored in multi-dimensional simulations. Pure deflagration models (e.g. Gamezo et al 2003, Reinecke et al 2002, Röpke and Hillebrandt 2005, Röpke et al 2007a) try to describe the explosion process solely by a turbulence-boosted deflagration flame. These models lead to explosions in the range of the fainter normal subset of observed SNe Ia (see section 4.1).
In order to model the brighter end of the observational sample, a detonation phase in the burning seems to be inevitable. In this context, scenarios are studied with initial deflagrations undergoing a transition to detonations in late stages of the explosion ('delayed detonation model'; e.g. Gamezo et al (2004, 2005), Golombek and Niemeyer (2005), Röpke and Niemeyer (2007)), off-center ignitions possibly leading to a gravitationally confined triggering of detonations (Jordan et al 2008, Livne et al 2005, Röpke et al 2007b, Townsley et al 2007), and detonations initiated in pulsational stages after a deflagration has failed to explode the WD (Bravo and García-Senz 2006).

The multi-scale nature of the problem
Numerical simulations of the thermonuclear explosion on scales of the WD star pose significant challenges to the modeling approaches. This is obvious from the wide range of relevant length scales involved in the turbulent deflagration phase (see figure 1): at flame ignition, the WD has a radius of ∼2000 km and it increases during the explosion process. Down to the Kolmogorov scale at which the turbulent energy is dissipated, the energy cascade extends over 11 orders of magnitude. For most parts of the explosion, the internal structure of the thermonuclear flame is not broader than millimetres to centimetres (Timmes and Woosley 1992). The flame interacts with turbulent eddies down to the Gibson scale where its laminar burning speed equals the turbulent velocity fluctuations (about 10^4 cm at the onset of the explosion and decreasing with the expansion of the WD).
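The quoted scale separation can be checked with a short back-of-the-envelope calculation. The sketch below uses the order-of-magnitude numbers from the text and assumes Kolmogorov scaling of the cascade, for which the dissipation scale is η = L/Re^(3/4):

```python
import math

# Back-of-the-envelope check of the scale separation (illustrative, cgs
# units): Kolmogorov scale eta = L / Re^(3/4), assuming Kolmogorov scaling.
L = 2000e5   # integral scale ~ WD radius at ignition: 2000 km in cm
Re = 1e14    # Reynolds number of the shear flows quoted in the text

eta = L / Re ** 0.75           # Kolmogorov dissipation scale in cm
decades = math.log10(L / eta)  # scale separation in orders of magnitude

print(f"eta ~ {eta:.0e} cm, cascade spans {decades:.1f} decades")  # ~11 orders
```

With these numbers the cascade spans about 10.5 decades, consistent with the "11 orders of magnitude" quoted above.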
This clearly illustrates the spatial multi-scale character of the problem. But the multi-scale nature also has a temporal aspect: the nuclear reactions proceed on vastly different timescales, some of which are orders of magnitude shorter than typical flow timescales.

Numerical modeling
Full-star simulations of the explosion process are a tool to determine which theoretical scenario leads to results consistent with spectra and light curves of SNe Ia. These observational data are compared with synthetic observables derived from the results of the explosion simulations by means of radiative transfer calculations (e.g. Blinnikov et al 2006, Kozma et al 2005, Sim et al 2007). Obviously, such an approach is promising only if there are no (or very few) free parameters in the models in order to allow a clear distinction between their predictions. In particular, in the initial deflagration phase of flame propagation, inherently three-dimensional (3D) effects, such as instabilities caused by buoyancy and shear as well as turbulence, dominate the flame evolution and thus determine the overall explosion characteristics. Therefore, 3D modeling of the physical processes is essential. The ultimate goal of such approaches is to develop a thermonuclear explosion model free of tunable parameters that reproduces the SN Ia observations. Such a model would pave the way to addressing the questions arising from the cosmological application of SNe Ia.
Although the flow speeds range from subsonic (in the deflagration flame) to supersonic (in the detonation), the fluid dynamical treatment is possible in an efficient way with a compressible scheme. Commonly, the piecewise parabolic method (Colella and Woodward 1984)-a higher order Godunov-type finite volume scheme-is applied in simulations and all examples discussed below are based on the PROMETHEUS implementation (Fryxell et al 1989) of it. The equation of state is that of WD matter and accounts for an arbitrarily relativistic and degenerate electron gas, an ideal gas of nuclei, radiation and electron-positron pairs.

Multi-scale challenges
The achievable resolutions in current 3D simulations on nonuniform nested grids (or, alternatively, with adaptive mesh refinement) are ∼1 km at best. Thus, the scale range of flame-turbulence interaction is not resolvable, not to mention the inertial range of the turbulent cascade. Clearly, modeling approaches are required here. One way is to artificially broaden the flame (Khokhlov 1995) and to resolve this structure with adaptive mesh refinement. This, however, is computationally very expensive since the flame may fill a large fraction of the WD star requiring refinements virtually everywhere. The simulations presented in sections 4.1 and 4.2 follow a different strategy which is outlined below.

Flame model
One of the challenges in the numerical implementation is to represent the thermonuclear flame on a computational grid that cannot resolve its internal structure. Obviously, seen from scales of the WD, the flame will appear as a discontinuity separating the fuel from the ashes. Treating it numerically as a sharp interface therefore seems a natural approach to the problem. This is achieved by applying the level set technique (Osher and Sethian 1988), which is widely used in computational applications to represent surfaces in 3D setups. An implementation of this technique to follow flames in thermonuclear supernovae was proposed by Reinecke et al (1999). The flame is modeled as the zero-level set of a signed distance function G, with G defined to be positive in ash regions and negative in fuel regions. This interface is propagated by modifying G according to the so-called G-equation,

∂G/∂t = (v_u · n + s_u) |∇G|,    (1)

where v_u, n and s_u denote the flow velocity of the unburned material, the normal vector to the flame front (pointing into the fuel) and the flame burning speed with respect to the unburned material, respectively. As the last quantity bears physical meaning only at the flame front, it is spread out from the front in order to apply the G-equation globally (or at least in some band around the flame). The level set approach to modeling the flame propagation has several advantages. Its numerical implementation is rather simple and robust and the method can handle topological changes (such as merging flames) in a straightforward way. Moreover, since nuclear burning occurs only at the zero-level set of G, the flame is represented as an interface between fuel and ash to the accuracy the hydro scheme allows for. This way, a local refinement of the grid around the flame becomes unnecessary, saving computational resources. In contrast to the commonly used broadened flames, this approach avoids the introduction of an artificial structure that may obscure the flow field around the flame.
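The propagation mechanism can be sketched in one dimension. The following is a minimal illustrative scheme with made-up numbers, not the production implementation: for a planar front moving in the +x direction, the G-equation reduces to ∂G/∂t = (v_u + s_u)|∂G/∂x|, with G > 0 in the ash and G < 0 in the fuel.

```python
import numpy as np

# Minimal 1D sketch of level-set flame propagation (illustrative numbers):
# for a planar front moving in +x, dG/dt = (v_u + s_u) * |dG/dx|,
# with G > 0 in the ash (left) and G < 0 in the fuel (right).
def advance_flame(G, dx, dt, v_u, s_u):
    grad = np.empty_like(G)
    grad[1:] = (G[1:] - G[:-1]) / dx  # one-sided difference
    grad[0] = grad[1]                 # simple outflow boundary
    return G + dt * (v_u + s_u) * np.abs(grad)

x = np.linspace(0.0, 1.0, 101)
G = 0.3 - x                  # zero-level set (the flame) starts at x = 0.3
dx, dt = x[1] - x[0], 0.002
for _ in range(100):
    G = advance_flame(G, dx, dt, v_u=0.5, s_u=0.5)

# After t = 0.2 the front has advanced by (v_u + s_u) * t = 0.2
front = x[np.argmin(np.abs(G))]
print(front)  # 0.5
```

Because only the zero level carries physical meaning, the same update can be restricted to a narrow band around the front in practice, as mentioned in the text.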

Large eddy simulations (LES)
Describing the flame as a sharp discontinuity is not sufficient for handling the multi-scale challenges associated with the burning process. This is because turbulence acts on the flame and accelerates it by folding and stretching its surface. A direct simulation of this effect would require resolving the Gibson scale, which is out of reach for current supercomputational resources. Consequently, the discontinuous flame model described above does not represent the laminar flame structure, but rather the location of an unresolved turbulent 'flame brush' resulting from corrugation by turbulent eddies. This flame brush advances with a velocity different from the laminar flame propagation speed. According to Damköhler (1940), the effective propagation speed s_t of a flame brush at some scale is proportional to the turbulent velocity fluctuations v′ at this scale,

s_t ∝ v′.

Thus, in order to propagate the flame front on the computational grid, one has to determine the turbulent velocity fluctuations at the grid scale. As the turbulent cascade is not resolved, this requires a modeling approach. The simulations described below follow a LES strategy (see Röpke and Schmidt (2008) for a review of the concept). Only the onset of turbulence is resolved on the computational grid and the turbulent kinetic energy on the unresolved scales is accounted for by a subgrid-scale model (Niemeyer and Hillebrandt 1995). In the implementation used here, the necessary closures of this subgrid-scale model are determined locally by a filtering approach. In this way, no particular scaling behavior of the turbulence is assumed in the modeling and the approach is applicable also in situations where turbulence is neither homogeneous nor isotropic. This is expected to be the case at least on larger scales in the situation under consideration here.
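As an illustration of how such a flame-speed closure behaves, the sketch below uses a Pocheau-type interpolation between the laminar limit and Damköhler's s_t ∝ v′; the specific formula and the constant C_t are assumptions for this sketch, not necessarily the closure of the implementation described above.

```python
import math

# Illustrative turbulent flame-speed closure (assumed Pocheau-type form):
# recovers s_t -> s_l for vanishing turbulence and Damkoehler's s_t ~ v'
# in the strongly turbulent limit. C_t is a hypothetical constant.
def turbulent_speed(s_l, v_prime, C_t=4.0 / 3.0):
    return s_l * math.sqrt(1.0 + C_t * (v_prime / s_l) ** 2)

s_l = 1e6  # laminar deflagration speed in cm/s (illustrative)
print(turbulent_speed(s_l, 0.0))   # laminar limit: equals s_l
print(turbulent_speed(s_l, 1e8))   # turbulent limit: ~ sqrt(C_t) * v'
```

In an LES, v′ at the grid scale would be supplied by the subgrid-scale turbulence model, and s_t then enters the G-equation in place of the laminar burning speed.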

Nested nonuniform moving grids
Another challenge with respect to the spatial scales involved in the problem arises from the fact that the WD star expands rapidly in the explosion. On a fixed grid with sufficient resolution of the initial stage, it would quickly leave the domain. Following the explosion process over longer periods of time, however, is not only necessary in order to capture the full burning and energy release, but also for evolving the remnant of the explosion to a hydrodynamically relaxed state in which the ejecta expand self-similarly, i.e. their velocities are proportional to their radii. Only after reaching this state can synthetic observables be reliably derived from the simulations. Homologous expansion sets in about 10 s after the ignition of the explosion (Röpke 2005) and by this time the object has expanded by about two orders of magnitude.
An elegant way of dealing with this problem is to diverge from a strict Eulerian discretization by using moving meshes. The symmetries of the problem make this approach particularly appealing. The expansion of the star is spherical on average such that the grid can simply be expanded isotropically. According to the geometry of the problem, an expanding spherical grid seems to be a natural choice. However, since such a grid geometry suffers from a coordinate singularity at the center, a Cartesian grid is used instead. This grid is expanded equally in all directions in order to account for the expansion of the WD.
Another peculiarity of the modeled situation is that more resolution may be desired at the flame than in other parts of the WD. The part of the star filled with the flame, however, increases in size as the flame propagates. This is accounted for by using two nested moving grids as suggested by Röpke et al (2006b)-an approach that leads to optimal resolution for a fixed number of grid cells. The inner part of the domain filled by the flame is covered by a fine uniformly spaced Cartesian moving grid (uniformity is required by the implementation of the subgrid-scale model discussed above). The outer part of the star is represented on a nonuniform Cartesian moving grid with exponentially increasing grid spacing such that the spacings match at the interface of both grids. The inner grid tracks the propagation of the flame, whereas the outer grid co-expands with the WD. Since the flame burns through the WD, the inner grid expands faster than the outer grid. Therefore, it subsequently collects cells from the outer grid that match its current grid size. In this way, more and more cells become associated with the inner uniform grid. Once the flame reaches the surface of the WD, the star is entirely covered by a uniform Cartesian moving grid.
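The matching of the two grids can be sketched as follows. All cell counts, the spacing and the growth factor below are hypothetical; the production code additionally moves both grids in time as described above.

```python
import numpy as np

# Sketch of the nested-grid spacing (hypothetical parameters): an inner
# uniform grid of n_in cells with spacing dx_in, continued by n_out outer
# cells whose spacing grows exponentially from the matched value dx_in.
def cell_spacings(n_in, n_out, dx_in, growth=1.1):
    inner = np.full(n_in, dx_in)
    outer = dx_in * growth ** np.arange(1, n_out + 1)
    return np.concatenate([inner, outer])

dx = cell_spacings(n_in=64, n_out=32, dx_in=1e5)  # 1 km inner cells
edges = np.concatenate([[0.0], np.cumsum(dx)])    # cell boundaries

# The spacings match at the interface up to the growth factor:
print(dx[63], dx[64] / dx[63])
```

Transferring a cell from the outer to the inner grid then simply amounts to increasing n_in by one whenever the first outer spacing has shrunk (relative to the expanding inner spacing) to the uniform value.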
As a consequence of the dynamic grid geometry, the numerical resolution is neither constant on the computational domain, nor in the temporal evolution. Although this prevents a fixed resolution of the flame, it has the advantage of providing an efficient way of reaching the phase of homologous expansion of the ejecta. Once the burning has ceased (about 2 s after the ignition), degrading the resolution is acceptable for following the further evolution. The benefit is that the growing sizes of the computational grid cells allow for successively larger timesteps in the simulations.

Nucleosynthetic post-processing
In current 3D simulations of thermonuclear supernovae, nuclear reactions are treated only in a very coarse manner. Depending on the fuel density, material crossed by the flame is converted either to nuclear statistical equilibrium (represented by a temperature- and density-dependent mixture of Ni and He) or, for lower fuel densities, to intermediate-mass elements (represented by Mg). This is justified by the fact that the dynamics of the explosion is determined only by the energy release and therefore the details of the nucleosynthesis are of secondary importance. However, when it comes to deriving synthetic observables such as spectra and light curves from the simulation results, knowledge of the chemical composition of the ejecta is required. This can be attained only by following the details of the nuclear reactions, which requires a large reaction network including hundreds of isotopes and thousands of reaction rates. As the nucleosynthesis partially proceeds on timescales much shorter than the flow timescale, this reaction network has to be solved several times per hydro step. The solution of a large reaction network is computationally very expensive and thus this direct approach is impossible on current resources. A small reduced reaction network including only a handful of species seems feasible in the near future, but the full solution is clearly out of reach.
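The stiffness argument can be made concrete with a single fast reaction (a toy example; real networks couple hundreds of such rates): for a rate k far exceeding the inverse hydro timestep, an explicit update diverges while an implicit one stays stable.

```python
# Toy illustration of the timescale problem: one species decaying with a
# rate k much faster than the hydro timestep dt. Explicit Euler diverges,
# implicit Euler remains stable -- hence reaction networks need
# sub-stepping or implicit solvers.
k, dt = 1e6, 1e-3
y_exp = y_imp = 1.0
for _ in range(10):
    y_exp = y_exp + dt * (-k * y_exp)  # explicit Euler: |1 - k*dt| > 1
    y_imp = y_imp / (1.0 + k * dt)     # implicit Euler: damped
print(abs(y_exp) > 1.0, 0.0 < y_imp < 1.0)  # True True
```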
In order to deal with this timescale problem, virtual 'tracer' particles are introduced in the simulations which are advected with the flow. They add a Lagrangian component by following packets of matter and recording their temperature and density trajectories. These data then serve as input for a nuclear post-processing step with a full reaction network (Röpke et al 2006a, Travaglio et al 2004). A typical number of tracer particles in a full-star thermonuclear supernova simulation is ∼10^5. The outcome of the nucleosynthetic post-processing step provides an adequate representation of the chemical composition of the ejecta for radiation transfer calculations. These are subsequently carried out in order to derive synthetic observables from the model.
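The tracer concept can be sketched as follows. The velocity and thermodynamic fields below are hypothetical closed-form stand-ins; the actual simulations interpolate these quantities from the hydro grid.

```python
import numpy as np

# Sketch of Lagrangian tracer particles (illustrative): particles are
# advected with the flow and record the local density and temperature
# along their trajectories for later nucleosynthetic post-processing.
def advect_tracers(pos, velocity, rho, temp, dt, steps):
    history = []
    for _ in range(steps):
        pos = pos + dt * velocity(pos)          # simple Euler advection
        history.append((rho(pos), temp(pos)))   # record (rho, T) trajectory
    return pos, history

# Hypothetical homologous flow v = r / t_exp with made-up profiles
t_exp = 10.0
velocity = lambda r: r / t_exp
rho = lambda r: 1e9 * np.exp(-r / 1e8)
temp = lambda r: 1e9 * np.exp(-r / 2e8)

pos, history = advect_tracers(np.array([1e7]), velocity, rho, temp,
                              dt=0.1, steps=100)
print(f"final radius: {pos[0]:.3e} cm, {len(history)} samples recorded")
```

The recorded (ρ, T) trajectories are exactly what the post-processing network needs as input, decoupling the expensive nucleosynthesis from the hydrodynamics.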

Thermonuclear supernova simulations
In order to illustrate how the approaches described in section 3 work in thermonuclear supernova simulations, we discuss examples of the deflagration and the delayed detonation scenarios here. The visualization of the outcome of these simulations is described in section 5.

Deflagration model simulations
The key to the deflagration models of thermonuclear supernovae is the turbulent flame acceleration. Without this effect, the slow flame propagation would not catch up with the expansion of the star, and the amount of material burned would release insufficient energy to cause an explosion of the WD. Several numerical simulations (e.g. Gamezo et al 2003, Reinecke et al 2002, Röpke and Hillebrandt 2005) have shown that the turbulent deflagration model can indeed lead to an explosion if the flame ignites centrally in the WD and (on average) isotropically. The strength of the explosion depends on the flame ignition configuration (see García-Senz and Bravo 2005, Röpke et al 2006b). In extremely asymmetric ignition configurations, the burning may even fail to unbind the WD. The approach outlined above provides a way of simulating the turbulent combustion process without free parameters. A recent high-resolution simulation (Röpke et al 2007a) was performed in order to test the modeling approaches and the capability of the scenario to reproduce observational data from SNe Ia. The temporal evolution of the explosion is shown in figure 2 (and also in the animation available from stacks.iop.org/NJP/10/125009/mmedia). After ignition in a multitude of sparks around the center of the WD, the flame propagates outward. The formation of burning bubbles due to buoyancy instability is clearly visible in the subsequent snapshots. Larger bubbles break up and smaller modes of the instability develop. The structure of the flame becomes very complex. It generates turbulent eddies, which corrugate the flame surface on smaller scales. About one second after ignition, the outermost flame features reach the surface of the star. The energy release is already sufficient to gravitationally unbind the WD. Subsequently, the turbulent motions and instabilities freeze out in the expansion and the density of the fuel ahead of the flame falls below the burning threshold.
The nuclear energy generation ceases about 2 s after the ignition and the simulation approaches a hydrodynamically relaxed state after about 10 s. The ejected material of the explosion starts to freely coast into space and no compact remnant is left behind.
Only a highly resolved simulation is expected to clearly show the onset of a turbulent cascade on the computational grid, which is required for consistency of the subgrid-scale modeling. In fact, the energy spectra derived from the simulated velocity fields show Kolmogorov-type turbulence (Röpke et al 2007a). Moreover, the high resolution offers the possibility of a detailed analysis of the turbulence characteristics in thermonuclear supernovae, which is important for modeling the flame propagation and the possibility of deflagration-to-detonation transitions. But at the same time, the high resolution of the simulations leads to challenges in the visualization of the data, which are discussed in section 5.
By choosing a favorable, nearly isotropic distribution of small ignition sparks around the center of the WD, the result falls into the range of the strongest explosions that can be achieved with a pure deflagration model. Although a reasonable agreement with observational data of fainter normal SNe Ia is found, the brighter part of the SN Ia sample is clearly out of reach for the model. Therefore an extension is required that leads to a more complete burning of the WD material.

Delayed detonation model simulations
A possibility of extending the deflagration model in order to enhance the fuel consumption and energy release is the delayed detonation model. Here, the burning starts out as a deflagration and the flame propagates subject to turbulent acceleration and large-scale buoyancy instabilities similar to the model described above. At a late stage, however, a detonation triggers and leads to a supersonic incineration of the remaining fuel material.
The physical mechanism of triggering the detonation is still unclear. One possibility is that it is associated with the flame itself: such deflagration-to-detonation transitions are observed in terrestrial combustion, but there they usually occur near obstacles or walls of the combustion vessel. The astrophysical situation is more complicated because the transition has to proceed in an unconfined environment. Such unconfined transitions have not yet been found in terrestrial environments, but one must keep in mind that the scales of the astrophysical situation are huge, and mechanisms that have too low a probability to occur in terrestrial experiments may be realized here. Currently, a deflagration-to-detonation transition associated with the onset of the distributed burning regime is under discussion (e.g. Röpke 2007, Woosley 2007). This regime is encountered once turbulent eddies start to penetrate the internal flame structure, i.e. once the Gibson scale is smaller than the flame thickness. This mechanism is assumed to work in the simulations discussed here. The detonation is triggered at the patch where the flame first enters the distributed burning regime.
Introducing a detonation in the scenario, we again encounter a scale problem. For most parts of the explosion process the width of a detonation front is too small to be resolved on the computational grid. It is therefore treated with a second level set (Golombek and Niemeyer 2005, Röpke and Niemeyer 2007). In this approach, one can easily account for the fact that detonations cannot cross even tiny volumes of nuclear ash left behind from the initial deflagration stage, as pointed out by Maier and Niemeyer (2006). The detonation front therefore wraps around the complex deflagration ash structure. Figure 3 shows the explosion at the onset of the detonation phase for the simulation DD 020 of Röpke et al (2007a); the detonation front is indicated by the white isosurface and the volume-rendered (yellow/orange) quantity is the density of the exploding WD star. An animation is available from stacks.iop.org/NJP/10/125009/mmedia. In this simulation, the detonation was triggered at one location of the deflagration flame only: the patch where it first enters the distributed burning regime. The subsequent snapshots show how the detonation wave propagates and wraps around the ash structure from the deflagration phase. It burns out the funnels of fuel in between the ash bubbles and therefore leads to a more homogeneous composition of the supernova ejecta than the deflagration model discussed above.
Simulations such as the ones presented here allow us to study whether delayed detonation scenarios are capable of producing models in better agreement with SN Ia observations. In fact, by assuming that the ignition of the initial deflagration is a stochastic process which may sometimes occur in many sparks around the center of the WD and sometimes in a sparse distribution, it is possible to vary the degree of pre-expansion of the WD before the detonation sets in. This changes the nucleosynthetic yields and the amount of burning in the detonation phase, and thus leads to a variation of the explosion energy and the expected brightness of the event. The range of parameters covered by the model seems to be in agreement with observations of normal to bright SNe Ia (Mazzali et al 2007). The shift from a less pronounced deflagration phase for the brighter events to a more pronounced deflagration phase for dimmer objects predicts observable consequences: a different degree of mixing of the various species in the ejecta of the explosion is expected to be detectable in spectral observations. Testing the model by comparison with observed SNe Ia of different brightness is work in progress.

Visualization of the simulation data
The multi-scale challenges that arise in the simulations of thermonuclear supernovae apply in a similar way to the visualization of the data. Two key features have to be represented: the structure of the WD star and the flame front(s). Again, the scale of the WD is huge compared to the width of the flame. A natural approach is therefore to visualize the flame in the way it is simulated, i.e. as a 2D interface in the 3D volume of the star. Problems potentially arise from the fact that the structure of the interface is very complex, since the flame is subject to instabilities and corrugated by turbulence. In fact, there have been attempts to characterize the deflagration flame front as a fractal surface (Woosley 1990). In the case of the deflagration model, the dataset consists of two parts that are relevant for the visualization:
• The density field is needed to visualize the overall structure of the exploding star.
• The thermonuclear deflagration flame is the key structure of the dataset. It is the surface on which the nuclear reaction takes place.
In the delayed detonation model, the level set description of the detonation contributes another dataset. In highly resolved simulations, these data may be rather large and therefore specialized visualization approaches become necessary which will be described below. The goal of the visualization effort is to achieve an implementation of techniques that are capable of dealing with the complex and large dataset. In particular, we strive for the possibility of an interactive exploration of the entire dataset at full resolution. The creation of animations of the temporal evolution of the simulation is another challenge. These animations play an important role in the presentations of the results.

Visualization concept
For lower resolved simulations, marching-cube-based (Levoy 1989, Lorensen and Cline 1987) iso-surface techniques together with hardware-based volume rendering (Wilson et al 1994) are efficient enough to achieve these goals. In particular, the nonuniform rectilinear moving grid simplifies the iso-surface generation. An example of this visualization is the delayed detonation simulation described above, shown in figure 3. Here, the thermonuclear deflagration and detonation flame datasets are of comparable size to the density (a 256^3 array).
It turned out that a different approach is required for the highly resolved simulation discussed in section 4.1. There, the deflagration flame structure is defined on a 1024^3 nonuniform moving rectilinear grid and is highly complex and detailed. Regular generation of iso-surfaces using marching cubes is very memory and time consuming. Furthermore, because of the complexity of its structure, the generation of a polygonal approximation of the iso-surface is flawed, as the interior and exterior of the iso-surface cannot always be correctly determined (Nielson and Hamann 1991). Another drawback of a polygonal mapping of such complex structures is that a transparent (hardware-)rendering is ridden with artifacts. However, the high resolution of the flame front makes massive point cloud rendering a suitable alternative. Therefore, we choose a point-based rendering method combined with an octree meta-structure as described by Samet (1990) to achieve high-fidelity transparency. The surface of the flame is approximated by point splats that are sized by an estimate of the iso-surface polygons generated by marching cubes. As discussed by Nuber et al (2003), this type of rendering is similar to direct volume rendering, but requires much fewer resources and hence can be used interactively. The density data do not cause a particular problem as they are relatively homogeneous. Therefore, they are reduced to a uniform 256^3 rectilinear grid. The visualization of the density can be achieved interactively using hardware-based volume rendering (Wilson et al 1994). The result is later overlaid by the transparent flame rendering.
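The splat extraction can be sketched as follows. This is a simplified stand-in for the marching-cubes-based size estimate: one splat per zero-crossing cell, sized by the cell diagonal, demonstrated on a synthetic spherical front.

```python
import numpy as np

# Sketch of point-splat extraction (illustrative): every cell whose corner
# values of G change sign contains a patch of the flame surface; instead
# of triangulating it, emit one splat at the cell centre, sized by the
# cell diagonal.
def extract_splats(G, spacing):
    sign = G > 0.0
    crossing = np.zeros(np.array(G.shape) - 1, dtype=bool)
    for axis in range(G.ndim):
        lo = [slice(0, -1)] * G.ndim
        hi = [slice(0, -1)] * G.ndim
        hi[axis] = slice(1, None)
        crossing |= sign[tuple(lo)] != sign[tuple(hi)]
    centres = (np.argwhere(crossing) + 0.5) * spacing
    radius = 0.5 * spacing * np.sqrt(G.ndim)
    return centres, radius

# Spherical 'flame' of radius 0.3 in a unit box, sampled on a 32^3 grid
ax = np.linspace(0.0, 1.0, 32)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
G = 0.3 - np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2 + (Z - 0.5) ** 2)
centres, radius = extract_splats(G, spacing=ax[1] - ax[0])
print(len(centres), "splats approximate the spherical front")
```

Sorting these splats into an octree, as in the approach described above, then allows back-to-front traversal for correct transparent compositing.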
For the purpose of creating an animation, a time-dependent dataset containing 2000 timesteps was written out during the simulation. The entire dataset is too large to fit into the main memory of a visualization workstation. Our approach is to successively read each timestep from a high-speed storage system and to visualize it on the fly. This is called an out-of-core technique, as only the data to be visualized are retrieved from a mass storage system (for an introduction to out-of-core techniques see Silva et al 2002). It turned out that especially the later timesteps are so complex that out-of-core techniques (in particular data reduction algorithms) are needed to enable real-time processing. The extraction of the iso-surface is simplified as the iso-value does not change: it is the zero level of the distance function G (see equation (1)), so the dataset is effectively already preprocessed.
To enable interactivity, we used a dedicated visualization machine based on commodity hardware that is optimized for out-of-core methods. The out-of-core processing is augmented by an optimized RAID (redundant array of inexpensive disks, see Patterson et al (1988)) system.

Algorithms and data structures
As described by Silva et al (2002), most out-of-core visualization strategies consist of a two-stage approach with a preprocessing and a visualization step. We follow this approach here. During the preprocessing step, the dataset is optimized for disk access in the subsequent visualization step. As most of the visualization parameters of our dataset are already known, the interaction mainly consists of selecting the timestep to be visualized, changing the viewing position, or selecting rendering parameters.
5.2.1. Stage 1: preprocessing of the raw data. The raw data of the high-resolution deflagration simulation to be visualized comprise two datasets: the density, which is defined on a regular 256³ rectilinear grid as floating point numbers, and the deflagration flame, which is defined on the original nonuniform 1024³ grid. For the latter, only the volume elements containing some patch of the flame surface are stored on the mass storage system in order to reduce the memory footprint. Both datasets are handled separately during the preprocessing stage. The density is read into memory and quantized to an 8-bit 3D texture based on the absolute maximum and minimum over all timesteps, which is sufficient because the density fluctuations are moderate. This allows the application of a single transfer function as a 256-entry color map. All generated textures are stored consecutively in one file to reduce file system overhead.
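The quantization step described above can be sketched as follows; this is a minimal illustration, not the original code, and the function name and interface are assumptions. The key point is that one global minimum and maximum over all timesteps fixes the mapping, so a single 256-entry color map applies to every frame.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Map floating-point densities to 8-bit texture values using the global
// minimum and maximum over all timesteps (hypothetical sketch).
std::vector<std::uint8_t> quantize_density(const std::vector<float>& rho,
                                           float global_min, float global_max)
{
    std::vector<std::uint8_t> tex(rho.size());
    const float scale = 255.0f / (global_max - global_min);
    for (std::size_t i = 0; i < rho.size(); ++i) {
        float v = (rho[i] - global_min) * scale;
        v = std::clamp(v, 0.0f, 255.0f);              // guard against rounding outside the range
        tex[i] = static_cast<std::uint8_t>(v + 0.5f); // round to the nearest color-map entry
    }
    return tex;
}
```

Because the mapping is fixed across timesteps, the 8-bit textures can be written out once during preprocessing and streamed unchanged during visualization.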
In a second step, the deflagration flame data are loaded. For each volume element, the iso-surface is approximated as a point, which is stored together with the normal determined by the gradient-over-the-edges method (Zhang and Kaufman 2006). The coordinates of the point are quantized to 16-bit fixed point numbers each, based on the dimensions of the underlying grid. Retaining the original grid structure keeps the quantization error low. The normal is quantized to three further 16-bit fixed point numbers. In addition to the point coordinates and the normal, the point size, based on the granularity of the underlying grid, is stored as a 32-bit single precision floating point number. This adds up to 128 bits of data for each point to be processed and rendered. All quantized records are stored consecutively in a file. The inherent 128-bit alignment enables highly efficient processing on a 64-bit architecture. The indices, bounding boxes and size information are stored in a separate header file, which is loaded into memory before data retrieval.
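The 128-bit point record can be pictured as in the following sketch; the field names and the coordinate quantization helper are illustrative assumptions, but the layout (three 16-bit coordinates, three 16-bit normal components, one 32-bit float size) follows the description above.

```cpp
#include <cstdint>

// Illustrative layout of the 128-bit (16-byte) point record described in the
// text: 3 x 16-bit fixed-point coordinates, 3 x 16-bit fixed-point normal
// components, and a 32-bit float splat size. Field names are assumptions.
struct FlamePoint {
    std::uint16_t x, y, z;    // coordinates, quantized against the grid extent
    std::int16_t  nx, ny, nz; // normal components, quantized from [-1, 1]
    float         size;       // splat size from the local grid granularity
};
static_assert(sizeof(FlamePoint) == 16, "record must stay 128-bit aligned");

// Quantize a coordinate in [0, extent] to a 16-bit fixed point value.
std::uint16_t quantize_coord(float v, float extent)
{
    return static_cast<std::uint16_t>(v / extent * 65535.0f + 0.5f);
}
```

With 16 bytes per point, even the final timestep of roughly 12.6 million points occupies only about 200 MB, which fits comfortably into main memory and graphics memory.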

5.2.2. Stage 2: data retrieval and rendering.
In the visualization stage, the preprocessed data are retrieved from the hard drives depending on the user input. During start-up, the indices, bounding boxes, and sizes are read from the header file and stored in lists. Depending on the chosen timestep, the density data are loaded as a 3D texture into the graphics card and can then be volume-rendered directly (see figure 4).
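Because the preprocessed textures are stored consecutively in one file, retrieving a timestep amounts to a single seek plus one contiguous read, which is what makes the out-of-core streaming efficient. The following sketch illustrates this under the assumption of fixed-size timesteps; the function name and interface are hypothetical.

```cpp
#include <cstdio>
#include <cstdint>
#include <vector>

// Load one timestep's 8-bit texture from the packed file: one seek, one read
// (hedged sketch; the actual file layout may include per-step offsets from
// the header file).
std::vector<std::uint8_t> load_timestep(std::FILE* f, std::size_t step,
                                        std::size_t bytes_per_step)
{
    std::vector<std::uint8_t> buf(bytes_per_step);
    std::fseek(f, static_cast<long>(step * bytes_per_step), SEEK_SET);
    std::fread(buf.data(), 1, bytes_per_step, f);
    return buf;
}
```

Large contiguous reads like this are exactly the access pattern the RAID system described below is optimized for.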
In order to render the points representing the flame, the coordinate data are loaded into main memory and subsequently into the graphics card. There, the coordinates are rendered as double-sided lit, anti-aliased points. The double-sided lighting is necessary because the visualized flame is transparent and not convex, so back-facing parts of the surface remain visible.
In the default rendering mode, the points are not spatially sorted. A simple transparency mode can be applied by adding up the (weighted) RGB values of the rendered particles. The set of particles then looks like a luminous cloud that can be rotated and scaled in real time. The result is shown in figure 5. Obviously, this approach does not render the structure of the cloud satisfactorily. One can guess the structure by interactively rotating it, but snapshots of the rendering are more confusing than elucidating.
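The reason no sorting is needed in this mode is that additive blending is commutative: the weighted contributions simply sum, in any order. A tiny sketch (names are illustrative, not from the original code):

```cpp
#include <array>

using RGB = std::array<float, 3>;

// Order-independent additive transparency: accumulate a weighted source
// color into the frame-buffer value. Since addition commutes, the particles
// need no back-to-front sorting in this mode.
RGB blend_additive(RGB acc, const RGB& src, float weight)
{
    for (int c = 0; c < 3; ++c)
        acc[c] += weight * src[c];
    return acc;
}
```

The same commutativity is also why this mode loses depth information: every draw order yields the same image, so the cloud's internal structure washes out.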
Therefore, an improved, high-end transparency mode is needed. This requires back-to-front ordering, resulting in a special form of volume splatting as described by Levoy (1988). If the sorting is not applied, the resulting image usually contains transparency artifacts. These are subtle, but can clearly be seen in an animation or while interactively exploring the dataset. To achieve the back-to-front ordering, an adaptive spatial partitioning scheme in the form of an oct-tree is used. This oct-tree is generated on the fly from the loaded coordinate data. Based on the vertices of the transformed bounding box, it is traversed back-to-front during the rendering. This results in a volume-rendering-like transparency, showing all the details of the deflagration flame (see figure 6).
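A back-to-front oct-tree traversal can be sketched as follows. This is a simplified illustration, assuming an orthographic view direction: the octant index nearest the viewer determines the visit order for all nodes via a standard XOR trick, so the farthest children are drawn first. The structure and names are assumptions, not the paper's implementation.

```cpp
#include <array>
#include <functional>
#include <vector>

// Minimal oct-tree node: eight optional children plus the point indices
// stored at this node (typically only leaves hold points).
struct OctNode {
    std::array<OctNode*, 8> child{}; // nullptr for absent children
    std::vector<int> points;         // indices into the point array
};

// nearChild is the index of the octant closest to the viewer (bit 0 = x,
// bit 1 = y, bit 2 = z, built from the signs of the view direction). For
// i = 7 down to 0, (i ^ nearChild) enumerates the octants from farthest to
// nearest, giving a valid back-to-front order for axis-aligned octants.
void render_back_to_front(const OctNode* node, int nearChild,
                          const std::function<void(const std::vector<int>&)>& draw)
{
    if (!node) return;
    for (int i = 7; i >= 0; --i)
        render_back_to_front(node->child[i ^ nearChild], nearChild, draw);
    // Points stored at this node are drawn after its children here as a
    // simplification; a full implementation would interleave them by depth.
    draw(node->points);
}
```

Because the traversal order depends only on the view direction, the tree itself never needs to be rebuilt when the camera rotates; only `nearChild` changes.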
As the oct-tree is stored in main memory and traversed on the CPU, real-time rendering can only be achieved for relatively small numbers of particles (less than one million). For larger particle numbers, the frame rate drops below 1 frame s−1. Although insufficient for real-time exploration, this still allows the efficient generation of offline animations.

Implementation and performance
The current software is implemented using Qt and Coin3D. Qt is an operating-system-independent 2D user-interface toolkit. Coin3D is an open source OpenInventor implementation that is supplemented with a texture-based volume-rendering library.
The oct-tree was implemented directly as a C++ class based on simple, memory-aligned arrays. While the direct volume-rendering is integrated into the library, the point-based rendering (especially with the quantization) had to be implemented in OpenGL (Rost 2004) through a callback node.
The visualization was carried out on a workstation with two dual-core AMD Opteron processors combined with two GeForce 8800 GTX graphics cards running under Ubuntu Linux. For the RAID, we used 16 hard drives at 10 000 rpm on two controllers, configured as RAID 05 by combining hardware RAID 5 with the Linux software-RAID feature. This results in a raw data throughput of about 600–800 MB s−1.
For the high-resolution deflagration simulation, the number of particles to be rendered is about 23 000 for the first timestep and increases linearly to about 12 600 000 in the final stages of the simulation. Without high-definition transparency, we achieved frame rates from 20 down to 5 frames s−1 (depending on the timestep) during the online animation of the entire dataset. This shows that the bottleneck is the transfer of the data through the bus. When visualizing only one timestep, the frame rate depends on the number of particles rendered and ranges between 10 and 60 frames s−1. With the high-end transparency enabled, an index needs to be transferred through the bus for each particle and, furthermore, the tree needs to be traversed. Therefore, the frame rate drops below 1 frame s−1 for timesteps with more than one million points.

Results
As the visualization is highly interactive, all images have been generated within a few minutes. The user chooses the timestep of interest and adjusts the viewpoint and transparency value. Only a few seconds are needed to generate the oct-tree. Further rendering remains interactive, with a reduced frame rate as described above. Figure 7 shows the combined visualization of the density and flame structures in the high-resolution deflagration simulation. Clearly, the quality improves significantly when the high-end transparency mode of the flame rendering described in section 5.2 is applied. In particular, the intricate details of the flame can be seen clearly, together with the deformation of the density field. As described, this kind of visualization is similar to high-resolution direct volume-rendering, which could otherwise only be achieved on much larger machines without real-time interactivity. With the described visualization approach, an animation of the evolution was created. It is included in the supplementary material (available from stacks.iop.org/NJP/10/125009/mmedia) and snapshots are shown in figure 2.

Conclusions and future work
The multi-scale character of thermonuclear supernova explosions calls for special techniques in modeling these events numerically as well as in visualizing the results of such simulations.
A strategy to simulate the hydrodynamical evolution of the explosion was discussed. It is based on a LES approach in combination with a level-set treatment of the thermonuclear flame. The expansion of the exploding WD is accounted for by moving nested grids. These techniques allow for an efficient implementation of the astrophysical scenario, both in the framework of the pure deflagration model and in that of the delayed detonation model of SNe Ia, as demonstrated by our two examples. The efficiency of the approach facilitates a systematic exploration of the initial parameters of the explosion in 3D simulations, which is work in progress. This is important in order to connect observable features of SNe Ia to properties of the progenitors, a problem that has to be addressed in order to build confidence in the application of SNe Ia as distance indicators in observational cosmology.
For developing the numerical implementation and for analyzing and presenting the results, an efficient visualization of the data is essential. We showed that the interactive exploration of the large time-dependent dataset is feasible on commodity hardware using a hybrid out-of-core particle-based and texture-based volume-rendering approach. Whether the oct-tree traversal can be implemented on the graphics card using a shader program remains to be explored. An automatic mode, in which the traversal is switched on and off during user interaction, should also be implemented. It is planned to integrate the point-based algorithm into a standard visualization package in order to unify and extend the user interface. Further graphics-card programming using shaders should be applied to accelerate the de-quantization process, especially for the processing of normals.