Framework and algorithms for illustrative visualizations of time-varying flows on unstructured meshes

https://doi.org/10.1016/j.advengsoft.2016.02.004

Highlights

  • A framework for illustrative visualization of fluid simulation datasets is presented.

  • New algorithms are developed for feature identification and matching in field data.

  • Novel implementations are described for multiple illustrative visualization effects.

Abstract

Photo- and physically realistic techniques are often insufficient for visualization of fluid flow simulations, especially for 3D and time-varying studies. Substantial research effort has been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. However, a great deal of development effort has been duplicated in this field, as many research groups have developed their own specialized visualization software. Additionally, interoperability between illustrative visualization software is limited due to diverse processing and rendering architectures employed in different studies. In this investigation, a framework for illustrative visualization is proposed, and implemented in MarmotViz, a ParaView plug-in, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Region-of-interest identification and feature-tracking algorithms incorporated into this tool are described. Implementations of multiple illustrative effect algorithms are also presented to demonstrate the use and flexibility of this framework. By providing an integrated framework for illustrative visualization of CFD data, MarmotViz can serve as a valuable asset for the interpretation of simulations of ever-growing scale.

Introduction

The challenge of presenting intelligible and useful visualizations of data has attracted substantial attention due to the growing prevalence of large-scale computational simulations across most fields in science and engineering [1], [2], [3]. Such simulations are tackling increasingly ambitious problems that require unsteady modeling in complex 3D geometries or consider inherently unsteady 3D physics such as direct numerical simulations of turbulence in fluid mechanics, and necessitate considerations of multiple physical and temporal scales.

Various physically accurate or photo-realistic visualization techniques have been employed in the investigation and presentation of such simulation data. Vector plots and streamlines/surfaces can indicate the trajectory and magnitude of vector fields (typically velocity). However, these techniques have limited ability to present 3D vector data, because data are projected to a 2D view plane in most display systems [4], and dense lines/arrows can introduce visual clutter. In isosurface rendering, the 3D dataset is reduced to a set of surfaces that lie along some threshold value of a scalar field. This approach can help observers identify regions of interest but may obscure internal features enclosed by isosurfaces. Semi-transparent isosurfaces and cutaway views can help present such features, but application of such corrections can be a user-intensive process. Volume rendering simulates the passage of light through a 3D dataset that is assigned varying color, absorptivity, and transmissivity properties. These optical material properties are specified through a transfer function, which may depend on dataset fields. The flexibility of volume rendering enables users to explore and highlight diverse characteristics of interest, but manual tuning of transfer functions can be tedious.
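To make the transfer-function concept concrete, the sketch below (a generic illustration, not taken from any particular tool; the control points are hypothetical) maps a scalar field value to an RGBA tuple by linear interpolation between user-specified control points — the basic mechanism by which volume rendering assigns color and opacity from data.

```python
def transfer_function(value, control_points):
    """Map a scalar value to (r, g, b, a) by linear interpolation
    between sorted (scalar, rgba) control points."""
    pts = sorted(control_points)
    if value <= pts[0][0]:
        return pts[0][1]
    if value >= pts[-1][0]:
        return pts[-1][1]
    for (x0, c0), (x1, c1) in zip(pts, pts[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            # Interpolate each of the four channels independently
            return tuple(a + t * (b - a) for a, b in zip(c0, c1))

# Hypothetical mapping: low values -> transparent blue, high -> opaque red
tf = [(0.0, (0.0, 0.0, 1.0, 0.0)), (1.0, (1.0, 0.0, 0.0, 1.0))]
```

In practice such functions are edited interactively, which is exactly the manual tuning burden noted above.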

Time-varying simulations present the additional visualization challenge of indicating histories and development of data in a simulation. Animations offer an intuitive approach to presenting time-varying phenomena, when supported by the presentation platform. However, the value of animations may be limited for complex simulations, because research on human vision indicates that viewers have difficulty simultaneously tracking many moving phenomena [5]. Image series are amenable to more presentation environments than animations, but can take longer to interpret [6]. Additionally, individual image size is often limited in a time series, reducing the clarity of the overall visualization.

The field of illustrative visualization has primarily been inspired by historical work in scientific illustration. Born et al. [7] highlighted drawings by Dallman [8] that qualitatively demonstrate key properties of vortical flow using techniques such as half-toning to indicate interior surfaces and dashed lines for hidden sections (Fig. 1a). Similarly, Correa et al. [9] discuss work in medical and anatomical illustrations that reveal internal details clearly by deforming bodies, removing extraneous structures, and coloring tissues in vivid fashion [[10], Fig. 1b]. Born et al. [7] summarized the objectives of illustrative visualization as: “simplification of complex contexts, concentration on relevant features, and neglect of details that obstruct understanding.” Effective illustrative visualizations should present representations that facilitate this qualitative partitioning and interpretation. Similarly, analyses of unsteady phenomena often focus on histories and development of individual features. Illustrative visualization tools should therefore have mechanisms for tracking the histories of features of interest.

Considering the goals and desirable features discussed in Section 1.2, a framework for illustrative visualization of time-varying CFD simulation data on unstructured meshes is presented here. In this framework, source simulations are assumed to record data fields (such as pressure, velocity, etc.) for each point or cell in mesh geometries at discrete time steps. Following the formulation of Joshi and Rheingans [11], the illustrative visualization process is divided into three stages for each time step.

  • 1.

Region-of-interest identification – Regions of interest (ROIs) in the present time step of the simulation are identified and assessed.

  • 2.

    Feature matching and tracking – Identified ROIs in the present time step are matched with corresponding nearby ROIs in other time steps. A feature is defined as a time series of such matched ROIs, and could represent a physical object such as a bubble, shock front, vortex, etc.

  • 3.

    Illustrative visualization – Illustrative techniques or effects are applied to specific features (or the entire domain), and the complete visualization is rendered and presented to the user.

This illustrative visualization process is distinct from conventional approaches in that it enables application of visualization techniques to specific features, rather than the entire data domain or geometrically defined subdomains. Thus, the resulting visualizations can highlight key elements of simulation data, and better capture the intuitions of subject matter experts. Reviews of research conducted in these three tasks for illustrative visualization are presented in the following sections.

Monga et al. [12] classified ROI identification algorithms into two groups: those that group “homogeneous” cells and those that identify boundaries around regions of interest. Monga et al. found that it was difficult to identify suitable homogeneity conditions for their target applications in medical imaging; therefore, they pursued boundary detection methods. Their study presented formulations for first order (gradient based) and second order (Laplacian based) boundary detection algorithms. They discussed challenges and resolutions in setting threshold gradient values and boundary closing on rectilinear meshes.
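The first-order (gradient-based) boundary detection that Monga et al. describe can be sketched in a few lines: flag cells where the gradient magnitude of a field exceeds a threshold. The one-dimensional, central-difference version below is a minimal illustration of the idea, not Monga et al.'s actual formulation.

```python
def boundary_cells(field, threshold):
    """First-order boundary detection on a 1D grid: flag cells whose
    central-difference gradient magnitude exceeds `threshold`."""
    flags = [False] * len(field)
    for i in range(1, len(field) - 1):
        grad = (field[i + 1] - field[i - 1]) / 2.0  # central difference
        if abs(grad) > threshold:
            flags[i] = True
    return flags
```

The threshold-selection and boundary-closing difficulties the authors discuss arise precisely because a single `threshold` rarely separates all boundaries of interest cleanly.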

Banks and Singer [13] investigated the problem of identifying vortices in flow simulations. Given the specialized problem domain, physical insights could be used as the basis for ROI identification techniques. Banks and Singer developed a seed-and-growth algorithm in which vortex cores are iteratively adjusted to lie along minimum pressure curves, effectively being transported by centripetal forces. Such physically inspired approaches to ROI identification can yield accurate and “sensible” results, but are often application specific.

Silver and Wang [14] presented a number of criteria for identifying homogeneous regions, including threshold intervals, vector directions, and neighborhood connectivity. However, these approaches often rely on fine-tuning by a user, and may be geometry specific (for connectivity based approaches). In summary, a variety of ROI identification algorithms have been proposed, from generic techniques (e.g., gradient based) to specialized physics-based approaches.
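The homogeneity-based criteria surveyed by Silver and Wang — a threshold interval combined with neighborhood connectivity — amount to a flood fill over the mesh connectivity graph. The sketch below is a generic illustration (function and parameter names are ours, not from the paper); because connectivity is given as an explicit adjacency map, it applies to unstructured meshes as well as grids.

```python
def grow_regions(values, neighbors, lo, hi):
    """Group connected cells whose value lies in [lo, hi] into ROIs via
    flood fill. `neighbors` maps each cell index to its adjacent cell
    indices, so arbitrary (unstructured) connectivity is supported."""
    region = {}   # cell index -> region id
    next_id = 0
    for seed in range(len(values)):
        if seed in region or not (lo <= values[seed] <= hi):
            continue
        stack = [seed]
        while stack:
            c = stack.pop()
            if c in region or not (lo <= values[c] <= hi):
                continue
            region[c] = next_id
            stack.extend(neighbors[c])
        next_id += 1
    return region
```

The interval bounds `lo` and `hi` are exactly the user-tuned parameters the text identifies as a practical limitation of such approaches.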

The feature-matching or correspondence problem has also received significant attention in the computer vision community for image processing applications. Reinders et al. [15] investigated pixel- and feature-based approaches. Pixel-based methods measure the correspondence of individual cells between nearby ROIs in two dataset time steps. Feature-based approaches operate at a higher level of abstraction, and range from methods that compare bulk attributes (e.g., volume, mass, and/or products of inertia) to those that compare certain cells in known ROIs. Feature-based methods are less computationally expensive than pixel-based ones, but yield less specific information about feature histories.

Meyer-Spradow et al. [4] developed a cell-based matching technique. In their approach, each cell and its surrounding neighbors in one frame are matched to the most similarly valued cells in a second frame. Their approach can track the motion of individual cells in a feature, but is somewhat limited as presented, because it only supports datasets with single features on uniform structured rectilinear grids.

Kalivas and Sawchuk [16] developed a feature-based matching algorithm that measured the difference between 3D regions in a relatively sparse fashion. In their approach, regions are projected into a 2D plane and matched based on minimization of best-fit affine transforms.

Silver and Wang [17] presented a feature-based algorithm for datasets on regular meshes. Their matching algorithm assumes that subsequent time steps are sufficiently close; therefore, ROIs corresponding to the same feature at different times overlap in space, significantly reducing the computational cost of searches. ROIs from two time steps are matched if their relative volume of intersection exceeds some threshold, and pairs with the greatest volume of intersection are matched in the case of multiple overlaps. Silver and Wang [18] published a follow-on study in which they generalized their algorithm to unstructured static mesh geometries and achieved a 20× speedup by using refined data structures to compare features.
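The overlap test at the heart of Silver and Wang's algorithm can be sketched compactly if each ROI is represented as a set of cell indices. The following is an illustrative simplification (normalization by the smaller ROI and the 0.5 default are our assumptions, not the paper's exact criterion):

```python
def match_by_overlap(rois_prev, rois_curr, threshold=0.5):
    """Match ROIs between two time steps by relative volume of
    intersection; each ROI is a set of cell indices. Among candidates
    exceeding `threshold`, the largest overlap wins."""
    matches = {}
    for j, curr in enumerate(rois_curr):
        best, best_overlap = None, 0.0
        for i, prev in enumerate(rois_prev):
            overlap = len(curr & prev) / min(len(curr), len(prev))
            if overlap >= threshold and overlap > best_overlap:
                best, best_overlap = i, overlap
        matches[j] = best  # None => unmatched, i.e. a new feature
    return matches
```

The assumption that matching features overlap in space is what keeps the search cheap: only spatially coincident ROI pairs ever score above zero.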

Reinders et al. [15] developed a matching algorithm that represented individual ROIs in terms of a few bulk attributes including centroid, volume, and mass. In their algorithm, ROIs in newly considered time steps are matched by extrapolating attribute values from known feature histories. Matching is performed in multiple forward and backward passes with relaxing tolerances, yielding rapid matching of closely corresponding ROIs, and subsequent matching of less-similar ones.
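The extrapolation step of Reinders et al.'s approach can be sketched for a single attribute, the centroid: predict each feature's next centroid linearly from its last two known positions, then accept the nearest unclaimed ROI within a tolerance. This is a minimal 2D illustration with assumed data structures, not the authors' multi-pass implementation.

```python
def extrapolate_and_match(history, rois, tol):
    """Match ROI centroids (`rois`) to features by linearly
    extrapolating each feature's last two known centroids in
    `history` (feature id -> list of (x, y) centroids)."""
    matches = {}
    claimed = set()
    for fid, track in history.items():
        (x0, y0), (x1, y1) = track[-2], track[-1]
        pred = (2 * x1 - x0, 2 * y1 - y0)  # linear extrapolation
        best, best_d = None, tol
        for rid, c in enumerate(rois):
            d = ((c[0] - pred[0]) ** 2 + (c[1] - pred[1]) ** 2) ** 0.5
            if rid not in claimed and d <= best_d:
                best, best_d = rid, d
        if best is not None:
            matches[fid] = best
            claimed.add(best)
    return matches
```

Running such a pass repeatedly with a relaxing `tol`, forward and backward in time, yields the staged matching behavior described above.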

This survey of feature-matching algorithms highlights a number of approaches that range from individual cell- to bulk-attribute-level comparisons. In general, these approaches trade computational cost for additional feature development data and matching accuracy.

A wide variety of illustrative visualization effects has been explored in the literature, ranging from relatively simple techniques that adjust colors to complex approaches that may deform or alter the underlying mesh geometry. This section presents a survey of these effects.

Pagendarm and Walter [19] investigated simplified visualizations for flow simulations. Their goal was to find compact and distinct representations of various phenomena to enable presentation of multiple results in a single visualization. In one figure of hypersonic flow over a wing, they presented vortices (using stream-ribbons), shock-lines, and skin-friction lines.

Post et al. [20] presented a visualization approach that performs feature extraction and uses icons as symbolic representations of data. The goal in their study was to replace original complex feature data with clear and compact representations, such as replacing complex features with best-fit ellipsoids.

Silver and Wang [14] proposed a number of illustrative techniques for time-varying data for surface- and volumetric-rendering paradigms. In enhanced surface visualization, features are assigned distinct surface colors to assist in tracking over multiple time steps. They also suggested rules for coloring features generated from the coalescence or breakup of other features (e.g., red and blue features could combine into a purple feature). Similarly, in enhanced volume rendering, matched features (perhaps from the same bifurcated source feature) can be assigned specific transfer functions for volumetric rendering. In feature isolation, features of interest can be emphasized by reducing the color saturation or opacity of nearby features.

Ebert and Rheingans [21] developed a number of illustrative visualization techniques for volumetric rendering. Their formulations do not operate on specific features, but instead are incorporated into complex transfer functions that are applied to the entire volumetric dataset, thus reducing visualization flexibility and possibly increasing computational cost. In their silhouette enhancement technique, the opacity of cells around the silhouette of a feature is increased to assist in distinction between features (also implemented by Stompel et al. [22] and Born et al. [7]). Similarly, in feature halos, the region around features, parallel to the view plane, is assigned increased opacity. These halos provide depth cues to the viewer, as features partially obscured by halos are easily recognized to be in the background. In surface shading, sections of feature surfaces facing away from the view direction are assigned desaturated and darkened colors. This provides additional cues about the shape and orientation of features.
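At their core, silhouette enhancement and surface shading both modulate appearance by the angle between the surface normal and the view direction. The sketch below boosts opacity where the normal is nearly perpendicular to the view (the silhouette); the function name, `boost`, and `power` parameters are illustrative, in the spirit of but not identical to Ebert and Rheingans' formulation.

```python
def silhouette_opacity(normal, view_dir, base_alpha, boost=0.8, power=2.0):
    """Increase opacity where |n . v| is small, i.e. along feature
    silhouettes. `normal` and `view_dir` are assumed unit vectors."""
    dot = abs(sum(n * v for n, v in zip(normal, view_dir)))
    return min(1.0, base_alpha + boost * (1.0 - dot) ** power)
```

A cell facing the viewer keeps its base opacity, while an edge-on cell is rendered nearly opaque, which is what makes feature outlines stand out in a semi-transparent volume.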

Stompel et al. [22] presented a number of illustrative feature enhancement techniques for both surface and volume rendering. In their depth enhancement approach, cells closer to the view plane are rendered with warmer colors to provide additional depth cues to the viewer. In temporal enhancement, cells or surface segments with high temporal derivatives can be rendered in warm colors to indicate regions of rapid change. Stompel et al. also investigated multivariate visualizations using multiple rendering approaches in a single image. For example, in a CFD application, the vorticity field could be rendered volumetrically and the velocity field can be indicated using sparse strokes.

Joshi and Rheingans [11] presented a number of visualization techniques geared toward time-varying datasets – particularly those with moving features. In their speedlines technique, superimposed lines and streaks are rendered to indicate feature motion and history. With opacity modulation, instances of features from previous time steps are presented at corresponding locations with increasing transparency and blurriness. This permits the user to observe the history of a feature in a single image. This technique was adapted by Hsu et al. [23]. With strobe silhouettes [11], the receding sections of feature boundaries are offset along the path of motion with reduced thickness and detail.

Viola and Gröller [24] surveyed multiple illustrative visualization techniques. Their study included importance-driven feature enhancement in which features are assigned “importance” values, and low-importance features are rendered transparently or with reduced detail if they obscure or lie near more important features. In viewpoint-dependent distortion, geometry is distorted to increase the relative size of features of interest, and occluding features may be moved aside [25]. In volume splitting and leafer deformation, outer layers of nested features are split and shifted to permit simultaneous visualization of inner and outer regions [26].

Svakhine et al. [27] investigated a number of illustrative visualization techniques geared toward the specialized problem of visualizing CFD data. They demonstrated the use of mixed data fields for boundary enhancement: in an example rendering, boundary color was determined from temperature data, and opacity from velocity data. Using two-dimensional transfer functions, “soft” tangent-lines can be generated automatically in a feature by using a periodic function to modulate cell opacity. Svakhine et al. also developed computational techniques to simulate schlieren and shadowgraph “photographic” techniques.

Correa et al. [9] presented a formulation for illustrative visualization motivated by illustrations found in anatomy texts. Their algorithms simulate common surgical and dissection tools such as peelers, retractors, or pliers. Correa et al. also highlighted the value of using feature data for generating cut-away views of 3D volumetric data without impinging on key geometry.

Representative illustrative visualization techniques are summarized and categorized by application in Table 1. Readers are also directed to a review of illustrative visualization by Rautek et al. [1] that discusses the history of the field and perspectives on its future.

The three primary tasks in the illustrative visualization process (region identification, feature matching, and effect visualization) have received considerable attention in the literature. However, many of these investigations focus on specific geometries (often uniform rectangular grids), and cannot be directly extended to applications on unstructured and dynamic meshes, which are frequently employed in CFD. Additionally, a substantial amount of software development effort has been duplicated by researchers working on proprietary or specialized tools. The objective of the present study is to develop a modular framework for the illustrative visualization of time-varying CFD simulations on general unstructured mesh geometries. It is not feasible to support all illustrative visualization approaches considered in the literature, but by providing a flexible framework, users can harness a variety of ROI identification algorithms, feature matching and tracking algorithms, and illustrative effects, and quickly implement additional techniques.

The developed framework is implemented as a plug-in filter, MarmotViz, for the ParaView [29] environment. ParaView was selected for this effort because it is an extensible open source visualization tool that supports numerous data formats and mesh geometries, and operates on a wide variety of computing platforms. Additionally, ParaView was developed using the Visualization Toolkit (VTK) [30] library, which provides access to useful functionality for many illustrative effects – such as gradient filters, computational geometry routines, and specialized data structures. ParaView operates in a pipelined fashion, so MarmotViz receives generalized input data from a file reader or pre-processing filters, and passes output down the pipeline, potentially to other post-processing filters, and eventually to the rendering and display system. Thus, the developed plug-in can be used in conjunction with other visualization tools available in ParaView.

ParaView provides data to the MarmotViz filter at individual time steps in a simulation – potentially out-of-order, depending on user input. Thus, at each update call, the MarmotViz filter performs the following processes:

  • 1.

    Read new user inputs to the MarmotViz GUI (update parameters, create/modify illustrative effects, etc.)

  • 2.

Receive the dataset for the new time step; if the time step has been processed previously, skip to step 3

    • a.

      Identify new ROIs using the user-selected algorithm and parameters. Assess ROIs.

    • b.

      Match identified ROIs to known features from other time steps using the user-selected algorithm and parameters

  • 3.

    Construct the initial output data field with cell values corresponding to feature indices

  • 4.

    Apply illustrative effects to the output dataset

  • 5.

    Pass the output dataset to the next stage in the ParaView pipeline
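The update cycle above can be sketched as a single function. This is an illustrative skeleton only: the function and argument names are ours, not the actual MarmotViz API, and `state` stands in for the filter's cache of feature assignments from previously visited time steps.

```python
def marmotviz_update(state, timestep, dataset, roi_algo, match_algo, effects):
    """Per-timestep update sketch: identify and match ROIs only for
    unseen time steps, then build the feature-index field and apply
    illustrative effects before passing data downstream."""
    if timestep not in state:                      # step 2: new time step
        rois = roi_algo(dataset)                   # 2a: identify/assess ROIs
        state[timestep] = match_algo(state, rois)  # 2b: match ROIs to features
    feature_field = dict(state[timestep])          # step 3: feature-index field
    for effect in effects:                         # step 4: illustrative effects
        feature_field = effect(feature_field)
    return feature_field                           # step 5: hand off downstream
```

Caching by time step is what makes out-of-order update calls from ParaView cheap: revisited steps skip straight to effect application.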

A schematic of the MarmotViz program structure is presented in Fig. 2.

The following sections provide detailed descriptions of the subsystems and algorithms developed in this study for ROI identification and assessment (Section 2) and feature matching and tracking (Section 3). Section 4 presents the illustrative effects subsystem and the developed effects, which include: feature coloring and selective visibility, feature smoothing, tube outlines, feature halos, speedlines, and strobe silhouettes.

All implemented ROI identification, feature matching, and illustrative effects algorithms are formulated to support general unstructured meshes.

The algorithms developed here are demonstrated using data from a two-phase flow simulation of a bubble-column – a container of liquid with multiple gas injection ports at its base (Fig. 3). In particular, the fluid-phase field will be considered because it yields intuitive features for the following discussion, such as bubbles, droplets, and fluid layers.

Section snippets

Subsystem overview

The ROI identification and assessment subsystem receives the input mesh geometry and data fields. It first applies a user-selected ROI identification algorithm to the dataset, which can be controlled with user-specified parameters. A gradient-based ROI identification algorithm was developed in this effort to demonstrate the use and flexibility of this subsystem (Section 2.2). These ROIs are then assessed to evaluate useful attributes (Section 2.3), including: volumes, centroids, and average

Subsystem overview

The feature matching and tracking subsystem is applied once for each newly visited simulation time step. The subsystem receives information about ROIs identified at the present time step and known features from other time steps. It applies a user-selected feature-matching algorithm to assign ROIs to existing features and define new features for unmatched ROIs. A representative adaptive volume based feature matching algorithm is presented in Section 3.2. A schematic of the feature matching and

Feature coloring and selective visibility

Feature coloring and selective visibility are presented first, because they are independent of the main illustrative effect subsystem that supports application of general effects (Section 4.2).

After completing feature matching, the MarmotViz filter generates an output field in which mesh cells are colored by feature number. By viewing renderings of this output field, users can easily track distinct features in animations or image series of time varying simulation data. This effect can be

Discussion

Illustrative visualization can serve as a valuable asset in the exploration and presentation of CFD simulation data. The ability to autonomously track, selectively visualize, and highlight particular features can aid in the understanding of evolution of phenomena such as bubbly flows and turbulence structures wherein multitudes of similar features are present. Feature halos and similar techniques can also provide depth cues when 3D simulation datasets are rendered in 2D media. Illustrative

Conclusions

In this study, a generalized framework was developed for illustrative visualization of time varying CFD datasets on unstructured meshes. The framework was implemented in MarmotViz, a ParaView plug-in, and supports the use of generalized ROI identification algorithms, feature matching and tracking algorithms, and illustrative effects. This implementation incorporated a gradient-based ROI identification routine, and a novel, adaptive, volume-based feature matching algorithm. A number of

Role of funding source

The funding sources, the Krell Institute and the U.S. Department of Energy Idaho Operations Office, did not participate in the design and execution of this study or in the preparation of this manuscript.

Acknowledgments

The authors wish to acknowledge generous financial support from the U.S. Department of Energy through the Krell Institute (Contract DE-FG02-97ER25308) and the DOE Idaho Operations Office (Contract DE-AC07-05ID14517).

References (33)

  • Monga, O., et al. 3D edge detection using recursive filtering: application to scanner images. CVGIP: Image Understanding (1991)

  • Silver, D., et al. Visualizing evolving scalar phenomena. Future Gener Comput Syst (1999)

  • Kalivas, D.S., et al. A region matching motion estimation algorithm. CVGIP: Image Understanding (1991)

  • Rautek, P., et al. Illustrative visualization: new technology or useless tautology? SIGGRAPH Comput Graph (2008)

  • Johnson, C., et al. NIH/NSF visualization research challenges (2006)

  • Johnson, C. Top scientific visualization research problems. IEEE Comput Graph Appl (2004)

  • Meyer-Spradow, J., et al. Illustrating dynamics of time-varying volume datasets in static images. Vision, Modeling, and Visualization (2006)

  • Pylyshyn, Z.W. Seeing and visualizing: it's not what you think (2003)

  • Joshi, A., et al. Evaluation of illustration-inspired techniques for time-varying data visualization. Comput Graph Forum (2008)

  • Born, S., et al. Illustrative stream surfaces. IEEE Trans Vis Comput Graph (2010)

  • Dallmann, U. Topological structures of three-dimensional flow separations. Deutsches Zentrum für Luft- und Raumfahrt;...

  • Correa, C.D., et al. Feature aligned volume manipulation for illustration and visualization. IEEE Trans Vis Comput Graph (2006)

  • Gray, H., et al. Anatomy of the human body (1918)

  • Joshi, A., et al. Illustration-inspired techniques for visualizing time-varying data

  • Banks, D.C., et al. A predictor-corrector technique for visualizing unsteady flow. IEEE Trans Vis Comput Graph (1995)

  • Reinders, F., et al. Visualization of time-dependent data with feature tracking and event detection. Vis Comput (2001)