Inside the ice shelf: using augmented reality to visualise 3D lidar and radar data of Antarctica

From 2015 to 2017, the ROSETTA‐Ice project comprehensively mapped Antarctica's Ross Ice Shelf using IcePod, a newly developed aerogeophysical platform. The campaign imaged the ice‐shelf surface with lidar and its internal structure with ice‐penetrating radar. The ROSETTA‐Ice data was combined with pre‐existing ice surface and bed topography digital elevation models to create the first augmented reality (AR) visualisation of the Antarctic Ice Sheet, using the Microsoft HoloLens. The ROSETTA‐Ice datasets support cross‐disciplinary science that aims to understand 4D processes, namely the change of 3D ice‐shelf structures over time. The work presented here uses AR to visualise this dataset in 3D and highlights how AR can serve simultaneously as a useful research tool for interdisciplinary geoscience and as an effective device for science communication and education.


Introduction
AS THE EARTH'S ICE SHEETS are coupled to the atmosphere, oceans and solid earth, glaciology is an inherently interdisciplinary field that is growing increasingly reliant on the development of a "systems" perspective. Ice shelves are particularly important glaciological features to study, as they modulate ice-sheet stability and sea-level rise by buttressing the flow of grounded ice out to the oceans (Scambos et al., 2004; Dupont and Alley, 2005). A recent study of the Ross Ice Shelf (Fig. 1(a)), led by a cross-disciplinary team of researchers from five different institutions, found that the ice shelf's stability is linked to ocean circulation beneath the shelf that is steered by the underlying geology and bathymetry (Tinto et al., 2019). While it has been established that Antarctica's ice shelves lose much of their mass from submarine melting at the ice-shelf base (Rignot and Jacobs, 2002), the connection between the ice, ocean and rock has not been explored at this scale. A unique aerogeophysical dataset collected by the ROSETTA-Ice project formed the basis for this interdisciplinary study that provides crucial context for ice-shelf stability. From 2015 to 2017, the ROSETTA-Ice project mapped the ice-shelf surface, interior structure, geology and near ice-front hydrography in unprecedented detail, with 10 km line spacing across about 480 000 km² (Depoorter et al., 2013) of ice shelf (Fig. 1(a)). Prior to these surveys the most comprehensive mapping of the ice shelf was done in the 1970s with 55 km grid spacing (Albert and Bentley, 1990).
The paper explores the role that augmented reality (AR) can play in geoscience research that is interdisciplinary, data rich and four-dimensional (4D). The term AR describes an experience that lets users interact with virtual objects superimposed into the real world in real time (Azuma et al., 2001). In this case, the virtual objects are ROSETTA-Ice data. The authors have used their experience analysing ROSETTA-Ice data to define the essential attributes of a data visualisation tool. For these purposes, a good data visualisation tool: (i) displays data in three dimensions and at multiple scales; (ii) allows users to recall broad spatial context; (iii) promotes communication between users; and (iv) is readily accessible. Because three-dimensional (3D) data visualisation tools are useful to geoscience students, the aim is for a tool that can be useful for scientific data exploration and geoscience learning and is engaging to non-expert users.
The authors have developed a data visualisation application for the Microsoft HoloLens, a head-mounted display unit for AR, that allows users to engage with two major ROSETTA-Ice datasets: (i) laser altimetry (lidar) data that maps the ice-shelf surface topography at sub-metre resolution; and (ii) shallow-ice ice-penetrating radar data that reveals the internal structure within the upper several hundred metres of the ice shelf.

Visualising 3D Data
The goal of the ROSETTA-Ice project was to interrogate multiple ice-shelf processes by integrating several datasets at different scales. Many geophysical studies are concerned with changes in both space and time, making scientific questions 4D. However, many of the ROSETTA-Ice datasets map the ice shelf in two dimensions (2D). Radar data, for example, shows planar cross sections through the ice shelf (Fig. 1(c)). Lidar data shows 3D swaths of the ice surface, but is often presented as a single 2D line of elevation data (Fig. 1(b)); the width of the lidar swath is small compared to both the ice-shelf extent and the length of a survey line. Additionally, the ROSETTA-Ice datasets are collected in a grid with lines oriented north-to-south and east-to-west (Fig. 1(a)), so ice-shelf features are mapped in two directions. In order to study 4D processes, researchers have to first combine multiple 2D datasets to build 3D structures in their minds. The ability to perform the mental rotations required to build 3D structures from 2D images depends on how the 2D images are oriented (Shepard and Metzler, 1971). Using visual tools that accurately orient 3D data makes it easier for researchers to understand changes over time and space (Shelton and Hedley, 2002). The ability to access the 2D data in three dimensions allows researchers to consider processes that change in time more easily. With data visualised in 3D, researchers can focus on the fourth dimension, which is essential for dynamic earth systems processes. The ROSETTA-Ice datasets include ice-penetrating radar, which is used in glaciology to resolve structures within the ice and to better understand ice geometry (Gogineni et al., 2001, 2014), history (MacGregor et al., 2015), flow (Vaughan et al., 1999) and basal properties (Chu et al., 2016). The radar data provides 2D cross sections through the ice (Fig. 1(c)), similar to seismic reflection profiles.
The radar profiles record internal reflection layers that contain information about 3D structures within the ice sheet. Ice-penetrating radar data is often quantified and converted into 2D grid form, which introduces gridding artefacts and could potentially result in a loss of structure through interpolation. Scientific analysis of original datasets has the potential to be more powerful than analysis of processed and subsampled data products. This is a challenge for large quantities of high-resolution radar data, such as the 55 000 line-kilometres collected by the ROSETTA-Ice project.
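As a toy illustration of how gridding can erase structure, the sketch below (synthetic values only, not ROSETTA-Ice data) samples a fine-scale internal layer onto a coarse 1 km grid and interpolates back; the short-wavelength folds vanish entirely from the gridded product:

```python
import numpy as np

# Synthetic internal layer: a gentle slope plus fine-scale folds
x = np.linspace(0, 10_000, 2001)                         # along-track distance, m
layer = 0.01 * x + 5.0 * np.sin(2 * np.pi * x / 500.0)   # 500 m wavelength folds

# Grid at 1 km posting, then interpolate back, as a subsampled product would
x_grid = x[::200]                                        # 1000 m spacing
layer_interp = np.interp(x, x_grid, layer[::200])

# The 5 m amplitude folds are lost entirely in the gridded product
print(round(float(np.max(np.abs(layer - layer_interp))), 1))
```

In this contrived case the coarse grid samples every fold at a zero crossing, so the interpolated product retains only the slope; real gridding artefacts are less extreme but of the same character.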
As is often the case for radar and lidar studies of ice sheets, the first step in analysing ROSETTA-Ice data was to explore the dataset visually. At this early stage, several radar echograms are visualised simultaneously, and similar features in this 2D data are used to interpret 3D structures. Several data visualisation approaches are used: (i) radar echograms can be viewed individually on a high-resolution display; (ii) high-resolution images of the echograms can be printed out; or (iii) geophysical software packages can be used to create "fence diagrams" that represent multiple 2D echograms of the 3D data and create the impression of a "perspective" view (Fig. 1(d)). The first two methods require viewers to build 3D structures in their minds. Though geoscientists are skilled and practised 3D thinkers (Kastens and Ishikawa, 2006), this process poses significant challenges to even those with above-average spatial reasoning skills (Shelton and Hedley, 2002). Geophysical software programs overcome the visualisation challenges of the first two methods by displaying data with proper spatial referencing. However, these programs limit access to data visualisation as they are proprietary and require expertise to operate.

Visualising Data at Multiple Scales
The ROSETTA-Ice project collected datasets at different scales, with the greatest detail coming from the sub-metre laser altimetry measurements of the ice-shelf surface. The ROSETTA-Ice datasets support the investigation of processes that act on a range of spatial and temporal scales, from geologic processes that build mountains over millions of years, to ice that flows up to 1000 m/year and ocean currents and temperatures that change daily and seasonally. The two datasets discussed here, lidar and radar, measure different scales of the Ross Ice Shelf surface and internal structure; the variability of the ice-shelf surface topography is of the order of metres, while the variability of the internal structures is of the order of tens of metres (Figs. 1(b) and (c)). In order to visualise the structure of both the lidar-derived surface topography and radar internal structure, the axes must be scaled differently and the reader must always bear in mind the difference in scale. A more effective data visualisation tool would remove the dependence on static plot axes to convey scale. Spatial context is often lost when interpreting data at different scales and when data does not show the entirety of the study area at once. It is easy to forget the expanse of the Ross Ice Shelf when examining a single line of radar data across it. Referring to a map with labelled gridlines (Fig. 1(a)) helps the viewer contextualise the data but still leaves room for confusion. For example, it can be disorienting to read a horizontal radar echogram or lidar surface measurement from one of the north-south tie lines that is oriented vertically (Figs. 1(a)-(c)). Furthermore, unless the map is set to the same scale as the data, it can be difficult to locate a feature in the radar data on the map. A more useful data visualisation tool would dynamically link the data to the map, or let viewers see detailed data within a larger map so that broad context is preserved and understood intuitively.

Maintaining a Collaborative Environment
The integrative science supported by this rich 3D dataset necessitates communication between scientists from different disciplines, making each collaborator simultaneously an expert and a novice. A good data visualisation tool enables conversation and also allows the viewers to communicate non-verbally. These capabilities, noted by others, are important to keep in mind when designing data visualisation tools.
ROSETTA-Ice team members were based at research institutions spanning the globe. Often data was interrogated during video conference calls, or sometimes at conferences where team members could meet in person. To facilitate this type of collaboration, a data visualisation tool must not require site-specific hardware.
The work presented here is the first published use of AR in glaciology leveraging multiple datasets. The authors' HoloLens application, called "Inside the Ice Shelf", presents shallow-ice radar and lidar data collected over the Ross Ice Shelf in an immersive 3D setting. The application introduces users to the Ross Ice Shelf, allowing them to engage with the datasets in a new and more intuitive way, and maintaining an environment for viewer collaboration and discussion. This paper defines the datasets and workflow used to create Inside the Ice Shelf, describing the features and limitations of the application, and discusses this and future work in the context of previous relevant work in AR and virtual reality (VR).

Data
From 2015 to 2017, the ROSETTA-Ice team, supported by the National Science Foundation, the Gordon and Betty Moore Foundation and the Old York Foundation (all based in the USA), collected geophysical data over the Ross Ice Shelf (Fig. 1(a)), providing a complete top-to-bottom view of the ice-shelf surface, base and internal structure (Figs. 1(b) and (c)), as well as the bathymetry below the ice shelf. The ice-shelf surface, base and structure were mapped with the IcePod system, an integrated ice-imaging platform externally mounted on cargo aircraft (Lockheed LC-130 Hercules) flown by the New York Air National Guard (USA). The IcePod was developed at the Lamont-Doherty Earth Observatory of Columbia University (LDEO), based in New York state, USA. It includes shallow-ice radar, deep-ice radar, laser altimeter, magnetometer, visible camera, infrared camera, pyrometer and a Global Navigation Satellite System inertial navigation system (GNSS-INS).
The ROSETTA-Ice project has collected more than 55 000 line-kilometres of data over the Ross Ice Shelf, including about 50 000 line-kilometres of shallow-ice radar data and about 41 000 line-kilometres of lidar data. The survey was designed to map the ice shelf at high resolution; survey lines roughly east-west in orientation (hereafter indicated with "L") were spaced 10 to 20 km apart and intersected by lines roughly north-south in orientation (hereafter indicated with "T") spaced 55 km apart (Fig. 1(a)). This paper focuses on the shallow-ice radar (SIR) and lidar measurements of the ice-shelf interior and surface.

Lidar
The lidar instrument on board the IcePod is a Riegl VQ-580 airborne laser scanner, which is specifically designed for use over snow and ice. The laser transmits pulses at a near-infrared wavelength over a 60° field of view. In cloud-free conditions and at an average flight elevation of about 750 m above ground level, the instrument provides elevation measurements for about 15.2 cm footprints at a typical point density of 1 to 4 pts/m² across a 0.75 to 1 km wide swath. Preliminary statistics from three crossovers flown during a 2016 ROSETTA-Ice flight that sampled the southern portion of the Ross Ice Shelf suggest an internal accuracy of 7.3 to 12.4 cm.
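The quoted swath width follows from simple scanner geometry, and the footprint size from the laser beam divergence. The sketch below reproduces the figures; the 0.2 mrad divergence is an assumed illustrative value, not a quoted instrument specification:

```python
import math

def swath_width(altitude_m, fov_deg):
    """Ground swath width on flat terrain for a scanner with the given field of view."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))

def footprint_diameter(altitude_m, divergence_mrad):
    """Laser footprint diameter from beam divergence (small-angle approximation)."""
    return altitude_m * divergence_mrad * 1e-3

# 60 degree FOV at ~750 m above ground, as in the ROSETTA-Ice survey
print(round(swath_width(750.0, 60.0), 1))        # ~866 m, within the 0.75 to 1 km range
# An assumed ~0.2 mrad divergence reproduces the ~15 cm footprint
print(round(footprint_diameter(750.0, 0.2), 2))  # 0.15 m
```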

Radar
The SIR on board the IcePod is a linear frequency-modulated continuous-wave radar system that was designed and built at LDEO (Frearson and Dhakal, 2018). The SIR emits a 1 μs chirp, with a 600 MHz bandwidth and a centre frequency of 2 GHz. Its power output is about 1 W. At an average survey elevation of about 750 m above ground level, the SIR can image to a maximum depth of 450 m below the ice-shelf surface. It cannot image the full thickness of the Ross Ice Shelf, which can be up to 1000 m, but the large bandwidth enables high-resolution imaging within the upper portion of the ice column. The postprocessed SIR data provides information about the ice-shelf structure at a resolution of 25 cm.
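The 25 cm figure is consistent with the theoretical range resolution of a frequency-modulated radar, c/(2B); a quick check (the ice refractive index here is an assumed typical value, not from the source):

```python
# Theoretical range resolution of an FMCW radar: delta_r = c / (2 * B),
# refined in ice by the refractive index n = sqrt(eps_r) ~ 1.78 (assumed value)
c = 299_792_458.0       # speed of light, m/s
bandwidth_hz = 600e6    # SIR chirp bandwidth
n_ice = 1.78

res_free_space = c / (2 * bandwidth_hz)
res_in_ice = res_free_space / n_ice
print(round(res_free_space, 2), round(res_in_ice, 2))  # 0.25 0.14
```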

Continent-wide Data
To provide the user with a geospatial reference and context for the two ROSETTA-Ice datasets, these datasets were visualised against a backdrop of ice-sheet and ice-shelf surface topography, bed topography and bathymetry. This data comes from the Bedmap2 product (Fretwell et al., 2013), which integrates measurements from satellite, airborne and modelled outputs to produce a 1 km grid across the Antarctic continent.

HoloLens Platform
The Microsoft HoloLens is a mixed reality self-contained unit that allows users to interact with virtual objects while still being able to interact with the real world; virtual objects can overlay and be occluded by objects in the physical world. The HoloLens is an untethered wearable computer that enables users to explore an AR experience more freely than with tablet-or phone-based AR, which requires users to hold a device. The HoloLens runs a version of the Windows 10 operating system and the authors built their application using the Unity game engine (Unity, 2017).
The user selects virtual objects with their gaze, which is tracked by the HoloLens, and interacts with virtual objects using voice commands or hand gestures read by the HoloLens' depth-sensing cameras (Microsoft, 2019a). A complete description of HoloLens gestures is provided by Microsoft (2019a). There are two notable limitations of the HoloLens. First, the field of view is restricted to 30° due to power and processing considerations. Second, the complexity and resolution of meshes and textures that the device can render at a reasonable frame rate are limited by computational power; exceeding this limit can lead to jitter and instability in the virtual objects.

Data Preparation
ROSETTA-Ice SIR and lidar data processing prior to data interpretation included geolocation and data-specific geophysical corrections (Fig. 2). To precisely locate the SIR and lidar measurements, the raw data was combined with position and attitude data from the GNSS-INS on board the IcePod. This system consists of a NovAtel SPAN-SE L1/L2 GNSS receiver and a Litton LN-200 inertial measurement unit (IMU) which uses gyroscopes and accelerometers to track aircraft motion.
The lidar and Bedmap2 data used here were transferred to the European Petroleum Survey Group's (EPSG) 3031 polar stereographic projection with a true scale at 71°S, which is the geographic convention for many Antarctic studies. The geographic coordinates of the radar and lidar data were retained and projected to EPSG 3031 during the application development stage (Fig. 2). Survey lines L470, L590, L840, T1060 and T1120 were selected (Fig. 1(a)); these lines each contained defined features in the radar data, and the lidar data was continuous along each segment of interest.
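For readers unfamiliar with the projection, the EPSG 3031 forward mapping can be sketched directly from the standard south polar stereographic formulas (Polar Stereographic variant B on the WGS 84 ellipsoid). This is an illustrative implementation, not the ROSETTA-Ice processing code; in practice a tested library such as pyproj would be used:

```python
import math

# WGS 84 ellipsoid
A = 6378137.0
E2 = 0.00669437999014
E = math.sqrt(E2)

def to_epsg3031(lat_deg, lon_deg):
    """Forward south polar stereographic projection in the EPSG 3031
    convention: true scale at 71 deg S, central meridian 0 deg, WGS 84."""
    phi = math.radians(-lat_deg)       # southern latitudes treated as positive
    phi_ts = math.radians(71.0)
    lam = math.radians(lon_deg)

    def t(p):  # Snyder's t (isometric-latitude term)
        return math.tan(math.pi / 4 - p / 2) * (
            (1 + E * math.sin(p)) / (1 - E * math.sin(p))) ** (E / 2)

    def m(p):  # normalised radius of the parallel
        return math.cos(p) / math.sqrt(1 - E2 * math.sin(p) ** 2)

    rho = A * m(phi_ts) * t(phi) / t(phi_ts)
    return rho * math.sin(lam), rho * math.cos(lam)

print(to_epsg3031(-90.0, 0.0))   # the South Pole maps to the grid origin (0, 0)
x, y = to_epsg3031(-80.0, 180.0)
print(y < 0)                     # Ross Ice Shelf points have negative northings
```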
Radar. SIR data was processed and analysed using in-house MATLAB-based software. The raw SIR data contains beat frequencies that are proportional to the distance from the ice-shelf surface. These beat frequencies were digitised at a 50 MHz sampling rate. Surface reflections were aligned and coherently averaged to improve the signal-to-noise ratio. The GNSS-INS trajectory was then used to correct the data for aircraft motion. The data was smoothed with a symmetrical Hanning window and an inverse fast Fourier transform was applied to extract the time domain content of the data. Horizon-picking software was used to automatically select the ice-surface layer. The post-processed SIR data was saved in five-minute increments, each of which corresponds to an along-line distance of about 30 km. These increments were exported as greyscale images in the portable network graphic (PNG) format.
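The dechirping sequence described above (coherent averaging, Hanning window, inverse FFT) can be sketched on synthetic data; all signal parameters here are illustrative except the 50 MHz sampling rate, and this is not the authors' MATLAB code:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 50e6                  # 50 MHz digitiser, as for the SIR
n = 4096
t = np.arange(n) / fs

# Simulated dechirped traces: one beat tone (frequency encodes range) in noise,
# repeated over 64 aligned pulses; signal parameters are illustrative
f_beat = 3.2e6
traces = np.cos(2 * np.pi * f_beat * t) + 0.8 * rng.standard_normal((64, n))

avg = traces.mean(axis=0)                # 1) coherent averaging improves SNR
windowed = avg * np.hanning(n)           # 2) Hanning window suppresses sidelobes
profile = np.abs(np.fft.ifft(windowed))  # 3) IFFT yields the range/time profile

# The strongest return falls at the bin matching the simulated beat frequency
peak_bin = int(np.argmax(profile[: n // 2]))
print(peak_bin, round(f_beat * n / fs))
```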
The next step in SIR processing required user input. Using the ~30 km segments, which make it easier to identify horizons in the ice shelf, the user guided the in-house interactive software to select various interfaces in the ice shelf. Where visible, the horizon showing features within the ice shelf was picked. These picked horizon points were saved as MATLAB (.mat) and ASCII (*.txt) files containing the depth of the horizon below the ellipsoid in metres and position information in degrees. The data was not projected at this stage.
In the HoloLens, the fully processed data (PNG images) was displayed and the picked horizons were shown as lines overlain on the PNGs. Radargrams were plotted with an additional 50 m of airspace above the surface in order to make variations in the surface reflection more distinct. The image data was scaled so that strong reflectors, such as the ice-shelf surface and base, appear dark and weak reflectors appear light (Figs. 1(c) and 3(a)).
Strong reflectors are the focus of the SIR data and are an integral part of the analysis of the internal structure of the Ross Ice Shelf for the ROSETTA-Ice project. One of these strong reflectors is a layer within the ice that is visible throughout the Ross Ice Shelf (Fig. 3). Where the ice shelf is no more than about 450 m thick, the SIR can also identify the base of the ice shelf.
Lidar. Accurate measurement of the range from the laser scanner to the surface spot requires repeated measurements of the translational offsets and reference system differences between the lidar and the GNSS-INS during the survey period. Since the raw VQ-580 data was stored in Riegl's proprietary RXP format and coordinate system, initial lidar geolocation and processing were performed with Riegl's RiPROCESS software, which exports sets of individually geolocated points as point clouds. These 3D point clouds were exported in LAS v1.4 (without waveform) format and the data was converted to ASCII format with the open-source Point Data Abstraction Library (PDAL, 2018).
Geophysical corrections were applied to the WGS 84 elevation point clouds. The elevations were corrected for ocean tides with the CATS2008 circum-Antarctic tidal model (an update to the model described by Padman et al., 2002) and ocean tide loading with the TPXO7.2 tidal model (Egbert and Erofeeva, 2002). The EIGEN-6C4 geoid model (Förste et al., 2014) and an approximate value for the sub-ice-shelf mean dynamic topography were then used to convert the ellipsoidal elevations to orthometric heights above mean sea level.
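The height reduction amounts to subtracting the tidal, loading, geoid and mean-dynamic-topography terms from the ellipsoidal elevation. A minimal sketch with illustrative values (not ROSETTA-Ice numbers, and with sign conventions simplified):

```python
def orthometric_height(h_ellipsoid, geoid_undulation, ocean_tide,
                       tide_loading, mean_dynamic_topography):
    """Reduce an ellipsoidal elevation (m) to height above mean sea level by
    removing tidal signals, the geoid undulation and the mean dynamic
    topography. Sign conventions are simplified for illustration."""
    detided = h_ellipsoid - ocean_tide - tide_loading
    return detided - geoid_undulation - mean_dynamic_topography

# Illustrative values only: a negative geoid undulation raises the
# orthometric height relative to the ellipsoidal one
print(round(orthometric_height(5.0, -52.0, 0.4, -0.02, -1.4), 2))  # 58.02
```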

Continent-wide Data
The Bedmap2 ice-surface, ice-base and bathymetry data were exported from ArcGIS's ArcMap (ESRI, 2015) in GeoTIFF format with the EPSG 3031 polar stereographic projection. Separately, a subsection of the Bedmap2 dataset was exported delineated by the Ross Ice Shelf's extent (Fig. 1(a)).

Workflow
Unity (Unity, 2017), game-engine development software that allows for 3D design, was used to develop the HoloLens application "Inside the Ice Shelf". In Unity, "GameObjects" (such as meshes) are placed, rotated and scaled within the application's scene space and organised using the application's hierarchy. GameObjects can be assigned associated materials (such as colour and texture) that control how the object is rendered. They can also be designated as the "children" of other objects and inherit their parents' coordinate system. Scripts written in the C# programming language can be attached to an object and used to control its properties, such as its position, scale and the user's ability to interact with it. The datasets required several preprocessing steps in preparation for manipulation in Unity. This began with the Bedmap2 data that forms the reference and base map of the application. Bedmap2 GeoTIFFs were converted into 3D surface triangulated irregular networks (TINs) using ArcGIS's ArcScene (ESRI, 2015; Fig. 2). As large TINs with too many triangles lead to jitter in the HoloLens, the number of triangles in each TIN was limited to 200 000. The TINs were exported in the virtual reality modelling language (VRML) format, the only 3D format supported by ArcScene (Fig. 2).
As the VRML format is not supported by Unity, the VRML files were converted to the Wavefront OBJ 3D format using the open-source 3D modelling software Blender (Blender, 2017) (Fig. 2). Blender, which allows more detail to be retained in the mesh while keeping it within the approximate 200 000 triangle limit, was also used to reduce mesh resolution more accurately. Even though 100 000 is greater than the HoloLens' recommended triangle count, the application is still usable with meshes of up to 250 000 triangles.
Finally, these OBJ files were imported directly into Unity and placed within the application's hierarchy of objects (Fig. 2). The resulting Bedmap2 GameObject in Unity took its EPSG 3031 projected coordinate system from the underlying Bedmap2 dataset, which was maintained throughout this procedure. The Unity hierarchy ensured that any child of the Bedmap2 GameObject with appropriate geolocation data could be accurately located and related to other datasets in the application space.
The ROSETTA-Ice radar PNGs were cropped in Adobe Photoshop to the geographic extent of the image. The cropped radargrams were imported at Unity's maximum resolution of 8192 × 8192 pixels, which was lower than the resolution of the radargrams themselves. In Unity, the "quad" GameObject, which is used to create planes, was assigned to the PNGs. A two-sided unlit texture material was assigned to the PNGs to retain the contrast of the original image and allow viewers to see the radargrams from all sides. Finally, a C# script was attached to the radar GameObject (in its original geographic coordinates) to position it accurately as a child in the Bedmap2 EPSG 3031 reference frame.
The incorporation of the lidar data into the HoloLens application required extensive decimation (reduction) of the point cloud (from about 6 million points to fewer than 200 000 points, resulting in about a 97% loss in resolution) in order for it to be rendered at a reasonable frame rate. The lidar swath was examined using the open-source software CloudCompare and areas of particular interest were selected to display in the application (Fig. 2). Point-derived OBJ meshes (constructed through Delaunay triangulation) were exported from CloudCompare and the same procedure was followed in Blender as before to decrease the mesh size of the Bedmap2 data (Fig. 2). The sub-metre resolution of the lidar data was lost in the decimation process; however, vertical features larger than about 3 to 5 m in amplitude, including large crevasses, mounds and rifts, are among the main surface features relevant to process-oriented ice-shelf studies and are still resolved.
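The decimate-then-triangulate step can be sketched with NumPy and SciPy at reduced scale (synthetic points and a smaller triangle budget than the 200 000 used in the application). A planar Delaunay mesh of n vertices has at most 2n − 5 triangles, which motivates keeping roughly half the triangle budget in vertices:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

# Synthetic swath: 50 000 points of a gently undulating surface
n_pts = 50_000
xy = rng.uniform(0, 1000, size=(n_pts, 2))
z = 40.0 + 4.0 * np.sin(xy[:, 0] / 50.0) + 0.1 * rng.standard_normal(n_pts)
cloud = np.column_stack([xy, z])

# Keep at most (budget + 5) // 2 vertices to respect the triangle budget
budget = 20_000                      # scaled down from the application's 200 000
keep = min(n_pts, (budget + 5) // 2)
decimated = cloud[rng.choice(n_pts, size=keep, replace=False)]

# Triangulate in plan view (a 2.5D terrain mesh), as CloudCompare does
tri = Delaunay(decimated[:, :2])
print(len(decimated), tri.simplices.shape[0] <= budget)
```

Random subsampling is the simplest decimation strategy; CloudCompare and Blender offer feature-aware alternatives that preserve crevasse edges better.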
Once the application was fully developed in Unity, it was built for HoloLens using the Universal Windows Platform settings. The resulting project solution was opened in Microsoft Visual Studio 17 (Microsoft, 2017), where the application metadata was set. From here, the application was uploaded to a HoloLens using a USB connection for the purpose of testing and development. The final "master" build was then tested and uploaded to the Microsoft Store for public availability. A link to the application is also included in the GitHub repository at https://github.com/martinjpratt/ZoomAntarctica.

Results
The ROSETTA-Ice data processing described above produced a detailed map of the ice-shelf surface at 3 to 5 m horizontal ground sample distance (GSD) across about a 750 m wide swath from the lidar and a map of the underlying shallow (up to 450 m depth) ice-shelf structure at up to 0.25 m GSD from the SIR (Fig. 3). Even with the subsampling required for display in the HoloLens, this work shows the Ross Ice Shelf in unprecedented detail and in 3D. The application "Inside the Ice Shelf" allows users to investigate the relationship between the ice-shelf surface and interior, thus shedding light on important ice-shelf processes. The current version of the application displays lines L840, L590, L470, T1060 and T1120 from the ROSETTA-Ice dataset.
Inside the Ice Shelf features two scenes that reinforce the broad-context ROSETTA-Ice SIR and lidar data interpretation. The initial scene displays the Bedmap2 digital elevation models (DEMs) of the bathymetry/sub-ice topography and ice surface topography across all of Antarctica (Fig. 4(a)). These datasets, along with graticule lines of latitude and longitude, provide spatial context for the user. The second scene is restricted to the Ross Ice Shelf extent (Fig. 4(b)). In this scene, the DEMs are shown in more detail and serve as a backdrop and reference for viewing the SIR and lidar data.
The viewer can switch between scenes, change the vertical exaggeration of the DEMs and view the ROSETTA-Ice flight lines (Fig. 4(b)) by interacting with a menu (Fig. 4(a)) that is always in view and accessible with a voice command. Viewers can change the vertical exaggeration of the DEMs by manipulating the slider using a "tap-and-hold" gesture (Microsoft, 2019a); they can switch between the Antarctica and Ross Ice Shelf scenes and access ROSETTA-Ice flight lines using the "air-tap" gesture (Microsoft, 2019a). A video showing the user experience of Inside the Ice Shelf can be found at https://youtu.be/SM_FfTjj95Y.
The viewer can interact with both scenes using two-handed gestures to resize (tap-and-hold with both hands and move them away from, or towards, each other), move (tap-and-drag with one or two hands) and rotate (tap-and-hold with two hands and move one in front of the other) the virtual object.

Visualising Data in 3D
Inside the Ice Shelf gives a 3D immersive view of the ROSETTA-Ice radar data and helps users to grasp the 3D structure of the Ross Ice Shelf easily. The 2D SIR data is displayed in its correct geographic position, so viewers can stand in-between or over echograms to see the 3D ice-shelf structure instead of having to build 3D structures in their minds from the 2D SIR images (Fig. 4(c)). The immersive view can reinforce the 3D structure of the ice shelf by forcing the viewer to perform a physical action (either turn around, rotate the model or walk around the model) in order to see a feature from a different perspective.
In addition to viewing the raw data in the SIR images, the user can view the internal horizon pick (boundary between different layers in the ice shelf) with a tap gesture and easily access the "next level" data product (Fig. 4(d)). Higher-level radar data products, like horizon picks, can be subjective; the ability to see the raw data simultaneously to verify the higher-level data product is useful to glaciologists using radar to study the ice.

Visualising Data at Multiple Scales
Maintaining a sense of scale is important for the understanding of ice-sheet processes, so Inside the Ice Shelf allows the user to explore Antarctica and the Ross Ice Shelf at a range of scales. In the initial Antarctica scene (Fig. 4(a)), the DEMs appear flat because they have equal vertical and horizontal scales. Rarely do ice-sheet data visualisations display the true aspect ratio of the ice sheets, since vertical exaggeration greatly helps to explore topography. However, displaying the true scale is useful since the true aspect ratio of the ice sheets is important for understanding ice-sheet processes. As previously mentioned, the viewer can adjust the vertical exaggeration of the DEMs. When the viewer increases the vertical exaggeration in the Ross Ice Shelf view, the surface and bed DEMs separate, and the user can examine the ice-sheet and/or ice-shelf thickness and the underlying bed structure (Figs. 4(b) and (c)). In the Ross Ice Shelf scene, SIR lines are displayed in their geographic locations, with the vertical exaggeration set to match that of the DEM (Fig. 4(c)). Scale is again reinforced when the viewer accesses the lidar data (Fig. 4(e)); on the radargram for line T1060, four selectable boxes show the extent of the viewable ice-shelf surface (Figs. 4(c) and (d)). When selected, the lidar segments appear above the radar-traced ice surface but can also be moved (Fig. 4(e)). The lidar meshes are rendered with the same vertical exaggeration as the radargram scene. For optimal viewing, the viewer should set the vertical exaggeration to between 1× and 3×. For ease of use, the viewer can disable the ability to move and resize the Bedmap2 surface DEMs when viewing the lidar data. The lidar meshes can be hidden, and the user can return to the radargram scene by clicking the associated button in the tagalong menu.
The images were captured at the Fossett Laboratory for Virtual Planet Exploration at Washington University in St Louis using a Spectator View video feed that returns images at 1080p resolution, the maximum possible with Fossett Laboratory equipment.
The ability to move between datasets and views also enhances the sense of scale. The static plots of radar and lidar in Fig. 1 require viewers to remind themselves of the different respective scales shown. The application shows scale in a more dynamic way; the Ross Ice Shelf can be seen in its entirety with all of the ROSETTA-Ice survey lines, but the viewer must zoom in to see the detail of the radar, and then zoom in further to see the detail of the lidar. These distinct views and the interaction necessary to move between them also can help the viewer retain important context (Fig. 4(e)).
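The vertical-exaggeration control described in this section amounts to scaling only the z coordinates of each mesh vertex about a reference level; a minimal sketch with hypothetical vertex values:

```python
import numpy as np

def apply_vertical_exaggeration(vertices, factor, datum=0.0):
    """Scale only the z coordinates of an (n, 3) vertex array about a
    reference level, leaving the horizontal coordinates untouched."""
    out = np.asarray(vertices, dtype=float).copy()
    out[:, 2] = datum + (out[:, 2] - datum) * factor
    return out

# Hypothetical ice-shelf base and surface elevations, exaggerated 3x
verts = np.array([[0.0, 0.0, -500.0],
                  [0.0, 0.0, 50.0]])
print(apply_vertical_exaggeration(verts, 3.0)[:, 2])  # base -1500 m, surface 150 m
```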

Maintaining a Collaborative Environment
AR has been shown to be a more effective collaboration tool for 3D data visualisation than VR because of its facilitation of natural communication. The ROSETTA-Ice team collaborates in person with 5 to 10 people in a room and also remotely via video conference calls. Inside the Ice Shelf allows users to interact with each other in each of these settings. In both situations, collaboration is facilitated using the Mixed Reality Capture function over a Wi-Fi connection. Since Inside the Ice Shelf is developed for a hands-free AR platform, the user's awareness of the physical world is maintained and they can gesture normally to communicate with others in the same physical space. Inside the Ice Shelf lets glaciologists easily discuss how 3D features in the ice change over time. In the case of a video conference call, where non-verbal communication may be limited by the screen size, all remote collaborators can still at least view Inside the Ice Shelf by sharing the Mixed Reality Capture screen.
AR has been used as an effective teaching tool in the geosciences. It overcomes some of the 3D spatial thinking challenges that students face and helps all students grasp scientific concepts (Shelton and Hedley, 2002). In addition to facilitating collaboration amongst researchers, the application introduces users to ice sheets by showing them a unique new dataset that is currently being used at the forefront of glaciology research and can be used as a teaching tool.

Discussion and Future Work
The timing of the ROSETTA-Ice fieldwork coincided with a sharp rise in the popularity of AR in 2016 (Majumdar, 2016), which sprang from decades-old research in AR and VR. While AR and VR tools for geoscientific applications have been developed over the last few decades (Kreylos et al., 2006, 2008; Sanyal et al., 2008; Hedley and Lonergan, 2012; Veas et al., 2013; Pratt, 2017), including in industry applications such as engineering and mining (Lato, 2019) and in education (Shelton and Hedley, 2002), neither AR nor VR has become a mainstream research tool in the geosciences.
Media and journalism have adopted VR to tell stories in an immersive setting. During the 2016-2017 season of ROSETTA-Ice, the field team partnered with the New York Times, which produced four VR videos (Corum and Roberts, 2017) published online in conjunction with a printed article (Gillis, 2017) and a richly illustrated three-part online interactive article (Corum, 2017). In the field, the ROSETTA-Ice team experimented with VR and 3D imaging for documenting the field environment. A combination of Ricoh Theta S 360° cameras and a cubic arrangement of GoPro cameras was used to recreate the experience of life in Antarctica on the Google Cardboard VR platform, the development of which (in 2014) made immersive VR experiences widely accessible. In addition, efforts like Google Earth's Google Expeditions (Google, 2019) enable users to understand and appreciate distant environments without having to leave their classrooms and offices. The compelling VR story produced by the New York Times was effective science journalism that allowed an unfamiliar and remote environment to be explored by many people. However, its scope did not include data visualisation or exploration. The authors consider the ROSETTA-Ice data space to be similarly unfamiliar, and the remote Antarctic environment can likewise be made accessible through the use of AR.
Inside the Ice Shelf was developed on the HoloLens to accurately represent radar and lidar data collected over the Ross Ice Shelf in an immersive 3D environment. The application supports moderate interaction with the data, letting users explore the 3D dataset in a more intuitive way and collaborate with each other. This work joins decades of research on 3D data visualisation and interaction in the geosciences that has explored the use of walk-in cave automatic virtual environments (CAVEs, with projections onto a room-sized cube) (Cruz-Neira et al., 1993; Ohno and Kageyama, 2007), VR headsets, AR headsets, mobile AR (Veas et al., 2013) and combinations of AR and VR experiences. Each of these techniques presents strengths and weaknesses, but there is general agreement that 3D data visualisation, including AR and VR, is a powerful tool for geoscience research (Kreylos et al., 2006; Ohno and Kageyama, 2007) and teaching (Shelton and Hedley, 2002). Research in AR and VR platforms has grown alongside the development of high-resolution digital outcrop models (Bellian et al., 2005; Nesbit et al., 2018), as well as increased research into spatial visual thinking and the use of 3D models to develop spatial reasoning skills in the geosciences (Kali et al., 1997).
Today, VR platforms take the shape of either wearable headsets or CAVEs, where the user is immersed in the virtual world. Both wearable VR platforms and CAVEs perform well optically. Wearable VR headsets, however, isolate the user from the physical world, and are not suitable for the collaborative data exploration experience that this study aims to create. CAVEs allow users to collaborate easily, as several people can explore data in a CAVE simultaneously. However, CAVEs are not broadly accessible since they are site-specific room installations that require considerable resources to implement (Miller et al., 2005). A smaller, less expensive platform like the HoloLens is more suitable for collaborations between researchers who are far apart; it is also easier to bring into classrooms and outreach events.
Engaging with 3D datasets through AR and VR has been shown to be an effective way for users to understand their data and to foster collaboration, especially across disciplines (van Dam et al., 2000). AR platforms, in particular, have been found to enhance collaboration during data exploration and interpretation, which is crucial in a field like glaciology that is growing increasingly interdisciplinary. The HoloLens was chosen because, as a wearable AR device (as opposed to wearable VR), it can aid collaboration. In addition to wearable computers like the HoloLens, mobile AR platforms have been designed for use in the geosciences. Veas et al. (2013) note the disconnect that occurs when researchers return from the field and analyse data in their offices; they created a mobile AR platform to allow researchers to engage with data and models in their real-world context, and conclude that AR is a promising field for environmental monitoring. However, mobile AR is not viable for on-site work in ice-sheet studies due to the lack of a reliable Internet connection and the cold temperatures that limit the performance of tablets.

Further Development
The HoloLens is a promising platform for exploring geophysical datasets, but future work is required to make Inside the Ice Shelf an accessible and easy-to-use research, teaching and outreach tool. The HoloLens is currently limited in its processing power, field of view, and ability to render high-resolution meshes and images, especially when compared to current VR devices that run on high-powered computers via a tether.
The HoloLens' optical performance presents the biggest limitation to its use. In the current application, virtual objects can jitter and prevent the viewer from seeing data clearly. The limited performance also restricts the amount of data that can be displayed; the current version of the application displays about 7% of the SIR data and only about 0.1% of the lidar data. In addition to including the substantial remaining SIR and lidar data, the authors aim to visualise other ROSETTA-Ice datasets, including gravity and magnetic anomalies, the resulting bathymetry and visible imagery. Adding these datasets will make Inside the Ice Shelf a truly useful tool for cross-disciplinary science.
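Fitting only a small fraction of a dense survey dataset into a headset's rendering budget is, at its simplest, a decimation problem: the data are subsampled until the vertex count is affordable. The sketch below illustrates one simple approach, uniform striding of a gridded elevation tile; the function and the budget value are hypothetical and are not drawn from the application's code:

```python
import numpy as np

def decimate_to_budget(grid, max_vertices):
    """Subsample a 2D elevation grid by uniform striding so the
    resulting mesh stays within a device's vertex budget.

    grid: (rows, cols) array of elevations.
    max_vertices: maximum number of vertices the device can render.
    """
    rows, cols = grid.shape
    total = rows * cols
    if total <= max_vertices:
        return grid
    # One stride applied to both axes reduces the count by ~stride**2.
    stride = int(np.ceil(np.sqrt(total / max_vertices)))
    return grid[::stride, ::stride]

dense = np.zeros((1000, 1000))           # 1 000 000 points in a lidar tile
coarse = decimate_to_budget(dense, 10_000)
```

Uniform striding is the crudest option; curvature-aware mesh simplification would preserve more detail for the same budget, at the cost of a preprocessing step.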
It is anticipated that future developments in hardware will improve the HoloLens' optical performance, support better data display and allow the inclusion of more of the ROSETTA-Ice data in future versions of Inside the Ice Shelf. HoloLens development is not the only way to improve the quality of Inside the Ice Shelf; the application can be developed for use on a more optically robust AR platform. Additionally, display jitter could be reduced and more data could be displayed if the application accessed data on a server through a Wi-Fi connection, rather than loading all the data into the HoloLens' memory.
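The server-based alternative described above replaces bulk in-memory loading with on-demand fetching of the tiles currently in view, keeping only a small working set on the headset. A minimal sketch of that pattern, using a bounded cache, is shown below; the fetch function, the line/segment identifiers and the cache size are all illustrative assumptions, not part of the existing application:

```python
from functools import lru_cache

def fetch_tile_from_server(line_id, segment):
    """Hypothetical loader: in a networked version of the application,
    this would issue an HTTP request to a data server; here it simply
    fabricates a placeholder payload."""
    return f"radargram:{line_id}:{segment}"

@lru_cache(maxsize=32)  # keep only recently viewed tiles in device memory
def get_tile(line_id, segment):
    return fetch_tile_from_server(line_id, segment)

tile = get_tile("T1060", 3)   # fetched once, then served from the cache
```

Because the cache is bounded, headset memory use stays constant no matter how much survey data the server holds, which is what would allow the full SIR and lidar datasets to become navigable.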
The authors have used Inside the Ice Shelf at multiple outreach events. In general, there is a learning curve for new HoloLens users as they acclimatise to selecting objects with their gaze and performing the two HoloLens gestures. Especially in outreach settings, where most users have not yet used the HoloLens and are not familiar with Antarctica's geography, a guide is crucial. These situations also rely on a strong Wi-Fi connection to run the Mixed Reality Capture interface.
Inside the Ice Shelf has been tested on groups of 5 to 10 people, with one person wearing the HoloLens. It is found that, even though the HoloLens maintains the user's awareness of the physical world and allows the user to communicate naturally with others, it can isolate the user from the group, especially if the user is still learning how to operate the HoloLens. Using the Mixed Reality Capture interface helps keep the HoloLens user connected to the group, but the user is still the only person able to interact with the experience. Inside the Ice Shelf could be developed for multiple HoloLenses in the future to create a more collaborative environment. Accessibility for HoloLens users (as well as users of other AR platforms) can also be increased using Microsoft's Spectator View (Microsoft, 2019b), for example.
While AR is still in its early stages of use in the geosciences, the work presented here highlights how one AR experience can be used to advance research through collaboration and be a useful communication tool for science education and outreach. Inside the Ice Shelf allows users to explore the Ross Ice Shelf in a new and intuitive way; its implementation on the HoloLens was developed to show the rich ROSETTA-Ice datasets in a 3D setting, intuitively communicate scale, foster a collaborative and interdisciplinary research environment, and maintain accessibility. The authors have described their processes and workflow to encourage others to create similar experiences with their geophysical data, and expect that more geoscientists will take advantage of these useful tools in their scientific workflows and educational endeavours as the field of AR grows.