Robust and practical measurement of volume transport parameters in solid photo-polymer materials for 3D printing

Volumetric light transport is a pervasive physical phenomenon, and therefore its accurate simulation is important for a broad array of disciplines. While suitable mathematical models for computing the transport are now available, obtaining the necessary material parameters needed to drive such simulations is a challenging task: direct measurements of these parameters from material samples are seldom possible. Building on the inverse scattering paradigm, we present a novel measurement approach which indirectly infers the transport parameters from extrinsic observations of multiple-scattered radiance. The novelty of the proposed approach lies in replacing structured illumination with a structured reflector bonded to the sample, and a robust fitting procedure that largely compensates for potential systematic errors in the calibration of the setup. We show the feasibility of our approach by validating simulations of complex 3D compositions of the measured materials against physical prints, using photo-polymer resins. As presented in this paper, our technique yields colorspace data suitable for accurate appearance reproduction in the area of 3D printing. Beyond that, and without fundamental changes to the basic measurement methodology, it could equally well be used to obtain spectral measurements that are useful for other application areas. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.


Introduction
Optically active materials, interchangeably referred to as participating media, are ubiquitous. Consequently, many diverse fields – medical imaging [1,2], appearance reproduction [3], lighting design [4], marine sciences [5] or remote sensing [6,7], to name just a few – require accurate knowledge of the intrinsic optical transport parameters of the involved media. Once known, these parameters can be used for numerical predictions obtained via Monte Carlo or finite-element simulation frameworks, which then facilitate informed decisions about the problem at hand.
Directly measuring these transport parameters is often impossible, though, due to the fact that they arise as a combination of the optical properties of individual particles and their distribution in the material (medium). Crucially, isolating individual particles for measurement purposes is typically infeasible, especially in solid and composite fluid media. We therefore need to rely on indirect measurements of the transport parameters, i.e., measurements that are based on the observations of many cumulative light-particle interactions.
Indirect measurements can then be performed for each transport parameter in isolation, or jointly. Instances of the former approach include:
• weakly scattering media can be characterized via the Beer-Lambert-Bouguer law by observing the uncollided radiance passing through them (e.g., via spectrophotometric means), and
• the medium's scattering albedo can be directly related to the bulk reflectance of a sufficiently large piece of material under the assumption of diffusive conditions (cf. Section 2).
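To make the first of these concrete, the Beer-Lambert-Bouguer relation and its inversion can be sketched as follows (a minimal illustration; the function names are ours, and µ and t follow the notation introduced in Section 3):

```python
import math

def uncollided_transmittance(mu: float, thickness_mm: float) -> float:
    """Beer-Lambert-Bouguer law: the fraction of radiance that traverses
    a slab of optical density mu [mm^-1] without any collision event."""
    return math.exp(-mu * thickness_mm)

def optical_density_from_transmittance(T: float, thickness_mm: float) -> float:
    """Inversion used for weakly scattering media: estimate mu from an
    uncollided-transmittance reading (e.g., from a spectrophotometer)."""
    return -math.log(T) / thickness_mm
```

For instance, a 0.5 mm slab with µ = 2 mm⁻¹ transmits e⁻¹ ≈ 37% of the uncollided radiance.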
However, recovering the scattering (phase) function of the medium is more difficult, even when assuming a simplified parametric form, such as the Henyey-Greenstein [8] or Fournier-Forand [5] models. Moreover, an isolated inference of the parameters can lead to an unknown systematic error in the simulation utilizing them – even if bounding the error of each partial measurement were possible, its cumulative impact on the resulting solution error would most likely be unpredictable. We therefore opt for a joint approach of obtaining the transport parameters via a single measurement procedure, also known as the inverse scattering approach [9]. The novelty of our method lies in two key aspects.
• First, instead of structuring illumination, we use a simple diffusely reflective surface placed underneath the measured sample to probe the material's scattering and absorption properties.
This allows us to perform the measurement under unstructured diffuse illumination, which relaxes calibration requirements and reduces the costs of the setup.
• Second, we propose a robust, dictionary-based fitting procedure, which, as we demonstrate, compensates for the inaccuracies of our acquisition.
The resulting setup represents a practicable, low-cost way to obtain colorspace transport parameters, and even naturally extends to multi-spectral measurements, where spectral narrowband illumination or sensing are available.

Method overview
The presented method was originally developed in the context of 3D appearance fabrication, specifically for accurate color and texture reproduction using photo-polymer 3D printers [10]. However, the methodology proposed here is in no way restricted to measuring the transport parameters of printer resins: it could easily be used for other solid materials that can be sliced and mounted in a similar manner as the resin samples (see Section 4). This makes the proposed technique potentially interesting for a wider audience than just 3D printing engineers. The printer which we characterized – a Stratasys J750 [14] – provides facilities for full-color multi-material 3D printing. It uses cyan, magenta, yellow, black and white (CMYKW) photopolymer materials as the core tonal set, with additional clear and support auxiliary materials. Given the inherent translucency of the print materials, accurate knowledge of their physical properties is needed to print surfaces with complex color textures. Specifically, high-fidelity appearance predictions are needed to drive the structural optimization that attempts to recover details lost due to lateral scattering (as shown in Fig. 1).
Fig. 1. Based on the knowledge of the J750 printer materials' optical properties obtained with the presented method, we have color-calibrated the printer and optimized for the sub-surface material distribution in our previous work [10]. Compared to the default results produced by the printer driver, this expands the color gamut and counteracts the visible lateral bleeding of details caused by sub-surface scattering. This is enabled by an accurate appearance prediction based on Monte Carlo path tracing [11–13]. (For reference, the vertical size of these prints is about 5 cm; the colored materials are limited to the top 1 mm of the prints, underlaid by 10 mm of the white material.)

Design principles. Our design objective was to develop a pragmatic way to optically characterize the photo-polymers used by current 3D printers. The primary intent is to provide a setup suitable for developers of new printing technologies and materials. To this end we followed these general design principles:
• Accessibility. Use the smallest possible number of restrictive assumptions about the measured materials. Ensure robustness in case of weak violation of the assumptions.
• Affordability. Reduce the number of specialized hardware components; use open-source or free software whenever possible; minimize operator time in exchange for machine time.
As the only hard assumption about the measured materials is the ability to slice them (details in Section 4), our setup can be used for other solid translucent materials outside of 3D printing.

Overview of exposition. After a brief review of existing measurement methods in Section 2, Section 3 will formalize the problem of obtaining optical parameters of 3D-print materials, before we introduce our novel measurement principle and inverse scattering approach in Section 4. Section 5 presents our physical setup, calibration and acquisition protocol, together with a detailed description of our dictionary-based inverse scattering implementation. An overview of the general workflow is provided in Fig. 2. We analyze the measurements of our printing materials in Section 6, and conclude in Section 7 with an open discussion of the remaining shortcomings of our setup and ideas for how to improve them in the future.

Related work
Given our focus on material characterization for use in physical reproduction, we begin our literature review with computational fabrication, followed by approaches used in other related fields.
Computational fabrication. Several approaches suitable for translucent polymeric media have been developed in this context, with the aim to provide pragmatic means to characterize the involved printing materials [15–17]. As such, their measurement approaches are typically adapted to the method-specific goals: Hašan et al. [15] and Dong et al. [16] both focus on reproducing spatially varying translucency effects, whereas Papas et al. [17] aim at the reproduction of arbitrary homogeneous materials by mixing silicone pigments.
Consequently, common to these methods is the reliance on specialized data (implicitly [16] or explicitly [15,17] represented scattering 'profiles') in conjunction with the reduced optical density and scattering albedo [18]. Both of these representations carry the assumptions of smooth target signals and diffusive transport conditions, which is problematic as sharp material transitions are crucial for our work (cf. Fig. 1). We also operate with strongly absorbing materials, which violate the conditions of diffusive transport [19,20] – indeed, some of these methods [16,17] need to specifically compensate for that.
Other contexts. For bulk media, it is possible to obtain the transport parameters using, for instance, the inverse adding-doubling method [21,22]. In such conditions, this approach relates the total reflectance and transmittance of the medium to its intrinsic properties. It is also feasible to perform an isolated inference of some of the transport parameters (such as the scattering albedo α), followed by generalizing the relationship in a convenient closed mathematical form [10,12,17]. These approaches often make the critical assumption of diffusive conditions [19] and are limited by the low degrees of freedom of the measured quantities, meaning they do not generalize well for strongly absorbing and spatially varying materials (cf. Correia et al. [4]).
Media in specific conditions, or with properties that are known a priori or can be derived from first principles, do not need to be characterized by fully general measurement methods. A good example is fluid media, where dilution can enable approximate isolation of single light-particle interactions [23]. This is generally infeasible for most solids, since dissolving a solid material might alter its optical properties in an unknown way.
Specialized approaches exist for media with known components at unknown distributions, such as clouds [6,7,24,25], biological tissues [1,26], or milk [27]. In many cases, including ours, such knowledge is not available, precluding a specialized solution of any such kind.
To our knowledge, the most complete method has been proposed by Gkioulekas et al. [3] (who also provide an excellent literature review). This inverse-scattering approach measures the full set of optical parameters, including an approximate shape of the phase function (i.e., not only its anisotropy g used by our model). While successfully used to characterize liquids, gels, and suspensions with high accuracy, it is unclear whether this method would perform equally well on solid materials. Furthermore, it comes with strict calibration requirements for both lighting and geometry. This, in combination with the high material costs of the setup, has motivated us to develop an alternative methodology with emphasis on simplicity and affordability. Nevertheless, we draw inspiration from the work of Gkioulekas et al. through the use of material dictionaries to solve the inverse scattering problem.

Problem statement
We formally define the measurement problem addressed in this paper as follows. The discrete space of the print materials shall be denoted as M ≡ {C, M, Y, K, W}. In order to predict the appearance of an object fabricated from these materials we need to obtain the intrinsic volume transport parameters for each m ∈ M defined in the continuous space V ≡ (µ, α, g, η), where
• µ ∈ [0, ∞) mm⁻¹ is the optical density (also known as the extinction or transport coefficient, i.e., the expected number of collision events per unit length, or inverse mean free path),
• α ∈ [0, 1] is the single-scattering albedo (i.e., the expected relative portion of light energy scattered, rather than absorbed, by an interaction with a material particle),
• g ∈ (−1, 1) is the scattering anisotropy (the first angular moment of the phase function, which we use to parameterize the simple Henyey-Greenstein [8] function), and finally
• η ∈ (1, ∞) is the dielectric refractive index (IOR) of the material.
For each material m ∈ M we need to acquire multiple sets of parameters v ∈ V – one for each spectral band or color channel. Given our primary interest in reflective colorspace characterization under white illumination, we acquire three parameter sets, for R, G and B, respectively; however, the workflow outlined in this paper would naturally extend to full spectral characterization if, in the absence of fluorescence, the broadband illumination was replaced with multi-spectral narrowband lighting, or if narrow bandpass filters were used with the camera. To simplify the ensuing joint measurement, one of the parameters – the refractive index η – was directly determined via optical ellipsometry. Using an Accurion Nanofilm EP3 ellipsometer, we obtained η values in the range 1.48–1.53 for all materials m ∈ M. For computational efficiency we use η ≡ 1.5 for simulating all our materials. This slight inaccuracy is implicitly compensated for by our fitting method detailed in Sections 4.2 and 5.6.
From this point onwards we are concerned with measuring the remaining three (material- and color-channel-specific) parameters: µ, α and g.
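For bookkeeping purposes, the parameter space V and its bounds can be captured in a small container like the following (a sketch; the class name and validation logic are ours, not part of any particular library):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransportParams:
    """One point v in the continuous space V = (mu, alpha, g, eta)."""
    mu: float         # optical density [mm^-1], in [0, inf)
    alpha: float      # single-scattering albedo, in [0, 1]
    g: float          # Henyey-Greenstein anisotropy, in (-1, 1)
    eta: float = 1.5  # refractive index; fixed to 1.5 as in Section 3

    def __post_init__(self):
        ok = (self.mu >= 0.0 and 0.0 <= self.alpha <= 1.0
              and -1.0 < self.g < 1.0 and self.eta > 1.0)
        if not ok:
            raise ValueError("parameters lie outside the space V")
```

Per material, three such records would be kept – one per RGB channel.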

Design and methodology
Any form of subsurface scattering measurement requires non-uniformities, either in the spatial distribution of material properties observed, or in the form of spatially modulated illumination. If both were uniform, scattering parameters would be underdetermined; for instance, there would be no way to determine the mean free path. Existing measurement principles either employ structured illumination (lasers or video projectors [1,3,28–30], as well as local sources [17,31,32]); or material heterogeneities with known geometry or known spectral signatures [26].
Our work falls into the category of structured illumination but innovates in how the light transport is modulated, leading to a particularly practical setup with very few calibration requirements.
At the core, our method is transmissive and observes an intensity step function that underwent scattering while traversing a thin homogeneous material slab, subsequently fitting model parameters to the observed (blurred) edge profile in an inverse scattering reconstruction. Our reconstruction requires a known material thickness and assumes a near-perfect, blur-free step function of incident illumination, but, crucially, relaxes the requirement of known contrast and brightness of that step function. Given the experimental challenge of creating a crisp, vignetting-free step illumination of known vergence, we further decided to rely on more easily obtainable diffuse (i.e., omnidirectional) incidence instead.
In principle, these conditions could be created by coating a diffuser with a very thin opaque blocker and directly attaching the material sample to the side with the blocker. In practice, however, applying a thin-enough blocker of the required geometric precision would be challenging outside specialized laboratory conditions. Instead, we capitalize on our method's resilience against (unknown) shifts in contrast and brightness and generate the diffuse incidence through reflective means, allowing light to enter the material slab through the same side that is being imaged, then traverse the material, before being diffusely reflected from a thin backing whose albedo follows a step function. Note that any absorption and diffusion of the light traveling through the homogeneous material before the diffuse bounce would only affect the brightness and contrast of the step edge. Most conveniently, such a diffuse backing can easily be produced by printing a black-white transition (for instance within a checkerboard) onto diffuse paper.
While additional care has to be taken to account for imperfections in surface roughness and index-of-refraction transitions (see below), this generally renders the setup practicable enough to allow for ad-hoc measurements with minimal experimental overhead.
In summary, our measurement setup consists of the following components (also sketched in Fig. 3):
• Material sample. A thin solid slab 3D-printed with one of the materials from M, with surface finish as smooth as possible. We use multiple sample thicknesses, as described later.
• Step-edge reflector. A near-Lambertian reflective surface that contains two regions (maximum achromatic dark and light colors) separated by a straight vertical step edge; placed under the sample.
• Camera. Placed about 1 m above the sample. Vertically aligned and roughly laterally centered with the observed step edge (in camera coordinates).
• Light source. Diffuse, neutral illuminant, placed symmetrically around the sample to provide a uniform field of illumination, which helps compensate for any non-Lambertian components in the reflector. This can be obtained from two area light sources positioned roughly along the bisector between the camera optical axis and the sample plane, to minimize direct reflection off the sample into the camera while still providing sufficiently strong illumination.
• Thin glass slip. Intended to provide a well defined (flat, clear, specular) material/air interface on top of the sample.
• Transparent gel. Intended for suppressing unwanted scattering and ill-defined internal reflections arising from any potential residual roughness on the sample surface. Applied thinly and evenly between the sample/reflector and the sample/glass-slip surfaces to reduce their mutual refractive index differences.
For each material in M, the output of this setup is a linear RGB image. From this image we crop an area of 10×10 mm with the reflective step edge running vertically through the center (see Fig. 4). Under the above assumption of a spatially uniform field of illumination, we then average it vertically to suppress noise and any residual surface variations. We normalize all RGB values in the resulting 1D vector of colors by mapping the darkest and brightest observation within each color channel to 0 and 1, respectively, so that all its values lie within C ≡ [0, 1]³. We denote the resulting averaged 1D vector as the edge profile P, with which our measurement back-end further operates (more in Section 4.2).
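The extraction just described – crop, vertical averaging, per-channel normalization – can be sketched as follows (assuming a linear-RGB array centered on the step edge and a pixel pitch known from calibration; `px_per_mm` and the function name are our own):

```python
import numpy as np

def edge_profile(img_rgb: np.ndarray, px_per_mm: float) -> np.ndarray:
    """Extract the normalized 1D edge profile P from a linear-RGB image.

    img_rgb: (H, W, 3) array with the reflector's step edge running
    vertically through the center of the image.
    """
    h, w, _ = img_rgb.shape
    half = int(round(5.0 * px_per_mm))            # half of a 10x10 mm region
    cy, cx = h // 2, w // 2
    crop = img_rgb[cy - half:cy + half, cx - half:cx + half, :]
    profile = crop.mean(axis=0)                   # vertical average -> (W', 3)
    lo = profile.min(axis=0, keepdims=True)       # per-channel normalization
    hi = profile.max(axis=0, keepdims=True)
    return (profile - lo) / (hi - lo)             # values in C = [0, 1]^3
```

With the calibrated footprint of 28.5 µm/px (Section 5.1), `px_per_mm` would be about 35.1.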

Virtual setup
The first step was to virtually replicate (that is, numerically simulate) this measurement setup, in order to verify our assumptions: mainly that changes in the transport parameters discernibly impact the resulting edge profiles. Here we used the open-source light transport simulation package Mitsuba by Jakob [33], established in the rendering community. We modeled the scene from simple planar elements, ensuring proper dimensions, orientation and alignment of the measurement 'stack', and paying special attention to the correct ordering and indexing of the interfaces between the optical elements.
To ensure robust and unbiased results, we use the volumetric path tracer of Mitsuba as the light transport solver (instead of faster but biased methods like photon mapping). This is the same algorithm which the later reproduction stages of our pipeline [10] use for the appearance prediction (cf. Fig. 1), which is desired to ensure consistency. We set up the virtual sensor to only render a single edge profile P that matches the location of the ones imaged within the physical setup.
The specific reflectance properties of the black-and-white regions of the step-edge reflector are obtained during the physical setup calibration (see Section 5.2). For the initial investigation we chose reflectances of 0.05 and 0.95, respectively. Additionally, we estimated the roughness β of the BRDFs of both the polished samples (β ≈ 0.05) and the reflector (β ≈ 0.15) by visually comparing to rendered templates with different values of β (using Mitsuba's physically plausible roughdielectric BRDF with the GGX micro-facet distribution [34]).
For the illuminant, we used a toroidal diffuse emitter. This configuration has (from the perspective of the measurement region) a geometry comparable to the illumination in our physical setup, but leads to a more efficient simulation (more camera paths successfully find the source, reducing the simulation noise by about an order of magnitude).
Discriminative power. The initial evaluation of the virtual setup has shown that for a major portion of the volume parameter space V the proposed design offers adequate discriminability, i.e., that the resulting profiles are clearly distinguishable for different points in the volume parameter space V to which we want to map the printing materials. In Fig. 5 we visualize this for two different sample points in V, both in terms of the profile shape and its gradient. Discriminability varies with material thickness, motivating our measurement of multiple sample thicknesses.

Fig. 5. Edge profiles simulated with our virtual setup (Section 4.1) for two different parameter sets ∈ V, each at two different sample thicknesses t ∈ {0.5, 1} mm. Each plot compares a reference profile (solid blue) with two profiles resulting from a small change (≈ twice the delta we use in the dictionary, cf. Section 5.3) along one of the three dimensions of V (dashed red and yellow). We also compare the respective profile gradient magnitudes, since these are accounted for by our fitting (Sections 4.2 and 5.6).
One exception to the above turned out to be materials with very high absorbance, where the discriminability was too low for obtaining reliable measurements. As we document in Section 6, this is a consequence of the chosen sample thicknesses; for this case we designed a simple workaround, likewise explained therein.

Dictionary-based measurement
The actual measurement, i.e., the derivation of optical parameters from the observations, employs a material dictionary [3]: it compares the acquired edge profile P_A of the reflector's step edge observed through the translucent sample to a precomputed dictionary of simulated edge profiles P_S rendered within this virtual setup. The parameter set that generated the closest (most similar) dictionary match is then assumed to best characterize the measured material.
How to define that similarity is a matter of designing a suitable metric, as discussed below. Note that we seek a good fit of the net optical parameters, while errors in reconstructing the physical parameters are acceptable as long as they do not affect the effective light transport.
Definition. We define the dictionary fitting as a minimization across the volume parameter space V, jointly over all the acquired sample thicknesses t ∈ T:

    ν̂ = argmin_{ν ∈ V, γ > 0} Σ_{t ∈ T} d(P_A,t, γ · P_S,t(ν)),    (1)

where ν ≡ (µ, α, g) and γ is a positive scaling coefficient. The function d is a distance metric in the edge profile space (where we consider both the intensities and gradients of the compared profiles); we define it as

    d(P, Q) = ‖P − Q‖ + λ · ‖∇P − ∇Q‖,    (2)

with λ being a scalar parameter. The core motivation for including the profile gradient in our metric is that the difference in the measured optical parameters is not always clearly projected into the intensities alone. As discussed in detail later (see Section 5.6), scattering tends to influence the profile 'sharpness' more than its intensity, which we quantify by the gradient estimates. Solving Eq. (1) for every material – and for each RGB channel – leads to the desired characterization of the full material space M. Further details pertaining to the implementation and robustness of this procedure, as well as the discussion of the regularization parameters γ and λ, are provided in Section 5.6. Possible future improvements (for instance pertaining to the refinement of the dictionary fit) are discussed in Section 7.
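The fitting can be sketched as a brute-force search over a precomputed dictionary (an illustration of the principle rather than our exact implementation: the brightness scaling γ is omitted for clarity, and the value of λ is a hypothetical default):

```python
import numpy as np

def profile_distance(p_a: np.ndarray, p_s: np.ndarray, lam: float = 0.5) -> float:
    """Distance d between two 1D edge profiles: an intensity term plus
    a lambda-weighted gradient term."""
    grad_a, grad_s = np.gradient(p_a), np.gradient(p_s)
    return float(np.linalg.norm(p_a - p_s) + lam * np.linalg.norm(grad_a - grad_s))

def fit_dictionary(acquired: dict, dictionary: dict, lam: float = 0.5):
    """Return the parameter triple nu = (mu, alpha, g) whose simulated
    profiles are jointly closest to the acquired ones over all thicknesses.

    acquired:   {thickness: 1D profile}
    dictionary: {nu: {thickness: 1D profile}}
    """
    def joint_cost(nu):
        return sum(profile_distance(acquired[t], dictionary[nu][t], lam)
                   for t in acquired)
    return min(dictionary, key=joint_cost)
```

The exhaustive search is cheap relative to generating the dictionary itself, and trivially guarantees the global minimum over the sampled grid.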

Implementation
Having our design and measurement methodology laid out, we proceed with describing its concrete implementation. This consists of the following steps:
• physical setup construction (Section 5.1),
• setup calibration (Section 5.2),
• generation of the virtual material dictionary (Section 5.3),
• physical acquisition (Section 5.4),
• data pre-processing and extraction (Section 5.5), and finally
• fitting-based measurement (Section 5.6).
The output of the procedure is the colorspace characterization of the material space M, i.e., a set of parameters (µ, α, g) for every material m ∈ M and each RGB channel.

Setup construction
In line with our design principles (Section 1.1), we have made an effort to build the physical setup from commonly available, inexpensive parts:
• Sample. Ten samples (five materials in M, each of thickness t ∈ {0.5, 1} mm) were 3D-printed on a Stratasys J750 machine, using the standard Vero Opaque materials family. We added a small delta (0.15 mm) to the reference thicknesses to compensate for the material lost by polishing, see Section 5.4.
• Step-edge reflector. A black-and-white checkerboard with 20 mm square size, printed with a laser printer on Toughprint waterproof paper. We opted for a pattern instead of a single step edge between two homogeneous regions to facilitate the setup calibration (especially to calculate the physical size of the pixel footprints, which amounted to 28.5 µm/px).
• Camera. A Canon EOS 700D body with an EF-S 18–135 mm lens fixed at maximum zoom, mounted on a heavy tripod; manually focused on the step-edge reflector (not the top of the sample), optical stabilization disabled; exposed in manual mode at 1/20 s and f/11, triggered remotely via the standard Canon utility software.
• Light. Two simple studio lamps with 55 W, 5500 K-equivalent fluorescent bulbs, covered with stock transmissive diffusers. The lights were positioned symmetrically above and below the sample (from the camera's viewpoint), at a height that ensures an approximately 0°/45° acquisition geometry. A flat-field capture validated that the illumination is spatially sufficiently uniform.
• Transparent gel. Clear Anagel ultrasound transmission gel with η = 1.33. The advantage of this type of gel is its low air-bubble content: such bubbles can pollute the measurement, and are hard to notice with the naked eye. Since the gel has a rather low refractive index, a possible alternative could be an optical glue that is index-matched to the samples. However, that would permanently bind the entire measurement 'stack' (including the step-edge reflector) together, which in turn would complicate the acquisition. In addition, perfect index matching is impossible anyway, due to the slight mismatch between the glass slips and the materials.
We built the setup in a neutrally colored room and ensured that no sources of vibration were present. Stray light did not require managing, given the assumption of diffuse illumination.

Setup calibration
We extracted and developed all acquired raw images using the ImageJ [35,36] and DCRaw [37] open-source tools. To verify a linear, neutral camera response, we used the X-Rite Passport color checker, specifically its six neutral patches with provided reference reflectance values. Without loss of generality, we forgo a fully color-managed processing in favor of a simplified three-channel model that treats channels independently and assumes fixed broadband illumination. Second, to verify our assumption of spatial illumination uniformity over the measurement region of interest, we captured an image of a white office paper placed in the imaging plane. As Fig. 6 demonstrates, while the captured intensity varies significantly across the image plane (due to the uneven field of illumination and the camera lens's vignetting), the central measurement region is within a 1% average deviation. We therefore made sure that all calibration readings and the measurements themselves are taken in this region.
These steps allowed us to determine our step-edge reflector's linear RGB values as (0.985, 0.972, 0.971) (white squares) and (0.024, 0.023, 0.026) (black squares), averaged over an area of roughly 10×10 mm. These values are then used in the virtual setup for dictionary generation.

Material dictionary generation
With our virtual setup as a back-end (Section 4.1), we scripted the dictionary generation using the Python interface of Mitsuba. After multiple refinements of the dictionary resolution (where the main criterion was to stabilize the overall residual error of the fits, according to Eq. (1)), we settled on a static sampling of the parameter space V. Generating the resulting 34.2k-record dictionary (each record being 200 pixels wide) took approximately two days on a 500-core CPU cluster. We empirically determined that 250k samples per pixel yield an amount of noise that is comparable to our physical acquisition. To give an idea of what the contents of the dictionary are, a visualization of a small subset is shown in Fig. 7.
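The generation script reduces to the following skeleton (a sketch: `render_profile` stands in for the actual Mitsuba rendering call, and the grid values passed in would be the chosen static sampling of V, which we do not reproduce here):

```python
import itertools

def generate_dictionary(render_profile, mu_grid, alpha_grid, g_grid, thicknesses):
    """Enumerate a static sampling of V and render one simulated edge
    profile per (nu, thickness) pair, with nu = (mu, alpha, g)."""
    dictionary = {}
    for mu, alpha, g in itertools.product(mu_grid, alpha_grid, g_grid):
        dictionary[(mu, alpha, g)] = {
            t: render_profile(mu=mu, alpha=alpha, g=g, thickness=t)
            for t in thicknesses
        }
    return dictionary
```

Since every (ν, t) record is independent, the loop is embarrassingly parallel, which is what makes the cluster-scale generation above straightforward.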

Physical acquisition
Sample preparation. After 3D-printing and cleaning the set of 10 samples, we proceeded to polish them. This is necessary since the printer creates small but visible structures on the surface of a print-out: these are unavoidable artifacts of the printing process. We manually polished the samples in a two-step procedure: first with grade-1000 sandpaper until reaching an even matte finish, then with an electric drill using a soft bit coated with plastic-compatible polishing paste, until we achieved a glossy finish. Since the heat buildup from the polishing can cause small distortions of the samples, we afterwards flattened them by briefly placing them in boiling water, and then pressing them for about 30 s between two flat, heavy objects. At this temperature these particular 3D printing materials become pliable but do not lose structural integrity or change optical properties.
Setup initialization. We switched on both the lights and the camera at least 30 minutes prior to the acquisition, to provide a warm-up period for the components. We then verified that the camera is aligned with the imaging plane in all three dimensions, and that the step-edge reflector is in focus. This was followed by taking several mock-up images and then an image of the white patch of the X-Rite color checker, to obtain the scene white point for the current run (since, as we found out, it can slightly change between consecutive runs, likely due to the low grade of our illuminants).
Capture protocol. For capturing each of the samples we followed the same procedure.
1. Wipe the step-edge reflector clean; apply a small amount of the clear gel onto its surface.
2. Using latex gloves, press the sample down firmly until only a minimal amount of gel remains underneath.
3. Apply a small amount of the gel on top of the sample.

4. Press a blank glass slide flat into the gel, again pushing as much as possible out at the sides. Take care not to contaminate the top surface with the gel.
5. Visually inspect the sample for any air bubbles remaining in the gel. If found (in about 5–10% of cases for us), clean up and repeat the entire procedure for that sample.
6. Capture two images of the sample, slightly shifting the entire stack (but not the reflector) in between to decrease the likelihood of the data being contaminated by remaining impurities.

Data extraction
After developing the acquired raw images with ImageJ and DCRaw, all further processing is done using Matlab scripts. We mark the step-edge location manually in each image and, if necessary, rectify the images so that the edge appears vertical. After that, the measurement region is extracted and averaged as described in Section 4 to yield a single 1D edge profile P per material sample and color channel. This procedure could alternatively be done via the Radon transform [38], which might be desirable if automated processing of larger amounts of data were needed.
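For larger batches, the manual rectification step could be automated; one simple alternative to a full Radon transform is to locate the edge per image row and fit a line through the detected positions (a sketch, assuming a single high-contrast, near-vertical edge in a single-channel image):

```python
import numpy as np

def edge_tilt_degrees(img: np.ndarray) -> float:
    """Estimate the tilt of a near-vertical step edge: find the column of
    maximum horizontal gradient in each row, then fit a line to those
    positions. Positive tilt means the edge leans right going down."""
    grad = np.abs(np.diff(img, axis=1))        # horizontal gradient magnitude
    cols = grad.argmax(axis=1).astype(float)   # edge column per row
    rows = np.arange(img.shape[0], dtype=float)
    slope = np.polyfit(rows, cols, 1)[0]       # d(column)/d(row)
    return float(np.degrees(np.arctan(slope)))
```

The returned angle could then drive an image rotation before the profile extraction of Section 4.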

Dictionary fitting
Here we expand on Section 4.2, providing further details and considerations about the dictionary-based measurement.
Because of the relative simplicity of our measurement setup, inherent acquisition inaccuracies, together with potential modeling inconsistencies between the physical and virtual setups, could lead to systematic errors in the measurement. To minimize these, we sought a fitting procedure with a high degree of built-in robustness against such issues. We have therefore developed three provisions in this direction: joint fitting over multiple sample thicknesses, regularization of the fitting, and regularization of the acquired data.
Joint fitting. The advantage of our setup is that it directly allows us to build a customizable degree of redundancy into the acquired data, by measuring multiple sample thicknesses for each given material. While this does not safeguard against 'global' systematic errors present in either dataset (acquired or simulated), it increases robustness against 'local' errors in each sample (e.g., surface scratches), as well as against a possible lack of discriminability in some regions of the material parameter space (as discussed further in Section 6). We therefore seek a joint fit across all acquired thicknesses t ∈ T (Eq. (1)). For the sake of efficiency we choose to work with two different thicknesses, 0.5 mm and 1 mm, though using more would very likely increase the fitting precision even further, if necessary.
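In code, the joint fit amounts to an exhaustive search over the dictionary, summing the profile distance over all measured thicknesses. The Python sketch below assumes hypothetical data structures (dictionaries keyed by parameter tuples and thicknesses) and omits the calibration factor γ of Eq. (1):

```python
def joint_fit(acquired, dictionary, distance):
    """Exhaustive joint fit across sample thicknesses (a simplified
    sketch of Eq. (1) without the calibration factor gamma).

    acquired:   dict thickness -> measured 1D edge profile
    dictionary: dict parameter-tuple -> {thickness: simulated profile}
    distance:   profile distance metric d (Eq. (2))
    Returns the parameter tuple minimizing the summed distance.
    """
    def total_error(params):
        sims = dictionary[params]
        # Sum the per-thickness distances to obtain the joint error.
        return sum(distance(acquired[t], sims[t]) for t in acquired)
    return min(dictionary, key=total_error)
```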
The choice of the sample thicknesses should in general follow a simple guideline: the samples should be thick enough that the transport has a significant multiple-scattering component, but thin enough to ensure a useful signal-to-noise ratio. That is, the thickness should roughly be on the order of the mean free path 1/µ, or a small multiple thereof.
Fitting regularization. The design of our profile distance metric d was motivated by the fact that we want to find a fit that is most similar in terms of:
• overall intensity of the profile, especially its 'tails' (predominantly determined by absorption, i.e., correlated with α);
• distortion induced to the step edge (mostly linked to scattering, i.e., correlated with µ and g).
The two terms of Eq. (2) directly correspond to these two criteria. Since both P_A and P_S are inherently noisy, we robustly estimate their gradient magnitudes |∇P_A| and |∇P_S| using the method of Luo et al. [39], based on discrete wavelet decomposition. The regularization parameter λ controls the relative importance of the above criteria; we used the empirically determined value of λ = 20 in all our measurements.
Acquisition regularization. While the previous two measures treat possible inaccuracies in the data or acquisition, the last one addresses any remaining inconsistencies in the setup geometry and lighting. This complex issue is simplified by the fact that the measurement takes place in a very small region; hence we assume this potential systematic error in the data to be consistent, and model it by a single multiplicative factor γ (see Eq. (1)).
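The two-term metric could be sketched as follows. This is a minimal Python illustration; the choice of norms, the averaging, and the plain finite-difference gradient are our assumptions, standing in for the robust wavelet-based estimator of Luo et al. [39]:

```python
import numpy as np

def profile_distance(p_a, p_s, lam=20.0):
    """Sketch of the two-term profile distance d (Eq. (2)): an
    intensity term plus a lambda-weighted gradient-magnitude term.

    p_a, p_s: acquired and simulated 1D edge profiles.
    lam:      regularization weight (empirically 20 in the paper).
    """
    # Overall intensity difference (correlated with absorption).
    intensity_term = np.mean((p_a - p_s) ** 2)
    # Difference of gradient magnitudes (correlated with scattering);
    # np.gradient is a simple stand-in for the wavelet-based estimator.
    grad_term = np.mean(
        (np.abs(np.gradient(p_a)) - np.abs(np.gradient(p_s))) ** 2)
    return intensity_term + lam * grad_term
```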
To find the value of γ, we seek to obtain the lowest possible aggregate error when characterizing the entire material space M. We therefore perform a meta-optimization on top of Eq. (1), across all materials m and RGB channels c (Eq. (3)). The division by γ is necessary to normalize the resulting fitting error, which would otherwise tend to zero for γ → 0, since the dictionary in fact does contain profiles with virtually zero intensity (cf. Fig. 7). Given the prior expectation of γ ≈ 1, we quickly determined that γ ∈ [0.8, 1]. A linear search in this interval with a step of 10^-3 then yielded γ = 0.913, and took several hours on a desktop PC. (Fig. 8 plots this search along the γ axis: the error drops quickly between γ = 1 and the global minimum at γ = 0.913, and then grows steadily again as γ tends to zero.) Performing the fitting (Eq. (1)) with this value of γ then resulted in a set of parameters that, in our further experiments [10], yielded predictions more consistent with the observed printouts' appearance.
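The meta-optimization can be sketched as a plain linear search. The function below is a simplified stand-in for Eq. (3); the data structures and the distance function are assumptions, and the per-material fits are reduced to a nearest-dictionary-profile query:

```python
import numpy as np

def calibrate_gamma(acquired_sets, dictionary, distance,
                    lo=0.8, hi=1.0, step=1e-3):
    """Linear search for the multiplicative calibration factor gamma:
    for each candidate, fit every acquired profile against the
    gamma-scaled dictionary, normalize the error by gamma (to avoid
    the degenerate gamma -> 0 solution), and keep the minimizer.

    acquired_sets: list of measured profiles (one per material/channel)
    dictionary:    list of simulated dictionary profiles
    """
    best_gamma, best_err = None, np.inf
    for gamma in np.arange(lo, hi + step / 2, step):
        err = sum(min(distance(p, gamma * q) for q in dictionary) / gamma
                  for p in acquired_sets)
        if err < best_err:
            best_gamma, best_err = gamma, err
    return best_gamma
```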
Discussion. The measurement itself, i.e., the solution to Eq. (1), is efficiently implemented as an exhaustive search in the material dictionary (Section 5.3), and takes only a few minutes on a desktop PC. For illustration, we show the resulting best fit for the Vero PureWhite material in Fig. 9; the complete results are provided and analyzed in Section 6.
Fig. 9. Fitting results for the Vero PureWhite material. The plots show the best-fit profiles, as well as the corresponding gradient-magnitude curves and adjacent dictionary records. Our fitting is robust to residual aberrations in the acquired data (e.g., see the blue channel at t = 0.5 mm), yielding a reasonable match even in such cases.

Results
In this section, we present the measured optical parameters for the Stratasys Vero Opaque family, which comprises the Cyan (C), Magenta (M), Yellow (Y), BlackPlus (K) and PureWhite (W) materials. The resulting parameters are listed in Table 1. In particular, note that we constrained our fitting to a single scattering-anisotropy value, specifically g = 0.4. We motivate this decision in Section 6.1. For now, this particular value of g has been chosen by systematically performing constrained fits at each available g, and then choosing the value that minimizes the global fitting error, i.e., the arg min over g of the fitting error aggregated across all materials and channels. Similarly to searching for the regularization parameter γ (Eq. (3)), this is efficiently done by exhaustively searching the dictionary, and takes only a few minutes due to the relatively coarse sampling of the anisotropy domain. For a detailed demonstration of the fitting, we document the results for the white material in Fig. 9. Here we show, for every RGB channel, the resulting joint fit (obtained with Eq. (1)) of the Vero PureWhite material across the two sample thicknesses we operate with. Each plot also shows the profile gradient-magnitude curves, and for illustration also the neighboring dictionary records adjacent to the best-fitting one. As one can see, our procedure obtains a good match not only for the profile itself, but also for its gradient. We obtain similarly good fits for the remaining materials (see the fitting errors in Table 1), with one exception.
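The constrained-g selection is again a small arg-min. A sketch, with a hypothetical per-material/channel error function standing in for the constrained fit of Eq. (1):

```python
def select_anisotropy(g_values, materials, channels, fit_error):
    """Pick the single anisotropy value used for all constrained fits:
    the g minimizing the fitting error aggregated over every material
    and color channel.

    fit_error: assumed function (material, channel, g) -> error of the
               best constrained fit at that g.
    """
    def total(g):
        # Aggregate the constrained-fit error across the material space.
        return sum(fit_error(m, c, g) for m in materials for c in channels)
    return min(g_values, key=total)
```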
Issues with strongly absorbing materials. The main limitation of our current configuration turned out to be the measurement of materials with high absorbance, as the resulting edge profiles have very low discriminability in that part of V (as can be seen in Fig. 7). This turned out to be an issue for the R channel of Cyan, the G channel of Magenta, and the B channel of the Yellow material (marked bold in Table 1). We resolved this issue by independently printing a color chart with patches of all possible equal combinations of the CMYKW materials. We then ran a 6D brute-force search in the problematic channels (while still keeping the constraint g = 0.4) to identify their values, by seeking the best fit of our color predictions to that printed color chart, using our tonal mapping as the prediction model. This tonal mapping is itself a part of our reproduction pipeline [10], and since it is computationally cheap due to its analytic nature, the brute-force search took only a few hours on a single PC.
Note, however, that this is not a fundamental limitation of our design: the situation occurs because at the chosen sample thicknesses (0.5 mm and 1 mm) the problematic channels simply absorb too much of the transmitted light. The natural solution is therefore to include an additional sample set at a different thickness, which our fitting formulation (Section 4.2) supports without any modifications. Using the fits from Table 1, we estimate that a suitable thickness for this additional sample set would be around t = 0.1 mm, at which the setup would still have sufficient discriminability given the noise levels we have observed. Since, however, our system does not at the moment allow us to reliably fabricate and polish samples that thin, we have instead opted for the independent reconstruction via the CMYKW chart described in the previous paragraph.

Validation
We present three different ways to validate our data: a thin-material setup, a structured-material setup, and cross-validation against similarity theory [18,40].
Thin-material validation. As the first, most direct step, we validated the measured data in the same configuration in which the fitting operates, i.e., the setting already presented in Fig. 4. As can be seen, the simulated results render a high-fidelity impression of their acquired counterparts. Note that this is not a trivial consequence of the fitting, since the fitting operates on averaged 1D profiles and has to jointly fulfill the constraints of the available sample thicknesses.
Structured validation. We use the measured data to compute predictions of the optimized printouts produced by our method. In Fig. 1 we show several examples which demonstrate the accuracy of predicting the appearance of the resulting prints. The parameters, when used in our reproduction pipeline [10], proved to be sufficiently accurate to drive a gradient-descent optimizer towards compositions that clearly challenge the current state of the art in terms of color fidelity, and surpass it regarding the amount of reproduced detail.
Theoretical validation. Similarity theory (which originated in general particle transport, was brought to medical imaging by Wyman et al. [18], and has recently gained attention in computer graphics [40]) states that under certain conditions the volume parameter space will contain equivalence classes which can lead to equivalent (or "similar") transport solutions. While these conditions are not guaranteed to apply in our case, we can still attempt to evaluate our measurements within the scope of this framework.
According to the simplest (first-order) similarity relation, a medium with scattering coefficient µ_s and absorption coefficient µ_a (where µ = µ_s + µ_a and α = µ_s/µ) will lead to a similar transport solution as some other medium with µ′_s and µ′_a, if µ′_a = µ_a and (1 − g′) µ′_s = (1 − g) µ_s. Expressing the latter relationship for µ′_s as µ′_s = µ_s (1 − g)/(1 − g′) therefore gives us a way to derive the parameters of a different, similar material, given a target g′. This is demonstrated in Fig. 10 for the Vero PureWhite material (the remaining materials behave in a like fashion, with the exception of the three problematic cases explained in Section 6). For every fit we marked its location in the volume parameter space by a solid green crosshair (overlaid on the distance-metric maps). Then, according to Eqs. (5) and (7), we computed the parameters of the similar media for all other anisotropy values (i.e., representing g′ in Eq. (7)). The resulting 'extrapolated' parameterizations (marked with red dashed crosshairs) tend to be well aligned with the local minima of our distance metric (green dashed crosshairs), across the entire anisotropy sub-space. We conclude that this experiment provides a theoretical validation for our data, as it independently justifies the behavior of our fitting metric.
Fig. 10. Maps of the profile distance metric (Eq. (2)) for the entire optical parameter space V for the Vero PureWhite material: each sub-plot corresponds to a single value of g, with the co-albedo ᾱ = 1 − α varying along the x-axis and µ along the y-axis. The green crosshairs show the positions of the fits for different anisotropy g values: the solid one pinpoints the best overall fit, and the dashed ones the best local fits for the respective g. The red dashed lines then show the predictions according to the first-order similarity theory; please refer to Section 6.1 for details.
This property has the additional practical benefit of enabling us to constrain the fitting process to a specific anisotropy g (discussed above).
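Under the first-order relation, deriving a similar medium for a target g′ is a one-line computation: the absorption coefficient is preserved and the reduced scattering coefficient (1 − g) µ_s is invariant. A sketch (the function name is ours):

```python
def similar_scattering(mu_s, g, g_prime, mu_a):
    """First-order similarity: mu_a' = mu_a, and the reduced scattering
    coefficient (1 - g) * mu_s is preserved, so
    mu_s' = mu_s * (1 - g) / (1 - g_prime).

    Returns (mu_a', mu_s') for the similar medium with anisotropy g_prime.
    """
    return mu_a, mu_s * (1.0 - g) / (1.0 - g_prime)
```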
As mentioned, we found that the best such constrained fit is obtained for g = 0.4, which is reflected in the measured data in Table 1. We use this property to simplify the construction of an analytical tonal mapping, which is one of the key components of our appearance reproduction pipeline [10].
Robustness. Finally, to supplement the previous analysis of the acquired optical parameters, we now focus on the setup itself. Specifically, we need to understand the robustness of the setup with regard to the calibration of its components. As the optical properties of most of the components are outside our control, we opted for an experiment using our virtual setup (Section 4.1).
As the first step, we identified the most salient non-material parameters of our setup. As we rely on unstructured illumination, these are primarily linked to the acquisition region: the thickness of the optical gel and its refractive index, the (total) albedo of the white and black parts of the reflector, the roughness of the reflector and of the material sample, and finally the sample thickness. We then generated a second dictionary, by densely sampling the µ_s and µ_a dimensions (fixing g at 0.4) at four different material-sample thicknesses: 0.1 mm, 0.2 mm, 0.5 mm, and 1 mm. For each of these records, we computed not only the resulting material profile, but also its positive and negative differential profiles along each of the seven salient setup parameters, resulting in approximately 30k profile records. We then computed the distance of each of these differential profiles with respect to the base profile, giving us a measure of the errors induced by perturbations (i.e., miscalibration) of the different setup components.
Fig. 11. Differential error analysis when perturbing seven selected salient dimensions (rows) of our measurement setup, evaluated densely for different combinations of µ_a and µ_s (in mm^-1) and at 4 different sample thicknesses (columns). The deltas for each of the parameters are listed in the respective labels. The errors are computed by the profile distance metric (Eq. (2)), and normalized by the parameter deltas.
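The differential analysis can be sketched as a finite-difference loop over the salient parameters; the simulator interface and data structures below are assumptions, not the paper's actual implementation:

```python
import numpy as np

def perturbation_errors(simulate, base_params, deltas, distance):
    """Finite-difference sensitivity sketch: perturb each salient setup
    parameter by +/- its delta, re-simulate the edge profile, and report
    the profile distance to the base profile normalized by the delta.

    simulate:    assumed function params-dict -> 1D edge profile
    base_params: nominal setup parameters
    deltas:      dict parameter-name -> perturbation size
    """
    base = simulate(base_params)
    errors = {}
    for name, delta in deltas.items():
        for sign in (+1, -1):
            # Perturb one parameter at a time, keeping the others fixed.
            p = dict(base_params)
            p[name] += sign * delta
            errors[(name, sign)] = distance(base, simulate(p)) / delta
    return errors
```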
Rather than expressing the error in reconstructed material parameters, this profile distance is the more relevant metric, as it is the domain our appearance-reconstruction method targets. Figure 11 presents the resulting error analysis, which yields the following outstanding findings:
• The albedo of the reflector's white and black patches has the biggest impact on the accuracy of the measurement, and thus warrants careful calibration.
• Conversely, reflector roughness has negligible impact, confirming that angular diffusion can robustly be achieved at the required precision.
• The IOR and thickness of the gel do not distort the measurement significantly. This is fortunate, as these parameters are difficult to control precisely.
• The material sample, unsurprisingly, has significant impact on the measurement error. Good and consistent polishing is advisable to minimize the effect of roughness. The sample thickness needs to be carefully measured, especially for thinner samples; this is not difficult to do, although care has to be taken not to distort the sample in the polishing process.

Conclusion
This article describes a practicable approach for measuring the optical transport parameters of translucent printing materials, based on the inverse scattering paradigm. The data obtained with our method are accurate enough for visual reproduction applications geared towards the capture of intricate appearance details, as demonstrated in our parallel work [10] and further validated by Sumin et al. [41] for more general object geometries. In general, we believe our approach provides a viable alternative to current measurement methods for volumetric materials in a wider context, as it achieves visually plausible results while keeping the overall design accessible without compromising its reliability. This is achieved through the combination of a simple, easy-to-calibrate measurement setup and a robust fitting procedure, together ensuring that a reasonable set of parameters is found even for imperfect data.
Compared to configurations that combine both reflective and transmissive measurements (e.g., Gkioulekas et al. [3]), our setup does not provide the full phase function. Yet, as demonstrated, our optical characterization is complete enough to support predictive rendering for highly heterogeneous configurations of materials used in 3D printing.
The main limitation turned out to be strongly absorbing materials, which we resolved by the independent ad-hoc fit explained in Section 6. Fabrication constraints permitting, however, this could equally be addressed by using thinner material samples.
Possible improvements. In order to increase the overall precision of the acquisition beyond what has been achieved, several refinements could be considered:
• more consistent physical pre-processing of the samples (to minimize the global error, cf. Section 5.6, and to allow the preparation and inclusion of thinner samples),
• a better calibration target (Spectralon instead of a standard color checker),
• a better index-matched optical binding medium,
• a fully color-managed processing pipeline, or ultimately,
• dense spectral measurement for full spectral characterization.
Aside from these, other, qualitative improvements remain to be examined. For instance, in spite of the density of our dictionary, we could still benefit from a finer sampling. As this is costly, an alternative could be to interpolate between the several closest fits instead of selecting only the single closest one. A gradient-descent-type search (akin to Gkioulekas et al. [3]) in the profile space could also provide a solution with greater adaptivity. On the other hand, the dictionary approach provides indisputable advantages as well, mainly the efficiency of the global queries essential for our regularization (Eq. (3)), among others. We therefore envision that a hybrid approach combining the strengths of both the dictionary and differential approaches would be superior to either variant alone.