X-ray-to-visible light-field detection through pixelated colour conversion

Light-field detection measures both the intensity of light rays and their precise direction in free space. However, current light-field detection techniques either require complex microlens arrays or are limited to the ultraviolet–visible light wavelength ranges1–4. Here we present a robust, scalable method based on lithographically patterned perovskite nanocrystal arrays that can be used to determine radiation vectors from X-rays to visible light (0.002–550 nm). With these multicolour nanocrystal arrays, light rays from specific directions can be converted into pixelated colour outputs with an angular resolution of 0.0018°. We find that three-dimensional light-field detection and spatial positioning of light sources are possible by modifying nanocrystal arrays with specific orientations. We also demonstrate three-dimensional object imaging and visible light and X-ray phase-contrast imaging by combining pixelated nanocrystal arrays with a colour charge-coupled device. The ability to detect light direction beyond optical wavelengths through colour-contrast encoding could enable new applications, for example, in three-dimensional phase-contrast imaging, robotics, virtual reality, tomographic biological imaging and satellite autonomous navigation.


S2. The positioning principle and error analysis
The photoluminescence part of the azimuth detector consists of three sets of CsPbX3 nanocrystals, which emit red, green and blue light. Because the absorption of light or radiation in each part changes with the incident direction, there is a mapping between the colour of the luminescence and the azimuth angle of the excitation light. Each azimuth detector determines the angle θ of the incident beam with respect to the reference plane, so three such azimuth detectors can be arranged to locate the spatial position of the excitation source (Supplementary Fig. 4). In the three-dimensional Cartesian coordinate system, detector A and detector B are perpendicular to the XOY plane at coordinates (b, 0, 0) and (0, 0, 0), respectively, and cylindrical detector C is arranged parallel to the XOY plane along the Y axis. Taking the X axis as the reference direction, the projection of the light or radiation source S onto the XOY plane is S′; the angle between the line connecting S′ and detector A and the reference direction is θ1, and the angle between the line connecting S′ and detector B and the reference direction is θ2. The angle between the line connecting S and detector C and the XOY plane is θ3. θ1, θ2 and θ3 are determined by the luminescence colours of azimuth detectors A, B and C, respectively. The spatial position (x, y, z) of the source S can therefore be solved from

$$x=\frac{b\tan\theta_1}{\tan\theta_1-\tan\theta_2},\qquad y=\frac{b\tan\theta_1\tan\theta_2}{\tan\theta_1-\tan\theta_2},\qquad z=\sqrt{x^2+y^2}\,\tan\theta_3. \tag{1}$$

The positioning errors dx, dy and dz of the source S depend on the angular detection error dθ of each azimuth detector, the baseline b and the position coordinates x, y and z of the source. Propagating a common angular error dθ through equation (1) gives

$$\mathrm{d}x=\frac{b\left(\left|\tan\theta_2\right|\sec^2\theta_1+\left|\tan\theta_1\right|\sec^2\theta_2\right)}{(\tan\theta_1-\tan\theta_2)^2}\,\mathrm{d}\theta, \tag{2}$$

$$\mathrm{d}y=\frac{b\left(\tan^2\theta_2\sec^2\theta_1+\tan^2\theta_1\sec^2\theta_2\right)}{(\tan\theta_1-\tan\theta_2)^2}\,\mathrm{d}\theta, \tag{3}$$

$$\mathrm{d}z=\left(\left|\frac{\partial z}{\partial\theta_1}\right|+\left|\frac{\partial z}{\partial\theta_2}\right|\right)\mathrm{d}\theta+\sqrt{x^2+y^2}\,\sec^2\theta_3\,\mathrm{d}\theta. \tag{4}$$

Theoretical analysis shows that dx, dy and dz are all positively correlated with dθ; dx is positively correlated with b, whereas dy and dz are negatively correlated with b. The positioning errors are also closely related to the position of S (Supplementary Fig. 5). As a proof of concept, we fabricated three azimuth detectors arranged according to the schematic diagram in Supplementary Fig. 4b and achieved 3D spatial localization of the X-ray source with a localization accuracy of approximately 0.5% (Supplementary Fig. 6).
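As a hedged illustration of this scheme (our sketch, not the authors' code), the following Python snippet solves equation (1) for the source position, taking detector C near the origin, and propagates a common angular error dθ through the solution by central differences:

    import numpy as np

    def locate_source(theta1, theta2, theta3, b):
        """Solve Eq. (1): source position from the three azimuth readings (radians)."""
        t1, t2 = np.tan(theta1), np.tan(theta2)
        x = b * t1 / (t1 - t2)
        y = b * t1 * t2 / (t1 - t2)
        z = np.hypot(x, y) * np.tan(theta3)   # detector C assumed at the origin
        return np.array([x, y, z])

    def position_error(theta1, theta2, theta3, b, dtheta=np.radians(0.01)):
        """Numerical error propagation: sum of |dp/dtheta_i| * dtheta over detectors."""
        p0 = locate_source(theta1, theta2, theta3, b)
        err = np.zeros(3)
        for i in range(3):
            args = [theta1, theta2, theta3]
            args[i] += dtheta
            hi = locate_source(*args, b)
            args[i] -= 2 * dtheta
            lo = locate_source(*args, b)
            err += np.abs(hi - lo) / 2        # |dp/dtheta_i| * dtheta, per detector
        return p0, err

    # Example: angles and baseline b are illustrative values only.
    pos, err = position_error(np.radians(60), np.radians(40), np.radians(30), b=100.0)
    print("position:", pos, "worst-case error:", err)

The numerically propagated errors reproduce the trends stated above and grow rapidly as θ1 approaches θ2, where the triangulation geometry degenerates.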

S3. Principle of 3D light-direction detection
We first define azimuth as the counterclockwise angle measured from the positive direction of the x-axis in the x-y plane, and elevation as the angle formed with the x-y plane (Supplementary Fig. 7a). The photoluminescence part of the azimuth detector consists of three sets of CsPbX3 nanocrystals that emit red, green and blue light. Since the absorption of light or radiation in each part changes with the incident direction of the light, there is a mapping between the colour of the luminescence and the azimuth angle of the excitation light.
In Supplementary Fig. 7a, the colour change caused by a change in the elevation angle of the light is not obvious (its effect on the colour output can be eliminated during calibration), whereas a change in the azimuth angle causes a very large colour change. We therefore give an approximate solution for the colour function of the light output from a single detector when the incident ray has an elevation angle of 0° but different azimuth angles. In this case, it is more convenient to analyse the colour as a function of the azimuth α using the top view of the detector (Supplementary Fig. 7b). The white circle with radius r in the centre represents the transparent material used to transmit light to the bottom of the detector. When light is incident from the direction shown in the figure, the material closer to the incident direction emits more strongly because of the exponential decay of the excitation light. As long as the power of the measured light is not too low, the colour response does not depend on the light power, because the intensity does not affect the chromaticity of the emitted light.
For simplicity, we assume r = 0 and assume that the luminescence of the material near the incident direction is uniform, while the luminescence of the material far from the incident direction is negligible. This approximation affects the exact chromaticity response function but still captures the large colour change caused by changes in the azimuth angle α. Under this approximation, the output light spectrum of the nanocrystal sensor can be expressed as

$$S(\lambda)=k_R(\alpha)\,S_R(\lambda)+k_G(\alpha)\,S_G(\lambda)+k_B(\alpha)\,S_B(\lambda), \tag{5}$$

where S_R(λ), S_G(λ) and S_B(λ) are the emission spectra of the red-, green- and blue-emitting nanocrystals, and the weighting coefficients k_R(α), k_G(α) and k_B(α) are functions of the azimuth α that can be expressed as piecewise functions determined by which colour sectors face the incident direction (Eq. (6)). Substituting Eq. (6) into Eq. (5), we obtain the functional relationship between the output spectrum S(λ) of the detector and the azimuth α of the measured light. The spectrum S(λ) can be converted into the CIE colour tristimulus values (X, Y and Z) by

$$X=K\!\int\! S(\lambda)\,\bar{x}(\lambda)\,\mathrm{d}\lambda,\qquad Y=K\!\int\! S(\lambda)\,\bar{y}(\lambda)\,\mathrm{d}\lambda,\qquad Z=K\!\int\! S(\lambda)\,\bar{z}(\lambda)\,\mathrm{d}\lambda, \tag{7}$$

where K is the proportionality coefficient and $\bar{x}(\lambda)$, $\bar{y}(\lambda)$ and $\bar{z}(\lambda)$ are the CIE 1931 standard colour-matching functions (CIE1931Std).
The chromaticity coordinates x and y are then obtained from the tristimulus values by

$$x=\frac{X}{X+Y+Z},\qquad y=\frac{Y}{X+Y+Z}, \tag{8}$$

so the colour corresponding to each azimuth α can be displayed intuitively on the chromaticity diagram. In our experiment, the chromaticity response of the output light traces a large triangle on the chromaticity diagram as the azimuth α varies from 0° to 360°.
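To make this colour-encoding pipeline concrete, the following Python sketch mixes three assumed Gaussian emission spectra with one plausible realization of the piecewise weights in Eq. (6) (the fractional overlap of each 120° colour sector with the illuminated half-disk; the sector layout is our assumption) and converts the mixed spectrum to chromaticity via Eqs. (7) and (8), using the analytic colour-matching-function fit of Wyman et al. in place of the tabulated CIE1931Std data:

    import numpy as np

    lam = np.arange(380.0, 781.0, 1.0)          # wavelength grid, nm (1-nm steps)

    def lobe(x, mu, s1, s2):                    # piecewise-Gaussian lobe
        s = np.where(x < mu, s1, s2)
        return np.exp(-0.5 * ((x - mu) / s) ** 2)

    # Analytic fit to the CIE 1931 colour-matching functions (Wyman et al., 2013).
    xbar = (1.056 * lobe(lam, 599.8, 37.9, 31.0) + 0.362 * lobe(lam, 442.0, 16.0, 26.7)
            - 0.065 * lobe(lam, 501.1, 20.4, 26.2))
    ybar = 0.821 * lobe(lam, 568.8, 46.9, 40.5) + 0.286 * lobe(lam, 530.9, 16.3, 31.1)
    zbar = 1.217 * lobe(lam, 437.0, 11.8, 36.0) + 0.681 * lobe(lam, 459.0, 26.0, 13.8)

    # Assumed CsPbX3 emission spectra (stand-ins for the measured S_R, S_G, S_B).
    S_R, S_G, S_B = (lobe(lam, mu, w, w) for mu, w in ((640, 12), (520, 10), (460, 9)))

    def weights(alpha_deg):
        """k_i(alpha): overlap of each 120-deg sector with the lit half-disk."""
        phi = np.arange(0.0, 360.0, 0.5)
        lit = np.cos(np.radians(phi - alpha_deg)) > 0
        k = np.array([lit[(phi >= lo) & (phi < lo + 120)].mean() for lo in (0, 120, 240)])
        return k / k.sum()

    def xy_of_azimuth(alpha_deg):
        kR, kG, kB = weights(alpha_deg)
        S = kR * S_R + kG * S_G + kB * S_B                            # Eq. (5)
        X, Y, Z = (float(np.sum(S * b)) for b in (xbar, ybar, zbar))  # Eq. (7), K = 1
        return X / (X + Y + Z), Y / (X + Y + Z)                       # Eq. (8)

    for a in (0, 90, 180, 270):
        print(a, xy_of_azimuth(a))

Sweeping α through 360° moves the (x, y) point around a closed loop on the chromaticity diagram, which is the behaviour exploited for angle readout.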

Two azimuth detectors arranged perpendicular to each other can perform 3D omnidirectional light-field detection. In spherical coordinates, a detector placed parallel to the y-axis (Supplementary Fig. 8a) can measure the angular variation of light around the y-axis in the XOZ plane; that is, when the light drawn in blue scans along the direction indicated by the red arrow, changes in the angle α2 can be detected. A detector placed parallel to the x-axis (Supplementary Fig. 8b) can measure the angular variation of light around the x-axis in the YOZ plane; that is, when the light drawn in blue scans along the direction indicated by the red arrow, changes in the angle α1 can be detected. Accordingly, in spherical coordinates (Supplementary Fig. 9a), for a beam incident from any direction (θ, φ), with θ measured from the Z axis and φ measured from the X axis, detector 1 detects the angle α1 between the projection of the beam onto the YOZ plane and the Z axis, while detector 2 detects the angle α2 between the projection of the beam onto the XOZ plane and the Z axis. The relationships between α1, α2 and θ, φ are as follows:

$$\tan\alpha_1=\tan\theta\,\sin\varphi, \tag{9}$$

$$\tan\alpha_2=\tan\theta\,\cos\varphi, \tag{10}$$
where α1 and α2 are encoded in the colour outputs of detectors 1 and 2, respectively. In a specific experiment, α1 and α2 are obtained from the CIE tristimulus values of the colour outputs of detectors 1 and 2. The azimuth angle φ and incident angle θ of the beam are then obtained from the following expressions derived from equations (9) and (10):

$$\varphi=\arctan\!\left(\frac{\tan\alpha_1}{\tan\alpha_2}\right), \tag{11}$$

$$\theta=\arctan\!\left(\sqrt{\tan^2\alpha_1+\tan^2\alpha_2}\right). \tag{12}$$

We further designed a 3D light-direction imaging array using perovskite nanocrystals in which adjacent pixels are perpendicular to each other. For simplicity, the angle detected by detectors parallel to the x-axis is denoted by αi,j (i and j refer to the rows and columns of the nanocrystal arrays), and the angle detected by detectors parallel to the y-axis is denoted by βi,j. Each pair of mutually perpendicular azimuth detectors can reconstruct the angle of the beam incident at the centre of the two pixels. For example, α1,1 and β1,2 can be used to calculate the 3D angle of the beam incident at point s11, whereas β2,1 and α1,1 can be used to calculate the 3D angle of the beam incident at point s21.
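A minimal numerical check of the reconstructed relations (9)-(12) (our sketch; θ is taken from the z axis, as above):

    import numpy as np

    def detector_angles(theta, phi):
        """Forward model, Eqs. (9) and (10): beam direction -> detector readings."""
        alpha1 = np.arctan(np.tan(theta) * np.sin(phi))   # projection onto YOZ
        alpha2 = np.arctan(np.tan(theta) * np.cos(phi))   # projection onto XOZ
        return alpha1, alpha2

    def beam_direction(alpha1, alpha2):
        """Inverse model, Eqs. (11) and (12): detector readings -> beam direction."""
        phi = np.arctan2(np.tan(alpha1), np.tan(alpha2))
        theta = np.arctan(np.hypot(np.tan(alpha1), np.tan(alpha2)))
        return theta, phi

    a1, a2 = detector_angles(np.radians(30.0), np.radians(55.0))
    print(np.degrees(beam_direction(a1, a2)))   # -> [30. 55.]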

S4. Fabrication and integration of 3D light-field sensor arrays
Processing steps for the light-field detector array

Supplementary Fig. 10 │ Schematic of the fabrication process of the pixelated perovskite nanocrystal arrays. a, Pre-patterned Si templates used to fabricate the red and blue pixel arrays. b, The prepared red-emitting QD-PDMS ink is injected into the corresponding rectangular holes of the template by direct printing. c, The prepared blue-emitting QD-PDMS ink is injected into the corresponding rectangular holes. d, PDMS is then spun onto the template printed with the blue and red inks and heat-treated in a vacuum oven for 30 minutes. e, The film carrying the red and blue pixel arrays is obtained by demolding. f, The gaps between the red and blue pixels are filled with transparent PDMS. g, The prepared green-emitting QD-PDMS ink is injected into the rectangular holes of a prefabricated template. h, The processed PDMS film with the red and blue pixel arrays from step f is overlaid on the green-ink-printed Si template and heat-treated in a vacuum oven for 30 minutes. i, The film carrying the red, green and blue pixel arrays is obtained by demolding.

Error analysis of the fabrication and calibration
A highly robust planar process is used to fabricate the azimuth detector arrays. Typical fabrication errors include random defects and misalignment (Supplementary Fig. 11). The demolding process used in this work has high processing accuracy, and edge defects can be controlled to within 0.1%. The random-defect error of an entire azimuth detector pixel is almost negligible owing to the averaging effect. Because alignment is required between the upper and lower material layers, an alignment error exists, as shown in Supplementary Fig. 11. In the figure, w represents the thickness of a single-colour pixel, D represents the vertical distance between the measured object and the detector, ∆d denotes the alignment deviation (<2%), and ∆θ1 denotes the angle-measurement deviation, which is proportional to the measurement distance. When w << D, the angle deviation can be ignored.
Supplementary Fig. 11 │ Typical fabrication errors caused by random defects and misalignment.

S5. Geometric model of the 3D imaging system
The 3D imaging scheme used was the triangulation method based on multiline structured light illumination.
For simplicity, we first analysed the situation under single-line structured-light illumination (Supplementary Fig. 12).

To increase the change of the incident angle on the detector as the distance changes, and to reduce the lateral movement of the light spot on the detector, we designed an objective composed of two lenses. In Supplementary Fig. 12, L1 and L2 are the centre points of lens 1 and lens 2, respectively. The light source at point O emits single-line structured light perpendicular to the XOZ plane, and the distance between points O and L1 is D. For each object point P (x, y, z) irradiated by the single-line structured light, its image formed by lens 1, expressed in the coordinate system X0O0Y0Z0 of lens 1, is located at point P0 in the X0O0Y0 plane. The X0O0Z0 plane and the XOZ plane are coplanar. The angle between the ray OP and the XOZ plane is φ, and the angle between the projection OP′ of ray OP onto the XOZ plane and the OX axis is α. The angle between the optical axis O0Z0 of lens 1 and OL1 is βz. The angle between the projection P′L1 of ray PL1 onto the X0O0Z0 plane and the optical axis O0Z0 is β0, and the angle between the projection PzL1 of ray PL1 onto the Y0O0Z0 plane and the optical axis O0Z0 is θ0. The projections of the coordinate system of lens 1 onto the XOZ plane and the Y0O0Z0 plane are shown in Supplementary Figs. 12b and 12c. The light L1P0 is refracted by lens 2 onto the detector plane X1O1Y1. The projections of the camera's coordinate system onto the XOZ plane and the Y0O0Z0 plane are shown in Supplementary Figs. 12d and 12e, respectively. The angle between the projection of ray P0P1 onto the X1O1Z1 plane and the optical axis O0Z0 is β1, and the angle between the projection of ray P0P1 onto the Y1O1Z1 plane and the optical axis O1Z1 is θ1. The distance between lens 1 and lens 2 is d, and the distance between lens 2 and the detector plane is l. n and m represent the numbers of pixels on the detector in the X and Y directions, respectively.
According to the geometric relations in Supplementary Fig. 12, the position coordinates x, y and z of the object point P can be solved by equations (13)–(17), where s and s1 represent the dimensions of a single detector pixel in the X and Y directions, respectively.
In a specific experiment, α, βz, D, d and l need to be calibrated in advance. β1 and θ1 are obtained from the colour output of the angle detection, and the coordinates x, y and z of object point P are then solved by equations (13)–(17).

S6. Parameter selection of the 3D imaging system
In the 2D scheme of the designed imaging system in Supplementary Fig. 13, at a certain distance z, the lateral position x and the angle βt between the reflected or scattered light ray PL1 and the X axis are

$$x=\frac{z}{\tan\alpha}, \tag{18}$$

$$\beta_t=\arctan\!\left(\frac{z}{D-x}\right). \tag{19}$$

According to the geometric relationship in Supplementary Fig. 13 and the Gaussian formula of geometric optics, the object distance l1 and the image distance l1′ of lens 1 are

$$l_1=\sqrt{(D-x)^2+z^2}\,\cos(\beta_t-\beta_z), \tag{20}$$

$$l_1'=\frac{l_1 f_1}{l_1-f_1}, \tag{21}$$

where f1 is the focal length of lens 1 and βz is the angle between the optical axis of lens 1 and the coordinate axis OX.
The object distance l2 and the image distance l2′ of lens 2 are

$$l_2=d-l_1', \tag{22}$$

$$l_2'=\frac{l_2 f_2}{l_2-f_2}, \tag{23}$$

where f2 is the focal length of lens 2.
The vertical magnifications of lens 1 (β1), lens 2 (β2) and the combined system (β) are

$$\beta_1=\frac{l_1'}{l_1},\qquad \beta_2=\frac{l_2'}{l_2},\qquad \beta=\beta_1\beta_2. \tag{24}$$

The heights of the image on the primary imaging plane and on the detector imaging plane are therefore

$$y_1'=\beta_1\,y,\qquad y_2'=\beta\,y. \tag{25}$$

The angle β0 between the light beam incident onto the primary image plane and the optical axis, and the angle β0′ between the light beam incident onto the detector imaging plane and the optical axis, are

$$\beta_0=\arctan\!\left(\frac{y_1'}{l_1'}\right),\qquad \beta_0'=\arctan\!\left(\frac{y_2'}{l_2'}\right). \tag{26}$$

The goal of the parameter optimization is to minimize the change of y2′ with distance z while maximizing the change of β0′ with z. We therefore analysed δy2′/δz and δβ0′/δz as functions of the system parameters D, α, βz, d, f1 and f2 (Supplementary Fig. 14). Considering the resolution and the detectable distance range, we set the system parameters to D = 50 mm, α = 90°, βz = 78°, f1 = 75 mm, f2 = 25 mm and d = 145 mm.
The optimal imaging parameters are listed in Table S1. When the distance z changes by 0.1 mm, the angle β0′ of the light incident on the detector changes by approximately 0.0038°, and the light spot moves on the detector by approximately 1 μm, which is indistinguishable for a conventional CCD with a pixel size of 3–10 μm. By attaching the light-field imaging film onto the CCD, a distance change of 0.1 mm can be differentiated by angle detection. Under the optimized system parameters, a distance change of 200 mm causes the spot on the detector to move by 1.4 mm. It should be noted that the selection of imaging parameters depends largely on the distance z, so the system parameters must be determined according to the distance range of the application.
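The following Python sketch chains equations (18)–(26) to show how such sensitivities can be evaluated numerically; the off-axis field height y = |PL1|·sin(βt − βz) is our reading of the geometry, so the printed numbers are illustrative rather than a reproduction of Table S1:

    import numpy as np

    D, F1, F2, DIST = 50.0, 75.0, 25.0, 145.0          # mm
    ALPHA, BETA_Z = np.radians(90.0), np.radians(78.0)

    def chain(z):
        """Eqs. (18)-(26): depth z -> spot height y2' and incident angle beta0'."""
        x = z / np.tan(ALPHA)                          # Eq. (18)
        beta_t = np.arctan2(z, D - x)                  # Eq. (19)
        r = np.hypot(D - x, z)                         # |PL1|
        l1 = r * np.cos(beta_t - BETA_Z)               # Eq. (20)
        y = r * np.sin(beta_t - BETA_Z)                # field height at lens 1 (assumed)
        l1p = l1 * F1 / (l1 - F1)                      # Eq. (21)
        l2 = DIST - l1p                                # Eq. (22)
        l2p = l2 * F2 / (l2 - F2)                      # Eq. (23)
        y2p = (l1p / l1) * (l2p / l2) * y              # Eqs. (24)-(25)
        return y2p, np.arctan2(y2p, l2p)               # Eq. (26): y2' and beta0'

    z, dz = 500.0, 0.1                                 # mm
    (y0, b0), (y1, b1) = chain(z), chain(z + dz)
    print("spot shift per 0.1 mm: %.4f um" % ((y1 - y0) * 1e3))
    print("angle change per 0.1 mm: %.5f deg" % np.degrees(b1 - b0))
    print("depth precision at 0.02 deg resolution: %.2f mm"
          % (np.radians(0.02) * dz / abs(b1 - b0)))

With these parameters the model shows the qualitative behaviour described above: sub-micrometre spot motion for a 0.1-mm depth step, while the corresponding angle change and depth precision depend strongly on z.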

Quantitative relationship between depth of field and depth precision
According to the principle of triangulation ranging, the farther the object is from the imaging system, the smaller the angle change caused by the depth change and the lower the depth precision. The detectable depth range depends on the dynamic range of the angle measurement and the imaging system parameters. Under the designed system parameters (D = 50 mm, α = 90°, βz = 78°, f1 = 75 mm, f2 = 25 mm, and d = 145 mm), the incident angle on the azimuth detector varies from -30.2° to 40.4° when the detection distance varies from 200 mm to 2500 mm. When the angular resolution of the azimuth detector is 0.02°, the distance accuracy varies from 0.01 mm to 19.7 mm (Supplementary Fig. 15).

Supplementary Fig. 15 │ Quantitative relationship between depth of field and depth precision. a,
Theoretical relationship between the imaging depth and the angle incident on the azimuth detector for the given system parameters (D = 50 mm, α = 90°, βz = 78°, f1 = 75 mm, f2 = 25 mm, and d = 145 mm). b, Depth precision versus imaging depth z. c, Theoretical relationship between the imaging depth and the angle incident on the azimuth detector for the given system parameters (D = 125 mm, α = 90°, βz = 78°, f1 = 100 mm, f2 = 30 mm, and d = 170 mm). d, Depth precision versus imaging depth z.

With the system parameters D = 125 mm, α = 90°, βz = 78°, f1 = 100 mm, f2 = 30 mm and d = 170 mm, the incident angle on the azimuth detector varies from -27.2° to 39.2° when the detection distance varies from 450 mm to 4000 mm. When the angular resolution of the azimuth detector is 0.02°, the distance accuracy varies from 0.07 mm to 19.3 mm. For a depth of 1000 mm, the depth precision of the two systems is 2.8 mm and 0.76 mm, respectively. To achieve a suitable depth of field and depth precision, the system parameters must be selected according to equations (18)–(26).

Quantitative relationship between angular resolution and depth precision
There is a quantitative relationship between angular resolution and depth precision, depending on system parameters and imaging depth z (Supplementary Fig. 16).

Experimental results of the relationship between depth accuracy and depth of field
Under the designed system parameters (D = 50 mm, α = 90°, βz = 78°, f1 = 75 mm, f2 = 25 mm and d = 145 mm), the depth of field ranges from 200 mm to 2500 mm. We tested the depth accuracy of a plate sample at distances of 500 mm, 1000 mm and 2000 mm, respectively (Supplementary Fig. 17a). Under the designed system parameters (D = 125 mm, α = 90°, βz = 78°, f1 = 100 mm, f2 = 30 mm and d = 170 mm), the depth of field ranges from 450 mm to 4000 mm. We tested the depth accuracy of a plate sample at distances of 1000 mm, 2000 mm and 3000 mm, respectively (Supplementary Fig. 17b).

Calibration of the emission angle of multiline structured light
The system uses an optical grating after the light source to generate multiline structured light and scans the object surface in a normal-incidence mode. The angle between the two edge light planes of the structured light is α, the angle between each structured-light plane and the XOY plane is αi, and the angle between adjacent structured-light planes is w (Supplementary Fig. 18). Since the structured light is incident perpendicular to the target, αi can be obtained from

$$\alpha_i=90^{\circ}-\left(n-\frac{N+1}{2}\right)w,$$

where n is the index of the line-structured-light plane and N is the total number of planes.
In the actual calibration, the structured light was normally incident on a white flat plate, with OO′ being the optical axis. The angle α between the two edge light planes of the structured light is then

$$\alpha=(N-1)\,w.$$

Calibration of homemade camera parameters
The conversion between the world coordinate system (xw, yw, zw) and the pixel coordinate system (u, v) of the CCD is

$$s\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=K\,[R\mid T]\begin{bmatrix}x_w\\ y_w\\ z_w\\ 1\end{bmatrix},$$

where s is the scale factor, K is the internal parameter matrix of the camera, R is the rotation matrix of the camera in the world coordinate system and T is the translation matrix. The Zhang calibration method was used to determine the internal parameters, external parameters and distortion parameters of the camera. First, we printed a sheet of paper with a black-and-white checkerboard grid and took several images of it from different angles with the camera to be calibrated. The feature points in the collected images were then identified and processed with the Matlab camera-calibration library toolbox_calib to obtain the internal and external parameters as well as the distortion parameters of the camera.
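For reference, the same Zhang calibration can be scripted with OpenCV (a sketch, not the authors' pipeline, which used MATLAB's toolbox_calib; the board geometry and file paths below are assumptions):

    import glob
    import cv2
    import numpy as np

    CB = (9, 6)                                          # inner corners of the grid
    objp = np.zeros((CB[0] * CB[1], 3), np.float32)      # planar target: z_w = 0
    objp[:, :2] = np.mgrid[0:CB[0], 0:CB[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for fname in glob.glob("calib/*.png"):               # images of the printed grid
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, CB)
        if found:
            criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
            corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            obj_pts.append(objp)
            img_pts.append(corners)

    # K: internal matrix; dist: distortion; rvecs/tvecs: per-view rotation
    # (Rodrigues vector form of R) and translation T for each board pose.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    print("reprojection RMS:", rms, "\nK =\n", K)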

S10. Factors affecting angular resolution
We further analysed the relationship between the angular resolution and the optical power as well as the X-ray dose rate (Supplementary Fig. 21). The results show that the angular resolution is independent of the excitation power as long as the power is not particularly low, since the light intensity does not affect the chromaticity derived from the colour tristimulus values.
A weak excitation, however, results in a decrease in angular resolution. By measuring the direction of light at 405 nm and 0.5 mW, an angular resolution of approximately 0.01° can be obtained. It should be noted that the optical power quoted is the set power of the laser and not the absolute power that actually reaches the detector. When the thickness of the material layer is greater than 100 μm, the perovskite nanocrystals generate visible radioluminescence that is strong enough to be detected by the CCD or colour sensors when irradiated at a dose rate of 10 μGy/s. When measuring the direction of X-rays at a dose rate of 10 μGy/s, the angular resolution is approximately 0.013°. The sensitivity of the angle measurement over the entire dynamic range was also analysed.
At a low light power (<0.3 mW), the detection sensitivity for light incident at 60° is attenuated by approximately 75% compared with that for normally incident light.

c, Sensitivity versus angle of the incident light. The curve was measured at an incident optical power of 0.3 mW.

Twenty measurements were performed at each angle, and data are presented as mean values ± SEM.

S12. Wavefront detection principle
Wavefront detection for extreme ultraviolet (EUV) light or X-rays typically uses Hartmann (or Shack-Hartmann) wavefront-sensing techniques, in which the beam passes through an aperture array (or microlens array) and is projected onto a CCD camera that records the beamlet sampled by each aperture (or microlens). The centroid position of each spot is then measured and compared with its reference position. This enables the local slopes of the wavefront to be measured at a large number of points through

$$\tan\theta_x=\frac{\Delta x}{L},\qquad \tan\theta_y=\frac{\Delta y}{L},$$

where Δx and Δy are the centroid displacements and L is the distance between the aperture array and the camera. In our light-field-sensor-based wavefront measurement, the local slope of the wavefront is obtained directly from the angle detectors, without the need for an array of apertures or microlenses. The local slopes of the wavefront W(x, y) can then be written as

$$\frac{\partial W}{\partial x}=\tan\alpha_{i,j},\qquad \frac{\partial W}{\partial y}=\tan\beta_{i,j},$$

where αi,j and βi,j are the angles reported by the detector pixels parallel to the x- and y-axes, respectively (Section S3).
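The integration of these slope maps into the wavefront W(x, y) is not spelled out above; as a minimal sketch (our illustration, with grid size and units assumed), the standard Frankot-Chellappa least-squares integrator recovers a surface from its two slope maps:

    import numpy as np

    def integrate_slopes(sx, sy, dx=1.0):
        """Least-squares (Frankot-Chellappa) integration of sx = dW/dx, sy = dW/dy."""
        ny, nx = sx.shape
        fx = np.fft.fftfreq(nx, d=dx)[None, :]
        fy = np.fft.fftfreq(ny, d=dx)[:, None]
        Sx, Sy = np.fft.fft2(sx), np.fft.fft2(sy)
        denom = fx ** 2 + fy ** 2
        denom[0, 0] = 1.0                        # avoid division by zero at DC
        W = -1j * (fx * Sx + fy * Sy) / (2.0 * np.pi * denom)
        W[0, 0] = 0.0                            # piston (mean level) is arbitrary
        return np.fft.ifft2(W).real

    # Demo on a synthetic defocus-like wavefront; the slope maps stand in for the
    # tan(alpha_ij) and tan(beta_ij) maps reported by the pixelated angle detectors.
    y, x = np.mgrid[0:64, 0:64].astype(float)
    W_true = 0.01 * ((x - 32) ** 2 + (y - 32) ** 2)
    sx, sy = np.gradient(W_true, axis=1), np.gradient(W_true, axis=0)
    W_rec = integrate_slopes(sx, sy)
    resid = (W_rec - W_true) - (W_rec - W_true).mean()
    print("rms residual:", resid.std())          # small, up to boundary artefacts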