Digital coded exposure formation of frames from event-based imagery

Event-driven neuromorphic imagers have a number of attractive properties including low power consumption, high dynamic range, the ability to detect fast events, low memory consumption, and low bandwidth requirements. One of the biggest challenges with using event-driven imagery is that the field of event data processing is still embryonic. In contrast, decades of effort have been invested in the analysis of frame-based imagery. Hybrid approaches for applying established frame-based analysis techniques to event-driven imagery have been studied since event-driven imagers came into existence. However, the process of forming frames from event-driven imagery has not been studied in detail. This work presents a principled digital coded exposure approach for forming frames from event-driven imagery that is inspired by the physics exploited in a conventional camera featuring a shutter. The technique described in this work provides a fundamental tool for understanding the temporal information content that contributes to the formation of a frame from event-driven imagery data. Event-driven imagery allows for the application of arbitrary virtual digital shutter functions to form the final frame on a pixel-by-pixel basis, allowing for careful control of the spatio-temporal information that is captured in the frame. Furthermore, unlike with a conventional physical camera, event-driven imagery can be formed into any variety of possible frames in post-processing after the data is captured, and the coded-exposure virtual shutter functions can assume arbitrary values including positive, negative, real, and complex values. The coded exposure approach also enables applications of industrial interest, such as digital stroboscopy, without any additional hardware.
The ability to form frames from event-driven imagery in a principled manner opens up new possibilities in the ability to use conventional frame-based image processing techniques on event-driven imagery.


Introduction
Event-based, bioinspired imagers have been in development for decades now on account of a number of advantages they hold over traditional imaging modalities. The pixels of these imagers report that an event has occurred when the brightness at the pixel undergoes a change that exceeds a preset threshold. The event information typically includes a timestamp indicating when the event occurred, the pixel location, and the polarity of the intensity change. One of the earlier examples of a functional, high-performance event-based imager is found in [1]. Since event-driven imagers have become commercially available, their use has been studied for applications in mobile robotics and control. A thorough overview of event-based vision techniques can be found in [2].
The problem of imaging the high speed dynamics of physical scenes has been of interest for nearly two centuries. Early approaches to the problem took the form of leveraging the stroboscopic effect to visualize and image the dynamics of high-speed mechanical systems that could not otherwise be observed [3]. Stroboscopic approaches have found wide use in industry, science, and medicine [4][5][6]. In 2006, Raskar developed the concept of coded exposure photography for reducing motion blur when taking images of dynamic scenes [7]. The coded exposure was implemented using a 'flutter shutter', which added additional hardware to the photography pipeline but could reduce motion blur in a digital image, which is important for a number of industrial and artistic applications. A CMOS implementation of pixel-wise coded exposure imaging that made use of compressive sampling for sensor nodes was implemented by Zhang [8].
The low latency associated with event-based imagers is particularly attractive for capturing dynamic scenes. The nature of the operation of event-based imagers results in event data that does not exhibit temporal aliasing or motion-blur phenomena. These two properties make event-based imagers attractive for quantitative measurement of structural dynamics that would typically require an expensive high-speed camera that generates large amounts of redundant data [9]. The most popular and well-studied technique for measuring structural deformation information from image sensors is digital image correlation [10]. However, traditional digital image correlation techniques are built on the analysis of image frames. In order to use event-based imagery in existing digital image correlation data processing pipelines it is necessary to first convert the events into image frames. However, it must be noted that when performing digital image correlation on dynamic/vibrating structures, the framerate and exposure time used to form the frames must be carefully selected in order to ensure the dynamics of the system are properly captured. Other engineering applications where framerate and exposure time must be carefully controlled include structural health monitoring, vibration monitoring, machinery monitoring, and structural dynamics identification. One approach to obtaining a result similar to a custom coded exposure for structural dynamics is presented in [11]. In this work, the illumination intensity is modulated in a sinusoidal manner during the exposure in order to primarily capture structural dynamics in the photo that correspond to the frequency at which the illumination is modulated. This illumination modulation technique allows for the capture of fast structural dynamics using a relatively low-speed camera. However, the drawbacks of this approach are the necessity for illumination hardware and the fact that only one frequency can be captured per measurement.
In this work we present the development of a digital coded exposure approach to frame generation from event-driven imagery data [12]. This approach takes the hardware coded exposure technique presented in [7] and extends it to a digital domain appropriate for event-driven imagery. In addition, the digital coded exposure concept is generalized such that it not only allows for control of motion blur during frame formation, but we also demonstrate the ability to implement digital coded exposures that assume the form of arbitrary window functions [13], which results in the ability to effectively filter the temporal dynamic content that is allowed to contribute to the formation of the final image.
Digital coded exposure to emulate the frame formation of a conventional camera. One of the primary concerns when capturing images of a dynamic scene is the shutter speed setting, which determines the time the sensor is exposed. The effects of shutter speed/exposure time are demonstrated using a photograph of a rotating fan as shown in figures 1 and 2. If the shutter speed is set low, the exposure time will be long, and the photograph could suffer from motion blur effects that smear moving features across the image. This is illustrated in figure 1. The rotating blade is smeared across the image, increasing the uncertainty associated with the blade position. In contrast, a high shutter speed reduces motion blur effects. Figure 2 demonstrates that when the shutter speed is set to a high value, motion blur effects are reduced and it is possible to make out the position of the rotating blades with low uncertainty relative to the slow shutter speed case. However, the ability to reduce motion blur phenomena comes at a cost in the number of photons that the image sensor is exposed to. A long exposure time will result in more photons impacting the sensor, resulting in brighter photographs (figure 1). In contrast, a short exposure time will reduce the number of photons that impact the sensor, resulting in darker photographs (figure 2).
A more quantitative description of the effect of shutter speed for an individual pixel on the image sensor is provided in a graphical form in figure 4. This illustration assumes that images are being collected in a continuous form as would be done with a video camera. The video camera would have a frame rate associated with it, which determines the total amount of time that is dedicated to forming each frame in the video. The exposure time indicates the amount of time during the frame formation that the shutter is open and light is integrated at the pixel to form the image. When the shutter is closed, light cannot reach the image sensor, and so light that arrives during the shutter-closed time period does not contribute to the formation of the frame. This process repeats for each frame captured in the video. More details on image formation in a conventional camera can be found in [14,15].
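This shutter model can be made concrete with a small simulation. The following is an illustrative sketch (the function name and parameter values are assumptions, not part of this work) that integrates a single pixel's irradiance over the shutter-open portion of each frame period:

```python
import numpy as np

def conventional_frames(irradiance, t, frame_period, exposure_time):
    """Integrate a pixel's irradiance over the shutter-open portion of each
    frame period; light arriving while the shutter is closed is discarded."""
    dt = t[1] - t[0]
    n_frames = int(t[-1] // frame_period)
    values = []
    for n in range(n_frames):
        t0 = n * frame_period  # shutter opens at the start of each frame
        open_mask = (t >= t0) & (t < t0 + exposure_time)
        values.append(irradiance[open_mask].sum() * dt)
    return np.array(values)

# Constant irradiance: a long exposure collects ~8x the light of a short one,
# mirroring the bright/dark contrast between figures 1 and 2.
t = np.arange(0.0, 1.0, 1e-4)                 # 1 s of samples
flux = np.ones_like(t)
bright = conventional_frames(flux, t, frame_period=0.1, exposure_time=0.08)
dark = conventional_frames(flux, t, frame_period=0.1, exposure_time=0.01)
```

For a dynamic scene, the same integral smears any motion occurring within the open window, which is the origin of motion blur.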
In a traditional imager, the shutter is roughly considered to assume binary states of being open or closed. In [7], Raskar developed binary open-closed flutter shutter sequences that would reduce the motion blur present in the final frame. Raskar implemented the flutter shutter by adding additional hardware to a conventional camera. It is important to note that since the flutter shutter is implemented in hardware, it is not possible to change the flutter shutter characteristics after the frame is formed. Significant follow-on research on flutter shutter/coded exposure imagers has taken place since Raskar's original work. In 2012, Holloway presented an exposure coding architecture for compressed sensing of video [16]. Holloway leveraged the capabilities of high-speed machine vision imagers to avoid the need for additional hardware such as a spatial light modulator to implement the coded exposure. Liu implemented a pixel-wise coded exposure using a liquid-crystal-on-silicon device [17]. Liu's approach was distinctive in that it achieved sparse representation of videos using over-complete dictionaries over video patches and it adhered to practical hardware constraints associated with current image sensors.
In addition to having a coded exposure it is also possible to implement a coded aperture. In 2013 Llull implemented a mechanical translation of a coded exposure to enable temporal compression [18]. In 2020 Elmalem made use of a joint optical-digital spatio-temporal phase aperture coding scheme to implement motion deblurring. In this work dynamic phase coding was implemented in the aperture during image acquisition [19]. The advantage of Elmalem's approach is that it could perform encoding without any limitation on the direction of motion and without sacrificing light efficiency.
Frames are commonly formed from event-driven data. One naïve approach to forming event-driven frames is to simply sum all the events for each pixel over a specified time period. Figure 3 shows the result of forming frames by performing a pixel-wise summation of events over a variety of specified periods of time. The imagery was captured by observing an electrode and the resulting melt pool formed during a gas tungsten arc welding process using a DAVIS 240C dynamic vision sensor with a microscope optics setup and a weld shade lens (#10 and #12 shades tend to be appropriate for this application). By examining figure 3 it is observed that as the time window over which events are summed increases, the motion blur in the resulting frame also increases, as would be expected in a traditional imager. In applications such as monitoring the spatio-temporal dynamics of weld pools during manufacturing processes it is important to appreciate the effects of motion blur when sending frames to a conventional image processing pipeline. The high dynamic range of event-driven imagers makes them particularly attractive for applications such as this; however, the nuances associated with frame formation must be understood and addressed. In prior work by Rebecq, frames were formed by fusing data captured by a conventional imager and an event-based imager [20]. However, Rebecq's work did not address considerations associated with exposure time.
In this work we present a further extension of the shutter function used for frame formation in the form of a virtual shutter function/digital coded exposure that can be applied to event data captured by an event-driven imager. An event-driven imager does not have a frame rate or shutter typical of a conventional imager (see figure 4) and instead only reports changes in per-pixel intensity along with a timestamp and the polarity of the change. As a result, each event is very roughly analogous to a cluster of photons that interacts with the pixel. However, in a conventional frame-based camera the intensity values for a pixel are generated by exposing the pixel to photons, which results in the generation of electrons which accumulate on the photodiode prior to read-out. In contrast, in an event-based imager each event measures the polarity of the change in photo-current. Since the event-driven sensor reports pixel location, timestamp, and event polarity, it is possible to apply an arbitrary digital coded exposure function to each stream of events that might be used to form an image from event-driven imagery. Furthermore, because the digital coded exposure is applied in software, it is possible to apply any digital coded exposure to event data in post processing. This means it is possible to use a single event-driven capture of a dynamic event to generate multiple images with different motion blur characteristics in post processing. Furthermore, the start and end time of the open period of the virtual shutter function can be adjusted in post processing (figure 5). This capability is useful for dynamic applications that might typically be approached with a strobe light, where there is a need to image a periodic dynamic phenomenon at different phases of its motion. The final frame is formed simply by summing, over all the events that fall within the temporal window of the frame, the product of the event polarity with the virtual shutter function value at the event's timestamp.
The process of forming frames from event-driven data using a coded exposure/virtual shutter function is described by:

frame_n(x, y) = Σ_e C(e.timestamp) · e.polarity

where the sum is taken over all events e occurring at location (x, y) during the interval [n · d, (n + 1) · d), and:
n is the index of the current frame,
x is the horizontal index of the current pixel,
y is the vertical index of the current pixel,
d is the duration (in microseconds) of the frame,
C is the coded exposure/virtual shutter function,
e.timestamp is the time (in microseconds) of the event e, and
e.polarity is the polarity (+1 or −1) of the event e.
The effect of the shutter speed/exposure time on the formation of the final frame can be analyzed quantitatively. For each possible sinusoidal irradiance excitation that interacts with the sensor, the sinusoidal excitation is multiplied with the virtual shutter function and integrated over the time period over which the frame is formed. When extended to consider all possible sinusoidal excitations, this procedure is equivalent to considering the Fourier transform of the virtual shutter function/digital coded exposure. Alternatively, the shutter function can be thought of as a temporal window function applied at the zero-frequency component of a Fourier transform. Figure 6 provides the magnitude of the Fourier transform of a conventional box-car function used for frame formation with different percentages of time of the (virtual) shutter being open. In this case the framerate is assumed to be 5.38 Hz to match the rotational frequency of a high-pressure air compressor that will be used to demonstrate this approach later in the paper. The phase of the box-car (virtual) shutter function is not considered because the magnitude of the Fourier transform is invariant to phase shifts. An inspection of these results shows that a box-car (virtual) shutter function with a large open time acts as a low-pass filter.
As the open time of the (virtual) shutter function decreases, the cut-off frequency of the low-pass filter increases, resulting in an increasingly wide pass-band. This result is in accordance with the Fourier time-frequency scaling properties.
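The frame-formation rule can be sketched directly in code. The following minimal example (the event values, the helper name `form_frame`, and the boxcar shutter are illustrative assumptions) accumulates the product of event polarity and shutter value per pixel:

```python
import numpy as np

def form_frame(events, n, d, C, shape):
    """Accumulate C(e.timestamp) * e.polarity for every event e at pixel
    (x, y) whose timestamp falls within the interval [n*d, (n+1)*d)."""
    frame = np.zeros(shape)
    t0, t1 = n * d, (n + 1) * d
    for ts, x, y, polarity in events:
        if t0 <= ts < t1:
            frame[y, x] += C(ts) * polarity
    return frame

# Hypothetical events: (timestamp_us, x, y, polarity). The third event falls
# outside frame 0; the second falls in the shutter-closed half of the frame.
events = [(100, 2, 1, +1), (700, 2, 1, +1), (1200, 0, 0, -1)]
boxcar = lambda t: 1.0 if (t % 1000) < 500 else 0.0   # open for first 50%
frame0 = form_frame(events, n=0, d=1000, C=boxcar, shape=(4, 4))
```

Only the event that arrives while the virtual shutter is open contributes to the final frame; swapping in a different C in post-processing changes the frame without recapturing any data.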
One of the attractive features of event-based imagers is that they are immune to motion blur effects. However, users of popular event-based imager data visualization tools such as JAER [21] or DV [22] will observe that image-based visualizations of event data commonly exhibit motion blur effects. Figure 6 provides an explanation for why these motion blur effects are observed. These tools tend to naively sum events over some time period in order to form a frame. As a result, they are inadvertently applying a boxcar virtual shutter function with 100% duty cycle over the time period of the frame. This is analogous to taking a photo with the shutter open during the entire duration of the frame (100% exposure). Figure 6 shows that a 100% exposure setting corresponds to applying a low-pass filter to the data. As a result, motion blur effects will be observed in the image-based visualization of dynamic scenes much like they would be observed when taking a long-exposure photograph with a conventional imager. However, figure 6 also reveals that the nature of these motion blur effects can be modified simply by only summing events for a fraction of the time period over which a frame is formed. The result is analogous to reducing the exposure time when taking a photograph with a conventional imager.
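The low-pass interpretation can be checked numerically. The sketch below (the sampling rate and discretization are illustrative choices, with the 5.38 Hz framerate taken from the compressor example) compares the spectrum magnitudes of boxcar shutters with different open fractions:

```python
import numpy as np

fs = 10_000.0                       # Hz; rate used to discretize the shutter
frame_len = int(fs / 5.38)          # samples in one frame at a 5.38 Hz framerate
freqs = np.fft.rfftfreq(frame_len, d=1 / fs)

def boxcar_spectrum(duty):
    """|FFT| of a boxcar virtual shutter open for the first `duty` fraction
    of the frame period."""
    shutter = np.zeros(frame_len)
    shutter[: int(duty * frame_len)] = 1.0
    return np.abs(np.fft.rfft(shutter))

full = boxcar_spectrum(1.0)    # 100% exposure: passes essentially only DC
short = boxcar_spectrum(0.1)   # 10% exposure: ~10x wider main lobe
# Bin 6 sits near 32 Hz: the short exposure still passes substantial energy
# there, while the 100% exposure has already rejected it.
```

This mirrors figure 6: shrinking the open time widens the pass-band, so fast scene dynamics survive into the frame instead of being averaged into blur.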

Digital coded exposure as a virtual shutter: the Morlet wavelet digital coded exposure
One of the biggest advantages of the digital coded exposure over a conventional physical shutter is that the virtual shutter function can assume a much wider range of values. A conventional mechanical shutter approximately assumes only binary on-off/0-1 values. In contrast, a digital coded exposure can assume positive/negative, floating point, and even complex values. Furthermore, because the virtual shutter function is applied in post-processing, it is possible to form multiple types of frames from a single set of event-driven imagery. This is a capability that is simply not possible with traditional cameras or the classical coded exposure approach.
The larger range of values that a virtual shutter function can assume opens up much more flexibility when forming frames from event-driven imagery data. In the digital coded exposure framework, the virtual shutter function is analogous to a window function [13]. This means that there is no need to be limited to the use of a conventional boxcar function for the virtual shutter function. Instead, it is possible to trade off between frequency selectivity and sidelobe/spectral leakage characteristics by specifying the use of a variety of window functions (e.g. Blackman, Gaussian, Hann, Hamming, triangle). Furthermore, the virtual shutter function can be chosen to be selective to temporal patterns of interest. For example, it is possible to form the final frame in such a way that it primarily includes contributions from a specific sinusoidal frequency of interest. The ability to selectively modulate the contribution of frequency content in the formation of the final frame is now illustrated in the context of the Morlet/Gabor wavelet [23].
The Morlet/Gabor wavelet consists of a complex exponential/sinusoidal carrier signal modulated by a Gaussian window envelope [23]. The dual nature of the Morlet wavelet gives it the ability to trade off between uncertainty in temporal and frequency resolution, which is useful for a variety of signal processing tasks and has relationships to vision [24]. The Morlet wavelet w is written as:

w(t) = e^{i 2π f t} · e^{−t² / (2σ²)}

where i is the imaginary operator, f is the frequency in Hz, and t is the time in seconds [25]. σ is the width of the Gaussian, given by:

σ = n / (2π f)

where n is the number of cycles of the wavelet [25]. An alternative useful expression for the Morlet wavelet is:

w(t) = e^{i 2π f t} · e^{−4 ln(2) t² / h²}

where h is the full-width at half maximum in seconds of the Gaussian window modulating the sinusoid [25]. The Morlet wavelet can now be applied as a virtual shutter function to event-driven imagery data using a point-wise multiplication as illustrated in figure 7.
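As a concrete sketch (parameter values are illustrative assumptions), the FWHM form of the wavelet can be implemented and checked against its two defining properties: the half-maximum envelope width and the spectral peak at the carrier frequency.

```python
import numpy as np

def morlet(t, f, h):
    """Complex Morlet wavelet: a carrier at f Hz under a Gaussian envelope
    whose full-width at half maximum is h seconds (the FWHM form above)."""
    return np.exp(2j * np.pi * f * t) * np.exp(-4 * np.log(2) * t**2 / h**2)

t = np.arange(-0.5, 0.5, 1e-4)          # 1 s of support centered on t = 0
w = morlet(t, f=100.0, h=0.05)

# Envelope magnitude at t = h/2: half the peak value, by construction.
half_amp = np.abs(w[np.argmin(np.abs(t - 0.025))])
# The spectrum peaks at the carrier frequency of the wavelet.
peak_freq = np.fft.fftfreq(t.size, d=1e-4)[np.argmax(np.abs(np.fft.fft(w)))]
```

Widening h sharpens the frequency selectivity at the cost of temporal resolution, which is exactly the trade-off exploited when the wavelet is used as a virtual shutter.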
Additional insight into the behavior of the Morlet wavelet can be obtained by considering the frequency response of Morlet wavelets. Figure 8 shows the magnitude of the Fourier transform of 3 Morlet wavelets with center frequencies of 100, 500, and 700 Hz. From this plot we clearly see that each of these Morlet wavelets exhibits significant frequency selectivity. This means that Morlet wavelets can be used as virtual shutter functions to control the temporal content that is allowed to contribute to the formation of the frame. This capability could be particularly useful as a temporal filtering function when forming imagery, which can be useful for applications such as passive digital stroboscopy. The structure/width of the main lobe and side lobes provides insight into the frequency selectivity and sensitivity of the Morlet wavelet virtual shutter function.

Digital coded exposure for emulating the shutter of a conventional camera
We now demonstrate the use of digital coded exposure to form frames in a manner that emulates the shutter-based nature of traditional cameras using event-driven imagery. To demonstrate the formation of conventional frames, event-driven imagery data of the rotary-translation motion of the piston in a high-pressure air compressor was captured (figure 9). The event-driven imagery data was captured using the DVXplorer [26]. In addition, video of the pump/piston during the operating condition was captured using a conventional imager at 30.06 frames per second. The rotational frequency of the pump has been measured to be ∼5.37 to 5.38 Hz during operation. Figure 10 shows 4 frames of the conventional video that illustrate the state of the piston as the pump/piston undergoes a single complete rotation. Now consider the event-driven data. Figure 11 provides a 2D snapshot of the events appearing in a specified duration of time. In all event-driven imagery data shown, positive polarity events are denoted as green, and negative polarity events are denoted as red. In this case, only the piston is moving in the field of view of the imager, so the event-driven imager primarily generates events associated with the piston motion. Spurious events can also be seen across the image and these are primarily the result of noise. It is possible to roughly make out the silhouette of the piston from this data. Figure 12 shows an example of a vertical slice of the event-driven imagery data that corresponds to a horizontal location that passes through the path of the moving piston. The most striking feature of these events is that they assume a periodic pattern. The emergence of the periodic pattern is a consequence of the repeating, rotational movement of the piston/pump.
Next, consider the process of forming conventional frames from these events using a boxcar virtual shutter function/coded exposure (figure 13). The effect of changing the start and stop time of the 'open-shutter' portion of the virtual boxcar on the formation of the final frame is demonstrated.
In order to form conventional frames using the boxcar virtual shutter function it is necessary to specify a period of time associated with each frame, as well as the start and stop time of the 'open-shutter' portion of the boxcar function. For this demonstration, the period of the frames is set to correspond to the rotational frequency of the pump, resulting in each frame capturing a single complete rotation of the pump. Next, the effect that changing the start time of the open portion of the boxcar function has on the formation of the resulting frame is investigated. Figure 15 demonstrates the effect of changing the start time of the boxcar function open time while keeping the width of the boxcar function constant at 10% of the period of the frame. The effect on the resulting image is that the mechanical linkage/piston is observed at different phases of its circular path. This result is in line with theory, which would suggest that when taking images of periodic phenomena at a frequency matching the frequency of the phenomena, changing the phase angle driving the triggering of the image capture should result in changing the phase of the periodic phenomena that are observed. In figure 16, frames are formed using boxcar functions with a range of open-time ratios; the time period over which these images are formed corresponds to the frequency of rotation of the compressor piston. These images reinforce that as the ratio of the time the boxcar function is in the open position increases, the fraction of the total cycle of the motion that experiences motion blur also increases. Also, changing the phase angle at which an image is captured changes the phase of the portion of the periodic phenomena that is observed. Finally, in figure 16 it is also observed that as the ratio of the duration the boxcar function is in the open position increases, the number of events that contribute to the formation of the final image also increases.
This observation is analogous to the relationship between exposure time and the number of photons that contribute to the formation of an image captured using a traditional camera.
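The phase-selection behavior of shifting the boxcar open window (figure 15) can be illustrated with synthetic timestamps; the event stream and helper below are illustrative assumptions rather than the compressor data:

```python
import numpy as np

def boxcar_select(timestamps, frame_period, start_frac, width_frac):
    """Keep only events whose phase within the frame period falls inside a
    boxcar open window of width width_frac starting at start_frac."""
    phase = (timestamps % frame_period) / frame_period
    return timestamps[(phase >= start_frac) & (phase < start_frac + width_frac)]

period = 1e6 / 5.38                               # rotation period in microseconds
rng = np.random.default_rng(0)
ts = np.sort(rng.uniform(0, 10 * period, 5000))   # events over 10 rotations

early = boxcar_select(ts, period, start_frac=0.0, width_frac=0.1)
late = boxcar_select(ts, period, start_frac=0.5, width_frac=0.1)
```

Each selection keeps events from a different 10% slice of the rotation, i.e. a different phase of the piston's path, with no change to the captured data itself.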

Morlet wavelet digital coded exposure for temporal pattern extraction
We now demonstrate the ability to use a Morlet wavelet virtual shutter function to selectively exclude temporal content captured with an event-driven imager. Consider the case where a DVS128 event-driven imager is used to observe 3 static, blinking LEDs as shown in figure 17. One LED blinks at 100 Hz, another blinks at 500 Hz, and the last blinks at 700 Hz. A subset of the event data captured using the DVS128 of the three blinking LEDs is provided in figure 18. In this plot the difference in frequency between the three LEDs is clearly observable in the spatio-temporal pattern of the events. Figure 19 shows a naively-formed frame from the event data. This frame is formed simply by summing the contributions from all the events over a given time period and scaling the result to have a maximum value of 255. In this image it is possible to observe a contribution from all three LEDs to the formation of the final image. Now, we take the event data captured from the 3 blinking LEDs and apply the Morlet wavelet coded exposures associated with the frequency responses shown in figure 8. In this case there are three different coded exposures. The sinusoidal components of the Morlet wavelet coded exposures have frequencies of 100, 500, and 700 Hz, which match the frequencies of the blinking LEDs. It is anticipated that each of these Morlet wavelet coded exposures should have frequency selectivity for the blinking LED of corresponding frequency. The 3 Morlet wavelet virtual shutter functions are then applied separately to the event data captured from the blinking LEDs and a frame is formed for each virtual shutter function. The resulting frames can be found in figure 20.
An examination of the frames formed using the Morlet wavelet virtual shutter functions reveals the ability of the Morlet wavelet to selectively allow temporal frequency content of the events to contribute to the formation of the final frame. Each frame primarily consists of a contribution in energy from only one LED. This is particularly true for the 100 and 500 Hz cases, where only energy from the 100 and 500 Hz LEDs is visually perceptible. In the 700 Hz case the 700 Hz LED is most prominent; the presence of energy from the 100 Hz LED is also visually perceptible, though far less so than the 700 Hz LED. As expected, the coded exposure virtual shutter function is able to filter the temporal frequency content that is allowed to contribute to the formation of the final frame. For more information on the sensitivity and frequency selectivity of the Morlet wavelet virtual shutter functions used to perform this analysis, please see figure 8.
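This frequency selectivity, including a residue of harmonic leakage between LEDs, can be reproduced in a small synthetic sketch. The alternating-polarity LED event model and all names below are illustrative assumptions, not the DVS128 data:

```python
import numpy as np

def morlet(t, f, h):
    """Complex Morlet wavelet (FWHM form): carrier at f Hz, envelope FWHM h s."""
    return np.exp(2j * np.pi * f * t) * np.exp(-4 * np.log(2) * t**2 / h**2)

def led_events(f, pixel, duration=1.0):
    """Crude LED model: alternating +1/-1 events at the on/off edges of an
    LED blinking at f Hz."""
    times = np.arange(0.0, duration, 1.0 / (2 * f))
    return [(t, pixel, (-1) ** k) for k, t in enumerate(times)]

def coded_frame(events, C):
    """Per-pixel magnitude of the coded-exposure sum of polarity * C(t)."""
    acc = {}
    for t, pixel, polarity in events:
        acc[pixel] = acc.get(pixel, 0.0) + C(t) * polarity
    return {pixel: abs(v) for pixel, v in acc.items()}

events = (led_events(100.0, "led100") + led_events(500.0, "led500")
          + led_events(700.0, "led700"))
shutter = lambda t: morlet(t - 0.5, f=500.0, h=0.05)   # select the 500 Hz LED
frame = coded_frame(events, shutter)
```

Here the 500 Hz wavelet strongly selects the 500 Hz LED, while a smaller residue survives from the 100 Hz LED because a square-wave blink contains odd harmonics (its 5th harmonic lands at 500 Hz), analogous to the 100 Hz leakage observed in the 700 Hz frame of figure 20.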

Wavelet digital coded exposure for spatio-temporal pattern extraction for digital stroboscopy
We now demonstrate the use of the Morlet wavelet as a virtual shutter function for implementing stroboscopy from event-based imaging without any additional strobe light hardware. Event-based imagery was captured of the vertical, aluminum cantilever beam shown in figure 21. The cantilever beam was induced to vibrate by providing an impact with a modal hammer at the bottom portion of the cantilever beam. Prior experiments have shown that providing an impact excitation at the bottom of this beam results in the excitation of the first three vibration modes of the beam [28]. Next, three Morlet wavelet virtual shutter functions were formed. The sinusoidal frequencies of the three Morlet wavelet functions were chosen to match the first three natural frequencies of vibration of the beam (figure 22). The Fourier transform of the three Morlet wavelet virtual shutter functions can be seen in figure 23. The framerate used is 1 Hz, so each frame is formed using events captured over a 1 s period. The resulting frames formed using the Morlet wavelet virtual shutter functions can be found in figure 24. In these three frames it is possible to observe the individual mode shapes for the first three modes of vibration of a cantilevered beam, even though the beam was excited by an impulse modal hammer excitation which simultaneously excites all three modes of vibration. Furthermore, it must be stressed that these three frames are all formed using the same set of event data in post processing. A conventional stroboscopy technique would require 3 different measurements to obtain comparable frames for each of the three vibration modes.

Discussion
The main contribution of this paper is the development of a digital approach to the coded exposure that can be applied to the formation of frames from event-driven imagery. The proposed technique provides a physically-motivated approach to specifying the temporal content of the event data that contributes to the formation of a frame. In this work the capabilities of the digital coded exposure were demonstrated in the context of rotating, blinking, and vibrating phenomena. Digital coded exposures can manipulate the motion blur properties exhibited by a frame entirely in post processing. Digital coded exposures can also be used to selectively filter phenomena as well as to perform stroboscopic functions.
The frames formed using the techniques presented in this work can be fed to conventional frame-based image and video processing algorithms. In addition, different coded exposures/virtual shutter functions could be applied on a pixel-by-pixel basis. One could also envision using a variation of this technique to do phase-coherent averaging across frames of periodic phenomena. This would be a new form of long-exposure computational photography in which the exposure is captured only during a particular phase. The technique presented in this work could also be applied to Quanta image sensor data for photon-limited imaging.
The work presented here helps inform the use of event-based imagers for engineering applications where there is a need to enable persistent optical observations of phenomena that potentially exhibit very fast dynamics. These applications include structural health monitoring, machinery monitoring, structural dynamics identification, and monitoring of additive manufacturing processes. Typically these applications would require the use of high-speed imagers, but high-speed imagers tend to consume a lot of power, generate large amounts of data, overheat (resulting in failures), and have limited dynamic range. These limitations make conventional high-speed imagers infeasible for many applications. However, event-driven imagers can mitigate many of these limitations and hold potential for many industrial engineering applications. Furthermore, event-based imagers often have data rates that are amenable to applications such as online control of manufacturing processes. The development of a technique for forming frames in a principled manner that specifies the temporal content contributing to the formation of the final frames will help ease the transition of event-based imagers to industrial engineering applications.

Conclusion
In this work we present the framework for a digital coded exposure for event-driven imagery. Fundamentally, the concept of the digital coded exposure provides a means to form images from event-driven imagery in a manner that allows for careful control of the temporal dynamic content that is allowed to contribute to the formation of the final image. The digital coded exposure framework allows the user to form images in a manner similar to those that would be captured using a traditional imager with a mechanical shutter and exposure time. As a result, the user can control the motion blur properties of the final image in post processing. However, the digital coded exposure framework also opens up many additional opportunities for frame formation. The digital coded exposure can assume positive/negative, real/imaginary, and floating point values. This opens up the possibility to perform industrially relevant functions such as stroboscopy without any special illumination requirements. Because the digital coded exposure is applied in post-processing, this framework also allows for the generation of multiple images with different motion blur characteristics from the same event-driven imager measurement. It is also possible to imagine capturing in-phase and quadrature-type measurements from the event-driven imagery. Furthermore, the digital coded exposure can be applied on a per-pixel basis, so different spatial regions of the final image can be sensitive to different temporal dynamics. The proposed framework naturally couples with techniques that have been developed for reconstructing per-pixel light intensity time-series from events. The proposed technique could facilitate frame interpolation and up-conversion. The use of the coded exposure also strengthens the ability to tie event-based imagery data to widely-used frame-based image processing pipelines.
The use of digital coded exposures to control the temporal dynamics that are allowed to contribute to image formation opens up many interesting possibilities for the use of event-driven imagery in computational photography applications: for example, it may be possible to develop versions of a spatio-temporal image/video pyramid for multi-scalar, multi-rate signal processing algorithms.

Ethical statement
Not applicable.