DVS-Voltmeter: Stochastic Process-Based Event Simulator for Dynamic Vision Sensors

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Recent advances in deep learning for event-driven applications with dynamic vision sensors (DVS) primarily rely on training over simulated data. However, most simulators ignore various physics-based characteristics of real DVS, such as the fidelity of event timestamps and comprehensive noise effects. We propose an event simulator, dubbed DVS-Voltmeter, to enable high-performance deep networks for DVS applications. DVS-Voltmeter incorporates fundamental physical principles, namely (1) voltage variations in a DVS circuit, (2) randomness caused by photon reception, and (3) noise effects caused by temperature and parasitic photocurrent, into a unified stochastic process. With this insight into sensor design and physics, DVS-Voltmeter generates more realistic events from high frame-rate videos. Qualitative and quantitative experiments show that the simulated events closely resemble real data. Evaluations on two tasks, i.e., semantic segmentation and intensity-image reconstruction, indicate that neural networks trained with DVS-Voltmeter generalize favorably to real events compared with state-of-the-art simulators.
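The stochastic-process view above can be made concrete with a small sketch. Under the illustrative assumption that a pixel's voltage drifts toward its contrast threshold like Brownian motion with drift, the waiting time until the next event is inverse-Gaussian distributed; the sampler below draws such first-hitting times with the classic Michael-Schucany-Haas multiple-roots transformation. The parameters `mu` and `lam` are hypothetical stand-ins for the drift- and noise-derived quantities a simulator would fit per sensor; this is not the paper's exact model.

```python
import math
import random

def sample_inverse_gaussian(mu, lam, rng=random):
    """Draw one variate from IG(mu, lam) via the
    Michael-Schucany-Haas multiple-roots transformation."""
    v = rng.gauss(0.0, 1.0)
    y = v * v
    # Smaller root of the quadratic whose roots both map to y.
    x = mu + (mu * mu * y) / (2.0 * lam) \
        - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + mu * mu * y * y)
    # Accept x with probability mu / (mu + x); otherwise take the other root.
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x

def event_timestamps(t0, mu, lam, n, rng=random):
    """Accumulate n inter-event waiting times into absolute timestamps."""
    t, out = t0, []
    for _ in range(n):
        t += sample_inverse_gaussian(mu, lam, rng)
        out.append(t)
    return out
```

Since the mean of IG(mu, lam) is `mu`, the average event rate of this toy pixel is `1/mu`; a larger `lam` (less diffusion relative to drift) concentrates the timestamps more tightly around that mean.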

S. Lin and Y. Ma contributed equally.


Notes

  1. Equation (7) uses the first-order Taylor approximation \( \ln I(t) - \ln I(t_0) \approx \frac{1}{I(t)} \Delta I(t) \).
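The quality of this approximation is easy to check numerically. The snippet below, with made-up intensity values chosen purely for illustration, compares the exact log-intensity difference against the first-order term for a 1% intensity change.

```python
import math

I0, I = 100.0, 101.0              # hypothetical intensities at t_0 and t
exact = math.log(I) - math.log(I0)  # ln I(t) - ln I(t_0)
approx = (I - I0) / I               # first-order term: Delta I(t) / I(t)
# For a 1% change the two agree to within about 1e-4.
print(exact, approx)
```

The error grows roughly quadratically with the relative intensity change, so the linearization is only appropriate between closely spaced frames, which is exactly the high frame-rate regime the simulator assumes.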


Acknowledgement

This work was supported in part by the Ministry of Education, Republic of Singapore, through its Start-Up Grant and Academic Research Fund Tier 1 (RG137/20).

Author information

Correspondence to Bihan Wen.

Electronic supplementary material

Supplementary material 1 (PDF 17673 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lin, S., Ma, Y., Guo, Z., Wen, B. (2022). DVS-Voltmeter: Stochastic Process-Based Event Simulator for Dynamic Vision Sensors. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13667. Springer, Cham. https://doi.org/10.1007/978-3-031-20071-7_34

  • DOI: https://doi.org/10.1007/978-3-031-20071-7_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20070-0

  • Online ISBN: 978-3-031-20071-7

  • eBook Packages: Computer Science (R0)
