
Reduced Precision Research of a GAN Image Generation Use-case

  • Conference paper
  • In: Pattern Recognition Applications and Methods (ICPRAM 2021, ICPRAM 2022)

Abstract

In this research, a deep convolutional Generative Adversarial Network (GAN) model is quantized post-training to reduced-precision arithmetic for a complex High Energy Physics (HEP) use case. The work is motivated by the aim of decreasing model size and computing time, and thereby the hardware resources required for future Large Hadron Collider (LHC) detector simulations at CERN. At the same time, the detector simulations must maintain the highest possible accuracy so that the measured physics results remain interpretable. The quantized model is therefore not only analyzed in detail in terms of hardware resource consumption but also evaluated comprehensively in terms of the achieved physics accuracy. We report that the quantized model achieves a 3.0x speed-up over the initial model on modern CPUs. Furthermore, we investigate several new physics accuracy metrics to demonstrate that the accuracy does not decrease significantly due to the quantization process. Reduced-precision computing is already well studied for classification problems, but this is not the case for more complex image generation problems such as the detector simulations in our use case. The quantization is performed in an iterative process using the Intel Neural Compressor, which automatically quantizes only those parameters of the neural network that do not degrade the model's accuracy with respect to a predefined accuracy metric. In our research, we quantize the GAN model post-training from the 32-bit format down to the 8-bit format.
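The 32-bit-to-8-bit conversion described in the abstract can be illustrated with a minimal sketch of symmetric per-tensor quantization, a common post-training scheme for mapping FP32 tensors to int8. This is an illustration of the general technique under assumed conventions, not of Neural Compressor's internals: the tool selects the actual scheme (symmetric or asymmetric, per-tensor or per-channel) during its accuracy-driven tuning loop, and the function names below are hypothetical.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map FP32 values to int8
    using a single scale derived from the tensor's maximum magnitude."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values from the int8 representation."""
    return q.astype(np.float32) * scale

# Example tensor: the largest magnitude (1.2) maps exactly to 127,
# all other values incur at most half a quantization step of error.
x = np.array([-0.8, 0.0, 0.31, 1.2], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
err = np.max(np.abs(x - x_hat))
```

In an accuracy-aware flow such as the one described above, only layers whose quantization error keeps the predefined accuracy metric within tolerance would be converted to int8; the remaining layers stay in FP32.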


Acknowledgements

This work has been sponsored by the Wolfgang Gentner Programme of the German Federal Ministry of Education and Research.

Author information

Correspondence to Florian Rehm.


Copyright information

© 2023 Springer Nature Switzerland AG

About this paper

Cite this paper

Rehm, F., Saletore, V., Vallecorsa, S., Borras, K., Krücker, D. (2023). Reduced Precision Research of a GAN Image Generation Use-case. In: De Marsico, M., Sanniti di Baja, G., Fred, A. (eds) Pattern Recognition Applications and Methods. ICPRAM 2021, ICPRAM 2022. Lecture Notes in Computer Science, vol 13822. Springer, Cham. https://doi.org/10.1007/978-3-031-24538-1_1

  • DOI: https://doi.org/10.1007/978-3-031-24538-1_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-24537-4

  • Online ISBN: 978-3-031-24538-1

  • eBook Packages: Computer Science (R0)
