
Diffusion Models for Document Image Generation

  • Conference paper
Document Analysis and Recognition - ICDAR 2023 (ICDAR 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14189)

Abstract

Image generation has received wide attention in recent times; however, despite advances in image generation techniques, document image generation, which has wide industrial application, has remained largely neglected. Previous research on structured document image generation relies on adversarial training, which is prone to mode collapse and over-fitting and therefore yields lower sample diversity. Diffusion models have since surpassed adversarial models on both conditional and unconditional image generation. In this work, we propose diffusion models for unconditional and layout-controlled document image generation. The unconditional model achieves a state-of-the-art FID of 14.82 for document image generation on DocLayNet. Furthermore, our layout-controlled document image generation models beat the previous state of the art in image fidelity and diversity. On the PubLayNet dataset, we obtain an FID score of 15.02; on the more challenging DocLayNet dataset, we obtain an FID score of 20.58 at \(256 \times 256\) resolution for conditional image generation.
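The models described in the abstract build on the standard denoising diffusion framework (Ho et al., 2020), in which training corrupts an image with Gaussian noise in closed form and a network learns to reverse the corruption. As a minimal sketch of that forward-noising step (not the authors' code; `forward_diffuse` and the toy page image are illustrative assumptions):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t is the cumulative product of (1 - beta)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
# Toy "document image": a 256x256 grayscale page, pixel values scaled to [-1, 1].
x0 = np.ones((256, 256)) * 0.8
betas = np.linspace(1e-4, 0.02, 1000)  # common linear noise schedule
x_t = forward_diffuse(x0, t=500, betas=betas, rng=rng)
```

During training, a denoising network is given `x_t` and the timestep `t` and is optimized to predict the added noise; layout-controlled variants additionally condition this network on a layout map of the page.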



Acknowledgement

We want to thank Arooba Maqsood for her assistance in the writing process and for her helpful comments and suggestions throughout the project.

Author information

Correspondence to Noman Tanveer or Faisal Shafait.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tanveer, N., Ul-Hasan, A., Shafait, F. (2023). Diffusion Models for Document Image Generation. In: Fink, G.A., Jain, R., Kise, K., Zanibbi, R. (eds) Document Analysis and Recognition - ICDAR 2023. ICDAR 2023. Lecture Notes in Computer Science, vol 14189. Springer, Cham. https://doi.org/10.1007/978-3-031-41682-8_27

  • DOI: https://doi.org/10.1007/978-3-031-41682-8_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-41681-1

  • Online ISBN: 978-3-031-41682-8

  • eBook Packages: Computer Science (R0)
