Poster · DOI: 10.1145/3626253.3635608 · SIGCSE Conference Proceedings

Understanding the Role of Temperature in Diverse Question Generation by GPT-4

Published: 15 March 2024

ABSTRACT

We conduct a preliminary study of the effect of GPT-4's temperature parameter on the diversity of the questions it generates. We find that higher temperature values lead to significantly greater diversity, with different temperatures exposing different types of similarity between generated sets of questions. We also demonstrate that diverse question generation is especially difficult for questions targeting the lower levels of Bloom's Taxonomy.
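The mechanism behind this finding is standard temperature sampling: a language model's next-token logits are divided by the temperature before the softmax, so higher temperatures flatten the distribution and spread probability mass across more candidates. The self-contained sketch below (not from the poster; the logits and the distinct-outcome diversity proxy are illustrative assumptions, not the study's GPT-4 setup or metric) shows why higher temperatures tend to yield more varied samples:

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; higher temperature flattens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_diversity(logits, temperature, n=1000, seed=0):
    """Crude diversity proxy: fraction of possible outcomes seen in n samples."""
    rng = random.Random(seed)
    probs = softmax_with_temperature(logits, temperature)
    outcomes = rng.choices(range(len(logits)), weights=probs, k=n)
    return len(set(outcomes)) / len(logits)

# Hypothetical next-token scores with one dominant candidate.
logits = [5.0, 3.0, 2.0, 1.0, 0.5]
low = sample_diversity(logits, temperature=0.2)   # greedy-like: few outcomes
high = sample_diversity(logits, temperature=1.5)  # flatter: more outcomes
print(low, high)
```

At a temperature of 0.2 the dominant logit captures nearly all of the probability mass, so repeated sampling keeps producing the same outcome; at 1.5 the tail candidates appear regularly. The same effect, applied token by token, is what drives the diversity differences in generated questions.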

References

  1. Benjamin S Bloom, Max D Englehart, Edward J Furst, Walker H Hill, David R Krathwohl, et al. 1956. Taxonomy of educational objectives, handbook I: the cognitive domain. New York: David McKay Co.
  2. Paul Denny, Hassan Khosravi, Arto Hellas, Juho Leinonen, and Sami Sarsa. 2023. Can We Trust AI-Generated Educational Content? Comparative Analysis of Human and AI-Generated Learning Resources. arXiv:2306.10509 [cs.HC]
  3. Jacob Doughty, Zipiao Wan, Anishka Bompelli, Jubahed Qayum, Taozhi Wang, Juran Zhang, Yujia Zheng, Aidan Doyle, Pragnya Sridhar, Arav Agarwal, Christopher Bogart, Eric Keylor, Can Kultur, Jaromir Savelka, and Majd Sakr. 2024. A Comparative Study of AI-Generated (GPT-4) and Human-crafted MCQs in Programming Education. In Proceedings of the 26th Australasian Computing Education Conference (ACE '24). Association for Computing Machinery, New York, NY, USA, 114--123. https://doi.org/10.1145/3636243.3636256
  4. J. Richard Landis and Gary G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics 33, 1 (1977), 159--174. http://www.jstor.org/stable/2529310
  5. Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, and Juho Leinonen. 2023. Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto, ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 931--937. https://doi.org/10.1145/3545945.3569785
  6. Pranjal Dilip Naringrekar, Ildar Akhmetov, and Eleni Stroulia. 2023. Generating CS1 Coding Questions Using OpenAI. In Proceedings of the 25th Western Canadian Conference on Computing Education (Vancouver, BC, Canada) (WCCCE '23). Association for Computing Machinery, New York, NY, USA, Article 11, 2 pages. https://doi.org/10.1145/3593342.3593348
