
Study Protocol: ChatGPT-generated versus expert-written answers to frequently asked questions about diabetes - an e-survey among all employees of a Danish diabetes center

Version 4 2023-02-13, 09:02
Version 3 2023-02-09, 09:03
Version 2 2023-01-31, 12:19
Version 1 2023-01-23, 14:14
online resource
posted on 2023-02-09, 09:03 authored by Adam Hulman, Ole Lindgård Dollerup, Jesper Friis Mortensen, Kasper Norman, Henrik Støvring, Troels Krarup Hansen


The aim of the study was to investigate ChatGPT's knowledge in the diabetes domain, specifically its responses to questions that patients might ask about the disease, medication, diet and physical activity.

We hypothesized that participants (employees at a regional diabetes center, Steno Diabetes Center Aarhus), whose diabetes knowledge ranges from basic to expert, would not be able to distinguish answers written by humans from answers generated by AI in response to diabetes-related questions. Our secondary hypothesis was that employees with patient contact as caregivers, and those who had previously tried ChatGPT, might be better at identifying the AI-generated answers. The survey was developed in Danish.

The document is a study protocol that includes details reported according to the CHERRIES checklist for e-surveys.


UPDATE (31/01/2023): The actual survey (including the correct answers and comments on any misinformation) was uploaded in both Danish (the original version used for data collection) and English (a direct translation from Danish produced with ChatGPT, not used for data collection).


UPDATE (09/02/2023): The original prompts and the ChatGPT-generated answers (including edits) were uploaded to supplement the protocol (in Danish).

Funding

Steno Diabetes Center Aarhus is partly funded by a donation from the Novo Nordisk Foundation. The foundation had no role in the design of the study.
