Artificial Intelligence Discusses the Role of Artificial Intelligence in Translational Medicine


Like many people, I was completely taken in and simultaneously taken aback by the public release of ChatGPT (GPT stands for "generative pretrained transformer"), the natural language processing tool that allows one to have a personalized conversation with an artificial intelligence (AI) bot capable of providing detailed responses to questions (prompts) with uncanny speed. ChatGPT is built by OpenAI, a nonprofit San Francisco-based AI research and deployment company, whose mission is to "ensure that artificial general intelligence benefits all of humanity." OpenAI also released DALL-E 2, which is an AI art generator.
The development of user-friendly, versatile AI chatbots has been slow in coming. As noted by Kevin Roose, a technology columnist for The New York Times, the responses of earlier generations of AI chatbots were often restricted to narrow, well-defined tasks or questions. When the chatbot was asked to respond to questions or tasks outside of the preprogrammed AI comfort zone, the bot often "flailed."1 However, in recent years, the technology has improved substantially. ChatGPT is based on GPT-3.5, which is an AI text generator that uses Reinforcement Learning from Human Feedback to inform its language model. GPT-3.5 employed human AI trainers who engaged in conversations in which they played the roles of both the user and the AI assistant, to incorporate human responses into the machine-learned responses. Although this technology is not new to AI researchers, the release of ChatGPT by OpenAI was the first time that this type of technology became freely available to the public. Making the software publicly available allowed OpenAI to register over 1 million users, who then provided human feedback on a massive scale to further refine ChatGPT's natural language responses.
As an editor of a translational journal and as someone who has previously written about the potential perils of AI technology in health care delivery,2 I decided to take ChatGPT out for a test drive by prompting (ie, interviewing) the AI chatbot to engage in a point-counterpoint discussion about the role of AI in translational medicine. To accompany this translational perspectives article, I also generated an Andy Warhol-inspired AI picture of a female and a male translational scientist (Figure 1), which was created using a commercial AI content generator (Jasper) that uses the DALL-E 2 system developed by OpenAI.
The AI chatbot responses that follow were generated in response to my questions; they were copied verbatim and have not been edited for grammar, style, or content.
Manuscript received January 3, 2023; accepted January 3, 2023.

WHY WILL ARTIFICIAL INTELLIGENCE BE THE FUTURE OF TRANSLATIONAL MEDICINE?
Artificial Intelligence (AI) is rapidly becoming a key player in the field of translational medicine, and it is likely to play an increasingly important role in the future. Translational medicine is a rapidly growing field that aims to bridge the gap between basic research and clinical practice, by translating findings from the laboratory into practical applications that can improve patient care. AI is well-suited to this task, due to its ability to analyze large amounts of data quickly and accurately, and to identify patterns and trends that might not be apparent to humans.
One of the key ways in which AI is being used in translational medicine is in the analysis of large datasets.
In conclusion, AI is likely to play an increasingly important role in the future of translational medicine, due to its ability to analyze large datasets quickly and accurately, to identify patterns and trends, and to help develop personalized treatments and new drugs.
As AI technologies continue to advance, it is likely that we will see even more applications of AI in the field of translational medicine, which will have the potential to improve patient care and outcomes.

WHY WILL ARTIFICIAL INTELLIGENCE NOT BE THE FUTURE OF TRANSLATIONAL MEDICINE?
While Artificial Intelligence (AI) has been hailed as a promising new tool in the field of translational medicine, there are a number of reasons why it is unlikely to be the future of this field. Translational medicine is a rapidly growing field that aims to bridge the gap between basic research and clinical practice, by translating findings from the laboratory into practical applications that can improve patient care.
One of the main limitations of AI in translational medicine is that it is only as good as the data that it is given. If the data that is used to train AI algorithms is biased or incomplete, the algorithms will be biased or incomplete as well. This can lead to inaccurate or misleading results, which could have serious consequences for patient care. For example, if an AI algorithm is used to predict patient outcomes and the data that it is given is biased, it may not accurately predict outcomes for all patients.

FUNDING SUPPORT AND AUTHOR DISCLOSURES
The author has reported that he has no relationships relevant to the contents of this paper to disclose.