Generative AI

The term "generative AI" refers to computational techniques that are capable of generating seemingly new, meaningful content such as text, images, or audio from training data. The widespread diffusion of this technology, with examples such as Dall-E 2, GPT-4, and Copilot, is currently revolutionizing the way we work and communicate with each other. In this article, we provide a conceptualization of generative AI as an entity in socio-technical systems and provide examples of models, systems, and applications. Based on that, we introduce limitations of current generative AI and provide an agenda for Business & Information Systems Engineering (BISE) research. Different from previous works, we focus on generative AI in the context of information systems, and, to this end, we discuss several opportunities and challenges that are unique to the BISE community and make suggestions for impactful directions for BISE research.

Wow! That was fast! ChatGPT reached one million users in five days, much faster than any previous online tool.
OpenAI released ChatGPT to the public in November 2022, just four months before the writing of this article. In many ways, it seems premature to provide a set of resources for something that is still so new. Yet it is also necessary to do so. ChatGPT and its fellow generative AI applications are already being applied in a dizzying array of contexts, so leaders of innovation, R&D, and IT need to engage. I write this with the knowledge that many specific references in this article will be superseded by newer articles but with the confidence that they will help those who are interested to get started.

Where to start
The best place to start is likely with understanding what generative AI is and what it is not. In brief, generative AI uses a very large corpus of data (text, images, or other labelled data) to create, at the request of users, new versions of text, images, or predicted data. Much of the recent buzz has been around the large language models (LLMs) used by ChatGPT and Bing AI, and around image-generation models like DALL-E and Stable Diffusion. These generative AI tools let users create professional-sounding text and interesting images using an English-language prompt.
These applications, though much in the news, are only a part of generative AI. There are also applications closer to R&D in science and design. Autodesk has, for many years, incorporated features into its design software that use goals and constraints set by users to generate and test physical designs. Some of the tests include strength testing and modelling of thermal flows (see Autodesk.com's "Generative Design").
Moderna has applied some of the same principles to generate potential pharmaceutical molecules and to filter which of these should be explored in the laboratory. Competing in the Age of AI by Marco Iansiti and Karim Lakhani provides detail on this case study. It explains, in part, how Moderna was able to respond so quickly to the COVID-19 virus. You may also want to read my interview with Marco Iansiti, "Corporate Operating Models in the Age of AI," published in Research-Technology Management (RTM).
Generative AI has also been very successful in complex domains that can be explicitly modelled. Demis Hassabis discusses the topic in his IRI Medalist paper, "DeepMind: From Games to Scientific Discovery," also published in RTM. AlphaGo, one of the products of the lab (now owned by Alphabet, Inc.), learned to play and win Go using deep learning techniques trained through games with people. AlphaGo Zero, its successor, used generative AI: it became a world champion by generating moves for an opponent and playing against itself. Similar technology has been used, for example, to solve the complex biology problem of protein folding. There are many more resources that let you dig into the technology at whatever level of depth you desire.

Text Generation and Image Generation
ChatGPT and Bing AI are based on LLMs. Their underlying machine learning algorithms are trained on massive amounts of text and are designed to predict what the next word or phrase in a sentence or paragraph might be. They do not search and synthesize but use mathematical models to select the best next word. The prompt seeds the generation.
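The next-word mechanism can be illustrated with a toy bigram model. This is a deliberately simplified sketch of the idea, not how production LLMs work (they use deep neural networks over subword tokens, not word counts), and the corpus and function names below are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, prompt, length=5):
    """Greedily append the most frequent next word, seeded by the prompt."""
    out = prompt.split()
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no observed continuation; stop generating
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat and the cat ran")
print(generate(model, "the", length=3))  # prints: the cat sat on
```

The same principle scales up enormously in an LLM: the prompt seeds the generation, and each new word is selected from a statistical model of what tends to follow, not looked up from a source.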
In the case of images, generative algorithms use past images of cats or space ships or trees, together with labels about styles or schools of art, to generate images of, for example, "a cat flying a space ship over trees, in surrealist style." Again, the key is the training of immense amounts of tagged data to generate new images from an amalgam of related old images.
Much has been written about these systems in the past few months, in all kinds of popular media, most of it based on experiments that reveal both the potential and the limitations of the tools. Here are a few that I found interesting. New York Times technology columnist Kevin Roose wrote a now-famous column, "Bing's A.I. Chat: 'I Want to Be Alive,'" about his interaction with Bing's AI chatbot, Sydney. The dialog became strange and almost creepy, but it was illuminating about how these LLMs work.
Cade Metz, also of the New York Times, provides background on the underlying technology. In the article "Why Chatbots Sometimes Act Weird and Spout Nonsense," he notes that these tools are part of an evolution of capabilities. "Most people use neural networks every day," he observes. "It's the technology that identifies people, pets and other objects in images posted to internet services like Google Photos. It allows Siri and Alexa . . . to recognize the words you speak. And it's what translates between English and Spanish on services like Google Translate." The space is moving quickly. Bing's AI, which integrates ChatGPT with the Internet, brings significant advantages over ChatGPT alone. Ethan Mollick, in a post titled "Power and Weirdness: How to Use Bing AI," gives tips for framing prompts (and sequences of prompts) to get good results. He suggests asking Bing AI to do some Internet research as part of the project. We will learn over time about the idiosyncrasies of these tools, and they will learn in ways that make such accommodation less necessary on our part.
But bloggers beware! Zulie Rane, a tech writer, tried to use ChatGPT on a client assignment. Her essay, "I Paid a Professional to Edit a ChatGPT-Written Article - Hilarity Ensued," points out the many ways in which the work was below her professional standards. Experiments like these are exposing the limitations of ChatGPT but also the ways that humans can work with it to good effect.
Image generation tools seem more mature, perhaps because the results are inherently judged subjectively. Kevin Kelly, Wired Magazine's founding executive editor, wrote "Engines of Wow" about the current collection of image-generation AI tools like DALL-E (OpenAI), Stable Diffusion, Imagen (Google), Parti (Google), and Midjourney. Kelly paints an enchanting picture of augmented human intelligence enabled by these models and of how they will help our creativity. He notes something important: "(T)he best applications . . . are the result not of typing in a single prompt but of very long conversations between humans and machines."

experiments
Once you have some idea of how these tools work (and maybe even before you do), experiment with them. You will be amazed both at their capabilities and at their limitations. As I noted in "Almost Human," a recent editorial in the last issue of RTM, I first experimented with these tools by writing a four-paragraph essay comparing The Sun Also Rises with Babbitt (a high school assignment I once had). I thought that it did passably well, but my son, who teaches English to 10th graders, was not impressed.
Next, I tried creating a Christmas card using a combination of ChatGPT and DALL-E. The text was clichéd and lacking authenticity, but it was written in good English and might have worked on a Hallmark card. The image for "an outside in Christmas tree" created a much better design than I could have: creative and unpredictable.
My third experiment was to attempt to use ChatGPT to find a thread among the disparate articles published in RTM's March-April 2023 issue. The results were interesting but weak. I decided that the tool could be useful in creating a column but not in writing it.
Out of curiosity, for a fourth experiment, I asked ChatGPT, "What are the best articles written by Jim Euchner?" It gave me a list of five "notable articles." Here is the thing: I did not author any of the articles it cited (though they are the kind of articles I might have written). I thought I might at least have been cited in them, but I cannot even find these articles. They seem to be entirely made up. This is an example of what some people call ChatGPT's hallucinations.
That last experiment put a temporary damper on my enthusiasm for ChatGPT. It helped me to understand that ChatGPT generates text; it doesn't look up facts to instantiate that text. It really doesn't care. It reflects, in many cases, what might be true rather than what is factual. The fact that it writes so well makes it more believable than it should be.

use cases
A key question for those considering these tools is to understand what they are good for and what they are unable to do. At this point, you will have to figure some of this out for yourself through your own experiments.
ChatGPT itself makes bold claims as to its utility: "I am ChatGPT, a large language model trained by OpenAI. I am designed to understand natural language and generate human-like responses to text-based questions or prompts. I have been trained on a vast corpus of text data and can generate responses on a wide range of topics, including science, technology, business, entertainment, and more. I use machine learning algorithms and deep neural networks to understand the meaning behind text-based inputs and generate responses that are both informative and coherent [italics added]. My goal is to provide helpful and accurate information to users and assist them in their quest for knowledge."
ChatGPT's ambitions are clear. What is not clear, however, is the claim that it understands meaning (as we might understand meaning) or that its responses are "both informative and coherent" (at least not yet).
The potential applications of these tools are nevertheless broad, if they are used in the proper way. According to Andri Peetso, who created Conturata and teaches a seminar on the topic, ChatGPT has been used successfully
• To write routine correspondence (for example, from a doctor to appeal compensation decisions to insurance companies);
• To brainstorm characters or plot ideas for stories or headlines;
• To write code or to translate code from one language to another;
• To improve the grammar of written text;
• To summarize a longer article; and
• To write in the voice of a particular persona.

It runs into problems when
• It is asked to provide specific factual information (though Bing AI is attempting to address this);
• It is drawn into error by its user;
• It is anthropomorphized; or
• It is used as a blind substitute for a writing task.
Michael Chui, Robert Roberts, and Lareina Yee of McKinsey take a more macro perspective. In "Generative AI is here: How tools like ChatGPT could change your business," they discuss the potential uses of generative AI across business functions, including marketing and sales, operations, engineering, legal, and R&D. They provide an extensive list of potential use cases.
To get the most use out of these tools, people are beginning to develop skills in what Andri Peetso calls "prompt engineering." Good prompts get better results, and prompt sequences (frameworks) can be developed and reused. Peetso provides a list of useful prompt sequences at www.conturata.com/ai. He notes that job descriptions for "prompt engineers" are starting to appear.
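A reusable prompt sequence can be as simple as a parameterized template that is filled in with each new job. The sketch below is a hypothetical illustration of the idea; the template wording and field names are my own, not Peetso's.

```python
from string import Template

# A reusable prompt "framework": fixed structure, variable slots.
summary_prompt = Template(
    "You are an expert $role. Summarize the following article in "
    "$n bullet points for a $audience audience:\n\n$article"
)

# Fill the slots for one particular task; the template itself is reused.
prompt = summary_prompt.substitute(
    role="technology editor",
    n=3,
    audience="general business",
    article="Generative AI tools are spreading rapidly...",
)
print(prompt)
```

The value of such a framework is that the structure that produced good results once (role, output format, audience) is captured and repeated, rather than improvised with every prompt.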
All of these tools, at present, seem to work best in collaboration with people. That is the thread that runs through the success stories. Nicky Case wrote about this kind of collaboration several years ago in "How to Become a Centaur," a piece that chronicled the superiority of good chess players working with good computers over both world chess champions and chess supercomputers. Case made the point, at least for this instance, of the benefits of human/machine collaboration.

ethics
One of the first reactions of people upon using ChatGPT is that it will lead to cheating, whether at school or at work. It is certainly causing a reassessment of what it means to create. People have responded in diametrically opposed ways-by seeking to prevent unauthorized use of the tools or by embracing them.
The first school has created AI detectors, developed less to catch cheating students than to prevent bots from posting on the net and getting crawled by search engines. Andri Peetso teaches how to work around such detectors in order to make the text more human, but the AI detector group continues its quest and seeks to detect the workarounds. It is like Dr. Seuss' story of the Sneetches.
Ethan Mollick takes a different approach. He expects his students to use AI in their work but to disclose that they are doing so, including the prompts that they have used along the way. In his post on the topic, "All my classes suddenly became AI classes," he discusses how AI should be integrated into curricula.
Gideon Lichfield of Wired wrote, "How WIRED Will Use Generative AI Tools," which lays out how at least one publisher will and will not use tools like ChatGPT and what disclosure they will make to readers when they do.
Others have pointed out that these tools have residual biases, especially political biases. These seem to be embedded in the filters rather than in the LLMs themselves. Elon Musk, who was one of the original investors in OpenAI, is so concerned about this that he has announced his intent to build new AI tools that are not biased.
What's Next with Generative AI?
As Kevin Kelly has pointed out, all new technologies go through a cycle. Today many people see the limitations of these technologies more than they envision their eventual benefit. But the pace of innovation and the breadth of their applicability promise a significant future impact.
Some even see generative AI as defining a new era. One example is "ChatGPT Heralds an Intellectual Revolution," published in the Wall Street Journal and authored by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher. They believe that the core technology will "transform the human cognitive process as it has not been shaken up since the invention of printing," which is no small claim coming from these authors. It is time to explore the applicability of these tools in your industry.