What is Generative A.I.?
Generative A.I. refers to a type of artificial intelligence that, once trained on enormous amounts of data, can create new content, such as text, images, music, or code. When given a prompt or input, these systems draw on patterns learned from their training data and, through prediction and pattern recognition, can produce original content. Gen A.I. can also adjust and improve its responses based on new information, so it appears to be “intelligent” in that sense. For most of us, A.I. is, like other forms of technology, a tool, much like an online dictionary, calculator, or spell checker.
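To make the idea of prediction and pattern recognition concrete, here is a minimal sketch that uses the open-source Hugging Face transformers library and the small, freely available GPT-2 model (an assumption chosen for illustration; systems like ChatGPT rely on far larger proprietary models) to continue a prompt one predicted word at a time.

```python
# A minimal sketch of pattern-based text generation, assuming the
# Hugging Face "transformers" library and the small open GPT-2 model
# are installed (pip install transformers torch). Illustrative only;
# commercial systems such as ChatGPT use much larger models.
from transformers import pipeline

# Load a pre-trained language model that predicts likely next words
# based on patterns learned from its training data.
generator = pipeline("text-generation", model="gpt2")

# Give the model a prompt; it extends the text by repeatedly
# predicting plausible next tokens.
prompt = "Generative A.I. can help students"
result = generator(prompt, max_new_tokens=25, num_return_sequences=1)

print(result[0]["generated_text"])
```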
The big name in A.I.—or, more accurately, in Large Language Models (LLMs)—is currently ChatGPT, which can interpret and respond to human prompts with surprising speed and accuracy. ChatGPT is easy to use partly because it is conversational. Like a back-and-forth exchange with a friend or relative, ChatGPT can “remember” previous aspects of your conversation and generate a response that makes sense contextually. At the same time, the A.I. model cannot necessarily generate up-to-date or personalized responses. In fact, some of the information generated contains factual errors, called “hallucinations,” and perpetuates stereotypes and biases contained in the training data.
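That conversational “memory” is less mysterious than it sounds: each new request simply re-sends the earlier turns of the conversation, so the model can produce a reply that fits the context. The sketch below, written against the official OpenAI Python SDK, shows one way such an exchange might be wired up; the model name and prompts are illustrative placeholders, not a prescription.

```python
# A minimal sketch of a conversational exchange, assuming the official
# OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment. The model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# The "memory" is just the running list of prior turns, re-sent each time.
messages = [
    {"role": "user", "content": "Explain photosynthesis in one sentence."},
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up only makes sense because the earlier turns travel with it.
messages.append({"role": "user", "content": "Now rephrase that for a fifth grader."})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)

print(second.choices[0].message.content)
```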
The recent surge in generative A.I. models, exemplified by systems such as GPT for text and DALL-E for images, has sparked both excitement and concern across higher education, specifically around potential impacts on teaching and learning. As we explore the intersections of these technologies and higher education, keep in mind that generative A.I. is a rapidly evolving field, meaning its potential applications and implications will continue to expand.
Training Data: The vast collection of information used to teach an AI model. For generative AI, this often includes large datasets of text, images, or other media. The quality, diversity, and volume of training data significantly impact the model's performance and potential biases.
Generative A.I.: Artificial intelligence systems capable of creating new content, such as text, images, or audio, based on patterns learned from existing data. This term encompasses various technologies that can produce original outputs rather than just analyzing or categorizing existing information.
Large Language Model (LLM): A type of AI model trained on vast amounts of text data to understand and generate human-like language. LLMs form the backbone of many generative AI systems, enabling them to process and produce coherent text across various topics and styles.
Hallucination: In the context of AI, hallucination refers to when a generative model produces content that is factually incorrect or nonsensical, despite appearing plausible. This can occur when the model generates information beyond its training data or misinterprets the input prompt.
GPT (Generative Pre-trained Transformer): A specific type of language model architecture that has been pre-trained on a large corpus of text. GPT models, such as those developed by OpenAI, have demonstrated impressive capabilities in various language tasks and form the basis for many popular generative A.I. applications.
DALL-E: An artificial intelligence model developed by OpenAI that generates digital images from natural language descriptions. DALL-E can create original, realistic images and art from text descriptions, showcasing the capabilities of text-to-image generation in generative AI.
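As a concrete illustration of text-to-image generation, the sketch below requests a single image from DALL-E 3 through the OpenAI Python SDK; the prompt and image size are placeholders, not a recommendation.

```python
# A minimal sketch of text-to-image generation, assuming the official
# OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment. The prompt and size are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Describe the desired image in natural language; the service returns
# a URL pointing to the generated picture.
response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of students collaborating in a sunny library",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)
```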