Limitations and Risks
Although the rapid advancement and widespread adoption of generative A.I. bring innovative new ideas and excitement, these technologies come with challenging ethical and societal risks and limitations. These considerations highlight the need for us to approach and use genA.I. tools thoughtfully, responsibly, and critically.
Ethical and Social Concerns
Biases
The training data for generative A.I. models come from many sources. GenA.I. models can amplify or perpetuate biases present in their training data, potentially leading to problematic, unfair, or discriminatory outputs.
Environment
Although exact figures are not known, scientists estimate that training, fine-tuning, and running large A.I. models require significant computational resources, which can be energy intensive and leave a substantial carbon footprint.
Equity and access
There may be disparities in who can access and benefit from generative A.I. tools, potentially exacerbating existing inequalities. Although some tools are free, paid versions offer significantly better functionality and performance.
Labour practices
The development and use of A.I. may displace certain jobs or change labour markets in ways that affect employment. Beyond this, some A.I. developers have outsourced to low-wage workers the task of sifting through toxic and explicit content, without consideration for their psychological well-being.
Legal and Privacy Issues
Copyright
There are ongoing legal and ethical questions about the use of copyrighted material in training data and, thus, the ownership of A.I.-generated content.
Data privacy and security
The use of large datasets for training A.I. models raises concerns about the protection of personal information and data security. Although some models allow you to limit how much of your data is used to enhance or further train the model, collecting your (private) data is often the default.
Information Integrity and Reliability
Academic integrity
In higher education settings, generative A.I. poses new questions about what constitutes students’ original work and how to mitigate plagiarism and other forms of academic dishonesty. How much assistance from an A.I. model is too much help, crossing over into an academic integrity offence?
Disinformation
The ability of generative A.I. to create realistic and plausible text, video, audio, and code makes false, biased, or politically motivated media faster and easier to produce.
Reliability and accuracy
Generative A.I. will always provide a response to your prompt, but since it doesn’t “know” anything, the output can be incorrect, nonsensical, or simply made up. These hallucinations require careful human oversight and verification.
Concerns around copyright and academic integrity affect how instructors design assessments in the age of genA.I. Conversations around genA.I. with students should include information on biased training data, hallucinations, the environmental impacts of A.I., and questionable labour practices, so that students can make informed decisions about whether they want to use genA.I. Because genA.I. intersects with teaching and learning in unique ways, we will explore these risks and limitations in more detail in future chapters.
In the next chapter, we’ll go into more detail on intellectual property, data privacy and security, and other risks and limitations disproportionately impacting higher education.
As we explore the potential of generative A.I. in teaching and learning, it’s crucial to critically examine its broader implications. While we’ve highlighted some concrete risks, like academic integrity and environmental impacts, there are deeper questions about how these tools might shape human cognition and creativity.
Will genA.I. use impact our capacity for independent, critical thought?
Will widespread use of genA.I. take away our originality and creativity?
What skills or knowledge do you believe will remain uniquely human?
Definition
In the context of A.I., a hallucination occurs when a generative model produces content that is factually incorrect or nonsensical, despite appearing plausible. This can happen when the model generates information beyond its training data or misinterprets the input prompt.