Module Title: AI Limitations, AI Safety, and the Future of AI
Module Overview Page
Module Introduction
This unit covers the technical and safety limitations of modern
deep-learning-based AI systems, such as chatbots and image generators.
You’ll learn what sorts of problems these systems struggle to solve and
how they can be dangerous when used improperly. We’ll also discuss
possible future directions for advances in AI.
Module Learning Outcomes
After successful completion of this module, you should be able to do
the following:
Demonstrate the technical limitations of modern deep-learning-based
AI (CLO 1)
Why is it said that deep learning systems are “black boxes”?
What are AI hallucinations?
What kinds of prompts do text-generative AI systems often fail to
answer correctly?
Why does the left-to-right nature of autoregressive generation cause
these models to struggle with certain kinds of prompts?
For what kinds of prompts do image-generative AI systems often fail
to generate correct images?
Explain the safety limitations of modern deep-learning-based AI (CLO
1)
What is AI jailbreaking?
What are some safety concerns with misplacing trust in AI systems
that hallucinate?
What is AI alignment?
What is an AGI, and what are the potential risks of an unaligned
AGI?
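One of the outcome questions above concerns the left-to-right nature of autoregressive generation. As a toy illustration (not part of the course materials; the next-token probability table below is made up for the example, whereas real systems learn these probabilities from data), left-to-right generation can be sketched as:

```python
# Toy sketch of left-to-right autoregressive generation.
# The "model" is a hypothetical hand-written table mapping a prefix of
# tokens to a distribution over the next token.
next_token_table = {
    (): {"the": 1.0},
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 1.0},
    ("the", "dog"): {"ran": 1.0},
}

def generate(max_tokens=3):
    """Greedily pick the most likely next token given the prefix so far.

    Each choice is final: once a token is emitted, the model cannot go
    back and revise it, even if a later token would have made an earlier
    alternative a better fit. This is the left-to-right constraint the
    outcome question refers to.
    """
    prefix = ()
    for _ in range(max_tokens):
        options = next_token_table.get(prefix)
        if not options:
            break
        token = max(options, key=options.get)
        prefix = prefix + (token,)
    return list(prefix)

print(generate())  # → ['the', 'cat', 'sat']
```

Because the model commits to each token before seeing what comes after it, prompts whose correct answer depends on information that only appears "later" in the output (for example, planning the end of a rhyme or a constrained puzzle) can be difficult for purely left-to-right generation.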