What You Need to Know Before You Start
Starts 6 July 2025 01:33
Ends 6 July 2025
Superintelligent Agents Pose Catastrophic Risks - Safety-Guaranteed LLMs
Simons Institute
1 hour 12 minutes
Optional upgrade available
Not Specified
Progress at your own speed
Free Video
Overview
Explore the catastrophic risks posed by superintelligent AI agents with expert Yoshua Bengio. In this discussion, discover safer alternatives such as "Scientist AI," an approach focused on explaining rather than acting, which aims to accelerate scientific progress while avoiding the dangers associated with agentic AI systems.
Syllabus
- Introduction to Superintelligent AI
- Understanding Catastrophic Risks
- Safety in AI Design
- Alternatives to Superintelligent Agents
- Case Study: Yoshua Bengio's Proposals
- Agency and Action in AI
- Techniques for Safety-Guaranteed LLMs
- Accelerating Scientific Progress with AI
- Future Directions in AI Safety Research
- Conclusion
Subjects
Computer Science