What You Need to Know Before You Start
Starts 3 July 2025 19:39
Ends 3 July 2025
Superintelligent Agents Pose Catastrophic Risks - Safety-Guaranteed LLMs
Simons Institute
1 hour 14 minutes
Optional upgrade available
Progress at your own speed
Free Video
Overview
Join us in examining the catastrophic risks posed by superintelligent AI agents. Drawing on insights from Yoshua Bengio, this discussion highlights an alternative approach to AI development: the 'Scientist AI.' This non-agentic model is designed to prioritize explanation over action, accelerating scientific research while mitigating the dangers that agency introduces into AI systems.
This event offers valuable insights for anyone interested in artificial intelligence and its implications for both scientific progress and human safety.
Engage with the content on YouTube and explore how this new paradigm in AI can help secure a safer technological future.
Syllabus
- Introduction to Superintelligent AI
- Catastrophic Risks of Superintelligent Agents
- Overview of AI Safety Research
- Yoshua Bengio's "Scientist AI" Concept
- Designing Safety-Guaranteed LLMs
- Practical Implementations and Case Studies
- Critiques and Limitations
- Future Directions and Research Opportunities
- Conclusion
Subjects
Computer Science