What You Need to Know Before You Start
Starts 3 July 2025 19:36
Ends 3 July 2025
15 Bad Takes from AI Safety Doomers
David Shapiro ~ AI
24 minutes
Optional upgrade available
Progress at your own speed
Free Video
Overview
Watch this thought-provoking YouTube session to examine the 15 most flawed arguments made by AI safety doomers. Expand your understanding of artificial intelligence and challenge prevailing misconceptions through in-depth critical analysis.
Syllabus
- Introduction to AI Safety and Doomerism
- Bad Take #1: AI Will Inevitably Lead to Human Extinction
- Bad Take #2: Superintelligence Is Imminent
- Bad Take #3: AI Goals Will Automatically Misalign with Human Values
- Bad Take #4: AI Cannot Be Controlled or Regulated
- Bad Take #5: Data Bias Will Make AI Irreparably Dangerous
- Bad Take #6: AI Will Eliminate All Jobs
- Bad Take #7: AI Will Create Irreversible Inequality
- Bad Take #8: AI Will Make Universal Surveillance Inevitable
- Bad Take #9: Machines Will Develop Consciousness
- Bad Take #10: AI Will Lead to a Malevolent AGI
- Bad Take #11: Race to AI Superiority Will Be Unstoppable
- Bad Take #12: AI Safety Measures Will Always Be Insufficient
- Bad Take #13: Public Distrust in AI Is Irreversible
- Bad Take #14: AI Catastrophes Are Inevitable Due to Human Error
- Bad Take #15: There Are No Ethical Solutions to AI Risks
- Conclusion: Future Directions in AI Safety
Subjects
Computer Science