What You Need to Know Before You Start
Starts 6 June 2025 13:55
Ends 6 June 2025
47 minutes
Optional upgrade available
Progress at your own speed
Free Video
Overview
Explore the theoretical foundations of AI in safety-critical control systems with Claire Tomlin from UC Berkeley, focusing on trustworthiness in high-risk applications.
Syllabus
- Introduction to AI for Safety Critical Control
  - Overview of safety-critical systems and their importance
  - Introduction to trustworthiness in AI-driven applications
  - Basics of control systems and AI intersections
- Theoretical Foundations
  - Introduction to dynamical systems
  - Stability and safety in control systems
- AI Techniques in Control
  - Machine learning methods for control systems
  - Reinforcement learning in safety-critical environments
  - Model predictive control using AI
- Trustworthiness and Reliability
  - Defining trustworthiness in AI
  - Verifiable AI methods
  - Assurance cases and argumentation frameworks
- Risk Analysis and Management
  - Risk assessment techniques in AI control systems
  - Mitigation strategies for AI-induced risks
- Human-AI Interaction
  - Human factors in the AI control loop
  - Designing for human oversight and intervention
- Applications in High-Risk Sectors
  - AI in aerospace and automotive systems
  - AI-driven medical devices
  - Robotics and automation in safety-critical environments
- Case Studies and Real-World Examples
  - Success stories and lessons learned
  - Failures and their implications for AI trustworthiness
- Future Trends and Research Directions
  - Emerging techniques and technologies
  - Policy and ethical considerations in AI safety
- Course Conclusion
  - Summary of key learning points
  - Final project presentations and discussions
- Additional Resources
  - Recommended readings
  - Online tools and platforms for further learning
Subjects
Computer Science