What You Need to Know Before You Start
Starts 3 July 2025 03:58
Ends 3 July 2025
AI/ML Security - Understanding Jailbreak Prompts and Adversarial Illusions in Large Language Models
RSA Conference
2765 Courses
47 minutes
Optional upgrade available
Not Specified
Progress at your own speed
Free Video
Overview
Explore jailbreak prompts and adversarial illusions in AI/ML systems through two cutting-edge USENIX Security research papers presented by experts from Cornell Tech and Washington University.
Subjects
Computer Science