What You Need to Know Before You Start
Starts 7 June 2025 13:48
Ends 7 June 2025
AI/ML Security - Understanding Jailbreak Prompts and Adversarial Illusions in Large Language Models
RSA Conference via YouTube
47 minutes
Optional upgrade available
Progress at your own speed
Free Video
Overview
Explore jailbreak prompts and adversarial illusions in AI/ML systems through two cutting-edge USENIX security research papers presented by Cornell Tech and Washington University experts.
Subjects
Computer Science