What You Need to Know Before You Start
Starts 3 July 2025 18:49
Ends 3 July 2025
Hallucinations, Prompt Manipulations, and Mitigating Risk: Putting Guardrails around your LLM-Powered Applications
All Things Open
32 minutes
Optional upgrade available
Not Specified
Progress at your own speed
Free Video
Overview
Explore innovative techniques to protect your LLM-powered applications from risks like hallucinations and prompt manipulation. This session offers insights into effective pre-processing techniques, methods for evaluating outputs, and demonstrates how open-source frameworks can be utilized in practical scenarios.
Ideal for those interested in artificial intelligence and computer science, this session is freely available on YouTube, making cutting-edge information readily accessible to learners.
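To give a flavor of the kind of guardrails the session covers, the sketch below shows a minimal, hypothetical pre-processing check and output evaluation in Python. The pattern list, function names, and grounding threshold are illustrative assumptions, not the specific techniques or frameworks presented in the talk.

```python
import re

# Hypothetical patterns that often signal prompt-injection attempts;
# real guardrail frameworks ship far more robust detectors.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"reveal (your )?system prompt",
]


def preprocess_prompt(user_input: str) -> str:
    """Reject or sanitize user input before it reaches the LLM."""
    lowered = user_input.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("Potential prompt manipulation detected")
    return user_input.strip()


def evaluate_output(response: str, source_documents: list[str]) -> bool:
    """Naive grounding check: flag responses whose content words rarely
    appear in the retrieved source documents (a rough hallucination signal)."""
    source_text = " ".join(source_documents).lower()
    content_words = re.findall(r"[a-z]{5,}", response.lower())
    if not content_words:
        return True
    grounded = sum(1 for word in content_words if word in source_text)
    return grounded / len(content_words) >= 0.5  # tunable threshold


if __name__ == "__main__":
    safe_prompt = preprocess_prompt("Summarize the quarterly report.")
    print("Prompt accepted:", safe_prompt)

    ok = evaluate_output(
        "Revenue increased due to strong subscription growth.",
        ["Quarterly revenue increased 12%, driven by subscription growth."],
    )
    print("Output grounded:", ok)
```

In practice, open-source guardrail frameworks combine checks like these with model-based classifiers and policy configuration rather than hand-written regexes.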
Syllabus
- Introduction to LLM Risks
- Understanding Prompt Manipulations
- Pre-processing Techniques
- Output Evaluation Methods
- Implementing Guardrails
- Open-Source Frameworks for LLM Guardrails
- Case Studies and Real-World Applications
- Mitigating Risk in Dynamic Environments
- Closing Remarks
Subjects
Computer Science