What You Need to Know Before You Start
Starts 3 July 2025 18:24
Ends 3 July 2025
Preventing Toxicity and Unconscious Biases Using Large Language and Deep Learning Models
OpenInfra Foundation
40 minutes
Optional upgrade available
Progress at your own speed
Free Video
Overview
Discover how large language models and BERT transformers can detect and prevent unconscious biases in AI systems, achieving 98.7% accuracy across diverse data sources and cultural contexts.
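To give a flavour of the kind of tooling this topic involves, the sketch below scores short texts with a publicly available BERT-based toxicity classifier through the Hugging Face transformers pipeline. This is a minimal illustration, not course material: the model name unitary/toxic-bert and the sample sentences are assumptions chosen for the example, and the course's own models, data, and reported 98.7% accuracy are not reproduced here.

```python
# Minimal sketch: toxicity scoring with a BERT-based classifier.
# Assumes the `transformers` library is installed and that the example
# model "unitary/toxic-bert" (a public Hugging Face Hub checkpoint,
# not necessarily the one used in the course) is acceptable.
from transformers import pipeline

# Load a fine-tuned BERT text-classification model for toxicity.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

# Illustrative sample sentences (assumptions, not course data).
samples = [
    "Thanks for the thoughtful review, I appreciate the feedback.",
    "People like you never contribute anything useful.",
]

for text in samples:
    result = toxicity(text)[0]  # e.g. {"label": "toxic", "score": 0.97}
    print(f"{result['label']:>10}  {result['score']:.3f}  {text}")
```

In practice, a detection step like this is only the first half of the workflow the course outline describes; mitigation then depends on what the scores reveal about the data or model being audited.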
Syllabus
- Introduction to AI Bias and Toxicity
- Large Language Models: Fundamentals
- Detecting Biases with AI
- Techniques for Mitigating AI Biases
- Large Language Models in Practice
- Evaluating and Measuring Model Performance
- Case Studies
- Ethical Considerations and Best Practices
- Practical Workshop
- Conclusion and Future Directions
Subjects
Data Science