Ray - A System for High-performance, Distributed Python Applications
via YouTube
Overview
Explore Ray, an open-source framework for scaling Python applications from laptops to clusters, focusing on ML/AI performance challenges and its key features for distributed computing.
Syllabus
- Introduction to Ray
-- Overview of Ray
-- Key benefits of using Ray for distributed computing
-- Comparison with other distributed computing frameworks
- Setting Up Ray
-- Installation and setup
-- Running Ray locally on a laptop
-- Configuring Ray for clusters
- Core Concepts of Ray
-- Ray’s architecture
-- Tasks and Actors in Ray
-- Managing Ray objects and object stores
- Distributed Computing with Ray
-- Scheduling and execution of tasks
-- Combining tasks and actors
-- Handling resource constraints and dependencies
- Scaling Machine Learning Applications
-- Using Ray with popular ML frameworks (TensorFlow, PyTorch)
-- Hyperparameter tuning with Ray Tune
-- Distributed data processing with Ray Datasets
- Advanced Features of Ray
-- Ray's fault-tolerance and failure recovery
-- Monitoring and debugging Ray applications
-- Ray Serve for scalable model serving
- Real-world Applications and Use Cases
-- Case studies of Ray in industry
-- Best practices for deploying Ray in production
- Ray Ecosystem and Tools
-- Overview of Ray libraries (RLlib, Ray Tune, Ray Serve, etc.)
-- Choosing the right Ray tool for your application
- Hands-on Projects
-- Building a distributed application with Ray
-- Scaling a machine learning model using Ray
-- Performance optimization and tuning with Ray
- Conclusion and Future of Ray
-- Emerging trends in distributed computing
-- Future developments in Ray
-- Resources for further learning
- Final Q&A and Wrap-up
Each topic will be accompanied by lectures, demonstrations, and practical exercises to provide a comprehensive understanding of Ray and its applications in high-performance distributed computing.
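To make a few of the syllabus topics above concrete, brief illustrative sketches follow. For the "Tasks and Actors in Ray" topic, the sketch below shows Ray's core remote API; it assumes a local `pip install ray`, and the `square` function and `Counter` class are invented for illustration rather than taken from the course.

```python
import ray

ray.init()  # start Ray locally; on a cluster, pass the head node's address

@ray.remote
def square(x):
    # A task: a stateless function executed remotely on any worker.
    return x * x

@ray.remote
class Counter:
    # An actor: a stateful worker process whose methods run remotely.
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

# .remote() returns object references immediately; ray.get() blocks for results.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

counter = Counter.remote()
print(ray.get(counter.increment.remote()))  # 1
```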
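For the "Hyperparameter tuning with Ray Tune" topic, a hedged sketch of the Tuner API is shown below; the objective function and search space are placeholders, not a real training loop.

```python
from ray import tune

def objective(config):
    # A one-shot trainable can simply return its metrics as a dict.
    return {"score": config["a"] ** 2 + config["b"]}

search_space = {
    "a": tune.uniform(0, 1),
    "b": tune.grid_search([1, 2, 3]),
}

tuner = tune.Tuner(objective, param_space=search_space)
results = tuner.fit()
print(results.get_best_result(metric="score", mode="min").config)
```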
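"Distributed data processing with Ray Datasets" (now called Ray Data) could look roughly like this; the synthetic range dataset and transformation are assumptions for illustration, and the row-as-dictionary API follows recent Ray releases.

```python
import ray

ds = ray.data.range(1000)  # a synthetic dataset of rows {"id": 0..999}
ds = ds.map(lambda row: {"id": row["id"], "square": row["id"] ** 2})
print(ds.take(3))
```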
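Finally, for "Ray Serve for scalable model serving", a minimal deployment sketch is given below; the `Doubler` class is a stand-in for a real model, not part of the course.

```python
import requests
from ray import serve

@serve.deployment
class Doubler:
    # A deployment: a replicated callable exposed behind an HTTP endpoint.
    async def __call__(self, request):
        value = (await request.json())["x"]
        return {"result": value * 2}

serve.run(Doubler.bind())  # deploys locally at http://localhost:8000/

print(requests.post("http://localhost:8000/", json={"x": 21}).json())  # {'result': 42}
```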