What You Need to Know Before You Start

Starts 1 July 2025 12:16

Ends 1 July 2025


Delivering Unprecedented Scale for AI Infrastructure and Cloud Connectivity

Open Compute Project via YouTube



11 minutes



Progress at your own speed

Free Video

Optional upgrade available

Overview

Discover how purpose-built connectivity solutions like PCIe, CXL, and Ethernet overcome performance bottlenecks and scale memory bandwidth, processing, and rack connectivity for AI infrastructure.

Syllabus

  • Introduction to AI Infrastructure and Challenges
    Overview of AI models and data requirements
    Common performance bottlenecks in AI systems
  • Purpose-Built Connectivity Solutions
    Introduction to PCIe (Peripheral Component Interconnect Express)
    Understanding CXL (Compute Express Link)
    Role of Ethernet in AI infrastructure
  • PCIe for AI
    PCIe standards and evolution
    Enhancing data transfer rates and connectivity
    Use cases and implementations in AI
  • CXL: Enabling Memory and Device Cohesion
    Overview of CXL architecture and protocols
    Benefits of CXL for AI processing
    Real-world applications of CXL in AI systems
  • Ethernet in AI Infrastructure
    Ethernet standards relevant to AI
    Scaling network bandwidth for distributed AI workloads
    Case studies of Ethernet in AI connectivity
  • Scaling Memory Bandwidth for AI
    Memory hierarchy in AI systems
    Challenges of memory scalability and latency
    Solutions for overcoming memory bottlenecks
  • Processing Performance Optimization
    Optimizing computations across GPUs and CPUs
    Data locality and bandwidth considerations
    Techniques for processing efficiency in large-scale deployments
  • Rack Connectivity for AI
    Importance of high-performance rack interconnects
    Integrating PCIe, CXL, and Ethernet at the rack level
    Rack design considerations for AI workloads
  • Case Studies and Industry Applications
    Case study 1: Enhancing AI training with advanced connectivity
    Case study 2: Scalability in AI inference applications
    Future trends and developments in AI infrastructure
  • Conclusion and Future Directions
    Summary of key concepts
    Emerging technologies and their impact on AI infrastructure scalability
    Resources for continued learning and advancement in AI connectivity systems

Subjects

Programming