What You Need to Know Before You Start
Starts 8 June 2025 23:56
Ends 8 June 2025
Duration: 44 minutes
Optional upgrade available
Progress at your own speed
Free Video
Overview
Discover how NVIDIA's GPU-accelerated math libraries in the CUDA Toolkit and HPC SDK can optimize performance for AI, ML, and scientific computing workflows on Blackwell GPUs.
Syllabus
- Introduction to Blackwell GPUs
- Introduction to CUDA Toolkit and HPC SDK
- Understanding GPU-accelerated Math Libraries
- Utilizing cuBLAS for Linear Algebra
- Fast Fourier Transform with cuFFT
- Random Number Generation with cuRAND
- Solving Systems of Equations with cuSOLVER
- Sparse Matrix Operations with cuSPARSE
- Practical Considerations and Optimization Techniques
- Case Studies and Real-world Applications
- Hands-on Workshop
- Conclusion and Further Resources
Introduction to Blackwell GPUs
- Overview of Blackwell GPU architecture
- Benefits of GPU acceleration in AI, ML, and scientific computing
Introduction to CUDA Toolkit and HPC SDK
- Overview of CUDA Toolkit
- Overview of HPC SDK
Understanding GPU-accelerated Math Libraries
- Fundamentals of GPU-accelerated computing
- Key math libraries in CUDA Toolkit: cuBLAS, cuFFT, cuRAND, cuSOLVER, cuSPARSE
- High-performance math libraries in HPC SDK
Utilizing cuBLAS for Linear Algebra
- Overview of BLAS operations
- Accelerating matrix multiplications
- Applications in AI and ML
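As a taste of the cuBLAS material above, the sketch below runs a single-precision matrix multiply (SGEMM) on the GPU. The matrix sizes and contents are placeholder values for illustration, not part of the course; cuBLAS expects column-major storage.

```cuda
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <cstdio>

int main() {
    const int n = 2;
    float hA[n * n] = {1, 2, 3, 4};   // column-major 2x2
    float hB[n * n] = {5, 6, 7, 8};
    float hC[n * n] = {0};

    float *dA, *dB, *dC;
    cudaMalloc(&dA, sizeof(hA));
    cudaMalloc(&dB, sizeof(hB));
    cudaMalloc(&dC, sizeof(hC));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C, all n x n, single precision
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);
    printf("C[0][0] = %f\n", hC[0]);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The same `cublasSgemm` call pattern scales to the large GEMMs that dominate AI and ML workloads; only the dimensions and leading strides change.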
Fast Fourier Transform with cuFFT
- Introduction to FFT and its applications
- Leveraging cuFFT for performance gains
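To make the cuFFT topic concrete, here is a minimal sketch of a 1-D complex-to-complex forward transform; the transform length is an arbitrary illustrative value, and filling the input is left as a stub.

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

int main() {
    const int N = 256;               // illustrative transform length
    cufftComplex *data;
    cudaMalloc(&data, N * sizeof(cufftComplex));
    // ... fill `data` on the device (e.g. with a kernel or cudaMemcpy) ...

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);            // 1-D C2C plan, batch of 1
    cufftExecC2C(plan, data, data, CUFFT_FORWARD);  // in-place forward FFT
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(data);
    return 0;
}
```

Reusing one plan across many transforms of the same shape is the main lever for the performance gains mentioned above, since plan creation is comparatively expensive.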
Random Number Generation with cuRAND
- Importance of random number generation (RNG) in simulations
- Utilizing cuRAND for efficient RNG
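As an illustration of cuRAND's host API, this sketch fills a device buffer with uniform single-precision values; the generator type, seed, and buffer size are placeholder choices.

```cuda
#include <curand.h>
#include <cuda_runtime.h>

int main() {
    const size_t n = 1 << 20;        // illustrative sample count
    float *dVals;
    cudaMalloc(&dVals, n * sizeof(float));

    curandGenerator_t gen;
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
    curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);  // fixed seed for reproducibility
    curandGenerateUniform(gen, dVals, n);              // uniform floats in (0, 1]

    // `dVals` can now feed a Monte Carlo kernel directly, without a host round-trip.

    curandDestroyGenerator(gen);
    cudaFree(dVals);
    return 0;
}
```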
Solving Systems of Equations with cuSOLVER
- Overview of numerical methods for solving equations
- Exploiting cuSOLVER for scientific computing
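A minimal sketch of the cuSOLVER workflow above: solving a small dense system Ax = b by LU factorization with `cusolverDnSgetrf`/`cusolverDnSgetrs`. The matrix values are placeholders, and error checking is omitted for brevity.

```cuda
#include <cusolverDn.h>
#include <cuda_runtime.h>

int main() {
    const int n = 3, nrhs = 1;
    float hA[n * n] = {4, 1, 2,  1, 3, 0,  2, 0, 5};  // column-major
    float hB[n]     = {1, 2, 3};

    float *dA, *dB, *dWork; int *dIpiv, *dInfo;
    cudaMalloc(&dA, sizeof(hA));
    cudaMalloc(&dB, sizeof(hB));
    cudaMalloc(&dIpiv, n * sizeof(int));
    cudaMalloc(&dInfo, sizeof(int));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    int lwork = 0;
    cusolverDnSgetrf_bufferSize(handle, n, n, dA, n, &lwork);
    cudaMalloc(&dWork, lwork * sizeof(float));

    // LU factorization with partial pivoting, then back-substitution
    cusolverDnSgetrf(handle, n, n, dA, n, dWork, dIpiv, dInfo);
    cusolverDnSgetrs(handle, CUBLAS_OP_N, n, nrhs, dA, n, dIpiv, dB, n, dInfo);

    cudaMemcpy(hB, dB, sizeof(hB), cudaMemcpyDeviceToHost);  // hB now holds x

    cusolverDnDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dWork);
    cudaFree(dIpiv); cudaFree(dInfo);
    return 0;
}
```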
Sparse Matrix Operations with cuSPARSE
- Introduction to sparse matrices
- Enhancing performance with cuSPARSE
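The cuSPARSE topic above can be sketched with the generic API's sparse matrix-vector product, y = alpha·A·x + beta·y, on a tiny CSR matrix. All numeric values are illustrative, and error checking and some cleanup are omitted for brevity.

```cuda
#include <cusparse.h>
#include <cuda_runtime.h>

int main() {
    // 3x3 matrix with 4 non-zeros, zero-based CSR format (placeholder data)
    const int rows = 3, cols = 3, nnz = 4;
    int   hRowPtr[] = {0, 1, 3, 4};
    int   hColInd[] = {0, 0, 2, 1};
    float hVals[]   = {1.0f, 2.0f, 3.0f, 4.0f};
    float hX[]      = {1.0f, 1.0f, 1.0f};
    float hY[]      = {0.0f, 0.0f, 0.0f};

    int *dRowPtr, *dColInd; float *dVals, *dX, *dY;
    cudaMalloc(&dRowPtr, sizeof(hRowPtr));
    cudaMalloc(&dColInd, sizeof(hColInd));
    cudaMalloc(&dVals, sizeof(hVals));
    cudaMalloc(&dX, sizeof(hX));
    cudaMalloc(&dY, sizeof(hY));
    cudaMemcpy(dRowPtr, hRowPtr, sizeof(hRowPtr), cudaMemcpyHostToDevice);
    cudaMemcpy(dColInd, hColInd, sizeof(hColInd), cudaMemcpyHostToDevice);
    cudaMemcpy(dVals, hVals, sizeof(hVals), cudaMemcpyHostToDevice);
    cudaMemcpy(dX, hX, sizeof(hX), cudaMemcpyHostToDevice);
    cudaMemcpy(dY, hY, sizeof(hY), cudaMemcpyHostToDevice);

    cusparseHandle_t handle;
    cusparseCreate(&handle);

    cusparseSpMatDescr_t matA;
    cusparseDnVecDescr_t vecX, vecY;
    cusparseCreateCsr(&matA, rows, cols, nnz, dRowPtr, dColInd, dVals,
                      CUSPARSE_INDEX_32I, CUSPARSE_INDEX_32I,
                      CUSPARSE_INDEX_BASE_ZERO, CUDA_R_32F);
    cusparseCreateDnVec(&vecX, cols, dX, CUDA_R_32F);
    cusparseCreateDnVec(&vecY, rows, dY, CUDA_R_32F);

    const float alpha = 1.0f, beta = 0.0f;
    size_t bufSize = 0; void *dBuf;
    cusparseSpMV_bufferSize(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                            &alpha, matA, vecX, &beta, vecY,
                            CUDA_R_32F, CUSPARSE_SPMV_ALG_DEFAULT, &bufSize);
    cudaMalloc(&dBuf, bufSize);
    cusparseSpMV(handle, CUSPARSE_OPERATION_NON_TRANSPOSE,
                 &alpha, matA, vecX, &beta, vecY,
                 CUDA_R_32F, CUSPARSE_SPMV_ALG_DEFAULT, dBuf);

    cudaMemcpy(hY, dY, sizeof(hY), cudaMemcpyDeviceToHost);  // hY now holds A*x

    cusparseDestroySpMat(matA);
    cusparseDestroyDnVec(vecX);
    cusparseDestroyDnVec(vecY);
    cusparseDestroy(handle);
    return 0;
}
```

Because only the non-zeros are stored and touched, this pattern is where cuSPARSE delivers its performance advantage over dense routines on sparse problems.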
Practical Considerations and Optimization Techniques
- Profiling GPU applications
- Understanding memory hierarchies
- Best practices for optimizing performance on Blackwell GPUs
Case Studies and Real-world Applications
- Examples of AI and ML applications
- Scientific computing case studies
Hands-on Workshop
- Implementing math libraries in sample applications
- Performance benchmarking and analysis
Conclusion and Further Resources
- Recap of course learnings
- Additional resources for deeper exploration
Subjects
Programming