What You Need to Know Before You Start
Starts: 21 June 2025, 05:20
Ends: 21 June 2025

Mastering Generative AI: Fine-Tuning Transformers
Optional upgrade available
All Levels
Progress at your own speed
Free
Overview
Mastering Generative AI: Fine-Tuning Transformers
The demand for technical skills in generative AI is rapidly increasing. AI engineers with expertise in fine-tuning transformers for generative AI applications are highly sought after.
This course, Generative AI Engineering Fine-Tuning with Transformers, is meticulously crafted for AI engineers and AI specialists aiming to enhance their resumes with in-demand skills.
Throughout this course, you'll explore the differences between PyTorch and Hugging Face, use pre-trained transformers for language-related tasks, and refine them for specialized applications. You will also fine-tune generative AI models using both the PyTorch and Hugging Face platforms.
The curriculum also covers essential concepts such as parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized low-rank adaptation (QLoRA), model quantization in natural language processing (NLP), and the art of prompting.
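To give a feel for one of the concepts above: the core idea of LoRA is to freeze a large weight matrix W and train only two small low-rank matrices A and B, so the effective weight becomes W + (alpha / r) * (B A). The sketch below shows that arithmetic in plain Python (no frameworks); the matrix sizes and values are purely illustrative, not taken from the course.

```python
# Minimal sketch of the low-rank update at the heart of LoRA.
# W is the frozen d x d weight; A (r x d) and B (d x r) are the small
# trainable matrices; for large d, the 2*d*r trained numbers are far
# fewer than the d*d entries of W.

def matmul(X, Y):
    """Multiply two matrices given as lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B A), the LoRA-adapted weight."""
    scale = alpha / r
    BA = matmul(B, A)  # rank-r update, same shape as W
    return [[w + scale * ba for w, ba in zip(w_row, ba_row)]
            for w_row, ba_row in zip(W, BA)]

# Illustrative values: d = 2, r = 1 (hypothetical, for demonstration only).
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]   # r x d
B = [[0.5],        # d x r
     [0.0]]
W_adapted = lora_weight(W, A, B, alpha=1.0, r=1)  # [[1.5, 1.0], [0.0, 1.0]]
```

In practice, libraries such as Hugging Face's PEFT package implement this update inside existing model layers; the sketch only shows the underlying arithmetic.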
Engage in hands-on labs to gain practical experience in loading models, performing inference, training models with Hugging Face, pre-training large language models (LLMs), fine-tuning models, and employing PyTorch adapters.
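Another curriculum topic, model quantization, can also be illustrated without any framework: symmetric int8 quantization maps a tensor's floats onto integers in [-127, 127] using a single scale factor, trading a little precision for a 4x smaller footprint than float32. The values below are hypothetical and only demonstrate the arithmetic.

```python
# Minimal sketch of symmetric per-tensor int8 weight quantization.
# One scale factor maps the float range [-max|w|, max|w|] onto the
# integer range [-127, 127]; dequantizing shows the rounding error.

def quantize_int8(weights):
    """Return (int8 values, scale) for a symmetric per-tensor scheme."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map the integers back to (approximate) floats."""
    return [v * scale for v in q]

# Illustrative weights (hypothetical): largest magnitude is 1.0,
# so scale = 1.0 / 127 and -1.0 maps to -127.
weights = [0.4, -1.0, 0.75, 0.1]
q, scale = quantize_int8(weights)   # q = [51, -127, 95, 13]
restored = dequantize(q, scale)     # close to weights, small rounding error
```

Production schemes (as in QLoRA's 4-bit variant) add refinements such as block-wise scales and non-uniform codebooks, but the round-trip above is the basic mechanism.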
If you're aspiring to acquire job-ready skills needed by employers for fine-tuning transformers in generative AI, ENROLL TODAY to enhance your resume for career advancement!
Prerequisites:
Basic understanding of Python, PyTorch, and transformer architecture is required. Familiarity with machine learning and neural network concepts is also recommended.
Provider:
edX
Categories:
Machine Learning Courses, Generative AI Courses, PyTorch Courses, LoRA (Low-Rank Adaptation) Courses, Hugging Face Courses, Transformers Courses, Fine-Tuning Courses, QLoRA Courses, Parameter-Efficient Fine-Tuning Courses