Mastering Generative AI: LLM Architecture & Data Preparation

via edX

Overview

Demand for generative AI is projected to grow by more than 46% annually through 2030, making this an opportune time for AI engineers, developers, data scientists, and machine learning professionals to expand their skill sets. This course focuses on large language model (LLM) architecture and data preparation, expertise that employers actively seek.

This course offers insights into the practical applications of generative AI and covers key architectures such as recurrent neural networks (RNNs), transformers, generative adversarial networks (GANs), variational autoencoders (VAEs), and diffusion models. Participants will explore various training methodologies for these models and dive into large language models like generative pre-trained transformers (GPT) and bidirectional encoder representations from transformers (BERT).
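As a brief illustration of the two LLM families named above, here is a minimal sketch (not drawn from the course materials) that loads a GPT-style and a BERT-style model with the Hugging Face Transformers library; the checkpoint names are common public defaults, not necessarily the ones the course uses.

```python
# Illustrative only: checkpoint names are common public defaults,
# not necessarily those used in the course.
from transformers import AutoModelForCausalLM, AutoModelForMaskedLM, AutoTokenizer

# GPT is decoder-only and trained with a causal (left-to-right) LM objective.
gpt_tokenizer = AutoTokenizer.from_pretrained("gpt2")
gpt_model = AutoModelForCausalLM.from_pretrained("gpt2")

# BERT is encoder-only and trained with a masked-LM objective,
# reading context bidirectionally.
bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
```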

Additionally, the course covers the tokenization process, including different methods and the use of tokenizers for word-based, character-based, and subword-based tokenization. Students will gain hands-on experience with data loaders for training generative AI models, utilizing PyTorch libraries and resources from Hugging Face, and will implement tokenization by creating an NLP data loader.
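To make the tokenization and data-loader workflow concrete, the following is a minimal sketch assuming the Hugging Face transformers library and PyTorch; the sample sentences, checkpoint, and batch size are illustrative placeholders rather than course code.

```python
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer

class TextDataset(Dataset):
    """Wraps raw strings so a PyTorch DataLoader can batch them."""
    def __init__(self, texts):
        self.texts = texts
    def __len__(self):
        return len(self.texts)
    def __getitem__(self, idx):
        return self.texts[idx]

# Subword tokenizer (WordPiece, as used by BERT); word-based and
# character-based tokenization follow the same pattern with a different tokenizer.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def collate(batch):
    # Convert a batch of strings into padded tensors of token IDs.
    return tokenizer(batch, padding=True, truncation=True, return_tensors="pt")

loader = DataLoader(
    TextDataset(["Hello world!", "Tokenizers split text into subwords."]),
    batch_size=2,
    collate_fn=collate,
)

for batch in loader:
    print(batch["input_ids"].shape)  # e.g. torch.Size([2, 10])
```

Tokenizing inside the collate function is a common pattern: each batch is padded only to the length of its own longest sequence rather than a global maximum.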

If you're ready to master generative AI LLM architecture and data preparation, enroll today to add these in-demand skills to your resume. Basic Python and PyTorch knowledge and a general understanding of machine learning and neural networks are recommended but not strictly required.

Offered by edX, this course falls under categories such as Neural Networks Courses, Generative AI Courses, PyTorch Courses, Hugging Face Courses, and Transformers Courses.

Tags

united states