Generative AI and LLMs: Architecture and Data Preparation


Overview


This IBM short course, part of the Generative AI Engineering Essentials with LLMs Professional Certificate, teaches you the basics of using generative AI and Large Language Models (LLMs). It is perfect for existing and aspiring data scientists, machine learning engineers, deep-learning engineers, and AI engineers.

In this course, you will:

  • Explore different types of generative AI and their real-world applications.
  • Differentiate between generative AI architectures and models, such as Recurrent Neural Networks (RNNs), Transformers, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Diffusion Models.
  • Understand the training approaches used for each model.
  • Explain the use of LLMs such as Generative Pre-trained Transformers (GPT) and Bidirectional Encoder Representations from Transformers (BERT).
  • Discover the tokenization process and methods, and the use of tokenizers for word-based, character-based, and subword-based tokenization.
  • Use data loaders to train generative AI models, and list the PyTorch libraries for data preparation and handling.
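The three tokenization strategies listed above can be illustrated in plain Python. This is an illustrative sketch, not the course's code: the subword vocabulary and the greedy longest-match segmentation are simplified stand-ins for what trained tokenizers (such as BPE or WordPiece in Hugging Face) learn from data.

```python
text = "Tokenization splits text into units"

# Word-based: split on whitespace; the vocabulary grows with every new word.
word_tokens = text.split()

# Character-based: tiny vocabulary, but much longer sequences.
char_tokens = list(text)

# Subword-based (toy example): a hand-picked vocabulary; real tokenizers
# learn these pieces from a corpus rather than hard-coding them.
subword_vocab = {"Token", "ization", "split", "s", "text", "into", "unit"}

def to_subwords(word, vocab):
    """Greedy longest-match segmentation; '[UNK]' marks unknown pieces."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append("[UNK]")
            break
    return pieces

subword_tokens = [p for w in word_tokens for p in to_subwords(w, subword_vocab)]

print(word_tokens)     # ['Tokenization', 'splits', 'text', 'into', 'units']
print(subword_tokens)  # ['Token', 'ization', 'split', 's', 'text', 'into', 'unit', 's']
```

Note how the subword approach keeps the vocabulary small while still representing rare words like "Tokenization" as known pieces, which is why LLMs like GPT and BERT rely on it.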

The knowledge you acquire will help you use the generative AI libraries in Hugging Face, implement tokenization, and create an NLP data loader.
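The data-loader role mentioned above can be sketched in pure Python. This is an assumption-laden illustration, not the course's implementation: it mimics what `torch.utils.data.Dataset` and `DataLoader` (with a `collate_fn`) do for NLP data — shuffle, batch, and pad variable-length token-ID sequences so each batch is rectangular.

```python
import random

dataset = [[5, 9, 2], [7, 1], [3, 3, 8, 6], [4]]  # toy token-ID sequences
PAD_ID = 0  # hypothetical padding ID for this sketch

def collate(batch, pad_id=PAD_ID):
    """Pad every sequence in the batch to the batch's maximum length."""
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (max_len - len(seq)) for seq in batch]

def data_loader(data, batch_size=2, shuffle=True, seed=42):
    """Yield padded batches, optionally visiting examples in shuffled order."""
    order = list(range(len(data)))
    if shuffle:
        random.Random(seed).shuffle(order)
    for start in range(0, len(order), batch_size):
        batch = [data[i] for i in order[start:start + batch_size]]
        yield collate(batch)

batches = list(data_loader(dataset, batch_size=2))
# Every batch is rectangular: all rows in a batch have the same length.
```

In PyTorch itself, the same batching-and-padding step is typically handled by passing a `collate_fn` to `DataLoader`, which is one of the data-preparation utilities this course covers.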

Basic knowledge of Python and PyTorch, along with an awareness of machine learning and neural networks, is advantageous but not strictly required.

University: IBM
Provider: Coursera
Categories: Generative AI Courses, PyTorch Courses, Diffusion Models Courses, Hugging Face Courses, Transformers Courses, Variational Autoencoders Courses
