Gen AI Foundational Models for NLP & Language Understanding

via Coursera

Overview

This IBM course will teach you how to implement, train, and evaluate generative AI models for natural language processing (NLP). The course covers a variety of NLP applications including document classification, language modeling, and language translation. Learn the fundamentals of building both small and large language models and converting words to features.

Understand techniques such as one-hot encoding, bag-of-words, embeddings, and embedding bags. Discover how Word2Vec embedding models can be used for feature representation in text data, and apply these skills using PyTorch.
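For illustration, here is a minimal PyTorch sketch, not taken from the course labs; the toy vocabulary and the encode helper are assumptions. It shows how the same short document can be represented as one-hot vectors, as a bag-of-words count vector, and as a dense feature vector from nn.EmbeddingBag:

```python
# Minimal sketch (not the course's lab code): turning words into features with
# a toy vocabulary, one-hot vectors, bag-of-words counts, and nn.EmbeddingBag.
import torch
import torch.nn as nn

# Toy vocabulary; index 0 is reserved for unknown tokens (an assumption here).
vocab = {"<unk>": 0, "the": 1, "movie": 2, "was": 3, "great": 4}

def encode(tokens):
    """Map tokens to vocabulary indices, falling back to <unk>."""
    return torch.tensor([vocab.get(t, 0) for t in tokens], dtype=torch.long)

indices = encode("the movie was great".split())

# One-hot encoding: each token becomes a vocabulary-sized indicator vector.
one_hot = nn.functional.one_hot(indices, num_classes=len(vocab)).float()

# Bag-of-words: sum the one-hot rows into a single count vector per document.
bow = one_hot.sum(dim=0)

# EmbeddingBag: averages learned embeddings over the whole document, producing
# one dense feature vector without ever materializing the one-hot vectors.
embedding_bag = nn.EmbeddingBag(num_embeddings=len(vocab), embedding_dim=8, mode="mean")
doc_features = embedding_bag(indices.unsqueeze(0))  # shape: (1, 8)

print(one_hot.shape, bow, doc_features.shape)
```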

This course provides hands-on experience in building, training, and optimizing neural networks for document categorization. Explore N-gram language models and sequence-to-sequence models. Evaluate the quality of generated text using metrics like BLEU.
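As a rough illustration of these ideas, the sketch below (not the course's lab code; the toy corpus and sentences are made up) estimates bigram probabilities by counting and scores a candidate sentence against a reference with NLTK's sentence-level BLEU:

```python
# Minimal sketch: a count-based bigram language model plus a BLEU score
# for a generated sentence, using the nltk package (an assumed dependency).
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Maximum-likelihood bigram estimate: P(w2 | w1) = count(w1, w2) / count(w1).
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
print("P(sat | cat) =", bigrams[("cat", "sat")] / unigrams["cat"])

# BLEU compares n-gram overlap between a candidate and one or more references;
# smoothing avoids a zero score when some higher-order n-grams never match.
reference = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()
score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print("BLEU =", round(score, 3))
```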

Practice your skills in Hands-on Labs by implementing document classification using torchtext in PyTorch. Gain expertise in building and training a simple language model with a neural network, and in integrating pre-trained embedding models such as Word2Vec for text analysis and classification. Additionally, develop sequence-to-sequence models for tasks such as language translation using PyTorch.
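The following is a minimal sketch of what such a sequence-to-sequence model can look like in PyTorch; the GRU encoder-decoder layout, dimensions, and vocabulary sizes are assumptions for illustration, not the course's reference implementation:

```python
# Minimal sketch (architecture choices are assumptions): a GRU encoder-decoder
# of the kind used for sequence-to-sequence tasks such as translation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token indices -> final hidden state as a summary
        _, hidden = self.rnn(self.embed(src))
        return hidden

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tgt, hidden):
        # tgt: (batch, tgt_len) target-side indices (teacher forcing)
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden

# Toy forward pass with assumed source/target vocabulary sizes.
src = torch.randint(0, 100, (2, 7))   # 2 source sentences, 7 tokens each
tgt = torch.randint(0, 120, (2, 5))   # 2 target sentences, 5 tokens each
encoder, decoder = Encoder(100), Decoder(120)
logits, _ = decoder(tgt, encoder(src))
print(logits.shape)  # (2, 5, 120): a score over the vocabulary per position
```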

University: IBM
Provider: Coursera
Categories: Neural Networks Courses, Generative AI Courses, PyTorch Courses

Tags

United States