Overview
Join the short course 'Introducing Multimodal Llama 3.2', taught by Amit Sangani, Senior Director of AI Partner Engineering at Meta, and dive into the latest innovations in AI. Explore the enhancements in the Llama 3.1 and 3.2 models, including custom tool calling, multimodality, and Llama Stack.
Llama models, spanning 1B to 405B parameters, are essential for AI research and innovation, empowering users to download, customize, and fine-tune them to develop new applications.
In this course, you'll gain insight into the new vision capabilities of Llama 3.2 and learn how to harness these features alongside tool calling and Llama Stack, an open-source orchestration layer for building Llama applications.
- Understand the training and features of new models in the Llama family.
- Master multimodal prompting for complex image reasoning use cases.
- Explore the roles (system, user, assistant, ipython) and prompt formats in the Llama models; a multimodal prompt-format sketch follows this list.
- Learn about the expanded tiktoken-based tokenizer with a 128K-token vocabulary and support for seven non-English languages.
- Discover how to prompt Llama to use built-in and custom tools (see the tool-calling sketch after this list).
- Get acquainted with the Llama Stack API for customizing models and building applications; a client sketch follows this list.
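As a preview of the prompt-format material, here is a minimal Python sketch of the Llama 3.x chat template. The special tokens (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>, <|image|>) follow Meta's published prompt format; the format_prompt helper and the sample messages are illustrative only, and the actual image bytes are passed to the model separately by whichever serving library you use.

```python
# A minimal sketch of the Llama 3.x chat prompt format covered in the course.
# The special tokens follow Meta's published prompt format; format_prompt and
# the example messages below are illustrative, not part of any official SDK.

def format_prompt(messages):
    """Render a list of {role, content} dicts into a raw Llama 3.x prompt string."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Cue the model to generate the next assistant turn.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

# Text-only turn using the system and user roles ...
text_messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain tool calling in one sentence."},
]

# ... and a multimodal turn: <|image|> marks where the attached image goes;
# the image bytes themselves are supplied to the model by the serving stack.
image_messages = [
    {"role": "user", "content": "<|image|>How many people are in this photo?"},
]

print(format_prompt(text_messages))
print(format_prompt(image_messages))
```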
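Likewise, a rough sketch of the custom tool-calling flow: the tool is described to the model up front, the assistant replies with a structured call, and the tool's output is returned in the ipython role. The get_current_weather tool and its result values here are hypothetical; only the roles and the overall round trip reflect the documented convention.

```python
# A minimal sketch of the custom tool-calling flow the course covers.
# The get_current_weather tool and its arguments are hypothetical; only the
# overall flow (tool described up front, assistant emits a structured call,
# tool output returned in the ipython role) follows Meta's documented convention.
import json

weather_tool = {
    "name": "get_current_weather",  # hypothetical custom tool
    "description": "Get the current weather for a city.",
    "parameters": {"city": {"type": "string", "required": True}},
}

conversation = [
    # Custom tools are described to the model in the system prompt.
    {"role": "system",
     "content": "You have access to the following tool:\n" + json.dumps(weather_tool)},
    {"role": "user", "content": "What's the weather in Lisbon right now?"},
    # The model replies with a structured tool call instead of a prose answer ...
    {"role": "assistant",
     "content": '{"name": "get_current_weather", "arguments": {"city": "Lisbon"}}'},
    # ... the application executes the tool and returns the result in the
    # ipython role, then asks the model for the final user-facing answer.
    {"role": "ipython", "content": '{"temperature_c": 21, "condition": "sunny"}'},
]

print(json.dumps(conversation, indent=2))
```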
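Finally, a hedged sketch of what calling a Llama Stack server from Python might look like, assuming the llama-stack-client package and its chat-completion inference endpoint; exact parameter names, the default port, and the model identifier vary across Llama Stack releases, so treat this as a shape rather than a verified recipe.

```python
# A hedged sketch of calling a locally running Llama Stack distribution.
# ASSUMPTIONS: the llama-stack-client package, an inference.chat_completion
# endpoint, the port, and the model identifier below; all of these can differ
# between Llama Stack versions, so check the docs for your release.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")  # adjust to your server

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-11B-Vision-Instruct",  # example model name
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what Llama Stack provides."},
    ],
)

# The response shape is also version-dependent; completion_message.content is
# what recent llama-stack-client releases expose for the generated text.
print(response.completion_message.content)
```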
Start building innovative applications on Llama and expand your AI expertise.
University: Independent Study
Provider: Coursera
Categories: Computer Vision Courses, Prompt Engineering Courses, Fine-Tuning Courses
Taught by
Amit Sangani, Senior Director of AI Partner Engineering at Meta