NVIDIA-Certified Associate

Generative AI LLMs

(NCA-GENL)

About the Certification

The NVIDIA-Certified Associate: Generative AI LLMs (NCA-GENL) certification validates the foundational concepts for developing, integrating, and maintaining AI-driven applications that use generative AI and large language models (LLMs) with NVIDIA solutions. The exam is delivered online with remote proctoring, contains 50 questions, and has a 60-minute time limit.

Before scheduling the exam, please read the NVIDIA exam policies carefully.

If you have any questions, please contact us here.

Exam Overview

Duration: 1 hour

Price: $135

Certification level: Associate

Certification subject: Generative AI LLMs

Number of questions: 50

Prerequisites: A basic understanding of generative AI and large language models

Exam language: English

Validity: The certification is valid for two years from the date it is issued. Recertification is available by retaking the exam.

Credentials: Upon passing the exam, you will receive a digital badge and an optional electronic certificate indicating the certification subject and level.

 

Exam Preparation

Topics Covered in the Exam

  • Fundamentals of machine learning and neural networks
  • Prompt engineering
  • Alignment
  • Data analysis and visualization
  • Experimentation
  • Data preprocessing and feature engineering
  • Experiment design
  • Software development
  • Large language models (LLMs)
  • Python libraries
  • LLM integration and deployment

Who Should Take This Exam

  • AI DevOps engineers
  • AI strategists
  • Applied data scientists
  • Applied data research engineers
  • Applied deep learning research scientists
  • Cloud solution architects
  • Data scientists
  • Deep learning performance engineers
  • Generative AI specialists
  • Large language model (LLM) specialists and researchers
  • Machine learning engineers
  • Senior researchers
  • Software engineers
  • Solution architects

Study Guide

Please review the study guide.

Exam Outline

Review the exam outline below for a detailed breakdown of the weight of each topic. NVIDIA Training offers courses covering these topics, available as online, self-paced courses and instructor-led workshops. Taking the training can help you prepare for the exam.

Topic weights:

  • Core Machine Learning and AI Knowledge: 30%
  • Software Development: 24%
  • Experimentation: 22%
  • Data Analysis and Visualization: 14%
  • Trustworthy AI: 10%

Recommended Training Courses

Format | Duration | Language | Price

Generative AI Explained
Online, self-paced | 2 hours | Chinese | Free

You can take one of these courses:
Getting Started With Deep Learning
Online, self-paced | 8 hours | Chinese | $90

Fundamentals of Deep Learning
Instructor-led workshop | 8 hours | Chinese | $500

Accelerating End-to-End Data Science Workflows
Online, self-paced | 6 hours | English | $30

Introduction to Transformer-Based Natural Language Processing
Online, self-paced | 6 hours | English | $30

Building Transformer-Based Natural Language Processing Applications
Instructor-led workshop | 8 hours | Chinese | $500

Prompt Engineering With LLaMA-2
Online, self-paced | 3 hours | Chinese | $30

Augment Your LLM Using Retrieval-Augmented Generation
Online, self-paced | 1 hour | Chinese | Free

You can take one of these courses:
Building RAG Agents for LLMs
Online, self-paced | 8 hours | Free

Building RAG Agents for LLMs
Instructor-led workshop | 8 hours | Chinese | $500

Rapid Application Development With Large Language Models (LLMs)
Instructor-led workshop | 8 hours | Chinese | $500

You can take one of these courses:
Generative AI With Diffusion Models
Online, self-paced | 8 hours | English | $90

Building Generative AI Applications With Diffusion Models
Instructor-led workshop | 8 hours | Chinese | $500

Efficient Large Language Model (LLM) Customization
Instructor-led workshop | 8 hours | Chinese | $500

More Resources

Contact Us

NVIDIA offers training and professional AI certifications to help professionals advance their skills and knowledge in generative AI and large language models, deep learning, accelerated computing, data science, graphics and simulation, and more.

Contact us to gain the skills you need to succeed.

Subscribe to NVIDIA Training Updates

To receive updates on the latest DLI courses, workshops, and special offers, please fill out the form below. You can unsubscribe at any time. Bookmark the DLI website at nvidia.cn/training to browse and take courses whenever you like.

Generative AI Explained

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Define generative AI and explain how it works.
  • Describe various generative AI applications.
  • Explain the challenges and opportunities of generative AI.

You can take one of these courses:

Getting Started With Deep Learning
Fundamentals of Deep Learning

Skills covered in these courses:

Core Machine Learning and AI Knowledge

  • Understand the fundamental techniques and tools required to train a deep learning model.

Software Development

  • Gain experience with common deep learning data types and model architectures. 
  • Leverage transfer learning between models to achieve efficient results with less data and computation. 
  • Take on your own project with a modern deep learning framework.

Experimentation

  • Enhance datasets through data augmentation to improve model accuracy (see the sketch below).
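
The transfer-learning and data-augmentation skills above correspond to only a few lines of framework code in practice. Below is a minimal sketch, assuming PyTorch and torchvision are available and using a hypothetical `data/train` image folder; it is illustrative and not taken from the course materials.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Data augmentation: random flips and rotations enlarge the effective dataset.
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical dataset location; replace with your own labeled image folder.
train_data = datasets.ImageFolder("data/train", transform=train_transforms)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Transfer learning: start from ImageNet weights and retrain only the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one epoch of fine-tuning
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Freezing the backbone and retraining only the final layer is what lets transfer learning reach useful accuracy with far less data and computation than training from scratch.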

You can take one of these courses:

​Accelerating End-to-End Data Science Workflows
Fundamentals of Accelerated Data Science

Skills covered in these courses:

Data Analysis and Visualization

Understand GPU-accelerated data manipulation (a brief cuDF and cuML sketch follows this list):

  • Ingest and prepare several datasets (some larger-than-memory) for use in multiple machine learning exercises.
  • Read data directly to single and multiple GPUs with cuDF and Dask cuDF.
  • Prepare information for machine learning tasks on the GPU with cuDF.
  • Apply several essential machine learning techniques to prepared data.
  • Use supervised and unsupervised GPU-accelerated algorithms with cuML.
  • Train XGBoost models with Dask on multiple GPUs.
  • Create and analyze graph data on the GPU with cuGraph.
  • Use NVIDIA RAPIDS™ to integrate multiple massive datasets and perform analysis.
  • Implement GPU-accelerated data preparation and feature extraction using cuDF and Apache Arrow data frames.
  • Apply a broad spectrum of GPU-accelerated machine learning tasks using XGBoost and a variety of cuML algorithms.
  • Execute GPU-accelerated graph analysis with cuGraph, achieving massive-scale analytics in small amounts of time.
  • Rapidly achieve massive-scale graph analytics using cuGraph routines.
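
As a concrete picture of this workflow, here is a minimal sketch using cuDF and cuML. It assumes RAPIDS is installed; the `taxi.csv` file and its column names are hypothetical placeholders rather than the datasets used in the courses.

```python
import cudf
from cuml.cluster import KMeans
from cuml.ensemble import RandomForestClassifier
from cuml.model_selection import train_test_split

# Read a CSV directly into GPU memory with cuDF (hypothetical file and columns).
df = cudf.read_csv("taxi.csv")

# GPU data preparation and simple feature engineering.
df = df.dropna()
df["trip_minutes"] = df["trip_seconds"] / 60.0

features = df[["trip_minutes", "trip_miles"]]
labels = df["paid_with_card"].astype("int32")   # assumed 0/1 label column

# Supervised learning on the GPU with cuML.
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Unsupervised learning on the GPU: cluster the same features with k-means.
km = KMeans(n_clusters=4)
df["cluster"] = km.fit_predict(features)
```

Because cuDF mirrors much of the pandas API and cuML mirrors scikit-learn, the structure of a CPU workflow carries over largely unchanged while the computation runs on the GPU.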

Introduction to Transformer-Based Natural Language Processing

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Learn to describe how transformers are used as the basic building blocks of modern LLMs for natural language processing (NLP) applications.
  • Understand how transformer-based LLMs can be used to manipulate, analyze, and generate text-based data.

Software Development

  • Leverage pretrained, modern LLMs to solve various NLP tasks such as token classification, text classification, summarization, and question-answering.

Experimentation

  • Understand how transformer-based LLMs can be used to manipulate, analyze, and generate text-based data.  
  • Leverage pretrained, modern LLMs to solve various NLP tasks such as token classification, text classification, summarization, and question-answering.

Data Analysis and Visualization

  • Understand how transformer-based LLMs can be used to manipulate, analyze, and generate text-based data (see the pipeline sketch below).
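
For example, several of the tasks listed above can be exercised through the Hugging Face transformers pipeline API. A minimal sketch, assuming the transformers library is installed; each pipeline downloads a small default model from the Hub, chosen for illustration rather than taken from the course.

```python
from transformers import pipeline

# Text classification with a pretrained encoder model.
classifier = pipeline("sentiment-analysis")
print(classifier("NVIDIA GPUs make training transformers fast."))

# Token classification (named-entity recognition).
ner = pipeline("ner", aggregation_strategy="simple")
print(ner("NVIDIA is headquartered in Santa Clara, California."))

# Extractive question-answering over a short context.
qa = pipeline("question-answering")
print(qa(question="Where is NVIDIA headquartered?",
         context="NVIDIA is headquartered in Santa Clara, California."))

# Summarization with a pretrained sequence-to-sequence model.
summarizer = pipeline("summarization")
print(summarizer(
    "Transformers use self-attention to model relationships between all tokens "
    "in a sequence, which makes them well suited to language understanding and "
    "generation tasks across many domains.",
    max_length=30, min_length=5))
```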

Building Transformer-Based Natural Language Processing Applications

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Know how transformers are used as the basic building blocks of modern LLMs for natural language processing (NLP) applications. 
  •  Know how self-supervision improves upon the transformer architecture in BERT, Megatron, and other LLM variants for superior NLP results.

Software Development

  • Apply self-supervised transformer-based models to concrete NLP tasks using NVIDIA NeMo™. 
  • Deploy an NLP project for live inference on NVIDIA Triton™. 
  • Manage inference challenges and deploy refined models for live applications.

Experimentation

  • Leverage pretrained, modern LLMs to solve multiple NLP tasks such as text classification, named-entity recognition (NER), and question-answering.

Prompt Engineering With LLaMA-2

Skills covered in this course:

Core Machine Learning and AI Knowledge

  •  Iteratively write precise prompts to bring LLM behavior in line with your intentions.

Experimentation

  • Leverage the system message to shape overall LLM behavior.
  • Guide LLMs with one- to many-shot prompt engineering.
  • Incorporate prompt-response history into the LLM context to create chatbot behavior (see the sketch below).
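
A minimal sketch of those three techniques against the LLaMA-2 chat prompt format (the `[INST]` / `<<SYS>>` markup). The system message, example shots, and helper function are illustrative assumptions, and the call that actually runs a LLaMA-2 model is omitted.

```python
# System message: standing instructions that shape every response.
system = "You are a terse assistant. Answer in one sentence."

# One-to-many-shot examples: (user, assistant) pairs the model should imitate.
shots = [
    ("Classify the sentiment: 'The keynote was fantastic.'", "Positive"),
    ("Classify the sentiment: 'The demo kept crashing.'", "Negative"),
]

# Prompt-response history from earlier turns enables chatbot behavior.
history = [("What does GPU stand for?", "Graphics processing unit.")]

def build_llama2_prompt(system, exchanges, user_msg):
    """Assemble a LLaMA-2 chat prompt from a system message, prior
    (user, assistant) exchanges, and the new user message."""
    prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
    for i, (user, assistant) in enumerate(exchanges):
        if i > 0:
            prompt += "<s>[INST] "
        prompt += f"{user} [/INST] {assistant} </s>"
    return prompt + (f"<s>[INST] {user_msg} [/INST]" if exchanges
                     else f"{user_msg} [/INST]")

prompt = build_llama2_prompt(
    system, shots + history,
    "Classify the sentiment: 'Inference latency dropped by half.'")
print(prompt)  # pass this string to a LLaMA-2 chat model for completion
```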

Augment Your LLM Using Retrieval-Augmented Generation

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Understand the basics of retrieval-augmented generation (RAG).
  • Understand the RAG process.
  • Be familiar with NVIDIA AI Foundation models and the components that constitute a RAG model (a minimal sketch of the RAG flow follows).
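
To make the RAG process concrete, here is a minimal sketch of the index, retrieve, augment, and generate steps. The sentence-transformers library and an in-memory NumPy index are used purely as illustrative stand-ins; the course itself builds on NVIDIA AI Foundation models and their RAG components.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Knowledge base: a few illustrative documents.
docs = [
    "NCA-GENL is an associate-level NVIDIA certification exam.",
    "The exam has 50 questions and a 60-minute time limit.",
    "The certification is valid for two years and can be renewed by retesting.",
]

# 1. Index: embed every document into a small in-memory vector store.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query, k=2):
    """2. Retrieve: return the k documents most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q          # cosine similarity (vectors are normalized)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

# 3. Augment: place the retrieved passages into the prompt sent to the LLM.
question = "How long is the NCA-GENL exam?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # 4. Generate: send this prompt to the LLM of your choice
```

Grounding the prompt in retrieved passages lets the LLM answer from up-to-date or proprietary documents instead of relying only on what it memorized during pretraining.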

Building RAG Agents for LLMs

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Explore scalable deployment strategies for LLMs and vector databases. 
  • Practice with state-of-the-art models with clear next steps regarding productionalization and framework exploration.

Software Development

  • Understand microservices: how to work with them and how to develop your own.
  • Practice with state-of-the-art models with clear next steps regarding productionalization and framework exploration.

Experimentation

  • Experiment with modern LangChain paradigms to develop dialog management and document-retrieval solutions.

Trustworthy AI

  • Practice with state-of-the-art models with clear next steps regarding productionalization and framework exploration.

Rapid Application Development With Large Language Models (LLMs)

Skills covered in this course:

Software Development

  • Find, pull in, and experiment with the Hugging Face model repository and the associated transformers API.
  • Use state management and composition techniques to guide LLMs for safe, effective, and accurate conversation.

Experimentation

  • Find, pull in, and experiment with the Hugging Face model repository and the associated transformers API. 
  • Use encoder models for tasks like semantic analysis, embedding, question-answering, and zero-shot classification. 
  • Use decoder models to generate sequences like code, unbounded answers, and conversations (see the sketch after this list).
  • Use state management and composition techniques to guide LLMs for safe, effective, and accurate conversation.
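
A small sketch of the encoder-style and decoder-style usage described above, again through the transformers pipeline API; the Hub model names are common public defaults chosen for illustration, not necessarily those used in the course.

```python
from transformers import pipeline

# Zero-shot classification: score candidate labels against the input text.
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
print(zero_shot("Schedule my certification exam for next Tuesday.",
                candidate_labels=["scheduling", "billing", "technical support"]))

# Encoder model: token embeddings usable for semantic analysis.
embed = pipeline("feature-extraction", model="sentence-transformers/all-MiniLM-L6-v2")
vectors = embed("Generative AI creates new content from learned patterns.")
print(len(vectors[0]), "token vectors of dimension", len(vectors[0][0]))

# Decoder model: open-ended generation of answers, code, or conversation turns.
generate = pipeline("text-generation", model="gpt2")
print(generate("Q: What is retrieval-augmented generation?\nA:",
               max_new_tokens=40, do_sample=False)[0]["generated_text"])
```

State management and composition, tracking conversation state and chaining such calls behind guardrails, are what turn these individual model calls into a safe and coherent application.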

Trustworthy AI

  • Use state management and composition techniques to guide LLMs for safe, effective, and accurate conversation.

Generative AI With Diffusion Models

Skills covered in this course:

Trustworthy AI

  • Understand content authenticity and how to build trustworthy models.

Efficient Large Language Model (LLM) Customization

Skills covered in this course:

Core Machine Learning and AI Knowledge

  • Know how to apply fine-tuning techniques.
  • Understand how to effectively integrate and interpret diverse data types within a single-model framework.

Software Development

  • Leverage the NVIDIA NeMo™ framework to customize models like GPT, LLaMA-2, and Falcon with ease.

Experimentation

  • Use prompt engineering to improve the performance of pretrained LLMs. 
  • Apply various fine-tuning techniques (see the sketch at the end of this section).

Data Analysis and Visualization

  • Assess the performance of fine-tuned models.
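
As one concrete example from this family of techniques, here is a minimal LoRA sketch using the Hugging Face peft library as an illustrative stand-in; the course itself performs customization with the NVIDIA NeMo framework, and the base model and hyperparameters below are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative small base model; the course customizes models such as GPT, LLaMA-2, and Falcon.
base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")   # used to prepare training text

# LoRA: train small low-rank adapter matrices instead of the full weight set.
config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()   # typically well under 1% of the base model's weights
```

Because only the adapter weights are trained, a customized model can be produced quickly and then assessed against the base model on held-out prompts to measure the gain from fine-tuning.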