AI Safety Defenders: Reinforcing Medical Boundaries with Guardrails

, Senior Manager, Applied Research, NVIDIA
, Senior Data Scientist, NVIDIA
Get ready to master the art of steering AI toward accuracy in the medical domain. In this training lab, you'll learn how to develop "guardrails" that keep your AI on track and deliver trustworthy results. We'll explore how NeMo Guardrails can avert AI hallucinations by applying robust filters, including those for fact-checking, moderation, and HIPAA compliance in healthcare applications. Along the way, we'll use the SynGatorTron model from NVIDIA and the University of Florida, together with the Milvus vector database microservice, in a small-scale synthetic-data demonstration that can scale and accelerate to match your target use case. By the end of this session, you'll be able to craft AI systems that are not only factually accurate but also stay on track and adhere to the highest industry standards.
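The filters described above are typically wired up through a guardrails configuration. The following is a minimal sketch of what such a configuration might look like in NeMo Guardrails' YAML format; the model name and engine are illustrative placeholders, not the lab's actual settings:

```yaml
# config.yml — minimal guardrails sketch (model/engine are illustrative)
models:
  - type: main
    engine: nvidia_ai_endpoints
    model: meta/llama3-8b-instruct   # placeholder model

rails:
  input:
    flows:
      - self check input     # moderation rail on user input
  output:
    flows:
      - self check output    # moderation rail on model output
      - self check facts     # fact-checking rail against retrieved context
```

Topic-level boundaries (for example, refusing to disclose protected health information) can be expressed as dialogue flows in Colang alongside the YAML config; the flow and message names below are hypothetical examples:

```
define user ask for phi
  "What is the patient's social security number?"

define bot refuse to share phi
  "I'm sorry, I can't share protected health information."

define flow
  user ask for phi
  bot refuse to share phi
```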
Prerequisite(s):

Install the libraries mentioned here.


Explore more training options offered by the NVIDIA Deep Learning Institute (DLI). Choose from an extensive catalog of self-paced, online courses or instructor-led virtual workshops to help you develop key skills in AI, HPC, graphics & simulation, and more.
Ready to validate your skills? Get NVIDIA certified and distinguish yourself in the industry.

Event: GTC 24
Date: March 2024
Industry: All Industries
Level: Beginner Technical
Topic: Large Language Models (LLMs)
NVIDIA technology: NeMo
Language: English
Location: