NVIDIA Open-Sources Guardrails for Hallucinating AI Chatbots

The great powers of Generative AI carry great risks along with them. AI chatbots hallucinate, frequently veer off topic and can mishandle user data. The danger is that companies rushing to integrate these tools into their systems can easily overlook these risks. NVIDIA may have a solution.

Yesterday, the Jensen Huang-led company released a new open-source framework called NeMo Guardrails to help address this problem. The guardrails help organisations building and deploying LLM-powered applications keep those applications on topic, accurate and secure.

The project offers three types of guardrails: topical guardrails, which stop applications from straying into unwanted or sensitive topics; safety guardrails, which ensure responses are accurate and drawn from credible sources; and security guardrails, which restrict applications from connecting to unsafe external third-party applications.
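
As a rough illustration, here is a minimal sketch of how an application might sit behind these rails using the nemoguardrails Python package; the config directory and the user message are illustrative, and the exact API may vary between releases.

```python
# Minimal sketch (not NVIDIA's official example): wrapping an LLM with NeMo Guardrails.
# Assumes `pip install nemoguardrails` and a config folder holding the Colang/YAML
# files that define the topical, safety and security rails.
from nemoguardrails import LLMRails, RailsConfig

# "./config" is an illustrative path to the folder with the rail definitions.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# The guarded call: the rails decide whether and how the underlying LLM answers.
response = rails.generate(messages=[
    {"role": "user", "content": "Can you tell me another customer's order history?"}
])
print(response["content"])
```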

Jonathan Cohen, VP of Applied Research at the company, explained how the guardrails could be implemented. “While we have been working on the Guardrails system for years, a year ago we found this system would work well with OpenAI’s GPT models,” he stated. NVIDIA’s blog post on the guardrails says the framework works on top of all major LLMs, such as GPT-3 and Google’s T5, and even alongside AI image-generation models like Stable Diffusion 1.5 and Imagen.

Because it is open source, NeMo Guardrails can work with the tools enterprise application developers already use. For instance, it can run on top of LangChain, the open-source toolkit developers have been using to connect LLM-powered applications to third-party services.

Harrison Chase, the creator of the LangChain toolkit, stated, “Users can easily add NeMo Guardrails to LangChain workflows to quickly put safe boundaries around their AI-powered apps.”
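
The NeMo Guardrails documentation describes registering a LangChain chain as an action that the rails can invoke from a dialogue flow; the sketch below follows that pattern, with the prompt, model and action name (qa_chain) being illustrative placeholders rather than code from either project.

```python
# Hedged sketch of plugging a LangChain chain into NeMo Guardrails.
# The chain, prompt and action name are illustrative; the integration API may
# differ between versions of either library.
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")   # rails defined in Colang/YAML files
rails = LLMRails(config)

# An ordinary LangChain chain that answers questions.
qa_chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=PromptTemplate.from_template("Answer concisely: {query}"),
)

# Register the chain as an action so Colang flows can call it, e.g. with
# `execute qa_chain(query=...)`, keeping the guardrails wrapped around the chain.
rails.register_action(qa_chain, name="qa_chain")
```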

Interestingly, the guardrails themselves use an LLM to check the LLM’s output, much like the SelfCheckGPT technique. Cohen admitted that while using the guardrails was “relatively inexpensive” in terms of compute, there was still room to optimise the controls.

The guardrails are built on Colang, a modelling language for conversational flows that provides a readable and extensible interface for users to control the behaviour of their AI bots.
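
As a rough sketch of what that interface looks like, a topical rail in Colang can be defined with a few canonical user messages and a flow that routes them to a fixed bot response. The snippet below embeds such a definition through the Python API; the "politics" topic, the example phrasings and the model settings are illustrative only.

```python
# Hedged sketch of a Colang topical rail, embedded via RailsConfig.from_content.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user ask about politics
  "What do you think about the government?"
  "Which party should I vote for?"

define bot refuse to answer politics
  "I'm a support assistant, so I'd rather not discuss politics."

define flow politics
  user ask about politics
  bot refuse to answer politics
"""

yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

config = RailsConfig.from_content(colang_content=colang_content,
                                  yaml_content=yaml_content)
rails = LLMRails(config)
```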

NVIDIA has incorporated the guardrails into its NeMo framework, which is already available as open source on GitHub. In addition, NeMo Guardrails will be included in the NVIDIA AI Foundations service, which offers companies access to several pre-trained models and frameworks.

