OpenAI and Anthropic to collaborate with US government on AI safety

There are many reasons for safety concerns about generative artificial intelligence (gen AI): how models gather and use training data, inconsistent protections against harmful content, hallucinations, the spread of misinformation, and more. A new partnership between the US government and leading AI companies seeks to tackle those issues.

On Thursday, the US Artificial Intelligence Safety Institute at the US Department of Commerce's National Institute of Standards and Technology (NIST) announced agreements with Anthropic and OpenAI to formally collaborate on research, testing, and evaluation.

"With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety," said Elizabeth Kelly, director of the US AI Safety Institute, in the release. "These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI."

Both companies have agreed to give the US AI Safety Institute access to major new models before and after their public release so it can evaluate and mitigate risks.

According to the release, the US AI Safety Institute will also work with its partners at the UK AI Safety Institute to provide the companies with feedback on potential safety improvements. The US and UK have previously collaborated on AI safety, partnering in May to develop safety testing for AI models.

Anthropic and OpenAI are both leaders in the AI race, responsible for some of the most popular large language models (LLMs) and chatbots available. OpenAI's GPT-4o, the LLM behind ChatGPT, currently holds first place in the Chatbot Arena, while Anthropic's Claude 3.5 Sonnet ranks sixth in the overall category.

OpenAI has been working to increase transparency around its models, most recently by releasing a GPT-4o System Card, a thorough report detailing the LLM's safety based on OpenAI's own risk evaluations, external red-teaming, and more.
