In Anthropic We Trust 

Over the last few days, a series of announcements has highlighted generative AI firms partnering with the US government to provide AI technology for military and defence. Anthropic sits at the top of that list. Not only is it big tech’s favourite child, but it has also secured its place in the public sector and among government organisations.

Recently, the company partnered with Palantir to provide its advanced AI model Claude to the US government for data analysis and complex coding work on projects of national security interest. The partnership operates under Impact Level 6 (IL6) accreditation, just one level below the top secret tier.

It didn’t take long for the partnership to spark a debate around the company’s commitment to building AI responsibly, especially as its CEO, Dario Amodei, is well known for his views on building AI that prioritises safety.

Recently, Anthropic released a statement urging governments to take action and bring in regulations to enforce the safe and ethical use of AI. “Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast,” said Anthropic.
Anthropic also hired a full-time AI welfare expert to explore the moral and ethical implications of AI. People were quick to question whether Amodei and Anthropic’s views on AI were mere virtue signalling, and expressed disappointment over the company’s partnership with the US government.

The announcement also came a day before the US election results, with Donald Trump set to take charge as the 47th President. The concerns stem from Trump’s desire to loosen AI regulations; his allies have reportedly drafted an executive order to rapidly maximise AI usage for defence.

The move has raised concerns about whether it could set AI on a path towards aiding questionable wartime activities, especially as Palantir co-founder Peter Thiel, who owns 7% of the company’s shares, has been vocal in his support for Trump.

No Surprise Moves

It is premature to defend or criticise Anthropic. The company has played the game fair and square throughout, at least in terms of transparency. Amodei has, on multiple occasions, revealed his ambition to use Claude to support the government and its interest in protecting national security.

“We are making Claude available for applications like combating human trafficking, rooting out international corruption, identifying covert influence campaigns, and issuing warnings of potential military activities,” said Amodei at the AWS Summit 2024 in Washington, DC.

In his recent essay ‘Machines of Loving Grace’, Amodei said, “On the international side, it seems very important that democracies have the upper hand on the world stage when powerful AI is created.”

“AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries,” he added.
Anthropic has been equally transparent about its intent to provide its technology for government use. In June, it revealed plans to expand Claude’s access for government agencies, making its Claude models available on the AWS Marketplace for the US Intelligence Community.
“Claude offers a wide range of potential applications for government agencies, both in the present and looking towards the future. Government agencies can use Claude to provide improved citizen services, streamline document review and preparation, enhance policymaking with data-driven insights, and create realistic training scenarios,” said Anthropic in a statement.

At the same time, Anthropic proposed amendments to California’s Senate Bill 1047 (SB 1047), which notably included exempting US military and intelligence operations from liability for “critical harms”.

Walking on a Tightrope

Anthropic also intends to strike a balance between its two ambitions. This year, it partnered with the US AI Safety Institute (US AISI) and has been working with its UK counterpart to test its models for safety. Anthropic had earlier developed ‘Constitutional AI’, a technique to align its LLMs to “abide by high-level normative principles written into a constitution”.

In September 2023, Anthropic published its Responsible Scaling Policy (RSP), a series of protocols and security levels. “Our RSP defines a framework called AI Safety Levels (ASL) for addressing catastrophic risks, modelled loosely after the US government’s biosafety level (BSL) standards for handling of dangerous biological materials,” read the policy.

With its commitment to ethics and morals, Anthropic wants to be first in line to foster a strong relationship with the government. Its updated usage policy introduced an exception that allows select government entities to use its models, while stating that it will continue to prevent any activities that are morally questionable.

“With carefully selected government entities, we may allow foreign intelligence analysis in accordance with applicable law. All other use restrictions in our usage policy, including those prohibiting use for disinformation campaigns, the design or use of weapons, censorship, domestic surveillance, and malicious cyber operations, remain,” Anthropic wrote in the statement.

In comparison, OpenAI hasn’t been actively partnering with the government, although reports surfaced claiming it was ‘quietly’ pitching its tech to the government. Several OpenAI employees, including many from the safety team, have also left the company.

Actions Speak

Amazon is one of Anthropic’s major investors, and the startup is also set to raise another round of funding. As mentioned, Anthropic recently made Claude available on the AWS Marketplace. Much of US public sector technology is hosted on AWS, and Amazon, one of the biggest companies in the US, certainly benefits from close ties with the government.

“We’re convinced that responsibility drives trust, and trust drives adoption, and adoption drives innovation,” said Dave Levy, VP of worldwide public sector at AWS, in conversation with AIM. This principle is reflected in the company’s strategic collaboration with Anthropic.

Walking the talk pays off. Anthropic has consistently championed safety and security, earning trust and partnerships across the public sector. In contrast, OpenAI embraced these priorities later, which has made building that trust harder.
