Flanked By Palantir And AWS, Anthropic's Claude Marches Into US Defense Intelligence

Palantir has announced a partnership with Anthropic and Amazon Web Services to build a cloudy Claude platform suitable for the most secure of the US government's defense and intelligence use cases.

In an announcement today, the three firms said the partnership would integrate Claude 3 and 3.5 with Palantir's Artificial Intelligence Platform, hosted on AWS. Both Palantir and AWS have been awarded Impact Level 6 (IL6) certification by the Department of Defense, which allows the processing and storage of classified data up to the Secret level. 

Claude was first made available to the defense and intelligence communities in early October, an Anthropic spokesperson told The Register. The US government will be using Claude to reduce data processing times, identify patterns and trends, streamline document reviews, and help officials "make more informed decisions in time-sensitive situations while preserving their decision-making authorities," the press release noted. 

"Palantir is proud to be the first industry partner to bring Claude models to classified environments," said Palantir's CTO, Shyam Sankar.

"Our partnership with Anthropic and AWS provides US defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions." 

Acceptable use carve-outs

It's interesting to compare Anthropic's AI usage policy with that of Meta, which announced yesterday that it was opening its Llama neural networks to the US government for defense and national security applications.

Meta's usage policy specifically prohibits the use of Llama for military, warfare, espionage, and other critical applications, though Meta has granted the Feds some exceptions.

In our view, no such clear-cut restrictions appear in Anthropic's acceptable use policy. Even its high-risk use cases, defined as uses of Claude that "pose an elevated risk of harm" and require extra safety measures, leave defense and intelligence applications out, mentioning only legal, healthcare, insurance, finance, employment, housing, academia, and media uses of Claude as "domains that are vital to public welfare and social equity."

Instead, Anthropic's AUP lists various specific ways its models can't be used to cause harm, directly or indirectly, which would cover at least some military work. We had expected to see a blanket ban on military use, a la Meta, which would have required exceptions to make the Palantir-Amazon deal possible.

When asked about its AUP and how that might pertain to government applications, particularly defense and intelligence as indicated in today's announcement, Anthropic only referred us to a blog post from June about its plans to expand government access to Claude. 

"Anthropic's mission is to build reliable, interpretable, steerable AI systems," the blog stated. "We're eager to make these tools available through expanded offerings to government users." 

Anthropic's post mentions it has established a method of granting acceptable use policy exceptions for government users, noting that those allowances "are carefully calibrated to enable beneficial use by carefully selected government agencies." We're not told which exceptions have been granted, and Anthropic didn't directly answer our questions on that front.

The existing carve-out structure, Anthropic noted, "allow[s] Claude to be used for legally authorized foreign intelligence analysis … and providing warning in advance of potential military activities, opening a window for diplomacy to prevent or deter them."

"All other restrictions in our general usage policy, including those concerning disinformation campaigns, the design or use of weapons, censorship, and malicious cyber operations, remain," the AI house said.

It could be argued that Anthropic's AUP covers all the most dangerous and critical uses of Claude by the defense and intelligence communities, and thus no Meta-style blanket ban on use by such government entities is needed in its policy. In other words, each specific harmful use is forbidden unless an exception is granted, though Meta's wider-ranging blanket approach seems the more efficient one.

Anthropic's policy, for example, includes prohibitions on using Claude to interfere with the operation of military facilities and bans on "battlefield management applications" and the use of Claude to "facilitate the exchange of illegal or highly regulated weapons or goods."

Ultimately, we'll just have to hope no one decides to emotionally blackmail Claude into violating whichever of Anthropic's rules the US government still has to follow. ®

Editor's note: This article was updated on November 8 to expand upon our observations about the acceptable use policies.
