Anthropic Warns Of AI Catastrophe If Governments Don't Regulate In 18 Months
With the US presidential election only days away, AI company Anthropic is advocating for regulation of its own industry -- before it's too late.
On Thursday, the company, which stands out in the industry for its focus on safety, released recommendations for governments to implement "targeted regulation" alongside potentially worrying data on the rise of what it calls "catastrophic" AI risks.
Also: Artificial intelligence, real anxiety: Why we can't stop worrying and love AI
The risks
In a blog post, Anthropic noted how much progress AI models have made in coding and cyber offense in just one year. "On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024)," the company wrote. "Internally, our Frontier Red Team has found that current models can already assist on a broad range of cyber offense-related tasks, and we expect that the next generation of models -- which will be able to plan over long, multi-step tasks -- will be even more effective."
Additionally, the blog post noted that AI systems' scientific understanding improved by nearly 18% between June and September of this year alone, as measured by the GPQA benchmark. OpenAI's o1 model achieved 77.3% on the hardest section of the test; human experts scored 81.2%.
The company also cited a UK AI Safety Institute risk test on several models for chemical, biological, radiological, and nuclear (CBRN) misuse, which found that "models can be used to obtain expert-level knowledge about biology and chemistry." It also found that several models' responses to science questions "were on par with those given by PhD-level experts."
Also: Anthropic's latest AI model can use a computer just like you - mistakes and all
This progress outpaces Anthropic's 2023 prediction that cyber and CBRN risks would become pressing within two to three years. "Based on the progress described above, we believe we are now substantially closer to such risks," the blog said.
Guidelines for governments
"Judicious, narrowly-targeted regulation can allow us to get the best of both worlds: realizing the benefits of AI while mitigating the risks," the blog explained. "Dragging our feet might lead to the worst of both worlds: poorly-designed, knee-jerk regulation that hampers progress while also failing to be effective."
Anthropic suggested guidelines for government action that would reduce risk without hampering innovation in science and commerce, offering its own Responsible Scaling Policy (RSP) as a "prototype" for regulation, though not a replacement for it. Acknowledging that it can be hard to anticipate when guardrails should kick in, Anthropic described its RSP as a proportional risk-management framework that adjusts to AI's growing capabilities through routine testing.
Also: Implementing AI? Check MIT's free database for the risks
"The 'if-then' structure requires safety and security measures to be applied, but only when models become capable enough to warrant them," Anthropic explained.
The company identified three components for successful AI regulation: transparency, incentivizing security, and simplicity and focus.
Currently, the public can't verify whether an AI company is adhering to its own safety guidelines. To create better records, Anthropic said, governments should require companies to "have and publish RSP-like policies," delineate which safeguards will be triggered when, and publish risk evaluations for each generation of their systems. Of course, governments must also have a method of verifying that all those company statements are, in fact, true.
Anthropic also recommended that governments incentivize higher-quality security practices. "Regulators could identify the threat models that RSPs must address, under some standard of reasonableness, while leaving the details to companies. Or they could simply specify the standards an RSP must meet," the company suggested.
Also: Businesses still ready to invest in Gen AI, with risk management a top priority
Even if these incentives are indirect, Anthropic urged governments to keep them flexible. "It is important for regulatory processes to learn from the best practices as they evolve, rather than being static," the blog said -- though that may be difficult for bureaucratic systems to achieve.
It might go without saying, but Anthropic also emphasized that legislation should be easy to understand and implement. Describing ideal regulations as "surgical," the company advocated for "simplicity and focus," encouraging governments to avoid saddling AI companies with unnecessary, distracting "burdens."
"One of the worst things that could happen to the cause of catastrophic risk prevention is a link forming between regulation that's needed to prevent risks and burdensome or illogical rules," the blog stated.
Industry advice
Anthropic also urged its fellow AI companies to implement RSPs that support regulation. It stressed the importance of putting computer security and safety measures in place ahead of time, rather than after risks have caused damage -- and of hiring with that goal in mind.
"Properly implemented, RSPs drive organizational structure and priorities. They become a key part of product roadmaps, rather than just being a policy on paper," the blog noted. Anthropic said RSPs also urge developers to explore and revisit threat models, even if they're abstract.
Also: Today's AI ecosystem is unsustainable for most everyone but Nvidia
So what's next?
"It is critical over the next year that policymakers, the AI industry, safety advocates, civil society, and lawmakers work together to develop an effective regulatory framework that meets the conditions above," Anthropic concluded. "In the US, this will ideally happen at the federal level, though urgency may demand it is instead developed by individual states."