UK's New Thinking On AI: Unless It's Causing Serious Bother, You Can Crack On

Comment The UK government on Friday said its AI Safety Institute will henceforth be known as its AI Security Institute, a rebrand that signals a shift in regulatory ambition: from ensuring AI models are made with wholesome content to primarily punishing AI-abetted crime.

"This new name will reflect its focus on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber-attacks, and enable crimes such as fraud and child sexual abuse," the government said in a statement of the retitled public body.

AI safety – "research, strategies, and policies aimed at ensuring these systems are reliable, aligned with human values, and not causing serious harm," as defined by The Brookings Institution – has seen better days.

Between Meta's dissolution of its Responsible AI Team in late 2023, the refusal of Apple and Meta to sign the EU's AI Pact last year, the Trump administration ripping up Biden-era AI safety rules, and concern about AI competition from China, there appears to be less appetite for preventive regulation – like what the US Food and Drug Administration tries to do with the food supply – and more interest in proscriptive regulation – enjoy your biased, racist AI but don't use it to commit acts of terror or sex crimes.

"[The AI Security Institute] will not focus on bias or freedom of speech, but on advancing our understanding of the most serious risks posed by the technology to build up a scientific basis of evidence which will help policymakers to keep the country safe as AI develops," the UK government said, championing unfettered discourse in a way not evident in its reported stance on encryption.

Put more bluntly, the UK is determined not to regulate the country out of the economic benefit of AI investment and associated labor consequences – AI jobs and AI job replacement.

Peter Kyle, Secretary of State for Science, Innovation, and Technology, said as much in a statement: "The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change." That plan being the Labour government's blueprint of priorities.

A key partner in that plan now is Anthropic, which has distinguished itself from rival OpenAI by staking out the moral high ground among commercial AI firms. Built by ex-OpenAI staff and others, it identifies itself as "a safety-first company," though whether that matters much anymore remains to be seen.

Anthropic and the UK's Department for Science, Innovation and Technology (DSIT) have signed a Memorandum of Understanding to make AI tools that can be integrated into UK government services for citizens.

"AI has the potential to transform how governments serve their citizens," said Dario Amodei, CEO and co-founder of Anthropic, in a statement. "We look forward to exploring how Anthropic's AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents."

Allowing AI to deliver government services has gone swimmingly in New York City, where the MyCity Chatbot, which relies on Microsoft's Azure AI, last year gave business owners advice that violated the law. The Big Apple addressed this not by demanding an AI model that gets things right, but by adding a disclaimer in a popup window.

The disclaimer dialogue window also comes with a you're-to-blame-if-you-use-this checkbox, "I agree to the MyCity Chatbot's beta limitations." Problem solved.

Anthropic appears to be more optimistic about its technology and cites several government agencies that have already befriended its Claude family of LLMs. The San Francisco upstart notes that the Washington, DC Department of Health has partnered with Accenture to build a Claude-based bilingual chatbot to make its services more accessible and to provide health information on demand. Then there's the European Parliament, which uses Claude for document search and analysis – so far without the pangs of regret evident among those using AI for legal support.

In England, Swindon Borough Council offers a Claude-based tool called "Simply Readable," hosted on Amazon Bedrock, that makes documents more accessible for people with disabilities by reformatting them with larger font, increased spacing, and additional images.

The result has been significant financial savings, it's claimed. Where previously documents of 5-10 pages cost around £600 to convert, Simply Readable does the job for just 7-10 pence, freeing funds for other social services.

According to the UK's Local Government Association (LGA), the tool has delivered a 749,900 percent return on investment.

"This staggering figure underscores the transformative potential of 'Simply Readable' and AI-powered solutions in promoting social inclusion while achieving significant cost savings and improved operational efficiency," the LGA said earlier this month.

No details are offered on whether these AI savings entailed a cost in jobs, or expenditure in the form of Jobseeker's Allowance.

But Anthropic in time may have some idea about that. The UK government deal involves using the AI firm's recently announced Economic Index, which uses anonymized Claude conversations to estimate AI's impact on labor markets. ®
