Google Warns Its Own Employees: Do Not Use Code Generated By Bard
AI in brief Google has warned its own employees not to disclose confidential information or use the code generated by its AI chatbot, Bard.
The policy isn't surprising, given the Chocolate Factory also advised users not to include sensitive information in their conversations with Bard in an updated privacy notice. Other large firms have similarly cautioned their staff against leaking proprietary documents or code, and have banned them from using other AI chatbots.
The internal warning at Google, however, raises concerns that AI tools built by private concerns cannot be trusted – especially if the creators themselves don't use them due to privacy and security risks.
Cautioning its own workers not to directly use code generated by Bard undermines Google's claims its chatbot can help developers become more productive. The search and ads dominator told Reuters its internal ban was introduced because Bard can output "undesired code suggestions." Such issues could lead to buggy programs or to complex, bloated software that costs developers more time to fix than if they hadn't used AI to write code at all.
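As a hypothetical illustration of the sort of subtle defect that can turn generated code into a net time sink – this Python snippet is invented for this piece, not actual Bard output – consider a function that looks correct at a glance but quietly shares state between calls:

# Hypothetical example of a subtle bug of the kind code assistants can emit.
def add_tag(tag, tags=[]):    # bug: the default list is created once and reused
    tags.append(tag)
    return tags

print(add_tag("a"))   # ['a']
print(add_tag("b"))   # ['a', 'b'] – state leaks in from the first call

# The safe idiom builds a fresh list on each call.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(add_tag_fixed("a"))  # ['a']
print(add_tag_fixed("b"))  # ['b']

A reviewer skimming a diff could easily wave the first version through, and that class of plausible-but-wrong suggestion is exactly what internal guidance like Google's appears aimed at.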
Microsoft-backed voice AI maker sued
Nuance, a voice recognition software developer acquired by Microsoft, has been accused of recording and using people's voices without permission in an amended lawsuit filed last week.
Three people sued the firm, accusing it of violating the California Invasion of Privacy Act – which states that businesses cannot wiretap consumer communications or record people without their explicit written consent. The plaintiffs claim Nuance records people's voices during phone calls with call centers that use its technology to verify callers.
"Nuance performs its voice examination entirely in the 'background of each engagement' or phone call," the plaintiffs claimed. "In other words, Nuance listens to the consumer's voice quietly in the background of a call, and in such a way that consumers will likely be entirely unaware they are unknowingly interacting with a third party company. This surreptitious voice print capture, recording, examination, and analysis process is one of the core components of Nuance's overall biometric security suite."
They argue that recording people's voices exposes them to risks – they could be identified when discussing sensitive personal information – and means their voices could be cloned to bypass Nuance's own security features.
"If left unchecked, California citizens are at risk of unknowingly having their voices analyzed and mined for data by third parties to make various determinations about their lifestyle, health, credibility, trustworthiness – and above all determine if they are in fact who they claim to be," the court documents argue.
The Register has asked Nuance for comment.
- AI is going to eat itself: Experiment shows people training bots are using bots
- Google Lens now can spot problematic skin spots, or not
- Euro Parliament green lights its AI safety, privacy law
- Google's Bard barred while trying to enter Europe
Google does not support the idea of a new federal AI regulatory agency
Google's DeepMind AI lab does not want the US government to set up an agency singularly focused on regulating AI.
Instead, it believes the job should be split across different departments, according to a 33-page report [PDF] obtained by the Washington Post. The document was submitted in response to an open request for public comment launched by the National Telecommunications and Information Administration in April.
Google's AI subsidiary called for "a multi-layered, multi-stakeholder approach to AI governance" and supported a "hub-and-spoke approach" – whereby a central body like NIST could oversee and guide policies and issues tackled by numerous agencies with different areas of expertise.
"AI will present unique issues in financial services, health care, and other regulated industries and issue areas that will benefit from the expertise of regulators with experience in those sectors – which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed," the document states.
Google DeepMind's view differs from that of other companies, including OpenAI and Microsoft, as well as policy experts and lawmakers who support building an AI-focused agency to tackle regulation.
Microsoft rushed to release the new Bing despite OpenAI's warnings
OpenAI reportedly cautioned Microsoft against releasing its GPT-4-powered Bing chatbot too quickly, warning that it could generate false information and inappropriate language.
Bing shocked users with its creepy tone and sometimes manipulative or threatening behaviour when it launched. Later, Microsoft restricted conversations to prevent the chatbot going off the rails. OpenAI had previously urged the tech titan to hold back on releasing the product to work on its issues.
But Microsoft didn't seem to listen and went ahead anyway, according to the Wall Street Journal. That wasn't the only conflict between the two AI allies, however. Months before Bing launched, OpenAI released ChatGPT despite Microsoft's concerns that it could steal the limelight from the software giant's AI-powered web search engine.
Microsoft has a 49 per cent stake in OpenAI, and gets to access and deploy the startup's technology ahead of rivals. Unlike with GPT-3, however, Microsoft doesn't have exclusive rights to license GPT-4. At times, this can make things awkward – OpenAI will often be courting the same clients as Microsoft or other businesses that are directly competing with its investor.
Over time, this could make their relationship rocky. "What puts them on more of a collision course is both sides need to make money," said Oren Etzioni, former CEO of the Allen Institute for Artificial Intelligence. "The conflict is they'll both be trying to make money with similar products." ®