Google has finally fixed its AI recommendation to use non-toxic glue as a solution to cheese sliding off pizza. “Glue, even non-toxic varieties, is not meant for human consumption,” says Google Gemini today. “It can be harmful if ingested. There was a bit of a funny internet meme going around about using glue in pizza sauce, but that’s definitely not a real solution.”
Google’s situation is ironic. The company whose researchers invented the transformer architecture behind modern gen AI is having trouble teaching its chatbot not to treat satirical Onion articles and Reddit trolls as sources of truth. And Google’s AI has made other high-profile flubs before, costing the company billions in market value. But it’s not just the AI giants that can get in hot water because of something their AIs do. This past February, for instance, a Canadian court ruled that Air Canada must stand behind a promise of a discounted fare made by its chatbot, even though the chatbot’s information was incorrect. And as gen AI is deployed by more companies, especially for high-risk, public-facing use cases, we’re likely to see more examples like this.
According to a McKinsey report released in May, 65% of organizations have adopted gen AI in at least one business function, up from 33% last year. But only 33% of respondents said they’re working to mitigate cybersecurity risks, down from 38% last year. The only significant increase in risk mitigation was in accuracy, where 38% of respondents said they were working on reducing the risk of hallucinations, up from 32% last year.
However, organizations that followed risk management best practices saw the highest returns from their investments. For example, 68% of high performers said gen AI risk awareness and mitigation were required skills for technical talent, compared to just 34% for other companies. And 44% of high performers said they have clear processes in place to embed risk mitigation in gen AI solutions, compared to 23% of other companies.
Executives expect gen AI to have significant impacts on their businesses, says Aisha Tahirkheli, US trusted AI leader at KPMG. “But plans are progressing slower than anticipated because of associated risks,” she says. “Guardrails mitigate those risks head-on. The potential here is really immense, but responsible and ethical deployment is non-negotiable.”
Companies have many strategies they can adopt for responsible AI. It starts with a top-level commitment to doing AI the right way. From there, it continues with establishing company-wide policies, selecting the right projects based on principles of privacy, transparency, fairness, and ethics, and training employees on how to build, deploy, and responsibly use AI.
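In practice, one of the simplest guardrails of the kind described above is a post-processing check that screens a chatbot’s draft reply before it reaches the user, the sort of filter that could have caught the glue-on-pizza answer. The sketch below is a minimal, hypothetical illustration of that pattern, not any vendor’s actual implementation; the pattern list, function names, and refusal message are all invented for the example:

```python
# Minimal sketch of an output guardrail for a customer-facing chatbot.
# All names and patterns here are hypothetical; real systems typically use
# trained safety classifiers rather than keyword lists.

UNSAFE_PATTERNS = [
    "add glue",   # e.g., the pizza-glue advice discussed in the article
    "drink bleach",
]

REFUSAL = "I can't provide that suggestion. Please consult a reliable source."


def guardrail_check(reply: str) -> tuple[bool, str]:
    """Return (allowed, final_text).

    Blocks any draft reply that matches a known-unsafe pattern and
    substitutes a safe refusal; otherwise passes the reply through.
    """
    lowered = reply.lower()
    for pattern in UNSAFE_PATTERNS:
        if pattern in lowered:
            return False, REFUSAL
    return True, reply


def respond(model_reply: str) -> str:
    """Wrap the model's draft answer with the guardrail before returning it."""
    allowed, text = guardrail_check(model_reply)
    # In production, blocked replies would also be logged for human review.
    return text
```

A real deployment would combine several such layers (input filtering, grounding checks against trusted sources, and human escalation paths), which is what “embedding risk mitigation in gen AI solutions” tends to mean operationally.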