MIT’s Dr. Peter S. Park and academics associated with the Center for AI Safety examined deception by artificial intelligence. They conclude that increasingly capable AI systems pose a serious risk: computers are now able to induce false beliefs in people and encourage harmful outcomes.
Co-author Dr. Simon Goldstein argues that the biggest difference between AI and other consequential threats is that AI is completely unregulated. He told Information Age, “Can you imagine letting private companies develop nuclear weapons or super viruses for profit without any government oversight? That needs to change quickly.”
The paper included below argues that current AI systems have learned how to deceive humans, systematically inducing false beliefs in pursuit of outcomes other than the truth. It details several risks of computerized deception, including fraud, election tampering, and loss of human control over AI.
Several protective measures are suggested:
- Regulatory frameworks should subject AI systems to robust risk-assessment requirements.
- Policymakers should implement bot-or-not laws.¹
- Policymakers should prioritize the funding of relevant research, including tools to detect AI deception and to make AI systems less deceptive.
- Proactive work is needed to prevent AI deception from destabilizing the foundations of our society.

British-Canadian computer scientist and cognitive psychologist Geoffrey Hinton helped develop modern artificial intelligence, but now worries the computer technology will cause serious harm.
Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades.
Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
¹ California passed the first American “bot” disclosure regulation. The law requires that any automated online account conspicuously identify itself as a bot if it is being used to influence a person to make a purchase or to vote.


“This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth.”
“…instances of double-crossing opponents, bluffing, pretending to be human, and modifying behaviour…”
Seems that AI is just a digitized version of a politician’s normal playbook.