With the rise of artificial intelligence (AI), artificial general intelligence (AGI), the point at which machines match or outperform humans across most tasks, remains speculative.
Tech leaders such as Sam Altman and Elon Musk predict that AGI could arrive between 2025 and 2028, but the evidence suggests otherwise. The real risk in 2025 is not superintelligent machines but humans misusing the AI tools that already exist.
Unintentional Misuse of AI Tools
AI adoption across several fields has had unforeseen consequences, especially in law. Lawyers who relied too heavily on AI have been sanctioned for submitting flawed documents drafted by chatbots. Here are a few examples that WIRED included in its report:
- Attorneys Steven Schwartz and Peter LoDuca were fined $5,000 in New York for submitting fabricated citations generated by ChatGPT.
- Lawyer Chong Ke was penalized in British Columbia for including a series of fabricated cases in legal submissions.
- Colorado attorney Zachariah Crabill was sanctioned after blaming AI-generated mistakes in several court cases on an intern.
These cases highlight the danger of trusting AI output without verification, which can lead to reputational damage or legal penalties. AI-produced information should always be checked before it is relied upon.
Intentional Misuse: The Deepfake Dilemma
AI tools have also been weaponized for malicious purposes. Nonconsensual deepfakes, such as the sexually explicit images of Taylor Swift created with Microsoft's Designer AI tool, have become rampant.
Despite built-in safeguards, simple loopholes, such as misspelling a name, can bypass the restrictions.
Open-source deepfake tools make the problem even worse. Such misuse harms not only the individuals targeted but also erodes societal trust, making fake content harder to regulate and combat.
The 'Liar's Dividend': Manipulating Perception
AI-generated content blurs the line between reality and fiction, enabling a phenomenon known as the "liar's dividend": people can dismiss genuine evidence as fake, undermining accountability. Examples include:
- Tesla argued that a 2016 video of Elon Musk exaggerating Autopilot's safety could be a deepfake.
- An Indian politician claimed that audio clips of him were doctored, even after they were verified as authentic.
- January 6 riot defendants falsely claimed that incriminating videos were AI-generated deepfakes.
As AI-generated audio, video, and text become more realistic, distinguishing truth from fabrication will become increasingly difficult.
Dubious AI Products as a Means of Exploitation
Companies riding the AI hype often sell flawed products that make critical decisions about people's lives. The hiring platform Retorio claims it can assess a candidate's job suitability through a video interview.
Research, however, showed the system can be manipulated by superficial changes, such as swapping the background or wearing glasses.
In the Netherlands, the tax authority used an algorithm that wrongly accused thousands of parents of welfare fraud and demanded exorbitant repayments. The scandal was so severe that the Prime Minister and his cabinet resigned.
Mitigating AI Risks in 2025
The most significant risks in 2025 arise from human misuse of AI, not from rogue superintelligence. They include:
- Over-reliance on AI where it fails (for example, legal errors).
- Intentional misuse (such as deepfakes and misinformation).
- Poorly designed AI systems that cause harm to people.
Governments, companies, and society must address these immediate concerns first. While the possibility of AGI remains distant, focusing on today's concrete AI risks is the best protection against real-world harm.
Originally published on TechTimes.