AI can now fool your eyes: New report warns of hyper-realistic fakes

A new industry report warns that artificial intelligence (AI) systems are becoming so sophisticated that they can easily fool human perception. The “Trends – Artificial Intelligence” report, released in May 2025, highlights the growing challenge of distinguishing AI-generated content from real-life material, raising new concerns about misinformation and public trust.

Conversations that feel human
AI chatbots are now capable of producing text so natural that people struggle to tell it apart from human writing. In a recent Turing test study from UC San Diego, about 73 percent of AI responses were mistaken for human writing.

Image 1 Percentage of testers mistaking AI responses for human, with GPT-4.5 outperforming previous models in a recent Turing test. /Trends–Artificial Intelligence report, p42, citing Cameron Jones & Benjamin Bergen

One example involved GPT-4.5, where participants were asked to identify whether “Witness A” or “Witness B” was an AI bot. A remarkable 87 percent of testers wrongly believed Witness A, an AI bot, was a human.

Image 2 Example of a Turing test conversation where participants mistook GPT-4.5’s responses (left) as human. /Trends–Artificial Intelligence report, p43, citing Cameron Jones & Benjamin Bergen

AI images rival professional photography
Visual content generated by AI is advancing at an extraordinary pace. The report showcases the evolution of Midjourney, a leading AI image-generation tool. Its latest version (v7), released in April 2025, can produce images with realistic lighting, textures and fine detail. A sample image of a woman's necklace with a sunflower pendant demonstrated visual fidelity almost indistinguishable from a real photograph.

Image 3 AI-generated images of a sunflower pendant necklace using Midjourney, showing the tool’s progress from v1 to v7. /Trends–Artificial Intelligence report, p44, citing Midjourney & Gold Penguin

Faces too real to be fake

AI’s ability to generate hyper-realistic human faces is advancing rapidly. The New York Times published an interactive quiz in 2024, inviting readers to test their ability to distinguish AI-generated faces from real photographs. The featured AI images were created using StyleGAN2, a powerful face-generation model. By placing an AI-generated face side by side with a genuine photo, the exercise underscored just how difficult it can be for the average viewer to tell the difference.

Image 4 AI-generated face (left) created with StyleGAN2 compared to a real image (right), from The New York Times interactive quiz. /Trends–Artificial Intelligence report, p45

AI fakes we’ve caught so far

As synthetic content continues to evolve, we have been monitoring and verifying AI-generated false content actively circulating online, from viral videos to fake images. In particular, we have seen frequent use of AI generation in political content, election campaigns and disaster coverage. Here are some of the AI fakes we have investigated:

1. Elections: AI-generated deception targeting voters

AI manipulation targeting elections emerged in 2023 during the Republican primary race. In June 2023, we verified that a video shared by Ron DeSantis's campaign included AI-generated images of Trump embracing Anthony Fauci. During the 2024 presidential race, we examined claims that Kamala Harris's rally images were AI-manipulated. By combining AI detection tools with cross-referencing against live broadcast footage, we confirmed the images were authentic.

2. Politics: AI-generated disinformation targeting sensitive issues

We have verified multiple AI-generated political fakes in recent months. In March 2025, we examined a widely shared image falsely showing European leaders removing their jackets to support Ukraine. Through source verification, reverse image searches and artifact analysis, we confirmed the image had been digitally altered. In May 2025, we debunked a manipulated TikTok video falsely portraying Donald Trump praising the Pakistan Air Force. Our verification process included analyzing lip-sync mismatches, checking official records and news reports, and using AI detection tools, confirming that both the speech and the accompanying visuals were fabricated.

Image 5 AI-manipulated TikTok video falsely portraying Donald Trump praising Pakistan’s Air Force. /Fact Hunter

3. Natural disasters: synthetic catastrophes misleading the public

We have also uncovered numerous AI-generated disaster videos circulating online. After the March 28 Myanmar earthquake, we verified that viral videos showing massive destruction and a water cloud in Bangkok were synthetic. Our verification combined reverse image searches, visual red-flag analysis and source tracing, linking the content to AI video accounts. In May 2025, we debunked a viral video misrepresenting Israeli wildfires. Through visual analysis, we identified distorted license plates, static flame effects and an AI-generation confidence score of 98 percent. We traced the video's origin to an AI art account, confirming it was not authentic footage of the fires.

Image 6 AI-generated video falsely claiming to show damage from Myanmar’s March 28 earthquake. /Fact Hunter

We remain committed to fact-checking such content and helping the public navigate this new wave of synthetic misinformation. See more: Fact Hunter

A call for stronger media literacy

As AI-generated content becomes increasingly sophisticated, it is no longer safe to trust appearances alone. Text, images and even human faces can now be convincingly fabricated by machines. Developing strong media literacy is essential to navigating this new reality: questioning what we see, seeking verification and understanding how AI tools operate.