Copyleaks Research: 82% Admit They’ve Mistaken AI Images for Real – and Public Trust Is Crumbling

From hyperrealistic portraits of people who don’t exist to fabricated political photos circulating on social media, AI-generated imagery is blurring the boundary between fact and fiction. To understand how people perceive and respond to this new visual reality, Copyleaks surveyed nearly 4,000 U.S. adults about their experiences with AI-generated and manipulated images. The results reveal both widespread exposure and deep concern.

Ubiquitous Exposure, Widespread Deception

AI-generated visuals are no longer rare or novel. Three in five respondents (61%) report seeing manipulated images often, and another 33% say they come across them occasionally. Only 3% said they’ve never seen one.

The reach of these visuals comes with a cost: 82% admitted they’ve believed an AI-generated image was real at least once. Upon realizing they’d been misled, respondents described feeling deceived (38%), frustrated (24%), or angry (9%). Over half (51%) suspect they see fake images daily, and another 36% think they encounter them weekly – meaning visual skepticism is now a routine part of online life.

Misinformation, Manipulation, and Erosion of Trust

The consequences extend far beyond individual confusion. When asked about their biggest concerns around AI-generated images, respondents pointed to fake news and misinformation (49%), political propaganda (21%), and criminal scams (18%). These fears align with a broader sentiment of distrust: 82% said their confidence in media and institutions has decreased as a direct result of AI-generated content.

Responsibility for this problem, according to respondents, lies primarily with social media platforms (45%), followed by AI companies (30%) and governments (15%). When it comes to fixing it, however, the public is divided on who should take the lead: governments and regulators (32%), social platforms (28%), and AI developers (26%) each claim a significant share of the burden.

What People Want: Detection, Regulation, and Transparency

The path forward, according to respondents, centers on stronger guardrails and better tools. The top-requested solution is improved AI detection (29%), followed by stricter regulation (28%) and more effective content moderation (15%).

A significant majority (84%) support visible labels or watermarks to clearly mark AI-generated images, and an equal share said they would use a verification tool to confirm authenticity. While 33% remain hopeful about AI’s potential and 27% are cautiously optimistic, 17% worry that the risks outweigh the benefits, and 7% are pessimistic, believing public trust will collapse entirely.
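To make the labeling idea concrete, here is a minimal sketch (not a Copyleaks tool) of what a simple provenance check could look like. It assumes the generator embedded the IPTC “trainedAlgorithmicMedia” digital-source-type marker, which some AI image tools now write into a file’s XMP metadata; the file names are hypothetical. It also illustrates the approach’s core weakness – metadata labels are easily stripped or never written at all – which is exactly why respondents also want dedicated detection tools.

```python
# Naive check for the IPTC "trainedAlgorithmicMedia" marker that some
# AI tools embed in image metadata (XMP). This is an illustration of
# label-based verification, not a detector: most fakes carry no label,
# and metadata can be stripped by a simple re-save or screenshot.

from pathlib import Path

# IPTC Digital Source Type value used to label AI-generated media.
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def has_ai_label(image_path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI-media marker."""
    data = Path(image_path).read_bytes()
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    for name in ["portrait.jpg", "screenshot.png"]:  # hypothetical files
        if Path(name).exists():
            verdict = "labeled AI-generated" if has_ai_label(name) else "no AI label found"
            print(f"{name}: {verdict}")
```

Note the asymmetry: a positive result is meaningful, but a negative one proves nothing, since unlabeled images may still be synthetic.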

Seeing Through the Synthetic

These findings paint a stark picture of a public that’s inundated, uncertain, and increasingly skeptical of what it sees. The challenge isn’t just spotting manipulated content, but also restoring confidence in the visual world itself.

At Copyleaks, we’re building originality authentication technologies to identify AI-generated and AI-altered images and videos at scale, empowering organizations, educators, and platforms to maintain trust in an era where seeing is no longer believing.

Synthetic imagery is already here. The next step is making sure we can see through it.

Methodology

This survey was conducted by Copyleaks in October 2025 and polled 3,829 U.S. adults using an online panel.

Build trust, protect your brand, and stay ahead in the age of AI.

Request a custom Copyleaks demo and see how the world’s top enterprises ensure trust and transparency.