Copyleaks Research: 83% Fear AI Video Manipulation Following Sora’s Launch

October 29, 2025
[Image: A three-panel demonstration of AI video manipulation, adding people and a tube to an empty ocean scene.]


In our recent look into Sora – OpenAI’s new AI video generator that can create high-quality videos from text prompts – we uncovered instances where users had produced racist deepfakes of celebrities and other public figures. One of the targets, Mark Cuban, even responded, noting that he tries “to go through and delete them,” underscoring how anyone can be affected.

While our technical team continues to analyze these platforms to inform our own work in AI detection, compliance, and governance, we also wanted to understand how the general public feels about this new wave of generative AI. So we asked them.

In a new survey that we commissioned earlier this month, polling nearly 4,000 U.S. adults, we found growing awareness of, and unease about, Sora.

Awareness

About three in five respondents (61%) have heard of Sora, with 42% saying they've already seen examples of the technology. About one in five (19%) have heard of it but haven't seen any videos, while 35% said they haven't heard of it at all.

[Chart: Awareness of Sora — Heard & seen examples: 42%; Heard but not seen: 19%; Haven't heard at all: 35%]

Perceived Impact

Public opinion is split on what impact Sora and similar tools will have on society. Over one in four (26%) believe they’ll have a mostly positive effect by empowering creativity and innovation. However, 35% view the impact as mixed – useful for creative purposes but risky for misinformation – and 24% believe the technology will be mostly negative, making misinformation harder to control.

[Chart: Perceived Impact of Sora and Similar Tools — Mostly Positive: 26%; Mixed: 35%; Mostly Negative: 24%]

Level of Concern

When asked how concerned they are that tools like Sora could be used to create fake or misleading videos, 83% expressed concern — including more than half (53%) who are extremely concerned and another 30% who are somewhat concerned. Only 3% said they are not very or not at all concerned.

[Chart: Level of Concern Regarding Misleading AI Videos — Extremely Concerned: 53%; Somewhat Concerned: 30%; Not Very/Not at All Concerned: 3%]

Regulation and Oversight

A strong majority (61%) believe AI-generated videos should carry clear labels or watermarks to distinguish them from real footage. Another 21% think access to realistic AI video tools should be restricted to certain users. Only 1.5% said such tools should be left unregulated, while 10% favored open access with penalties for malicious use.

[Chart: Opinion on Regulation and Oversight of AI Video Tools — Clear Labels/Watermarks: 61%; Restricted Access: 21%; Open Access with Penalties: 10%; Unregulated: 1.5%]

These findings reinforce what our research uncovered firsthand: AI video tools are already being used to distort reality, and the public is both aware and alarmed. There’s growing consensus that while the technology holds promise, detection, transparency, and proactive safeguards are now urgent.

At Copyleaks, we believe detection must evolve just as quickly as generation. Our work in developing AI image and video detection is built to identify manipulations at scale, before they can mislead, exploit, or go viral.

Sora is just the beginning. The question is not whether synthetic media will flood our feeds – it already has. The question is whether we’re prepared to detect it before it does harm.

Methodology

This survey was conducted by Copyleaks in October 2025 and polled 3,738 U.S. adults using an online panel.


Build trust, protect your brand, and stay ahead in the age of AI.

Request a custom Copyleaks demo and see how the world’s top enterprises ensure trust and transparency.