In our recent look into Sora – OpenAI’s new AI video generator that can create high-quality videos from text prompts – we uncovered instances where users had produced racist deepfakes of celebrities and other public figures. One of the targets, Mark Cuban, even responded, noting that he tries “to go through and delete them,” underscoring how anyone can be affected.
While our technical team continues to analyze these platforms to inform our own work in AI detection, compliance, and governance, we also wanted to understand how the general public feels about this new wave of generative AI. So we asked them.
In a new survey that we commissioned earlier this month, polling nearly 4,000 U.S. adults, we found growing awareness of Sora, along with growing unease.
Awareness
Three in five (60%) respondents have heard of Sora, with 42% saying they’ve already seen examples of the technology. About one in five (19%) have heard of it but haven’t seen any videos, while 35% said they haven’t heard of it at all.
Awareness of Sora
Perceived Impact
Public opinion is split on what impact Sora and similar tools will have on society. Just over one in four (26%) believe they’ll have a mostly positive effect by empowering creativity and innovation. Meanwhile, 35% view the impact as mixed – useful for creative purposes but risky for misinformation – and 24% believe the technology will be mostly negative, making misinformation harder to control.
Perceived Impact of Sora and Similar Tools
Level of Concern
When asked how concerned they are that tools like Sora could be used to create fake or misleading videos, 83% expressed concern — including more than half (53%) who are extremely concerned and another 30% who are somewhat concerned. Only 3% said they are not very or not at all concerned.
Level of Concern Regarding Misleading AI Videos
Regulation and Oversight
A strong majority (61%) believe AI-generated videos should carry clear labels or watermarks to distinguish them from real footage. Another 21% think access to realistic AI video tools should be restricted to certain users. Only 1.5% said such tools should be left unregulated, while 10% favored open access with penalties for malicious use.
Opinion on Regulation and Oversight of AI Video Tools
These findings reinforce what our research uncovered firsthand: AI video tools are already being used to distort reality, and the public is both aware and alarmed. There’s a growing consensus that while the technology holds promise, detection, transparency, and proactive safeguards are urgently needed.
At Copyleaks, we believe detection must evolve just as quickly as generation. Our work in developing AI image and video detection is built to identify manipulations at scale, before they can mislead, exploit, or go viral.
Sora is just the beginning. The question is not whether synthetic media will flood our feeds – it already has. The question is whether we’re prepared to detect it before it does harm.
Methodology
This survey was conducted by Copyleaks in October 2025 and polled 3,738 U.S. adults using an online panel.