As AI-generated videos become increasingly realistic, the boundary between real and synthetic content is growing harder to spot.
With experts warning of overexposure and trust fatigue, the rise of tools like Veo and Sora, capable of producing photorealistic video, is prompting new questions about authenticity, responsibility, and how audiences engage with digital media.
Aakarsh Gupta, Global Head of Operations at Nas Daily, which has 70 million subscribers across social media platforms, said AI’s role varies by stage: about 90 per cent in scriptwriting, 10 per cent in production, and 50 per cent in editing.
It speeds up idea generation, research summarisation, and narrative drafting, while humans refine the story. During production, AI’s role is minimal due to the need for human emotion and on-set chemistry. In editing, it’s used extensively for voice cleanup, captions, upscaling, and AI-generated B-roll.
“I don’t see AI as ‘creating’ content, but as a creative assistant. A recent survey found that over 80 per cent of creators use AI in some part of their workflow, but the best ones still rely on human judgment for storytelling. AI accelerates execution, but emotion, context, and authenticity still come from people,” he said.
Only a few months ago, most people could tell AI-generated content apart; now, AI visuals and voices have reached a level of realism where even trained editors sometimes struggle to distinguish them.
Gupta explained that he uses a mix of instinct, experience, and digital forensics.
Visual inconsistencies include too many fingers or hands with oddly bent joints, facial expressions that don’t match the emotion, eyes that don’t blink naturally or feel “glassy”, and hair that blends into the background or morphs during movement. Other giveaways are background patterns that repeat or deform when zoomed, and lighting inconsistencies, like a nose shadow that doesn’t match the sun’s direction.
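As one concrete illustration of the “digital forensics” side of that toolkit, the Python sketch below runs a classical error-level analysis (ELA) pass with Pillow: re-compressing an image and diffing it against the original brightens regions with an inconsistent compression history. ELA predates generative AI and is only a rough screening aid, not a deepfake detector; the quality setting and file names here are illustrative assumptions.

```python
# Error-level analysis (ELA): a classical image-forensics screening aid.
# Bright areas in the output changed most under re-compression, which can
# flag edited or composited regions. Not a deepfake detector on its own.
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return a map whose bright regions changed most under JPEG re-compression."""
    original = Image.open(path).convert("RGB")
    original.save("_recompressed.jpg", "JPEG", quality=quality)
    recompressed = Image.open("_recompressed.jpg").convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    max_diff = max(hi for _, hi in diff.getextrema()) or 1  # avoid divide-by-zero
    scale = 255 // max_diff
    return diff.point(lambda px: min(255, px * scale))

if __name__ == "__main__":
    # "suspect_frame.jpg" is a hypothetical input file.
    error_level_analysis("suspect_frame.jpg").save("ela_map.png")
```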
On the audio side, the texture of the sound is a giveaway: unnatural reverb, an over-smooth tone, or breath patterns that feel too mechanical all point to AI.
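The “over-smooth tone” cue lends itself to an equally crude heuristic. The sketch below, a minimal example rather than a validated detector, uses librosa’s spectral-flatness feature and treats unusually low frame-to-frame variability as a reason to listen more closely; the 0.02 threshold and the file name are invented for illustration.

```python
# Crude audio heuristic: natural speech varies in spectral flatness from
# frame to frame; synthetic voices can sound "over-smooth". Low variability
# is only a prompt for closer inspection, never proof of AI.
import numpy as np
import librosa

def flatness_variability(path: str) -> float:
    """Return the std-dev of per-frame spectral flatness (lower = smoother)."""
    y, sr = librosa.load(path, sr=16000, mono=True)
    flatness = librosa.feature.spectral_flatness(y=y)[0]  # one value per frame
    return float(np.std(flatness))

if __name__ == "__main__":
    score = flatness_variability("sample_voice.wav")  # hypothetical file
    print(f"flatness variability: {score:.4f}")
    if score < 0.02:  # illustrative threshold, not a calibrated cutoff
        print("unusually smooth audio; worth a closer look")
```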
“Creators have an ethical duty to guide audiences. We try to be transparent whenever synthetic elements are used because trust, once lost, is hard to regain. The audience may forgive imperfections, but not manipulation,” he said, adding that recent reports show only 46 per cent of people say they trust AI systems, even as they use them daily. While audiences may admire technological progress, they fear deception.
Creators must now disclose any image, voice, or clip that could be mistaken for real. Platforms like YouTube already mandate labels for realistic AI-generated content, particularly in areas like news, politics, and health. This transparency requirement may become standard across all platforms soon.
Srishti Vatsa, a counselling psychologist, highlighted that AI does not affect everyone in the same way. Teens and young adults absorb AI-enhanced images as identity cues, working professionals face ‘skepticism fatigue’ even though they know manipulation exists, and older adults often over-trust visuals, mistaking AI-generated videos for proof.
While Tier 1 cities show growing fatigue from constant digital manipulation, people in Tier 2 and 3 regions display visual over-trust: AI-generated videos are often accepted as proof, since visuals have long symbolised authenticity.
“Exposure to AI-generated content makes it hard to tell what’s real, not just visually, but neurologically. Our brains learn through repetition. If you keep seeing AI-generated faces, perfect lighting, or synthetic voices, your brain normalises them. Over time, your sensory alarms dull because you stop reacting to them,” she added.
Exposure to fake and deepfake media is eroding emotional trust and critical thinking. Empathy depends on believing events are real; when that belief fades, the brain protects itself by feeling less, and overexposure to simulation weakens empathy. People may disengage not out of apathy, but out of exhaustion from sorting truth from manipulation.
“There’s a trauma aspect called epistemic betrayal. When your sense of what’s real gets violated, it triggers the same responses as betrayal in relationships — distrust, avoidance, vigilance,” Vatsa said.
To prevent AI from generating harmful, misleading, or biased content, companies can build thorough testing and deepfake detection into their mainstream workflows, according to Dr Srinivas Padmanabhuni, CTO of AIENSURED.
The mantra “Trust but verify” emphasises the importance of verifying the authenticity of content before consumption. Developing comprehensive deepfake detection and fact-checking tools is crucial, and their adoption should be evangelised.
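To make “trust but verify” concrete, here is a minimal Python sketch of the kind of publishing gate such tooling could feed. Everything in it is assumed for illustration: the Clip type, detect_deepfake (a stub standing in for whatever model or vendor API a team adopts), and the 0.8 review threshold.

```python
# "Trust but verify" as a pipeline step: every clip passes a detection
# check before publication, and borderline cases go to human review.
from dataclasses import dataclass

@dataclass
class Clip:
    path: str
    creator: str

def detect_deepfake(clip: Clip) -> float:
    """Stub detector returning a synthetic-content score in [0, 1].

    A real system would call a trained model or vendor API here; the
    fixed value only keeps the sketch runnable end to end.
    """
    return 0.5

def ingest(clip: Clip, review_queue: list[Clip]) -> bool:
    """Gate publication on verification; route suspicious clips to humans."""
    score = detect_deepfake(clip)
    if score >= 0.8:               # illustrative threshold
        review_queue.append(clip)  # hold for human review and labelling
        return False
    return True                    # publish, with an AI label if required

if __name__ == "__main__":
    queue: list[Clip] = []
    ok = ingest(Clip("interview.mp4", "newsroom"), queue)
    print("published" if ok else "held for review")
```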
“Given the evolving dynamics of the industry, users today are increasingly open and receptive to AI-generated content. However, when compared to human-created content, a section of the audience still feels that AI often lacks the depth, emotion, and authentic human touch that resonates on a more personal level,” said Vedang Jain, Director at Prachar Communications.
Published on November 11, 2025