Synthetic media isn't new. Photographs have been edited since the darkroom. Films have used effects for a century. What's changed is the scale and the accessibility: anyone can now generate a convincing image, clone a voice from a few seconds of audio, or produce a video of someone saying something they never said. The tools are neutral. What matters is what people do with them, and whether they say so.
Synthetic media is content (images, video, audio, or text) generated or significantly altered by AI. The term covers a wide spectrum, from harmless creative work to sophisticated fraud. Understanding that spectrum is the starting point.
The key distinction: Synthetic media is not inherently deceptive. A film uses synthetic media. An advertisement uses synthetic media. An AI-assisted news graphic uses synthetic media. What makes it a problem is undisclosed use — when the audience has no way of knowing the content was generated or manipulated. Disclosure is the line.
The technical barrier has effectively collapsed. What required a film studio in 2015 requires a consumer subscription in 2026. Understanding how these tools work helps you understand why detection is so difficult.
The accessibility of these tools is the central fact. OpenAI's Sora, Google's Veo, and a growing ecosystem of commercial platforms have put convincing generation within reach of anyone with a subscription. As of 2025, voice cloning software needs as little as three seconds of audio, and a video deepfake can be produced in 45 minutes with freely available tools.
The detection problem: Independent research shows that state-of-the-art deepfake detectors can lose up to 50% accuracy when tested against new, "in the wild" content not in their training data. Detection is a moving target — the same AI advancement that makes generation better makes detection harder. Human judgment alone is no longer a reliable defense.
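To make that generalization gap concrete, here is a deliberately toy sketch in Python. Nothing in it is a real detector: the "model" is a single threshold on a made-up artifact score, and both datasets are invented numbers. It only illustrates the mechanism the research describes, namely that a detector tuned to the artifacts of one generation of fakes can score well on its own benchmark and near or below chance on content from newer tools.

```python
# Toy illustration of the deepfake-detection generalization gap.
# Everything here is hypothetical: the "detector" is one threshold on a
# made-up artifact score, and both datasets are invented numbers.

def accuracy(detector, samples):
    """Fraction of (score, is_fake) samples the detector labels correctly."""
    correct = sum(1 for score, is_fake in samples if detector(score) == is_fake)
    return correct / len(samples)

# The detector "learned" that fakes from an older generator leave strong
# compression artifacts, so it flags anything scoring above 0.5.
detector = lambda artifact_score: artifact_score > 0.5

# Benchmark set: fakes from the generator the detector trained against.
benchmark = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]

# "In the wild" set: a newer generator leaves weaker artifacts, while real
# footage re-encoded by social platforms picks up artifacts of its own.
in_the_wild = [(0.4, True), (0.3, True), (0.7, False), (0.2, False)]

print(f"benchmark accuracy:   {accuracy(detector, benchmark):.0%}")    # 100%
print(f"in-the-wild accuracy: {accuracy(detector, in_the_wild):.0%}")  # 25%
```

The gap between those two numbers is the whole story: accuracy measured on yesterday's fakes says little about tomorrow's.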
These are not hypothetical risks. The following incidents are verified and sourced. They illustrate the range of harm — financial, political, personal — that synthetic media has already caused at scale.
The liar's dividend: The existence of deepfakes has created a secondary harm. It gives people cover to dismiss authentic recordings as fake. When any video can be claimed synthetic, documented evidence of real events becomes easier to discredit. This effect is now documented in peer-reviewed research and is considered one of the most structurally significant risks of widespread synthetic media.
Using AI tools to create content isn't the problem. Not saying so is. Disclosure is the standard that distinguishes legitimate use from manipulation — and it's the standard responsible creators are already adopting.
The question isn't whether synthetic media should exist. It does, and it will. The question is whether the people creating it are honest about what it is. A film studio uses visual effects and lists them in the credits. An advertiser uses AI-generated imagery and labels it. A marketer uses a voice clone of a spokesperson with that spokesperson's documented consent. The tool is not the issue. The disclosure is.
An honest note on C2PA adoption: As of 2025, adoption of Content Credentials is growing but uneven. Major AI image generators implement the standard. Most social media platforms do not yet display credential information to viewers. And a single re-save through a non-C2PA tool writes the file without its manifest, silently stripping every credential. The absence of a credential therefore does not prove a file is synthetic, only that it lacks verifiable provenance. C2PA is a meaningful step forward, not a complete solution.
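For readers who want to see what "lacks verifiable provenance" means at the file level, here is a minimal Python sketch. It is emphatically not a verifier: it only scans a file's raw bytes for the `c2pa` JUMBF label that Content Credentials manifests carry, so it can suggest a manifest is present but can never prove anything about a file that lacks one. Real validation (hash checks, certificate chains) requires proper C2PA tooling; the filename below is a placeholder.

```python
# Crude provenance probe, not a verifier. It scans raw bytes for the
# "c2pa" JUMBF label that Content Credentials manifests carry. A hit
# suggests a manifest is present; a miss proves nothing, because one
# re-save through a non-C2PA tool strips the manifest entirely.

from pathlib import Path

def has_c2pa_marker(path: str) -> bool:
    """Return True if the file contains the C2PA manifest label anywhere."""
    return b"c2pa" in Path(path).read_bytes()

if has_c2pa_marker("photo.jpg"):  # "photo.jpg" is a placeholder path
    print("Manifest marker found; inspect it with real C2PA tooling.")
else:
    print("No marker found. This does NOT mean the image is synthetic.")
```

Note the asymmetry the sketch encodes: presence is checkable, absence is silent. That asymmetry is exactly why a missing credential can never be treated as evidence of manipulation.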
The video below was produced using AI video tools with deliberately watermarked stock footage. The watermarks are intentional — a demonstration of what transparent disclosure looks like in practice.
This is what the creator's responsibility looks like in practice — not avoiding AI tools, but using them honestly. The same principle applies to every piece of synthetic media: the tool is not the issue. The disclosure is.
Regulation in this space is moving faster than in almost any other area of AI law, driven by documented harm and bipartisan concern. What exists is still a patchwork. But the direction is clear.
The constitutional tension: Synthetic media regulation faces significant First Amendment challenges in the US. In 2024, a federal judge blocked California's election-deepfake prohibition (AB 2839) over concerns that it unconstitutionally restricted political speech. The line between protected creative expression and harmful impersonation is not always clear, and courts are still developing frameworks. The EU's approach, mandatory labeling rather than prohibition, is less vulnerable to these challenges and may influence future US legislation.
Seven questions based on verified facts from this guide. An honest measure of what you now know.
Synthetic media literacy isn't about fear. It's about developing the habits that let you navigate a media environment that has fundamentally changed.
Every fact in this guide is drawn from the sources below. Pending legal and regulatory matters are noted as such.
About this guide
I'm Jennifer Stivers, founder of Jenntelligence.ai, a division of MarketMind Consulting. I have a psychology degree and spent my career in marketing — at Apple, at a venture-backed startup that went public, at organizations like Coursera and GlobalEnglish. I built these guides using AI tools. The research questions, editorial decisions, and responsibility for accuracy are mine.
A note on accuracy
This guide reflects my research and editorial judgment as of the date shown. Synthetic media law, technical standards, and the legal cases covered here change quickly. I update content when I become aware of significant changes, but I cannot guarantee real-time accuracy. Pending legal and regulatory matters are noted as such and should not be read as final. If you find something that needs correction, I want to know. Contact me here. Links to external sources are provided for reference; I am not responsible for changes to third-party content after publication.