
Synthetic Media and Why It Matters

AI-generated images, video, audio, and text are real, useful, and spreading fast. Here is how they work and what every viewer — and every creator — needs to know.

Synthetic media isn't new. Every photograph has been edited. Every film has used effects. What's changed is the scale and the accessibility. Anyone can now generate a convincing image, clone a voice with seconds of audio, or produce a video of someone saying something they never said. The tools are neutral. What matters is what people do with them — and whether they say so.

$25.6M
Lost by engineering firm Arup in a single deepfake video call fraud — January 2024
8M
Online deepfakes estimated in 2025 — up from 500,000 in 2023, a roughly sixteen-fold increase in two years
3 sec
Of audio needed to clone a voice with 85% accuracy — scraped from social media or video
$40B
Deloitte's projection for AI-enabled fraud losses in the US by 2027
Module 01

What synthetic media actually is

Synthetic media is content — images, video, audio, or text — generated or significantly altered by AI. The term covers a wide spectrum, from harmless creative tools to sophisticated fraud. Understanding the spectrum is the starting point.

Legitimate uses
  • Visual effects and entertainment production
  • Accessibility — dubbing and translation at scale
  • Historical and educational reconstruction
  • Creative expression, art, and experimentation
  • Marketing and content production
  • Medical training simulations
Documented harms
  • Financial fraud through impersonation
  • Non-consensual intimate imagery
  • Political disinformation and voter manipulation
  • Corporate espionage and identity theft
  • Harassment of private individuals
  • Fabricated evidence in legal proceedings
Four categories of synthetic media
Image
AI-generated images
Photorealistic images of people, places, and events that never existed — generated from text prompts. Tools include Midjourney, Adobe Firefly, DALL-E, and Stable Diffusion. At current quality levels, most AI-generated faces are indistinguishable from photographs to the average viewer.
Video
AI-generated and manipulated video
Video generated entirely by AI, or existing footage manipulated to show someone saying or doing something they didn't. Face-swapping, lip-sync manipulation, and full video generation are all available through consumer tools. OpenAI's Sora, Google's Veo 3, and numerous commercial platforms can generate polished video from text prompts in minutes.
Audio
Voice cloning and audio synthesis
AI systems can clone a person's voice from as little as three seconds of audio — producing speech that is perceptually identical to the original. Voice cloning has become the most widespread form of synthetic media fraud. A 2024 McAfee study found that 1 in 4 adults have experienced an AI voice scam or know someone who has.
Text
AI-generated text
Text produced by large language models — articles, emails, social posts, scripts, reviews — written in the style of a person, organization, or publication. At scale, AI-generated text is being used for disinformation campaigns, synthetic product reviews, and automated social media influence operations.

The key distinction: Synthetic media is not inherently deceptive. A film uses synthetic media. An advertisement uses synthetic media. An AI-assisted news graphic uses synthetic media. What makes it a problem is undisclosed use — when the audience has no way of knowing the content was generated or manipulated. Disclosure is the line.

Module 02

How synthetic media is made

The technical barrier has effectively collapsed. What required a film studio in 2015 requires a consumer subscription in 2026. Understanding how these tools work helps you understand why detection is so difficult.

The underlying technologies
GANs
Generative Adversarial Networks
Two AI systems run simultaneously — one generating synthetic content, one trying to detect whether it's fake. The generator improves by fooling the detector. The result is content that becomes progressively harder to distinguish from authentic material. GANs were the dominant technology for face-swap deepfakes through the early 2020s. A minimal sketch of this adversarial loop appears after this list.
Diffusion
Diffusion models
The technology behind most current image generators. The model learns to reverse a process of adding noise to images — essentially learning to reconstruct realistic images from randomness. Diffusion models underpin Adobe Firefly, DALL-E 3, Stable Diffusion, and Midjourney. They are significantly more controllable and higher quality than earlier GAN approaches. A toy sketch of the diffusion training objective also appears after this list.
TTS
Text-to-speech and voice cloning
Modern voice synthesis models learn the acoustic characteristics of a target voice — pitch, rhythm, cadence, breathing — from a short audio sample. They then generate new speech in that voice from any text input. The same technology has legitimate applications in accessibility, audiobook production, and translation. It also enables convincing voice fraud.
Video
Video generation models
Large video generation models like Sora and Veo 3 generate coherent, temporally consistent video from text or image prompts. As of 2025, these models produce output that is indistinguishable from authentic footage for many viewers, particularly at the resolutions typical of social media. The perceptual tells that once gave away synthetic video have largely disappeared.
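
To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop in PyTorch. Everything in it is a placeholder for illustration: the tiny fully connected networks, the dimensions, and the random stand-in for real training data. It shows the structure of the adversarial game, not any production deepfake system.

```python
# Minimal sketch of a GAN training loop (illustrative only, not a production system).
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # placeholder dimensions

generator = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA))
discriminator = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, DATA)               # stand-in for real training images
    fake = generator(torch.randn(32, LATENT))  # generator's attempt to pass as real

    # Detector's turn: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator's turn: improve by making the detector call fakes "real".
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The two optimizers pull in opposite directions, and that tension is what drives output quality upward over training.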
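
In the same spirit, here is a toy sketch of the DDPM-style diffusion training objective: corrupt data with noise at a random timestep, then train a network to predict that noise. The dimensions, noise schedule, and stand-in dataset are illustrative assumptions.

```python
# Toy sketch of the DDPM-style diffusion training objective (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

DATA, STEPS = 64, 1000
betas = torch.linspace(1e-4, 0.02, STEPS)      # noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal retained per step

# A real system uses a large U-Net; a tiny MLP stands in here.
denoiser = nn.Sequential(nn.Linear(DATA + 1, 128), nn.ReLU(), nn.Linear(128, DATA))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for _ in range(1000):
    x0 = torch.randn(32, DATA)                 # stand-in for real images
    t = torch.randint(0, STEPS, (32,))
    noise = torch.randn_like(x0)

    # Forward process: blend each image with noise according to its timestep.
    a = alpha_bar[t].unsqueeze(1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise

    # The network sees the noisy image plus its timestep and predicts the noise.
    t_feat = (t.float() / STEPS).unsqueeze(1)
    loss = F.mse_loss(denoiser(torch.cat([x_t, t_feat], dim=1)), noise)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Generation runs the learned process in reverse: starting from pure noise, the trained network repeatedly subtracts its predicted noise until a coherent image remains.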

The accessibility of these tools is the central fact. OpenAI's Sora, Google's Veo, and a growing ecosystem of commercial platforms mean that generating convincing synthetic media now requires a subscription, not a studio. As of 2025, voice cloning software requires as little as three seconds of audio. Video deepfakes can be created in 45 minutes using freely available tools.

The detection problem: Independent research shows that state-of-the-art deepfake detectors can lose up to 50% accuracy when tested against new, "in the wild" content not in their training data. Detection is a moving target — the same AI advancement that makes generation better makes detection harder. Human judgment alone is no longer a reliable defense.

Module 03

Documented harm

These are not hypothetical risks. The following incidents are verified and sourced. They illustrate the range of harm — financial, political, personal — that synthetic media has already caused at scale.

$25.6M
Arup Group · January 2024
An employee at British engineering firm Arup joined a video call with a deepfaked CFO and multiple AI-generated colleagues. Every person on the call except the victim was synthetic. The employee authorized 15 wire transfers totaling $25.6 million before the fraud was discovered weeks later. Hong Kong police confirmed the case. Arup later acknowledged the incident publicly.
47M
Taylor Swift · January 2024
Non-consensual explicit deepfake images of Taylor Swift were viewed approximately 47 million times on platform X before being taken down. The incident prompted Congressional hearings and directly contributed to passage of the TAKE IT DOWN Act in May 2025, which criminalizes non-consensual intimate deepfakes at the federal level.
1 in 4
Voice scams · McAfee 2024
A 2024 McAfee study found that 1 in 4 American adults have experienced an AI voice scam or know someone who has — a call using a cloned voice to impersonate a family member, colleague, or authority figure. 1 in 10 reported being personally targeted. Voice cloning requires as little as three seconds of audio publicly available from social media or video.
$35M
Investment scams · Europe/Canada 2022–2025
Fraud networks across Europe and Canada used deepfake videos and voice cloning of prominent figures — politicians, business leaders, journalists — to promote fraudulent investment schemes through social media advertising and cloned news sites. Over 6,000 individuals lost a combined $35 million. Consumer protection authorities in multiple countries issued public warnings.
Additional documented incidents
Political
Electoral synthetic media · 2024
AI-generated audio and video were used in elections across the US, India, and Europe in 2024 — for both legitimate outreach and deliberate manipulation. In the US, an AI-generated robocall impersonating a political figure was used to suppress voter turnout in a primary election, leading to indictment of the consultant responsible. The FCC subsequently banned AI-generated voices in robocalls. In India, AI-generated candidate videos were openly used for translation and voter outreach. The technology is now a standard campaign tool globally, used across the political spectrum.
Corporate
Ferrari CEO impersonation attempt · 2024
Fraudsters attempted to impersonate Ferrari CEO Benedetto Vigna using an AI-cloned voice that replicated his distinctive southern Italian accent. The attempt was only stopped after an executive asked a question only Vigna would know the answer to. Similar attempts have targeted WPP CEO Mark Read and executives across multiple industries.
Legal
Deepfake defense strategies · ongoing
Defense attorneys in criminal cases have begun challenging authentic recordings as potentially deepfaked — what researchers call the "liar's dividend." In Wisconsin v. Rittenhouse, the defense challenged zoomed video evidence on grounds that AI processing could have altered it. The US Judicial Conference considered rule amendments for AI-generated evidence in 2025. Courts are still developing adequate frameworks.
School
Student-generated deepfakes · 2024–2025
Schools across the US have reported cases of students using accessible deepfake tools to create harassing and sexually explicit synthetic images of classmates and teachers. Stanford HAI documented the accessibility of these tools to minors with no technical expertise. The harm is real even when the content is entirely synthetic. Lawmakers have struggled to adapt existing cyberbullying laws.

The liar's dividend: The existence of deepfakes has created a secondary harm — the ability for people to dismiss authentic recordings as probably fake. When any video can be claimed synthetic, documented evidence of real events becomes easier to discredit. This effect is now documented in peer-reviewed research and is considered one of the most structurally significant risks of widespread synthetic media.

Module 04

The creator's responsibility

Using AI tools to create content isn't the problem. Not saying so is. Disclosure is the standard that distinguishes legitimate use from manipulation — and it's the standard responsible creators are already adopting.

The question isn't whether synthetic media should exist. It does, and it will. The question is whether the people creating it are honest about what it is. A film studio uses visual effects and lists them in the credits. An advertiser uses AI-generated imagery and labels it. A marketer uses a voice clone of a spokesperson with that spokesperson's documented consent. The tool is not the issue. The disclosure is.

"Content Credentials are essentially a nutrition label for digital content — showing who produced it, when, and what tools were used." — Coalition for Content Provenance and Authenticity (C2PA)
What responsible creators do
Label
Disclose AI involvement clearly
Responsible creators label AI-generated or AI-assisted content at the point of publication — in the caption, in the description, in the metadata. Not buried in terms of service. Not implied. Stated. This is not a legal requirement in most contexts yet. It is a professional standard that the industry's most credible practitioners are already following.
Consent
Obtain consent for likeness and voice use
Creating synthetic content that uses a real person's face, voice, or likeness without their knowledge or consent is the clearest ethical violation in this space. New York's digital replica law (2024) requires written consent and compensation for AI-created likeness use. Tennessee's ELVIS Act protects musicians' voices specifically. Federal law through the TAKE IT DOWN Act covers intimate imagery. Consent is both an ethical baseline and an emerging legal requirement.
C2PA
Use content credentials where available
The Coalition for Content Provenance and Authenticity (C2PA) — founded by Adobe, Microsoft, the BBC, Intel, and others — has developed an open technical standard for embedding verifiable provenance into digital files. A C2PA Content Credential travels with the file and records who made it, when, and what tools were used. Adobe Firefly, OpenAI's DALL-E 3, Sora, and Google's Imagen embed C2PA credentials automatically. Google's Pixel 10 camera signs photos by default. The standard is advancing toward ISO certification. A sketch of programmatic credential inspection appears after this list.
Watermark
Use watermarking tools
Google's SynthID has watermarked over 10 billion pieces of AI-generated content with pixel-level signals designed to survive compression and editing. Adobe's Content Authenticity app (public beta, 2025) allows creators to apply Content Credentials with attribution information to any digital work. These tools don't prevent misuse but establish a verifiable chain of custody that supports accountability.
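
For readers who want to inspect credentials programmatically rather than through the web verifier, the Content Authenticity Initiative publishes open-source tooling, including the c2patool command-line utility. Below is a minimal sketch that shells out to c2patool, assuming it is installed and prints a file's manifest store as JSON when given a file path; check the current CAI documentation for exact flags and output format.

```python
# Minimal sketch: inspect a file's C2PA manifest with the open-source c2patool.
# Assumes c2patool (Content Authenticity Initiative) is on PATH and prints the
# manifest store as JSON; verify invocation details against current docs.
import json
import subprocess
import sys

def read_manifest(path: str) -> dict | None:
    """Return the C2PA manifest store for a file, or None if absent/unreadable."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest, or the tool reported a problem
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    manifest = read_manifest(sys.argv[1])
    if manifest is None:
        # Absence of a credential does not prove a file is synthetic,
        # only that it lacks verifiable provenance.
        print("No Content Credential found.")
    else:
        print(json.dumps(manifest, indent=2))
```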

An honest note on C2PA adoption: As of 2025, adoption of Content Credentials is growing but uneven. Major AI image generators implement the standard. Most social media platforms do not yet display credential information to viewers. A non-C2PA tool can save a file without the manifest, silently removing all credentials. The absence of a credential does not prove a file is synthetic — only that it lacks verifiable provenance. C2PA is a meaningful step forward, not a complete solution.

Module 05

See it in practice

The video below was produced using AI video tools with deliberately watermarked stock footage. The watermarks are intentional — a demonstration of what transparent disclosure looks like in practice.

What you're looking at: This video was produced by Jennifer Stivers using AI video production tools. The stock footage visible throughout carries deliberate watermarks — not removed, not hidden. The choice to leave them visible is the point. Responsible AI content creation means being transparent about what the tools are and how they were used. The research questions, editorial decisions, and responsibility for accuracy in this guide are mine.

This is what the creator's responsibility looks like in practice — not avoiding AI tools, but using them honestly. The same principle applies to every piece of synthetic media: the tool is not the issue. The disclosure is.

What to look for as a viewer
Labels
Explicit AI disclosure
Is the creator stating clearly that AI tools were used? Not in terms of service — in the post, the caption, the description. The most credible creators are the ones who say so upfront without being asked.
Source
Identifiable origin
Can you identify who made this and verify their identity? Anonymous AI-generated content with no verifiable source is higher risk than content from an identified creator with an established track record.
C2PA
Content credentials
The C2PA Content Credentials icon — when displayed — indicates that a file carries verifiable provenance metadata. You can verify credentials at contentcredentials.org/verify or using the Chrome extension from the Content Authenticity Initiative.
Context
Emotional urgency as a signal
Fraudulent synthetic media — voice scams, impersonation calls, investment fraud videos — consistently uses urgency and strong emotion as mechanisms. A call claiming a family member is in danger. An investment opportunity that requires immediate action. Synthetic media designed to harm relies on the same psychological levers as other fraud. Urgency is the signal to slow down, not speed up.
Module 06

The legal landscape

Regulation is moving faster in this space than almost any other area of AI law — driven by documented harm and bipartisan concern. What exists is still a patchwork. But the direction is clear.

Federal · US
TAKE IT DOWN Act (May 2025) — Signed into law by President Trump. Criminalizes publishing non-consensual intimate deepfakes, with penalties up to 2 years imprisonment (3 years involving minors). Covered platforms must remove flagged content within 48 hours. FTC enforcement begins within one year. Status as of April 2026 — compliance period underway.
State · US
46 states have enacted some form of deepfake legislation as of 2025 — covering non-consensual intimate imagery, political disinformation, or both. Tennessee's ELVIS Act (July 2024) is the first state law outside these categories, protecting musicians' voices from AI manipulation. New York's digital replica law requires written consent and compensation for AI-created likeness use. Louisiana HB 178 (August 2025) is the first statewide framework addressing AI-generated evidence in legal proceedings.
EU
EU AI Act — Transparency requirements for synthetic media take effect August 2026. AI-generated content must be labeled as such. Providers of tools capable of generating synthetic media must implement technical measures — including watermarking — to ensure compliance. High-risk uses face additional requirements. Fines for non-compliance with these transparency obligations can reach €15M or 3% of global annual turnover.
Federal · pending
FCC ruling (February 2024) — Banned AI-generated voices in robocalls following a documented voter suppression incident in a 2024 US primary election. First federal regulatory action specifically targeting voice synthesis misuse. Additional federal legislation addressing political deepfakes and election integrity is pending in Congress. Status as of April 2026.
Industry
C2PA standard — Not a law but an increasingly significant industry standard. Organizations that fail to implement available authentication technologies are facing growing negligence claims following deepfake-enabled fraud, particularly where industry standards have emerged and peers have adopted them. C2PA v2.2 published May 2025; advancing toward ISO international standardization.

The constitutional tension: Synthetic media regulation faces significant First Amendment challenges in the US. A federal judge blocked California's election deepfake law (AB 2839) in 2024 over concerns that it unconstitutionally restricted political speech. The line between protected creative expression and harmful impersonation is not always clear, and courts are still developing frameworks. The EU's approach — mandatory labeling rather than prohibition — is less vulnerable to these challenges and may influence future US legislation.


What to watch
1
EU AI Act implementation (August 2026) — The transparency requirements for synthetic media take effect, establishing mandatory labeling for AI-generated content. How platforms and creators comply will establish practical standards globally, not just in the EU. Track at digital-strategy.ec.europa.eu →
2
C2PA adoption trajectory — Whether major social media platforms begin displaying Content Credential information to viewers is the key adoption metric. Cloudflare's 2025 implementation brought C2PA to approximately 20% of the web. Platform adoption is the next threshold. Track at c2pa.org →
3
TAKE IT DOWN Act enforcement (May 2026) — FTC enforcement begins one year after signing. How the FTC interprets and applies the law will shape the practical regulatory landscape for intimate deepfakes. Status as of April 2026: compliance period underway.
4
Court standards for AI-generated evidence — The US Judicial Conference's Advisory Committee on Evidence Rules is actively developing frameworks for how courts evaluate potentially synthetic evidence. This will significantly affect both criminal and civil litigation.
5
Federal political deepfake legislation — Multiple bills addressing AI in political campaigns are pending in Congress. The FCC's February 2024 ban on AI voice robocalls is the established precedent. Track at congress.gov →
Module 07

Knowledge check

Seven questions based on verified facts from this guide. An honest measure of what you now know.

Module 08

What you can do

Synthetic media literacy isn't about fear. It's about developing the habits that let you navigate a media environment that has fundamentally changed.

01
Pause before sharing
The primary mechanism by which synthetic media causes harm is speed — content spreads before it can be verified. Before sharing any emotionally compelling video, audio, or image, take thirty seconds. Ask who made this, where it came from, and whether the source is identifiable and credible. Urgency is the signal to slow down.
02
Check for Content Credentials
Verify provenance at contentcredentials.org/verify by uploading an image or video. Install the Content Authenticity Initiative's Chrome extension to see credentials while browsing. The absence of credentials doesn't prove content is synthetic — but their presence provides meaningful verification of origin.
03
Establish a verbal safe word with family members
Voice scams succeed by exploiting the trust you place in recognized voices. A pre-agreed word or phrase — known only to your family — provides a verification mechanism that cannot be replicated by voice cloning. If you receive an urgent call from a family member you don't expect, ask for the safe word before acting.
04
If you create AI content, say so
Disclosure is the standard that distinguishes legitimate use from manipulation. If you use AI tools to generate or significantly alter images, video, or audio — label it. In the caption, not in a footnote. Use tools like Adobe's Content Authenticity app to embed verifiable credentials. The professional standard is transparency.
05
Minimize your voice and likeness online
Voice cloning requires as little as three seconds of audio. Video deepfakes are easier to generate when more reference footage exists. This doesn't mean disappearing from the internet — it means being thoughtful about high-quality audio and video you make publicly available, and reviewing privacy settings on platforms that hold large amounts of your content.
06
For organizations: implement verification protocols
The Arup fraud succeeded because a single employee could authorize $25.6 million based on a video call alone. Financial institutions and organizations handling high-value transactions should implement secondary verification channels that cannot be compromised by synthetic media — pre-established code phrases, mandatory callbacks on known numbers, or time delays for large transfers. The US Financial Crimes Enforcement Network has issued formal guidance on this. A hypothetical sketch of such a policy follows this list.
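
As an illustration of what such a protocol can look like when encoded in software, here is a hypothetical sketch of a transfer-approval policy requiring an out-of-band callback, a holding period, and dual approval for large transfers. The thresholds, field names, and rules are invented for illustration and are not drawn from FinCEN guidance.

```python
# Hypothetical sketch of a two-channel transfer-approval policy.
# Thresholds and rules are illustrative assumptions, not regulatory guidance.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

CALLBACK_THRESHOLD = 10_000   # require an out-of-band callback above this
DELAY_THRESHOLD = 100_000     # also require a holding period above this
HOLD = timedelta(hours=24)

@dataclass
class TransferRequest:
    amount: float
    requested_at: datetime
    # Verified on a known phone number, NOT the channel the request came in on.
    callback_verified: bool = False
    approvals: set[str] = field(default_factory=set)

def may_release(req: TransferRequest, now: datetime) -> tuple[bool, str]:
    """Apply the policy: a video call alone is never sufficient authorization."""
    if req.amount > CALLBACK_THRESHOLD and not req.callback_verified:
        return False, "awaiting callback on a pre-registered number"
    if req.amount > DELAY_THRESHOLD:
        if now - req.requested_at < HOLD:
            return False, "holding period for large transfers not yet elapsed"
        if len(req.approvals) < 2:
            return False, "requires two independent approvers"
    return True, "release authorized"

# Example: an urgent request over a video call fails until verified out of band.
req = TransferRequest(amount=250_000, requested_at=datetime.now())
print(may_release(req, datetime.now()))  # (False, "awaiting callback ...")
```

The design point is that the blocking checks live in channels a deepfaked video call cannot touch: a known phone number, elapsed time, and a second human.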
Reflect
What AI-generated content have you shared in the past month? Did you know it was synthetic when you shared it?
If you create content professionally — marketing, communications, journalism — does your organization have a disclosure policy for AI-generated or AI-assisted content?
If you received an urgent call from a family member's voice asking for money, what would you do to verify it?
Primary sources

All claims verified

Every fact in this guide is drawn from the sources below. Pending legal and regulatory matters are noted as such.

World Economic Forum
weforum.org →
Detecting Dangerous AI Is Essential in the Deepfake Era (2025). Source for Arup fraud details, WEF risk ranking of synthetic media, Deloitte $40B projection, and the FS-ISAC deepfake risk taxonomy. Includes confirmation of the Arup $25.6M incident and verification protocols being adopted by financial institutions.
UNESCO
unesco.org →
Deepfakes and the Crisis of Knowing (October 2025). Source for 8 million deepfakes statistic, Generative AI market projections, the liar's dividend concept, illusory truth effect research, and voice cloning fraud statistics. Also sources the 46% of fraud experts encountering synthetic identity fraud figure.
The Conversation / DeepStrike
theconversation.com →
Deepfakes Leveled Up in 2025 — Here's What's Coming Next (February 2026). Source for 500,000 to 8 million deepfakes growth figure, voice cloning technical threshold details, 45-minute video deepfake creation time, and the shift toward real-time synthesis. Written by a computer scientist researching deepfakes.
Jones Walker LLP — AI Law Blog
joneswalker.com →
Synthetic Media Creates New Authenticity Concerns for Legal Evidence (August 2025). Source for TAKE IT DOWN Act details, Tennessee ELVIS Act, New York digital replica law, Louisiana HB 178, California prohibition law First Amendment block, Wisconsin v. Rittenhouse deepfake defense, and US Judicial Conference advisory committee proceedings.
Jones Walker LLP — AI Law Blog
joneswalker.com →
Deepfakes-as-a-Service Meets State Laws (January 2026). Source for 46-state deepfake legislation figure, TAKE IT DOWN Act signing details, Coalition Deepfake Response Endorsement (December 2025), Swiss Re SONAR 2025 report, and C2PA implementation guidance for organizations.
Content Authenticity Initiative / C2PA
contentauthenticity.org →
How It Works — Content Authenticity Initiative. Primary source for C2PA standard explanation, Content Credentials description, founding members (Adobe, Arm, BBC, Intel, Microsoft, Truepic), and open-source tools. C2PA v2.2 specification published May 2025. Source for the "nutrition label" characterization and cryptographic tamper-evidence description.
Adobe / Content Authenticity Initiative
contentauthenticity.org →
5,000 Members: Building Momentum for a More Trustworthy Digital World (2025). Source for Cloudflare CDN implementation of C2PA (20% of web), Google Pixel 10 hardware-backed signing, Sony PXW-Z300 first C2PA camcorder, Adobe Content Authenticity public beta, and SynthID 10 billion watermarks milestone.
McAfee
mcafee.com →
McAfee Voice Scam Study (2024). Source for the statistic that 1 in 4 American adults have experienced an AI voice scam or know someone who has, and 1 in 10 have been personally targeted. Also cited for three-second audio requirement for voice cloning with 85% accuracy match.
ScienceDirect — Deepfake Legal Framework
sciencedirect.com →
Deepfake Detection in Generative AI: A Legal Framework Proposal (June 2025). Source for Arup $25.6M fraud details, Taylor Swift 47 million views figure, electoral synthetic media incident, $35 million European investment fraud total, Almendralejo sextortion case, and EU AI Act regulatory analysis.
Authenticity Crisis
authenticitycrisis.com →
Deepfake Incidents and AI Identity Fraud Cases (2025–2026). Source for Graphika state-aligned influence operation documentation, $1.1 billion in 2025 deepfake fraud losses estimate (Surfshark methodology), EU investment scam network details, NASK Poland warning, and DPRK remote IT worker scheme details.
Stanford HAI
hai.stanford.edu →
Stanford Human-Centered AI reporting (2025). Source for documentation of student-generated deepfakes in schools and the accessibility of deepfake tools to minors with no technical expertise. Associated Press reporting (June 2024) cited for specific school incidents.


About this guide

I'm Jennifer Stivers, founder of Jenntelligence.ai, a division of MarketMind Consulting. I have a psychology degree and spent my career in marketing — at Apple, at a venture-backed startup that went public, at organizations like Coursera and GlobalEnglish. I built these guides using AI tools. The research questions, editorial decisions, and responsibility for accuracy are mine.

A note on accuracy

This guide reflects my research and editorial judgment as of the date shown. Synthetic media law, technical standards, and the legal cases covered here change quickly. I update content when I become aware of significant changes, but I cannot guarantee real-time accuracy. Pending legal and regulatory matters are noted as such and should not be read as final. If you find something that needs correction, I want to know. Contact me here. Links to external sources are provided for reference; I am not responsible for changes to third-party content after publication.