
1 in 4 aren’t worried about sexual deepfakes. What does that mean for brand and marketing risk?

  • Chris Godfrey
  • 2 days ago
  • 3 min read

A recent UK survey commissioned by the police found that one in four people felt either neutral or unconcerned about the creation and sharing of non-consensual sexual deepfakes. Specifically, about 13% said they saw “nothing wrong” with creating or sharing these types of videos and images, and another 12% said they were neutral.


These statistics are shocking, not only because of what they say about attitudes to personal violation, but because they raise a broader question: If a substantial minority are indifferent to deepfakes in an intensely personal domain, what does that signal for less obvious domains, such as brand advertising, social content or marketing campaigns?


What the sexual deepfake survey tells us


The survey underscores two key points. First, tools to generate realistic synthetic content are now sufficiently accessible that non-consensual deepfakes are more visible and convincing than ever before. Second, public perception of harm is uneven. Although 25% see little issue, authorities warn that the risks are real, describing non-consensual deepfakes as part of a “rising threat” of intimate-image abuse.


This outcome invites reflection. If people are relatively unmoved in the context of sexual deepfakes, will they likewise be unmoved when brands or businesses are targeted, or is there reason to believe they would react more strongly?


Deepfakes in brand messaging: Potential scenarios


Imagine this: A major brand’s advertising or social-media content is secretly generated (or hijacked) using deepfake technology. Perhaps a beloved celebrity’s likeness is used without permission to endorse a faulty product, or an influencer is synthetically portrayed as pushing an anti-climate agenda. Or maybe a malicious actor creates deepfake content purporting to show a company spokesperson defaming a competitor or promoting unsafe goods.


In each of these cases, trust is breached, not just at the personal level (as in the sexual context) but at the institutional level: brand integrity, company reputation and consumer confidence. The fact that 1 in 4 people seem unconcerned about sexual deepfakes might suggest some baseline tolerance of, or apathy towards, manipulated media generally. If so, does that mean brands are safe from public outrage?


How this could impact marketing and brands


1. Erosion of trust & credibility: Brands rely on authenticity - the promise that what you see is real, that the messenger and message are genuine. When deepfakes blur that authenticity, the value of trust erodes.

2. Regulatory and legal risk: A brand that uses, or is victimised by, deepfake-derived content could face claims of misleading advertising, defamation, or failure to police synthetic hijack attacks.

3. Marketing fatigue and scepticism: If consumers assume any video, endorsement or message might be synthetic, the entire ecosystem of influencer marketing, testimonial adverts and user-generated content could lose potency.

4. Opportunity cost & defensive positioning: On the flip side, brands that embrace transparency (e.g., “This content is genuine, no AI impersonation”) may differentiate themselves positively. But crafting that positioning takes time and money – assets that many smaller businesses do not have.


What the sexual-deepfake survey suggests for this marketing scenario


The police survey suggests that a sizeable share of the public may not react strongly to synthetic manipulations per se. On one hand, this could mean that malicious deepfake campaigns against brands might quietly succeed. On the other hand, the indifference may be domain-dependent. Personal violation (sexual deepfakes) may feel abstract to many, but a brand being misrepresented might hit differently.


In short, the baseline of indifference doesn’t guarantee safety; in fact, it underscores a hidden risk: A brand could be undermined without triggering mass outrage, yet the damage to trust, reputation and long-term effectiveness may accumulate.


Final word


The fact that one in four people appear unconcerned about non-consensual sexual deepfakes should not breed complacency in the marketing world. Rather, it signals that synthetic media threats are both real and under-appreciated. For brands and marketers, the consequences could be meaningful: Loss of trust, regulatory exposure, increased scepticism, and higher costs of managing authenticity. Proactive steps are essential. This means auditing content for synthetic-media risk, training teams to recognise deepfake threats, and planning responses for scenarios where brand messages are hijacked or faked.


Ultimately, in a world where manipulation becomes ever cheaper to produce and ever harder to spot, the brand that can declare “we guarantee that what you see is real” may earn more than safe distance - it may earn survival.



Contact us to learn more about this topic.

 
 
 



No duplication permitted without the written consent of authors. © Freelance Words 2025

