The Great Undermining: When we can’t trust what we see or hear online
- Chris Godfrey
- Dec 4, 2025
Updated: Dec 8, 2025

On the 25th of November, The Guardian reported that a cluster of “pro-Trump” accounts posting on X were presenting themselves as authentic American conservatives - based in America, for Americans, and American through and through. In reality, though, they were being run from Asia. This wasn’t ordinary political spin: the accounts were fundamentally misrepresenting who they were, where they operated from, and what their motives were. Nor was it a one-off scandal - it was just one more example of a wide and growing trend.
Let’s call it the Great Undermining: The erosion of trust created by social-media accounts and online content that intentionally pretend to be something they’re not. In an era where technologies can generate realistic faces, polished personas, and perfectly tailored narratives, the question becomes uncomfortably simple: How do we trust anything we see or hear online anymore - and perhaps more alarmingly - what happens when audiences simply stop believing?
What Is the Great Undermining?
The Great Undermining describes the growing practice of online personas - political activists, influencers, reviewers, experts, or “ordinary people” - who claim to represent one identity but are actually operated by someone else, often for hidden ideological, financial, or geopolitical purposes.
This can include:
Fake political accounts run by foreign actors.
AI-generated influencers that look fully human but are, in fact, commercial fantasy.
Fraudulent product reviews or testimonials crafted to manipulate consumer behaviour.
Coordinated disinformation networks amplifying certain narratives while obscuring their origins.
Entire “communities” or “movements” that are actually bots, sock-puppets, or paid operators.
Thanks to AI-powered face generation and linguistic modelling, these fake fronts are becoming increasingly convincing - often indistinguishable from real human users. What used to be sloppy, low-effort impersonation has evolved into high-fidelity identity simulation at scale.

Why the undermining is accelerating now
There are three major forces fuelling the current explosion of inauthentic online identities:
1. Technology makes it easy
AI can generate profile photos that look like real people, write fluent text in any style, and automate engagement. The barrier to creating a believable persona has collapsed. A single operator can manage dozens of “individuals,” each with their own voice, behaviour patterns, and emotional hooks.
2. Platforms reward emotion, not truth
Social media algorithms don’t incentivise accuracy; they incentivise engagement. Outrage, fear, and identity affirmation outperform nuance, so fake accounts often thrive by producing emotional spikes. Platforms profit from attention, not authenticity.
3. Manipulation generates money and power
False personas can sell products, influence elections, disrupt social cohesion, or simply generate ad revenue through controversy. Whether the motive is profit, ideology, or geopolitical advantage, the incentives are aligned in favour of fakery.
In short: we’ve built an engagement-based information ecosystem where deception is not a glitch; it’s a feature.
The collapse of trust: What happens when you can’t tell what’s real
For everyday users, the consequence of this is psychological and cultural exhaustion. People know they’re being manipulated, but they no longer know where the manipulation begins or ends - which can lead to two equally dangerous reactions:
1. Distrust everything
Users may start assuming every post, every review, and every video is potentially fake. This cynicism erodes the social contract that makes public information useful. If nothing is trusted, nothing is persuasive - not news reports, not experts, not brands, not even peers.
2. Trust only what confirms their beliefs
Paradoxically, as trust in general information collapses, people may cling to closed communities, echo chambers, or partisan influencers. “Authenticity” becomes self-proclaimed rather than verified. This fractures the public sphere and makes society more vulnerable to manipulation, not less - and it deepens polarisation. Essentially, when trust collapses, people don’t become sceptical; they become tribal.
The risk to brands and marketers
Brands depend on trust:
Customers believing reviews.
Audiences believing influencers.
Communities believing brand messages.
Consumers assuming ads are honest, not manipulative or artificially amplified.
The Great Undermining threatens all of this.
Fake reviews make all reviews suspect
If consumers know that thousands of reviews are generated by bots, paid click-farms, or AI personas, they may stop believing positive (or negative) reviews altogether.
Influencer marketing loses credibility
If audiences doubt whether influencers are real, or whether their enthusiasm is genuine, influencer campaigns lose value.
Ad performance declines as cynicism rises
If consumers believe a product testimonial could be written by a bot operating from a distant server farm, they tune out - both emotionally and financially.
Reputation becomes harder to defend
A single fake account posing as a dissatisfied customer can ignite a crisis before a brand even knows what’s happening.
In this environment, authenticity becomes both rarer and more valuable. Brands that can demonstrate transparency - real humans, real expertise, verifiable identities - may gain a competitive edge. But that requires more than slogans; it requires infrastructure, which comes at a cost.
Are we reaching a tipping point?
Yes, and very quickly.
The combination of sophisticated identity-forging tools, engagement-based platform economics, and widespread public awareness of deception has created a fragile ecosystem. More people are now doubting what they encounter online, and many are beginning to disengage entirely.
The next phase of the internet may be defined not by information abundance, but by credibility scarcity.
We are approaching a digital environment where the default assumption may shift from believing content is real to assuming it is fake until proven otherwise. When that happens, the Great Undermining will be complete, and we’ll all suffer for it.
Final word - where we go from here
If trust is to survive, we need:
Better verification tools.
Platforms incentivising authenticity instead of engagement hacking.
Brands embracing transparency as a strategic asset.
Users learning to verify before amplifying.
Regulators focusing on identity accuracy, not just content moderation.
Because if we don’t address the issue now, we risk a future where audiences simply stop tuning in, unable to trust what they’re told or sold.
In that world, the question won’t be “Is this content real?” but “Does it matter anymore?”
Get started with Freelance Words
Strong communication is the bedrock of good business, and when it’s lacking, problems arise. Take the ambiguity and doubt out of what your business needs to say. Contact us now.


