Sunlight is more effective than censorship
This article was brought to you by Cointelegraph Accelerator - a community of crypto investors and founders working together to build the industry’s future.
The past few years have seen a steep rise in calls for censorship of free speech under the pretext of protecting us from misinformation. There is no viable alternative to letting citizens judge the truth for themselves, but better platforms could provide them with the tools to do so.
Calls for government-enforced censorship have come from both sides of the aisle and all parts of the globe. The EU started enforcing social media censorship with its Digital Services Act, both Brazil and the EU have threatened X for failing to suppress unfavorable political voices, a United States Supreme Court ruling allowed the government to push tech companies to take down misinformation, Mark Zuckerberg expressed regret for giving in to exactly such pressure from the White House during the pandemic, and Tim Walz claimed “there’s no guarantee to free speech on misinformation.”
Misinformation online is a real problem, but misinformation itself is not new, and it's not clear that people are any more susceptible to falsehoods than they used to be. Twenty years ago, the Iraq War was justified by claims of weapons of mass destruction that have since been widely discredited. During the “Satanic Panic” of the 1980s, an investigation of over 12,000 reports failed to substantiate a single satanic cult abusing children. In the 1950s, Senator Joseph McCarthy launched a Red Scare by claiming there were hundreds of known communists in the State Department, with no evidence to support his charges. Not too long ago, we were hanging witches, a practice that persists to this day.
Salem Witch Trials. Source: Wikipedia
Much of what’s newly dangerous about misinformation today is not the spread of false information itself. It's the ability of bad actors — empowered by AI and pretending to be ordinary human users — to deliberately promote misinformation. Hordes of coordinated fake or incentivized accounts create the illusion of consensus and make fringe ideas appear mainstream. Popular social media platforms today are closed ecosystems, making it difficult to assess the reputation of sources or the provenance of claims. We’re limited to the information the platforms choose to measure and expose — followers, likes, and “verified” status. As AI becomes increasingly capable, hyper-realistic synthetic media undermine our ability to trust any raw content, whether audio, video, images, screenshots, documents or anything else we’d typically rely on as evidence for claims.
Politicians themselves are no more trustworthy than the information they seek to censor. Public trust in government is near historic lows. Many of the most aggressive censorship efforts have targeted information that later proved to be true, while government-backed narratives have repeatedly been discredited. The same intelligence apparatus proactively warning us about this election’s disinformation suppressed the Hunter Biden laptop story and mislabelled it "Russian disinformation" the last time around. During the pandemic, legitimate scientific debate about COVID's origins and public health measures was silenced, while officials promoted claims about masks, transmission and vaccines they later had to reverse. Both Elon Musk's "Twitter Files" and Mark Zuckerberg’s recent admissions of regret exposed the scale of government pressure on social platforms to suppress specific voices and viewpoints — often targeting legitimate speech rather than actual misinformation. Our leaders have proven themselves dangerously unfit to be the arbiters of truth.
Public trust in government near historic lows. Source: Pew Research Center
The problem we face is a lack of trust. Citizens have lost faith in institutions, traditional media and politicians. Content platforms — Google, Facebook, YouTube, TikTok, X and more — are constantly accused of political bias in one direction or another. Even if such platforms managed to moderate content with complete impartiality, it wouldn’t matter — their opacity will always breed conspiracy theories and invite accusations of bias and shadowbanning.
Fortunately, blockchains are trust machines. Instead of requiring faith in centralized authorities, they provide open, verifiable systems that anyone can inspect. Every account has a transparent history and quantifiable reputation, every piece of content can be traced to its source, every edit is permanently recorded, and no central authority can be pressured to manipulate results or selectively enforce rules. In the run-up to the US election, it's no coincidence that Polymarket — a blockchain-based, transparent and verifiable prediction market — emerged as a go-to election forecast just as the electorate was losing faith in pollsters. Transparency and verifiability enable a shared ground of truth from which we can attempt to rebuild social trust.
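To make the "permanently recorded, inspectable by anyone" property concrete, here is a toy append-only log in Python in which each entry commits to the hash of the one before it. This is only a sketch of the tamper-evidence idea behind blockchains: there is no consensus, signing or networking here, and the function names are illustrative.

```python
# Toy hash-chained log: each entry commits to the previous one, so any
# later edit breaks the chain and is detectable by anyone replaying it.
# Illustrative only -- no consensus, signatures or networking.
import hashlib, json

def entry_hash(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(log, content):
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"prev": prev, "content": content}
    entry["hash"] = entry_hash({"prev": prev, "content": content})
    log.append(entry)

def verify(log):
    prev = "0" * 64
    for e in log:
        if e["prev"] != prev or e["hash"] != entry_hash({"prev": e["prev"], "content": e["content"]}):
            return False
        prev = e["hash"]
    return True

log = []
append(log, "original claim, posted 2024-11-01")
append(log, "edit: corrected a typo")
print(verify(log))                              # True
log[0]["content"] = "quietly rewritten history"
print(verify(log))                              # False -- tampering is visible to anyone
```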
Blockchain enables powerful new forms of verification. Tools like WorldCoin demonstrate how users can prove they're unique humans, and similar technology can verify concrete attributes like residence, citizenship or professional credentials. Zero-knowledge proofs might allow us to verify these attributes without revealing the underlying personal data. Such technologies could reveal meaningful information about the individuals and crowds participating in online discourse — whether they’re human, where they’re from and what credentials they hold — while preserving users’ privacy.
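As a rough illustration of proving something without revealing it, here is a toy Schnorr-style sigma protocol in Python: the prover convinces a verifier that they hold a secret credential key without ever disclosing it. The parameters are deliberately tiny and insecure, the function names are hypothetical, and production systems like those mentioned above rely on far more sophisticated zero-knowledge proof machinery.

```python
# Toy Schnorr-style sigma protocol: prove knowledge of a secret credential
# key without revealing it. Toy parameters, NOT secure, and not the
# zk-SNARK systems real identity protocols use.
import secrets

P = 23       # small prime modulus (toy group)
Q = P - 1    # order of the group generated by G
G = 5        # generator of the multiplicative group mod 23

def issue_credential():
    """Issuer binds a public value y to a secret key x held by the user."""
    x = secrets.randbelow(Q - 1) + 1      # secret credential key
    y = pow(G, x, P)                      # public commitment to the credential
    return x, y

def prove():
    """Prover's first move: commit to a random nonce r."""
    r = secrets.randbelow(Q)
    t = pow(G, r, P)
    return r, t

def respond(x, r, c):
    """Prover's response to the verifier's challenge c."""
    return (r + c * x) % Q

def verify(y, t, c, s):
    """Verifier checks g^s == t * y^c (mod p) without ever learning x."""
    return pow(G, s, P) == (t * pow(y, c, P)) % P

x, y = issue_credential()        # y could live on-chain; x stays private
r, t = prove()                   # prover -> verifier: t
c = secrets.randbelow(Q)         # verifier -> prover: random challenge
s = respond(x, r, c)             # prover -> verifier: s
print("credential verified:", verify(y, t, c, s))
```

Interactive exchanges like this can also be made non-interactive (for example via the Fiat–Shamir transform), which is closer to how a credential would actually be checked inside an app.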
For example, users seeking medical advice might filter to verified MDs, or ignore non-citizens in domestic policy debates. Wartime disinformation might be filtered out by limiting results to verified members of the relevant armed forces. Politicians might focus their feeds and surveys on verified constituents to avoid being swayed by the illusion of outrage manufactured by well-organized fringes or foreign actors. AI-powered analysis could uncover authentic patterns across verifiable groups, revealing how perspectives vary between experts and the public, citizens and global observers, or any other meaningful segments.
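A minimal sketch of what such credential-based filtering might look like on the client side, assuming each post carries its author's verified attributes. The Post shape and credential names ("verified_md", "constituent_of:CA-12") are invented for illustration; a real client would check signed attestations rather than trust plain labels.

```python
# Minimal sketch of credential-based feed filtering. Data shapes and
# credential names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    credentials: set = field(default_factory=set)  # verified attributes of the author

def filter_feed(posts, required_credentials):
    """Keep only posts whose authors hold every required credential."""
    return [p for p in posts if required_credentials <= p.credentials]

feed = [
    Post("alice", "New study on statins...", {"verified_human", "verified_md"}),
    Post("bot_4821", "Miracle cure, click here!", set()),
    Post("bob", "My doctor said...", {"verified_human"}),
]

# A user seeking medical advice might filter to verified MDs:
medical_view = filter_feed(feed, {"verified_md"})

# A politician might focus on verified constituents of their district:
constituent_view = filter_feed(feed, {"verified_human", "constituent_of:CA-12"})

print([p.author for p in medical_view])       # ['alice']
print([p.author for p in constituent_view])   # []
```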
Cryptographic verification extends beyond blockchain transactions. The Content Authenticity Initiative — a coalition of over 200 members founded by Adobe, The New York Times and Twitter — is developing protocols that act like a digital notary for cameras and content creation. These protocols cryptographically sign digital content at the moment of capture, embedding secure metadata about who created it, what device captured it and how it's been modified. This combination of cryptographic signatures and provenance metadata enables verifiable authenticity that anyone can inspect. A video, for example, might contain cryptographic proof that it was taken on a given user’s device, in a specific location and at a specific time.
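A rough sketch of the sign-at-capture idea, using Ed25519 from the pyca/cryptography package: the device hashes the content, wraps the hash in a metadata manifest and signs it, so anyone holding the device's public key can later check both the signature and that the content is unmodified. This is not the actual C2PA manifest format, just the underlying concept; the field names are assumptions.

```python
# Illustrative content-signing sketch, loosely inspired by the idea behind
# the Content Authenticity Initiative. Not the real C2PA format -- just a
# signed hash plus capture metadata. Requires the "cryptography" package.
import json, hashlib
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def sign_capture(device_key, content, device_id, location):
    """Hash the content at capture time and sign a metadata manifest."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "device_id": device_id,
        "location": location,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": device_key.sign(payload).hex()}

def verify_capture(device_pub, content, record):
    """Anyone can check the signature and that the content hash still matches."""
    manifest = record["manifest"]
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False  # content was altered after capture
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        device_pub.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

key = ed25519.Ed25519PrivateKey.generate()
video_bytes = b"...raw video bytes..."
record = sign_capture(key, video_bytes, "camera-123", "40.7128,-74.0060")
print(verify_capture(key.public_key(), video_bytes, record))        # True
print(verify_capture(key.public_key(), b"tampered bytes", record))  # False
```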
Finally, open protocols enable third parties to build tools users need to evaluate truth and control their online experience. Protocols like Farcaster already allow users to choose their preferred interfaces and moderation approaches. Third parties can build reputation systems, fact-checking services, content filters and analysis tools — all operating on the same verified data. Rather than being locked into black box algorithms and centralized moderation, users get real tools to assess information and real choices in how they do so.
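Conceptually, "choose your own moderation" can be as simple as letting users pick which scoring or filtering policy their client applies to the same shared data. The sketch below is not the Farcaster API; the policies and reputation scores are invented for illustration.

```python
# Conceptual sketch of client-chosen moderation on an open protocol: the
# same public posts, filtered by interchangeable third-party policies the
# user picks. Reputation values and policies are hypothetical.
posts = [
    {"author": "alice", "text": "Election results certified.", "author_reputation": 0.9},
    {"author": "anon123", "text": "Shocking leak!!!", "author_reputation": 0.1},
]

def strict_policy(post):
    """Hide anything from low-reputation authors."""
    return post["author_reputation"] >= 0.5

def open_policy(post):
    """Show everything; let the user judge."""
    return True

def render_feed(posts, policy):
    return [p["text"] for p in posts if policy(p)]

# Two users, same underlying data, different third-party moderation choices:
print(render_feed(posts, strict_policy))  # ['Election results certified.']
print(render_feed(posts, open_policy))    # both posts
```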
Trust is an increasingly scarce asset. As faith in our institutions erodes, as AI-generated content floods our feeds and as centralized platforms become increasingly suspect, users will demand verifiability and transparency from their content. New systems will be built on cryptographic proof rather than institutional authority — where content authenticity can be verified, participant identity established and a thriving ecosystem of third-party tooling and analysis supports our search for the truth. The technology for this trustless future already exists — adoption will follow necessity.
Ben Turtel (@bturtel) is Founder & CEO of Kazm. Previously he was an Area 120 founder and senior software engineer working on applied AI at Google. He’s a startup adviser and investor, a Mentor at the Cointelegraph Accelerator, and writes about philosophy and technology.
Twitter - Substack - Linkedin - LinkTree
Disclaimer. This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.