Fixing misinformation and fake news: keynote from Marshall Van Alstyne
August 1, 2025
From slowing vaccination rates to contested election results, the evidence is growing: misinformation is no longer simply a side effect of the internet. Rather, the inability to manage misinformation may be one of its core failures.
Even as the internet connects us with the wider world, offers community and facilitates dialogue, it has also completely upended our information economy – and led to meaningful real-world harms.
‘We can’t solve issues of global warming, or who the president is, or whether vaccines work or not if we cannot agree on basic facts’ – Professor Marshall Van Alstyne
Misinformation pollutes the information we all consume. Like pollution, it is a negative externality: everyone is harmed, even those who do not pollute themselves. This raises the stakes dramatically. For public health, the climate and democracy to function and thrive, we need to find new approaches and solutions to managing information online.
“We can’t solve issues of global warming, or who the president is, or whether vaccines work or not if we cannot agree on basic facts,” says Professor Marshall Van Alstyne, the Allen & Kelli Questrom Chair Professor at Boston University, as well as a digital fellow at MIT and the co-author of Platform Revolution. “The misinformation problem is one of the biggest challenges we face today.” Marshall’s research into misinformation attempts to not only diagnose the key issues and evaluate current interventions, but also to chart a better course forward.
A foundational threat to democracy, health and public discourse
“Our political situation is one that’s actually quite concerning,” Marshall thinks. “Our own research on misinformation was cancelled by the current [United States government] administration.” At a time when we’re confronted by seemingly countless fundamental global challenges, Marshall is emphatic about the importance of ongoing research. “Misinformation and social media have been implicated in a huge number of different problems,” he explains, “including insurrection, riots, lynching, money laundering, suicide, sex trafficking, drug trafficking, child exploitation, judicial intimidation [and] terrorist recruiting.”
According to Marshall, when online dis- or misinformation is driving extreme and even violent action offline, it is a clear signal that we need to intervene.
Today’s interventions are falling short
In part, Marshall’s research builds on earlier work by a UK-based team that compiled an overview of global interventions against online misinformation. Reviewing those interventions, he identifies four categories of complications that, in his view, limit their effectiveness.
1. Arms race. Advancements in technology enable misinformation to evolve faster than it can be caught. “[We’re] trying to use technology to create and detect, and yet there’s always going to be another mechanism to avoid detection,” says Marshall.
2. Discredited institutions. “Science is being discredited; academics are being discredited; government, institutions, churches – any organisation that stands in the way of those who would disseminate misinformation – are being discredited,” says Marshall. Without trusted authorities, measures to stop misinformation are less effective.
3. Misplaced responsibility. In our current environment, the burden for identifying false information is placed on users instead of sources. “The social costs […] are overwhelmingly in the wrong direction,” Marshall thinks. “In a pollution context, [our current approach to community content moderation] would be the equivalent of asking citizens to wear gas masks and filter their own water as opposed to asking the factories to stop polluting.”
4. Cost imbalance. In Marshall’s view, misinformation may actually be a market problem at its core. “Fake news and misinformation are extremely easy to create and share, and honest truth is very hard to dig up,” he says. “The cost structure favours falsity over fact.”
In order to design stronger solutions, Marshall’s work takes these complicating factors into account, while also weighing the costs of letting misinformation run unchecked.
Rethinking misinformation in economic terms: not just lies, but market failures
Most efforts at addressing misinformation, Marshall explains, focus on whether information is true or false. But he argues that this framing neglects important context, pointing to exceptions like parody or exaggeration that may bend the truth without becoming dangerous. What matters more than accuracy alone, he thinks, is how bad information travels, who bears its cost, and who does not.
In economic terms, these are externalities: the unaccounted-for harm that misinformation causes in the real world. When a conspiracy theory leads to a shooting, or vaccine misinformation leads to disease outbreaks, the damage is real – but platforms and misinformation originators bear none of the cost. “These are damages that occur off platform that the platforms themselves are not internalising,” Marshall says.
Here, he suggests applying the theories of economists Arthur Pigou and Ronald H. Coase, which deeply consider externalities and transaction cost economics, respectively. Through these lenses, Marshall summarises the misinformation problem as: too much pollution and too little correction. “Everybody’s a producer of information,” he says. “There’s lots of production of the pollution, but no one is internalising the cost.” He thinks that this is due to the limitations of government legislation, like the EU’s Digital Services Act (DSA) and Section 230 of the Communications Act in the United States. “It’s really hard to hold individuals accountable, and under our current laws, the platforms are not accountable [for user-generated content].”
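Pigou’s framing can be written down in one line. The notation below is generic textbook shorthand for his pollution argument, not anything presented in the keynote:

```latex
% Textbook Pigouvian accounting, written with generic symbols
% (an illustration of the framing, not notation from the keynote):
%   SC(q) = social cost of posting q units of content
%   PC(q) = the poster's private cost
%   EC(q) = the external harm borne by everyone else
SC(q) = PC(q) + EC(q), \qquad t^{*} = EC'(q^{*})
% A poster who weighs only PC over-produces; Pigou's remedy is a
% corrective charge t* equal to the marginal external harm, which
% forces the poster to internalise the externality.
```

The difficulty Marshall highlights is that no one is in a position to levy that corrective charge on speech: individuals are hard to hold accountable, and platforms are shielded by current law.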
Externalities are by definition difficult to account for, and in Marshall’s view, this further leads to ineffective action. “Attempts by courts or society to use the ‘marketplace of ideas’ to sort things out – to let users decide – will fail,” he says firmly. “Markets do not self-correct market failures, [and] externalities are market failures.” At the same time, government interventions on speech can easily amount to outright censorship.
Attention: a scarce resource
To rethink the issue, Marshall looks to economic theory proposed by Nobel laureate Herbert A. Simon. Simon’s theory suggests a decentralised approach to contested resources, rooted in property rights. Using this frame, Marshall asks: “What is the scarce resource? What is it that’s being abused?”
In Marshall’s opinion, the answer is attention. Attention is the finite, polluted resource in today’s information ecosystem. Just as we regulate air and water pollution, he thinks that we need mechanisms to manage attention pollution – without compromising free expression. This line of thinking, and Simon’s work in particular, leads to Marshall’s idea of granting individuals property rights over their own attention. He proposes two new digital rights: a listener’s right and a speaker’s right to be heard.
The listener’s right
“[The listener’s right] means you have the right to focus your attention on your preferred sources,” Marshall explains. “If there are a lot of polluted information streams, you could focus on the clean information streams.” However, this is not to say that focusing attention on trusted sources is risk-free. “You could choose to hear only what you believe […] That’s the downside – that’s the extreme use or exercise of [the listener’s] right,” he admits.
The speaker’s right to be heard
The listener’s right has a counterpart in the right to be heard. Marshall defines this as “an individual expressive right to influence decisions that affect you [and] to counter disagreeable speech with agreeable speech.”
Crucially, this right only works if speakers are willing to take responsibility for the truth of their claims. “Free exercise of [the speaker’s] right in its extreme also has its own downside, which is that you could lie to people,” continues Marshall. “You could speak over them and you could use their attention to your private advantage.”
These two rights are in dialogue, if not direct competition, with one another. This introduces the next question driving Marshall’s research: “How do you balance the right not to hear against the right to be heard?” he asks.
Difficult as striking this balance may sound, Marshall and his colleagues are testing a mechanism that could make it real.
A new solution: vouchsafing the truth
At the heart of Marshall’s recent experimental research is a self-certification system of “vouchsafing.”
In this system, a source or creator who wants to reach an audience and demonstrate trustworthiness can share content together with a financial bond that is held in escrow. If the information holds up when scrutinised, the money is returned to the poster. If a challenger proves the information false, the challenger collects the bond instead.
This voluntary form of self-regulation rewards truth, Marshall believes. “If I [as the content source] know I’m lying, I’d want to use a traditional ad,” he postulates. “If I know I’m telling the truth, I’d want to use this vouchsafe mechanism because it’s free to me, but it’s now a signal [of credibility].”
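As a rough illustration of how that escrow flow could work, here is a minimal sketch. The class, function and bond amounts below are hypothetical, invented for this article rather than taken from the system Marshall’s team tested:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VouchsafedPost:
    """A post whose author has placed a bond in escrow to back its truthfulness.
    Illustrative only: names and amounts are not from Marshall's experiments."""
    author: str
    claim: str
    bond: float  # money held in escrow while the claim stands

def resolve(post: VouchsafedPost, proven_false: bool,
            challenger: Optional[str] = None) -> str:
    """Settle the escrow: refund the author if the claim holds up,
    pay the bond to the successful challenger if it is shown to be false."""
    if proven_false and challenger is not None:
        return f"Bond of {post.bond:.2f} paid to challenger {challenger}"
    return f"Bond of {post.bond:.2f} returned to author {post.author}"

# The incentive Marshall describes: a truthful poster expects the bond back,
# so vouchsafing is effectively free, while a poster who knows the claim is
# false expects to forfeit it, making lies the more expensive option.
honest = VouchsafedPost("alice", "Claim that survives scrutiny", bond=10.0)
dishonest = VouchsafedPost("bob", "Claim a challenger can disprove", bond=10.0)
print(resolve(honest, proven_false=False))
print(resolve(dishonest, proven_false=True, challenger="carol"))
```

The real mechanism still has to specify how challenges are raised and how proof is adjudicated; the sketch only captures the payoff asymmetry that makes truthful posting the cheaper strategy.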
Marshall emphasises that vouchsafing isn’t censorship: no content is removed, and no government or platform decides what counts as fact. Instead, the economic incentives shift: truth becomes cheaper, lies become more expensive and attention is no longer free to pollute.
Tested and proven: turning theory into results
Along with a team of colleagues, Marshall conducted a series of experiments with over 3,500 users and 60,000 responses, gathered from across demographics in the U.S.
In one version of the study, users could earn small monetary rewards for sharing “interesting” headlines. As expected, false stories performed well, since they are often more novel or shocking. The same held true when researchers first primed participants to think about the importance of sharing correct information: users nevertheless shared the same amount of false, “interesting” information.
But when participants had the option to vouchsafe a claim by putting a small amount of money on the line to guarantee their content’s truthfulness, the results were reversed. Sharing of false information dropped significantly, while sharing of truthful information increased. Most importantly, users said they found vouchsafed claims to be more credible than the same information when it didn’t have the guarantee.
In a related study, Marshall and his colleagues also found that the vouchsafe mechanism increased user engagement, because users were more confident in the quality of the content they were sharing. For Marshall, the higher engagement was a welcome surprise, given how important engagement is for platforms: “If you look across a lot of the different [current] interventions, they add friction, which is unattractive to the business because it reduces engagement.” Based on the results so far, the vouchsafing mechanism encourages users to share more content, not less.
A market-based model for truth
Vouchsafing is a system that can scale globally, Marshall argues, and one that’s designed to function even in different regulatory environments. “If it can work under the most restrictive conditions, where government can’t intervene,” he thinks, “then it ought to work even more easily in less restrictive conditions.” His hope is that vouchsafing could be effective regardless of context, if the underlying theories continue to hold: “The market should become self-cleaning because folks are internalising their externalities.”
‘This changes the cost structure so that it’s actually cheaper to tell the truth’ – Professor Marshall Van Alstyne
Perhaps the most powerful feature of the vouchsafe mechanism is what it avoids. “There is, by design, absolutely no central authority at all,” Marshall says. “We’ve simply given you the right to signal that you’re telling the truth and to reach an audience you wouldn’t otherwise reach.” He thinks that this approach also addresses other complications, such as misplaced responsibility: “It’s putting the burden back on the source, not on the destination.” And he notes that it deals with imbalanced incentives, too, adding: “This changes the cost structure so that it’s actually cheaper to tell the truth.”
If misinformation is indeed a market failure, then we need market tools to fix it. Marshall’s proposal – grounded in economic theory, tested in behavioural science and designed for real-world use – reframes the conversation around digital responsibility and the economics of truth.
Stay up to date with Platform Leaders
This keynote presentation by Professor Marshall Van Alstyne, moderated by Laure Claire Reillier, was part of the Platform Leaders event hosted by Launchworks & Co on the 5th of June 2025. The online conference again drew hundreds of participants from across the global Platform Leaders community of entrepreneurs, investors, academics, policymakers and practitioners.
Platform Leaders is where experts explore the most urgent questions shaping the future of technology – and their real-world impacts. To access more key insights, recordings and in-depth articles, or to be the first to hear about upcoming events, visit the Platform Leaders website and subscribe.
To watch the full event, play the video below.
The Platform Leaders initiative has been launched by Launchworks & Co to help unlock the power of communities and networks for the benefit of all. All Launchworks & Co experts live and breathe digital platforms and digital ecosystems. Some of their insights have been captured in the best-selling book Platform Strategy, available in English, French and Japanese.