From policy to product in an age of unpredictability
December 16, 2025
Digital regulators around the world are facing a new era – one in which the rules of the game need to be not only rewritten, but also reimagined. As AI accelerates change across markets, policymakers are revisiting laws on which the metaphorical ink has barely dried. At the same time, tech firms are working to translate abstract regulations into practical product and engineering decisions, even as the market competition heats up.
This leaves regulators and practitioners with a shared question: How do we regulate and build products in a world we cannot easily predict?
“Peak hubris”? AI upends regulators’ predictions
For years, regulators assumed that digital markets followed predictable patterns. Alexander Waksman, partner in competition law at Gunnercooke, casts back to 2023: a moment defined by “peak hubris”, in his opinion. “Regulators around the world said, ‘We know how digital markets work: they’re all characterised by barriers to entry, network effects, personalisation, lock-in. This leads to a series of incumbents dominating each of their own individual areas of specialist expertise,’” he recalls.
The sudden rise of generative AI has drastically challenged these assumptions. In April 2024, the Competition and Markets Authority (CMA) in the UK published its review of AI foundation models, predicting that established digital giants would inevitably dominate the AI stack. Yet within months, the landscape had shifted and these forecasts did not materialise as expected. “Market shares have only continued to move away from what the CMA would’ve called the incumbency firms,” notes Alexander.
Brian Williamson, partner at Communications Chambers, agrees that AI has caught regulators out in a number of ways. “We’re used to being able to scale things like social networks at very low incremental costs,” he says, while “[AI markets are] characterised by capital intensity, which is not the way we thought about digital markets historically.” And again, the market dynamics have been more competitive than anticipated, he observes: “There are all sorts of [firms] contending the space, as well as the established players.”
Regulators are at a crossroads between old and new, thinks Brian. “In Brussels, [there’s] a desire to do things that are future-proof, which I think is harder than one might think. The future is full of surprises.”
To launch regionally or go global
That future also looks different from different positions. Annabelle Gawer, chaired professor of digital economy and director of Surrey CoDE, observes that regulators are in no small part responsible for setting the course of AI development in their regions. “What role do you want to play in AI: that of a consumer or that of a producer?” she asks. “Europe is putting out regulation that is aiming to protect users; the US and China […] want regulations that allow them to continue to produce AI.”
‘What role do you want to play in AI: that of a consumer or that of a producer?’ – Annabelle Gawer, Chaired Professor in Digital Economy; Director Surrey CoDE
Indeed, countries and regions tend to favour different regulatory approaches, each shaped by what role they see themselves playing in AI’s future. While the EU leans on prescriptive, hard-coded obligations – for example, in the Digital Markets Act (DMA) and the forthcoming EU AI Act – the UK has been building a more flexible regulatory model and investing in technical capacity, with data scientists, UX researchers and engineering expertise embedded directly into regulatory teams. Meanwhile, the US is taking a “try AI first” approach, with a focus on innovation and experimentation before restrictive intervention.
For global tech firms, this divergence is becoming part of product strategy: where to launch, how to prioritise engineering resources and how to navigate differing obligations. “In a perfect world, you would launch globally, collect all of the test data and then iterate as quickly as possible so all consumers and businesses have the opportunity to react to the launch,” says Oliver Bethell, head of competition, regulatory engagement & advisory at Google. But, he adds, “that rarely happens in practice.” Instead, many firms are looking to regions where they have good working relationships with regulatory agencies.
‘In a perfect world, you would launch globally, collect all of the test data and then iterate as quickly as possible so all consumers and businesses have the opportunity to react to the launch’ – Oliver Bethell, Head of Competition, Regulatory Engagement & Advisory, Google
Translating regulation into product decisions
Today, the strength of the relationship that a firm has with a regulator can contribute to whether a product is launched now, later or never. In part, Oliver says, this depends on the literal letter of the law: “Is the text clear? How many laws are there? Can we understand quickly what we need to do?”
The other component is collaborative. “Then it comes down to the nature of the relationship with the regulator,” continues Oliver. “Can I quickly talk to them? Can I quickly bring my engineering lead [or] my product design lead to that agency to talk about what we are doing?” Without that dialogue, companies may become risk-averse, slowing or withholding launches entirely.
“The speed with which launch decisions have to be taken does not give you infinite time and capacity to sit with [a lawyer] and pass through complicated terms of legislation,” Oliver adds. He has seen first-hand how regulatory complexity can lead to delays in launching products, especially with the current framework in the EU. Scaling companies confront threshold-based rules that could change their obligations overnight; for established firms, compliance is not straightforward either. “The larger your scale, the more regulation you have to manage. That’s a consequence of success,” he admits.
On the basis of harms
Differing regulatory philosophies also shape how each region identifies, interprets and responds to harm: an area where definitions remain contested and increasingly central to effective rulemaking.
“How should rules evolve when the harms aren’t clear?” asks Alexander. In highly prescriptive frameworks like the DMA, evolving the rules becomes extremely difficult. If the harms are uncertain, yet the rules are fixed, the regulatory system can become misaligned with the realities it aims to govern.
For Alexander, the solution is restraint: “Rules should be delayed until there is a harm that they can anchor on.” Otherwise, regulation risks becoming what he calls “a value-free void into which pure lobbying and rent-seeking step in”. In other words, without clear harms, the policymaking space is too easily dominated by vested interests shaping rules to their own advantage.
‘Rules should be delayed until there is a harm that they can anchor on.’ – Alexander Waksman, Partner, Competition Law, Gunnercooke
Annabelle reinforces this point from an ecosystem perspective. “Divergent incentives and abuse of power are nothing new,” she observes, but the identification of harm is rarely straightforward. What one stakeholder sees as a harmful exclusionary practice, another may view as efficient market organisation or necessary risk management. As Annabelle reminds us: “There is still work to do today to have a consensus on what constitutes harm.”
Regulation in ecosystems, not market silos
Digital value creation today is deeply interconnected: platforms, sellers, developers and consumers all shape each other’s experiences. Yet the benefits are often concentrated. “One of the paradoxes of the digital economy,” Annabelle explains, “[is that] even if the process of value creation is very distributed across these ecosystems, the value capture is very, very centralised.”
In Annabelle’s view, we need to mobilise the notion of ecosystem in order to assess where power accumulates and where risk emerges. “There is really a need to expand regulators’ tools so that it matches the strategic toolkit that companies have when they enter, price and design products and services in these interconnected markets,” she says.
As well as offering products, large platforms act as governors of their vast digital ecosystems. Amazon Marketplace, for example, oversees millions of third-party sellers. “When [third-party sellers] observe [Amazon] not doing such a great job at governing the platform – apparently punishing some people for some transgression but not other people for the same transgression – they develop a mistrust of the platform authority as a ‘regulator’,” Annabelle says, referencing results from her recent research into inconsistent governance. “What happens is a social contagion of misconduct,” she recounts. “We shouldn’t overlook the role of those central actors in the way they behave and govern their own ecosystems to influence – positively or negatively – millions of ecosystem members.”
Areas of opportunity for AI regulation
While risks may be emerging in unexpected ways with AI, Oliver cautions against assuming that competition regulation is always the right tool to address a market problem. “Plenty of the hallmarks seem to be indicative of competition [in the AI space] working quite well,” he thinks, pointing to the speed of product launches, the volume of new patents and trademarks and the number of consequential scientific developments. He urges regulators to think carefully about where to direct their attention: “Other parts in the stack – particularly compute, infrastructure, chip capacity – seem to me interesting areas to be thinking about as competition regulators.”
Brian also notes interesting opportunities for regulators to help unlock innovation. “Sometimes you have to create regulation to have new markets,” he thinks. Citing the UK’s Automated Vehicles Act of 2024, which allowed services like Waymo to launch, he sees a potential role for more regulation that is as much enablement as it is limitation. “AI will not progress economically as fast as some technologists think, because there are a lot of things – even if it was capable of doing them legally – that AI wouldn’t be allowed to do,” he believes. “We need different or new regulation to allow things to happen.”
‘AI will not progress economically as fast as some technologists think, because there are a lot of things – even if it was capable of doing them legally – that AI wouldn’t be allowed to do. We need different or new regulation to allow things to happen’ – Brian Williamson, Partner, Communications Chambers
For his part, Alexander agrees that various regulatory approaches are needed: “In some markets, it just doesn’t make sense for us to intervene with the full force of law.” As AI continues to develop at speed, he thinks we may need to accept a degree of uncertainty. “What regulation should not be trying to do is to assume knowledge of how markets will work and engineer outcomes to fit. What regulation can do more successfully is to be more modest in its goals and instead to focus on things like basic levels of transparency,” he says.
“The future comes at you fast”
As Brian puts it, “The future comes at you fast.” AI continues to reshape digital markets, making that future, regardless of its speed, opaque for regulators and practitioners alike.
For firms like Google, success in this uncertain environment comes back to relationships, says Oliver. He calls out two strong characteristics that he hopes to see in regulatory agencies: “A willingness to engage at a technical level as quickly as possible – I cannot tell you how helpful that is – [and] a willingness to adopt an innovator’s mindset: engaging with us, being prepared to experiment, being prepared to iterate on solutions.”
Alexander’s plea to regulators is, above all, to keep things simple. In his words: “Focus on the things that really matter: price, quantity, product availability. If regulators focus on that, then they’re not going to go far wrong.”
Corporate power, however, reaches beyond pricing and products, according to Annabelle. “There are people who think if we can only solve the competition part, everything else is going to get sorted,” she says. “As soon as we have markets that become contestable, then all the bad behaviours that dominant firms can get away with are going to fall by the wayside, because consumers are going to ‘vote with their feet’.” Annabelle thinks that remains to be seen: “We are all living in a huge experiment that is happening in real time.”
Join the conversation with Platform Leaders
This panel with Oliver Bethell, Annabelle Gawer, Alexander Waksman and Brian Williamson was part of the Platform Leaders event organised by Launchworks & Co on 18 November 2025. Bringing together experts from technology, academia and policy, Platform Leaders provides a unique forum for tackling the most pressing challenges in digital markets. Explore the agenda, watch session recordings and join the community to stay up to date with future events and insights. To watch the full event, play the video below.
The Platform Leaders initiative has been launched by Launchworks & Co to help unlock the power of communities and networks for the benefit of all. All Launchworks & Co experts live and breathe digital platforms and digital ecosystems. Some of their insights have been captured in best-selling book Platform Strategy, available in English, French and Japanese.