
About
I've spent my career at the point where technology meets policy, building companies, advising platforms, and working with governments to figure out the rules. This is the longer version.
The internet wasn't built outside of society. Its evolution has been shaped by companies, governments, courts, and the people caught between them.
I've spent my career inside that process, working between technology companies, regulators, and the societies they operate within. This is the story of how the consumer internet came to be regulated, told through the work I've done along the way.
In the early 2000s, the European Parliament was writing rules for an internet in its infancy. Google had just IPO'd. Facebook was still a dorm project. The question in Brussels wasn't how to regulate technology, but whether it needed regulating at all.
I was still at university, and already working for a small consultancy called Policy Action, monitoring parliamentary committees and trying to explain to companies like Amazon and Microsoft why directives on e-commerce and audiovisual media might one day matter to them. Much of it felt abstract at the time. It doesn't now.
The first era of internet regulation was about classification. What was a platform? What was a publisher? What was a marketplace? The answers determined who was caught by which rules.
I spent those years at the centre of that question, as the web moved from static pages into something fully interactive. First as Director of EDiMA, the European trade body representing Google, Apple, Amazon, Microsoft, and Yahoo. I drafted and drove amendments to the Audiovisual Media Services Directive that ensured streaming platforms were not treated as broadcasters. Later at Google, as the company's first EU policy hire, I helped build the frameworks governing social networking, search, and video across Europe.
At the time, the industry argued that it could regulate itself, and I helped promote that approach. I negotiated the European Safer Social Networking Principles with Commissioner Viviane Reding and made the case that voluntary commitments could work. Some of them did. Many did not. Not because the intent was missing, but because the incentives driving growth were not aligned with resolving the hardest issues at their root. Privacy, child safety, harmful content: the risks were raised early, but they were largely seen as conceptual, outweighed by the promise of the internet economy.
At Google, I had genuine conviction in what the company was building. But over time, I began to see gaps between the mission as it was articulated and some of the decisions being made in practice. For example, I had to defend, and then help negotiate a resolution to, Google Street View cars illegally sniffing unsecured Wi-Fi networks. That experience taught me something important: even companies with real ambition and talented people can lose sight of how their choices land in the world outside.
I left Google and joined Meta, this time with clearer eyes about how large platforms actually operate. Inside the company, the culture was energetic and optimistic; there was a genuine belief in the value of what was being built. But externally, the narrative was shifting. The gap between that internal conviction and how platforms were perceived by the public and by policymakers was widening, and that disconnect was itself becoming a force that shaped regulation.
I spent three years working across child safety, privacy, competition, and content regulation across EMEA. What struck me most during those years was that much of this felt like uncharted territory, with neither industry nor policymakers fully understanding the system they were part of. Technology companies understood their products but not always the political and cultural forces shaping them. The policy world understood the politics but not the commercial logic driving the platforms. The conversation became lost in translation, even as the stakes grew.
That gap wasn't just an observation. It was a market opportunity.
At both Google and Meta, I had worked alongside consultants and policy advisors, many of them talented and well-connected. But the best of them tended to come from a policy tradition. They understood how Brussels or Washington worked, but they weren't always equipped to connect that to the business imperatives driving product decisions. I kept seeing the same disconnect from both sides of the table: the advice companies were getting on regulation didn't speak the language of the boardroom, and the strategies coming from the boardroom didn't account for where regulation was heading.
On the second of January 2013, driving back from Cornwall with my fiancée, I decided I either had to do something about it or stop talking about it. A month later, I was in a room with a group of games companies who needed help with a misguided piece of gambling regulation. I sketched out an approach on a whiteboard that combined the policy problem with the business reality, and I had my first clients before I'd formally launched. That was the start of Delany & Co.
Over eleven years, I built it into an international advisory practice working with companies across games, social media, the sharing economy, dating, enterprise, and entertainment. Clients included Uber, Airbnb, Roblox, Supercell, Rovio, and Zynga. I founded industry bodies, chaired a European forum on the collaborative economy, and testified before regulators in the US, UK, EU, and Australia.
The model was deliberately built to scale. I kept a tight core team with deep sector expertise and brought in local policy specialists on a campaign basis in different jurisdictions. One piece of regulatory intelligence could serve multiple clients across multiple markets. It meant a lean operation could punch well above its weight.
Around 2018, I narrowed the focus to interactive entertainment, primarily games. The regulations being written to target social media were going to catch thousands of other companies that weren't prepared for them. That was a huge, underserved market, and it was where the most interesting regulatory questions were emerging.
Across all of that work, the same question kept coming up: how is this company shaping the world it operates in, and how is that world shaping the company in return?
The current phase of the internet is defined by accountability. The Digital Services Act, the Online Safety Act, age-appropriate design codes, and emerging AI governance frameworks are moving from discussion into implementation. Public trust in technology platforms remains low. This is not a reset; it is the continuation of the same dynamic, now playing out through regulation and infrastructure instead of voluntary commitments.
The hardest problems have always been about implementation. It was never enough to agree that platforms should verify age, protect children, or comply across jurisdictions. The question was whether those commitments could be turned into systems that actually worked at scale. My own clients kept asking the same thing: we understand the regulatory implications, but what do we actually tell our developers? How do we implement this in the product?
That question is what brought me to k-ID.
In late 2023, several of my own advisory clients introduced me to a company building the infrastructure layer for child safety compliance, with APIs that let platforms implement age assurance and regulatory requirements directly into their products, across jurisdictions. I sold Delany & Co, joined k-ID as Chief Corporate Affairs Officer and VP EMEA, and helped bring the company out of stealth at GDC in March 2024. Later that year, k-ID raised $45 million in a Series A led by Andreessen Horowitz and Lightspeed, and was named a World Economic Forum Technology Pioneer.
The work connects everything I've done before: the regulatory knowledge, the platform experience, the understanding of how policy actually becomes product. The difference is that k-ID is building the infrastructure to make compliance real, not just promised.
After twenty-plus years working across these roles, a pattern is clear. Technology companies operate within society, politics, and culture; they do not sit outside them. Product decisions carry societal consequences. Political shifts change what companies can build, sell, and say.
I've described the principle behind this as "safe danger": the belief that innovation, experimentation, and growth should be enabled, but not at the cost of real harm. It is not an anti-innovation position. It is the only version of innovation that lasts.
For much of the internet's history, regulation, reputation, and public trust were treated as external factors to be managed. That approach is no longer viable. What matters now is understanding how these forces interact, between companies, regulators, and the societies they serve. That is where the most important decisions are being made.
It is also the layer I have spent my career working in, and the one I write about here.
Languages: English (native), French (fluent), Dutch (conversational), German (learning)
Based in: London
Get in touch: Contact form