AI now has 800m users, but is it in the hands that need it?
Mary Meeker, the world-famous tech analyst, has published 340 pages of incredible detail on the AI sector, but it left me wondering: who is really benefiting from this exponential growth?
Mary Meeker doesn’t publish often—but when she does, the tech world listens. She has been analysing the sector for decades, including in the late 90s, when she even turned her attention to a small business in the UK, lastminute.com :-)
Her 2025 Trends in AI report, released by her firm BOND Capital this week, runs to a dizzying 340 slides and captures the scale, ambition, and velocity of the AI revolution. It’s full of astonishing numbers, compelling insights and bold projections.
What it’s not trying to address—at least not head-on—is whether this transformation will work for everyone, and what the wider implications might be for society. Not just the companies building models at the frontier, but the public servants, SMEs, teachers, nurses, and care workers who will need to use them. Acceleration alone cannot be the goal. Widespread, meaningful usage must be.
The report is crystal clear that the speed of adoption is like nothing we’ve seen before.
ChatGPT reached 800 million weekly active users in just 17 months. Google took over a decade to hit the same milestone for search. In the U.S., generative AI tools have hit 50% consumer adoption in just three years—faster than mobile phones, the internet or social media. And this isn’t just a consumer story. More than 60% of Fortune 500 companies are already embedding AI into core functions—from logistics and legal to customer support and design.
The technology is also improving at pace. Models like GPT-4, Claude 3, and Gemini are approaching human-level capabilities in specific tasks. One benchmark, MMLU, which tests performance across dozens of academic and professional subjects, shows Claude 3 hitting 89%—double what GPT-3 achieved just five years ago. Evaluators mistook nearly three-quarters of GPT-4.5’s outputs for human writing.
And while the cost of training the largest models remains eye-watering—up to $1 billion for the most advanced—the cost of using them has plummeted. The price of inference (running a trained model) has fallen by 99% in just two years, unlocking new products, pricing models and market entrants.
That cost compression is driving another shift: a new era of infrastructure investment. In 2024, the “Big Six” U.S. tech companies—Apple, NVIDIA, Microsoft, Alphabet, Amazon, and Meta—spent $212 billion. That’s up 63% year-on-year, with most of it directed toward AI chips, data centres and cloud systems.
The landscape is far more competitive than many anticipated. Open-source models are gaining serious traction—Meta’s Llama 3 was downloaded 1.2 billion times in just 10 weeks. Developers across the globe are adapting and fine-tuning these models for local needs and new sectors. Chinese firms are gaining ground. Feature parity across systems is narrowing. Innovation is increasingly collaborative, not just proprietary.
All of this reminded me that the AI moment isn’t coming. It’s already here. The pace is breathtaking. But here’s the risk: the benefits remain concentrated.
In the UK, just 30% of SMEs report using any form of AI—compared to over 50% in the U.S. Even in large organisations, use is often limited to isolated pilots or productivity experiments. Too many frontline services such as healthcare, education, and justice are miles behind.
Meanwhile, only a handful of firms have the resources to operate at the frontier. Training a frontier model still requires access to thousands of high-end GPUs and vast energy resources. Most organisations—private and public—are left using what’s built elsewhere, on someone else’s terms.
And trust, in the West at least, lags behind. The report notes bias, hallucination and misinformation risks, but this remains one of the most under-addressed areas of AI deployment. The public sector—where high-stakes decisions get made—needs tools that are not just powerful, but explainable, auditable, and aligned with public values.
Finally, there’s a geographic imbalance. The report is almost entirely focused on the U.S., with some attention to China. Europe barely registers. That’s not just a presentational gap—it reflects a genuine strategic risk.
But it’s not inevitable.
Europe—and the UK in particular—has strengths that are easy to underestimate: deep research institutions, high-trust public services, strong civic institutions and a legal tradition grounded in rights and accountability.
Too often the narrative in Europe is one of massive and complex legal frameworks that stifle start-ups. But Europe’s regulatory instincts could be a strength, not a weakness. Done well, and without being overly burdensome, they could offer an alternative model for AI governance.
It shouldn’t be impossible to design laws that prioritise safety, rights, and democratic oversight without stifling innovation. However, if the legal vision is purely reactive—always responding to moves in the US—we will fall behind not just commercially, but socially and politically.
One area where the UK has a real shot at global leadership is workforce transformation. Over the last few months, estimates of the scale of change to jobs have ranged from the fairly modest to the catastrophic. I do not think we hear our PM talk enough about the urgency of re-skilling the workforce. We need to get ahead of the tsunami, not ignore it. In the next decade there will be huge differences in which jobs we need and who does them, and we cannot put our heads in the sand about the direction of travel.
AI isn’t just a technology shift—it’s a work shift. It changes how jobs are structured, how tasks are performed, and what skills are required. If we treat this purely as a productivity play and ignore the people who need to use these tools, we’ll fail.
In a funny twist, the BOND report dropped while I was in a Multiverse board meeting—Mary Meeker, its author, sits on the board too. Multiverse shows the positive business impact when employers take workforce development seriously. It began as a modern apprenticeship platform but is now one of the fastest-growing ed-tech businesses in the UK, helping UK employers re-skill thousands of people in data, AI and software development.
If the UK can get workforce strategy right—if we equip SMEs, public institutions, and entire industries to use AI confidently and ethically—we unlock a national advantage. This is not just an economic opportunity. It will help build societal resilience.
The opportunity is there, but it won’t be realised through innovation alone. We need:
Investment in public infrastructure: open datasets, shared compute, and safe model testing environments. This will allow transparent and open source applications to flourish.
A massive increase in green energy supply to power the next wave of adoption. China is investing billions in solar energy, adding more capacity than any other country in just the first quarter of 2025. (Europe is investing in AI infrastructure but faces challenges: it holds just 4% of global AI computing power, compared with the US’s 70%, and suffers from higher industrial energy costs.)
Reform of public procurement: so the NHS, local councils and schools can adopt AI tools built for their needs—not just enterprise platforms retrofitted for public service.
A shift in capital flows: towards startups building for education, care, community health—not just another enterprise customer service application.
In 2025, the most important question isn’t whether AI can write a poem or summarise a contract. It is whether it helps build a fairer, smarter, more inclusive society.
The BOND report shows us that Europe—and the UK—won’t win the AI arms race. So we need to focus on the huge challenge of dissemination. The clock is ticking.
https://www.bondcap.com/report/tai - if you can’t face wading through the whole thing, ask a gen AI app to help you summarise it…
An excellent overview of where we are today. However, I'd like to offer a different perspective on the industry's future.
AI models have been trained on 1000 years of human activity. That's one reason for their high performance and speed of customer adoption. However, five years from now they will be learning mainly from other AI models, not from people. A self-reinforcing vortex of hallucinations and uncorrected errors (garbage in, garbage out) will grow at an exponential rate.
I'm not sure how this can be avoided or how damaging this would be. We might witness a rapid decline in some (or even most) AI applications. Public confidence will be lost as the benefits predicted today fail to materialise.
I'd like to hear how the industry believes it can prevent this from happening.
I'm certainly not afraid of AI. Indeed, I was one of its earliest commercial advocates, back in 1995, but I'm already starting to doubt its efficacy as a serious work tool in many fields.
We are in sync; I even pitched this idea to someone!