THIS! The question is no longer “Can this be built?” but “Who realises it already can?”
Capability is moving faster than comprehension, and when the bottleneck moves, power moves with it — towards the people and firms who see it first and have the resources to act on it.
So the task now is not to restrain the technology, but to widen the circle of people who can see it clearly enough to use it — and to shape it — on their own terms, in workplaces, in communities, and in the public sector.
Thank you - took sooo long to get this right….
You did it: clearly written, luminous. May even convince the obtuse. (I hope so for all our sakes) KBO.
As I said on LinkedIn, this was a superb piece. Visible pacesetting on AI in the UK feels like it's missing - we are a country of creativity, this should be right up our street. Your influence I'm sure will help change this.
I keep being astonished at Google's AI Mode - as a research tool it is awesome. But then you start to realise the trick is not finding answers, but knowing what question to ask. Perhaps sitting in a good library helps you ponder the right questions to ask.
> Put secure, supervised AI tools in people’s hands
I would really like some examples of such tools. That's not a rhetorical question, but a factual one. I don't believe that such things exist, or that we know how to build them, yet ... but I am quite willing to be proved wrong by actual examples.
It's my contention that the theories, techniques, tools and tradecraft don't exist to build provably, testably secure and safe AI applications outside a small number of niche, and economically uninteresting, examples. Again, prove me wrong. Please!
The price of initiative collapsing is profound.
The next shift may be interpretive scarcity collapsing alongside it.
When anyone can build, authority has to re-justify itself.
Excellent piece - I like your 4 steps but sadly I see no signs that the 50-year-old logistics coordinator in Sunderland has been given a second thought. Nor do I think there is a dashboard being created to see which tasks are being automated, and the messaging is not positive enough to make people think that the public will share in the benefit from time-saving practices. I love the fourth one about agency in schools.
Your points are all so forward-thinking and positive.
Instead the reality seems to be mild panic and anxiety, and the fear-inducing thought that if we don't get across AI we will become obsolete and irrelevant.
Really good piece Martha, with 4 action steps, let’s get those to policymakers.
"Fourth, agency in schools. Not lectures about AI, but making with it: build something, present it locally, reflect on what was human and what was machine. That changes how young people see both the tools and themselves."
also first principles re-building of education for an AI-first world: teaching judgement + critical thought, as the production of 'good enough' becomes commonplace; more emphasis on in-person (return of vivas?); importance of primary source material over generic summarisation...
What a great insight: the time between the invention of a new literacy and thinking tool (for-profit service?) to its application and the accumulation of power that comes with it has shrunk on a galactic scale. That will rock societies and cultures to the core, but this smallish detail is what troubles me most: that the “first draft” role is evaporating.
In some cases: hooray! Is my time better spent drafting a required sustainability report or writing about literacy strategies for young learners? But in many cases, I wonder if outsourcing initial drafts to a very good re-user of existing texts, models, and patterns will come back to bite us if it ever becomes clear that we have abdicated our role as meaning makers and handed our decision-making agency over to non-sentient, non-interested and, I would argue, non-linguistic (in the human sense) programs. We may never notice when exactly we (humans) stopped considering, questioning, and working through ideas to develop and discuss our own, human-based (not numerically based) texts and heuristics and structures. Outsource selectively, people, not for convenience only.
Brilliant Martha…With my 77 years I got most of that …& that’s saying something . Well done you !
This is a brilliant essay Martha, and it challenges some of my resistance. But are we happy that we have no democratic control over the owners of the "corporate clock" and that our "political clock" is so powerless and ineffective? Who is going to train the trainers who will train the logistics coordinator in Sunderland? Who decides and enforces the guardrails? Shouldn't those things come before we make policies to 'give' those corporations access to all of us? Or is it too late? I want to share your excitement, I really do.
Sorry, to clarify: I am only partly excited - I am very anxious about many aspects, as I tried to explain. The corporate piece is the biggest of course.
Thank you and NO I AM NOT.
Superb article! Re "the entry-level rung thinning out", the software consultancy ThoughtWorks brought a bunch of leading software developers together recently for a retreat to discuss the future of software development; they summarised the findings in a whitepaper here that makes for interesting reading. https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf
Their conclusion was that juniors were actually _more_ valuable in an AI world, not less, and that it was mid-career engineers that were more threatened by AI:
> "The retreat challenged the narrative that AI eliminates the need for junior developers. Juniors are more profitable than they have ever been. AI tools get them past the awkward initial net-negative phase faster. They serve as a call option on future productivity. And they are better at AI tools than senior engineers, having never developed the habits and assumptions that slow adoption.
>
> "The real concern is mid-level engineers who came up during the decade-long hiring boom and may not have developed the fundamentals needed to thrive in the new environment. This population represents the bulk of the industry by volume, and retraining them is genuinely difficult. The retreat discussed whether apprenticeship models, rotation programs and lifelong learning structures could address this gap, but acknowledged that no organization has solved it yet."
This is an interesting companion piece - speaks to the point about "value of discernment rising" https://substack.com/home/post/p-187098090. "AI is accelerating the markers of scientific production while potentially degrading both the quality and diversity of what gets produced"
I've been in a couple of AI communities today where people have been sharing their experiences of the magnetic nature of the tools that let you build. There is a phenomenon that seems prevalent - 'because you can, you do'. And it's driving people to prototype everything that comes into their head. I'm fighting it myself. #AI-diction
Very interesting piece - and struck by your focus on 'access' referencing Sunderland as the first point. On a similar theme, I tried to assemble the available statistics on regional use of AI across the UK a couple of weeks ago and the signs aren't encouraging. For example, adoption of AI by businesses appears to be happening at double the rate in London as it is in the North East. Some of this will be about the types of businesses, but this speaks to the risks that you highlight about early movers. https://futurenorth.substack.com/p/ai-and-tech-is-the-north-catching
In a time when we’re saturated with ‘intelligence’, this type of wisdom is sadly scarce. Such a great piece. I’m all in!