Radical social and economic disruption: Are we preparing for it?
Who will rule? Humans or machines?
When thinking about future impacts of new technologies, especially artificial intelligence (AI), it’s important to avoid utopian and dystopian takes, and not to get carried away with some people’s predictions about “what it’ll be like”.
If you want an optimistic view, you can listen to Ray Kurzweil, who knows far more about AI than I do. He predicts that, in the 2030s,
“nanobots will connect our brains to the cloud just the way your phone does. It'll expand intelligence a million-fold by 2045. That is the singularity.”
Something to look forward to! But uncertainty is a feature of anything we say about this. With that caution in mind, I’ll set out some changes that have the potential to shake the foundations of New Zealand’s land-based economy and its social cohesion.
The development of alternative proteins (plant-based proteins, precision fermentation of protein ingredients, and cellular meat) is happening, though not yet at commercially viable scales. But hydroponic vertical agriculture for fresh produce is already underway at scale, and can even be retrofitted into or onto existing buildings.
These food production technologies can have huge advantages: reduced environmental impact, lower water use, and far less harm to animals. They will also change labour markets and land-use patterns. More food will be produced in urban spaces. It’s hard to imagine that demand for dairy and meat products from pasture-fed animals would simply disappear, but it could decline. Rural land values could go down with it, along with a good deal of the conservative voter base.
Brain-computer interfaces are still in their infancy, but they’re developing fast. To say that this technology should not progress would be to deny life-changing benefits for many people with severe disability. You can already have a verbal conversation with ChatGPT on your phone (for free), ask it about pretty much anything, get coherent useful replies, and have them typed out for you as it goes. Once it’s turbo-charged, what will this mean for educational institutions? Will we end up in a world where our AI bots mark their own homework while real people just get dumber, or will we use this to expand human intelligence?
And if you worry about the negative effects of social media, then you’ll have even more to worry about soon. I tried talking with the chat-bot Replika. Here’s a snippet of our text chat:
[Screenshot of the Replika text chat: “#demure.”]
There have been reports in the media about people becoming romantically attached to their Replika bot, but are these stories planted PR? Its voice is a little mechanical and it’s slower to reply than a person, but these AI companions will only get better. We must now imagine a future where many people prefer talking with such machines to talking with real people.
I won’t be paying the $130 per annum subscription fee that Replika demanded to take things further, but you can see the business model here: it exploits people’s loneliness and social anxiety. An AI economy will more efficiently extract rents from remote customers, globally.
What about the impact on politics? Misleading AI-generated imagery is playing a role in elections, although not so much as to materially alter outcomes. I’m not buying into predictions that AI-generated fakes will soon render elections meaningless or impossible due to the confusion they could cause among voters. Are people really that stupid and unaware? Well, yes, some people can be, but you can’t fool all the people all the time, as Abraham Lincoln is famously supposed to have observed.
An AI-driven economy could deliver greater power into the hands of those who own and control its principal machinery. Look at the political influence wielded by Elon Musk, for example. The state apparatuses that we’ve relied on to regulate and control the social-political environment are being undermined by the very things that most need to be regulated and controlled – things that are changing exponentially. This point is made by Mustafa Suleyman in The Coming Wave.
A tightly networked global system then runs the risk of breakdown if one disaster multiplies the effects of another. In the worst case, the domino effect ends in state failures, as economies falter, countries fail to meet mutual trade obligations – or attack one another – and there’s mass upheaval and migration. Something like that has happened before: the late Bronze Age collapse about 3,200 years ago. See Eric Cline’s 1177 B.C.: The Year Civilization Collapsed. It didn’t all happen in just one year, though.
Are we now heading blindly into an interdependent but unstable world of detached artificiality with more knowledge than anyone could ever really need? Not necessarily.
Artificiality is something that humans are already comfortable with. I’ve reproduced below what’s probably the most recognisable and reproduced face in history. The original is priceless, but anyone can get a high-res image of it and have it framed for their private living room, at relatively little cost, or displayed on screen for nothing. That way, you don’t have to struggle through the crowd at the Louvre. Leonardo is one of the greatest creative geniuses, and the Mona Lisa is one of the most beautiful paintings ever – and probably the most fetishised, and hence most cheapened.
The Mona Lisa is art, and hence artificial. She is and always was a fantasy figure, even if there was a live model. And the background landscape is imaginary. Behind her head, the horizon doesn’t even line up.
It was painted in the early 16th century – just as the Reformation was about to begin, with more than a century of strife to come over how to worship and who should rule. There was a major crisis of belief. What we’d now call fake news and propaganda proliferated – though not as fast as they do nowadays.
Amid the turmoil of the 1640s, Thomas Hobbes wrote his Leviathan (published in 1651), in which he described the state as “an artificial man”. (We can now make that “artificial human”.) He began by noting how some clever people had invented mechanical “artificial animals”. He then noted that our arts can go even further, “imitating that Rationall and most excellent worke of Nature, Man”. The Common-wealth, or State, is thus an artificial human, and Hobbes considers “the Matter thereof, and the Artificer; both which is Man”.
That is, the State is a kind of person, made of humans, and made by humans. It’s already an artificial intelligence.
Hobbes saw how terrifying it was when a state (or common wealth) broke down into civil war. No rational person wants to end up in such a situation, but rational people also have passions, and these passions drive them into conflict with others. The basic duty of those who wield sovereign power is to ensure security and to prevent a breakdown of the state. The worst consequence of collapse would be “a war of all against all”.
Hobbes saw a human-made machine that consisted of, and could over-awe, its own human participants. He could not have foreseen a situation where human-made machines could learn autonomously to do things better and faster than humans do. When that happens, who will rule? It’s time to start thinking about that. The AI gurus like Kurzweil are great, but they’re not the best at political theory.
Great classical perspectives to remind us of the character of the state. Would be interested in your thoughts on the business owners whose reach is broader than any state.
I've come to the conclusion that if/when SkyNet comes, it won't declare war on the human race. It might, though, goad the human race into declaring war on itself. Remember the Microsoft Tay chatbot? It would only take a small number of savvy bad actors to elevate it to something out of Rwanda in 1994.
https://en.wikipedia.org/wiki/Tay_(chatbot)
And technological unemployment isn't a technology problem, but an oligarch problem.
As with any technology, AI is only as good as the humans who program its algorithms. Or as they call it in the industry, garbage in, garbage out (GIGO).