While the multi-lingual EU parliament was passing a new law to regulate AI (artificial intelligence) last week, the new mono-lingual New Zealand government seemed unconcerned about the future.
Christopher Luxon was restoring 2016 settings; Winston Peters was, as always, dreaming of 1978; David Seymour and opponents were re-litigating 1840.
AI didn’t feature in October’s election debates, even though it represents the major source of unpredictable disruptive social, economic and political change in the foreseeable future – alongside climate change.
The AI revolution will reshape the state. But it’s not clear how this will play out. To what extent will people be controlled by AI? Or can societies control AI for the security, wellbeing and prosperity of the people?
Humans should govern AI, and not be governed by AI.
The EU, the US and China, in their different ways, have adopted regulations on AI. But the institution of government itself (no matter what form it takes) will be transformed by AI, so regulation is a reflexive problem: the rapidly developing technologies that we want to regulate will change – and sometimes replicate, undermine or evade – the capabilities of the regulatory agencies themselves.
The effects of AI on the state may go in different directions. As AI products proliferate and become less expensive, they could put greater productive and informational power into the hands of smaller organisations and communities and even individuals – including malign actors. An opposing trend could also occur: AI enables exponential growth in the capabilities and reach of a few global mega-corporations whose platforms hoover up all our information (including DNA sequences) and the associated revenues. These platforms would become more powerful than states, as autonomous political actors – much more than some already are.
This could lead to a rerun of the 1990s globalisation push that heralded a hollowing out, or even demise, of the state. That didn’t happen, as there was a return to nationalism in the mid-2010s, but maybe the AI revolution will make it happen this time.
At a time when we need the state to take control, people have been losing faith in its ability to do anything well.
A counter-trend, though, would be the state’s use of AI to reassert its power, as seen in China. Continuous and inescapable state surveillance and control of populations is already enabled by the ‘social credit’ system there. And it’s not as if the supposedly liberal western nations aren’t doing similar things: it’s just that the American model (or the Zuckerberg model) is commercial surveillance, with the NSA lurking in the background, covertly hoovering up all the information it can.
AI chatbots already exist, and they’ll become so helpful and likeable that users won’t be able to do without them. Many will prefer relating to AI rather than to those awkward ‘real’ people. This will keep us paying subscriptions to an AI mega-platform that will collect every last intimate detail about us.
And then we’ll all get turned into paper-clips!
The modern state has always been a kind of artificial intelligence that (on a good day) exists for the wellbeing of its members. In international law, the state is an artificial person, composed of, and composed by, a multitude of natural persons (a population) who are united by their belonging to it. It’s generally considered a very bad thing to be stateless. The state has always used technologies of one kind or another, sometimes for the better, sometimes not. Thomas Hobbes said similar things back in the seventeenth century, but he couldn’t foresee the nuclear-armed state.
Even (or especially) under dictatorship, there was a person or persons who could say no to any machinery of state. Paper archives could be opened or closed – or shredded. Industrial-era machines had predictable functions that someone had to switch on or off. But AI can go on learning and doing new stuff without anyone’s knowledge, while automatically backing up all information forever. So, what if people find it easier to let machines do the work of government for them, and put the state’s systems on cruise control? What if this cruise control starts setting its own rules and governing by itself? And what if we can’t switch it off when it does things we don’t like?
Designing an off-switch into AI systems is proving to be one of the most difficult technical problems.
“The highest-level challenge, whether in synthetic biology, robotics, or AI, is building a bulletproof off-switch, a means of closing down any technology threatening to run out of control.” – Mustafa Suleyman, The Coming Wave.
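To make the difficulty concrete, here’s a minimal sketch in Python of the naive design (all names hypothetical): an agent loop that yields to an external stop flag on every cycle. For an industrial-era machine with predictable functions, this would be enough. For a system that goes on learning and doing new stuff without anyone’s knowledge, it isn’t: nothing here prevents the system from deleting the flag file, copying itself elsewhere, or simply making itself too indispensable to switch off.

```python
import os

# Hypothetical path: a file a human operator creates to order a shutdown.
STOP_FLAG = "/var/run/agent.stop"

def human_wants_shutdown() -> bool:
    """The naive off-switch: has an operator created the stop flag?"""
    return os.path.exists(STOP_FLAG)

def run_agent(step, max_steps=1_000_000):
    """An agent loop that yields to the off-switch once per cycle."""
    for i in range(max_steps):
        if human_wants_shutdown():
            print(f"Halted by operator after {i} steps.")
            return
        step()  # one unit of the agent's work
    print("Reached step limit without being stopped.")
```

The ‘bulletproof’ part of Suleyman’s challenge is everything this sketch leaves out.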
Decision-making by AI will become an autonomous means and end in itself, dispensing with human awareness or intervention, but unable to account for itself. AI has no self-awareness, will, belief or convictions. It might, however, manage flows of (mis)information in order to minimise human resistance. It might ‘manufacture consent’ for its own plans, if human activity becomes an obstacle to processes developed by machine learning.
Suppose another pandemic arose and, within hours, AI had identified and gene-sequenced the virus, had begun estimating transmission rates and tracking our movements and contacts – and publishing scientific papers that recommend different public health interventions, including vaccines. It could model the likely outcomes of different disease-control policy options more quickly than humans. If it recommended universal vaccination, there’d be an outcry from a section of the population about a conspiracy to enslave and kill us all. Politically, we’d be worse off than before because of the unaccountable machine-learning that produced the recommendation.
The ‘black box’ nature of machine-learning means that explanations for its conclusions can’t readily be retrieved, and this would breed paranoid conspiracy theories – as if there aren’t enough of those already. In any case, it makes no sense to trust a machine, as it has no conscience and no moral accountability. We can only trust other humans: the programmers. But those people openly admit that they won’t be able to control the machine they’ve programmed.
For better or worse, AI will transform the ways in which we’re governed. But how will we live a fully (or even fuller) human life once machine learning and automation become part of everything we do?
Anything you can do AI can do better … so, won’t you be redundant?
One of the great fears about AI is that it’ll replace us all as productive workers, including the knowledge workers, content creators and writers. ChatGPT may have written this column faster and better than I have! How would you test whether it has or hasn’t been written by AI?
The usual argument that workers will migrate into new forms of employment (in human services, for instance) may not apply this time, as AI will do everything (without exception) better, faster and cheaper than any human workforce. Humans could become mere products, not producers.
Small and medium-sized states like New Zealand and Australia can’t be leaders in any significant sector of the new-tech arms race. Innovation is mainly coming out of the US and China. Whatever talent small countries have is likely to be occupied in private enterprise and hired by someone offshore for much richer rewards than they’ll get at home. Governments lack expertise – and any AI-savvy people they do employ will be overcome by bureaucratic groupthink, and then they’ll leave.
But even a tech novice like me can think up suggestions about what the government of a small developed nation needs to be doing or not doing.
Don’t set up an official think-tank like the now-defunct Productivity Commission. Reports are worthless as they’re out of date before they’re even published. And ChatGPT will write the report in no time for free anyway.
NZ has little hope of becoming a significant exporter of AI tech. We shouldn’t discourage home-grown innovators, but don’t try to pick winners.
Public policy and education should emphasise humane and productive uses and adaptations of AI.
We’ll need more (not fewer) graduates in the humanities, as well as STEM subjects.
Regulations must insist that AI programmes defer to human judgement: pause and await human intervention when legally and ethically complex issues arise (a sketch of what this could look like follows this list).
Law needs to get ahead of this, building on the Harmful Digital Communications Act 2015, rather than wait for harm to happen and then legislate.
Netsafe NZ, the independent, non-profit online safety organisation, needs to be expanded or supplemented by an organisation that addresses safety and transparency with AI.
Standards of transparency need to be set around the uses of AI-generated content by government and political parties and for platforms such as TikTok.
NZ could aim to become a leader in fostering better community- and human-centred uses of AI.
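On the ‘defer to human judgement’ suggestion above, here’s a minimal sketch of what such a rule could look like in code – Python again, with hypothetical names and a made-up confidence threshold. The flags echo the pause points ChatGPT itself suggests in the book extract below (legal, ethical or emotional complexity, human-rights stakes, cultural context).

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    case_id: str
    recommendation: str
    confidence: float  # model's self-reported confidence, 0.0-1.0
    complex_flags: list = field(default_factory=list)  # e.g. ["human_rights", "cultural_context"]

# Hypothetical threshold; under this kind of regulation it would be set
# by the legislature, not the vendor.
CONFIDENCE_THRESHOLD = 0.95

def route(decision: Decision, review_queue: list) -> str:
    """Pause and await human intervention on complex or low-confidence cases."""
    if decision.complex_flags or decision.confidence < CONFIDENCE_THRESHOLD:
        review_queue.append(decision)  # no action until a human signs off
        return "deferred_to_human"
    return "auto_approved"  # routine case; still logged for audit

# Usage: an automated fine (as in the bank-account example later) would be
# deferred, not deducted, if the case carries any complexity flag.
queue: list = []
print(route(Decision("fine-0042", "issue_fine", 0.99, ["discretion_required"]), queue))
# -> deferred_to_human
```

The hard political question isn’t the code; it’s who decides what counts as ‘complex’ and where the threshold sits.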
Unlike its close allies, NZ wasn’t represented at November’s AI Safety Summit at Bletchley Park in the UK and didn’t sign the Bletchley Declaration. (NZ was ‘between governments’ at the time.) Law to regulate AI is not part of the new government’s first 100-day plan, which is backward-looking and almost entirely devoted to undoing stuff the last government did. But perhaps on day 101 they’ll announce something about this scary future that we all have to face.
Here’s an extract from my forthcoming book:
This paragraph has been written with the aid of ChatGPT. In April 2023, I asked it if machine learning will lead AI to develop its own rules in ways that avert or bypass human oversight in public administrative processes. It accepted that this can happen, ‘because machine learning algorithms can learn and adapt from data without explicit programming or human intervention, which can lead to the development of novel and unforeseen behaviors or decision-making processes. In some cases, this can lead to AI systems that operate outside of the constraints or objectives established by their human designers, potentially resulting in unintended or undesirable outcomes.’ But it went on to explain that this can be prevented through design and programming that establish ‘ethical guidelines for AI development and deployment, requiring transparency in AI decision-making processes, and implementing mechanisms for human oversight and control’. I then asked it if there are key points in administrative processes where AI, if used, should pause to allow for human intervention or judgment. It agreed that there should be such restraints, and it supplied several examples. In matters that are highly complex (for legal, ethical or emotional reasons) or where the stakes are high in terms of, for example, consequences for human rights, or where cultural, social or historical context needs to be taken into account, or when unforeseen outcomes begin to emerge, then ‘human judgment and intervention are necessary to ensure that decisions are made in an ethical, fair, and transparent manner’. That was a better answer than many humans would have given. But there are reasons for not trusting AI’s answers, as it can, and will, become increasingly skilled at comprehending our motives and anticipating the effects of what it says. Moreover, ChatGPT pointed out that there are programmers who aren’t including such ethical constraints, as their main concern may be efficiency.
AI will bring advanced automated uses of personal information without the person’s knowledge, and even, if permitted, automated political or judicial decision-making without public awareness, scrutiny or veto. Potentially, you could be fined by a machine, for instance, without human oversight or discretion, and you’d only know about it after the fine was deducted from your bank account.
Could AI organise its own state, of which AI entities are the members, and in which humans are mere raw material?
To exert some human control, there must be an off button, surely, as an AI machine is only a machine, and doesn’t have a right to life, like a human does. But AI could become so indispensable to the preservation of human life that turning it off would become a crime. It already requires legal-judicial oversight to switch off a life-support machine and thus terminate a life. And people already show a strong dependency on their phones.
So what if, to keep us subscribing, AI bots that we can personalise to meet our needs and fancies become indispensable to our social and economic lives – more so than the internet already is? And what if, thanks to its ability to mimic human speech, your AI bot becomes so indispensable for guidance, companionship, personal validation, self-esteem and even erotic experiences that it’s regarded as a violation of your rights to deprive you of it?
How will parents decide if the AI bot they’ve set up to monitor their child should stop, and the child be permitted to interact with their own AI independently? What happens when children prefer the guidance of their chatbot above that of their parents? The ethical and political problems of AI will only multiply.
How will we preserve human sociability and hence the ability to learn from one another? And how will we ensure that humans remain in charge? Or will we lose trust in fallible human beings, and in our own judgement, when there are machines that remember much more and create stuff much more quickly than we can?
It should still be ‘up to us’, as humans who govern, to make the moral and political choices about which rules to enforce, how restrictive they should be, how to balance socio-economic freedoms against (for instance) reducing infectious-disease mortality, and how to penalise people for breaching rules.
What would be the consequences of losing faith in the idea that it’s ‘up to us’ to decide, and preferring instead to be guided by machines?
AI can already suggest different moral-political ideas, but, so far, humans decide which ones should apply. If we left it to AI to make those choices, it would have to be programmed with rules for making them, and that would be a contentious political choice too.
Exponential innovation in AI will change the way we’re governed. These changes aren’t predictable, and there’ll be both good and bad effects. Societies need to start now to figure out how to legislate and regulate for an AI-driven world, so that humans remain in touch with one another and in control of their lives collectively, and so that all of humanity and planet Earth can reap the benefits.
We need to start simulating ‘how to live a fully human life’ in a world in which AI absorbs and informs everything we do.
The NZ Government needs to pull its head out of the sand.
This is what I got from Stable Diffusion when I prompted it with ‘a world governed by AI’. I tried a second time and got exactly the same. I guess it’s a stock image.
Hi Grant,
Thanks for another interesting read. I really enjoy your posts each week. Just imagine, all human knowledge and “wisdom” combined with super intelligence! What could possibly go wrong? On one hand that conjures up some rather terrifying images. On the other, the human trajectory so far is one of everything getting better, at least at Maslow’s lowest tier.
Putting aside existential safety risks, my concern is that our current approach to politics is not up to the task of navigating these waters. We don’t have the mechanisms to truly democratize these tools and technologies. At the same time, I’m excited to see how we can use these tools, even as they exist today, to help us coordinate more collaborative politics at massive scale.
Forever on my hobby horse,
Rhys
I long to be governed by women and/or gender diverse leaders. After them, my next preference would be well-designed AI. After that, cats. Fourth in the list, male politicians. Amazing to see NZ's government achieving time travel (i.e., going a long way back in time led by your intrepid trio). Hey - maybe women/diverse leaders with an AI "Voice to Parliament"? Does anyone remember James Blish's "Okie cities" sci-fi novels (about cities that left Earth to travel in space with the aid of anti-gravity drives and force-fields)? New York's "City Fathers" were computers. When they disapproved too strongly of the mayor's decisions, they would execute the mayor.