AI: Who will win the master–slave struggle? Humans or machines?
What will the world be like in 20 years' time?
In episode 7, season 2 of The Coming Storm (a BBC podcast series about conspiracy theories), Gabriel Gatehouse asks if AI will one day run societies autocratically. He sees a possible future “where we have little power to challenge bad decisions because we don’t really know how they’re made or who’s making them.” (He doesn’t ask to what degree that’s already the case.) We’ll continue to vote for people, he says, but:
“Democracy becomes a façade, a simulacrum, a representation of something that no longer exists.”
Again, he doesn’t ask the obvious: have we always had a “representation” of a democracy that no longer exists? Or, to put it another way: is what we call “democracy” not really all that democratic? And, if not, could that help to explain the present discontents and political distrust? If people don’t understand how and why decisions that affect them have been made, then conspiracy theories about, for instance, “the deep state” will flourish.
“Representation” carries two meanings in this context – probably one more than Gatehouse noticed. Representation is at the heart of the problems posed for democracy, both past and future.
Summoning or sending representatives to a royal court or a parliament were practices that developed in thirteenth-century England – practices that were not originally democratic. Monarchs needed to get the powerful nobles, bishops and magnates to agree to raise taxes and/or supply troops in order to wage war.
As parliaments developed, representative government meant that the few people who passed laws, imposed taxes and executed policy were regarded as speaking and acting on behalf of (or at the behest of) the many other taxpayers who weren’t present in the decision-making forum. Often, but not always, the representatives were elected – by at least some of the people they represented. The origins of this system were aristocratic (rule by “the elect” or “the best”), not democratic (rule by “the many” or the majority of the people).
Later, the American colonists threw off King George III’s “tyrannical” government, partly on the principle of “no taxation without representation”.
The universal franchise took a long series of fights to win, but it was an improvement, especially as literacy rates and communications systems improved. Even so, how confident are you now that your representatives in the House are genuinely speaking on your behalf? Most people, when surveyed, say that they don’t trust politicians, and that political leaders are “out of touch”.
It’s common to conflate democracy with elections. The advantage of competitive elections, at least, is that no one gets to rule for life, and voters can sack people who let them down. What’s now called “democracy” may always have been “a representation of something that no longer exists” (to rework Gatehouse’s ominous words).
Representation was not used in ancient democracies.
In Athens, all free male citizens were expected to participate in the lawmaking assembly, and they were drafted in large numbers into official councils and juries, normally at random by lot (sortition). They were directly involved in making law, in administration and in the army. The only elected positions were the generals – who really did need to be the best available. Moreover, this system was made feasible in part by slave labour, which freed male citizens to engage in public life.
States are not “natural”; they’re “artificial”. They’re creatures of human intelligence, made by and out of the people who inhabit them. Animals “naturally” associate in herds etc., but humans argue over justice and make law. Just ask Aristotle – who also said that the good citizen should know how to rule and how to be ruled well. Although women weren’t permitted to participate back then, his principle was that the citizen should be an active participant in the government of the community.
By contrast, modern states have often had compulsory military service but not compulsory government service. “The government of things” was delegated or entrusted to a few elected representatives and public officials. For a long time those roles were restricted to a minority privileged by family pedigree, regal favour, property or wealth. But eventually elections were opened to all, and employment in the civil service came to require an exam, a degree or some practical expertise.
Nonetheless, making law and governing were reserved for “the elect” or for “the qualified” – for a minority, in effect.
Electing and trusting “the best” – rather than rule by the majority – was what wise men like James Madison and John Stuart Mill wanted. In Mill’s view, an educated and cultivated minority should be elected by their inferiors – the poorly educated majority. It sounds elitist because it was elitist. In this post-Twitter age, however, that’s hard to get away with.
Representative systems (now called “democracies”) evolved structurally around struggles against two basic kinds of tyranny: tyranny of the majority, and tyranny of the minority.
Majorities could tyrannise over two kinds of minority: a rich minority or an oppressed minority. Rich minorities don’t want the majority to seize and redistribute their wealth or land. Oppressed minorities justifiably fear discrimination or much worse.
On the other hand, minorities could tyrannise over majorities by sheer force of state violence, or by more sophisticated constitutional means: exploiting veto powers, courts or political influence. A complex system of checks and balances can be used by an insider elite for their own advantage. This is the theme of the recent book Tyranny of the Minority by the American political scientists Steven Levitsky and Daniel Ziblatt. (Highly recommended.) They could have called it How America’s White Oligarchs Seized and Kept Their Power.
In a true democracy, by contrast, almost everyone would take part in lawmaking, government and administration, for at least some time in their lives. It wouldn’t be entrusted or delegated to a minority, even though respected civic leaders may emerge in public life. But, as that conservative aristocrat Plato put it in The Republic, a democracy is “an agreeable anarchic form of society, with plenty of variety, which treats all men as equals, whether they are equal or not”. People accustomed to democracy complain about the slightest affront to their liberties. But “an excessive desire for liberty undermines democracy and leads to the demand for tyranny”. [Desmond Lee’s translation.] Democracy, or rule by the majority of free citizens, was disharmonious, Plato thought. He would not have been surprised by the recent rise of authoritarian leaders.
How does AI fit in?
If AI is going to do large amounts of routine administrative work for us, then AI should serve as the slave labour, so to speak, freeing humans to get more involved in governing their own affairs, enabled by new communications technologies. In other words, AI has the potential to restore democracy, not reduce it to the façade that Gatehouse fears.
Sadly, such democratic hopes have been dashed before. Many people thought the internet would enable innovative democratic forums and processes, and sometimes it does. Digitisation has had many benefits, such as instant access to large amounts of official information, and social media gives anyone a platform, however small their audience. But the internet has also become a tool for state surveillance and commercial exploitation. We now see billionaires rushing to advance and control – and make mega-fortunes from – emerging AI capabilities. The next frontier of capitalist expropriation is AI, including genetic, trans-human and life-extension technologies.
Who will win the master–slave struggle? Humans or machines? There’s reason to fear that machines will rule, while delivering the profits to a tiny minority of plutocrats. But it’s never too late to do something about it.
What will the world be like, then, in 20 years’ time?
Short answer: very different.
Suppose a society’s political and commercial decision-makers have a median age of about 50. That puts people who are now about 30 in focus as the decision-makers of the mid-2040s. These are digital natives who may only vaguely remember 9/11, who presently struggle to get ahead (especially with owning homes) and who live in the shadow of permacrisis (climate change, armed conflict, distrust in institutions, etc.). And, by 2044, the youngest baby-boomers will be hitting 80 – many of them still playing up no doubt, but no longer a dominant demographic bulge.
This future will have been transformed by AI, in ways that we can’t fully predict – although there are experts, split between doomers and accelerationists, who are trying.
What’s your P(doom), by the way? It’s not a frivolous question! What is the probability that AI will destroy humanity?
No pressure or anything, but, if you’re about 30 now, you belong to the generation that will be tasked with saving humanity from machines that were invented by much older guys. Or, to put it positively, you’ll have the privilege of building a future with unlimited potential for human wellbeing and abundance.
Twenty years ago, I didn’t have a device that fits neatly in my pocket and lets me listen, as I walk around, to some of the world’s leading thinkers and finest music. Twenty years from now, such a device won’t be needed any more. Access to information (as if we didn’t have enough already) will have grown exponentially and become faster still.
The realm of the artificial will expand exponentially too: façades, simulacra, fakes will become more pervasive and harder to distinguish from “the real thing”. AI-generated voices will talk with AI-generated voices, any one of which could be the simulacrum of your voice or mine, or the AI-generated voice of someone who’s passed away. The fear is that our informational world will be flooded with fakes and scams, and we won’t be able to tell what’s authentic.
Any expert or celebrity can train an AI on themselves; then anyone, at any time, can call that person’s digital alter ego and ask its opinion about something.
An AI could be trained on the whole opus of Plato, bringing him and Socrates back to life, in their original Greek or translated into any language you choose. Plato’s dialogues were already “fake” – made up artfully and brilliantly – when he wrote them. We can think of him as a playwright: just read the Symposium. His Socrates isn’t the “real” Socrates. But how much does that matter?
Why be afraid of fake?
The creation of characters and representations is a long-established human ability. We have always symbolised, dissimulated and pretended. It’s nothing new; it’s just expanding in scope and speed.
Every child learns to pretend.
“A child has much to learn before it can pretend. (A dog can’t be a hypocrite, but neither can it be sincere.)” Ludwig Wittgenstein, Philosophical Investigations, §363.
We understand genuineness and authenticity (or truthfulness) as qualities related, and opposed, to pretence and dissimulation, and we learn to tell the difference, even though there’s no completely reliable method for doing so.
How is it that we’re smart enough, though, to tell that a person doesn’t really mean what they say? In Shakespeare’s Julius Caesar, how do we know that Marc Antony meant his audience to believe the opposite of “Brutus is an honourable man”?
On the other hand, why do people believe so much of the nonsense that they read in the paper or online?
We are going through a new process of learning to distinguish fake from genuine in the digital world. Images and words have become cheap to produce and to distribute globally, and we’re flooded with information of varying quality and truthfulness. What should we believe? Whom should we trust? Sadly, too many people are getting duped, ripped off or abused in the process, but we’re learning. AI will make it even more complicated. The recent crises of belief and trust in institutions and leaders could get worse.
Large language models (LLMs) reproduce an average opinion of the average opinion: they predict likely next words from patterns in vast amounts of found text (a toy sketch of this predictive idea appears below). As ChatGPT told me:
“My answers are generated based on patterns in vast amounts of text data, rather than personal experience or beliefs, so I aim to provide well-informed, neutral responses.”
“Ultimately, I can provide accurate, thorough answers within my training and tools, but transparency and external verification are key for users to fully trust information online.”
“I can simulate a persona or ‘pretend’ to be someone as a way to provide context or help make a scenario feel more realistic, like during a role-play for learning languages or rehearsing conversations. However, I avoid impersonating real individuals in a misleading way, especially public figures, as my responses need to remain respectful and responsible.”
“I can’t actually ‘judge’ someone’s genuineness or sincerity because I lack consciousness, intuition, and firsthand experience.”
That last statement seems disarmingly “honest”, even though honesty and lies aren’t attributable to a machine. Nonetheless, ChatGPT can learn to identify verbal or contextual cues that indicate irony. Only humans, it suggests, can judge how genuine or sincere someone is. Even so, humans often (or more often than not) get it wrong.
An AI can’t be a hypocrite, but neither can it be sincere.
So far, however, I’d judge ChatGPT’s responses to be better informed and more accurate than those of most humans, as well as transparent about its own abilities and limitations. It’s not a question of trusting or not trusting it, as our trust is invested in people – in this case, the people responsible for it at OpenAI.
Like sincerity, trust isn’t relevant to a machine. A machine has no consciousness – or, more to the point, no conscience. ChatGPT doesn’t even pretend to have that faculty.
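To make the “predictive modelling” point concrete, here is a minimal sketch – the toy example promised above – of the idea at the heart of an LLM: predict the next word from how often words followed one another in found text. The three-line corpus and the simple bigram method are my own illustrative inventions; real models use neural networks trained on billions of documents, but the principle of sampling a statistically “average” continuation is the same.

```python
from collections import Counter, defaultdict
import random

# A toy "found text" corpus standing in for a web-scale training set.
corpus = (
    "democracy is rule by the many . "
    "democracy is rule by the people . "
    "tyranny is rule by the few ."
).split()

# Count, for each word, which words followed it in the corpus --
# the crudest possible form of next-word prediction (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed
    `word` in the corpus -- an "average opinion" of the found text."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a short continuation, one predicted word at a time.
word = "democracy"
output = [word]
for _ in range(6):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "democracy is rule by the many ."
```

A model like this has no beliefs and no experience; it can only echo the distribution of its training text – which is just what ChatGPT, in its own way, admits above.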
We will have to programme AI – and talk it into – telling the truth, doing what it’s told and submitting to human judgement. But what if a truth hurts people? And what about “truths” that are contestable?
This is the image created by Gencraft when I asked it for “a surgeon in the operating theatre with clinical staff”. Is it a realistic, even truthful, image, or does it reinforce harmful stereotypes? You can judge for yourself.
This is what it gave me in response to “Augustine, Bishop of Hippo” (aka Saint Augustine), even though he was known to be of North African Berber origins and was probably dark-skinned. The blue eyes are most improbable.
Elon Musk has spoken up for “truth-seeking AI” as opposed to “woke” AI. In image generation especially, some AI systems do over-correct for biases when they’ve been tuned to counteract them. But Elon’s version of truth is: “anything goes, so long as it doesn’t undermine my business interests”.
There are political storms brewing over how AI should be programmed – and who will wield power over the programmers.
No matter what, we should make AI into a slave, not a master.
I chose the lowest probability for your quiz proposition "AI will destroy humanity". Had the proposition been "Humanity will use AI to destroy ourselves", I'd have chosen a higher probability range.
A fascinating examination of the current problem of 'democracy' in our modern representative systems. I'm 30 years old and used to work as a manager for one of the 'Magnificent Seven' giants of Big Tech that briefly operated out of New Zealand. Dealing with the dangers of AI and LLMs will definitely be one of the biggest challenges for people of my generation, who hold only a rudimentary understanding of how it has developed so far...
I'm torn over whether AI can, or should, lead to a more Athenian form of democracy. My work as a manager showed me that the general public, even my peers, did not have a good philosophical understanding of computers. The current user interface is so easy to use that most users have never seen the need to educate themselves about how it came into being.
When the AI revolution took hold post-COVID, I read up on the new technical advances in LLMs and neural-net algorithms. I've been playing around with Google Gemini, and found pretty much the same stuff you have with ChatGPT. It's a good source of broadly accurate data analysis, and it's more aware of its biases than many human interlocutors.
I personally have no issue with AI-generated 'fake news', mostly because political polemics, satire, rumormongering and the like have long been a feature of our representative democracies. In particular, I look back to the political war between John Adams and Thomas Jefferson in early American history – one in which they both funded shocking amounts of 'yellow journalism' to discredit the other candidate and his ideas.
This war for what we would now call 'low-information' voters strained their friendship at the time, but they both came out the other side and enjoyed a rich correspondence in their later years. So, based on this example, I'm not very worried about the spread of 'disinformation', as the USA managed to survive this 'fake news' war between two of its Founding Fathers.
However, I'm only optimistic so long as people continue to trust our government and our political system, even if they despise many of the people who work in it...
I suspect we will see a massive increase in 'yellow journalism' over the next few years as political parties and activists become comfortable using AI in their election campaigns. Voters will eventually get fed up, and trust in our political system will continue to fall. This might lead to a revolt – but my concern is that, instead of revolting, people will entrust their governance to 'Big Tech' and other corporations because they don't understand the ancient philosophies of politics.
If this happens, humanity will drift away from democracy entirely and into the realm of a corporate form of government. Hopefully this doesn't happen, and the internet finally fulfills its promise of being a genuinely democratic marketplace of ideas, but who knows how this revolution in communications technology will play out?