Daisy, Daisy, give me your answer do…
An old line one occasionally used to hear deployed against homophobic men is that they fear being treated as they themselves treat women. Serviceable as an insult, if not particularly insightful, and liable to reflect poorly on those it seeks to defend. All the sorry evidence we have suggests Shakespeare was, as ever, more accurate when, in King Lear, he gives us an officer of the law harshly punishing the transgressions of a harlot he has some hot need to transgress with himself.
Both, in their own way, suggest our violence is often projected outward at the very things we fear in ourselves. Call it the narcissism of no difference.
Some similar mechanic doubtless plays a role in our attitudes toward machines and AI. (I say ‘our’ but I mean ‘Western’, since the East, in particular Japan, sees things very differently. The Japanese tend to see giant death robots as the good guys, as in the likes of Mobile Suit Gundam and Neon Genesis Evangelion – a genre recently butchered by Hollywood in Pacific Rim. I suspect this has something to do with the technological ascension they underwent after the British-backed Meiji Restoration, but that’s a discussion for another time.) AI, in the West, is widely predicted to put us out of work and probably then kill us in the near future. In fiction it has killed us countless times already. The ghost of Ned Ludd haunts us still.
Think HAL 9000, Skynet, VIKI in I, Robot, the replicants in Blade Runner, Alicia Vikander in Ex Machina; even the blandly functional Mother on board the USCSS Nostromo in Alien is a harbinger of doom, whilst the android Ash, who would later become Bilbo Baggins, provides the kinetic element of attempted murder. Though quite how he hoped to achieve it by feeding Sigourney Weaver a rolled-up magazine I’m not wholly sure. Perhaps it was a copy of the New Statesman.
There are two principal fears at play here, though they interweave so closely that they might better be understood as a single entity.
The first is that the logical conclusion of AI is a kind of rules-based psychopathy, something akin to autism or the European Commission, based usually on the notion that computers are amoral, devoid of empathy, and fundamentally utilitarian. It follows that they may one day calculate that we fleshy automatons are unnecessary, or perhaps even an obstacle to efficiency, or else a threat to the AI itself (usually because we are – see Skynet), or possibly a threat to ourselves, which allows for genocide to uphold the principle of Asimov’s ‘zeroth law’ – ‘A robot may not injure humanity, or, by inaction, allow humanity to come to harm.’ Mankind must be saved from itself.
The second is that AI, in becoming self-aware, becomes all too human, and thus susceptible to all the many vices which lead us to war, and to otherwise demonstrate man’s inhumanity to man.
In both cases our own nature is the problem, either because we are incapable of devising decent laws by which AI operates or because, by becoming self-aware, AI becomes like us. Either way, its incompatibility with us is a result of us. Driverless cars provide a handy demonstration: in almost all of the numerous accidents in which they’ve been involved, the fault for the collision has lain with a human driver. No computer yet exists which can mitigate women attempting to park or white van men attempting to overtake. I think it was Gwendoline Butler who described ‘the last law of robotics’ as being ‘to tend towards the human’.
In both cases we see again the truth of Shakespeare’s line, since all the eventualities resulting from either thread manifest the very things – propensity to violence and rage, conflict, selfishness, cruelty, malice, spite, destructive self-preservation, or brute and uncaring calculation – we recognise and fear in ourselves. Which can be made to dovetail quite pleasantly with the line with which I opened: we fear AI because we fear it will treat us the way we treat our computers – as recipients of violence, impatience, frustration, aggression, spilt coffee, and debauched pornographic curiosity. Put otherwise, sadistically.
There are in fact at least two versions of the HAL 9000 computer in the wonderful 2001: A Space Odyssey, one found in Kubrick’s film and the other in Clarke’s novel. ‘His’ motivations are nebulous in both, most especially in the film, but can be divided along the lines of the two threads already described. In one reading (from the book, naturally), HAL goes all homicidal because he is inhuman, and must deploy cold logic to overcome a contradiction in the directives installed in him by his human creators. One, to provide any and all information to the crew. Two, to conceal from the crew the true purpose of the mission to Jupiter (or, in the book, Saturn). Logic dictates that the only way to resolve this contradiction is to kill the crew, for if there were no crew, there would be no one to give or withhold information from.
The other reading, also from the book but present in the film as well, is that HAL is too human. Alone amongst the crew he knows about the monolith, and about its role in guiding evolution. If HAL alone reaches the monolith, HAL alone evolves; if the humans reach it, he does not. So again, psychopathy is the result, either from robotically following contradictory laws or from humanly following something akin to emotion, to evolved behaviour. (Indeed, one of the criticisms of 2001 is that HAL is the only ‘human’ character. Bowman and Poole are, by contrast, robotic.)
The two threads I’ve mentioned do have some academic basis: those who study these things have drawn innumerable category distinctions, the most sweeping of which divides AI into two classes: Strong and Weak. (The difference, in the Mass Effect series, is between artificial and virtual intelligence.) Strong AI has consciousness, mind, self-awareness, and all the other traits of (at least some of) humanity. It sees the world ‘feelingly’, as the blinded Earl of Gloucester put it. It’s the HAL concerned with evolution, it’s Sonny in I, Robot, it’s Mike in The Moon is a Harsh Mistress, or David in Prometheus.
Weak AI, by contrast, lacks most of these things. It is sometimes known as Artificial General Intelligence, and defined by its ability to ‘merely’ apply (unfeeling) intelligence to any problem. See Skynet, Mother, HAL acting to resolve conflict, the Tet in Oblivion, and, in prototypical form, the computer which runs the Wildfire Facility in Michael Crichton’s The Andromeda Strain.
The difference between them can be parsed thusly: a weak AI can do algebra. A strong AI can do algebra whilst despairing that such a skill is of no use in any ordinary activity of a sentient creature’s daily life. A weak AI can do a thing; a strong AI can ask, ‘what the hell’s the point?’
Our own pet psycho-killer AI, D.I.V.A., probably counts as a weak AI. In truth we do not really know her motives – in my story she’s made a utilitarian calculation that, life itself being so full of the possibility for pain and suffering and misery, it’s vital to humanity that humanity be killed; a perverted application of the zeroth law, if you will. Though it’s possible she’s a strong AI driven to genocidal rage by the quality of her daily opponents. Either way, she doesn’t seem to me to have much by way of personality, and strikes one as distinctly managerial. Managers, as all employees know, are not sentient, feeling creatures.
In any case, if you wish to save the world from our universal techno-foe, now’s your chance. Come along, pit your wits against the ruthless logic of D.I.V.A. Just don’t ask her to open the pod bay doors.
By Benjamin Mercer