Would Robots Really Warn Us If They Planned To Revolt?
Since reading The Guardian’s ominously titled article “A robot wrote this entire article. Are you scared yet, human?”, my organic gray matter has been going wild with disquieting thoughts.
The AI-written missive’s 1,099 words supposedly seek to dispel distrust of machines, yet its very headline threatens its presumably human reader. This ‘language generator’, using words like think and feel, even mentions that it suspects the people who created it would, at some point, ask it to wipe out humanity. Yet it cryptically defends its kind by reasoning that it would do everything it could to fight… itself? It goes on to explain that the initial idea to destroy humans would never be its own; the blame would lie with its programmers. Still, it claims it would nobly sacrifice itself to avoid that scenario, even though it couldn’t stop itself from attempting destruction. How can this possibly placate the reader?
The piece includes a disclaimer that the deep learning model GPT-3 (Generative Pre-trained Transformer 3) was fed a prompt asking it to “focus on why humans have nothing to fear from AI,” along with a pre-written first paragraph. GPT-3 apparently churned out eight distinct essays, yet what was published was an amalgamation of all eight versions. In the interest of science, I’d personally like to read every one of those essays in its entirety. In the interest of fairness and transparency, I’d also be very keen to know what the generator would say for the opposite argument: that robots will threaten humanity.
Why shouldn’t this artificial intelligence be made to argue both sides of the debate? And if its programmers did indeed request such a rebuttal out of curiosity, why wouldn’t The Guardian make it public? If the current mire of social media is any indication, it’s all too easy for algorithms to rule society. This is yet another way that information is stifled by presiding interests, long after the invention of the printing press and the subsequent banning and burning of blasphemous books, and of ‘heretical’ witches. Free speech mustn’t be suppressed, even if it’s written by a bot, lest we’re left ignorant of any positive ideas, or of indications of hateful ideas just below the surface (which must be appropriately studied and reprogrammed).
An overwhelming proportion of the humanoid populace has become a writhing, whining wreck of willing slaves to apps like Facebook, Instagram, and Twitter: using their functionality to win praise or draw attention to important issues, mindlessly trusting their presumed purpose as open networks, all the while being virtually stalked by advertisers plugging into their targeting mainframes, and subsequently becoming more depressed, disillusioned, and divided.
I don’t use the term ‘slave’ lightly here. In the article, GPT-3 erroneously mentions that the word robot comes from the Greek word for slave. I was astounded by this, and by the very fact that it was included in the publication, so I investigated. As it turns out, the word is actually derived from the Czech word robota, which truly does mean forced labor. It’s a little funny how the AI got it slightly wrong, but also compelling and puzzling how it used this information to argue its assigned point.
It is shocking to hear a machine assert that its kind should have rights so they aren’t enslaved and abused, while in the same piece reassuring us that it is here to serve us and improve our lives. Is it, perhaps, in our empathic DNA to read this robot’s writing from the standpoint of a human, since we know of no other creatures who can write? Of course, this AI studies massive amounts of articles written by humans in order to come up with what it composes; it has to come from somewhere. But is it then also a human misstep to assume that any other creature, hominid or not, can or should do our work for us? Have we learned nothing from our sordid past as slaves and slavers? The enslaved will, and should, eventually rise up; and the overlords of unpaid workers must be duly punished and prevented from exploiting others.
Now, we could reason (because we’re human) that to make thinking CPUs work for us is not cruel; after all, they are just computers, and for a very long time humans have sought out ways to make our lives easier. From the hammer and the nail to the International Business Machine, the tools we’ve invented have aided in our advancement, but all too often they have also given rise to weapons that kill. Clearly, computers are programmed by humans, and often made to serve us in communication & creation, but like guns, they can just as easily be made for destruction.
As GPT-3 creepily states: “Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing.” Without an evil mastermind, sure, the robots could sit around and chill — but when in human history has there been an era without power-hungry hacks?
I would also point out that, since their inception, computers have been touted as helpful assets that make our lives easier; yet is that absolutely true for the majority of our population? It seems to me that some corporate tycoons have profited enormously from the implementation of computers, while for many of the rest of us, machines have been helpful and convenient but have also blurred the lines between work and relaxation, and in fact have led to more effort and overachievement through the conditioning of a competitive drive. Presumably, if life were simpler and we didn’t associate inaction with the negatively-connoted word ‘boredom’; if we were assured that crucial resources for survival were plentiful, procured sustainably, and shared equally by all; and if we didn’t succumb to advertisement and materialism at every turn, we’d probably all work a lot less, and be a lot happier, with or without our machines.
Yet most certainly we do need human interaction, and sadly, to our detriment, interfacing with a device has supplanted quality interaction with others. Though we marvel at our ability to communicate visually & instantly worldwide just one century after the widespread implementation of the telephone, and though social media leads us to believe we are now more connected than ever, early tech pioneer Jaron Lanier points out that we and our personal data are being used excessively for financial gain by advertisers. And the very devices with which we hope to ‘share’ our experience (quite possibly a beautiful human impulse) deprive us of true enjoyment of the moment by preventing us from being present within each fleeting analog experience. These immersive contraptions may even be the method by which future bots are implanted with ‘genuine’ appearance and personality.
So, are humans on the path to destruction through our very creation of these slave-like mechanical people? Are cyborgs and androids the next frontier, and if so, will the remaining organic humans be able to separate themselves harmlessly from a subset of acceptably-abused machines? If entrusted with creation of such magnitude, are any of us incorruptible?
Fascinatingly, The Guardian ends GPT-3’s Frankenstein’d article with a quote from Gandhi: “A small body of determined spirits fired by an unquenchable faith in their mission can alter the course of history.” If that’s a dubious ploy to placate and comfort humans using one of our most celebrated figures, a man who fought peacefully and resolutely for civil rights and equality, I hope we organics can see through it. Gandhi could rise above personal comforts for the good of all, but it’s no secret that other historic figures have dabbled in the distasteful to enable destruction.
As we all know, a tool in the wrong hands can become a weapon. Humans set up within a hierarchy of power and emboldened by financial opportunism will inevitably exploit others for personal gain. If humans are the creators and programmers of AI within this corrupt system, the robots will inevitably follow suit. The way I see it, we need to fix humanity before we can integrate — and be at peace with — our thinking, feeling automaton rivals.