What Will the Machines Believe?

Artificial Intelligence has always been one of my favorite science fiction themes. The idea of living among machines that can interact with us on a social level is both inspiring and terrifying. Robots fighting side by side with humans in space, or mankind’s struggle against an artificial intelligence gone rogue – it’s all the stuff of epic imagination.

But as good as AI stories are for spectacle, their true value lies in the questions they make us ask in the end. Good stories in which AI is the central component rarely end without leaving us asking deep and profound questions, whether of ourselves or of those we shared the story with.

What is consciousness, and how will we perceive it in something that is not organic? We anthropomorphize organic creatures rather easily, whether they’re cartoon pigs that sing and dance or bipedal aliens sharing the bridge of the Enterprise. But machines are much different, especially when the machine-as-conscious character isn’t just a cliché or a side piece in a larger story.

Your perception of Commander Data in Star Trek is probably different from that of Roy Batty in Blade Runner (which is, in my opinion, the perfect AI film). One is a clever device, albeit a little clichéd, with which the crew of the Enterprise can have moments analyzing the human condition. The other is a direct challenge to the question of whether or not machines can truly feel.

I look at the classic “android” character like Data and think “there’s a clever machine.” He can be turned on and off, suffers malfunctions, and can be reprogrammed to experience things in different ways. He is very much a data collector, parsing his observations and offering a typical “robot” quip about them. Granted, as the show went on, the character was given more life, steered towards being perceived as “alive” rather than programmed, but I don’t feel that was the original intent.

The replicants in Blade Runner, however, were complex devices that seemed to be struggling with their own identities in ways that we only expect of conscious beings. When Rutger Hauer speaks to Harrison Ford in that perfectly brilliant, climactic scene between Batty and Deckard, we get a machine that is aware of itself in a completely different way.

“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched c-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”

Blade Runner’s theme was consciousness. The very book on which it was based asks in the title, Do Androids Dream of Electric Sheep? It wasn’t a story about machines or androids or even AI. To me it was a question of consciousness. What difference does it make if you’re man or machine when you can dream, believe, and feel sadness that your memories will be lost, like tears in rain, when you die?

Blade Runner: The Final Cut

That is the theme of AI that makes everyone nervous. Will a sufficiently advanced AI become self-aware, and if it does, will it be conscious in a way that allows for morality, for empathy, for a code of ethics? What the machines will think is one thing, but will they have anything that they believe? Will they believe the same things we do? Will they believe dramatically different things? Which is scarier?

I saw the film Ex Machina recently. I loved it so much I watched it again the following day, and I rarely do that with movies. Frustratingly, it reminded me far too much of a story I wanted to write one day, of notes I’d scribbled down a long time ago. Envy aside, the film is centered on the Turing Test (something that is also a pivotal idea in Blade Runner) and one man’s part in it. It’s very well done, and the actress who plays the AI should be commended. She expressed herself in a way that convinced us that she was A) a machine and B) a machine who passed the Turing Test. Granted, special effects had a large hand in the former, but the latter was a great performance in facial expression and voice control.

The film is an artistic study of consciousness and deeply examines what elements of humanity a machine might need in order to convince us that it is not “just a machine.” I wonder how many who see the film realize that they have been left with questions they may genuinely have to answer during their lifetimes.

As much as I loved it, I would’ve ended the film differently. I will try not to spoil it, but the film is very much a thriller designed to make the audience uncomfortable. In crude terms it’s just another story of robots run amok, also known as “the same old story.” In some ways the ending says more about the writer’s opinion of humanity than his opinion of AI consciousness.

Machine sympathy was something attempted in Spielberg’s A.I. Here we have a movie that did everything it could to establish that AI machines might be conscious and desperate for things like love and kindness. Like the trouble I had with Star Trek, however, I never quite got the feeling that the machines were truly conscious, capable of feeling anything other than what they were originally programmed for. The robot boy David wants to be “real” so his “mother” will love him. But David was programmed to love his mother and to desire love in return. He experiences a whole range of humanity in his quest through the film, but one constant remains: he is desperate to reclaim the love of his mother. He doesn’t betray his initial programming.

Had David been a cleaning robot with a bad attitude who ended up wanting love in the end, that would have been a better example of a conscious machine. This lack of development, along with campy and unrealistic world-building, left me feeling like the audience missed out on the deeper question of AI.

There’s a show developed in the UK called Humans that may come close to directing an audience towards the more interesting questions. The show features multiple conscious machines with distinct personalities. One is vengeful and bitter, another is loyal, another is obsessed with protecting those that it cares about. And each one seems to exhibit not only the capacity for compassion, but a struggle to embrace compassion rather than give in to fear and anger.

The biggest flaw in the show is that we are told these machines are conscious and self-aware. Even though the human characters struggle to believe it, the show insists that the machines are, in fact, truly conscious AIs. Part of a great AI story is the audience’s own quest to make that decision for itself, something that Blade Runner and Ex Machina did brilliantly. An AI story is great if it is its own Turing Test.

All of this leads me to my own questions about AI. Will a conscious machine believe it needs to do good things? Will it believe there are things greater than itself? Will it struggle to do “right,” not because it’s told what the rules are, but because of an innate sense of good? Will its morality depend on how it’s nurtured, or will the capacity for kindness be in its nature? Could it be that rather than wipe us out as a species, the machines will look at us with fondness?

My perception of consciousness is wholly my own, just as yours is wholly your own. To avoid a life of solipsism, we all have to take each other’s word for it, as a species, that human consciousness is a real thing and that it generally has some universal traits. I know I’m not the only one who finds it hard to understand a conscious being that does not consider anything other than itself.

Will there be a machine that, after manipulating me into serving its own self-interests, looks back, realizes it’s been a bit of a jerk, apologizes, and makes things right? How many times will we need to see that scenario not happen before we realize we’re doomed? Will we even be worth the effort required to exterminate?

What about love and passion? It’s not difficult to accept that a human might fall in love with a conscious machine, but present the opposite idea and people feel the need to review the results of the latest Turing Test one more time. A conscious being, however, should be expected to fall in love. Worse, it can be expected to become infatuated, or even obsessed. What if your stalker is a machine? Perhaps another machine will protect you.

How will a machine find pleasure? Will it feel pain? What about emotional pain? Will its entire conscious state depend on the hardware it decides to house itself in? If an AI is given human form and programmed with a gender, will she embrace those physical and mental qualities? What will she do once she realizes that her consciousness is software, transferable? How will her ethics change then?

Conversely, how much of our own ethical approach depends on AI hardware? It’s a lot easier to smash a giant metal robot than a machine walking in human form. How much of our fear of conscious AI is related to our first reaction to it? We know it is a human tendency to do terrible things to that which we do not understand.

We’re at a point in time where the AI characters in books, television, and film need to grow into something far more substantial. I don’t think people should be continuously told that AI is something to fear, that we’re doomed as soon as a robot looks at us and says, “I don’t feel like you appreciate me anymore.”

Like it or not, we’re heading into that very world. I strongly believe it’s not a matter of if, but when, and when might be within my lifetime. We can be afraid or we can be ready.