At the Picture Show: Extended Cut
by Chris Bellamy
Imitation of self
As depictions of A.I. get more and more familiar, they may be getting more personal, too
For about as long as there has been science fiction at the movies, the presence of artificial life, in
one form or another, has practically been a constant. From rudimentary depictions of robotics
(humanoid and otherwise) to more advanced interpretations of the future of consciousness, the
concept has been a resilient one, and a defining staple of genre classics for going on a century.
Modern sci-fi on the big and small screen is no exception. However, we may be reaching a
critical turning point in our representations of artificial intelligence. Just as there was a
pronounced shift in the tone and style of space-exploration films once space flight became a
burgeoning reality in the 1960s, so, too, are we seeing a change in the way we approach A.I. as it
comes closer to being a prevalent aspect of 21st-century life. (Or I should say, more of a
prevalent aspect, since primitive forms of it are already so commonplace.)
The subject has gotten a fair amount of attention in the press in recent years, as the likes of
Stephen Hawking, Elon Musk and Bill Gates have all warned - or at least publicly speculated -
about the theoretical dangers of A.I., which is developing rapidly as we speak. "Computers will
overtake humans with A.I. at some point within the next 100 years," Hawking said earlier this year. "When that happens, we need to make
sure the computers have goals aligned with ours."
Indeed, the fear associated with the concept - that said intelligence will quickly surpass our own
(in ways it hasn't already), and, hypothetically, render our own intelligence obsolete (or at the
very least, outdated) - has been one of the driving emotional and philosophical factors in artists'
and writers' depictions of it throughout the years. Just this summer we got yet another entry in
the Terminator series, which is basically my generation's go-to reference for advanced
intelligence run amok. That franchise is built on the concept of machines becoming genuinely
self-aware, an idea that still sparks pretty wide-ranging debate among scientists, futurists and
others. Certainly no one is arguing that Siri has designs on taking over the world. But still, the
fear of our own creation advancing beyond our control remains a real one, whether it's someone
like Hawking arguing for the importance of wisdom and regulation in our continuing
development of the technology, or a big-budget movie with an A.I. menace bent on total control.
Just a few months ago, we saw one of pop culture's most popular figures, Robert Downey Jr.'s
Tony Stark, very nearly bring about humanity's destruction, simply by creating an artificially
intelligent security system that, it turns out, he had no means to control. Avengers: Age of Ultron
is hardly the most thoughtful of sci-fi films - in many ways it's akin to last year's Johnny Depp
catastrophe Transcendence (which was undone, ironically, by a crushing lack of self-awareness
about its own silliness) - but I was intrigued that, in its own way, it took on more existentially
urgent subject matter than its predecessor.
But I'm not sure films like that, with their apocalyptic scope and paranoia, are really the trend
anymore. Once again, there's a shift in tone now that the reality is at least somewhat more clear.
Self-aware, malevolent Terminators and James Spaders may still make for good spectacle, but
they don't resonate the same way now that a certain 21st-century clarity regarding A.I. has
emerged. When HAL 9000 and the T-800 were envisioned and put on screen, none of us
really had direct experience with artificial intelligence, at least not in any meaningful way. Now
that it's crept into our daily lives, we see it differently.
Science fiction has always been about so much more than just technology, or exploration. It has
always, like art in general, been a prism through which to understand human nature. One of the great
misunderstandings I often hear is that the genre simply tries to imagine or anticipate the future,
when in fact it's just as often (if not more often) a way to understand the present. For this reason
and others, I now wonder if, as old-hat as A.I. is as a concept, it will become sci-fi cinema's
preeminent idea moving forward. If not that, might it at least become the most resonant one? The
underlying thing about A.I. is that it's not about technology at all - it's about us. The artificially
intelligent creations we see are mere reflections of humanity - at both its worst (as in Caradog
James' The Machine, where it's created to serve as a weapon) and its most ideal
(which, existentially speaking, may come across as even scarier than the former).
Take a couple of recent examples, which are notable not just for their unusual sensitivity toward
A.I. but for their conspicuously modern feel, as if we could be watching ourselves in the very near
future. The first is Spike Jonze's Her, which - while ostensibly a film about a man who falls in
love with his operating system - was actually a meditation on human connection and human
relationships. Artificial intelligence was the vehicle, not the end game. The film's aesthetic was
deliberately reminiscent of modern trends of computing and design, suggesting the idea that this
impossible, emotional, sexual being - Scarlett Johansson's disembodied voice, named Samantha
- could have been conjured up by the likes of Steve Jobs.
A machine that thinks and feels like a human is an old idea that, frankly, usually isn't all that
interesting. But there were key distinctions in Her's case. The fact that she ultimately was just a
voice put the focus squarely on Joaquin Phoenix's Theodore, so that the (subjective) emotional
connections were experienced through the volatility and frailty of a human being. Their
relationship, as real as it might have been for her (whatever "she" was), was still largely a
projection. For the purposes of the film, turning her into a physical being - especially if
Johansson herself had been that physical being - would have been cheating (a bit more on that in
a second), and Jonze knew it. It would have made her almost explicitly human, even if we knew
her to be artificial. It would have taken the focus away from the human character and onto the
more traditional ambiguities about the humanity of androids, cyborgs and all the other forms of
physical A.I. that we've been seeing in sci-fi for decades.
Perhaps more tellingly, and as if underscoring the primacy of the human side over the
technological one, in the end (obligatory spoiler warning), the A.I. itself, "Samantha," evolved so
rapidly that it disappeared from Theodore's life and from human perception/communication as a
whole.
The second example shares similarities with Her, and kind of expands on one of its most
memorable scenes. It's Be Right Back, a season-two episode of the BBC series Black Mirror,
starring Hayley Atwell as Martha, grieving for her husband, Ash (Domhnall Gleeson), recently
killed in a car accident. Thanks to the (unprompted) assistance of a friend, she recreates him
using all manner of personal and private correspondence and records. At first, it's
simply an online conversation. Then, he's given a voice - his voice, thanks to all the audio
recordings she uploaded. And finally, he's given a body - one virtually identical to Ash's actual
body. And she tries to recreate their life together. This touches on similar territory to the sex-surrogate scene from Her, in which Theodore hires a woman to be Samantha's "body," so they
can finally be together physically. In Her, the experiment is a disaster. In Be Right Back, it
works - at least up to a point. First she has to "activate" him - taking the physical vessel,
putting it into the tub and letting it gestate ("Don't forget the electrolytes," he tells her) before
it grows into a full-fledged Ash facsimile. She tries him out - unlike Theodore and Samantha,
the sex seems to work in their case, at least for a period of time - but, naturally, this version of
Ash cannot ever replicate all of the physical and emotional nuances that Martha is used to.
Here, once again, the synthetic intelligence is simply a vehicle used by the filmmakers (writer
Charlie Brooker and director Owen Harris) to explore human truths and realities. The A.I. itself
is a tool, not the subject.
Like anything else that plays a role in our present or future, artificial intelligence is worth
exploring in all its forms - as a technology, as an existential threat, as a reflection of ourselves -
but it's that last one that seems the most potent, and the most likely to dominate the cinematic
conversation in years to come. It's one of the reasons I'm a bit ambivalent about AMC's
otherwise well-produced sci-fi series, Humans. As well-acted and filmed as it might be, it (at
least through the first four episodes) is swimming in rather basic territory - robots who think and
feel like humans, the ethics of creating A.I., whether synthetics have rights, etc. It hasn't yet
touched on anything particularly robust. Meanwhile, Alex Garland's Ex Machina seems to split
the difference nicely. It, like many others before it, expresses (with a rather bemused satisfaction)
fears about the self-awareness of artificial beings, and in terms of their physical and emotional
depiction, the movie delivers the goods. But the setup of its narrative is quite explicitly about the
way we project our own human baggage onto the artificial things we create, rather than about
whether or not machines have feelings. Ultimately the film's thoughts and attitudes are about
humanity itself, expressing an emotional ambiguity about our godlike role in creating the future
of civilization.