Remember the old joke about the drunk who looks for his keys under a streetlight instead of in the darkness where he lost them, because the light's better there? Scientists tell this joke on themselves, but they still tend to forget it in the crunch. The tasks a computer can do have become more complex, but they still don't add up to a human being. Maybe someday they will, but not yet. In her novel The Last of the Wine, Mary Renault has Socrates say that a lover who tries to win his beloved by praising his beauty is like a hunter who brags about his kill before he's actually made it. Much of the talk about Artificial Intelligence (like a lot of talk in the sciences generally) is like that: bragging about non-existent accomplishments.
On the question of whether computers can think, or have consciousness, then: Noam Chomsky has said that whether computers can think is a question of definition and changing meanings. By analogy, do airplanes "really" fly? Not in the same way birds or butterflies do, but over the past century the meaning of "fly" has shifted to include the way that airplanes move through the air. Probably the meaning of "think" will also change (if it hasn't already) to include what computers do. But that will beg the question. We don't in fact know what thinking is, what intelligence is, or what consciousness is, and until we do we won't be able to say whether computers are really thinking or are really conscious.
Confusion comes partly from the term itself: Artificial Intelligence. "Artificial" means "made or produced by human beings rather than occurring naturally, typically as a copy of something natural." It doesn't necessarily mean that the product is identical in every way to a natural one: think of artificial legs, artificial hands, artificial teeth, artificial sweeteners. Implicit in the notion of something made is that it's not the last word: new technology and creative design may produce better, more lifelike prosthetics for example. But consider again Ex Machina's Ava, which was made, Nathan informs Caleb, with "pleasure" receptors in "her" groin area. This may titillate the male members of the audience, though patriarchal sexuality is not about women's pleasure anyway. (It's hard for me to imagine Nathan caring much about a gynoid robot's pleasure; easier to imagine him throwing a tantrum when it complained that he hadn't given it an orgasm.) But think of kissing the mouth of a robot. A human mouth is an extremely complex organ: tongue, teeth, lips, mucous membranes. Nathan would have had to design and build artificial salivary glands for Ava, for example. Those pleasure centers lower down would have to be equipped with a source of lubrication, as well as arousal. Does Ava have an artificial clitoris? I doubt it, since its guts are visible for most of the film, and they're mechanical and electronic. I don't think that technology is going to make great strides in that area in the foreseeable future, which is Ex Machina's setting.
I've long thought that Simulated Intelligence would be a better name for this project. It might take away some of the glamor, but that would be a good thing. Computers can simulate many processes, from dairy farms to Civilization, but those simulations won't produce real milk or skyscrapers. Nor will Simulated Intelligence produce real intelligence. Simulation is a perfectly valid goal in itself; it just shouldn't be mistaken for the thing it simulates.
Technology can duplicate or even extend human functions to varying degrees; that's what tools do. A hammer lets me hit something harder than I could with my fist; a knife lets me cut something my teeth couldn't; an atlatl (or spearthrower) or a bow and arrow extends my reach. Each tool is useful for a restricted range of tasks, though it can sometimes be adapted for tasks it wasn't originally intended for. While tools have exceeded human abilities for a long time, I often think in this connection of the old ballad of John Henry, the steel-driving man who was bested by a steam drill. That didn't make the steam drill human, let alone superhuman.
Alan Turing's dream was to invent a universal computer ("computer" in its old sense, which referred to human beings who performed calculations) that could be adapted through compiled instructions to perform any task ... that could be done by compiled instructions. This, he apparently believed, was what "thinking" was. I think it's a subset of thinking at most. Computers can simulate the playing of strategy games (chess, checkers, Go); the storage, indexing, and retrieval of information; the guidance and control of manufacturing tools; and so on. While all of these tasks are connected to human intelligence, they aren't human intelligence.
The trouble is that so many computer fans are eager to believe in computer consciousness already, and indeed have been since the first ones were built. They're impatient with philosophical quibbling and fond of rhetorical questions: If a computer can do X, then shouldn't we just say or agree that it can think or is conscious? Is it fair to deny that a computer can be conscious and can think? How would you feel if someone denied that you are conscious? Quit being such a picky human-centrist, and accept that computers can be human too. But these are emotional appeals, not arguments, and they have a certain irony given their roots in Skinnerian behaviorism, which rejected appeals to human inner lives, consciousness and the like in favor of a focus on observable behavior. As a research program with a restricted scope, it was not an illegitimate idea, but as a global claim about human beings, it was always total bullshit. One of the giveaways was that proponents of behaviorism never seriously applied its implications to themselves: other people were merely the products of their conditioning, but they somehow transcended their own conditioning and were able to see what the sheeple couldn't.
Nathan explains to Caleb that Ava views him as something like a father to it, and therefore not a potential sexual partner as Caleb would be. That could only be true if Nathan had programmed Ava to see him that way, since Nathan is not in fact Ava's father, not even metaphorically. Ava has no parents, and would not have an unconscious mind (unless, again, Nathan programmed it to have one). The complicated relations between parents and children come partly from the long period of children's dependence, when they can't speak or take care of their own needs. An Artificial Intelligence wouldn't have such a period in its existence -- unless, again, it were programmed to, and why would anyone do that?
It's relatively easy to construct a robot that would emit an "ouch!" sound if you punched it, or a sound of human pleasure if you stroked it. But that doesn't mean it feels either pain or pleasure. The sounds could be reversed -- "ouch" for stroking, "oooh" for punching -- or they could be replaced with "quack" or "meow" or any random sound. My cell phone can be programmed to ring with any sound I can record. A creative design team with a big budget could probably build a robot with a wider, superficially more convincing range of responses. In time technology could no doubt be developed that would cause an android's skin to "bruise" when it was punched hard enough. But again, why bother? Who really wants a robot that will bruise or bleed, or cry real tears, or come on your face, or spit on you, or vomit, or excrete, or fart? In order to fully simulate humanity, it would have to do all those things and more.
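To make that arbitrariness concrete, here is a minimal Python sketch (the names and the stimulus table are invented for illustration, not drawn from any real robot's software) of the kind of stimulus-to-sound lookup described above; swapping or replacing the entries changes the robot's "reactions" without changing anything about what it feels, which is nothing.

    # A hypothetical stimulus-to-sound table. The "reaction" is just a lookup;
    # nothing here feels pain or pleasure.
    RESPONSES = {
        "punch": "ouch!",
        "stroke": "oooh",
    }

    def react(stimulus: str) -> str:
        """Return the programmed sound for a stimulus."""
        return RESPONSES.get(stimulus, "...")

    # Reversing the table, or replacing every entry with "quack" or "meow",
    # is a trivial edit -- and the mechanism is exactly the same either way.
    REVERSED = {"punch": "oooh", "stroke": "ouch!"}

    print(react("punch"))   # ouch!
    print(react("stroke"))  # oooh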
And even more important for our purposes here, at what point, as the simulation was made more complex and superficially lifelike, could that robot programmed to say "ouch!" be credibly said to feel pain, to have consciousness, and so on? Confronted with the finished product, many people might very well be fooled by it. But I see no reason to suppose that it really was conscious; at most it would be simulating consciousness.
As I thought this through, I realized I was re-inventing the philosopher John Searle's "Chinese Room" thought experiment from 1980. Imagine that I know no Chinese, but I am given a list of procedures -- a program -- that tells me what response to write when I'm given a piece of paper with something written on it in Chinese. I consult the list and write the programmed response. Do I understand Chinese? Of course not. But now suppose that I memorize the list of procedures. Someone hands me a text in Chinese, I consult my memorized list, and I write the programmed response. Do I understand Chinese? Of course not. Now suppose that a computer is programmed with the same list of procedures. Does it understand Chinese? Of course not.
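For readers who like to see the bare mechanism, here is a deliberately crude Python sketch of the room's "program" (the rule entries are invented placeholders, not Searle's own examples): a symbol-matching lookup that works identically whether it is followed from a paper list, from memory, or by a computer, and that at no point requires an understanding of Chinese.

    # A crude sketch of the Chinese Room procedure. The operator -- person or
    # machine -- matches the incoming symbols against a rule book and copies
    # out the prescribed reply. The entries below are invented placeholders.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "你叫什么名字？": "我叫小明。",
    }

    def chinese_room(incoming: str) -> str:
        """Return whatever reply the rule book prescribes for this exact input."""
        # The strings are opaque tokens to the operator; only their shapes matter.
        return RULE_BOOK.get(incoming, "对不起，我不明白。")  # default canned reply

    print(chinese_room("你好吗？"))  # prints the canned reply; no understanding involved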
Searle's paper generated a lot of debate, some of which I followed. It was amusing to see how heated it got sometimes. Some of Searle's critics tried to dispose of his argument by saying that it would be impossible for a person to memorize a complete list of input/output pairs for Chinese or any other language. Of course! This is a thought experiment, meant to clarify issues. No one, I hope, would criticize Einstein's thought experiment about a train traveling at nearly the speed of light by pointing out that it's impossible for a locomotive to go that fast. Others claimed that Searle was merely appealing to "untutored intuitions" (but the True Gnostic would know better?) and that anyway the system he imagined was too slow to be called really intelligent. I should think this objection could be disposed of by imagining the procedures programmed into a digital computer; surely Science can evolve a computer fast enough to convince these guys that it really did understand Chinese. But once again, this objection misses the point of a thought experiment generally, and of Searle's challenge in particular.
The bit about "untutored intuitions" is ironic, since AI propaganda is constructed to appeal to the untutored intuitions of the layperson. We're supposed to get over our fears about AI and technology and accept AIs as people just like ourselves; to deny an AI's humanity is just prejudice, like racism. We need to become more enlightened so we can grapple with the myriad ethical issues that AI presents!
When I wrote my earlier post, I'd seen Ex Machina but hadn't read any reviews or promotional material that might cast light on what the writer/director, Alex Garland, thought he was doing in the film. Reading some afterward didn't turn up any surprises. For example, Garland says:
What the film does is engage with the idea that it will at some point happen. And the question is, what that leads to. If they have feelings and emotions, of fear and love or whatever, that machine starts having the same kinds of rights that we do. At some point machines will think in the way that we think. There are many many implications to that. If a machine can't get ill, and is not really mortal, it seems to me that quite quickly some kind of swap will start to happen. We don't feel particularly bad about Neanderthal man or Australopithecus, which we replaced. So whether that's a good thing or a bad thing, it's up to the individual, I guess. I find myself weirdly sympathetic to the machines. I think they've got a better shot at the future than we do. [laughs] So that's partly what the film's about.
So, Garland too appears to believe in "the machines" as the Next Evolutionary Step, which bespeaks a failure to understand either machines or evolution. "If they have feelings and emotions ... or whatever" begs the question by assuming that a machine programmed to simulate feelings and emotions really has them. I found when I discussed the Chinese Room problem with computer-science students back in the 80s and 90s that they too were excessively ready to ascribe intention and agency to computers. One said that surely a computer programmed with the Chinese-language algorithms would begin to notice a correspondence between the Chinese texts and events in the world outside, and would develop an understanding of the language. But even a human being -- John Searle, for example, or I myself -- couldn't do such a thing, since we would have no basis for spotting such a correspondence. A computer could only "notice" what it had been programmed to notice. I pointed this out, and he backed down, but he clearly wasn't convinced.
As far as I know, no one has any idea how to write software for consciousness. (Ex Machina must postulate that Nathan has invented a completely new kind of computer for the task.) Simulations don't do it. What we have at this point is hand-waving: if you build a sufficiently complex machine that can simulate human behavior so that human beings (who are notoriously prone to anthropomorphize the inanimate) are fooled by it, then it has somehow become conscious and deserves its rights. I think a crude, naive behaviorism underlies this belief. Following my thought experiment about progressing from a machine programmed to say "ouch" when struck to a machine programmed and designed to roll on the floor bleeding when it's struck, at what point does consciousness emerge? I don't think it does, and the burden of proof lies on the person who claims otherwise.
One of the characters in Robert A. Heinlein's 1966 science fiction novel The Moon Is a Harsh Mistress is Mike, a supercomputer that, having reached a certain level of complexity, magically becomes self-aware and conscious. I say "magically" because Heinlein doesn't even try to explain it; it just happens -- in fiction. In the real world it hasn't happened yet, though the Internet probably has as many nodes and circuits and processors and as much RAM as Mike had, and it probably won't happen, because that's not how consciousness works. We can see a continuum from "simple" one-celled life to more complex organisms to human beings, but machines aren't on that continuum. What I find highly significant is how many self-styled rationalist scientists make the same leap of faith: if we build a big / fast / complex enough computer, it will Wake Up. I'm willing to be agnostic on this matter, but so far I see no reason to suppose that it might. Appeals to my human sympathies and intuitions aren't arguments; they beg the question that must be argued.
In the Terminator movies, the audience is allowed to look at the world from the killer cyborg's viewpoint: everything is bathed in red light, lines of code scroll up and down the screen. The Terminator selects a response to a troublesome human from a menu: "Fuck you, asshole." Har har har! This visualization is based on the assumption that there is someone inside the cyborg, evaluating the input and deciding how to respond: a homunculus, in short, like a tiny kid working the controller of a video game. I think that this is what most people who try to anthropomorphize computers and robots imagine, like the professor of computer science who wrote that "today's computer spends most of its time fighting back tears of boredom" -- but there is no one in there.
"A better shot at the future"? I suppose that Ava's abandonment of Caleb -- who has sided with it and tried to rescue it from Nathan -- to die alone of hunger and thirst in Nathan's secret lab, is meant to dramatize Garland's "we don't feel particularly bad about Neanderthal man or Australopithecus, which we replaced." Just so, Ava tosses Caleb into the evolutionary dustbin, where he belongs. If that's supposed to make me feel "weirdly sympathetic to the machines," it fails. There's something I'm not sure now whether Ex Machina explained: I can't help wondering how Ava is powered, or what will happen to it in the big wide world when its batteries run out. Garland says "If a machine can't get ill, and is not really mortal," but machines -- especially computers and other electronics -- are really quite fragile, and unlike organisms they don't heal when they're damaged. What if Ava is hit by a careless driver, and its false skin is broken open to reveal the works beneath? (I think I smell a sequel coming.)
So what issues are raised by Artificial Intelligence? As I wrote in my previous post, stories like Ex Machina aren't about them but about relations between human beings, with the AI standing in for whatever Other frustrates you, makes you anxious, insists that they're as human as you are despite your patient certainty that they aren't. (Sometimes it stands in for the superior being before which so many people long to prostrate themselves, a being free from human weaknesses which will teach us The Way and lead us to the vague Better Place many of us dream about.) Some of these anxieties arise from our own humanity, so they're still about us: our ability to manipulate our environment consciously, for instance. Does that make us like gods? No, because our conceptions of gods are based on our conceptions of ourselves. The Creator is a craftsman like a human craftsman, or a parent like a human parent, and so on. These stories are also about how we treat others, or Others, those who are human but whom we perceive as not being Like Us. Not only are people too prone to ascribe personhood to inanimate things; we are also too prone to fail to recognize the personhood of other people when their demands on us become too inconvenient. These concerns are as old as humanity, and anyone who claims that computers and AI create "new" problems is blowing smoke.