I'M SORRY, DAVE ... I CAN'T DO THAT
Behaviorism is a 20th-century scientific fad whose biggest self-promoter, the late B. F. Skinner, made a career of playing a caricature of the 19th-century materialist crank who declares dogmatically that if it can't be measured, weighed, etc., it's just superstitious nonsense. Behavioral psychology lives on, exploiting the principle beloved of evangelists, generals, and scientists everywhere: if we haven't delivered the Second Coming / total victory / a Grand Unified Theory, it's only because you haven't given us enough money and personnel! Dig till it hurts! Send in your contribution NOW!
Like some other Artificial Intelligence researchers, Rawlins subscribes to the behaviorist principle that "Ultimately it's behavior that matters, not form" (17). If it quacks like a duck, it is a duck, no matter how many superstitious, animistic mystics claim that it's a CD player.
But AI apologists have made one vital modification to the principle: where behaviorists refused to anthropomorphize anything, especially human beings, some AI folk are ready to recognize the full humanity of computers, yesterday if not sooner. Slaves of the Machine churns up a weird goulash of behaviorist rhetoric spiced with echoes from Abolitionists, the Civil Rights movement, even the anti-abortion movement. (Hence the pun on "quickening" in the book's subtitle: If a computer scientist were gestating a computer Beethoven, would you abort his research grant?) "Most computer hardware today isn't working all the time anyway; most computers are asleep most of the time, snoring away while they wait for us to give them something to do" (81). "So today's computer spends most of its time fighting back tears of boredom" (29). Dig deep into your pockets, friends, and give, so today's computer will never have to cry again.
"So what will our future be? Wonderful and terrifying," Rawlins crows (20). "Are we ready for a world of feral cars?... Some of these future machines will be so complex that they'll eventually become more like cats than cars.... Perhaps all of them will be insane to some extent. Perhaps we are too. Perhaps when your future toaster breaks down and refuses to toast your bread because it's having a bad day, you won't call an engineer or a mechanic, you'll call a therapist" (121).
"One day our artificial creations might even grow so complex and apparently purposeful that some of us will care whether they live or die. When Timmy cries because his babysitter is broken, ... then they'll be truly alive" (121)." Like the Velveteen Rabbit? Or Pinocchio? Rawlins knows, and says, that human beings are prone to personify inanimate objects. Children already become attached to inanimate objects like stuffed toys or security blankets; adults are not much better.
"As the children who accept them as alive grow up and gain power in the world, they'll pass laws to protect the new entities." And he sees this as progress? More like a regression into animistic superstition. But it's also not very likely. People don't pass laws to protect the civil rights of the security blankets or stuffed toys, or imaginary playmates, they loved as children either.
I'm sure there are people who would buy a robot "pet," a thing coated in washable plastic fur that eats electricity, doesn't need a litter box, never goes into heat, and can be neatly switched off when you go on vacation. It would alleviate another common problem: the cute little puppies that are abandoned when they grow up and aren't cute anymore. Rawlins would like us to have robots that are too lifelike to be dumped in the landfill, but why bother? We already have real pets.
Rawlins likes to think that robot "pets" and robot "people" will present new ethical problems. This is typical Jetsons talk. The ethical problems involved are quite old: other animal species and other human "races" have raised the very same ones, and we've mostly preferred not to think them through. Since we've never done very well with them before, I see no reason to suppose that we'll do better when computers are involved.
The easiest way not to deal with those problems has been displacement: the faith that on the other side of the next mountain there lives a "race" of natural slaves who are waiting to serve us, their natural masters. They will welcome our lash, set our boots gratefully on their necks, and interpose their bodies between us and danger, knowing that their lives are worth less than ours. Since there is no such natural slave race, the obvious solution is to build one. The question is not whether we can, as scientistic fundamentalists like to think, but whether we should. Rawlins is a bit smarter: he not only recognizes but stresses that true machine intelligence would not be controllable. Yet he confidently declares that it's inevitable, which he needs to prove, not assert.
In fact he's so obviously wrong that I wonder if he's using the word in some arcane metaphorical sense. "Inevitable" suggests a downhill slide that requires all your strength to retard, let alone stop. On Rawlins's own showing, such machines will be built only if governments and/or businesses pour vast amounts of money and intellectual labor into reinventing the art of computer programming from the ground up, and probably also reinventing computer hardware from the ground up, in order to produce a totally new kind of machine that no one really wants anyhow: a "feral" computer that is unpredictable, uncontrollable, and "insane to some extent." I don't call that inevitable. It's an uphill climb if it's anything at all.
Like many scientistic evangelists, Rawlins accuses anyone who doesn't want to climb that hill of superstitious pride: "Some of us may resist the idea of ourselves as complex machines -- or of complex machines one day becoming as smart as we are -- for the same reason that we resisted the idea that the earth revolves around the sun or that chimpanzees are our genetic cousins" (120).
Most people who talk about this kind of sinful pride forget that "Man" was never at the top of the Great Chain of Being: that spot was reserved for the gods and other supernatural beings, like angels. Humility about our human status has always been an official value in the West, even if it has been honored more in the breach than the observance. In practice its main function has been to rationalize and reinforce human hierarchies (obey the King, as he obeys God). This alone should give us pause before we try to push machines to a rung above our own. James Cameron seems not to have considered in his Terminator movies that there would be human Jeremiahs urging their fellows to worship Skynet, the intelligent computer network that was trying to wipe out the human race, or at least to submit to it as the instrument of God's wrath, much as the original Jeremiah urged Israel to surrender to Babylon.
A kinder (if two-edged) analogy is the biblical Tower of Babel. Yahweh smote its builders with confusion of languages, not because he was annoyed by their pride, but because he really feared they would build a tower that could reach Heaven. Just as there were still naifs in the Fifties and Sixties who opposed the space program because they thought the astronauts would crash-land at the Pearly Gates and injure St. Peter, there are probably those who oppose AI because they think these are mysteries in which "Man" should not meddle. But it's more accurate -- and despite its triumphalist rhetoric, Slaves of the Machine bears me out -- to point out that the Tower of AI will never reach Heaven. Inventing smaller, faster microprocessors with extra gigabytes of memory won't produce machine intelligence -- not even if perfectly coordinated teams of programmers manage to write flawless million-million-line programs. Rawlins and his colleagues are not clear-eyed rationalists -- they're the builders of Babel who argued that with just a few more research grants, a few thousand more slaves, Babylonian newlyweds could honeymoon in Paradise.
GARBAGE IN, GOSPEL OUT
On Rawlins's own showing we're reaching the limits of computers as we know them, because programming for every contingency is impossible in any but the simplest tasks. He says we could program current hardware to be more flexible, if only programmers would stop living in the past, but he gives no reason to think so and numerous reasons not to think so. It seems more likely to me that we'd need new kinds of hardware too, kinds that aren't based on on-off logic switches -- and that hardware doesn't exist yet. Maybe someday it will, but Rawlins is already bowing down before it as if it did.
The most likely scenario Rawlins presents is that technogeeks will get funding for new hardware developments the way they always have: by promising governments, businesses, and the military faster, more compact, more flexible, but still absolutely docile electronic slaves. Instead they would produce (in the unlikely event they succeeded) machines with minds of their own. It isn't inevitable that they will get the money they demand, though. There's no reason why the public must pay the AI cult to build whatever its members want while promising something else. We can say No.
And on that somber note, our time is just about up. Let's take one last look at our computer flatworm experiment.... No, it appears that the sundered parts of our BFD supercomputer have not regenerated themselves. BFD promises us, however, that this problem will be corrected in the next version of the operating system, which will be released very soon. So thanks for tuning in, and join us next week for the next installment of "This Old Computer"!