Monday, August 23, 2010

We'll Have A.I. in the Sky When We Die, Part 1



(I wrote this back in the 90s, and while Artificial Intelligence no longer seems as trendy as it was then, I think a lot of the issues I talk about here are still very much with us.)

In the 1990s, or so we were assured when I was a lad, everyone would have his own personal helicopter, powered by its own clean, safe, and inexhaustible nuclear power pack. (Automobiles would be obsolete!) The skies would be full of commuters, all of them white males as all Americans were presumed to be in the 1950s, returning to the suburbs from their 20-hour-a-week jobs. (The work week would shrink, giving us all more leisure for home barbecues cooked on clean, safe, and inexhaustible nuclear-powered backyard grills!) Their wives would preside languidly over homes staffed by robot butlers and maids, each powered by a clean, safe, and inexhaustible nuclear power pack, whose excess heat could be used to distill the alcoholic beverages needed by 1950s housewives to dull the boredom and isolation of their suburban days.

Well, you get the picture: it's the Jetsons. But the Jetsons affectionately parodied visions of the near future that were discussed seriously in other media. Nowadays those failed predictions are a bit embarrassing: nuclear power has lost its glamour, few people like the idea of filling our skies with individual commuter aircraft, and economists lecture us that the relative affluence of the 1950s was a historical fluke, that we'll be lucky if our standard of living doesn't decrease. As I read computer scientist Gregory J. E. Rawlins's book Slaves of the Machine: The Quickening of Computer Technology (MIT Press, 1997) I often had the feeling that I had been transported into the past, the past of the 1939 World's Fair, the past of the Jetsons.

Rawlins plays the prophet with such gee-whiz gusto, in fact, that I'm still not sure he isn't kidding. The future is here, by golly, and you ain't seen nothin' yet! "From ovens to spaceships, from mousetraps to nuclear power stations, we're surrounded by millions of automatic devices each containing small gobs of congealed wisdom" (2). Today's portable computer is "half the size of Gutenberg's first Bible - and perhaps as important" (21). Computers "twenty years from now will be vastly different from today's primitive mechanisms, just as a tiny computer chip is different from a barnful of Tinkertoys [5]... and in 20 years they'll be as cheap as paper clips" (28). "Because in the far future - which in the computer world means one or two decades - we may make computers from light beams. Sounds like science fiction, doesn't it? Well, in 1990 AT&T Bell Laboratories built one" (31). "In forty years, something bee-sized may carry more memory and computing power than all today's computers put together" (34). "But one day, what we call a computer may be something grown in a vat that will clamber out clothed in flesh" (33). "And, for the same money, by 2045 you could have 8500 million 1997 dollars worth of machine - four times the power of the entire world's supply of supercomputers in 1997 - all in your designer sunglasses" (34).

I suppose there's no harm in this sort of thing, but is it worth $25 for 135 clothbound pages? You can find equally solid predictions for a lot less in the checkout lane at your supermarket. The dust jacket copy of Slaves of the Machine touts it as an introduction to the wonderful world of computers, written in simple, colorful language that any cybernetically challenged doofus can understand. I doubt that Slaves will reach its intended audience, though, not just because of its cost and lack of graphics, but because of its patronizing tone. As the examples I've quoted above indicate, Rawlins is more interested in dazzling than in explaining. Slaves of the Machine is more like an evangelical tract, full of promises and threats, than science writing. (But I may be drawing a nonexistent distinction there.)

One of Rawlins's pronouncements, though, caught my fancy: "Today, our fastest most complex computer, armed with our most sophisticated software, is about as complex as a flatworm" (19). In that spirit, we have in our studio a state-of-the-art supercomputer, courtesy of BFD Technologies, which I shall proceed to split down the middle with this fire axe. In the course of our program today we'll return to see if this complex marvel of human ingenuity can, like a flatworm, regenerate itself into two complete supercomputers, each with the memory and software of the original computer already installed! Meanwhile, we'll take a closer look at Prof. Rawlins's vision of the future of computing.

THAT'S NOT A BUG, THAT'S A CREATURE, I MEAN FEATURE!

Having begun by promising us unlimited horizons of computer power, Rawlins proceeds to explain How We Talk to Computers. Like a guide warning Sahib about the innate inferiority of the natives he's about to encounter, Rawlins warns the reader, "It's no use appealing to the computer's native intelligence. It has none" (42). (Remember that line.) I use the guide/Bwana/native image because Rawlins himself uses it, not just in his guiding metaphor of computer languages as "pidgins" but in his anecdotes.

Using a computer, he says, is like being driven around France in the 1940s by a surly driver named Pierre, who "obviously thought he was competing in the Grand Prix" and needed precise instructions, which were futile because "he was particularly dense. He understood only two words -- yes and no; or rather, since he refused to understand anything but French, oui and non" (44). But then, "around 1955, ... we got Jacqueline, a slightly more advanced translator. Of course, Jacqueline didn't speak English either, but she knew Pierre really well" (44f). Oo la la! Lucky Pierre! Gradually it became clear to me that Rawlins meant this conte as an allegory of the development of computer "languages," but even though I already knew this stuff, I found it hopelessly confusing. I can't imagine what a real computer virgin would make of it.

Rawlins chides us for wondering, "How could a mere machine understand a language?" Since he has just told us that French computers, at least, have no native intelligence, this seems a reasonable question to me, but Rawlins breezily assures us: "Still, without getting into philosophical subtleties, it's okay to say that a computer can understand lots of simple things, if we first explain them carefully enough using a small number of well-chosen commands" (45). With that, he's off on a basic course on computer programming: the IF-bone is connected to the THEN-bone, the THEN-bone's connected to the NEXT-bone, and so on. And I hope you were paying attention, class, because Rawlins moves quickly on to what really interests him: the cult of the Silicon God, and the care and feeding of its priesthood.
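For readers who want to see what that "small number of well-chosen commands" actually cashes out to, here is a minimal sketch (my own, in Python; not from Rawlins's book). The machine "understands" nothing: it merely matches each command against a fixed table of IF-THEN branches, exactly the connected bones Rawlins describes.

```python
# A toy command interpreter: a "small number of well-chosen commands"
# driving a machine with no native intelligence. The command names
# (FORWARD, BACK, RESET) are invented for this illustration.
def run(commands):
    position = 0
    for cmd in commands:
        if cmd == "FORWARD":    # the IF-bone...
            position += 1
        elif cmd == "BACK":     # ...connected to the THEN-bone
            position -= 1
        elif cmd == "RESET":
            position = 0
        else:
            # Anything outside the fixed table: oui or non, nothing else.
            raise ValueError(f"command not understood: {cmd!r}")
    return position

print(run(["FORWARD", "FORWARD", "BACK"]))  # prints 1
```

The point of the sketch is the `else` branch: every ounce of apparent "understanding" lives in the programmer's enumerated cases, which is just what makes the metaphor of the machine "understanding" so slippery.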

In ancient times (about thirty to forty years ago), "computers were costly, cranky, and above all, customized" (48). "In those days, even the vibration of a passing car or plane could crash the machine" (51). "No one could dispute the technical elite because no one -- often including the elite themselves -- understood the machines. What they did, how they did it, what they could do, these were all mysteries to almost everyone, including the biggest companies and the biggest names in computing today" (49). As late as 1972, when I took a FORTRAN class at IU, The Computer sat in its Holy of Holies, its sacred meals of punched cards fed to it by graduate students in Computer Science, and spewed forth the printouts of its oracles (which in my case, consisted mostly of error messages).

Remember that even in those days, computers were advertised as "electronic brains." Not knowing what they did or could do didn't keep their advocates from making grandiose claims about them, which ought to make us suspicious about the no less grandiose promises Rawlins was making about the future of computing just a few pages back.

But ah, then "came the pidgin designers," who "made it possible to talk to the machine in a sensible way." But wait! "Today's computer pidgins are hard to learn and hard to use. All are cryptic, precise, and dense" (53). Contrasting crocheting instructions with computer languages, Rawlins concedes that "even the rankest beginner can learn to crochet a doily or scarf in a few hours, whereas it often takes computer beginners days or weeks before they can program anything interesting" (54).

Now, a pidgin can be used by ordinary people, by traders and the "natives" with whom they trade. Computer languages vary in their difficulty, but for the most part they still require an interpreter - the programmer - between the user and the "native." A closer analogue to a real "pidgin" is the mouse-and-icons interface of the Amiga, the Mac, and Windows, which enables people without extensive technical training to "tell" their computers what to "do."

But there's a more basic problem: Computers are not trying to communicate with us, any more than a thermometer is. Despite the common use of anthropomorphic language such as "think" and "feel" by their devotees, they do not think, or feel, or see, or communicate. Perhaps someday they will, but for now and the foreseeable future they don't, and any computer enthusiast who pretends otherwise is either lying or fooling himself. Computer "language" is also a metaphor, not a literal fact. In a very loose sense, it could be said that you "communicate" with a light bulb when you turn on its switch, and that is the only sense in which computer "languages" enable us to "communicate" with computers. Rawlins's extended metaphors are entertaining, but they are also misleading. I suspect that when he says that "computers" think, he is using "computers" metaphorically too: to mean, not today's computers, but imaginary computers which might someday clamber out of hydroponic vats but for now are only a gleam in Gregory J. E. Rawlins's eye.

Rawlins blames the impenetrability of computer "pidgins" on programmers themselves, "slow-witted, tiny-brained bunglers" (74) who are "lost in the past" (81), "like lawyers who are concerned only with law, not justice" (80). "While hardware keeps leapfrogging ahead every eighteen months, software is still lost in the dark ages of the 1960s. ...What we lack today isn't more hardware, but more imagination" (81). "[W]e don't let [computers] revise their procedure in the light of new evidence or insight. We never let anything new occur to them, and we deny them access to the history of earlier attempts" (97).

All this may be true, but it's still unfair to the programmers, who must write instructions for machinery that may be twice as fast and cost half as much, but still has no native intelligence. Programmers don't "deny them access to the history of earlier attempts"; computers aren't seeking such access. "The change to adaptive computer systems will come, as often these things do, by the actions of a few forward-looking software companies with the courage to take a risk and produce a new kind of software -- adaptive software" (81). In other words, Rawlins has no idea how to do it either, but he will accept funding from "forward-looking software companies."

What surprises me is that at the same time he's impugning the intelligence and integrity of programmers, Rawlins anticipates every objection I could think of. "Of course, there are financial and physical limits on how much we can improve today's computers" (35). "Year by year we're rapidly gaining power and, just as rapidly, losing control" (37). Faster, more powerful machines still must be programmed by human beings. "The more people we put on a software project, the longer it takes .... At current rates of progress they'll never produce flawless thousand-million-line programs" (69, 70).

On a deeper level, "We simply can't track all the myriad details involved in solving a very complex problem." This is not because we're tiny-brained bunglers, but because "no one knows how to design a sequence of steps that represents what we would call thinking" (105).

"The trouble is that just because something is hard for us it isn't necessarily hard for computers, and vice versa. We invented them to do something we're bad at: they can do arithmetic with unparalleled speed, reliability, and accuracy. So it's only fair that they're bad at things we're good at. The surprise is that they're bad at such apparently easy things" (111). Rawlins has evidently read, and taken to heart, such serious critics of Artificial Intelligence research as Joseph Weizenbaum, Noam Chomsky, John Searle, and the arch-heretic Dreyfus brothers, though he prudently names none of them, lest he be pronounced anathema by Douglas Hofstadter and Daniel Dennett.

Still, the fault lies not in our stars, Horatio, but in our wicked refusal to recognize that a computer's a man for a' that. But let's stop here for a moment to see how our split computer is coming along... hmmmm. No regeneration so far. Well, we mustn't be impatient. As Thomas Edison said, genius is 1% inspiration and 99% perspiration. We'll be right back with more about Slaves of the Machine, right after this message from Microsoft.