Thursday, November 10, 2011

When Cyborgs Collide

Nathan Jurgenson was back at Salon last weekend with another, even more vacuous post on the power of Teh Social Media. Though he admits that OWS "is most certainly not an Internet Revolution. Much more than a 'digital' protest, the movement has been fundamentally concerned with taking over geographic space, mobilizing bodies in an area, yelling, walking, breathing, sleeping and doing what physical bodies do", he insists that the "lesson that is playing out over and over is that utilizing both physicality and digitality and the important intersection of the two can effectively mobilize massive numbers of people."

Le duh! "Digitality" is incidental to the communications media being used. Electronic media were originally analog (telegraph, radio, television, videotape), and still and motion photography didn't even need electricity to function. If technology had developed in different directions, today's activists might be recording police abuse on analog visual media, or sending messages by fax or analog walkie-talkies; the 140-character limit of Twitter is not, in itself, more significant than the size of a Post-it note. Digital technology is not key, let alone determining. (Which reminds me, "digital" is not the opposite of "physical.") Would-be technological visionaries, recognizing the limits of digital computers, have speculated that the next generation of computers won't be binary or digital. If they're right, then "digitality" would turn out to be a mere phase. What is important is cheap, portable, instant communication.

Jurgenson goes on to talk about "augmented revolution," which turns out to be a link to something he wrote elsewhere, and about "the implosion of atoms and bits into an augmented reality." The technological/scientific ignorance of that phrase (which he evidently considers significant: he also uses it in his "augmented revolution" post) tells us a lot about the quality of Jurgenson's thought. Atoms are, of course, everywhere; we are all composed of them, whether we're protesting or not; but they are not imploding, around the OWS protests or elsewhere. (I'm reminded of Moliere's Bourgeois Gentilhomme, who is delighted and rather puffed up to discover that for forty years he has been speaking prose without knowing it; but also of Jurgenson's claim in his previous post that Twitter and e-mail are "new forms of language-production.") Ditto for bits: Jurgenson might as well say that letters of the alphabet, or Morse-code dots and dashes, are imploding into an augmented reality. It sounds cool, even meaningful, but it isn't. Calling something by a different label doesn't make it different.

Another word that Jurgenson likes is "cyborg," which has indeed been trendy among ambitious academics for a couple of decades. The term was invented in 1960, a portmanteau of "cybernetic" and "organism," with the idea that a human/computer hybrid might be able to survive and function in outer space. Wikipedia traces the concept of the man/machine combination to an 1843 story by Edgar Allan Poe, which "described a man with extensive prostheses." I haven't read it yet, but from the title, "The Man That Was Used Up," he sounds like a forerunner of L. Frank Baum's Tin Man, who, as various body parts were cut off, replaced them with tin prostheses, until there was no organic part of him left. [P.S. Having read it, I can say that I wasn't far off; it's more of a Mark-Twain-style tall tale than a science fiction story.] If "cyborg" is used to refer to such constructs, then Captain Ahab and Captain Hook would also be cyborgs -- especially the latter, who could have different hooks for different jobs. But then, someone with a crutch, a glass eye, a wig, or platform shoes would be a cyborg too; and if enhancement (technology that extends our capabilities beyond our normal, unaided abilities, like a bow and arrow or a crowbar) is the defining trait, then not only have human beings been cyborgs in this sense since we started making tools, but we are all cyborgs already. In protests, not just someone who uses a bullhorn to address a crowd, but someone who stands on a soapbox in order to be heard is using enhancing technology, and therefore is a cyborg. When your terminology becomes all-inclusive like that, it's no longer useful, or -- I'd argue -- meaningful.

Speaking of meaning, though, Wittgenstein admonished us not to look for the meaning of a word, but for its use. I must confess that I haven't yet read Donna Haraway's influential (or anyway oft-cited) writings on the cyborg, so let's look at how people like Jurgenson are using the concept: "Many of the well-connected protesters-turned-cyborgs can snap photos, shoot videos, organize on Facebook and tweet to the world", he writes. The link is to another webpage whose content he supplied: "Individuals and social groups have always been cyborgs because we have always existed in tandem with technology" -- just as I said earlier. "Today, with the vast proliferation and diffusion of new technologies throughout society, techno-human syntheses occur in more aspects of our lives than ever before." He goes on to speak of the "digital" and the "material" as opposite, if perhaps complementary.

Well, no. "Techno-human syntheses" occur in all aspects of our lives already; I think Jurgenson means something like "high-tech" on the techno- side, technology that looks inorganic, mechanical, artificial, electronic. That's a very narrow view of things; it's a commonplace that human technology was biological from its earliest stages, involving the domestication of animals (which was mirrored by domestication of human beings), the domestication and planned breeding of plants for food and medicines, the String Revolution (nets, cords, ropes, spinning, weaving), the construction of shelters and other buildings from a variety of materials, landscaping, and so on.

Why is inorganic, mechanical, electronic technology so glamorous, at least to a certain mindset? Part of the appeal, I think, is tied to another ancient concern: the desire to escape or "transcend" the earthbound, mortal, material, corruptible aspects of life. Historically this desire is largely associated with males, though no doubt many women share it; it's also associated with religion, though the physical (or "hard," giggle smirk) sciences have taken it on as well. If it's made of steel, or better yet plastic, it won't die or decay, so why not make people out of metal or plastic? Then you'd have enhanced, superior people! Maybe we could upload our personalities into those imperishable inorganic bodies, and we'd be immortal!

I've written before about the popularity of this fantasy in computer science circles. Since then I've read an odd book by Isiah Lavender III, Race in American Science Fiction (Indiana, 2011), which takes a similar line. Lavender begins by positing "AI (artificial intelligence), cyborgs, artificial people, and posthumans" as "new racial forms", hoping to imagine "how individuals might newly conceive identity within newly technological worlds" (17). He quotes with approval another writer who claims "The fact remains that technology is rapidly making the concept of the 'natural' human being obsolete" (27); the "natural" human being has been "obsolete" since we learned to control fire.
My own thinking diverges here from that of posthumanist scholars in that I consider these new beings as new races. For example, would constructed humans or informational systems be privileged over their human counterparts? Is individual consciousness essential for machine races? How would this machine identity evolve? How should humanity interact with these other beings? How should race exist if the human body cannot be distinguished from a computer simulation or a cybernetic being? Certainly, sf criticism inspires a hotbed of ontological questions in regard to race and racism [27].
In construing "these new beings as new races," Lavender begs the question of what a "race" is. He also seems to take science fiction portrayals of "constructed humans" as some kind of documentary, when they are more likely allegories of differences that have already generated conflict among humanity. For example, in his discussion of Isaac Asimov's classic robot stories, he writes:
Asimov’s robots resonate with the antebellum South’s myth of a happy darkie – a primitive, childlike worker without a soul, incapable of much thought – cared for by his benevolent and wise master. This resonance is hard to ignore. As Edward James has observed in his own critique of Asimov, “The Three Laws restrain robots, just as the slave-owner expected (or hoped) that his black slaves would be restrained by custom, fear and conditioning to obey his every order” (40). Clearly, then, an otherhood reading allows us to consider the historical roots of I, Robot’s cultural bigotry [62].
In my review of Gregory J. E. Rawlins's Slaves of the Machine (MIT, 1997), I wrote about
the faith that on the other side of the next mountain there lives a "race" of natural slaves who are waiting to serve us, their natural masters. They will welcome our lash, set our boots gratefully on their necks, and interpose their bodies between us and danger, knowing that their lives are worth less than ours. Since there is no such natural slave race, the obvious solution is to build one. The question is not whether we can, as scientistic fundamentalists like to think, but whether we should. Rawlins is a bit smarter: he not only recognizes, he stresses that true machine intelligence would not be controllable. But he confidently declares that it's inevitable, which he needs to prove, not assert.
While I was working through Race in American Science Fiction, I reread Asimov's I, Robot, and I think Lavender misrepresents what Asimov was doing. First, his robots are not docile; some are even rebellious. But they're based not on science or technology but on something more like magic: the "positronic brain," which gives them a range of action and response far more complex than anything we can build even now -- and remember that these stories were written between 1940 and 1950, when computers in the real world were still mainly glorified adding machines. Asimov's robots can form personal loyalties (and enmities), they can Haz a Sad when separated from a beloved owner, they can construct mythologies about their own creation. True, Asimov used tropes from sentimental fiction to make his readers sympathize with them. But they're still machines, not persons. (Or turn it the other way around: Asimov's robots are persons, not machines, since he had no idea how to make real robots, so he invented persons in machine drag, like C3PO from Star Wars.)

It's interesting that Lavender (and Edward James) want to see them as people, thinking of them in terms of American racialism, but that means reading them backwards. If the Three Laws of Robotics "restrain robots," then brakes and steering wheels restrain automobiles -- isn't that slavery? Lavender wants robots to be able to "program themselves" -- does he want his own computer to do that? Does he read 2001 as the story of a slave revolt in which HAL fights back against his human masters? Does he want to think of the computer he used to write Race in American Science Fiction, or the presses that printed it, as slaves?

When Lavender gets to Artificial Intelligence later in the book, he throws off all restraint.
Humanity is the beginning for AI. Computers and computer programs are created in the pursuit of knowledge to make the acquisition and retention of information easier ... Additionally, the cognitive capacity of computers continually improves as humanity builds faster, more powerful microchips. The time computers can think, in fact, is upon us. In sf, machines have achieved a state of consciousness, a state in which the computer knows that it is aware and can make judgments through its own experience. This is an ominous scenario to say the least.

Any machine that demonstrates a sense of independent awareness, something that may be perceived as a nascent humanity, causes anxiety in most humans because it is an alien experience akin to the racial one – a white person fearing a black person and vice versa. We think of machines, like the computer, as being simple tools designed to help us in numerous circumstances. Yet, a computer may be a form of artificial intelligence, so the idea is not so simple [193-4].
Along with so much else that is wrong with this book, Lavender begs the question by assuming the personhood / humanity of computers. The "time computers can think" is not, in fact, upon us: it isn't even on the horizon. It's mildly alarming that Lavender confuses what has happened in sf -- that is, in fiction -- with what has happened in the real world. Nor is this a lapse. When he discusses clones on Battlestar Galactica, for example, he writes,
There is strong circumstantial evidence that suggests Cylons possess extrasensory powers such as being prescient, telepathic, or having the gift of astral projection. The cloned copies of one individual that are manufactured and raised together may think alike, thus speeding up communication through some kind of internal wiring that allows for thought projection and ultimately personality transference upon the death of a particular copy. Perhaps the worst [?] consequences of cloning involve the death of any member of a clone group because individual clone personalities are downloaded into new bodies, reintegrated into Cylon society, and no longer have a sense of individual uniqueness ... Regardless, their displays of individual identity, passion, and emotion throughout the story indicate the presence of a soul, something thought to be absent in clones [206-7].
I thought a sense of individual uniqueness was one of those nasty human traits that we're going to transcend in the posthuman. But notice that Lavender talks about the Cylons as if they actually existed, or represent the real-world results of cloning. There's no reason to believe that cloned human beings would be prescient, telepathic, or have the "gift of astral projection." Nor is there any reason to believe that clones would have a different nature than other human beings, any more than identical twins do. And "the presence of a soul, something thought to be absent in clones"? "Thought" by whom, and why do they think something so foolish?

The problems raised by technological "enhancement" of human bodies, or of sufficiently advanced and complex computer "intelligence," seem overrated to me in one sense. (The notion of these changes as "enhancements" is problematic, of course: it echoes the rhetoric of twentieth-century eugenics, which postulated the possibility of producing a superior species or "race" of human beings, on the false assumption that Darwinian evolution involves progressive improvement of species.) In another sense, they may turn out to be real, but they won't be novel: human beings have had trouble recognizing their common humanity with other human beings all through history -- not just based on "race" but on sex, class, language, custom, and so on. Some enthusiasts talk as if those conflicts had been resolved, and robots will suddenly present us with new ones. Lavender, by contrast, appears to worry that old "racial" conflicts will simply be transferred to the new "races." I'd say we should cross that bridge when we come to it, and work harder on resolving the conflicts we already have.

And it occurs to me: how odd that neither Jurgenson nor Lavender worries about -- or even notices -- an already existent form of artificial life that is widely hated and derided: the corporation.