True Names: And the Opening of the Cyberspace Frontier



  “But he really was self-aware, and that was the triumph of it all. And in those few minutes, I figured out how I could adapt the basic kernel to accept any input personality…That is what I really wanted to tell you.”

  “Then what the Limey saw was—”

  She nodded. “Me…”

  She was grinning now, an open though conspiratorial grin that was very familiar. “When Bertrand Russell was very old, and probably as dotty as I am now, he talked of spreading his interests and attention out to the greater world and away from his own body, so that when that body died he would scarcely notice it, his whole consciousness would be so diluted through the outside world.

  “For him, it was wishful thinking, of course. But not for me. My kernel is out there in the System. Every time I’m there, I transfer a little more of myself. The kernel is growing into a true Erythrina, who is also truly me. When this body dies,” she squeezed his hand with hers, “when this body dies, I will still be, and you can still talk to me.”

  “Like the Mailman?”

  “Slow like the Mailman. At least till I design faster processors…

  “…So in a way, I am everything you and the Limey were afraid of. You could probably still stop me, Slip.” And he sensed that she was awaiting his judgment, the last judgment any mere human would ever be allowed to levy upon her.

  Slip shook his head and smiled at her, thinking of the slow-moving guardian angel that she would become. Every race must arrive at this point in its history, he suddenly realized. A few years or decades in which its future slavery or greatness rests on the goodwill of one or two persons. It could have been the Mailman. Thank God it was Ery instead.

  And beyond those years or decades…for an instant, Pollack came near to understanding things that had once been obvious. Processors kept getting faster, memories larger. What now took a planet’s resources would someday be possessed by everyone. Including himself.

  Beyond those years or decades…were millennia. And Ery.

  — Vernor Vinge,

  San Diego

  June 1979—January 1980

  AFTERWORD

  Marvin Minsky

  In real life, you often have to deal with things you don’t completely understand. You drive a car, not knowing how its engine works. You ride as passenger in someone else’s car, not knowing how that driver works. And strangest of all, you sometimes drive yourself to work, not knowing how you work, yourself.

  To me, the import of True Names is that it is about how we cope with things we don’t understand. But, how do we ever understand anything in the first place? Almost always, I think, by using analogies in one way or another—to pretend that each alien thing we see resembles something we already know. When an object’s internal workings are too strange, complicated, or unknown to deal with directly, we extract whatever parts of its behavior we can comprehend and represent them by familiar symbols—or the names of familiar things which we think do similar things. That way, we make each novelty at least appear to be like something which we know from the worlds of our own pasts. It is a great idea, that use of symbols; it lets our minds transform the strange into the commonplace. It is the same with names.

  Right from the start, True Names shows us many forms of this idea, methods which use symbols, names, and images to make a novel world resemble one where we have been before. Remember the doors to Vinge’s castle? Imagine that some architect has invented a new way to go from one place to another: a scheme that serves in some respects the normal functions of a door, but one whose form and mechanism are so entirely outside our past experience that, to see it, we’d never think of it as a door, nor guess what purposes to use it for. No matter: just superimpose, on its exterior, some decoration which reminds one of a door. We could clothe it in rectangular shape, or add to it a waist-high knob, or a push-plate with a sign lettered “EXIT” in red and white, or do whatever else may seem appropriate—and every visitor from Earth will know, without a conscious thought, that pseudo-portal’s purpose, and how to make it do its job.

  At first this may seem mere trickery; after all, this new invention, which we decorate to look like a door, is not really a door. It has none of what we normally expect a door to be, to wit: hinged, swinging slab of wood, cut into wall. The inner details are all wrong. Names and symbols, like analogies, are only partial truths; they work by taking many-levelled descriptions of different things and chopping off all of what seem, in the present context, to be their least essential details—that is, the ones which matter least to our intended purposes. But, still, what matters—when it comes to using such a thing—is that whatever symbol or icon, token or sign we choose should remind us of the use we seek—which, for that not-quite-door, should represent some way to go from one place to another. Who cares how it works, so long as it works! It does not even matter if that “door” leads to anywhere: in True Names, nothing ever leads anywhere; instead, the protagonists’ bodies never move at all, but remain plugged-in to the network while programs change their representations of the simulated realities!

  Ironically, in the world True Names describes, those representations actually do move from place to place—but only because the computer programs which do the work may be sent anywhere within the worldwide network of connections. Still, to the dwellers inside that network, all of this is inessential and imperceptible, since the physical locations of the computers themselves are normally not represented anywhere at all inside the worlds they simulate. It is only in the final acts of the novel, when those partially-simulated beings finally have to protect themselves against their entirely-simulated enemies, that the programs must keep track of where their mind-computers are; then they resort to using ordinary means, like military maps and geographic charts.

  And strangely, this is also the case inside the ordinary brain: it, too, lacks any real sense of where it is. To be sure, most modern, educated people know that thoughts proceed inside the head—but that is something which no brain knows until it’s told. In fact, without the help of education, a human brain has no idea that any such things as brains exist. Perhaps we tend to place the seat of thought behind the face, because that’s where so many sense-organs are located. And even that impression is somewhat wrong: for example, the brain-centers for vision are far away from the eyes, away in the very back of the head, where no unaided brain would ever expect them to be.

  In any case, the point is that the icons in True Names are not designed to represent the truth—that is, the truth of how the designated object, or program, works; that just is not an icon’s job. An icon’s purpose is, instead, to represent a way an object or a program can be used. And, since the idea of a use is in the user’s mind—and not connected to the thing it represents—the form and figure of the icon must be suited to the symbols that the users have acquired in their own development. That is, it has to be connected to whatever mental processes are already one’s most fluent and expressive tools for expressing intentions. And that’s why Roger represents his watcher the way his mind has learned to represent a frog.

  This principle, of choosing symbols and icons which express the functions of entities—or rather, their users’ intended attitudes toward them—was already second nature to the designers of the earliest fast-interaction computer systems, namely, the early computer games which were, as Vernor Vinge says, the ancestors of the Other Plane in which the novel’s main activities are set. In the 1970’s the meaningful-icon idea was developed for personal computers by Alan Kay’s research group at Xerox, but it was only in the early 1980’s, after further work by Steven Jobs’ research group at Apple Computer, that this concept entered the mainstream of the computer revolution, in the body of the Macintosh computer.

  Over the same period, there have also been less-publicized attempts to develop iconic ways to represent, not what the programs do, but how they work. This would be of great value in the different enterprise of making it easier for programmers to make new programs from old ones. Such attempts have been less successful, on the whole, perhaps because one is forced to delve too far inside the lower-level details of how the programs work. But such difficulties are too transient to interfere with Vinge’s vision, for there is evidence that he regards today’s ways of programming—which use stiff, formal, inexpressive languages—as but an early stage of how great programs will be made in the future.

  Surely the days of programming, as we know it, are numbered. We will not much longer construct large computer systems by using meticulous but conceptually impoverished procedural specifications. Instead, we’ll express our intentions about what should be done, in terms, or gestures, or examples, at least as resourceful as our ordinary, everyday methods for expressing our wishes and convictions. Then these expressions will be submitted to immense, intelligent, intention-understanding programs which will themselves construct the actual, new programs. We shall no longer be burdened with the need to understand all the smaller details of how computer codes work. All of that will be left to those great utility programs, which will perform the arduous tasks of applying, once and for all, what we have embodied in them of what we know about the arts of lower-level programming. Then, once we learn better ways to tell computers what we want them to get done, we will be able to return to the more familiar realm of expressing our own wants and needs. For, in the end, no user really cares about how a program works, but only about what it does—in the sense of the intelligible effects it has on other things with which the user is concerned.

  In order for that to happen, though, we will have to invent and learn to use new technologies for “expressing intentions”. To do this, we will have to break away from our old, though still evolving, programming languages, which are useful only for describing processes. And this may be much harder than it sounds. For, it is easy enough to say that all we want to do is but to specify what we want to happen, using more familiar modes of expression. But this brings with it some very serious risks.

  The first risk is that this exposes us to the consequences of self-deception. It is always tempting to say to oneself, when writing a program, or writing an essay, or, for that matter, doing almost anything, that “I know what I would want, but I can’t quite express it clearly enough”. However, that concept itself reflects a too-simplistic self-image, which portrays one’s own self as existing, somewhere in the heart of one’s mind (so to speak), in the form of a pure, uncomplicated entity which has pure and unmixed wishes, intentions, and goals. This pre-Freudian image serves to excuse our frequent appearances of ambivalence; we convince ourselves that clarifying our intentions is a mere matter of straightening-out the input-output channels between our inner and outer selves. The trouble is, we simply aren’t made that way, no matter how we may wish we were.

  We incur another risk whenever we try to escape the responsibility of understanding how our wishes will be realized. It is always dangerous to leave much choice of means to any servants we may choose—no matter whether we program them or not. For, the larger the range of choice of methods they may use, to gain for us the ends we think we seek, the more we expose ourselves to possible accidents. We may not realize, perhaps until it is too late to turn back, that our goals were misinterpreted, perhaps even maliciously, as in such classic tales of fate as Faust, the Sorcerer’s Apprentice, or The Monkey’s Paw (by W. W. Jacobs).

  The ultimate risk, though, comes when we greedy, lazy, master-minds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful, by using learning and self-evolution methods which augment and enhance their own capabilities. It will be tempting to do this, not just for the gain in power, but just to decrease our own human effort in the consideration and formulation of our own desires. If some genie offered you three wishes, would not your first one be, “Tell me, please, what is it that I want the most!” The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of ours, perhaps the well-meaning purpose of protecting us from ourselves, as in With Folded Hands, by Jack Williamson, or to protect us from an unsuspected enemy, as in Colossus by D. F. Jones, or because, like Arthur C. Clarke’s HAL, the machine we have built considers us inadequate to the mission we ourselves have proposed, or, as in the case of Vernor Vinge’s own Mailman, who teletypes its messages because it cannot spare the time to don disguises of dissimulated flesh, simply because the new machine has motives of its very own.

  Now, what about the last and finally dangerous question which is asked toward True Names’ end? Are those final scenes really possible, in which a human user starts to build itself a second, larger Self inside the machine? Is anything like that conceivable? And if it were, then would those simulated computer-people be in any sense the same as their human models before them; would they be genuine extensions of those real people? Or would they merely be new, artificial, person-things which resemble their originals only through some sort of structural coincidence? What if the aging Erythrina’s simulation, unthinkably enhanced, is permitted to live on inside her new residence, more luxurious than Providence? What if we also suppose that she, once there, will be still inclined to share it with Roger—since no sequel should be devoid of romance—and that those two tremendous entities will love one another? Still, one must inquire, what would those super-beings share with those whom they were based upon? To answer that, we have to think more carefully about what those individuals were before. But, since these aren’t real characters, but only figments of an author’s mind, we’d better ask, instead, about the nature of our selves.

  Now, once we start to ask about our selves, we’ll have to ask how these, too, work—and this is what I see as the cream of the jest because, it seems to me, that inside every normal person’s mind is, indeed, a certain portion, which we call the Self—but it, too, uses symbols and representations very much like the magic spells used by those players of the Inner World to work their wishes from their terminals. To explain this theory about the working of human consciousness, I’ll have to compress some of the arguments from “The Society of Mind”, my forthcoming book. In several ways, my image of what happens in the human mind resembles Vinge’s image of how the players of the Other Plane have linked themselves into their networks of computing machines—by using superficial symbol-signs to control a host of systems which we do not fully understand.

  Everybody knows that we humans understand far less about the insides of our minds than what we know about the world outside. We know how ordinary objects work, but nothing of the great computers in our brains. Isn’t it amazing we can think, not knowing what it means to think? Isn’t it bizarre that we can get ideas, yet not be able to explain what ideas are? Isn’t it strange how often we can better understand our friends than ourselves?

  Consider again, how, when you drive, you guide the immense momentum of a car, not knowing how its engine works, or how its steering wheel directs the vehicle toward left or right. Yet, when one comes to think of it, don’t we drive our bodies the same way? You simply set yourself to go in a certain direction and, so far as conscious thought is concerned, it’s just like turning a mental steering wheel. All you are aware of is some general intention—It’s time to go: where is the door?—and all the rest takes care of itself. But did you ever consider the complicated processes involved in such an ordinary act as, when you walk, changing the direction you’re going in? It is not just a matter of, say, taking a larger or smaller step on one side, the way one changes course when rowing a boat. If that were all you did, when walking, you would tip over and fall toward the outside of the turn.

  Try this experiment: watch yourself carefully while turning—and you’ll notice that, before you start the turn, you tip yourself in advance; this makes you start to fall toward the inside of the turn; then, when you catch yourself on the next step, you end up moving in a different direction. When we examine that more closely, it all turns out to be dreadfully complicated: hundreds of interconnected muscles, bones, and joints are all controlled simultaneously, by interacting programs which locomotion-scientists still barely comprehend. Yet all your conscious mind need do, or say, or think, is Go that way!—assuming that it makes sense to speak of the conscious mind as thinking anything at all. So far as one can see, we guide the vast machines inside ourselves, not by using technical and insightful schemes based on knowing how the underlying mechanisms work, but by tokens, signs, and symbols which are entirely as fanciful as those of Vinge’s sorcery. It even makes one wonder if it’s fair for us to gain our ends by casting spells upon our helpless hordes of mental under-thralls.

  Now, if we take this only one more step, we see that, just as we walk without thinking, we also think without thinking! That is, we just as casually exploit the agencies which carry out our mental work. Suppose you have a hard problem. You think about it for a while; then after a time you find a solution. Perhaps the answer comes to you suddenly; you get an idea and say, “Aha, I’ve got it. I’ll do such and such.” But then, were someone to ask how you did it, how you found the solution, you simply would not know how to reply. People usually are able to say only things like this:

  “I suddenly realized…”

  “I just got this idea…”

  “It occurred to me that…”

  If we really knew how our minds work, we wouldn’t so often act on motives which we don’t suspect, nor would we have such varied theories in Psychology. Why, when we’re asked how people come upon their good ideas, are we reduced to superficial reproductive metaphors, to talk about “conceiving” or “gestating”, or even “giving birth” to thoughts? We even speak of “ruminating” or “digesting”—as though the mind were anywhere but in the head. If we could see inside our minds we’d surely say more useful things than “Wait. I’m thinking.”