Interviews With The Fantastic
InterGalactic Interview With Vernor Vinge
by Darrell Schweitzer
The starting point for this interview is an article called "The Coming Technological Singularity"
which you may quickly find by doing an internet search on Vernor Vinge's name. It was
presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center and the
Ohio Aerospace Institute, March 30-31, 1993. A slightly changed version appeared in the Winter
1993 issue of Whole Earth Review. Otherwise, what you need to know by way of an introduction
is that Vernor Vinge has been publishing science fiction since 1965. One early story of his, "The
Accomplice" (1967), is remarkably prophetic. Not only does it describe desktop computers and
CGI animation, but it also suggests that this could be used to make a movie out of The Lord of the
Rings. His "True Names" (1981) is one of the first stories about cyberspace, hackers, and virtual
reality. He has won Hugo Awards for A Fire Upon the Deep (1992), A Deepness in the Sky
(1999), "Fast Times at Fairmont High" (2002), "The Cookie Monster" (2004) and Rainbows End
(2006). Marooned in Realtime (1986) won the Prometheus Award.
He is a retired professor of mathematics from San Diego State University. At the beginning of
the Wikipedia entry about him, he is quoted from the Singularity article as saying, "Within thirty
years, we will have the technological means to create superhuman intelligence. Shortly after, the
human era will be ended."
SCHWEITZER: I've read your 1993 paper about the Singularity online. The first question that
occurs to me is this: If the future will soon become unforeseeable and unknowable, what is the
science fiction writer to write about?
VINGE: Yes, if there is such strangeness on the near horizon, then much of the classic domain
of science fiction can't be realistically written about as long as writers and readers are human
scale creatures. Both commercially and intellectually, science fiction writers are the first
occupational group to be impacted by the Singularity -- whether or not it actually happens. The
Singularity casts a big shadow back upon anyone thinking about tech and the future. You can see
the impact in hard science fiction from the 1990s onwards. If you're trying to write about events
that would come more than a few decades in the future, you need some kind of explanation, at
least implicit, for why the Singularity did not happen, or if the Singularity did happen, why there
can still be an intelligible story.
This is probably true even for science fiction writers who consider the Singularity to be totally
bogus. Even those writers, for purely commercial reasons, may feel the need to say why the
Singularity never happened. It doesn't have to be explicit; it may be something in the structure of the
story, just enough to satisfy customers who do find the Singularity plausible. On the whole, I
think this limitation may have actually broadened the field somewhat, bringing even more what-ifs on stage. (And of course there is the point that Charles Stross made in a Locus interview a
while back, namely that if the Singularity is on the way, then maybe writers will grow into it.)
SCHWEITZER: If only Philip K. Dick were alive today. He might argue that the Singularity
has already happened, but it has been concealed from us, so we'll never know.
VINGE: Yes. The notion of the "invisible Singularity" comes up in different ways. I'm doing a
story right now where decades have passed uneventfully and Singularitarians are mocked
because they're hanging on to their dignity by claiming that an invisible Singularity has
happened. Unfortunately, it's hard to be dignified pushing a claim like that!
The first time I used the term Singularity with regard to intelligence was in 1982, at an AAAI
conference at Carnegie-Mellon. Afterwards this fellow came up and said, "You know, I think the
Singularity has happened already, and it happened several thousand years ago." He said this was
because "Nation states are superhuman entities and the first nation states are several thousand
years old, so this isn't a new thing after all."
That's an interesting point of view, but I don't think it has the concrete nature of what we're
running up on in our near future.
SCHWEITZER: Before we proceed further, please define for the readers exactly what you
mean by the Singularity.
VINGE: By Singularity, I am talking about the likelihood that in the relatively near future, we
humans, using technology, will either create or become creatures that are superhumanly
intelligent. [For a concretely dated form of the assertion, see the 1993 NASA essay.] I think
there are several different possible paths to this development: classic Artificial Intelligence,
computer enhancement of human intelligence, bio-science enhancement of human intelligence,
networks plus humanity becoming a superhumanly smart ensemble, or the development of all
our distributed embedded systems into a kind of digital Gaia. Whichever path or combination of
paths, the result would be an event with few analogues in the past. One analogue is the rise of
humanity within the animal kingdom. Perhaps another is the Cambrian Explosion.
While the Singularity is a technological event, it's different from previous technological events.
You could explain things like television or the printing press to people from earlier eras. You
could even explain the social consequences of such technological progress -- though you
probably would not be believed. On the other hand, you could not have such a conversation with
a goldfish or even a chimpanzee.
When we talk about what things will be like after the Singularity, we are talking about a world
that is run by creatures significantly smarter than human. Trying to explain that world is a
qualitatively different problem than trying to explain past technological advances to earlier
humans.
SCHWEITZER: Wouldn't an equivalent of this be trying to explain written language to a pre-literate? Among other things, written language is alleged to have changed certain brain
processes, particularly the way people remember. That was how in the old days people could
memorize all of Homer, for example.
VINGE: I don't think the invention of writing would satisfy the "unintelligible future" criterion,
but it is more interesting than most inventions. Terry Pratchett may have made the point that
writing lets the dead talk to us. That is magical.
To me, the most important distinction between us and other animals is not language or tool
making. Our distinguishing virtue is that we humans can externalize cognitive function. Writing
is a great example of that. For instance, there are birds that can remember where they have
cached seeds, and in far greater numbers than I could remember -- unless I had paper and pencil.
Writing is an externalization of memory, not just one's own but all others who have ever written
(Pratchett's point). The invention of the computer is another example.
SCHWEITZER: Isn't painting on the cave wall another one?
VINGE: I think so.
SCHWEITZER: If we have artificially enhanced intelligence in humans, or human-computer
interfaces, or anything like that, aren't these advantages only for the rich? You and I may be
talking about this, but there are illiterate stone-age farmers in Borneo or someplace like that.
They don't have electricity or running water. They've never heard of a computer. How does this
concern them? Aren't these enhancements only going to be for a small segment of the total
population?
VINGE: The general issue of whether technology, and especially high-tech computer technology, is a tool
of social division, for the enhancement of rich people to the detriment of poor people, is a very
serious question. Leaving aside the catastrophic possibilities such as general nuclear war, I
believe that technology in general and computer technology in particular are our best hope for
improving the condition of humans. I'll bet that most of the human race lives better now than all
but the richest of 500 years ago -- and that's not counting medical advances. The first instances
of an invention are usually very expensive. Hopefully the rich people do get their hands on them
and incite constructive envy in those not quite so rich. Over and over, this process has driven the
price of new gadgets down to very low levels.
Just in this decade, we've had one of the most spectacular and beautiful examples of this -- the
world-wide explosion of cell phone availability. Googling world cell phone population, I see a
ton of extraordinary numbers, including a claim from a March 2009 UN report that six out of
ten humans on Earth have a cell phone subscription. Even if that's only counting adult
humans, it is still extraordinary.
SCHWEITZER: I think what they're doing in the Third World is just bypassing the need to put
in land lines.
VINGE: It is that certainly, but I think it is much more. The cell phone revolution is
empowering some of the poorest people in the world. For instance, it gives farmers the ability to
get market intelligence about things that are happening two or three days' walk away.
Where/when smarter phones are available, they can give village medical people access to
expertise and diagnostics that may be thousands of miles away.
SCHWEITZER: If enhancements get to the point where they make some people significantly
smarter -- not just a matter of better tools, but a significant increase in intelligence -- won't this
create, for the first time in history, a superior race? Won't the people who have greater abilities
inherently take over and lord it over their inferiors?
VINGE: Alas, that's one ugly possibility. In part it depends on how much payoff there is for a
few smart persons compared to what those same people could have as part of a much larger
community of smart persons. If the first ones figure they are in a positive-sum game, then they
would probably be inclusive. Personally, I think that -- besides the natural risks inherent in
power tools -- the rise of technology has brought an overwhelming affirmation that we are in a
positive-sum game. If it's perceived as a zero- or negative-sum game, then things could be
very bad. If the first to be smart are willing to use force to prevent other players from getting
super-intelligence, then this seems much like the classic "AIs enslave us" scenario.
I have friends who would prefer to have a pure AI rather than Intelligence Amplification of
humans. They point out that we're carrying fifty million years of evolutionary bloody baggage in
the back of our heads and so we just can't be trusted to the extent you might trust a machine that
doesn't have that instinctual killer inclination. I have one friend who makes this argument and
then taps his chest and says, "In fact, there is only one person I would trust to undertake this
responsibility." So if one figures that the threat in your question is convincing, that the first
enhanced humans are going to become gods and reduce us all to serfs -- then one is reasonably
nervous about Intelligence Amplification.
SCHWEITZER: It might not be a matter of active malevolence, but of a lot of people being left
behind. The nation or social group which enhances itself first will inevitably win. If we can
make ourselves ten times smarter, immune to most diseases, and much stronger, then the people
who have not become supermen are much less employable. Let us throw into this the idea that
the enhanced people can interact with machines in a way that unaltered people cannot. The
science fiction precedent I see for this is Asimov's division, in his robot novels, of the human
race into short-lived people and the long-lived Spacers, who are almost two different species.
Couldn't we see the human race divided into two species, those who are enhanced and those who
are left behind?
VINGE: Actually, I think that will happen if there is enhancement without coercion. A certain
percentage of people will reject the technology. There are all sorts of reasons for a person to
reject it. I think some of the reasons might be sound. And just from the viewpoint of hedging
humanity's risks, I'd hope there would be folks who would not opt in.
The mellow version of this could be the scenario where some people say, "Hell, no! I've got
legal title to my property, and I've paid off my mortgage. My I.Q. is 130, and I'm satisfied. So
you guys that are smarter, I don't want to buy into what you have. I'll just stay where I am, and
you will have to find somewhere else to play your game."
There are a number of interesting variations here: One is where the stay-behinds are not allowed to
opt-in later, and come to feel seriously downtrodden. Another might be where the stay-behinds
are doing better than any prior civilization in human history, but from the viewpoint of post-humans, those stay-behinds are living in squalor.
SCHWEITZER: So some people opt out, and they're still the equivalent of stone-age farmers
plowing a field behind a mule. Everybody else goes high-tech. The inferiors just get ignored.
VINGE: And "ignored" could mean several things. Of the benign variants: I think it's possible
that the post-humans wouldn't consider Earth the most profitable real estate. If so, the stay-behinds might be literally so, perhaps protected as an emergency backup in case of some post-human catastrophe. (I'll bet sf readers can think of several stories of this sort! When it comes to
scenarios, science-fiction has outposts scattered all over the map. That's a strength, but also the
reason why the field should not be considered prophetic.) Another variant might be where the
post-humans haven't all gone away. It might involve an intermediate level of entrepreneurs who
would provide access to design studios that are run by smart people.
SCHWEITZER: You realize, of course, that a good deal of the population of the world has no
idea what you are talking about, because they have never seen a computer.
VINGE: I think your main point is correct. Today. But the last ten years have changed the part
about having computers -- cell phones! The fact that this worldwide diffusion could happen in
just ten years is to me an astonishing fact, and gives some reason to believe that these issues are
not going to remain the obscure obsession of a small minority.
SCHWEITZER: They're also used with those previously mentioned killer instincts to set off
remote-controlled bombs. Maybe nothing has really changed since the discovery of fire, or
before.
VINGE: Yes indeed. Will our creativity kill us before it saves us? I think we have a good chance
at success, but we have only one example to look at.
SCHWEITZER: Can't we argue that the first step in this future evolution of mankind is science
fiction itself, in the sense that you and I are using a science-fictional method of thinking even to
have this discussion?
VINGE: Yes. It seems to me that science fiction plays for the body politic the role that dreaming
does for the individual. Most dreams are nonsense. Some dreams clue you in to things that you
should be worrying about, like, "My God, if I forget to pay that bill, there could be some really
bad consequences." In the later Twentieth Century, the notion of thinking out scenarios, which
we have been doing in science fiction since forever, began to be a serious bureaucratic planning
tool. I think it is often superior to using forecasts and trend-lines. In fact, no one knows what the
future is going to be like. Earlier in this interview you were talking about dystopian possibilities
related to intelligence amplification. It's important for people to work through all sorts of
scenarios like this. Science fiction is a relatively light-hearted way of doing this.
When it's done more seriously (as in companies such as GBN) it can be part of an overall
planning strategy: You come up with a scenario and then work backwards from that, thinking,
"If that's really how things turned out, what would be the symptoms that we would see as we fall
into that scenario?" Do that reasoning with a number of contrasting scenarios. Then you have
these families of symptoms for different sorts of outcomes. To me, this is a far more effective
way of dealing with uncertainty than simple forecasts. As time goes on, you can watch for
symptoms on your various lists. You can say, well, if such-and-such happens, that makes it more
likely that we are in Scenario A or Scenario C, in which case it would be good to spend some
money on such-and-such. Of course, this also feeds back into further scenario generation.
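[A minimal sketch of the scenario bookkeeping Vinge describes, rendered in Python. Nothing below comes from the interview itself: the scenarios, symptoms, and probability numbers are all invented for illustration. The point is only that each observed symptom shifts one's belief toward the scenarios on whose symptom lists it appears:

    # Hypothetical illustration: Bayesian bookkeeping over contrasting scenarios.
    # Every scenario, symptom, and number here is invented for the example.

    SCENARIOS = ["A: strong AI", "B: intelligence amplification", "C: no Singularity"]

    # LIKELIHOOD[symptom][i] = assumed probability of seeing the symptom
    # if scenario i is the one we are actually falling into.
    LIKELIHOOD = {
        "cheap neural interfaces":      [0.3, 0.8, 0.1],
        "AI passes professional exams": [0.9, 0.5, 0.2],
        "hardware progress stalls":     [0.1, 0.2, 0.7],
    }

    def update(belief, symptom):
        # Bayes' rule: P(scenario | symptom) is proportional to
        # P(symptom | scenario) * P(scenario).
        posterior = [b * l for b, l in zip(belief, LIKELIHOOD[symptom])]
        total = sum(posterior)
        return [p / total for p in posterior]

    belief = [1.0 / len(SCENARIOS)] * len(SCENARIOS)  # start undecided
    for observed in ["AI passes professional exams", "cheap neural interfaces"]:
        belief = update(belief, observed)

    for name, prob in zip(SCENARIOS, belief):
        print(f"{name}: {prob:.2f}")

Run as written, the two observed symptoms leave Scenario B the most likely; observing "hardware progress stalls" instead would shift the weight toward Scenario C.]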
SCHWEITZER: Does this then give the science fiction writer a certain responsibility to be
realistic?
VINGE: I think that a story should be true to itself, but that's a much broader requirement than
what most people mean by "realism."
SCHWEITZER: You're talking about science fiction as a kind of dreaming. A lot of science
fiction makes no attempt to be realistic. Think of van Vogt's The World of Null-A. It was
supposed to be a big revelation, but there is nothing realistic in it at all.
VINGE: I haven't read The World of Null-A in a long time, so I shouldn't try to comment on it
specifically. I have noticed that there are so many different categories of quality that a story can
be an abject failure in some ways and still be a treasure for what it does in others.
SCHWEITZER: Aren't we describing only a specific type of science fiction, if the subject of
realism comes up?
VINGE: The dreaming metaphor probably covers most types of science fiction (and the superset
of sf, fantasy). It's not reasonable to have a preset definition of serious science fiction. And
taking science fiction writers as a whole, there is no reason why they have to take themselves
seriously. Now, the people who read their stories get various things out of them. I read
some people's stories for reasons that may be fairly serious, but for many, the crazier the
better, if the author can handle it cleverly and/or with emotional impact.
Now it's true that the kind of sf writer who is invited to planning meetings is normally of the
more realistic variety. But their value may be as loose cannons, crazy enough to mix things up without turning the boat over. And even the non-sf scenario-based planners can be affected by
oddball stories. An emotionally evocative story can put your head in a different place, and
suddenly facts have new connections.
SCHWEITZER: I am thinking of the Balonium Factor. Balonium is that substance or field
effect which suspends the laws of the universe as needed for the plot. Doesn't any given science
fiction story contain something which is probably impossible? If Wells had known how to build a time
machine, he wouldn't have written a story about it. He would have done so, and collected his
Nobel Prize. What he was actually doing was pretending that someone had discovered a new
scientific principle which made this possible. So, in a science fiction story, how much
extrapolative realism is required, and how much balonium is desirable?
VINGE: [Laughs] There are different grades of balonium. At one extreme, there are stories
describing inventions that may well be patentable. At the other extreme, there is fantasy
relabeled. I think that it's unwise to say, "beyond this point, the balonium is pure." Sometimes
you read something that seems like pure balonium, but if you look at the story in the right way,
you say, "Oh, geez, this situation could be caused by X," where X has nothing to do with the
balonium. The area of computers may be the most fertile source of such surprises, because there
are many crazy and magical things you can do if you have a proper distributed system setup. For
science fiction, it's good that we don't have any predefined proper balance between realism and
balonium, and that it remains a matter of taste and saleability.
SCHWEITZER: The idea that what you can imagine might actually become possible must have
itself only been possible when people became aware that the past was different from the present.
VINGE: Yes! This and your point about Wells's time machine could be part of a framework
describing much of human progress. Certainly, if we really knew how to make some bit of future
super-science, we'd do it now and it wouldn't be future super-science anymore. In fact, even if
we don't know the details, just knowing that something desirable is possible is enough to prod
some humans into the invention. And then there is the most extreme case, where merely imagining
that something is desirable is enough to prompt the invention.
In the millennia of pre-history, I imagine that the idea of human progress was a rare notion --
and that was an important reason why change was so slow! The acceleration of progress through
the Renaissance was partly due to "compound interest" growth of knowledge, but also to the
notion that progress was possible. One might argue that part of the acceleration during the
Industrial Revolution was due to smart people consciously focusing on the importance of
invention.
Somewhere I read that in the late 1940s, it was not detailed engineering data that made the big
difference for Soviet nuclear espionage. The most important things the spies delivered were
statements such as, "Yes, this approach works. You can do it this way." That by itself was
enough to point their physicists to success. Now in the first part of the twenty-first century,
humanity is toying with the more dubious extremes of this progression, wondering where the
balance lies between the wish and the fact.
SCHWEITZER: But, for example, Lucian of Samosata could imagine going to the Moon. He
made a joke out of it. He wrote something like a Douglas Adams story. But for thousands of
years thereafter there was no feeling that, merely because you could imagine going to the Moon,
it would one day be possible. Maybe it was somewhere around the 18th century that this began to
change. People had imagined flying for a very long time, but only about then did more than a
very few people realize that they actually could. There was Leonardo back around 1500, but he
was way ahead of his time and pretty much alone.
VINGE: Yes, the wish and the fact can be very far apart -- or near enough to be extremely
frustrating. I believe Benjamin Franklin once speculated that within two hundred years or so, we
would have prolongevity. Too bad for Ben. Maybe too bad for you and me.
End of Part One. Part Two will appear in IGMS issue 16.