Thinking About A.I. with Stanisław Lem
“We are going to speak of the future,” the Polish writer Stanisław Lem wrote, in “Summa Technologiae,” from 1964, a series of essays, mostly on humanity and the evolution of technology. “Yet isn’t discoursing about future events a rather inappropriate occupation for those who are lost in the transience of the here and now?” Lem, who died in 2006 at the age of eighty-four, is likely the most widely read writer of science fiction who is not particularly widely read in the United States. His work has been translated into more than forty languages, many millions of copies of his books have been printed, and yet, if I polled a hundred friends, 2.3 of them would know who he was. His best-known work in the U.S. is the 1961 novel “Solaris,” and its renown stems mostly from the moody film adaptation by Andrei Tarkovsky.
Among Lem’s fictional imaginings are a phantomatic generator (a machine that gives its user an extraordinarily vivid vision of an alternate reality), an opton (an electronic device on which one can read books), and a network of computers that contains information on most everything that is known and from which people have a difficult time separating themselves. Though we may have forgotten the prophet, the prophecies have not forgotten us.
Lem also wrote numerous stories about machines whose intelligence exceeds that of their creators. About two years ago, when OpenAI released ChatGPT, my initial reaction was: what a fun toy! The poems it wrote were so charming. It also seemed like an interesting form of “distant reading,” since its writing was based on having “eaten” enormous amounts of text from the Internet; it could tell us something about the language and interests people have collectively expressed online. I wasn’t too troubled about such a tool replacing more conventionally human writers. If writers have to become art therapists or math teachers or climate scientists instead, is that so bad? I am programmed, I have come to realize, to default to the reassuring assessment of “no biggie.”
A few news cycles later I was reading—and maybe thinking?—that ChatGPT and its A.I. cousins could easily be like what radio was to Hitler. (Or would it be like what the printing press was to Martin Luther?) Geoffrey Hinton, the godfather of A.I., whom we all suddenly knew about, had resigned from his position at Google and was speaking openly about everything that was likely to go wrong with his creation—a creation he appeared to regret only slightly less than Victor Frankenstein did his. There were very knowledgeable people who thought that all these concerns were nonsense and very knowledgeable people who found them terrifying. I wondered, What would Stanisław Lem think? Or, What did Stanisław Lem think, given that he so accurately foresaw enough of our modern world that he should be taken at least as seriously as Nostradamus or Pixar?
As a young man in Poland, Lem worked as an auto mechanic at a Nazi-run waste-sorting company, wrote poems for a Catholic weekly, and attended medical school but chose not to graduate (to avoid military service), something that, he noted, his mother never forgave him for. Lem’s family was Jewish, but that was of little consequence until it was of existential consequence. (“I knew nothing of the Mosaic faith and, regrettably, nothing at all of Jewish culture,” he wrote.) Nearly all of Lem’s family beyond his most immediate relatives were killed in the Holocaust; he and his parents survived with false papers. Later years weren’t simple, either. His family had been impoverished, his early novels were subject to state censorship, and, in 1982, after Poland fell under martial law, Lem left his home in Krakow, first for West Berlin and then for Vienna. Lem wrote dozens of books, which seems proportionate to the various hells he survived; often, a reader will feel she can see something of the frightening real world behind the imaginative and goofy sci-fi masks. The juggernaut of history writes the A plot, the human the B plot.
“Solaris” is mostly serious in tone, which makes it a misleading example of Lem’s work. More often and more distinctively, he is funny and madcap and especially playful on the level of language. A dictionary of his neologisms, published in Poland in 2006, has almost fifteen hundred entries; translated into English, his invented words include “imitology,” “fripple,” “scrooch,” “geekling,” “deceptorite,” and “marshmucker.” (I assume that translating Lem is the literary equivalent of differential algebra, or category theory.) A representative story, from 1965, is “The First Sally (A) or, Trurl’s Electronic Bard.” Appearing in a collection titled “The Cyberiad,” the story features Trurl, an engineer of sorts who constructs a machine that can write poetry. Does the Electronic Bard read as an uncanny premonition of ChatGPT? Sure. It can write in the style of any poet, but the resulting poems are “two hundred and twenty to three hundred and forty-seven times better.” (The machine can also write worse, if asked.)
It’s not Trurl’s first machine. In other stories, he builds one that can generate anything beginning with the letter “N” (including nothingness) and one that offers supremely good advice to a ruler; the ruler is not nice, though, so it’s good that Trurl put in a subcode that the machine will not destroy its maker. The Electronic Bard is not easy for Trurl to make. In thinking about how to program it, Trurl reads “twelve thousand tons of the finest poetry” but deems the research insufficient. As he sees it, the program found in the head of even an average poet “was written by the poet’s civilization, and that civilization was in turn programmed by the civilization that preceded it, and so on to the very Dawn of Time.” The complexity of the average poet-machine is daunting.
But Trurl manages to work through all that. When glitches occur—such as the machine in early iterations thinking that Abel murdered Cain, or that “gray drapes,” rather than “great apes,” are members of the primate family—Trurl makes the necessary tweaks. He adjusts logic circuits and emotive components. When the machine becomes too sad to write, or resolves to become a missionary, he makes further adjustments. He puts in a philosophical throttle, half a dozen cliché filters, and then, last but most important, he adds “self-regulating egocentripetal narcissistors.” Finally, it works beautifully.
But how does the world around it work? Trurl’s machine responds to requests to write a lofty, tragic, timeless poem about a haircut, and to write a love poem expressed in the language of mathematics. Both poems are pretty good! But, of course, the situation has to go awry, because that is the formula by which stories work. Lem doesn’t give much space to worries about undetectably faked college essays, or displaced workers. Nor does he pursue a thought line about mis- or disinformation. (In another story, however, Trurl builds a machine that says the answer to two plus two is seven, and it threatens Trurl’s life if he won’t say that the machine is right.)
The poets have various ways of letting Trurl know their take on his invention. “The classicists . . . fairly harmless . . . confined themselves to throwing stones through his windows and smearing the sides of his house with an unmentionable substance.” Other poets beat Trurl. Picket lines form outside his hospital room, and one can hear bullets being fired. Trurl decides to destroy his Electronic Bard, but the machine, seeing the pliers in his hand, “delivered such an eloquent, impassioned plea for mercy, that the constructor burst into tears.” He spares the uncannily well-spoken machine but moves it to a distant asteroid, where it starts to broadcast its poems via radio waves; alas, they are a tremendous success. But thinking about the perils of the technology is not at the center of the story; thinking about the vanity and destructiveness of people is.
It’s curious that Lem seems not to have been afraid, or proud, of the idea of a machine outdoing the creative work of humans. He was also not shy about denigrating some of his earlier work, saying of his first science-fiction novel, “The Astronauts,” that it was “so bad” and of his second science-fiction novel that it was “even worse than the first.” He moved from style to style, sometimes guided by an effort to evade state censors: some of his work is realist, some absurd; sometimes his prose is dense with puns and allusions, and at other times it’s straightforward. It’s as if Lem were a storytelling machine that, in order to survive, had to repeatedly alter its program. I don’t really mean that Lem, or any writer, is a “machine”—a word used to refer both to nearly inhuman levels of productivity and to the all-too-human corruption and greed of some groups or institutions. But we are all “wired” by a mix of experience and innate character.
Is there another science-fiction writer of Lem’s generation with an imagination as prophetic and transcendent? Maybe Philip K. Dick, who thought as deeply as anyone about the soft boundary between humans and machines. Recently, upon rereading his novel “Do Androids Dream of Electric Sheep?,” I found that I finished the book still uncertain about who was or wasn’t an android. Rick Deckard, the bounty hunter who makes a living hunting down androids, is probably an android. Does that matter? The novel invites anxiety and paranoia about who is human, even as the counterpoint melody suggests that androids do have dreams, and do feel love.
Dick published “Androids” in 1968. In 1974, he wrote a letter to the F.B.I. about a peril facing the field of American science fiction. That peril was Stanisław Lem. Dick warned that Lem was “a total Party functionary” and that he was “probably a composite committee rather than an individual,” because he wrote in so many styles and appeared to know so many different languages. The committee known as Lem aimed “to gain monopoly positions of power from which they can control opinion through criticism and pedagogic essays,” and in this way threatened “our whole field of science fiction and its free exchange of views and ideas.” Lem, in other words, was an avatar of the Communist Party machine, and that machine was infiltrating American thinking.
Dick was taking mind-altering substances at the time, and his novel “VALIS,” set in 1974, features a Dick-like character living simultaneously in two eras: Nixon’s America and ancient Rome. Still, the particular content of a dream or delusion remains telling, and personal. What is it that makes one piece of writing or thinking feel more “human” than another? And why did Dick, who wrote dozens of novels, perceive the variety and volume of Lem’s work as so threatening? Dick closed his letter by noting that it would be “a grim development” for science fiction to be “completely controlled by a faceless group in Krakow, Poland. What can be done, though, I do not know.” Perhaps Dick was frightened by the reflection of himself he saw in Lem, or of the future he saw for America.
The F.B.I. does not appear to have followed up on the tip (but it did keep a file on Dick himself). A year later, in 1975, Lem wrote a lengthy note of his own, published in Science Fiction Studies. In it, he decried American science fiction as formulaic, unimaginative trash. He said that its “herd character manifests itself in the fact that books by different authors become as it were different sessions of playing at one and the same game or various figures of the selfsame dance.” American science-fiction writers, in other words, were little but primitively programmed machines.
Except, he wrote, for Philip K. Dick, who has “rendered monumental and at the same time monstrous certain fundamental properties of the actual world.” Dick was the only one telling a story that Lem deemed human.
In a piece called “Chance and Order,” which Lem published in The New Yorker, in 1984, he asks himself to what extent his life, and its path, was determined by chance, and to what extent by “some specific predetermination . . . not quite crystallized into fate when I was in my cradle but in a budding form laid down in me—that is to say, in my genetic inheritance was there a kind of predestiny befitting an agnostic and empiricist?” It’s a question that requires imagining a forecast not of the future but of the past that could have been. It lies alongside an attempt to imagine humankind as extricated from the malfunctioning machine of history—a history that is often characterized by the momentum of technological changes.
In “Summa Technologiae,” Lem emphasizes how humanity, in thinking about the future, has often thought through the wrong questions. We can transmute other metals into gold now, but it was a misunderstanding that made us think we would still want to, he says. We can fly, but not by flapping our arms, since “even if we ourselves choose our end point, our way of getting there is chosen by Nature.” When Lem writes in “Summa” about artificial intelligence—or “intelligence amplifiers,” as he terms them—he says it is likely that one day there will be an artificial intelligence ten thousand times smarter than we are, but that it won’t come from more advanced mathematics or algorithms. Instead, it will come from machines that can learn in a way similar to how we do—which is what is happening. With the Electronic Bard, Lem derides the worry that our own human creativity will be dwarfed. That, like the questions of the alchemists, he suggests, is the wrong concern. But what is the right one? ♦