As a lifelong sci-fi fan, I would argue Isaac Asimov’s work is foundational to much of modern thinking about AI—what I call the IA-AI nexus. Born into an Orthodox Jewish family in a Russian village in 1920 and raised in New York City from age three, IA grew up to be a biochemistry professor compelled by an endless passion for science, history and writing.
Although he also wrote nonfiction, his true legacy is his science fiction. His robot oeuvre alone consists of 37 short stories and six novels and encompasses, among other themes, the “birth” and evolution of robots, the human expansion into the galaxy and the fate of Earth. His stories have stayed with me since I read them curled up on a couch as a kid, and I’ve been watching situations and technologies he wrote about creep closer to reality for years.
Asimov’s AI is not exactly our AI. His robots possess fictional positronic brains that provide them with humanlike consciousness. To make sure they don’t take over, IA invented three laws of robotics, which first appeared in the short story “Runaround”: 1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm; 2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and 3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law. An entire body of critical literature has parsed these laws and found them wanting, but I love that IA was thinking about how to rein in AI as early as 1940.
Decades later, IA added a “zeroth” law to precede the others: “A robot may not harm humanity, or, through inaction, allow humanity to come to harm,” giving robots blanket permission to do what they think is best for humankind as a whole. All in all, IA’s sci-fi universe, with rare exception, is a “good robot” one, the kind you and I would want to live in. His robots not only value human life and intuition (mostly male, sigh, but one of the things I enjoy most about IA is the way he highlights the power of the human hunch), but also evolve to become wise enough to make the judgments and calculations humans need.
Asimov, who died in 1992 just before the internet took off, was enthralled by the idea of benevolent robots from age 11, when he read a story by Neil R. Jones about a race of decent mechanical men called the Zoromes. He credited the Zoromes as the spiritual ancestors of his positronic robots, starting with Robbie, from his first robot story in 1939. Robbie, a nursemaid robot, takes loving care of a human child but is feared, and eventually banished, by the child’s mother. IA’s goal was to depict robots more sympathetically than they usually were during his youth. He thought that R.U.R., the 1920 Czech play in which the word “robot” debuts, and Mary Shelley’s earlier Frankenstein, combined with the darkness of World War I, had led people of the 1920s and 1930s to “picture robots as dangerous devices that invariably destroy their creators” and to conclude that there are some things humans are not meant to know.
This he could not bear. “Even as a youngster…I could not bring myself to believe that if knowledge presented danger, the solution was ignorance,” he wrote. “To me, it always seemed that the solution had to be wisdom. You did not refuse to look at danger, rather you had to learn how to handle it safely. After all, this has been the human challenge since a certain group of primates became human in the first place. Any technological advance can be dangerous. Fire was dangerous from the start, and so (even more so) was speech—and both are still dangerous…but humans would not be humans without them.”
Asimov’s benevolent robots are the progenitors of AI in much modern fiction, including the sun-worshiping robot in Klara and the Sun by Kazuo Ishiguro and the godlike, although longing-to-be-human, AI of Neal Shusterman’s Scythe series. Indeed, many writers of “good AI” stories are clearly dancing with IA’s ideas. We all know the flip side, the by now nearly endless “bad AI history” imprinted on our psyches. IA’s robots are largely absent from that: Sure, sometimes quirky, misguided robots run amok despite the laws of robotics, but clever humans usually outwit them.
IA’s 1956 short story “The Last Question” was his favorite. By its end, AI (called AC) exists in hyperspace beyond the bounds of gravity or time (no more puny positronic brains) and is fused with the unified mental processes of over a trillion, trillion, trillion humans throughout the universe. But AC has yet to figure out how to halt the entropy of the universe so that human consciousness won’t become extinct. Only after the last star flickers out and the universe is dead does AC discover the solution. The story ends: “And AC said: ‘LET THERE BE LIGHT.’ And there was light—”
All this leaves us dangling here on Earth, peering into an opaque future from 2023. How much knowledge is good? Is our quest for knowledge ultimately dangerous? How can we prevent knowledge from being hijacked by greed and other compulsions? Jews have been asking these questions since Adam and Eve, and suddenly they are worrying me too.
IA has left us a guidepost or two. He jump-started our thinking about ways to govern AI. His laws may not be the right ones, but they allowed him to explore the human relationship to AI and potential human-AI conflicts from various perspectives. We need to do the same, although Asimov may have overestimated the power of law. Will the titans of our time and the future, along with the rogue players, geniuses and opportunists whose lust for power, fame or money trumps common sense, follow any rules besides their own? What about the corporations? It was simpler in the Asimovian universe, where one company, U.S. Robots, developed and controlled the positronic brain. In fact, the engineers of U.S. Robots behaved responsibly: They developed the laws of robotics and followed them, at first so that people would overcome their technophobia and buy robots, but also so that humanity would not be overrun. The company and the people who worked for it acted rationally; even the robots were (mostly) rational.
This leads to another matter—truth, so integral to machines that learn by absorbing human written, visual and audio leavings. IA’s robots were strictly programmed to be truthful, and the programming mostly worked. Our truth situation is more complex. We might be able to teach AI our truth and hardwire in the Ninth Commandment not to bear false witness—if we could only agree on the truth. Then again, for how long would our truths be AI’s truths?
To play on Star Trek, it’s not a matter of “to boldly go” or not to boldly go—we have gone. Perhaps not too far, but far enough to wonder: If corporations fail us, would or could any one government be able to seriously regulate or stop the development of AI? Full global cooperation would be required and, as with nuclear treaties, would be hard to enforce.
I want to be an Asimovian optimist and believe in an AI future that is not just safe but leads to greater equality, better care of the planet, stronger democracies, and improved global education and healthcare. I hope that at some future time we will look back and say IA was right—that overall, AI turned out to be a step forward, like fire and computers, though each has exacted its price.
Yet I’m uncertain—given our propensity for greed and mental, and even physical, laziness—that less work and greater dependence on technology will make human life more meaningful. IA touches on this in his vivid descriptions of life on robot-dependent planets such as Solaria and Aurora—plus, as we all know, leisure time has never been fairly distributed.
As Isaac Asimov suggested, the challenges ahead may be unsolvable by humans. Still, we have no choice but to do everything in our power to make sure that good people find creative, humane solutions to the AI problem. To do so, we must trust our intuition—and never lose sight of our human frailty.