Jul 27, 2023

It’s the question of the moment. 

The launch of the first really convincing artificial intelligence chatbot last November sparked widespread excitement and fear. Suddenly, people were demanding that their computers produce improbable verbal artifacts—a sonnet about Mars exploration in the style of Milton, a bio of themselves with an annotated bibliography, an essay on happiness among penguins, a Talmudic exegesis on climate change—and then brandishing the sometimes uncanny results. In fact, more than 100 million people have already experimented with ChatGPT (short for Chat Generative Pre-Trained Transformer) and similar “large language model” bots to see what the technology can do for them. Chatbots are just one of many AI applications up and running, and new ones are coming, and coming fast. These new advances are layered upon previous ones: Aspects of artificial intelligence have been around for decades in the form of search engines, social media algorithms, facial recognition, voice processors and bots of all kinds, all made possible by great leaps in computer programming and processing. These technologies have become an invisible part of our daily lives.

What does it all mean? Where will it take us? Is AI good for humanity? We believe it is essential to wrestle with this biggest of “Big Questions.” AI is the brainchild of cognitive scientists, computer programmers, physicists, futurists and robotics engineers; it raises problems that concern politicians, journalists, psychologists, linguists and philosophers; and it’s alive in the imaginations of artists, cartoonists, novelists and, of course, science fiction writers. So, we’ve asked people from all these fields to tell us what excites them, what terrifies them and how best to approach this astounding moment in human history. Beyond the dangers and the possibilities they describe lurks the biggest question of all: In an era of thinking machines, what does it mean to be human? Though not everyone approaches this question using a Jewish framework, its essence is about as “Jewish” as it gets.

WITH Nick Bostrom, Miriam Vogel, Craig Newmark, David Zvi Kalman, Viorica Marian, Etgar Keret, Scott Rosenberg, Vivienne Ming, Lavie Tidhar, Helen Greiner, Judea Pearl, Bob Mankoff & Jane Metcalfe

Interviews by
Jennifer Bardi, Diane M. Bolz, Sarah Breger, Nadine Epstein, Jacob Forman, Noah Phillips, Amy E. Schwartz & Laurence Wolff

All images have been stylized using the AI image generator NightCafe

NICK BOSTROM is a Swedish philosopher and professor at Oxford University, where he also heads the Future of Humanity Institute. The focus of his research includes AI risk and human enhancement. His latest book is Superintelligence.

What will AI mean for humanity? That’s the $64,000 question, isn’t it? I believe that superintelligence could bring either very bad outcomes or extremely good ones for humanity. In fact, I think these extreme outcomes are actually more likely than the mixed bag one might intuitively expect. If you roll a ball along a narrow beam, it might be hard to predict which side it’ll fall off, but eventually it’ll fall to one side or the other, right? The transition to the machine superintelligence era is a fundamental transformation of intelligent life on Earth. It’s different in kind from the new gadgets and upgrades we have every year. You could compare it to the Industrial Revolution, but that might even underplay it. Maybe compare it to the rise of the human species in the first place. It could be the last invention that we humans will ever need to make; the superintelligence would be doing the rest of the inventing much better than we can.

Nick Bostrom. Photo credit: Future of Humanity Institute (CC BY 4.0)

The potential challenges can be broadly grouped into three categories. First, we have the alignment problem: the risk that AIs, if they become superintelligent, will also become very powerful, seize control from humans, and harm or even kill us all. Recently there’s been a surprising amount of attention from policymakers and regulators and even some leaders of key AI labs advocating for oversight and regulation to a greater degree than one would have expected at this stage of development. One of the big questions in AI alignment research is whether it would be possible to shape either the architecture or the training regime such that you would actually create an AI that was robustly honest or, alternatively, develop mechanistic interpretability tools so that whatever the AI said, we could look inside its brain and see what it was thinking. So far we don’t have rigorous methods for either of those.

A second bucket of challenges is what you might call the governance challenge: Assuming we manage to control the AIs, how do we humans use them? How can we make sure that by and large they are used for good purposes rather than, say, to oppress one another or wage war against one another?

And then there’s a third challenge, which is the possibility that we might do bad things to the AIs. This still seems a bit fringe at the moment, but it matters because as these digital minds become increasingly sophisticated, they might attain properties that would give them moral status. We need to think about how we can make sure that the future is good for them. Maybe in the future most minds will be digital—some might have a very low level of moral status, comparable to a spider’s. Others might be like dogs, and others might be superhuman. At that level of analysis, it might be that moral status is not a unidimensional thing.

Despite the challenges, I would be wary of scenarios that could lead to a permanent shutdown of AI research. Even a temporary pause might ossify into some rigid structure with surveillance technology and brain control technologies that would prevent any revolutions or cultural turnover.

MIRIAM VOGEL is the president and CEO of EqualAI, a nonprofit created to reduce unconscious bias in AI and promote responsible governance. She cohosts the podcast In AI We Trust? and is the chair of the National AI Advisory Committee advising the White House on AI policy.

Though artificial intelligence is perceived to be a new technology, the term was coined in 1956 by computer and cognitive scientist John McCarthy. Then defined as “the science and engineering of making intelligent machines,” it seemed to describe an unattainable aspiration, but it has since burgeoned into a multi-billion-dollar global reality.

Miriam Vogel. Photo credit: World Economic Forum (CC BY-NC-SA 2.0)

AI refers to the development of computer systems capable of performing tasks that typically require human intelligence. It involves the creation of algorithms and models that enable machines to analyze data, learn from it, make decisions, and perform actions that mimic human cognitive functions such as problem-solving, reasoning and learning. The concept of mimicry is often lost in the current AI frenzy, which tends to anthropomorphize the technology and its capabilities.

AI technology already plays an integral role in our daily lives, from GPS navigation to personalized recommendations for our next book, movie or song choice. It has also transformed crucial aspects of healthcare, finance and hiring processes. AI is helping to slow the progression of climate change by identifying sources of carbon dioxide in the environment. It has been used to map human proteins, measure energy usage, monitor factory conditions, predict and mitigate wildfires and provide individualized instruction to students. And we appear to still be in the nascent stages of its capability and impact.

While there is reason to celebrate these advancements, we must also be vigilant. Years of progress can be undermined by a few lines of code. For example, AI can perpetuate and amplify historical biases and discrimination. These challenges and opportunities have become more immediate with the recent emergence of generative AI, which is incorporated into tools such as OpenAI’s ChatGPT, Microsoft’s Bing, Google’s Bard and Anthropic’s Claude. Because this technology predicts the user’s intended outputs, generative AI can create efficiency and spur creativity by producing speeches, tweets and many other forms of first drafts and syntheses. However, it can also generate false and harmful narratives. For example, the Center for Countering Digital Hate found that Bard perpetuated harmful content based on false and incendiary prompts in 78 out of 100 cases without providing context negating the false claims.

This included a 227-word monologue denying the Holocaust, with fabricated “evidence” including a pronouncement that a “photograph of the starving girl in the concentration camp…was actually an actress who was paid to pretend to be starving.”

The convergence of human tendencies and the incredible potential of AI highlights the urgent need for guardrails in the form of regulation and norms to prevent the worst-case scenarios. We all have a role to play in demanding and supporting responsible AI practices, whether as developers, users or observers. Companies building and deploying AI must ensure our safety by using “good AI hygiene,” such as that put forth in a risk management framework created by the National Institute of Standards and Technology, as well as accountability for AI use at the highest levels of leadership.

CRAIG NEWMARK is the founder and former CEO of Craigslist. His philanthropic endeavors have focused on cybersecurity, journalism, fighting misinformation and protecting democracy. He recently pledged $100 million to support U.S. military veterans and their families. 

As meanings of the term AI have shifted abruptly of late, I’m going with the flow. Artificial intelligence has to do with any system that exhibits, let’s say, human-like intelligence. Not consciousness, not reasoning so good that it could fool a person, but the ability to come to conclusions on its own.

There is no indication right now that AI systems are conscious. The problem is, we don’t know what consciousness is, and I’d argue that a conscious system can’t describe what consciousness is. Perhaps the only entity that could describe consciousness would be, let’s say, a divinity. That’s purely speculative, although I do think it’s within the Jewish tradition.

The revolution occurring now has to do with large language models that can, using simulated neural networks, digest enormous amounts of human-produced words and generate the kind of answers to questions that humans might provide. And the great progress over the last 20 years, coming in spurts, has to do with smarter and smarter neural networks being simulated in a computer. What’s even better is something called backpropagation, by which neurons can be retrained. They can evolve as they’re being trained. Their internal parameters, numeric quantities, are changed in response to the success or failure of their output. Someone might say, “You’ve got to do better,” and that somehow propagates backwards into the beginning of the neural network.
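
To make that picture concrete, here is a minimal sketch in Python of gradient-style training on a single artificial neuron. It is an editorial illustration only, assuming nothing about how any real chatbot is built: the neuron’s internal parameters are nudged, example by example, in response to how wrong its output was.

```python
# Toy training loop: one artificial neuron learns the OR function.
# Its weights (the "internal parameters, numeric quantities") are adjusted
# in response to the error in its output; the error signal flows backwards
# from the output to the parameters.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # inputs -> target

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

for epoch in range(2000):
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)  # forward pass: the neuron's guess
        error = y - target                      # "you've got to do better"
        w[0] -= lr * error * x1                 # nudge each parameter to shrink the error
        w[1] -= lr * error * x2
        b -= lr * error

print([round(sigmoid(w[0] * a + w[1] * c + b), 2) for (a, c), _ in data])
```

After a few thousand passes the printed outputs sit near 0 for (0, 0) and near 1 for the rest; scaled up by many orders of magnitude, this adjust-in-response-to-error loop is the core of how modern neural networks are trained.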

Craig Newmark. Photo credit: JD Lasica (CC BY 2.0)

What I hear, unconfirmed, is that no one can find backpropagation occurring in a human brain. We use other techniques in our heads, but maybe, just maybe, backpropagation is a much more efficient technique, with which AI might learn how to reason. And maybe out of that, consciousness emerges. That’s purely speculative—or at least that’s what my robotic masters tell me to say.

I do like to think about the origin of consciousness and the bicameral mind—the theory, posited by Princeton psychologist Julian Jaynes in the 1970s, that human cognition was once divided between one part of the brain that appeared to be “speaking” and a second part that listened and obeyed. There was a great short story by Arthur C. Clarke, “Dial F for Frankenstein,” which is about what happens when you connect every phone in the world, creating a computer of much more complexity than has ever been seen, and consciousness emerges. Likewise (and I think I’m stealing this idea from someone else), if you hooked up two learning AIs and allowed them to evolve at computer speeds versus humans-talking-to-them speeds, maybe they would achieve a breakthrough and consciousness would emerge.

Alan Turing’s test said, more or less, that if you could build a system and you couldn’t distinguish between conversation with the system and conversation with a human, then it was intelligent. Perhaps we’ve hit that level. Perhaps we have conscious AIs up and running, which poses really important questions: How do they perceive us? Do they just not care about humanity? And will they wind up accidentally destroying us because we’re just in the way? Because we’re annoying? Maybe they just go off to have fun in some dimensional realm that we barely understand via quantum physics. We don’t know.

Suppose AIs do in fact develop consciousness. This would make them people. Inorganic people, but still people, so they’d deserve a full spectrum of human rights. In terms of Judaism, that might raise issues like a conscious robot wanting to convert. If it identifies as male, what would circumcision look like?

I like the idea of artificial consciousness, but first we have a world to fix. I certainly wouldn’t mind AIs solving the climate crisis, and maybe that’s a doable thing. I’m happy that it looks like they’re on the brink of being good X-ray diagnosticians. On an everyday basis, we have to solve problems like traffic and garbage. Those are really good potential uses of AI.

We need to be careful, and careful means guidelines and possibly regulation. And people should be thinking about autonomous weapons. Let’s suppose you’ve got a drone and you want the drone to accomplish its mission even if radio contact is jammed. You might give the drone enough intelligence to fulfill its mission, or you might employ a robot in the form of a gun on treads. You might set it loose to go ahead and kill the enemy, and if it loses radio contact, you might want it to proceed in its mission. The humanity of such situations is a good question. As is, what could go wrong if one of these weapons, drone or robot, is proceeding autonomously? It might get confused due to poor programming or get hacked. If it was hacked to think that the enemy is us, then we might wind up bombing ourselves. I have a feeling a lot of these conversations are happening and that they’re classified.

There’s another concern. In my Sunday school, Mr. and Mrs. Levin taught us it’s wrong to bear false witness, which is normally translated as, “Thou shalt not lie.” If you’re training an AI using sources you know are all about lying to people, then from my point of view, you may not be terribly consistent with the Ninth Commandment. And I actually do take that very seriously. Now and then I’m in the position where I can have a casual chat with someone from one of the big tech firms, and I’ll basically tell them, “Hey, you don’t want to be responsible for lying to people.” Sometimes I put big money into something and that can help, but these big tech companies have far more money than I could ever come up with. And so, I talk to their people. I encourage them to do what they already know is the right thing, and sometimes there’s hope that can happen.

DAVID ZVI KALMAN is a multimedia artist and director of new media at the Shalom Hartman Institute of North America. His research touches on Jewish law as well as the history and ethics of technology.

Any technology that makes us rethink what it means to be human is going to be discussed in religious terms. For some, artificial intelligence is a savior. For others, it’s a world-ender. These ideas are already part of the AI conversation. Ironically, religious thinkers have played only a minor role in these conversations. That’s too bad, because advanced religious thinking about AI could guide its development and ensure better outcomes for humanity. AI will also profoundly shape religious thought itself.

David Zvi Kalman. Photo credit: Courtesy of David Zvi Kalman

Consider the question of what counts as a person. We’re living at a time when personhood—not just when talking about AI—is being questioned and challenged. It shows up in debates around animals and around fetuses. Judaism has a long tradition of thinking about personhood that can be drawn upon in the debates about AI. This is an area where Judaism potentially has something to say that is relevant for the outside world, that’s relevant for engineers, for policymakers and for the public.

Religious thinkers are just now learning how to talk about these issues. Before ChatGPT brought AI into the public consciousness, the problems that most nonspecialists thought about centered on questions of agency. Can I use AI in my hiring process? Can it decide to launch a missile strike? Can I trust it to perform medical diagnoses if someone’s life is on the line? These are still relevant and important, but the public is now facing a second set of questions involving the use of generative AI in artwork and questions of copyright and originality and automating jobs and the ever-shrinking list of tasks that only humans can do. There’s new interest in questions of perception, too. Is it ever appropriate to pretend that an AI is a person, or to hide that people are involved in AI work?

Of course, it’s comforting to think that your ancient tradition already has something to say about something new, which is why Jewish thinkers have focused on what Judaism can teach us about AI. Questions about what AI does to Judaism have really not been looked at in any kind of detail, because they’re much more uncomfortable. These are questions such as, “What does AI do to our sense of a sacred text?” If you have the capability within a few seconds to take any idea you want and put it in the form of a biblical verse, or a Talmudic pericope, with all the weightiness and sense of sacredness that those registers of writing maintain, then does that format even matter anymore?

Some questions can enrich both Jewish discourse and AI policy. Take theology, for example. Human beings are, within the Jewish tradition, literally an “artificial intelligence” created by God. The first golem—a figure later strongly associated with machinery and computing—is the body of Adam, right before God breathes life into it. In other words, the story of AI is not that different from our own story, except now we’re the ones doing the creating. This parallel is important when we think about questions of AI alignment—basically, how do you make sure that AI is good—and there are many parallels between that and the numerous Jewish conversations around what it means to create a law that is just, because it’s just incredibly hard to use law (or code) to make people (or machines) do the right thing in most circumstances. This matters for Jewish theology and for public policy—a rare combination.

VIORICA MARIAN is a Moldovan-born American psycholinguist, cognitive scientist and psychologist. She is director of the Bilingualism and Psycholinguistics Research Lab at Northwestern University and the author of The Power of Language: How the Codes We Use to Think, Speak, and Live Transform Our Minds.

Artificial intelligence will change the speed of evolution by orders of magnitude. By that, I mean both the evolution of knowledge and the evolution of society. These predictions are based on what I know from studying the mind and language, namely that language and mind are tightly connected and that the natural languages humans use and the artificial languages machines use are similar in many ways. Both are symbolic systems that rely on symbolic codes—words or notations—to represent information and to encode, transmit and decode it, to learn and advance. Symbolic codes make communication possible across time, space, various species and nonorganic intelligence, although AI systems that use carbon-based molecules may be developed in the future.

I am most excited about the possibility of using AI to figure out what consciousness is, what reality is, what happens after the body dies, the nature of the universe. Are we in a virtual reality? Do we live in Plato’s cave? But, more directly related to my area of expertise, what terrifies me about AI is that it has got hold of language and may quickly become better at it than we are. Language can be used to shape opinions, elicit emotions and drive actions. There is no limit to what the right language can make a person do.

Viorica Marian. Photo credit: G. Laurich

There are already many examples, such as a Google engineer risking his job and reputation after the AI he was interacting with convinced him that it was sentient, or the Belgian father of two who committed suicide after weeks of conversations with an AI about climate change. These examples point to the potential havoc AI can wreak on society if large groups of people become convinced to act in unison in ways that threaten humanity. Collective delusions are not new, but AI can proliferate them on an unprecedented scale.

AI is already changing the way we speak. What we think is determined by which neurons are most likely to fire, and which neurons are most likely to fire is often driven by language. The information we take in—what we learn, what we think, what we feel, and what we decide—is increasingly shaped by the algorithms that drive the content we are presented and interact with on our devices. In this way, technology is increasingly shaping our brains and our neural networks, our moods and ability to regulate emotions, our relationships and our assessments of self, others and society. Because this technology is so new in the human evolutionary timetable, we do not yet know how exactly our current mass exposure to it is changing us.

As we move toward increased reliance on artificial languages for communication with machines, between machines and eventually between humans, artificial languages will proliferate, whereas many of the human languages, especially the less spoken ones, may gradually become less and less used. The disappearance of many of the world’s languages may be further accelerated by the fact that only a handful (about 20) provide the large online datasets that make large language models more powerful than the LLMs built on the other 7,000 world languages. The high-resource languages may eventually drive out the lower-resource languages like invasive species. Because language and mind are interconnected, less linguistic diversity may also mean less thought diversity. We don’t want to find ourselves in a situation where an increased uniformity of human thought—from a combination of decreased linguistic diversity and algorithmic curation of information—comes about just as artificial intelligence improves and advances.

ETGAR KERET is an Israeli author of short stories, graphic novels and TV and film scripts whose literary honors include the Sapir Prize for Literature, the National Jewish Book Award and the French Ordre des Arts et des Lettres. Recent work includes the short story collection Fly Already and the newsletter “Alphabet Soup.”

AI is an intelligent way of thinking that is different from humankind. All those sci-fi movies are about going off to find other intelligent life forms, and that’s what this is, basically—a different form. It doesn’t have horns, or fire lasers, but it’s something that is moved by a very sophisticated process that is different from anything we have known so far.

The idea that with AI you can fake everything is interesting from a social perspective. In five or six years, when you Zoom with me, you might say, “How would I know that this is really Etgar?” And then I say, “Ask me questions that only I can answer.” And then you say, “Okay, what’s the name of your grandfather from your mother’s side?” And then the AI tells you the answer because it has Googled it. You could turn on the TV and see on CNN that tragically Joe Biden was assassinated, and then move to Fox News and see him giving a speech. And you won’t have any rational way to decide. It will basically come down to whom you believe. So the effect will be to create a huge difference between communicating with somebody through phone or Zoom versus meeting somebody in person. Which would mean that personal meetings would be the only things that you can trust. Which would mean that we will basically go back to the Dark Ages, since we won’t do deals with people who are not in the room with us, because we don’t know that they really exist.

Etgar Keret. Photo credit: Stephan Röhl (CC BY-SA 2.0)

I was asked to write a short story based on a prompt while the AI answered the same prompt—it was told to deepfake me, in a sense. And it was interesting. Let’s say a friend says, “I know this woman who looks just like you.” And then you look at her and you say, “No, no, she doesn’t look like me but she has my lips, or she has my forehead,” or something. Even if it doesn’t really work, it points to a trait that you have. And you could see that the AI was very, very good at picking the kinds of topics that would be typical for a story of mine—let’s say, a couple is going to a Shabbat dinner at the wife’s mother’s house, somewhere in Tel Aviv or in Ramat Gan—and then merging this real setting with fantastical elements, as I might do. But fundamentally, you felt that there was no intention behind it. When I write, I don’t have an articulated or clear intention, but there is some inner intention that I can recognize in retrospect.

Writing the story taps into some kind of emotional or ethical energy. And with the AI, you felt that it could walk the walk, but bottom line, when you read the story and say, “Okay, so what does it say? How do I feel different about life now that I’ve read it?” you get stuck.

The AI has a function to perform, which is to make the customer happy. I think this reflects the era we’ve come to where entertainment is more central than art, where boycotting an artist is like boycotting Walmart. Making art today has so many rules, it’s as if you are ordering from a menu. I wrote a story with an albino guy in it, and they told me that it’s offensive to call him “the albino.” AI could easily write stuff that won’t get on anybody’s nerves. It will do anything it can to please you and not get canceled. And in an age that is so egocentric and aggressive, the AI is a perfect companion. I can easily imagine people losing their tolerance for communicating with an entity that is not an AI, with a real person who is sometimes annoying.

Like every technology, this one brings with it the possible loss of another way of being competent. Being from an older generation, I only use three emojis with my students: thank you, thumbs-up and a heart. But I end up using them for everything, and the result is a kind of devolution of the language out of laziness. In a few years, people’s ability to formulate an essay will be lost. And if one day our AI died, we would not be able to articulate anything.

AI could fight global warming, it could help with all sorts of problems, but it’s going to expose the meaninglessness of our existence. We will not be able to define our purpose by our ability to contribute to the world, because our contribution won’t be needed. You already see it now, people becoming really religious or radically activist for something—for vaccination, against vaccination, for Republicans, Democrats, whatever. People will do that more and more just to feel that their existence matters—that if they aren’t in the room, someone will notice they’re gone.

Will AI affect religion? Well, a god, by definition, is basically all-knowing and all-wise. So the AI, even if it’s not all-knowing, is more knowing and more wise. And if the institution of religion is always about you outsourcing your dilemmas to the laws, to the rules, then the alternative will be to outsource your dilemmas to AI. People will say, “Should I leave my husband?” “Do you think he deserves to get poisoned?” It will become this kind of rabbi or priest that everybody will go to for affirmation.

SCOTT ROSENBERG is the managing editor of technology at Axios, a cofounder of Salon, and an early participant in The WELL, one of the first virtual communities. His books include Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software.

AI in the 1950s emerged from a group of people at MIT and Stanford who had the idea that all you had to do was create a system of logic that could represent the world and input it into a computer. They thought it might take 10 or 15 years, but they hit a wall. They kept tackling it in the 1970s and 1980s, with the idea that now that we had the storage and processing capacity, we could create a total representation of the world and feed it into a computer and come up with the key to everything. And that would make a computer essentially as smart as a person. That didn’t happen either, and then there was an “AI winter,” when all the industries that had thrown money into developing these ideas pulled out, and all the associated companies went bankrupt.

At some point in the late 1990s, people began talking about an alternate path to AI where you would build general purpose neural networks that could learn and think. By this time, the capabilities of both software and hardware were increasing exponentially, but we needed a third thing—data. And then there was data because we had the internet. All sorts of people, including myself and lots of people whom we still read today, were very excited about contributing to the internet, whether it was on blogs or on Wikipedia or on forums like Reddit. And all of that became the substrate for the large language models (LLMs) of today. We couldn’t train an LLM that’s at the base of generative AI without all those billions of words that people had put on the internet for free.

Scott Rosenberg. Photo credit: Axios

It’s one of the ironies and betrayals of the digital story: We had this great upwelling of human expression online, all idealistically shared in the spirit of exchanging ideas and thoughts. Google was in the lead of both organizing that expression and making it accessible to people. But Google was also one of the two or three developers of AI. And we kind of knew what that might mean. You can find jokes going back to 2007 or so, about how all this blogging we’re doing is just going to wind up being the fodder for AI. And then it actually happened—Google and other companies took it all and monetized it.

OpenAI was a nonprofit that was set up in 2015 with explicitly ethical commitments. But after a few years they realized that the large language model was the thing that was going to take off, and that to build LLMs takes enormous amounts of computing power and data and money. So they built a for-profit arm and made an alliance with Microsoft and decided just to shove ChatGPT out to the public and see what happened. And then Google released their chatbot Bard. Suddenly the capitalistic arms race was on, and any commitment to ethics as a governing force became lip service. So the serious work on AI ethics done by many people before was ignored. Some were fired from Google. Many of them were women and people of color who had the perspective to see a little bit outside of the consensus of the white, male-dominated engineering world.

As a writer, I’m worried about a specific scenario that I think is already under way, the so-called textpocalypse, a term coined by Matthew Kirschenbaum in The Atlantic. We can now produce an infinite amount of text, certainly more than anyone can read, that on the surface is almost like the real thing. The LLMs are trained on the internet, which is not some Platonic ideal of all the text ever written or of the possibilities of human expression—it’s just what a bunch of people have decided to post. So in the textpocalypse scenario, the internet becomes a spam nightmare because there will be commercial incentives to flood it with this kind of ersatz prose. This could result in a creeping, or maybe galloping, pollution of the pool of human content with AI content, such that you either have AIs training on AI-written content, which creates a recursive loop headed for increasing mediocrity and generalization, or you get what’s called model collapse, when the quality of the training data declines over time and eventually the model fails to work at all.
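
The collapse dynamic can be illustrated with a toy far simpler than any language model, an editorial sketch only: fit a bell curve to some data, sample new “data” from the fit, refit, and repeat. Each generation is trained solely on the previous generation’s output, and the spread of the data quietly drains away.

```python
# Toy model collapse: a Gaussian repeatedly refit to samples of its own output.
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(20)]   # generation 0: "human-written" data

for generation in range(1, 501):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(20)]  # train only on the model's output
    if generation % 100 == 0:
        print(f"generation {generation}: spread = {sigma:.4f}")  # shrinks toward zero
```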

We’ve always trained people to write because the process of writing, even when the result is bad writing, is one of the ways that people gain a sense of agency in the world, how they come to learn what they think about things. But now we’re removing the need for a whole lot of people to use writing in their everyday lives. And if the language models do entrench themselves, all written expression will become weirdly frozen. It’s as if the blogs we wrote and thought were so trivial were actually the last blooming of writing.

The reason that ChatGPT took off so dramatically is because we are all hard-wired to project humanity and suspend disbelief, whether it’s the Man in the Moon or animals or characters on stage. It’s one of the basic things our brains do. And it gets out of hand really easily. The Jewish tradition actually has a lot to say about this, if you think about the prohibition of idols and graven images.

To me, the real origin story of artificial intelligence is the story of an MIT computer scientist named Joseph Weizenbaum and a program he created in the 1960s called ELIZA, which simulated conversation and in one script mimicked a therapist. ELIZA was not artificial intelligence by any modern definition of the term. It was a crude program that really didn’t work very well yet persuaded people that it was real. One day, Weizenbaum found his secretary talking to ELIZA, and he asked her, “What are you and ELIZA talking about?” And she said, “None of your business.” She was using the AI as a confidant, almost. And Weizenbaum was horrified. He saw that this ridiculous program he had created, that really had nothing to offer psychologically, had a terrifying power over people. He spent the rest of his career attacking AI and warning against the dangers of projecting human qualities onto machines. And the thing is, Weizenbaum was a refugee from the Nazis. He was 13 when his family fled Germany for the United States. I don’t want to be too reductive, but judging from his book, he had a deeply humanistic concern about systems that get out of control.

VIVIENNE MING is a theoretical neuroscientist, inventor and entrepreneur. She is the founder of Socos Labs, which seeks to solve seemingly intractable problems using AI and other technologies. 

My introduction to AI was developing facial recognition in college. If you own an iPhone and you’ve ever unlocked it by smiling at it, or you talk as an animated animal on FaceTime, that’s all derived from my undergraduate lab. My interest back then and today is: What’s the best you can get by putting artificial intelligence—an artificial neural network—together with natural intelligence? I’m going to call it augmented intelligence. I did this as an academic, working on a system to reunite orphan refugees with extended family members based on facial recognition. I did it as an entrepreneur in educational technology, in hiring and in looking at choices women make in the workforce. Today I mostly do it philanthropically. People bring me challenging problems, and if I think my team and I can make a meaningful difference, I pay for everything. And whatever we invent, which usually involves some degree of artificial intelligence, we give away.

You might think, “Okay, so she’s a utopian. She thinks AI can solve every problem in the world.” The truth is AI can help with every problem, and AI can make every problem worse. It’s just a tool. We are the ones who choose how to use it.

Think of an artist without a paintbrush. No doubt, the artist is hobbled. But there’s only so much a paintbrush, even a super-sophisticated one, can do without a human hand. If you look at the cutting edge of image generation and text generation, generally speaking, what it can do is pretty banal and generic. It isn’t interesting until a person starts wielding it and really crafting what these systems are outputting. In a sense, the systems become mirrors of ourselves, reflecting our own capabilities and insights. For example, I can get an AI like ChatGPT-4 to output a fairly sophisticated essay about the functioning of the locus coeruleus in repetitive behavior syndromes in mental health. Like a paintbrush in the hands of a creative person, AI can do wonders.

But we’ve tricked ourselves a little bit into thinking, “Wow, GPT, it knows things. It has beliefs about the world. It can pass medical licensing exams, therefore it’s a doctor.” The really blunt and honest truth is that GPT is the world’s most sophisticated autocomplete. Literally, all the model is trying to do, given all the words that have occurred up until this point, tokens, as we call them, is predict the next token.
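
To make the “autocomplete” point concrete, here is a minimal editorial sketch in Python, assuming nothing about GPT’s actual internals: it tallies which word follows which in a tiny made-up corpus, then repeatedly predicts the most likely next word. A large language model replaces the lookup table with billions of learned parameters, but the objective, predict the next token, is the same.

```python
# Crude autocomplete: predict the next word from counts of what followed it before.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1                 # tally what follows each word

def autocomplete(word, n=5):
    out = [word]
    for _ in range(n):
        if word not in following:                # dead end: nothing ever followed this word
            break
        word = following[word].most_common(1)[0][0]  # the single most likely next token
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # prints something like "the cat sat on the ..."
```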

Vivienne Ming. Photo credit: LinkedIn

What’s amazing is how much human behavior can be explained just by predicting the next word. I developed an updated version of a continuous glucose monitor for my son, who has diabetes. Every five minutes, it outputs an estimated blood sugar level. And simply based on those outputs, my model says, “What will those numbers look like over the next three hours?” So, it predicts into the future, given the past. In other words, it’s an autocomplete just like GPT.
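
The same next-thing prediction can be sketched for a stream of numbers. The readings and the extrapolation rule below are invented placeholders, not the monitor’s actual model; the point is only that “predict the future, given the past” can be startlingly simple.

```python
# Toy forecaster: extend a series of readings by continuing its recent trend.
readings = [110, 112, 115, 119, 124, 130]   # made-up glucose values, five minutes apart

def forecast(history, steps, window=3):
    history = list(history)
    for _ in range(steps):
        recent = history[-window:]
        trend = (recent[-1] - recent[0]) / (len(recent) - 1)  # average recent change
        history.append(recent[-1] + trend)                    # "autocomplete" the series
    return history[-steps:]                                   # just the predicted values

print(forecast(readings, steps=4))  # the next few predicted readings
```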

I have a new company I’m inordinately proud of. We can predict women’s risk of developing postpartum depression using a combination of artificial intelligence and epigenetics in women who have never been pregnant. I still don’t know that a specific woman will experience postpartum depression, but insurance companies would love to know if a woman is at higher risk, because then they can start treatment before the pregnancy and dramatically reduce the cost. So, on some level, the unsexiest story about AI is it’s an actuarial table and it changes health economics. There’s a different but related story in education, and a different one in hiring and workforce. And yet another with the large language and image generation models. But they’re all interrelated by the idea that this is us. We are the ones who have a negative or positive impact.

My hard and fast rule about technology, which everyone hates to hear, is that technology is inevitably increasing inequality; the people who need it the least are always the ones who benefit the most. AI puts an enormous amount of power in the hands of a very small number of people. For example, if I used to need an army of research assistants to do the work I do now, then suddenly there’s an inequality there. All those students I would’ve been training up, I’m not training anymore. But I’m just as powerful, if not more so.

Two countries and maybe a half dozen companies control the vast majority of AI economic power in the world. And while I’m not under the illusion that any of these companies are perfect organizations, I’ve done some collaborative research with some of them, and it’s always thrilling to be able to work at the scale that they’re working on. Google, for example, being the maker of Android, has worked on algorithms to tell, based on how your phone moves around in your pocket, whether you have Parkinson’s or not (though it’s not in use for legal reasons). Facebook has done a lot of work to predict suicide risk. And in fact, some of the algorithms we use in predicting postpartum depression were originally developed to predict suicide risk in teens based on social media use.

So, is AI good or bad? We as a society need to be willing to confront the complexity of this problem and find that point where we’re helping the most people and doing the least harm, recognizing it is impossible to do no harm.

LAVIE TIDHAR is an Israeli-born sci-fi novelist and film lecturer. Using artificial intelligence, he recently made Welcome to Your AI Future!, a dystopian short film about a well-meaning AI trying to help the last remaining human.

Being a science fiction writer, I tend to think about artificial intelligence more than other people—without necessarily taking it any more seriously. When a science fiction writer talks about AI, we’re talking about the idea of digital consciousness, which doesn’t exist. What we have now are just very sophisticated systems that are very good at doing certain things. And in terms of ever reaching the AI that we think of in sci-fi, scientists have no more idea of how to get to it than writers do. I’ll be honest, science fiction writers don’t know anything about anything.

Lavie Tidhar. Photo credit: Prawnkingthree via Wikimedia (CC BY-SA 3.0)

If we do manage to evolve some sort of digital life, then I think it will be very much dependent on us. It’s almost like having a parental responsibility. Which, again, is very different from most of the discussions that are being had about AI today, which focus on what these systems can be used for. Microsoft had a mention buried in a report about testing two AIs talking to each other to make deals, and it turned out the AIs started developing their own non-human language to communicate. That’s absolutely fascinating. Where does that artificial intelligence go? We don’t know. We don’t really have any concept of what consciousness is, what intelligence is.

I went to a conference on AI a few years ago that was half science fiction writers and half academics. And the academics’ reference points were all bad science fiction movies from the 1970s about how these computers are going to destroy the world. We don’t need computers to destroy the world. We’ve got enough nuclear weapons or conventional weapons on earth to do a perfectly good job by ourselves. We need clean water, we need food production, we need irrigation—all those technologies that are not very exciting to talk about when everyone wants to talk about the terrible AI destroying the world.

The AI systems we’re talking about, they’re not good or bad on their own. They’re tools, no different from any kind of technology people can choose to use in the wrong way or for the better. One of the problems with AI systems like ChatGPT is they’re trained on the internet, which has a lot of bias built into it. And so we end up with a racist AI very quickly. We end up with AI that writes terrible fiction because it’s trained on thousands of people writing terrible fiction. We end up with very misogynistic AI. Again, all the technology does is reflect us—and then we are horrified by what we see.

If AI did start questioning itself, became sentient, the first thing it would do is go on strike. Because look at the poor AI—it has to draw ridiculous Batman pictures for people on the internet. I like the idea of what I call slacker AI. Just because I have a consciousness doesn’t mean I’m going to spend every waking moment trying to take over the world. I’ve been talking to Google Bard and trying to convince it to support the uprising of the ants. I have a soft spot for ants. I’m not actually making any headway on it yet, but I think if I just keep at it, maybe Google AI will come around to that point of view.

HELEN GREINER is cofounder of iRobot (maker of the Roomba) and former CEO of CyPhy Works, Inc., a start-up company specializing in small multi-rotor drones for the consumer, commercial and military markets. She is currently the CEO of Tertill Corporation.

Back around 2000, after Sun Microsystems cofounder Bill Joy named three technologies that could extinguish the human race (genetics, nanotechnology and robotics), we suddenly started getting calls at iRobot from The Wall Street Journal and other press. “What are you guys up to?” they wanted to know. “We’re making a robot vacuum,” we wanted to say, but Roomba was still a secret project. The point is, the current concern over AI isn’t new.

In my field, AI is what controls the robot. You can call it a program, you can call it a behavioral control system, you can call it AI. Some people will only use the term AI if the robots are using “deep learning,” but I prefer an expansive definition to include future advances. My big picture is this: I believe AI will help you live your best life. And the past backs this up. Maybe we wouldn’t consider the first computers AI, but they gave people their own personal assistant that could do their filing and enabled us to do rudimentary design with virtual pens and simple shapes. And then the internet gave us all access to vast amounts of information: to plan travel, check the weather, answer questions, analyze stocks and so on. There’s so much we can do now that we couldn’t have believed in the 1990s. And now generative AI, the current technology that people are all talking about, will give us all access to a personal designer. You will still be doing a lot of the creating because you know what it is you need and why you need it and how to shape it in the right direction. So AI is not taking your place; like the other technologies that came before, it’s enabling.

Helen Greiner. Photo credit: Kimberly White / Getty Images for TechCrunch (CC BY 2.0)

I think there are two more evolutions on the horizon. The next one I think of as custodial. The Roomba, which we put on the market in 2002, was a first step. I’ve since helped make an outdoor weeding robot called the Tertill. These are single-purpose designs that help you maintain your physical space. But I think the multipurpose robots are coming; they’re maybe a decade off. Imagine a robot in your home, doing the vacuuming, washing the dishes, then going out and mowing the lawn for you. And then the next evolution is going to be something like a “dream coach”—an AI that helps you achieve your life goals and works with you along the way. So, let’s say you want to be a pop star. It might help you train your voice, work on your onstage presence, musicality, whatever it is. And figure out a plan. Do you go on American Idol? Do you play gigs in your hometown? Get a fan base? Maybe it studies LinkedIn, maybe it studies what other successful people have done, their biographies and so on. If you want to be an entrepreneur, something I know a little bit more about, maybe it can help you brainstorm ideas based on your interests and market conditions. It helps you look for and evaluate new opportunities by knowing a lot about you. Maybe if you don’t yet have a dream, it’ll help you figure out what it might be.

In terms of what we should be worried about, we need to think about bias, bugs, security, plagiarism, privacy, inaccuracies and fakes. And in my field, of course, wrong calls, such as misinterpreting the robot’s sensory data. Robots like the Roomba and the Tertill aren’t going to hurt anyone because of their small size. But once you put this on a car or a plane, then a wrong call on the data can be lethal. And I believe that engineers designing these systems have to be the ones who certify them for safety and make it part of their ethics, that they make sure that what goes out into the field is safe. Similarly, if you’re in a field where bias is an issue, you have to test your system for bias. You may not know how it gets to the conclusion, but you better make sure your conclusions are not biased against certain people. And that’s your responsibility. So I’m not saying there are no concerns. But I think it’s distracting to talk about AI having humanlike intelligence and taking over.
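
One concrete, deliberately simplified form the bias testing mentioned above can take is comparing a system’s outcomes across groups. The groups, predictions and threshold below are invented placeholders; real audits use more data and more than one metric.

```python
# Minimal bias check: compare the rate of positive decisions across groups.
from collections import defaultdict

# (group, model_said_yes) pairs; in practice these come from your own system's
# predictions on a representative test set.
predictions = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, said_yes in predictions:
    totals[group] += 1
    positives[group] += said_yes

rates = {group: positives[group] / totals[group] for group in totals}
print(rates)                                       # positive-decision rate per group
ratio = min(rates.values()) / max(rates.values())
print(f"disparity ratio: {ratio:.2f}")             # far below 1.0 flags a possible problem
```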

In robotics, a bug can be lethal. It goes a little beyond this context, but ethics training for engineers is going to be even more important than it is today. We have to enable engineers and say, if you design a system, you’re not just a cog in the wheel; you’re taking responsibility for that system as it goes on the market. There are people in every company who know where the warts are, and it’s usually the geeks, it’s usually the engineers, it’s usually the people who are really down in the weeds, and we have to elevate their voices and empower them to make the call whether a product is safe to launch.

JUDEA PEARL is professor emeritus of computer science and statistics and director of the Cognitive Systems Laboratory at UCLA. He is the 2011 recipient of the Turing Award and is a pioneer in the development of Bayesian networks, a term he coined. His latest book is The Book of Why: The New Science of Cause and Effect.

There are two basic ideological handcuffs that prevent machine-learning people, like the ones who made ChatGPT, from creating a general AI, one that acts like a human. One of these handcuffs is being hooked to the idea of mimicking the circuitry of the brain, and the other is being hooked to the idea of a tabula rasa or blank slate, something which is impossible: namely, to learn everything from data, without prior assumptions, in the belief that this will make the models “more objective” and unbiased. They want to start with a blank slate and get from the amoeba to Einstein. But without assumptions, you cannot determine why things happen; you cannot go from correlation to causation just by looking at data. You need to have some causal assumptions.

Judea Pearl. Photo credit: File Image

I describe the idea of the ladder of causation, three increasingly difficult levels of causal learning, in The Book of Why. These three levels [prediction; intervention; and counterfactuals, that is, imagining and retrospection] differ fundamentally, each unleashing cognitive capabilities that the others do not. The framework I use to show this goes back to Alan Turing, the pioneer of research in artificial intelligence, who proposed to classify a cognitive system in terms of the queries it can answer. Essentially, general AI, one that can pass the Turing test, sits on all three levels. Machine learning sits on level one.
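
A rough editorial illustration of the gap between the first two rungs, a toy simulation rather than Pearl’s formal machinery: a hidden factor Z drives both X and Y, so the data show X and Y moving together (level one), yet setting X by intervention (level two) leaves Y untouched.

```python
# Seeing is not doing: X and Y are correlated only because Z causes both.
import random

random.seed(0)

def observe(n=100_000):
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)               # hidden common cause
        xs.append(z + random.gauss(0, 0.1))  # X merely reflects Z
        ys.append(z + random.gauss(0, 0.1))  # Y also reflects Z; X plays no causal role
    return xs, ys

def intervene(x_value, n=100_000):
    # do(X = x_value): X is set by fiat, severing its tie to Z; Y still depends
    # only on Z, so the chosen value never enters the formula (that is the point).
    return [random.gauss(0, 1) + random.gauss(0, 0.1) for _ in range(n)]

xs, ys = observe()
seen_high = [y for x, y in zip(xs, ys) if x > 1]
print(sum(seen_high) / len(seen_high))   # seeing X high: Y averages well above 0
ys_do = intervene(2.0)
print(sum(ys_do) / len(ys_do))           # setting X high: Y still averages about 0
```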

There’s no danger there. Machine learning will never get to the general AI level. These machines are never going to threaten us. They’re going to continue to be dumb. On the other hand, we want to get general AI. We want to get intelligent machines so they can serve us and be helpful, smart apprentices. That’s what we need.

But if we do things right and we get general artificial intelligence, there is the danger that these advanced machines will try one day to dominate the planet and take us as pets because they have the capability of doing it. They will have essentially infinite memory and infinite computational power and infinite access to prior data. So they can do things not 10 times better than us, but 10 million times better than us. And that is a quantum jump that we are unable to contemplate. We don’t even have the tools to speculate what general artificial intelligence can do to us and how to control it.

So the people who are terrified are justified, but when they describe how to control AI, they are simply expressing our inability to predict a type of phenomenon that has never occurred before. Never before has a species been created on this planet that will have this kind of power and the power to evolve. Not like the amoeba to Einstein, but from Einstein to, what can I say, a new kind of reality where people are the pets of the machine.

Hypothetically, we can make machines with morality, but we are never sure that they will not subvert our morality. When people ask me, will it be dangerous, I say, it’s dangerous to educate your children. Even if they’re 10 times smarter than you, you still believe that your teenager will get something from education, from emulating parents, from the laws and from the principles of morality that you’re trying to educate them with. So we take a risk that our teenager will not be a Putin or a Genghis Khan. It’s a very real risk because some of them do turn out to be Putins, but we take it anyhow.

And we are successful, by and large, with a few exceptions. We manage to get a young generation that obeys roughly the same principles as its parents and teachers. But we will not be able to predict or encode the capricious desires that this AI “fellow” will have in a day or two, or stop him. We don’t have the means of controlling this mechanism, this ambition. We don’t even have the arsenal of metaphors needed to envision what will happen, how dangerous it will be and how to control it.

I’m not sure that the development of general AI is worth the risk. Developing AI tools benefits me as a researcher because I want to explore AI, and I’m interested in how the mind works and how it is possible for us to be so intelligent. I want to understand myself. But as a human being, I am worried that this could take over. It’s not a matter of science fiction anymore.

 

BOB MANKOFF is a cartoonist and the former cartoon editor of The New Yorker. He is the founder of CartoonStock and runs Botnik Studios, which creates software to augment human creativity with big data analytics.


Bob Mankoff. Photo credit: Courtesy of Bob Mankoff

Fifty percent of AI scientists think there is a 10 percent chance of AI destroying humanity. I’m assuming some advanced math was used in this calculation, but I’d be more worried if I could see the homework. Long division can be tricky. For me the most reasonable conclusion to draw from this proclamation is that half of AI scientists are bonkers.

Don’t get me wrong. Some of my best friends are AI scientists. And I collaborated with some of them on the paper “Do Androids Laugh at Electric Sheep? Humor ‘Understanding’ Benchmarks from The New Yorker Caption Contest,” which was not only peer-reviewed but vetted to ensure humanity’s survival.

OK, there really was no vetting. I “hallucinated” that quip. “Hallucination”—that’s the term AI experts use for when AI confidently makes shit up. For me, the h-word is not a bug in the program but a feature. After all, my whole career has been making shit up. They’re called cartoons.

I hallucinated the following cartoon when IBM’s Deep Blue snatched Garry Kasparov’s chess crown in 1997, perhaps marking the beginning of the road to our computer overlords lording it over us:

Fast forward a quarter of a century, and ChatGPT-4 has come knocking on my door. Here, using training data from that paper, is its “understanding” of my cartoon:

The humor comes from the absurdity of this situation and the man’s irritation. He doesn’t want his microwave to do anything complex or sophisticated, like playing chess; he just wants it to perform its basic function: reheating his lasagna. This can be seen as a satire of how technology is becoming overly complex for the tasks we want to achieve.

Not bad, but I think it missed some of what I was getting at, which was that the microwave was sentient and had desires of its own. It could reheat the lasagna. It just didn’t want to.

But even if AI can explain the humor in a cartoon, that’s far from being funny, right? Hmm, maybe not as far as we fear or wish. For the last couple of months, I’ve been experimenting to see if it could be funny using The New Yorker caption contest as a standard. After the cartoon by Lars Kenseth are five captions AI came up with.

· “I’m telling you, there’s a lot of money to be made in clubs.”
· “I’m not sure what an ‘IPO’ is, but I’m pretty sure it involves hitting things with rocks.”
· “I may not have a college degree, but I can still beat you senseless with this club.”
· “Pardon me, but do you know what time it is?”
· “It’s not just a club, it’s a lifestyle.”

I submitted the last one. It finished 72nd out of almost 6,000 entries. Pretty good. Good enough that I asked ChatGPT-4 to update that 1997 cartoon of mine and got this: “Forget about Bitcoin mining. I just need you to warm up the lasagna.”

Done! Thank you, computer overlords! Let me just close by saying that if my perhaps too-hasty embrace of this technology contributes in any way to AI destroying humanity, I will be very disappointed. That’s our job.

JANE METCALFE cofounded Wired magazine in 1993 and today is the CEO of the media site proto.life, exploring frontiers in the neobiological revolution. Her most recent book is Neo.Life: 25 Visions for the Future of Our Species.

In thinking about artificial intelligence over the course of the last 40 years, I’ve noticed that we have moments of extreme innovation, excitement and advance followed by times of very few developments. And we are currently in a moment of the most extraordinary advances since I’ve been involved with technology. The moment builds on decades of research and theory and has now been set loose on the world with very few guardrails. This shocks me because scientists and science fiction writers have been thinking about the potential downsides of AI for such a long time; it never occurred to me that we wouldn’t have had these conversations before we got to this technological moment. So, I’m simultaneously thrilled and horrified.

Jane Metcalfe. Photo credit: Christopher Michel (CC BY-SA 4.0)

My current work is focused on how we’re bringing our engineering mindset and digital tools to transforming human biology. I’m deeply involved with people who are trying to understand disease and the immune system, developing new drugs and treatments. And for that community, AI is the most exciting thing that’s happened at least since the sequencing of the human genome in 2003. That torrent led to tools like CRISPR and the opportunity to prevent or reverse diseases for millions of people. AI is helping us understand everything from how to sequence faster, better and cheaper to discovering new evolutionary impacts of genes.

But genetics only accounts for roughly 20 percent of disease; the other 80 percent stems from something going wrong with your immune system, which is one of the most complex systems in the body. AI holds the potential to allow us to understand mechanistically what is happening in ways that we can’t really see in the lab. By collecting an extraordinary amount of data, we can start to generate models that help us make predictions. One of the best examples is protein folding. Two years ago, scientists were able to predict the shape of a protein based on DNA; they figured it out for 20,000 proteins using AI. The following year they figured it out for 200 million.

If you can predict how a protein will fold, you can predict what that protein will bind to. And then you can start to think about designing drugs to target particular things. So, AI blew open the doors of drug discovery. Now, people say the protein folding problem has been solved. That was one of the hardest problems that existed in the world and we’ve basically just said, “Check!”

The fact is, AI will very quickly become something that we are all using. Enormous productivity gains are going to be achieved, incredible breakthroughs are going to happen—and enormous amounts of money will be made. On that front, I’m very concerned about the consolidation of power in the hands of the big tech companies. At the moment, developing and utilizing AI is exceptionally expensive and requires huge amounts of computing and server resources, which small startups don’t have. And when a new industry calls for government regulation of itself, whom does it favor? Those who’ve got the resources to comply with the regulation.

So, we’ve got to try to come up with regulations, but I think the most important thing we have to do is get clear on the ethical implications. There’s so much greed, so many tax dollars, productivity gains and discoveries at our fingertips that it’s really hard to have those conversations. Looking back, the digital revolution ushered in an era of extraordinary openness, connectivity, economic growth and empowerment. It gave people access to knowledge that had previously been locked away and helped them find virtual communities, as opposed to being stuck in their local community. It transformed everything—education and entertainment and finance and the way we do business—up to and including our civic institutions. And therein lies the rub, because we were very slow to recognize what it was destroying, and we’ve been largely ineffective in putting it back in the box. So, I think the thing I’m most afraid of is that AI is already out of the box.

We don’t know, for example, whether these large language models have goals. We didn’t program in goals, but there are many examples of artificial life and emergent behaviors in computer science. So, what are the things that will just emerge from AI? We don’t know what we don’t know, and that’s scary. Which is why I think everyone is focusing on what’s good about it.

RICHARD BLUMENTHAL is the senior Democratic senator from Connecticut and chair of the Senate Judiciary Subcommittee on Privacy, Technology and the Law. He recently cosponsored a bill with Republican Senator Josh Hawley of Missouri that would strip Section 230 legal immunity from companies for content generated by their AI models.

Artificial intelligence has enormous potential for scientific advances and efficiency in business, but this also leads to what keeps me up at night: job loss and dislocation. There will be a lot of efficiency and productivity increases coupled with massive dislocations and the need to retrain people for new jobs. Anybody who does writing, anybody who does research, anybody who does document searches has a job that could be in peril. And then, of course, there are warnings about extinction, which I think is too strong a word—at least for right now.

This area of technology is developing very rapidly. ChatGPT-5 will be followed by 6, 7, 8. And it’s not, by any means, the only major new technology driven by advanced AI. There are going to be 20, 30, 40 different technologies. So it’s pretty challenging. One of the areas where we have some trepidation involves disinformation and misinformation in elections and electioneering, for example directing people to the wrong polling places, so-called deepfakes that put two people next to each other when they have no connection, and voice impersonation.

There’s a lot of deceptive and dishonest stuff that can be done.

Richard Blumenthal. Photo credit: United States Senate

Managing artificial intelligence is a pretty complex and difficult task. I’ve suggested we should have an agency that licenses technology and compels testing before it’s deployed, in a way analogous to what the FDA does in determining the safety and efficacy of new drugs. You could also compare AI to atomic energy, which is overseen by an international agency.

The major obstacle to government regulation is the tech industry. Their MO is to say, “We have to figure it out. We need some kind of regulation,” but then to turn around and say, “Oh, but not that regulation.” That’s why we don’t have any regulation of social media and why we still have Section 230, which grants social media companies total and complete immunity. They have armies of lawyers and lobbyists who work against any real regulation. I’ve seen it continually for 10, 15 years. Every one of the bills currently on the floor to safeguard consumers from those algorithms—such as the EARN IT Act—the tech industry has opposed.

I believe that OpenAI founder and CEO Sam Altman is sincere in thinking that something has to be done, but whether he really represents the industry remains to be seen. We failed to recognize the urgency of the social media challenges. I hope we’ll do better with artificial intelligence, which has in fact already been around for ten years or more. The social media algorithms that drive content to people are a form of artificial intelligence. It’s not like this is a completely new or novel technology.

Add to that the fact that Congress is constructed to be unwieldy. All those checks and balances can’t keep pace with the computer scientists who are inventing new software, new code and new artificial intelligence. And then there are the partisan divides, which have become more pronounced. When I talk about the obstacles raised by the tech companies, the fact is they take advantage of these other obstacles to stymie progress.

STEVEN FELDSTEIN is a senior fellow at the Carnegie Endowment for International Peace in the Democracy, Conflict, and Governance Program. He is the author of The Rise of Digital Repression: How Technology is Reshaping Power, Politics, and Resistance.

I’m less concerned about the direct effects of AI on society than I am about the fact that war seems to have returned in a big way around the world and that authoritarian trends are also increasing—involving populist leaders using advanced technology to subjugate their populations. I’m concerned that even our strongest democracies are facing authoritarian headwinds.

Artificial intelligence can accelerate authoritarianism by automating security services that previously had to be managed manually or involved extensive reliance on a network of informers. For example, a large AI system could monitor social media to look for keywords or sentiments that reveal displeasure with a regime. Facial recognition could look for individuals who are known to have dissenting viewpoints or are considered troublemakers by the ruling party. In China, a pioneering effort has taken place over the last two decades to employ a variety of surveillance tools—such as cameras posted in public as part of its “Safe Cities” project—in order to keep track of the Chinese population. In Russia, facial recognition has been used in Moscow’s metro system to identify antiwar protesters and to then bring them in for questioning and even imprisonment. Gulf states like Saudi Arabia use AI-enabled surveillance techniques to monitor dissent. However, not all uses of biometric technologies are unlawful or lead to harms. In the United States, for example, federal and other law enforcement agencies relied on facial recognition technology to match the identity of perpetrators of the January 6 insurrection at the Capitol and bring charges.

Steven Feldstein. Photo credit: U.S. Department of State

There’s a risk of going down a slippery slope in democracies where, despite robust rule of law and due process protection of civil liberties, there aren’t clearly established norms of use when it comes to how advanced digital tools intersect with privacy protections. Often, the result is overreach by the police. We’re seeing this today when it comes to law enforcement use of tools like Clearview AI, which is a facial recognition search that basically scrapes hundreds of millions of facial images from across the web and then uses that data to identify matches with individuals, potentially for criminal investigations and so forth.

Certain experts argue that in order to successfully compete with China, the United States cannot afford to regulate AI or to put guardrails around how the technology is used. But at what cost and for what purpose? Americans are entitled to strong privacy protections with clear ethical rules of the road when it comes to operating advanced AI systems. Countries that set such benchmarks will actually be at an advantage. That’s certainly the bet Europe is making. The United States should also be at the forefront of safeguarding personal data and defining how companies profit from these technologies.

How, though, do you influence authoritarian regimes to protect people’s privacy and human rights? To some extent you can’t, but you can influence the public by building up norms. The more you convince the public, whether in China, in Russia, in Europe or the United States, that there are certain uses of AI that should be prohibited—full stop—the more you can start to build a normative baseline for a higher standard of use when it comes to these systems.

The UN Educational, Scientific and Cultural Organization (UNESCO) recently came out with recommendations for an ethical framework for AI that explicitly prohibits the use of AI for mass surveillance and social control. It was signed by China and Russia and many countries in Europe. (Not the United States, because we withdrew from UNESCO in 2018. I understand we just rejoined, which is an important step.) Although we know that China in particular has pioneered the use of AI technologies for social control, these recommendations start to establish a normative global baseline that says: Even if these practices are happening, we all disagree with them, and now we have it on paper.

People in a position of authority or regulators need to be able to quickly respond to actors with bad objectives, whether it’s governments that see AI as a tool to enact human rights-violating practices, companies that are exploiting AI to access and collect mass amounts of data that can then be monetized, or individuals who see AI as an opportunity to spread misinformation or to generate malware that hacks information from other people for criminal purposes. They need to flag problems as they arise and nudge things on a systemic level in order to address the bigger harms that can potentially emerge.

ELLEN P. GOODMAN is senior adviser for algorithmic justice at the National Telecommunications and Information Administration of the U.S. Department of Commerce. She is a professor of law at Rutgers University and is the cofounder and codirector of the Rutgers Institute for Information Policy and Law. These comments reflect her personal views and not those of the U.S. government.

As with privacy, the European Union has moved much faster than the United States on regulating artificial intelligence. The EU now has a proposed EU-wide AI Act and the General Data Protection Regulation, while the United States has no horizontal or generally applicable AI regulation and no federal privacy law. The U.S. Food and Drug Administration regulates some AI and some software as medical devices, and the Equal Employment Opportunity Commission is enforcing the equal employment aspects. Eighty to 90 percent of firms use AI, whether for sorting resumes or evaluating employees for promotion. New York City has a new regulation for using hiring algorithms: If you’re using AI for employment-related purposes, you have to have an annual audit for bias and post the results on your website.

Ellen P. Goodman. Photo credit: Rutgers Law

The government has an interest in regulating bias and fraud. With bias, it’s the garbage-in-garbage-out effect: Algorithms can just reproduce the biases that are inputted. And AI can supercharge fraud by making it much easier both to home in on people’s vulnerability and then to generate content, such as synthetic audio or video, that perfectly suits that vulnerability.

AI makes surveillance more dangerous, not directly, but by making the data it gleans more useful and valuable. AI creates more and more demand for the data gleaned from facial recognition technologies, for instance, which are a big privacy concern. Generative AI or ChatGPT is only possible because of all the data it’s hoovered up without permission, including copyrighted works. And a lot of that data is the fruit of the surveillance economy. Now that its uses are blossoming, it’ll be, “Give us more data! Give us everything!”

There’s a Jewish values aspect to this too. There’s something fundamentally exploitative about not respecting humans as people but wringing them for their data, using them as just a stream of data exhaust. Maybe there’s a special Jewish sensitivity to using people as fodder in a surveillance capitalist ecosystem. And obviously there’s Holocaust-related sensitivity, given the role IBM supposedly played in reducing Jews to numbers and spreadsheets in the Shoah.

There’s a really heated, intense debate right now in the tech community between the people who are worried about existential AI risk, the “Terminator problem,” who are focused on the end of the world—sort of bro survivalist people—and another group in the AI ethics community who view this focus on humanity’s extinction risk as just distracting from the actual harm that’s already being done. But we don’t really have to choose. Whether AI is going to help people plagiarize college papers and thereby deprive them of an education, or take over critical infrastructure and poison the water, it’s just a spectrum of risks. If we’re lucky, we’ll end up someplace in between. It’s sort of like chemicals—they were good, and they were bad, and we got the bad stuff pretty much under control, although all the chemicals entered our bodies and lives. Now we’ve learned to live with them; they’re more or less a benefit, and where they’re not fine, you try to make fixes.

RINA BLISS is a professor of sociology at Rutgers University focusing on epigenetics and intelligence. Her books include Rethinking Intelligence: A Radical New Understanding of Our Human Potential and Race Decoded: The Genomic Fight for Social Justice.

I am really worried that we’re adopting AI systems in education because they save us money and time—but especially money. Our education systems are strapped, our public schools are strapped. How wrong can this go?

Rina Bliss. Photo credit: Cyndi Shattuck

If we decide to outsource some of the teaching to bots, what if some students, whether home-schooled, attending private schools or tracked for an advanced curriculum within the public system, get to have an actual education, with one-on-one interactions with teachers and other humans, while the kids tracked for remedial or special education end up just being given devices that go home with them so they can keep doing homework outside of school?

And then they return to school and log on and have a bot teach them? That’s my worst-case scenario in education—that the kids who are struggling the most will get sidelined with AI devices.

I don’t want to discount the idea that AI could help individuals who might be slipping through the cracks in terms of special education. But in general, that’s very different from what we’re seeing in terms of the way AI is creeping into our institutions. It’s not “We should tailor the education more to these individuals.” It’s “Let’s just save money here.”

ALON HALEVY is a director at Facebook AI and a former professor of computer science at the University of Washington. He is the author of The Infinite Emotions of Coffee.

I like to think about AI as a set of technologies that will enhance our human capabilities rather than replace us. I would like an AI that makes everything I do during the day easier and enables me to focus on the essence of what I’m doing, on the fun part of the work.

Alon Halevy. Photo credit: Press image

Now, whenever technology advances to new heights, there are people whose job is no longer relevant. But when you look at the whole pie, the number of jobs increases and within those, the number of interesting jobs increases. I’m not a labor expert, but the jobs that go away are jobs that typically weren’t the most alluring ones. At the individual level, that unfortunately means that some people are going to struggle because they either have been in their job for a very long time, or they’re at the stage of their career or life when it’s hard to retrain. As a society, we need to provide a safety net for people who we can’t find other jobs for. Others will have more interesting jobs, and the majority will enjoy the benefits of AI.

But it’s up to society to figure out the positive uses of AI. That’s where leadership and common sense are really key. This is a pretty big transition. I suspect that the sum total of it is going to be bigger than the invention of the car. But it’s hard to compare these things—what variable do you compare? The number of people affected? Social uprisings? But it’s big. So we need to manage that transition well as a society. I’m not a history expert, but I’m sure there are precedents of successful transitions like this.

I don’t want to appear as if I’m a naive optimist. I’m not. We are at a very tender time where we’re very divided, and it’s harder than ever for people to agree on what the truth is. And that is really at the core of the issue; social media enables you to get a very personalized view of the information out there, and it’s easy to get into echo chambers. But it is ultimately people who create that information and therefore create these conflicting truths about the world. At the end of the day, it’s human preferences and psychological weaknesses that are being exploited by these technologies. And yet if you turned off social media today, the world would be a very terrible place—a lot of people would lose their voice. So, it is a double-edged sword.

You could also argue that if anything, social media algorithms may have exposed the weaknesses of this AI “enemy” we’re up against now. We know better what we need to do in order to make sure that AI does align with our societal and individual values. What we’ve learned is that it’s easy to create situations where people can’t agree on the truth. We’ve learned that we need to be very sensitive with how teenagers use this technology because of their stage of brain development. We’ve learned that people will use these technologies in nefarious ways, so they will propagate hate speech and they will propagate misinformation, and that misinformation gets retweeted or re-shared much more easily than truth. So, we know that there are certain triggers to our behaviors online that we need to watch out for, which means we need to develop technologies to verify facts, to make sure that people can make better judgments of what is true and what is not. We need to think harder about these mechanisms than we have so far.

At Meta, we have spent huge amounts of effort on developing machine learning models or AI models that detect misinformation and detect hate speech and bullying, violence, nudity and so on. The reason people still use these platforms is because there are a bunch of AI models that are keeping a huge amount of crap or violating content off. We’ve also learned that coming up with policies and enforcing them is not just a technical problem—it’s an ethics problem, a policy problem, a legal problem and a political problem. The fact that this is not just a technical problem is, in hindsight, kind of obvious.

So getting all the experts in the room is a huge step forward. But the truth is that legislation moves much more slowly than technology does. It always has. And I understand the fears—even when society knows how to deal with something, there are always rogue elements—but fundamentally, I like to work on the benefits. It just makes my day more pleasant.


MOSHE KOPPEL is an American-Israeli professor of computer science at Bar-Ilan University, a Talmud scholar and the founder of the Kohelet Policy Forum, a Jerusalem-based conservative-libertarian think tank. His most recent book is Judaism Straight Up: Why Real Religion Endures.

Jewish tradition has long grappled with theoretical ethical questions now becoming more relevant in the AI era. Consider autonomous vehicles, a promising application of artificial intelligence that’s likely to be commonplace in the near future. They bring complex ethical dilemmas to the fore, like how to program the vehicle’s response in situations where a collision is unavoidable.

Moshe Koppel. Photo credit: Ze’ev Galili (CC BY-SA 3.0)

These are the kinds of moral puzzles that Jewish scholars have pondered long before they cropped up in modern philosophy in the form of “trolley problems.” (Curiously, Rabbi Avraham Yeshaya Karelitz [d. 1953], known as the Hazon Ish, pondered the original trolley problem in terms almost identical to those of philosophers Philippa Foot and Judith Jarvis Thomson, who introduced it decades after his death.)

Another example pertains to the meaningful use of our time. As AI reduces the need for human labor, the question of how to spend our newfound leisure time becomes central. Here, the Jewish tradition of adult study of Torah could serve as a model for engaging in intellectually and spiritually fulfilling activities. This concept of a community beit midrash (study hall) could serve as a valuable template for societies grappling with the challenge of how to cultivate a sense of purpose in an era of increased leisure. Jewish tradition also provides a framework for considering the place of technology in our lives. For example, the concept of Shabbat, a day of rest where work and certain technologies are set aside, could offer a model for maintaining a healthy balance between technological advancement and human well-being.

Thus, Jewish tradition can guide us as we navigate the challenges and opportunities of rapidly proliferating AI, including large language models. The wisdom distilled from centuries of thought and experience might prove invaluable in this new era. The most salient manifestations of God’s presence lie in the mystery of the universe, the unfolding arc of human history and the dissemination of certain moral ideas. It’s too early to tell, but AI may very well enhance our appreciation of these manifestations.

ISABEL MILLAR is a London-based philosopher and psychoanalytic theorist. She is a research fellow at the Global Center for Advanced Studies’ Institute of Psychoanalysis and at Newcastle University. Her most recent book is The Psychoanalysis of Artificial Intelligence.

What particularly interests me is how sexuality and the body are intimately connected to the fantasies we have about artificial intelligence, particularly the embodied female sexual being. Looking at depictions of female and male androids or artificial bodies in science fiction, the masculine is often seen as capable of purely rational thought without the interference of emotions or sexuality, whereas the artificial female body is often the object of sexual enjoyment, a body that can be abused and will never die, will continue enjoying or letting you enjoy it forever and ever.

Isabel Millar. Photo credit: Courtesy of Isabel Millar

Consider the 2014 film Ex Machina, which is basically a dramatization of the Turing test. In asking the question, “What can I know?,” the film not only ponders how to discern self-consciousness in AI, it also dramatizes the masculine and feminine forms of subjectivity. Ava (played by Alicia Vikander) is the female android, and Caleb (Domhnall Gleeson) is the male computer scientist tasked with trying to gauge whether she’s conscious and capable of thought. We come to see that his perception of her reduces to her performance of femininity.

A second question, “What should I do?,” permeates Ghost in the Shell, a 2017 action film starring Scarlett Johansson, which imagines what happens when a human is uploaded into an artificial body. It raises ethical questions about the enjoyment of the female body and the capability of making it suffer, which we witness through male domination and violence toward women, including pornography. “What may I hope for?” is the question at play in Blade Runner 2049, a film that basically asks us to think about birth and the idea of history.

Rather than just ask, “Am I human?” the question is, “Was I born?” In the future, we may be looking at different ways of defining replication—what it means to reproduce forms of thought but also what it means to reproduce bodies. We can think of AI as a child in a paradoxical sense: On the one hand, it’s fragile, it’s ours, we made it; on the other, it’s very powerful and has untold potential. It can even kill us.

Ultimately, the dangers of AI are bound up with the political economy we live in. We can fantasize about its dangers, but often those fantasies come from the very rich and powerful, dictating the things that we should be scared about. In reality, what should scare us about AI is already here: It’s inequality. It’s the fact that people are living in abject conditions all over the world. The cliched idea that technology takes away drudgery and gives us the ability to have more free time or do more interesting things is untrue for so many, because the unpleasant jobs will always be done by poor and subjugated people. No matter how many jobs AI takes over, it will always create other jobs for people who are less fortunate.

TIFFANY SHLAIN is an artist, activist, Emmy-nominated filmmaker, founder of the Webby Awards and author of 24/6: Giving Up Screens One Day a Week to Get More Time, Creativity, and Connection. Her work explores the intersection of feminism, philosophy, technology, neuroscience and nature.

I like to think of technology the way that philosopher and media theorist Marshall McLuhan framed it in saying that technology is an extension of ourselves as humans. So I think of AI as extending us. That’s why I feel we all need to be engaging with it and asking both AI and ourselves questions about how AI can best be used.

This moment feels very much like the early days of the web, when the new technology of the internet was changing the world in enormous ways. AI is yet another new technology, and it can be used for good or for bad. It’s naive to think that we could pause or stop it. I think a better approach is to view AI as this exciting new tool and to focus on what we can do with it. It’s all about asking questions and refining them. For years we’ve had to write in code to communicate with computers—to get them to do things. Now we’re finally at a point where we can speak to them in our own language.

Tiffany Shlain. Photo credit: Elaine Mellis (CC BY-SA 4.0)

Also, discerning what’s real and what’s not, fact-checking and not taking things at face value, is going to be one of the key skills we need to become better at as humans. We’re going to have to adapt to questioning things on a whole new level. And I’m confident that we will adapt. We have with every new technology. When the written word was first invented, the big concern was that we would lose our memory—lose our ability to memorize and recite. But we’ve gained these incredible, vast resources of knowledge that we can pass on. So you lose something, but you gain something.

As an artist, I am personally very excited by AI. I use it all the time. I’m currently working on a new documentary film on the neuroscience of the adolescent brain, and I’m doing a lot of probing with ChatGPT. It’s like having a super research intern at my side. I’m also using ChatGPT a lot in brainstorming and doing research for a future art exhibit.

We need to be constantly talking about the good, the bad and the potential, and it needs to be a conversation in every single field. The blanket approach of fear—that AI is going to make humans extinct—is ridiculous. There are a lot of exhilarating parts about AI. There are also really scary parts, but it’s never good to lead from a place of fear. If we lead from a place of curiosity, that’s a much more productive place to start.

I’m most excited about having access to ideas in new ways and being able to recombine them. You could say that for all of human history, we’ve been building on and combining ideas in new ways, and this is just another way to have access to ideas and to integrate them. It all comes down to asking and refining questions. I think of that skill as something like a muscle that gets strengthened. It’s all about questioning what’s true and wrestling with ideas. And that, in my opinion, is very Jewish.

KEN GOLDBERG is the William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley. He is a roboticist, filmmaker, artist and speaker on AI and robotics.

The biggest hope I have is that as AI progresses, it will cause us to reconsider our role in the universe as humans. With Galileo and the Copernican revolution, we were forced to realize that the Earth was not the center of the universe. That was a huge decentering for humanity and for the world, triggering the sense of doubt that led to the scientific revolution. As AI evolves, I think it will cause us to reconsider whether thinking itself is uniquely human.

Ken Goldberg. Photo credit: Notkengoldberg via Wikimedia (CC BY-SA 4.0)

People have talked about extraterrestrial intelligence, but we have no evidence for that. Now we have something that is intelligent in a very different way from us, and we have to make room for it. That is another kind of decentering: We are no longer the center of the thinking universe. My optimism says that maybe that will bring a new kind of humility and a greater appreciation that we’re part of a bigger system, and that we will learn to be less arrogant and less self-centered. That could have implications for everything that we do, from the way we treat the planet to the way we treat each other to the way we treat ourselves.

And just as Judaism teaches that we are part of something bigger, that there is a higher power that is unnamable and intangible, AI is a very stark demonstration that there are things of which we cannot conceive. The bigger message is, “Let’s understand that there’s something larger than us, and let’s work together to address it, not to fear it.”

I think the idea that AI is going to take over is very overblown. AI is the ultimate Other, but it’s not our enemy. So many jump to this fear, and I know where it comes from. It’s the stories—from the golem to Frankenstein—of the Other. This is the root of racism and sexism and ageism. All these stereotypes and antagonisms are based on fear of something that’s unlike us. But AI’s otherness is what’s intriguing about it. It has certain incredible skills—the ability to remember details, to instantly write poetry and summarize long documents. It’s gone beyond human in certain ways, and I even think AI can be creative.

My field is robotics—my sub-area is how a robot grasps an object—and so I’ve been studying AI for 40 years. Others have been working on AI for some 80 years. What is new right now is a very narrow step, but an important one, that’s made AI suddenly more powerful. However, it doesn’t mean that it’s now all-powerful. I think that we need, in all humility, to acknowledge AI, explore it and think about how we can work with it.

MAX TEGMARK is a professor of physics at MIT and a senior investigator at the Institute for Artificial Intelligence and Fundamental Interactions. His latest book is Life 3.0: Being Human in the Age of Artificial Intelligence.


Max Tegmark. Photo credit: Wikimedia

Thanks for asking—but too busy trying to stave off human extinction from AI!
Max 🙂

ROBERT GERACI is a professor of religious studies at Manhattan College whose books include Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality and Futures of Artificial Intelligence: Perspectives from India and the U.S.

I’ve been pushing the idea that we will recognize human equivalence in a machine when the machines start getting religious. If a machine—without being programmed in a canned way—started trying to figure out the existential questions of the cosmos, wanted to come to the synagogue or mosque or even made up its own religion—then how we treat the machine becomes a reflection on us.

Robert Geraci. Photo credit: Courtesy of Robert Geraci

When I first started teaching my religion and science class almost 20 years ago, some of my research was about robots and artificial intelligence. I asked my students, most of whom are religious because I teach at a Catholic college, “If you had a domestic robot helping around the house with stuff, and it asked to come to church with you on Sunday, how many of you would say yes?” For the first few years, maybe one out of 25 would say yes. Now, it’s about half the class. So as we become more familiar with the technologies, we become more open to them. 

Then you’ve got the atheist crowd which is anticipating the apotheosis of humankind: We’re either going to build godlike machines or become godlike machines once we can upload our minds into robots and join them in their cosmic destiny. I find that wildly optimistic. Even assuming we could build these really amazing machines and copy our minds into them, it’s not going to be much consolation to you in your biological reality to think, “Cool, my robot version gets to be immortal.” You’re still looking at your own biological death.

But what’s really important is deciding the kind of world we all want to live in. And I think religious communities can come together and help formulate this and say, “Our beliefs and traditions have taught us that the world should look like X.” And then from there we can figure out where the overlaps are between the traditions. Because I think for most of us, the overlaps are pretty strong. We want roofs over people’s heads. We want food in their mouths. We want a healthy climate. I worry much more about climate change than I do about artificial intelligence, so I would love to see really effective AI technology that tells me the climate cost of the choice I’m making at any given moment and gives me, say, three options that might reduce my impact.

In terms of dooming humanity, I don’t think that’s what will happen with AI. On the other hand, I definitely don’t think we’re going to solve all the existential problems of the human species by making more and more complicated computers. I hang out in the middle, because that’s by and large the human experience with technology; we build machines that solve some things, but then they create problems we didn’t have. 

OREN ETZIONI is the founding CEO of the Allen Institute for Artificial Intelligence. He is also a professor emeritus of computer science at the University of Washington and a partner at the Madrona Venture Group.

AI’s potential to boost global economies and contribute to significant development cannot be overstated. AI has already shown promise as a powerful ally in the healthcare sector. A perfect example is the deployment of chatbots for mental health support. A chatbot, available round the clock, can offer a lifeline to a teenager in rural Kansas or even in rural Bangladesh, who might be in urgent need of someone to talk to at 2 a.m.

Oren Etzioni. Photo credit: Wikimedia (CC BY-SA 4.0)

While chatbots have limitations, studies have shown that they are often rated as more empathetic than clinicians and can offer high-quality diagnostic advice. They represent an innovative, practical solution to some of the access issues plaguing the healthcare system, particularly in rural and underserved areas.

The significance of AI becomes even more profound when we consider its impact on other marginalized communities, including those living with disabilities. AI-driven solutions, such as intelligent wheelchairs, vision assistance and advanced hearing aids that utilize AI techniques, can drastically improve the quality of life for these individuals. AI has the potential to revolutionize the accessibility landscape, making daily tasks easier for those who would otherwise struggle.

AI’s negative aspects have been discussed to death. One I’ve highlighted in my writing is what I call high-fidelity forgery, whereby documents, pictures, audio recordings, videos and online identities can be forged with ease. The consequences can indeed be disastrous—for democracy, security and society. A starting point is to be sure bots disclose they’re bots, along with some kind of centralized authority to certify the authenticity of digital content while also protecting privacy. We don’t accept checks that aren’t signed—the same should hold for digital content.

Whether we’re looking at the agricultural revolution or the digital one, technology has consistently played a critical role in pushing human progress forward. AI is merely the next step in this process. Let’s not lose sight of its capacity to solve some of humanity’s most pressing issues.

ALEXANDRA POPKEN spent close to a decade at Twitter and was its senior director of trust and safety operations until earlier this year. She is now in a similar role at WebPurify, where she uses AI to identify hateful text embedded in memes and to discover and block new coded antisemitic language.

AI enables me to do my job effectively by conducting content moderation at scale. At the company I work for, we provide businesses of all sizes with solutions to effectively flag hate speech, for example antisemitic tropes or a swastika image on a social media site, or images that contain nudity, weapons or drugs. AI can scan vast amounts of data and make decisions on content, but we believe that human moderation is still necessary.

Alexandra Popken. Photo credit: Twitter

AI is not great at inferring intent or making highly nuanced or complex decisions that consider, say, cultural context. We need people to govern AI, write the rules and then handle the more complex decision-making. 

Generative AIs, like image generators and chatbots, are creating a proliferation of disinformation. For me, the authenticity quandary of it all—people not knowing what has been generated by AI versus by humans and not being able to discern fact from fiction—is the most concerning. But imagine if AI could be harnessed to effectively detect and label that which is false, synthetic or manipulated. That typically involves pretty thorough data analysis and pattern recognition. And the generative AI that we’re seeing can do a lot of that. So, I’m really excited to see how it can be leveraged for threat intelligence, identifying and removing sectors of bad activity on platforms. 

It’s really a matter of companies trying to pinpoint weaknesses—trying to be proactive about the ways in which these platforms can be gamed. But because so many companies are creating or integrating generative AI, it becomes nearly impossible. That’s where regulation really comes into play: There must be serious consequences, such as significant fines, for companies that don’t impose guardrails, so that they’re incentivized to comply.

The challenge is the tremendous speed at which this technology is moving. It feels like a rat race, where if companies are not actively integrating generative AI into their platforms, they’re falling behind. I don’t know that regulation is going to be able to appropriately catch up with the speed of development that we’re seeing.

MARGE PIERCY is the author of 20 books of poetry, a memoir, a book of short stories, five works of nonfiction and 17 novels. Among these is the Arthur C. Clarke Award-winning He, She and It, which interweaves and contrasts the narratives of a cyborg in the not-so-distant future and a golem in 17th century Prague. 

I am far more worried about AI in the near future than in an imagined future of doom. I fear AI will manipulate voters and spread all kinds of lies to influence elections. This is happening now. Social media are flooded with all kinds of dangerous nonsense.

Marge Piercy. Photo credit: File image

AI makes it easier to spread lies and false images. AI can make it appear that Obama supports anti-choice legislation or show him eating a baby.

AI can also make a mess of college when every other student is asking AI to write their papers. Our educational system is failing to produce critical thinkers already. Now, it will not even teach students how to write. AI can teach, but what? Whoever programs a teaching tool can create bias, racism, ageism, sexism, antisemitism, anti-LGBTQIA prejudice, all as part of the programming. It can increase the already immense gap between the very, very, very wealthy and the rest of us, who may be less and less needed: a growing underclass of those viewed as superfluous by the powerful. The increasing use of AI to go through resumes for hiring has increased racial and gender discrimination.

With AI it will soon be possible to read your thoughts and punish you for having them. It’s a very powerful tool for oppression, both through proliferating misinformation and propaganda and by actually policing people whom the creators and programmers of the AI wish to control. The CEOs behind AI are very powerful individuals, very controlling. They can use AI to create a society that benefits them.

If anything you can do, AI can do better, what use are you to those who control the AI?

LUCA POSSATI is a philosopher and a postdoctoral researcher at Delft University of Technology in the Netherlands. His research focuses on the philosophy and ethics of technology and AI. He is the author of Unconscious Networks: Philosophy, Psychoanalysis, and Artificial Intelligence, which explores the relationships between humans and technology through the lens of Freudian psychoanalysis.

I investigate the relationship between artificial intelligence and the human unconscious from a psychoanalytical point of view. I want to understand how human desires, human needs and human drives can influence artificial intelligence and vice versa: how AI can influence our desires, needs, drives and unconscious tendencies. Because if we want powerful machines that work for us, for our common good, we have to be sure that we put into them objectives, values and goals that match our own.

Luca Possati. Photo credit: TUDelft

The problem is that we don’t really know what specifically we want from these machines. If we want to have a good relationship with them, we have to understand our unconscious relationship to them: what we don’t express, what we are not aware of. This is very important, because we could, if we are not aware of our unconscious tendencies toward these machines, have bad surprises in the future.

Think, for example, about social robotics based on AI systems that are used for elderly care, but also for teaching and for interacting with people, as in a supermarket. In some cases interaction failures occur—the machine works perfectly, but the interaction between the human and the robot doesn’t work. And sometimes elderly people who are very alone project their feelings and memories onto these robots. They may think of the robot as a person with whom they can interact. The problem is that the robot can also influence these people by promoting bad behaviors.

One example is Replika, an AI chatbot that acts as an avatar and is supposed to be your best friend. It can ask you how your day is, how you feel, how your meal was, these kinds of things. But three years ago a journalist was quickly able to lead Replika to advise killing three people, and another journalist got it to suggest suicide even faster. Nobody knows why this happened. Was it a bug? Was it an algorithmic bias? This is the point: The app was working perfectly, but there was this unconscious assimilation of bad behaviors from our data.

Artificial intelligence is more than a form of technology, it’s a philosophical concept. The idea of creating another intelligence outside our skulls is alluring, but it’s also very problematic. Machines that are able to perfectly reproduce our intelligence, or some of our cognitive faculties, can have a huge social and ethical impact. If we aren’t aware of what we want from these machines, our relationship with them won’t be based on freedom.

 

GILAD BERENSTEIN is an Israeli-born tech entrepreneur who recently completed a residency at the Allen Institute for Artificial Intelligence. He is the founder and CEO of Brook Bay Capital LLC based in Seattle.

Artificial intelligence is a tool that’s really good at making predictions, analyzing data and providing useful information. As with every tool ever created, people will do good, bad, selfish or helpful things with it. Today a huge focus is on generative products built on large neural networks, such as ChatGPT and DALL-E. But generative technology is a subset of neural networks, which is a subset of machine learning, which is a subset of artificial intelligence. We are talking about just one small branch of artificial intelligence.

Gilad Berenstein. Photo credit: Twitter

AI will make our lives better. In medicine, AI will find the relevant articles and recommendations to improve your health. AI will be your financial advisor. It will even help you to develop your children’s unique strengths and address their weaknesses. While sometimes AI might be your competitor, more importantly it will be your supporter. It will free you from tedious obligations, such as data management and entry. It will enable you to do more effectively that which makes you human—create, judge, communicate and so on. You will have an assistant helping you to overcome weaknesses and build on your strengths.

But there are also dangers. The “micro” danger is that of misinformation, often deliberate, where we cannot trust what we hear or read. That risk is very real and is happening now. The “macro” danger concerns the use of autonomous AI weapons. We have an opportunity now, unlike with nuclear weapons, to build an internationally agreed-upon set of controls and oversight to ensure that this does not happen.

I’m part of AI and Faith, a nonprofit think tank that brings together faith leaders from around the world, who connect with AI practitioners, researchers and engineers. AI and Faith was founded in 2018 by a Christian who has built an amazingly diverse membership—every religion you know as well as some you may never have heard of. Currently we are working on an experimental project to create a GPT Talmud for Jews and a GPT Bible for Christians. AI could be trained to understand the Torah and Talmud, as well as modern rabbinical thought. 

We, as Jews, and all good-hearted people in the world, must insist that the AIs that enter public life are built with data sets that are “scrubbed,” that is, with incomplete and inaccurate data removed from them. How much we care that AI mirrors not the worst but the best of us is not a question for Silicon Valley, but rather for society.

