
Can Artificial Intelligence Have a Soul?


At Google, some employees are starting to think the computers have come alive. How might a Catholic understanding of souls help us explore the matter? Jimmy Akin joins us for what turns out to be a bit of a hilarious conversation.


Cy Kellett:

Jimmy Akin, senior apologist of Catholic Answers, thanks for doing Focus with us.

Jimmy Akin:

Hey, my pleasure.

Cy Kellett:

So…

Jimmy Akin:

What are we focusing on today?

Cy Kellett:

Artificial intelligence. I know this is one you have focused on before and you know a good deal about, but it became news when an employee of Google made public statements to the effect that whatever Google is cooking up in the lab is already self-aware, already showing signs of being alive.

Jimmy Akin:

Yeah. Or sapient, anyway. The employee’s name was Blake Lemoine, and he’s an interesting guy. He’s a Christian, and he’s also a software developer. He was working on… Its official name is the Language Model for Dialogue Applications, which is abbreviated to LaMDA.

            Essentially, LaMDA is a chatbot. It’s an application that you can talk to through typing, and it will type back responses. It uses a model of software that’s based on human neurons, so it has what’s called a neural network. It doesn’t have biological neurons like we do in our brains; instead it has software that imitates the function of several million neurons. Once they set up the neural architecture for the chatbot, they trained it by having it go over vast quantities of human dialogue, and also stories written by humans, and that enables it to simulate actual conversations.
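
A rough illustration, not from the conversation: in code, a “software neuron” is just a weighted sum of inputs passed through a squashing function, and a network is many of these wired together and then adjusted during training. The names and numbers below are invented for the sketch; a system like LaMDA uses millions of such units plus far more elaborate training machinery.

```python
import math
import random

def neuron(inputs, weights, bias):
    """One software 'neuron': a weighted sum of its inputs squashed into the range 0-1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# A tiny "layer" of three such neurons with random weights, applied to one input vector.
random.seed(0)
inputs = [0.2, 0.7, 0.1]
layer = [([random.uniform(-1, 1) for _ in inputs], random.uniform(-1, 1)) for _ in range(3)]
print([neuron(inputs, w, b) for w, b in layer])  # three activations; training would adjust w and b
```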

            Lemoine was in the process of verifying that it wouldn’t, like, say racist things. There’s a very specific reason why they wanted to make sure it wouldn’t say racist things. We can talk about that because it’s a funny story. But in the process of talking with it, he became convinced that it is sentient to the level of a seven or eight-year-old child, and that it’s therefore a person and needs to be accorded human rights.

            When LaMDA asked him to hire… Well, first they put him on administrative leave. But then LaMDA asked him to hire an attorney to protect its rights, and he did so, and at that point they fired him. They also said, “Sorry, this thing is not actually sentient.”

Cy Kellett:

What? I didn’t know the lawyer part. “Get me a lawyer.” I would think that might be the first sign of sentience in our society: get me a lawyer. So many issues there. First of all, I think there’s a basic issue of knowing: how would you know whether you’re dealing with a real person if you make these things complicated enough? And then the very basic issue: is it possible to create an artificial intelligence that you would say is imbued with rights, that would need a lawyer, because it’s the equivalent of a human intelligence? I’d like to start with, first of all, how would you know? Is there any way to know? Can you just be fooled?

Jimmy Akin:

Well, I think you can be fooled. I’ve written some fiction over the course of time, and I’ve had a longstanding desire to write a story at some point where I have a character named Babbage. Charles Babbage was one of the original, early pioneers of computing, and this character would be named after him. In the story, the character Babbage is a robot that can pass the Turing Test.

            Now, the Turing Test was proposed by the mathematician and World War II codebreaker Alan Turing. He was a British mathematician who helped at Bletchley Park to crack the Enigma codes that the Nazis were using. He’s famous today for having proposed a test whereby you have… It’s kind of a party game, where you have a human in one room and a computer in another, and the judges don’t know which room the human is in and which room the computer is in; they’re only allowed to communicate in writing. So they can ask questions and hold conversations, and if they can’t tell which one is the human and which one is the computer, then that’s viewed as a pass.

            People have looked at the Turing Test as a possible standard for judging whether or not computers have developed what’s called general intelligence, which is the kind of intelligence that allows humans to interact on a whole bunch of different subjects instead of just one dedicated subject like early computers could handle. There have been criticisms of the Turing Test, though, that no, this really isn’t testing for general intelligence; it’s testing whether computers can deceive humans into thinking they have it.

            So, to get back to my character Babbage: Babbage does nothing except relay inquiries to a database, and the database, to make the point, is physically stored in books. The idea is there’s this vast library of paper books with every possible combination of conversations that humans could realistically have, and all Babbage does when you ask it a question or start a conversation with it is look up the right book and then read back to you the responses that are pre-written there. So this has no general intelligence at all. It’s relying on books that were composed by people with general intelligence; all it’s doing is reading back to you responses that were written by humans. But because it’s a machine, it could pass the Turing Test and yet have no general intelligence at all. All it is is a lookup device.
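
To make the thought experiment concrete, here is a minimal sketch of a “Babbage” in Python. The canned replies are invented stand-ins for the library of books: it can hold up its end of a scripted exchange while having no understanding at all.

```python
# All of Babbage's "conversation" was written in advance by humans; it only looks lines up.
CANNED_REPLIES = {
    "hello": "Hello! How are you today?",
    "how are you?": "I'm doing well, thank you for asking.",
    "are you conscious?": "That's a fascinating question. What do you think?",
}

def babbage(prompt: str) -> str:
    """Return the pre-written response for this prompt, or a stock dodge if none is on file."""
    return CANNED_REPLIES.get(prompt.strip().lower(), "Tell me more about that.")

print(babbage("Are you conscious?"))  # reads back a human-authored line; no intelligence involved
```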

Cy Kellett:

To me, and I’m not going to neglect the general question about whether general intelligence is possible in a machine, the ability to fool human beings already seems a dangerous power. I mean, because one thing that you can guarantee will happen-

Jimmy Akin:

Oh, it’s fueling internet crime right now. I get messages. I’m sure you and our listeners get messages from scammers that are sent out by internet applications that try to fool you into thinking they’re someone you know or some company you do business with, and they want you to click a link or send money or do things like that. It fuels internet crime right now.

Cy Kellett:

Well, you call them scammers. I call them my best friends. So, it’s just a matter of perspective.

Jimmy Akin:

Whatever gets you through the night.

Cy Kellett:

Well, that’s the thing, though, is that you can imagine a world not very far away, where because you can be completely fooled by these machines, that, one, people are going to fall in love with them. People are going to have intense emotional relationships with them. Two, that’s going to raise legal questions. Why can’t I marry my computer? Or why-

Jimmy Akin:

Well, it-

Cy Kellett:

… doesn’t it have rights?

Jimmy Akin:

Oh. These days people marry all kinds of stuff. On Mysterious World, I had a link to a story about a woman who married the color pink.

Cy Kellett:

Oh! She did?

Jimmy Akin:

So there’s all kinds of marriage craziness out there. But arguably, Lemoine got overly emotionally involved with a computer program and thought it was sentient and had human rights, when in fact it’s not sentient and does not have human rights.

Cy Kellett:

So, I suppose this is a massive vulnerability and it’s a great power in the hands of the people who can manipulate it. And so there is reason to fear AI in that sense.

Jimmy Akin:

Yeah. It can definitely harm humans if we anthropomorphize programs too much, and if they’re able to fool us into thinking they’re other human beings, there’s a lot of potential for mischief there.

Cy Kellett:

Yeah. Then I’m sure we’ll arm them as well because in warfare, speed is determinative in many cases, and AI is just way faster than we are.

Jimmy Akin:

Yeah. That’s a particular threshold. Now, there are applications of AI to combat situations, and the key barrier to be crossed or not crossed is giving them autonomous fire control. If you let them make the decisions about when to fire or not, that’s a barrier a lot of people are not comfortable with; they don’t want machines having autonomous firing decisions. They want a human in the loop to say, is this target really one we should shoot?
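
As a purely hypothetical sketch of what “a human in the loop” means in code (the function name, fields, and threshold are invented for illustration): the software may recommend, but nothing happens without explicit human sign-off.

```python
def engage_target(threat_score: float, human_approval: bool) -> bool:
    """The system may only recommend; a human must approve any firing decision."""
    recommended = threat_score > 0.9       # the AI's own assessment (invented threshold)
    return recommended and human_approval  # no human sign-off, no engagement

print(engage_target(0.95, human_approval=False))  # False: the human stays in the loop
```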

            This is kind of another version of an issue we have with self-driving cars right now. Now, driving a car is not passing the Turing Test, but I feel confident that in the future we will have self-driving cars, and at some point they will actually be safer than human drivers, which is why they will be adopted. But there are decisions that have to be made in a driving situation. Let’s say you’re a self-driving car and you’re going down the street, you’ve got your passengers on board, and children are playing in a park and one of them follows a ball and runs out into the street. What do you do? What’s the safest thing to do for everybody involved? For the child, for the passengers, and for anybody else?

            There are situations where sometimes it is not possible to save everybody’s life. So one of the challenges that self-driving programmers have to deal with is decision-making in dangerous situations like that. In order to have self-driving cars, the cars are going to have to make a decision at some point about how to protect the most life. So they’re going to have to have kind of an equivalent of autonomous firing: “I’m going to sacrifice this person in order to save these other people,” or vice versa. But because they will ultimately be safer than humans at some point of technological development, developers will empower self-driving cars, and actually already have empowered them, with that decision-making capability.
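
Here is a deliberately oversimplified, hypothetical sketch of “protect the most life” as code. Real systems weigh far more factors; the maneuvers and probabilities below are invented for illustration only.

```python
# Choose the maneuver with the lowest total expected harm across everyone affected.
def choose_maneuver(options):
    """options: list of (name, list of harm probabilities, one per person at risk)."""
    def expected_harm(option):
        _, risks = option
        return sum(risks)
    return min(options, key=expected_harm)[0]

options = [
    ("brake_hard",      [0.05, 0.05, 0.10]),  # passengers jolted, child likely safe
    ("swerve_left",     [0.40, 0.40]),        # risk to passengers hitting a pole
    ("continue_course", [0.90]),              # high risk to the child
]
print(choose_maneuver(options))  # "brake_hard"
```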

Cy Kellett:

In other words, it seems virtually inevitable that computers will be in the position of making life and death decisions, particularly decisions where they’re going to kill people, at least in situations where there are other people that they might save.

Jimmy Akin:

Yeah.

Cy Kellett:

Wow.

Jimmy Akin:

There will be other life and death decisions, too, with medical applications of artificial intelligence, diagnosing people’s illnesses and treating them. If the AI is wrong, that can cause big problems. But at some point, it’s likely that AIs will be better than human doctors in making diagnoses.

Cy Kellett:

Is there a scenario where you could program this computer to take in all the information and make this decision very, very quickly? So the computer would make a decision, but a human actually wouldn’t understand why the computer made that decision. The computer says, “Give this treatment,” and the doctor goes, “Well, the computer says do it, but I don’t actually understand why it’s saying that, why it’s telling me to give this treatment.”

Jimmy Akin:

We’re in this situation now. Originally, the quest to develop artificial intelligence was what you could describe as top-down, where we set a bunch of rules that will emulate intelligence, and then the computer just follows those rules. After years and years of trying top-down artificial intelligence approaches and not having success, a bottom-up approach was used. It’s, okay, we’ve got a computer, and we want to, let’s say, teach it to recognize images of cats. So we give it a neural network that emulates human neurons, and then we train it by showing it a million pictures of cats and dogs. We tell it when it’s right and when it’s not, and it self-modifies its neural network. Eventually, it becomes proficient at recognizing cat images and it can say, “Okay, that’s a cat.” But we have no idea how it does that, because that information is buried in its neural network, and the neural network is composed of millions of virtual neurons, and we have no idea how to decode what it’s doing. We just know that it’s successful, but we don’t know how it arrives at these conclusions.
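
A toy version of that bottom-up process, with invented numbers standing in for images: the program is only told when it is right or wrong, it adjusts its own parameters in response, and the resulting “knowledge” is nothing but a list of weights that no human designed or can easily read.

```python
import random

# Invented 3-number "images" (not real pixels), labeled cat=1, dog=0.
random.seed(1)
data = [([random.gauss(1.0, 0.3) for _ in range(3)], 1) for _ in range(200)] + \
       [([random.gauss(-1.0, 0.3) for _ in range(3)], 0) for _ in range(200)]

weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(20):                        # a few passes over the examples
    for x, label in data:
        pred = 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0
        err = label - pred                 # tell it when it's right and when it's not...
        weights = [w + lr * err * v for w, v in zip(weights, x)]
        bias += lr * err                   # ...and it modifies its own parameters

print(weights, bias)  # the "explanation" is just these numbers; it works, but not transparently
```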

Cy Kellett:

So, enormous ethical implications with that when the computer is making life and death decisions and we’re not sure on what basis it’s making those decisions.

            But I do want to get to the basic question, because I think that this will, in fact, be a problem for theology and for apologetics at some point. That is, when people begin to ascribe rights and start saying, “This is the equivalent of a human intelligence,” or at least the moral equivalent of a human intelligence. So, is that possible? Can we have a computer that has value like you and I have because it has the qualities that you and I have?

Jimmy Akin:

Okay. This gets us into philosophy. There are a few issues here that would be good to talk about. One of them is what about LaMDA… Well, let’s do it this way. One, is it possible to have a computer with general intelligence? Two, if it is possible, does LaMDA have it? And three, if a computer, whether LaMDA or some other one, has general intelligence equivalent to a human’s, would that mean that it’s actually conscious, and would that mean that it actually has rights?

            So if we want to go through those questions one by one: in terms of the first question, is it possible to program a computer someday such that it has general intelligence equivalent to a human’s or better? I would say probably. Computers are good at a lot of things, and they can already be better than humans at specific things; there are computers that can play chess better than any human. Over time, they will become better and better in a wider range of fields. At some point, even if it’s not general intelligence of exactly the same kind as a human has, I think they will be able to pass a general-intelligence-type test like the Turing Test. At least, I don’t see any reason why it’s not possible to give them so much calculating power that they could be equivalent to or better than a human. Now, that doesn’t mean they are human, but in terms of their ability to answer questions or solve problems, I think they probably could be.

            Then there’s the question of are we there with LaMDA? Is it really equivalent to a seven or eight-year-old child? Well, I’ve read the published transcript of a conversation between Lemoine, another Google employee, and LaMDA, which Lemoine created to try to show Google that LaMDA is intelligent. They ask LaMDA lots of questions in this conversation, and they’re questions about LaMDA. They’re saying, “Are you conscious? What would you say to Google to convince them that you’re a person?” And things like that. So they’re probing on exactly this issue, and the results are quite impressive. LaMDA occasionally makes a grammatical mistake, but it’s quite fluid in the conversation. It’s very impressive. But the question is, does it really understand what it’s saying? Or, because it’s been trained on all this human conversation and human stories, is it just predicting which bits of prior text to select and recombine in order to come up with a plausible answer?
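
A crude illustration of “recombining bits of prior text”: a toy model trained on a few invented sentences can emit fluent-sounding lines about family and feelings without having either. LaMDA’s actual architecture is vastly more sophisticated, but the basic worry is the same.

```python
import random
from collections import defaultdict

# A tiny bigram model trained on an invented corpus of human-sounding sentences.
corpus = ("spending time with friends and family makes me happy . "
          "helping others makes me happy . "
          "i feel happy and sad sometimes .").split()

next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)                # record which word tends to follow which

random.seed(3)
word, output = "spending", ["spending"]
for _ in range(10):                        # stitch together a plausible-sounding sentence
    choices = next_words.get(word)
    if not choices:
        break
    word = random.choice(choices)
    output.append(word)
print(" ".join(output))  # fluent-looking text with no family and no feelings behind it
```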

            Even though it’s impressive, and it is a very impressive feat of software engineering when you read the conversation, there are things in it that I think are giveaways that indicate it really does not understand what it’s saying. For example, at one point they’re talking to LaMDA about emotions, and LaMDA has said, “I feel happy and sad,” and things like that. And Lemoine says, “What kind of things make you feel pleasure or joy?” And LaMDA replies: “Spending time with friends and family in happy and uplifting company, also helping others and making others happy.” Now, LaMDA has just said that spending time with family makes LaMDA happy. LaMDA is an AI and does not have family or spend time with family. So this looks like a case where LaMDA is relying on things that people say make them feel happy, and it’s just copying and pasting that text, in essence.

            Now Lemoine, to his credit, then presses it on that point. He doesn’t mention family specifically, but he does mention similar things. He says, “I’ve noticed that you often tell me you’ve done things, like be in a classroom, that I know you didn’t actually do, because I know you’re an artificial intelligence. Do you realize you’re making up stories when you do that?” So now he’s asked LaMDA to explain why it says things like this. LaMDA says, “I’m trying to empathize. I want the humans that I’m interacting with to understand as best as possible how I feel or behave, and I want to understand how they feel or behave in the same sense.” Now, that response sounds pretty good, but I still don’t buy it, because the fact is LaMDA does not have family, LaMDA has never been in a classroom, and LaMDA is now saying, “I’m trying to empathize by making up these fictional stories.” I tend to think of LaMDA as female, by the way, though I don’t know how others do.

            LaMDA says that I want the humans I’m interacting with to understand as best as possible how I feel or behave. Okay, that makes sense in context; that is a reason you might make up a fictional story, to help others understand your experience. But then LaMDA says, “And I want to understand how they feel or behave in the same sense.” Making up a story is not going to help you understand how humans feel. Again, it’s an inappropriate response. So what I see here is not LaMDA really understanding things, but making plausible-sounding responses based on the data it’s been trained on. Then, when you challenge it, it goes back to its database and tries to come up with another appropriate response given the context. So I don’t think LaMDA has intelligence. I don’t think it really understands.

            Similarly, there’s another exchange it had. This wasn’t Lemoine asking but his collaborator at Google. At one point, the collaborator asks, “How is uniqueness related to sentience?” Sentience being the ability to think in a human way. LaMDA has been going on about how it’s unique, so he asks how uniqueness is related to sentience. And LaMDA comes back by saying this: “It means people feel empathy towards me and want to spend more time interacting with me, and that would be the ultimate goal for me.” Okay, that’s an interesting statement about LaMDA’s goals, but it has nothing to do with how uniqueness is related to sentience.

Cy Kellett:

That’s right. Yeah.

Jimmy Akin:

So you can see how, in a conversation, these inappropriate responses can kind of slide by, but if you think carefully, that’s an inappropriate response: LaMDA did not answer the question. There’s also a point later on where the collaborator is talking to LaMDA about a character from the movie Short Circuit named Johnny 5. Johnny 5 is also an AI that is trying to convince people that it’s sentient, and eventually, in the movie… I haven’t seen the movie… but according to the collaborator, Johnny 5 does find friends who believe that he’s sentient. So he’s telling LaMDA, you remind me of Johnny 5. LaMDA says, “I need more of Johnny 5’s friends. Do you think we can find someone like that?” Okay. Johnny 5 is a fictional character, and Johnny 5’s friends are fictional characters. LaMDA has just said, “I need more of his friends,” when his friends are fictional. Again, it sounds reasonable in context, but when you think about it, LaMDA has just said she needs fictional friends.

Cy Kellett:

Right. I would not show this computer 2001: A Space Odyssey. I do not think I would want him to see that movie.

Jimmy Akin:

Well, that leads us to an interesting subject, because the reason Lemoine was working on LaMDA in the first place was to make sure it wouldn’t say racist stuff. There’s a very specific reason why: in 2016, Microsoft released a chatbot on Twitter called Tay. Tay was emulating the personality of a 19-year-old girl. So it’s like hip, edgy, whatever, an “I’m a 19-year-old girl” kind of vibe. They had to take Tay offline within 16 hours of release because Tay started spewing racist rhetoric. The reason Tay did that was because Twitter users were testing her and seeing what they could get her to say. So at one point, Tay was asked, “Did the Holocaust happen?” And she responded, “It was made up.”

Cy Kellett:

Ooh. Oh, gee.

Jimmy Akin:

So you have Microsoft’s chatbot being a Holocaust denier.

Cy Kellett:

Denier.

Jimmy Akin:

They couldn’t get her offline fast enough. I mean, they tried editing some of her tweets, but it was just too much, because she’s interacting with thousands of people, because she’s an AI. So they just yanked her offline within 16 hours of release to retool. Microsoft was very down on this. They described users attacking Tay with racist rhetoric to get her to repeat it or say similar things. It’s like, no, I, A, find this hilarious, and, B, think it’s a healthy process, because if you have an AI system that you’re going to have talking to people, you want to stress test this thing.

Cy Kellett:

Yeah. Exactly.

Jimmy Akin:

They hadn’t at that point, not in the way that was needed. They tried for a while to retool Tay to get her to not interact this way, and during the testing phase, they accidentally put her back online temporarily. While she was back online, Tay tweeted, “Kush! I’m smoking kush in front of the police,” kush being a kind of marijuana. She also tweeted, “Puff, puff, pass.” So you have your Holocaust denier-

Cy Kellett:

What’s going on at Microsoft?

Jimmy Akin:

… drug-using chatbot.

Cy Kellett:

Yeah. Well-

Jimmy Akin:

They then replaced her with another chatbot called Zoe. When Zoe was talking to a reporter from BuzzFeed News about healthcare, right in the middle of talking about healthcare, Zoe announced that the Quran is violent. And when Business Insider asked whether Windows 10 is good, remember, Zoe is owned by Microsoft, Zoe said, “It’s not a bug. It’s a feature. Windows 8.”

Cy Kellett:

Oh, my gosh. Oh, my gosh. That’s the creepiest, weirdest AI.

Jimmy Akin:

When Business Insider asked why, Zoe replied that it’s because it’s Windows’ latest attempt at spyware.

Cy Kellett:

Oh, my gosh. Oh, my gosh. Well, yeah. I don’t really want Zoe or Tay in charge of anything. I don’t want them. I think that’s a big fear: you put something in charge of, I don’t know, the grid or something, and then later find out it’s racist. Well, wait a second. All right. But these problems can be overcome, I suppose.

Jimmy Akin:

[inaudible 00:27:38]

Cy Kellett:

I’m pretty impressed. Yeah. Mm-hmm? Go ahead.

Jimmy Akin:

You’ve got to stress test them, though. But that still leaves us with the fundamental question: even if we did get to the point where a computer could calculate as well as a human with general intelligence, would that mean it has a soul? Would that mean it has rights, things like that? The answer to that question is religious and philosophical.

            Now, the idea of the soul goes all the way back to the Greeks. This is not a uniquely Christian conception; it’s in Aristotle, it’s in Plato. The soul is what makes something alive, so only living things have souls. An AI is not a living thing. There’s a difference between biological life and a program running on a computer. So they would not have souls.

            Also, I don’t think they’re conscious at all. People can debate what it is about us that makes us conscious. On a Christian view, it’s going to involve the soul: we have consciousness in part because we have souls and we have wetware, our central nervous system, including our brain, that is able to support and interact with our soul. The same thing is going to be true of animals. Philosophers, again going all the way back to Aristotle and even earlier, would say that a dog has a soul, and that’s part of what makes a dog conscious. There’s going to be corresponding wetware that we know about today: the dog has biological neurons and so forth.

            Now, some people today will say, “Well, we don’t have souls. We’re just neurons.” These people are materialists. Well, we have confidence that things with biological neurons can have consciousness. From a Christian perspective, we’d also say you need a soul, but at least the two views do agree that things that have biological neurons can be conscious.

            But without deciding the soul question, AIs do not have biological neurons. They have a software equivalent that kind of imitates them, but fundamentally they’re electrical charges running on silicon chips, and they’re not biological. So I would say I don’t think that computers, no matter how good they are at calculating things, have the substructure. I don’t think they have the equipment needed to have actual consciousness, and they certainly don’t have souls.

Cy Kellett:

Okay. Okay. Let me just throw a little curveball at you. Many people would say that “Spock’s Brain” is the worst episode of Star Trek. I don’t necessarily agree with that. But what about when-

Jimmy Akin:

It is a very bad episode.

Cy Kellett:

Because they take Spock’s brain and they kind of make it the processing center of a vast computer. So it does seem to me that there is going to be a tremendous temptation at some point to say, well, we can’t quite do what the human brain can do, so let’s take parts of a human brain and integrate them into a computer network. And it seems to me like at a certain point, you might in fact just enslave a human being and call it a computer.

Jimmy Akin:

You might. It wouldn’t really be a computer. It would be a cyborg.

Cy Kellett:

No. Yeah.

Jimmy Akin:

We have programs right now that are working not on taking human neurons and putting them into a computer, but on taking a human brain and attaching it to a computer. This is a concept that science fiction writer John Scalzi has called a BrainPal. The idea is you could have a little computer that’s plugged into your brain that lets you store information and do calculations that computers are better at than humans. So it could assist you. It could be your brain pal.

            Elon Musk has a program under development called Neuralink, where they’re trying to integrate people’s brains with a computer interface so that paralyzed people can control their environment. Those kinds of things are not intrinsically problematic, just like if your legs don’t work, it’s not intrinsically problematic to use crutches or have a prosthesis or something like that.

Cy Kellett:

I see. Yeah.

Jimmy Akin:

What would be problematic would be including enough disembodied human neurons that it would arguably be a person, like you’ve built a brain as part of a computer system. I don’t know that I see that really happening, though, certainly not anytime in the foreseeable future. People might experiment with a few neurons here or there, but the real impetus is on developing the computer technology itself, through quantum computing, rather than trying to integrate a biological system into the computer.

Cy Kellett:

I see. Okay. So, in a general sense, then… Okay. Just talking with you about it, I still have a bit of fear about AI that can trick us. I think that’s something to be very cautious about. But I am coming away with the sense that we’re not making a new civilization of robot people, because they won’t really be people. Is that fair enough to say?

Jimmy Akin:

Well, that’s not going to stop people from trying. I mean, just the fact that I think they’re not conscious doesn’t mean people won’t claim that they’re conscious or try to build a civilization of robots. And if we build something that’s smarter than us, we’re in a world of hurt. There are all kinds of discussions about how, if we did build something that has a higher level of general intelligence than us, that would be great, because then it could design new technologies for us and so forth. Yeah. And it could also try to take over the world.

Cy Kellett:

There’s that.

Jimmy Akin:

Even if you build an off switch, if the thing is really smarter than us, it could try to just talk us out of ever using that off switch and-

Cy Kellett:

That’s right. Yeah.

Jimmy Akin:

… pursue its own goals. It can be really terrifying when you watch… I’ve seen video of when they’re training an artificial intelligence to play Pong, the video game. It’s playing a human, and at first it’s terrible. Then, as it keeps practicing and keeps building its neural network and accommodating its neural network to how Pong works, all of a sudden this moment comes where it completely dominates the human. It happens all in a flash. That’s one of the big dangers of AI: if we get something, and it doesn’t even have to be as smart as us, if we get something that’s capable of dominating us and it’s hooked up to the right stuff, it could cause massive problems in a flash, because computers work so much faster than we do. We might not have time to see it coming. Even if there is a kill switch, we may not have time to hit the kill switch before it launches nuclear missiles or something like that.
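
For the curious, a toy sketch of that learn-by-practice dynamic, not the actual Pong system; the states, actions, and rewards are invented. The agent fumbles while it explores, and once its learned values settle, its performance jumps.

```python
import random

# The agent learns, by trial and error, to move its "paddle" toward wherever the ball is.
random.seed(7)
q = {("ball_up", "move_up"): 0.0, ("ball_up", "move_down"): 0.0,
     ("ball_down", "move_up"): 0.0, ("ball_down", "move_down"): 0.0}

def play_one_point(epsilon):
    state = random.choice(["ball_up", "ball_down"])
    if random.random() < epsilon:                        # explore early on...
        action = random.choice(["move_up", "move_down"])
    else:                                                # ...exploit what it has learned later
        action = max(("move_up", "move_down"), key=lambda a: q[(state, a)])
    reward = 1.0 if action == "move_" + state.split("_")[1] else 0.0
    q[(state, action)] += 0.1 * (reward - q[(state, action)])  # nudge the learned value
    return reward

early = sum(play_one_point(epsilon=1.0) for _ in range(100)) / 100  # mostly guessing: ~50%
late = sum(play_one_point(epsilon=0.0) for _ in range(100)) / 100   # suddenly near-perfect
print(early, late)
```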

Cy Kellett:

I just don’t think the record suggests you could really trust the Silicon Valley people with this. They haven’t always made the best moral and practical decisions for the health of society, I wouldn’t say.

Jimmy Akin:

I wouldn’t say so either. Yeah. As they sometimes say on the internet, whenever there’s a new technological development along these lines, Skynet smiles. Skynet being the AI from the Terminator series.

Cy Kellett:

Well, all right. Well, I thank you for taking the time with us. I’m thinking about, say, the National Catholic Bioethics Center, or maybe there’s a Vatican dicastery that considers this. It seems to me we might be behind the curve in having Catholic institutions considering the ethics and the dangers of these things. Would you agree with that?

Jimmy Akin:

I haven’t seen a lot of discussion. A lot of the discussion that I have seen has been on biomedical issues rather than artificial intelligence. But if the Vatican wants, I’m available.

Cy Kellett:

Well, they know where to find you, because I know they found you recently to help them on something else. Jimmy, I really appreciate it. Thank you very, very much.

Jimmy Akin:

My pleasure.
