
Artificial Intelligence and the Faith

Will humans ever create a computer with a soul? Will we ever have to extend charity to robots?

Jimmy Akin

Artificial intelligence is hot right now. There are lots of news stories about it. Search engines are starting to use it in new ways. So are robots. And some are speaking of a coming singularity.

What do faith and reason tell us about all this?

Can a robot have a soul? Is there any truth to sci-fi movies like Blade Runner? Does a point come where we should consider artificial intelligences “our neighbor” (see Luke 10:29-37)? Let’s take a look at these issues.

First, what is artificial intelligence? It can be defined in different ways, but put simply, artificial intelligence (AI) is the ability of machines to mimic the performance of humans and other living organisms on tasks that require the use of intelligence.

There are many forms of AI, and most of them are very limited. Early mechanical adding machines were capable of performing simple mathematical feats that otherwise required human intelligence, so they could be classified as a form of primitive AI.

Today, aspects of AI are used in all kinds of devices—from computers and smartphones to washing machines and refrigerators. Basically, anything that has a computer chip in it has some form of AI operating.

However, people tend to reserve the term for more impressive applications, especially those that have not yet been developed. The “holy grail” of AI research is producing what’s known as artificial general intelligence, or strong AI. This is often understood as endowing a mechanical system with the ability to perform intelligence-based tasks as well as or better than a human.

What is the singularity? Some authors speak of a coming technological singularity—that is, a point where technological development becomes uncontrollable and irreversible, transforming human life and culture in unforeseeable ways.

The development of strong AI could play a role in this event. Science fiction author Vernor Vinge sees the singularity as involving the development of strong AI that can keep improving itself, leading it to surpass human intelligence.

Some authors have proposed that the singularity is near, that we may be living through its early phases, and that it will truly take hold between 2030 and 2045.

However, others have been skeptical of this, arguing that we are not anywhere close to having strong AI, and we may never be able to develop it. Further, it can be argued that the trends that would lead to a singularity may break down.

For example, Moore’s law—the observation that the number of transistors on a chip, and thus computing power, doubles about every two years—is either breaking down or has already broken down. Without major, continuing improvements in computing power, developing strong AI or reaching a singularity would be considerably less likely.
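To make the scale of this trend concrete, here is a minimal sketch (in Python, chosen only for illustration) of the exponential growth that Moore’s law implies, assuming a strict two-year doubling period:

```python
# Exponential growth implied by Moore's law, assuming a strict
# two-year doubling period (the rate cited above).
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth in computing power after `years`."""
    return 2 ** (years / doubling_period)

# Over 20 years, a strict two-year doubling implies a
# 2**10 = 1,024-fold increase in computing power.
print(moores_law_factor(20))  # 1024.0
```

The compounding is the whole story: if the doubling stops, the thousand-fold gains stop with it, which is why a breakdown in this trend matters so much for singularity forecasts.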

Can robots have souls? No. Since the time of the ancient Greek philosophers like Aristotle, the soul has been recognized as the thing that makes your body alive, and as James 2:26 notes, “the body apart from the spirit is dead.”

Souls are associated with living organisms, and robots and computers are not alive. Therefore, they do not—and cannot—have souls.

This is not to say that artificial life cannot be developed. That is a separate question, and alternative life chemistries are conceivable. However, entities that would be genuinely alive would not be computers or robots as they are presently understood.

Is there any truth to movies like Blade Runner? There are truths contained in all forms of fiction, but if the question means, “Are we likely to have replicants like the ones depicted in Blade Runner?,” then the answer is, “Not any time soon.”

In the movie Blade Runner, Harrison Ford’s character hunts down “replicants”—artificial creatures that can be distinguished from humans only by subtle psychological responses elicited under testing.

These beings are apparently biological in nature. If they weren’t—if they were just robots—then you wouldn’t need to apply a psychological test. You could just perform what might be called the “Shylock test” from Shakespeare’s The Merchant of Venice.

In the play, Shylock argues that Jews are like other people by saying, “If you prick us, do we not bleed?” All you’d need to do to unmask a human-looking robot (i.e., an android) is stick it with a needle, see if it bleeds, and then do a blood test.

Such a test would apparently not unmask a replicant. Although we are beginning to build synthetic lifeforms (they’re known as xenobots), we are nowhere close to being able to build a synthetic lifeform that could pass as human. Neither are we anywhere near being able to build androids that could.

Does a point come where we should consider artificial intelligences “our neighbor”? The short answer is no, but it comes with a qualification.

To see the principles involved, consider the case of animals. Non-human animals do not have rights, but this does not mean that we can treat them with utter disregard. We can use them to serve human needs, but as the Catechism states, “it is contrary to human dignity to cause animals to suffer or die needlessly” (2418).

The reason that we cannot be wantonly cruel to animals is that doing so is contrary to human dignity—that is, there is a defect in the human who treats animals completely callously. Even if a dog has no intrinsic rights, for a human to torture a puppy for fun reveals that there is something broken in the human.

Of course, AIs don’t have the ability to suffer, but they can act as though they do. To deliberately stimulate an AI in a way that caused it to appear to suffer—and, say, beg for mercy—would be the equivalent of deliberately playing a torture-based videogame where the player inflicts intentional suffering on a simulated victim for fun. In fact, since videogames run on AI engines, that’s exactly what the player would be doing.

Yet we would recognize that something is wrong with a person who derives pleasure from deliberately torturing a videogame character—say, ripping out the character’s fingernails in order to hear it scream and beg.

The position of AIs is thus similar to the position of animals. AIs do not have rights, can be used to serve human needs, and should not be regarded as equivalent to human beings. They are not “our neighbor,” no matter how smart they become. However, to the extent they simulate human responses, we should interact with them in a way that isn’t cruel.

Not for their sake, but for ours.
