Amid the spate of recent stories on artificial intelligence (AI), Catholic News Service carries one interviewing Fr. Phillip Larrey of the Pontifical Lateran University in Rome.
Fr. Larrey says that Silicon Valley technology companies are now consulting with religious leaders on matters related to AI—apparently including the nature of consciousness, humanity, and the purpose of life.
When it comes to defining consciousness, good luck! This is a perennial problem that nobody has a good handle on. Consciousness is something we experience, but defining it in a way that doesn’t use other consciousness-related terms has proved nigh onto impossible, at least thus far.
It’s clear that consciousness involves processing information, but being able to process information doesn’t mean having the awareness that we refer to as consciousness. Mechanical adding machines from the 19th century can process information, but they aren’t conscious.
Neither are computers, robots, or AI. The Star Trek: The Next Generation episode “The Measure of a Man” featured a line that bluntly summarized the situation: “Data is a toaster.” He may look and act human, but Mr. Data not only has no emotions, he has no consciousness (though the episode tried to pretend otherwise). He’s just a data-crunching machine.
The same will be true of any silicon-based AIs we have or will come up with in the foreseeable future. They may be programmed to sound human, and—hypothetically—they could one day process information better than a human, but all they will be doing is shuffling symbols around according to rules. They will not have genuine consciousness.
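To make the “shuffling symbols around according to rules” point concrete, here is a toy sketch (mine, not anything from the CNS story) of an ELIZA-style chatbot in Python. It can sound vaguely conversational, yet all it is doing is matching patterns and substituting strings:

```python
import re

# A few hard-coded pattern/response rules. The program "converses,"
# but it is only rewriting strings -- there is no understanding behind it.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when no rule matches

print(respond("I feel anxious about AI"))  # Why do you feel anxious about AI?
print(respond("I am worried"))             # How long have you been worried?
print(respond("The weather is nice"))      # Tell me more.
```

Modern AIs are vastly more elaborate than this, but the underlying point is the same: rule-following symbol manipulation, however sophisticated, is not awareness.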
Still, it’s good that tech companies are talking to ethicists and religious leaders about the impact these technologies will have on human lives. According to the CNS piece:
He [Fr. Larrey] also identified potential adverse effects of AI for everyday users, noting that minors can ask chatbots for advice in committing illicit activities and students can use them to complete their assignments without performing the work of learning.
A major downside of AI, he said, is that “we become dependent on the software, and we become lazy. We no longer think things out for ourselves, we turn to the machine.”
When it comes to minors asking AIs how to commit crimes, I’m sure that tech companies will come up with ways to stop that. (“I’m sorry, Dave. I’m afraid I can’t answer that question.”) Legal liability alone will ensure that they do.
However, education will adapt to student use of AI—at least in some situations. Back in the 1970s, when electronic calculators first became widely affordable, people worried that they would make kids lazy and keep them from memorizing their multiplication tables. But you don’t need to memorize everything you once did when you can rely on computers for the answers, and math classes today routinely incorporate calculators.
The same thing will happen with AI in education. It will take time, and there will be some tasks for which the use of AI will be prohibited, but eventually educators will figure out ways it can be incorporated, and Fr. Larrey acknowledges this in the piece.
Perhaps the most chilling part of the story comes when it says:
The pope urged [an audience of tech leaders] to “ensure that the discriminatory use of these instruments does not take root at the expense of the most fragile and excluded” and gave an example of AI making visa decisions for asylum-seekers based on generalized data.
“It is not acceptable that the decision about someone’s life and future be entrusted to an algorithm,” said the pope.
Amen! Part of the reason is that we no longer really know how modern algorithms work. They aren’t explicitly programmed with human-written rules; they’re trained, so they get judged on their results (e.g., does the YouTube algorithm keep you watching videos?) while even their creators can’t clearly explain the specifics of what’s happening under the hood.
This results in algorithms making mistakes, and companies like Google, Facebook, and YouTube are already bureaucratic black boxes that make secretive decisions to the detriment of their users.
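To see why even the builders can’t simply read off the reasoning, here is a minimal sketch (again mine, purely illustrative) of a trained classifier. The “decision” at the end is just arithmetic over weights that an automatic fitting procedure produced; nobody wrote a rule saying when to answer yes:

```python
import math
import random

random.seed(0)

# Toy "training data": two numeric features per case and a yes/no label.
# A real system would have thousands of features and millions of cases.
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.8, 0.9), 1)]

w = [random.random(), random.random()]  # learned weights -- authored by no one
b = 0.0

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # probability of "yes"

# Gradient-descent training: the weights are nudged automatically to reduce
# error. The resulting numbers encode the model's "reasoning," but they
# don't correspond to any human-readable rule.
for _ in range(5000):
    for x, y in data:
        grad = predict(x) - y
        w[0] -= 0.1 * grad * x[0]
        w[1] -= 0.1 * grad * x[1]
        b -= 0.1 * grad

print("learned weights:", w, "bias:", b)
print("decision for (0.85, 0.75):", "yes" if predict((0.85, 0.75)) > 0.5 else "no")
```

Scale this from two weights to billions and you get the black-box problem the pope is worried about: the outputs can be measured, but the internal rationale can’t be inspected the way ordinary program logic can.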
There are thus real dangers to AI. Even assuming you don’t give them autonomous firing control in a wartime situation, nobody wants to hear, “I’m sorry, but the AI has determined that curing you would be iffy and expensive, so your fatal disease will just be allowed to run its course.”
Religious leaders need to be involved in this conversation, so it’s good to hear that tech companies are consulting them.
Data may be a toaster, but he shouldn’t become a creepy, opaque toaster with the power of life and death.