By Tom Thomas & Verghese V Joseph –
Fr. Philip Larrey, Ph.D., is a professor of philosophy at Boston College and a Catholic priest. The Pontifical Lateran University in the Vatican appointed him to the Chair of Logic and Epistemology, where he oversaw the department of philosophy until joining Boston College. His writings focus on the philosophy of knowledge and critical thinking, and he has written several works about the effects of the new digital era on society. Two recent books, Connected World (Penguin) and Futuro Ignoto (IF Press), both emphasise this topic. In 2018, Mondadori published Dove inizia il futuro, the Italian translation of Connected World.
He has spent years researching the philosophical ramifications of artificial intelligence’s explosive growth. He openly confronts business executives he meets at the Vatican to talk about how technology is changing the face of society. In his recently published book Artificial Mankind, he explores the implications of AI research for mankind as a whole in a more philosophical manner. (Please visit his amazing and exhaustive website: philiplarrey.com)
It is a great honour for Indian Catholic Matters to interview Fr. Philip Larrey, a prominent Catholic figure who advocates for both the Catholic Church and society at large regarding the implications of artificial intelligence breakthroughs.
Fr. Larrey is actively engaged in establishing connections with influential figures and global leaders in the artificial intelligence space. We value his time in answering our voice chat questions. Transcripts of the interview below:
It seems a bit paradoxical that a Roman Catholic priest could end up heading the logic and epistemology department at the Pontifical Lateran University in Rome, considered the Pope’s University. Can you please share a bit about your journey, starting from your own vocation to the Catholic priesthood to this position?
In 1984, I relocated to Rome and commenced my preparation for the Catholic priesthood, mostly at the Jesuit-run Pontifical Gregorian University. I focused on analytical philosophy there, and most people’s definition of analytical philosophy includes logic and epistemology. The structure of human cognition and the nature of the human intellect are the subjects of logic and epistemology. After receiving my PhD in 1994, I immediately began teaching, and I did so until 2002, when I moved to the Pontifical Lateran University. A few years later, I was appointed chair of the Department of Logic and Epistemology.
Given that the majority of professors at the Pontifical colleges in Rome are Catholic priests, this is therefore not extremely unusual; my appointment made perfect sense in that regard. I became interested in artificial intelligence early in the 1990s because I was teaching a course on what constitutes human thought. I reasoned that by learning more about how machines were simulating human thought, we would be able to gain a better understanding of what human thought is all about. Around 2008 or 2009, there was a decline in artificial intelligence research because the technology, particularly the hardware and software, was still immature, which dampened some of the excitement surrounding the field.
These days, we have far more capable systems that can handle enormous databases and do logical calculations at blazing speeds. That’s the reason why some of the promises made about the capabilities of artificial intelligence are starting to materialise. My vocation as a Catholic priest is one that is committed to the intellectual life, and I view philosophy as being crucial to my work in artificial intelligence.
In my opinion, the Catholic Church has to speak up in this area, and it is starting to do so gradually. Because we disregarded the potential of artificial intelligence, I believe we are lagging behind the curve, even if we are now beginning to participate in the discourse.
You are also heading multiple other initiatives such as Humanity 2.0. Can you share briefly the goals of this organization and your specific mandate?
As you are aware, Humanity 2.0 is devoted to advancing human flourishing and acts as an accelerator for it. What do we mean by “human flourishing”? What are some barriers that prevent people from thriving, and how can we help initiatives that overcome them? Our organisation, which has only been in existence for a short while, holds an annual meeting at the Pontifical Academy of Sciences, a significant and influential venue for bringing together individuals from the various fields related to human flourishing. We hope that human flourishing can replace the other kinds of measures that determine a nation’s or company’s level of wealth in purely economic terms: gross national product, profit margins, and so on. We believe that the concept of human flourishing can provide a new matrix that includes more aspects of what it means to be prosperous as an individual, as a business, or as a nation. We’re committed to seeing that happen.
Pope Francis has been talking about artificial intelligence these days. Can you share what it has been like to interact closely with the Pope on these technology matters, and how receptive he has been to the adoption of this disruptive technology by the Catholic Church? Could you also share any quotes from Pope Francis in the context of artificial intelligence?
Pope Francis first started talking about artificial intelligence during his address on January 1st, the World Day of Peace. He dedicates that address to one theme each year, and this year he discussed artificial intelligence and peace: the ways in which AI can promote peace.
The Holy Father made references to the weaponisation of artificial intelligence and other cutting-edge technological advancements in the field of “lethal autonomous weapon systems” that have serious ethical concerns. Weapon systems that are autonomous can never be morally upright. The capacity that only humans possess for moral judgment and ethical decision-making is more than just a sophisticated set of algorithms, and it cannot be reduced to programming a machine—a machine, no matter how “intelligent,” is still a machine. It is crucial to guarantee sufficient, significant, and regular human supervision of weapon systems because of this.
Moreover, Pope Francis is a pope, not a computer scientist, software engineer, or CEO of a digital company. He leads the Catholic Church, which is primarily a spiritual power in the world. He will thus discuss artificial intelligence that is centred on people, and how technology might benefit mankind rather than work against it. He discusses some of the challenges associated with this new technology as we get to know it and develop more effective ways to use it.
He dedicated his second address, given on the World Day of Communications, to artificial intelligence. One of the intriguing topics he raises in that speech is the idea that deepfakes could change how we perceive reality. As you may know, a deepfake is lifelike audio, video, or imagery that appears real but isn’t.
Pope Francis goes so far as to say that he himself was the victim of a deepfake. The image was, of course, not real; you may remember the picture of him in a white puffer coat that looked incredibly authentic. Hence, he warns against falling for these deepfakes and the potential for twisting reality into something it isn’t.
And then there’s the case of US President Biden appearing to advise Americans not to vote in the New Hampshire primary a few months back. It wasn’t a recording of him; rather, it was an entirely fake audio robocall created by a man in Connecticut. The voice said, “Please don’t go and vote in the primaries because you will lose your vote for the general election in November,” which was absurd, but it did sound like President Biden. Even though it was absurd, some people were convinced by it, and millions of people received it.
There’s also the American singer-songwriter Taylor Swift, who is incredibly well-known worldwide and was the target of deepfake pictures that went viral just before the Super Bowl. She’s probably in Milan right now, where she has had two or three sold-out shows so far. Naturally, a lot of attention has been drawn to her boyfriend, Travis Kelce, the tight end for the Kansas City Chiefs, who just won the Super Bowl.
It is noteworthy, therefore, that the Pope is cognisant of some of the risks associated with this technology and advises caution. Pope Francis delivered his third major speech at the plenary session of the Pontifical Academy for Life, which is led by Archbishop Vincenzo Paglia. Father Paolo Benanti launched the well-known Rome Call for AI Ethics, and the two of them were just in Japan last week, meeting with Asian leaders, many of whom had also signed the call.
Deploying AI for human benefit rather than harm is a conscious choice. Pope Francis’s most recent address, of course, was delivered at the June G7 summit in Apulia, Italy. He attended mainly for the second day of the meeting, during which artificial intelligence was discussed, and he gave a speech on the subject.
He was invited to do so by Giorgia Meloni, the Prime Minister of Italy. It was very unusual, because the Pope had never participated in a G7 summit, which is a completely political gathering, so it was very surprising that he would do this.
But I think the reason he did was that he sees the Catholic Church as not being influential in the development of the technology, and he wanted to influence, in some way, the development of this technology according to the principles of the Church.
We’ll have to see how effective that was, but it was a historic presence and obviously received a lot of attention in the media. You can find several quotes from his discourse there.
Although Pope Francis receives assistance from other experts when drafting these texts and speaking about these technologies, it was evident during his speech that he is well-versed in the terminology and ideas related to these technologies. Pope Francis is aware of the significance of this, and it appears that he is making an effort to influence the dialogue.
Let’s put it that way: he is trying to shape the conversation, so that AI is used for the benefit of humanity and not the contrary.
A lot of people use the term “artificial intelligence” these days without actually understanding what it means. Could you please define artificial intelligence and explain how the Catholic Church and its many members can benefit from it?
Artificial intelligence is a series of algorithms that use logical calculations in order to achieve programmable results. Although that is a simplified explanation, I believe it covers the key points of what artificial intelligence is. John McCarthy, the American computer scientist and cognitive scientist credited with founding the field, first used the term in 1956 at a workshop he organised with a number of other mathematicians and philosophers at Dartmouth College in New Hampshire.
For instance, Marvin Minsky, an American cognitive and computer scientist primarily interested in AI research and co-founder of the AI laboratory at the Massachusetts Institute of Technology, was present there. McCarthy asked: what is a Turing machine doing when it generates responses to queries from people? You call this artificial intelligence. Minsky went on to found the robotics centre at MIT, which is still active today and a leader in the field.
Now, it’s not actually intelligence. It is a logical computation structure that imitates human reasoning. We carry reasoning out mentally; the machine does it using ones and zeros. The foundation of the machines we use today is the binary system. What’s impressive about AI is its capacity to mimic human thought.
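The point about ones and zeros can be seen directly in any programming language. A minimal Python sketch (the character codes shown are standard ASCII):

```python
# Today's machines are binary at bottom: even the letters "A" and "I"
# are ultimately stored as patterns of ones and zeros.
for ch in "AI":
    # ord() gives the character's numeric code; format(..., "08b")
    # renders that number as eight binary digits.
    print(ch, format(ord(ch), "08b"))
# A 01000001
# I 01001001
```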
Now, if you’ve ever utilised ChatGPT-4o, where the “o” stands for omni… Let me put it this way: GPT-4o is a fresh take on GPT-4, the most recent iteration from OpenAI.
This latest version is quite impressive because it uses speech: the machine communicates with humans through spoken language. You can view some of the examples OpenAI provides on their website. It nearly seems as though it comprehends what we are saying.
Now, this is important. It’s a very impressive technology, and people are actually using it in order to form relationships with the machine. That could be another question at another time. But it simply is using statistics in order to respond to a human’s question or query or comment.
Statistically, what is the most likely word to come next? What would be the proper response or answer to the question, and so on. So it’s not thinking, it’s not reasoning as humans do, but it is impressive and it’s very fast.
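The statistical, most-likely-next-word idea can be sketched in a few lines of Python. The toy bigram model below is only an illustration of the principle, built on a made-up two-sentence corpus; real large language models use neural networks with billions of parameters, not simple word counts:

```python
from collections import Counter, defaultdict

# Tiny sample corpus (invented for illustration).
corpus = ("the pope spoke about artificial intelligence "
          "and the pope warned about artificial intelligence").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("artificial"))  # -> intelligence
print(most_likely_next("the"))         # -> pope
```

The model never “understands” a word; it only picks the statistically most frequent continuation, which is the point Fr. Larrey is making about chatbots, writ very small.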
These are, like I said, powerful systems, and they go a long way toward mimicking what human beings would call understanding. It’s not really understanding, but it certainly can simulate what we mean by understanding. Now, this is a huge philosophical topic. I taught a course dealing in part with these issues last semester; we spent two weeks on this question in class, with different readings.
So it’s very complex. Let me just say this.
As the chatbots get better and better at doing this, and they’re already impressive, other platforms have followed: Anthropic has come out with its version, called Claude; Google has Gemini; and Meta (Facebook) has its own as well. The platforms are getting very good at what are called large language models. But let’s take a step back and ask ourselves what the AI is actually doing.
The well-known futurist and transhumanist Raymond Kurzweil, who works for Google, asks, “Is there a difference if we can’t see it?” Can we still distinguish between a human speaking to us and an artificial intelligence speaking to us? Well, it’s becoming increasingly challenging to distinguish between them. He then asks, “Is there a difference if we can’t tell it exists?”
And that’s really a very good question, too. My answer is yes: a distinction will always exist, even if the similarities become so great that the two are practically indistinguishable. The reason, and I’m omitting a lot of steps here for the sake of time, is that human language is produced by the human intellect, a higher faculty of the soul. No machine has a soul.
Machines have systems, operating systems, and algorithms, which are very sophisticated. But when humans interact, it is, if you want, a soul interacting with another soul. Now, it’s not direct: the soul doesn’t interact with others except through the higher faculties, which are the will and the intellect. And even that is mediated through the senses and, philosophically speaking, the passive and active intellect; we have philosophical systems that address this.
Can the AI fool us into thinking there is a soul behind the language? Well, maybe. I don’t think so, but this is an open question. The engineers are busy at work trying, in a sense, to fake us out. I think there will always be a difference that the human being will be able to detect, but it’s becoming more and more difficult to detect it.
Some of my students last semester were creating AI programs to detect AI, because ChatGPT can be used to write papers, and it can fake people out on a phone call: using, for example, your daughter’s phone number and her voice to say that her life is in danger, can you please send $20,000 to the following person? And people have been scammed, many actually. So this is an issue; it’s getting very convincing. But again, that is a bad use of the technology, one that goes against human flourishing and human dignity. And so the conversation will continue.
How can artificial intelligence be harnessed for the Catholic Church?
Well, look at Magisterium AI. You’ve spoken with Matthew Sanders, who created it, and it is an excellent example of how AI can be used in the service of the Church. Magisterium AI helps people understand the position of the Catholic Church on any number of issues. It’s very precise, and it’s an AI trained on official documents of the Church. There are many other ways in which the Church can harness AI. As long as we’re not afraid of it, we can use it for good.
Also, click the link below to read an exclusive interview with Mathew Sanders, Founder & CEO of Longbeard, creators of Magisterium AI:
It’s Time to Learn the Teachings of the Church Through AI
You are the author of two thought-leading books on artificial intelligence, ‘Artificial Humanity’ and ‘Connected World’, which are not available on Amazon in India. Therefore, most of our readers would not have had a chance to purchase them yet. Can you please share a few brief takeaways from these books?
I’m surprised, because Connected World was published by Penguin and it was printed in India. I’m not sure where, but we published it with Penguin in 2017, and the copies were actually printed in India. I find it strange that it’s not available, and Artificial Humanity is available on Amazon in other countries. I’m not sure why it’s not in India.
So Connected World is a series of interviews I did with people involved with new technologies. Two of the most famous were Eric Schmidt, who was the head of Alphabet, Google’s parent company, at the time, and Martin Sorrell, who was in charge of WPP, the largest advertising company in the world. Both of them have since left and pretty much retired. That was seven years ago.
I also interviewed Bill Shores, who was part of the team at Motorola that created the cell phone. That was a fascinating interview, a great one. He recalls how the CEO of Motorola, this would have been in the 1970s, went to Tucson, where Motorola has a huge engineering lab, and brought in a walkie-talkie: you push a button to talk and let the button go to listen. And he said, you need to invent a technology that does this without the button. That’s exactly what a cell phone is. It’s called a cell phone because it has a radio connection to a cell tower somewhere near you.
As you know, police can trace your movements based on your phone’s presence on the different cell towers in a city. Now, if you’re in a place with no cell tower, the cell phone doesn’t work. That’s what happens when people say, oh, I don’t have any connectivity: it means there’s no cell tower nearby that the phone can connect to.
Now, you no longer push a button and let it go, because that’s the way the engineers designed the device.
But it is fascinating that Bill Shores said they invented the cell phone in order to connect us, in order to help us communicate. And yet now he sees that it is actually being used to separate us. We’ve never been more lonely than we are now, and we’ve never had less connection with each other than we do now, even though everyone has at least one cell phone, some two.
I also interviewed a former pilot for Alitalia, a great interview. And I interviewed Maurice Lévy, who was in charge of Publicis Groupe, a large advertising company in Paris. A wonderful friend, he’s on the board of Humanity 2.0, and every year he organizes the largest technology convention, called VivaTech, in Paris.
Elon Musk went last June; every year, some 50,000 people show up. It’s an amazing event, and he gave a wonderful outline. There’s also a philosopher, Johann Seabert, a very dear friend of mine, and we went into some of the philosophical implications of AI, which is what I’m most interested in doing. And then there were maybe another 10 or 12 long interviews, so that people were able to speak their minds in a deep way about these issues and where we’re heading in the future.
Artificial Humanity is, of course, my own account of the philosophical implications of artificial intelligence. The book, which has now been translated into Chinese, by the way, something I’m very happy about, focuses on how an Aristotelian-Thomistic framework in philosophy can be used to deal with the philosophical implications and consequences of artificial intelligence.
It makes the case for Aristotelian-Thomistic thought, which is, of course, the tradition of the Catholic Church, and how we can use that in order to guide our thinking on issues such as the nature of the human being. Is there a soul? Is there an afterlife? There are many people in Silicon Valley that are working on immortality. We could get into that also in a separate conversation.
What is the nature of the intellect? What does reasoning mean? What are our general concepts? These are all issues that are now coming out of Silicon Valley without any philosophical framework. It’s important, before we talk about these things, to agree on a philosophical framework, to have a shared vocabulary, for example, to look at the brain in a way that is not completely reductive. Many other issues require, I think, a philosophical context in order to be addressed. When I speak with engineers and CEOs of tech companies, they get it. One engineer in San Francisco asked me: okay, so humans have a soul, right? Yes, well, that’s not quite the right way of putting it.
A human being is composed of two co-principles: form and matter. The form of the human being is the soul; the matter is the body. Now, it sounds like dualism, but it’s really not; it’s duality, as Thomas Aquinas calls it. And so the engineer says, oh, can we not separate the soul from the body? Because he’s interested in mind uploading, immortality in a digital format, harnessing memories and emotions in digital storage, et cetera. And I said, oh yes, you can, but that’s called death. The separation of the soul and the body is the moment of death. And he says, no, no, we don’t want to kill anyone. I said, well, I know. But right now, we do not have technology that enables us to separate the soul from the body.
And I said, we never will. And he says, oh, we’re working on that right now. Of course, I’m sure they are. But that really gets into the relationship between the soul and the body. So I have my students read questions 74, 75, and 76 of the first part of the Summa, where Aquinas discusses the relationship between the soul and the body. It’s difficult reading; they didn’t particularly like it. It’s new vocabulary for them, so I go through it with them and try to explain the concepts, but if you’re not familiar with it, it’s very difficult.
But I would suggest that software engineers take a crash course on Aquinas. I am actually devising one and taking it to San Francisco; I have to go there next week. It’s useless to talk about the really cool themes that engineers want to talk about without a philosophical basis. That’s what Artificial Humanity is about. I also talk about transhumanism, and I discuss some movies; I think movies are a powerful way to communicate ideas.
I have an appendix on Ex Machina, a movie by Alex Garland, which speaks about AI and the “box” issue. The box issue was posed by Eliezer Yudkowsky, an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence.
Yudkowsky posed the scenario: what happens if we develop an artificial general intelligence, or AGI, and confine it in a cage? We ensure that it doesn’t come into contact with the outside world; it is prohibited from using the internet because, you know, we want to have control over it. Will it eventually persuade the gatekeeper to let it out? His answer is yes, because it will be smarter than the gatekeeper. This is the theory.
And Eliezer has a lot of information on his website, where he’ll take you through the whole thought experiment. It’s actually quite interesting; he has challenged many people, and they’ve all lost. So he’s warning us about superintelligence getting out, so to speak, escaping the cage. And that’s what Garland tries to tell us in the movie Ex Machina. I had my students see that movie this semester, and we talked about it.
Yudkowsky was asked by Time magazine to write an article, about two years ago, I guess. You can Google “Yudkowsky Time Magazine” and it will come up.
But the open letter calling for a moratorium on AI research and development was written by Max Tegmark, the Swedish-American physicist, machine-learning researcher, and author who is here at MIT, together with Tristan Harris, an American technology ethicist and the executive director and co-founder of the Center for Humane Technology, and Aza Raskin, both of whom live in California.
It’s an open letter, and they asked me to sign it, and I did; I’m something like number 1,472. SpaceX and Tesla’s Elon Musk is number one. Sam Altman, the CEO of OpenAI, originally signed it, and then on a later version he pulled out. Max Tegmark is a great pioneer in AI, and if you look at the signatories, there are a lot of them: the British computer scientist Stuart Russell, and the list goes on and on. And Tristan, who is a good friend of Eliezer, because Eliezer also lives in San Francisco, I believe, asked him: would you sign it? And Eliezer said no, because it doesn’t go far enough.
And if you read the article in Time Magazine, Eliezer says, once we create an artificial general intelligence, it will kill every human being on the face of the earth.
Now, this sounds exaggerated. It sounds like exaggerated rhetoric; it sounds like hyperbole. But it’s coming from one of the great pioneers in AI: Eliezer was one of the first to do research in AI and to start thinking about these issues. Eliezer was recently interviewed by Lex Fridman, the Russian-American computer scientist and podcaster. Lex has a podcast where he interviews famous people; you can see it on YouTube. It’s over two and a half hours, but it is a fascinating interview.
Eliezer says, we need to stop this now. We need to stop it right now. We don’t know where we’re going. We don’t have any guardrails. We don’t know what’s going to happen when we achieve artificial general intelligence, but everything we know about AI suggests it will destroy us. It’s not an issue that we can easily slough off; Eliezer is a formidable voice in this sector, and we should listen to him. Now, I try to be less apocalyptic. I think we’ll be able to get along with the AIs. We’ll learn to live with them. I’m not exactly sure how.
Everybody here in Boston is talking about what we need to do and how to create such systems. A lot of very smart people, like Max Tegmark, are working on this right now. It is scary to hear someone of the stature of Eliezer Yudkowsky say that we’re all dead. That’s a pretty somber notion.
Elon Musk said something here at MIT about four years ago which I consider particularly insightful. He said we have to make sure that artificial intelligence considers us an interesting part of the universe. Now, what does he mean by that? You can’t really write in rules, because the AIs will just change the rules. I just had a debate on this at MIT, actually. Rule-based ethics, like the Ten Commandments: those are rule-based ethics.
Immanuel Kant, the German philosopher and one of the central Enlightenment thinkers, has a rule-based system of ethics.
The ethical system we use most of all today is utilitarianism, developed by John Stuart Mill, an English philosopher, political economist, politician, and civil servant. In the end, I think that’s also a rule-based ethics, and I think that’s insufficient when we come to AI systems. Elon may have a point.
We have to make sure that they consider us an interesting part of the universe, so that they will live and work with us. My idea is human flourishing: AIs should be dedicated to human flourishing, and therefore they won’t destroy us.
It was interesting to note that some time back, Sam Altman, OpenAI CEO and creator of ChatGPT, was advocating for the regulation of artificial intelligence. Would you be able to share some thoughts on this?
Yes. Sam told me that he was more than open to asking the federal government to issue regulations for the big AI systems. Sam is a very smart person. He’s obviously dedicated to making money, and that was his divergence from Elon Musk, the other co-founder of OpenAI. But every time the senators have asked Sam to go to Washington, he has gone.
Last year, Senator Schumer from New York brought together a group of tech people. It was a closed-door meeting, so we didn’t really find out what was said, but I think there were 24 of them. Tristan Harris was invited, as was Sam Altman, the American entrepreneur and investor best known as the CEO of OpenAI since 2019 and chairman of the clean-energy companies Oklo Inc and Helion Energy, and considered one of the leading figures of the AI boom. Mark Zuckerberg was there, Elon was there, and Nadella from Microsoft. All the big names, obviously. The question, I think, as Senator Schumer put it, was: how can we regulate this technology?
Because it was interesting, on one side of the room, you had all the tech giants who understand AI but don’t understand how to make laws.
And then on the other side of the room, you had people who have really no understanding of the tech but are very good at making laws. How can we make these two worlds speak to each other? It’s not easy.
I remember in San Francisco, a reporter asked me: what are you doing talking with Sam Altman?
We connect every now and then. He was going to come speak to my class in May, but he was at MIT and Harvard at the time, so the timing didn’t work out. She said, you know, Sam is the problem. And I said, no, no, Sam is not the problem; he’s part of the solution. Oh, well, he’s interested in money and the tech.
And I said, no, that’s not true. He’s very conscientious about what he’s doing. Now, Sam is a very driven person; I think last month he was in Saudi Arabia to convince the Crown Prince to start a series of factories that will build the chips that run AI. So he’s not going to disappear.
But I remember telling the journalist, I forget who it was: we should be worried about the countries that are building AI systems we know nothing about. Because you can talk to the builders of AI in the West.
You know, Sir Demis Hassabis, the British computer scientist, AI researcher, and entrepreneur, in London; Yann LeCun; even Sam in San Francisco. There are other people around the world, especially in the West, developing these systems, and if a government asks them, or subpoenas them, they respond. But who knows what other countries, and I won’t name them, are developing massive AI systems that we know nothing about.
So I think Sam is headed in the right direction. I think Sam is worried about the ethical ramifications of AI. I don’t think he has the perfect solution; he recently put himself in charge of AI safety, which, I don’t know, raised some eyebrows. But yes, I think we’re going in the right direction. It’s just that the technology is advancing so quickly that it’s difficult to think about these things fast enough.
It is no secret that the number of Catholic priests is declining worldwide year on year. It is getting harder for a Catholic priest to minister to multiple parishes. Do you foresee a day when a priest can use AI to write up a homily or administer some of the sacraments with these technologies? Do you think the physical persona of a Catholic priest or nun can ever be replaced by technology?
So there are several questions here. The first is, yes, priests are already using AI to write homilies, especially Magisterium AI, because they know it's accurate. I know some priests who use ChatGPT also. If it helps, fine. I think it can be helpful.
But I don't know if people can relate to a homily written by an AI. It depends on how it's done and whether it's edited correctly.
But sure, you can look up facts and you can find out what other popes or famous saints have said about this issue.
I know a lot of priests are using it. A lot of bishops are using Magisterium AI also. Now, on administering some of the sacraments: Cardinal Ladaria, who was prefect of the Congregation for the Doctrine of the Faith, was asked about this during the pandemic, and he consulted Pope Francis, and the answer is no.
As priests, we cannot administer the sacraments except in person. People asked me if they could go to confession over Zoom. And I said, no, you have to be in my presence.
And that would be the easiest one to do with technology; the other ones you can't. I was in Rome during the pandemic and we had to close the churches, but we continued to celebrate Mass, just the priests among ourselves.
And we had it recorded on a webcam, and we actually got quite a few people who would follow the Mass through the webcam.
It was a completely unique situation and the Pope gave a special dispensation to do that, but once we were able to come back in persona, in person, we couldn't do that anymore. And of course the others, baptism, confirmation, those also have to be done in person.
Do you think the physical persona of a Catholic priest or nun can ever be replaced by AI? No, I don't.
I think that we can use AI to help us achieve our goals, but I don't think that we could ever be replaced. I know that there have been several attempts at this in Germany, for example, not in the Catholic Church but in the Lutheran Church.
A pastor held a service that was created by an AI. It was interesting to see how these technologies could be used, but most of the people said it was a waste of time. They didn't like it. They felt it was soulless and very cold, and I don't think the experiment had much success.
God created man with all his vices and virtues. Each person is also unique, and the world is a beautiful place because of all the diversity in thinking. Do you foresee that more dependence on technologies like AI will make us less diverse, more machine-like in our thinking, and also less dependent on him?
No, I don’t. I think that it depends on us how we use the technology.
As I've always said, the technology is not inherently good or evil; it's how we use it. There have been news items where people use social media to reinforce their own views without looking at other views. That's not good. When you get a newsfeed from Google or from Facebook, it basically gives you what you want to hear. That's not good. What I try to do in my class is to help the students learn how to think critically. Critical thinking is a tool that we need to develop and use in today's society more than ever. Critical thinking will help us use these technologies, but it will always be separate from them, always autonomous from them.
Therefore, I don't think that diversity will disappear because of that. We are dependent on God in a metaphysical way, but we often don't recognize that. I don't think the introduction of the machine is going to make us less dependent on him. We need to understand that our dependence on him is a metaphysical dependence, and that's not going to go away through the use of AI.
The Catholic Church uses specific terminology and so does technology. Both seem to be distinct from each other. Are you doing some work to bridge this gap? Can you kindly share that?
Yes. I see my job primarily as translating the richness of the Catholic tradition into terminology and words that the tech industry can understand.
Many people I come across and speak with who come from a tech background appreciate the fact that I'm taking time to understand their point of view and to put my response or my concept in a way that they can understand. Once they understand the terminology, they appreciate it. They see how it works. They see the concept. Unfortunately, this is a recent turn of events in the Catholic Church.
The Catholic Church has been a leader in the field of communications for almost 2,000 years, because it has understood the importance of communicating the good news to people, in whatever language and whatever context. The missionaries would go off into foreign lands and try to make the good news available to the people they found there. That was true not just in the 15th century but even recently: up until maybe 50 years ago, after World War II, Pope Pius XII was making pioneering use of radio. It's ironic that we missed the boat in terms of AI and new technologies, and we're trying to catch up now and make our voice heard.
Again, I think that's why Pope Francis went to the G7 in Italy last month. We have so much to say. We've been studying the human condition for 2,000 years. We're the longest-standing institution in the world today, perhaps second to the Jewish religion, though as a single institution we stand apart, because the Jewish religion has several different institutional entities within it. The Pope is a symbol of the Catholic Church; Islam does not have an equivalent. They have imams and of course the prophet, but the Catholic Church has always had one voice, the Pope, representing the richness of our tradition. We're playing catch-up. We should be at the forefront of these technologies, especially of AI, but we're not. I applaud the efforts of someone like Matthew Sanders, who is taking AI and putting it to use for the Catholic Church and the Catholic mission.
There are other people: Archbishop Paglia, with his Rome Call; Father Benanti, a pioneer who coined the term "algorethics," a play on "algorithms." Bishop Barron here in the United States has a huge social media presence, and there are others we can look at. But it is certainly important that we do more today.
Can you please share a message for our readers on how you see the next few years of AI, becoming more and more prevalent from the perspective of Catholics?
Okay, Tom, anyone who tells you they know what the next five years hold for AI is wrong, because there's no way we can predict what's going to happen. Probably one of the best is Ray Kurzweil, in terms of his track record, but I hear a lot of people tell me, you know, this is where we're going to be within five years; this is what AI is going to be doing in five years.
Elon Musk just came out, I think two weeks ago, and said that ChatGPT will have human-level intelligence within four years, I think he said. And then within a year after that, it will have the intelligence of the entire human race. Elon likes to make these kinds of statements because they're exciting and provocative, and I just don't know.
But I can tell you there's going to be an exponential increase in the capacity of AI. And again, I agree with Eliezer: once we get to artificial general intelligence, which Sam Altman says is probably about 10 years away,
and he should know because he'll be one of the first there, we'll immediately get to artificial superintelligence, a term coined by Nick Bostrom, who used to teach at Oxford University and wrote a book called Superintelligence.
Now, once we get to superintelligence, that's a game changer. Then we'll just have to see what happens.
Some ask me if I'm optimistic or pessimistic, and I say I'm hopeful. A Catholic priest, I think, has to be hopeful that we're going to use this for our benefit and not for our demise. And I think it's natural for human beings to use technology to achieve their goals, and not the contrary.
But as Max Tegmark said in our debate at MIT, it does depend on us. He's absolutely right. Well, indirectly it depends on God, but God has allowed us to invent this technology, and therefore I consider it part of his providence. But when I said that in our debate, Max said, oh yes, but it depends on us, what we do with it. He's absolutely right. So I don't want to be naive.
I don't want to sound like I don't know where this is going. Market forces are a tremendous force behind these growing technologies. And then, of course, there are all kinds of different fields in which they are used.
One, unfortunately, is the military.
And so there's a lot of incentive to use it for military purposes. But I think as Catholics, we should not put our heads in the sand. We should not pretend this isn't happening. We should be conscientious about how we use it, and we should be ambitious in using it for the mission of the Church.