Friar Paolo Benanti, the Vatican's top adviser on the ethics of artificial intelligence (AI), is playing a pivotal role in shaping the Roman Catholic Church's stance on technology.

Wearing the humble brown robes of his Franciscan order, the 50-year-old Italian priest advises Pope Francis and engages with top engineers in Silicon Valley. With a background in engineering and a doctorate in moral theology, Benanti supports Pope Francis's call for an international treaty to ensure AI is developed and used ethically.

In an Associated Press story, Benanti raises a fundamental question: "What is the difference between a man who exists and a machine that functions?" Teaching courses in moral theology and bioethics at the Pontifical Gregorian University, he emphasizes the profound impact of AI on humanity.

Vatican's AI Expert on a Mission to Ensure Ethical Tech Use

(Photo: FILIPPO MONTEFORTE/AFP via Getty Images)
Pope Francis (C) presides over the funeral of Italian Cardinal Sergio Sebastiani at the altar of the Chair in St. Peter's Basilica in the Vatican, on January 17, 2024.

Beyond the Vatican, Benanti is a member of the Italian government commission safeguarding journalism from disinformation and an advisor to the Pontifical Academy for Life. His expertise proved valuable during a 2023 meeting between Pope Francis and Microsoft President Brad Smith, which explored AI's potential benefits and risks to humanity.

Concerned that AI could infringe on human rights, Friar Benanti stresses the importance of inclusive data, warning that choices made without inclusivity carry ethical consequences. Rather than limiting AI's development, he urges a focus on governance to keep the technology compatible with democracy.

"It is a problem not of using (AI) but it is a problem of governance. And here is where ethics come in - finding the right level of use inside a social context," he said, as quoted in the AP News report.

The friar's unique perspective, combining engineering, ethics, and technology, positions him as a critical voice in the global dialogue on regulating AI. With the European Union leading the way with comprehensive AI rules, Benanti's efforts align with broader initiatives to ensure responsible and ethical AI development worldwide.

Tech Companies Failing to Address AI Ethics Gaps

A Stanford University investigation found that prominent tech corporations are failing to prioritize ethical AI development despite their public pledges. The university's Institute for Human-Centered Artificial Intelligence notes that firms publicly commit to AI values and fund AI ethics research, but implementation lags behind, Al Jazeera reported.


The report, titled "Walking the Walk of AI Ethics in Technology Companies," draws on insights from 25 AI ethics practitioners who note a gap between rhetoric and action. Complaints include inadequate institutional support, isolation within organizations, and resistance from product managers prioritizing productivity and launch timelines over ethical considerations.

The report emphasizes the need for companies to translate ethical promises into tangible practices in the AI development process. As global concerns about the ethical use of AI continue to grow, bridging this gap between intent and implementation is crucial for building trust and ensuring responsible AI practices.

WHO Issues Ethics Guidance for AI in Healthcare

Amid these concerns, the World Health Organization (WHO) has issued detailed recommendations on the ethical governance of large multi-modal models (LMMs), a rapidly developing form of generative AI, in healthcare.

The guidance, posted on the UN health agency's website, includes more than 40 recommendations for governments, technology firms, and healthcare practitioners on using LMMs ethically and responsibly to improve population health. In 2023, LMMs such as ChatGPT, Bard, and BERT gained popularity for their capacity to absorb varied data inputs and mimic human communication.

The WHO stresses the need for transparent information and policies governing the design, development, and deployment of LMMs in healthcare to guard against disinformation and bias. According to the guidance, LMMs are used in diagnosis, patient-guided use, administrative tasks, medical education, and scientific research.

Key considerations include government investment in public infrastructure, standards for ethical conduct, regulatory evaluation, post-release audits, and stakeholder participation in development. The WHO emphasizes that addressing ethical issues and maintaining public confidence in medical AI applications are essential to the safe and effective use of LMMs in healthcare.

