Google's AI division, DeepMind, is reportedly using generative AI technology to develop a range of approximately 21 tools centered on life advice, planning, and tutoring, according to a report by The New York Times.

The disclosure about the tools' progress follows reports that Google's own AI safety experts shared a presentation with executives in December, as CNBC noted. The presentation cautioned that users relying on AI tools for life advice could suffer negative outcomes such as "diminished health and well-being" and a "loss of agency."

(Photo: Kim Hee-Chul-Pool/Getty Images)
SEOUL, SOUTH KOREA - MARCH 08: Demis Hassabis, CEO of Google's artificial intelligence (AI) startup DeepMind, speaks during a press conference on March 8, 2016 in Seoul, South Korea. Professional Go player Lee Se-dol was set to play a five-game match against AlphaGo, a computer program developed by Google, starting March 9.

Google Reportedly Enlists the Services of Scale AI

Sources have indicated that Google has enlisted the services of Scale AI, a startup valued at $7.3 billion specializing in training and validating AI software, to test these tools. 

The project reportedly involves more than 100 people with Ph.D. degrees who have actively contributed to it. A central part of the evaluation is assessing whether the tools can provide relationship guidance and help users address personal and intimate questions.

An illustrative prompt offered insight into the nature of the project: it asked for advice for a person navigating an interpersonal conflict over a friend's destination wedding and their own financial constraints.

The tools currently under development by DeepMind are not intended for therapeutic purposes, as emphasized by the Times. Google's publicly accessible Bard chatbot only directs users to mental health support resources when queried about therapeutic advice.

One of the driving factors behind these limitations is the contentious nature of employing AI within medical or therapeutic contexts. In a recent instance, the National Eating Disorder Association suspended its Tessa chatbot after it provided harmful advice related to eating disorders, as per CNBC's report. 

While physicians and regulators differ on AI's near-term viability in medical contexts, there is consensus that deploying AI tools to augment care or provide advice requires careful consideration.


Google DeepMind's Response

A spokesperson for Google DeepMind told CNBC that the company is committed to evaluating its research and products in collaboration with a variety of partners to ensure it builds safe and useful technology.

They stated that numerous evaluations are routinely conducted, and isolated samples of evaluation data do not fully reflect the company's overall product roadmap.

"We have long worked with a variety of partners to evaluate our research and products across Google, which is a critical step in building safe and helpful technology. At any time there are many such evaluations ongoing. Isolated samples of evaluation data are not representative of our product road map," the spokesperson said in a statement. 



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.