Chatbots powered by artificial intelligence (AI) and therapy applications are rapidly changing the mental health service industry. Among them is Earkick, a mental health chatbot with a colorful interface featuring a bandana-wearing panda.

After downloading the app, users who talk or type about their anxieties receive reassuring, compassionate responses much like those a therapist would offer. Yet despite the app's use of therapeutic techniques, Earkick's co-founder, Karin Andrea Stephan, is reluctant to label the AI-powered service as therapy.

The debate over AI-driven mental health services extends beyond Earkick's novelty to questions of effectiveness and safety, per AP News. As AI chatbots aimed at the mental health of Gen Z adolescents and young adults proliferate, questions arise over whether they amount to therapy or merely self-help.

Despite their round-the-clock availability and potential to reduce the stigma attached to conventional treatment, their efficacy remains disputed. The Food and Drug Administration does not regulate mental health chatbots because they seldom claim to diagnose or treat medical conditions. Psychologists and technology directors like Vaile Wright worry about this regulatory gap, citing the lack of oversight and of evidence that these tools actually work.

Rising Demand For Mental Health Chatbots

Some health experts believe the apps' disclaimers that they are not medical treatments may not be enough, given the shortage of mental health professionals and lengthy waits for treatment. The discussion is further complicated by fears that chatbots could displace established treatments for serious conditions and by doubts about their ability to respond to emergencies.

Amid these debates over AI chatbot safety, Stanford-trained psychologists developed Woebot in 2017 as an alternative to chatbots built on large language models. Woebot relies on structured scripts, prioritizing safety and effectiveness in mental health care.

(Photo : OLIVIER DOULIERY/AFP via Getty Images)
In this photo illustration, a virtual friend is seen on the screen of an iPhone on April 30, 2020, in Arlington, Virginia.

According to The Guardian, rising demand for the UK National Health Service's talking therapy services underscores the need for digital alternatives to face-to-face treatment. In 2022-23, 1.76 million people were referred for mental health treatment, and 1.22 million began in-person therapy.

Platforms such as BetterHelp aim to remove obstacles to treatment, including limited practitioner availability and difficulty accessing therapists. Their handling of sensitive user data, however, raises concerns, and UK regulators are scrutinizing such applications over privacy issues.

BetterHelp was fined $7.8 million by the US Federal Trade Commission last year for deceiving customers and sharing sensitive data with third parties despite its privacy promises. Privacy lapses are widespread across the mental health app business, which spans virtual therapy, mood trackers, and chatbots.


Many platforms exploit regulatory gaps to share or sell personal data, according to independent watchdogs like the Mozilla Foundation. The foundation surveyed 32 prominent mental health applications and concluded that 19 failed to sufficiently protect user privacy and security, raising concerns about the monetization of mental health struggles.

Chatbot creators have taken steps to protect users' privacy, albeit with mixed results. A toggle on ChatGPT's homepage lets users stop their conversation history from being stored indefinitely. Although this does not prevent breaches, the platform says that conversations of users who enable the option are retained for only 30 days and are not used for model training.

Bard users can delete their chat activity by default at Bard.Google.com. Microsoft spokespeople said Bing users can review and delete conversations from their search history on the chatbot's homepage, although they cannot turn chat history off entirely.

How To Safely Use AI Chatbots for Mental Health

Experts advise users to be vigilant about privacy when using generative AI tools, as reported by The Wall Street Journal. Dominique Shelton Leipzig, a privacy and cybersecurity partner at Mayer Brown, said the absence of a privacy notice signals poor governance. Collecting excessive personal information is another red flag.

Users should also refrain from sharing sensitive information with unfamiliar AI chatbots, which could be controlled by malicious actors. Irina Raicu, director of the internet ethics program at the Markkula Center for Applied Ethics at Santa Clara University, cautions against disclosing health or financial data because chatbot companies' terms of service typically allow human staff to review some conversations.

Before using an AI chatbot for mental health, consumers should weigh the risks and rewards and read the platform's terms of service and privacy policy. As the digital health industry evolves, stakeholders stress that understanding how AI technology works is essential to using it to improve mental health.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.