55% of AI Failures in Business Attributed to Poor Use of Third-Party AI Tools
New research reveals that irresponsible use of third-party AI tools accounts for the majority of AI-related failures in organizations.

AI has become an integral part of our lives. We rely on AI-powered weather apps to plan our picnics and trust AI tools for customer support. However, as AI infiltrates every corner of our existence, an alarming reality must be addressed.

Over 55% of AI-related failures in organizations are attributed to third-party AI tools, according to a survey conducted by MIT Sloan Management Review and Boston Consulting Group.

The Rise of Third-Party AI

There are times when an AI app falsely forecasts the weather for a certain day, or an AI grammar checker fails to catch all of the errors in a sentence. These scenarios, unfortunately, are not uncommon.

The advent of ChatGPT, a powerful AI chatbot, ignited a generative AI revolution. OpenAI was soon joined in the chatbot race by giants like Microsoft, with Bing Chat, and Google, with Bard. While these AI chatbots promised remarkable capabilities, they also brought ethical dilemmas to the forefront.

According to a report by ZDNet, ChatGPT's success led to a proliferation of third-party AI solutions catering to customer support, content creation, IT assistance, and grammar checking. 

Out of 1,240 survey respondents across 87 countries, a staggering 78% reported using third-party AI tools, either through access, purchase, or licensing. 

Even more noteworthy, 53% of these organizations rely solely on third-party tools without in-house AI tech.


The Hidden Risks Behind Third-Party AI Tools

Despite the widespread adoption of third-party AI tools, the survey revealed that 55% of AI-related failures originate from using these tools. Meanwhile, 20% of organizations failed to assess the substantial risks associated with third-party AI.

"Many do not subject AI vendors or their products to the kinds of assessment undertaken for cybersecurity, leaving them blind to the risks of deploying third-party AI solutions." Armilla AI's head of AI policy, Philip Dawson, said.

It is evident that achieving responsible AI (RAI) is an intricate task, particularly when organizations engage third-party vendors without proper oversight.

Triveni Gandhi, responsible AI lead for AI company Dataiku, emphasizes the connection between model risk management practices, external regulations, and responsible AI. 

In regulated industries like financial services, adherence to external regulations is pivotal in shaping RAI strategies.

Responsible AI Actions

Eliminating third-party AI tools is not a viable solution, as they often serve as essential components of organizational AI strategies. The solution lies in robust risk assessment strategies, including vendor audits, internal reviews, and adherence to industry standards. 

With the dynamic regulatory landscape surrounding responsible AI, organizations should prioritize RAI from the ground up, starting with the CEO.

The research findings reveal that organizations benefit significantly when their CEO actively engages in responsible AI initiatives. 

In fact, organizations whose CEO takes a hands-on role in RAI reported 58% more business benefits than those with a less-involved CEO. Moreover, organizations with a CEO who actively participates in RAI are nearly twice as likely to invest in it.

In other news, Tech Times reported that Roblox recently invested in AI through its acquisition of Speechly, a startup that specializes in AI-powered voice moderation.


Joseph Henry

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.