In a recent legal case, Air Canada argued in court that a customer misled by its chatbot shouldn't have trusted it, claiming the chatbot was a separate legal entity responsible for its own actions.

Experts are concerned about companies using legal disclaimers to escape liability for AI-related issues. The concern is growing as AI use is expected to expand across various sectors.

"No customer wants legalese. They are fed up with it," Karthik Sj, VP of Product Management & Marketing at generative AI company Aisera, told Tech Times in an interview. "If a company says it will not stand by its chatbot, this is a major red flag. And if something goes wrong, it's a losing position to try to defend it by saying a chatbot is somehow a separate legal entity."

Karthik says the onus is on companies to ensure their AI tools provide accurate information; they cannot simply distance themselves from the actions of their AI. He argued that it all boils down to accountability. When a company deploys an AI tool, it must stand behind the information that tool puts out, just as it would stand behind information given by its human employees.

"The onus is on the company. Systems should be battle-tested to anticipate various scenarios," says Karthik. He insists this is even more true in the case of emerging tech like generative AI. "If not, then companies should not use these systems."

Joaquin Lippincott, CEO & founder of AI cloud consulting firm Metal Toad, says using generative AI introduces a whole new dimension of ethical and legal concerns because of the AI's unpredictable nature. 

"This technology, capable of learning and evolving, often shows behaviors that can't be precisely anticipated, presenting both benefits and risks," Lippincott told Tech Times in an interview. 

Building on this, Ryan LaSalle, CEO of risk-management firm Nisos, says generative AI apps with unlimited autonomy to create offers are a clear example of why companies must test this emerging class of technology, not just for security and abuse, but also for unintended consequences.

"Customer experiences are a reflection of the company," LaSalle told Tech Times in an interview. "When users genuinely engage without abuse, they expect the company to stand behind its offers and customer treatment." 

Jonas Jacobi, CEO & co-founder of AI testing and documenting firm ValidMind, joins the other experts by saying a company's decision to take refuge behind legalese reflects a poorly thought-through strategy.  He believes a company's choice to roll out inadequately tested, improperly validated, or undocumented AI shows their disdain for understanding the risks to their customers.

Jacobi uses the example of pharmaceutical companies, which he says are required to give customers information and notices about the potential side effects of the drugs they market. 

"Similarly, companies should be responsible for documenting the risks of their AI and making that information available to their customers, for instance, through disclaimers," Jacobi told Tech Times in an interview.

He pointed to Canada's small claims court ruling against Air Canada to suggest that governments and civil society support the addition of warnings, for instance, through disclaimers.

Karthik, however, doesn't believe adding disclaimers is the way to go, for the simple fact that most customers usually ignore them. 

"Who will read pages of legalese before using a chatbot? No one. Users will just use the chatbot because that is what a company wants," said Karthik.

That makes sense to Lippincott, who believes adding disclaimers and an end-user license agreement (EULA) to AI bots cautioning about the reliability of their information sends conflicting messages. He says disclaimers and EULAs could erode trust, as users may interpret them as a sign that the company itself lacks confidence in its AI's outputs.

Most experts agreed that while disclaimers might protect companies legally, they don't help build user trust. They suggest that instead of adding EULAs, companies should focus on improving the accuracy of their AI systems, being transparent about their limitations, and not shying away from accountability. Karthik says companies should remember that while it's costly to win new customers, it doesn't take much to lose one. "And when this happens, there is a high risk that they will tell other people."

Jacobi said that while the legal text might protect the company from being sued, it won't protect it from losing customers if the AI makes mistakes. "Nor will it protect customers from the harm done to them by poorly tested AI models. Customers should expect more assurance from companies to trust their AI."

About the author: Mayank Sharma is a technology writer with two decades of experience in breaking down complex technology and getting behind the news to help his readers get to grips with the latest buzzwords and industry milestones. He has had bylines on NewsForge, Linux.com, IBM developerWorks, Linux User & Developer magazine, Linux Voice magazine, Linux Magazine, and HackSpace magazine. In addition to Tech Times, his current roster of publications includes TechRadar Pro and Linux Format magazine. Follow him at https://twitter.com/geekybodhi
