Small business owners in New York City have criticized the city's new AI-powered chatbot for misinterpreting local regulations and suggesting illegal business practices.

Last week, tech news outlet The Markup reported on the chatbot's faulty replies, sparking the controversy. Despite the objections, the New York City administration is keeping the AI tool on its government website, and Mayor Eric Adams defended that decision even while acknowledging the chatbot's mistakes.

New York City's AI chatbot, launched in October as a one-stop resource for business owners, uses an algorithm to answer questions about navigating the city's bureaucracy. However, it warns users that it may deliver inaccurate or harmful information and that its responses are not legal advice, according to AP News.

Despite these disclaimers, the NYC AI chatbot continues to give erroneous advice, raising concerns among experts about the hazards of governments deploying AI-powered systems without adequate oversight.

Julia Stoyanovich, a computer science professor and director of the Center for Responsible AI at New York University, criticized the city's approach, stating, "They're rolling out software that is unproven without oversight. It's clear they have no intention of doing what's responsible."

In response to test questions, the chatbot incorrectly claimed that employers can lawfully fire workers who report sexual harassment, fail to disclose a pregnancy, or refuse to change their hairstyle. It has also misrepresented the city's waste programs, suggesting that businesses may put trash in black garbage bags and are not required to compost.

Microsoft, which powers the chatbot through its Azure AI services, said it is working with city officials to improve the bot's accuracy and its alignment with official documentation.

Mayor Adams defended the chatbot at a news conference, saying glitches are part of refining any new technology. Critics such as Stoyanovich called this approach "reckless and irresponsible."

Expert: Watch Out For 'Hallucinations'

Experts advise other governments contemplating similar technology to learn from the mistakes of New York City's chatbot. Suresh Venkatasubramanian, director of Brown University's Center for Technological Responsibility, Reimagination, and Redesign, urged cities to carefully weigh chatbots' benefits and risks so that public services remain accountable.

The debate over New York City's AI chatbot stems from a known weakness of AI models: "hallucinations," which call the reliability of their answers into question. The problem traces back to how large language models, the systems behind AI chatbots, are trained: they scan enormous datasets to learn statistical patterns and relationships between words and topics. When they use those patterns to interpret prompts and generate content, they can produce convincing but factually incorrect output.
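To see why pattern-matching alone can go wrong, consider a deliberately tiny sketch in Python. This toy bigram model is nothing like a production LLM in scale or architecture, but it rests on the same word-to-word statistics: it generates fluent text by recombining fragments of its training data, and in doing so it can assert things the data never said.

```python
import random
from collections import defaultdict

# Toy corpus: the "training data" the model sees.
corpus = (
    "businesses in new york must compost food waste . "
    "businesses in new york may use clear trash bags . "
    "restaurants in new york must post health grades ."
).split()

# Count bigram transitions: which word tends to follow which.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int = 10) -> str:
    """Sample a continuation word by word from the learned transitions."""
    words = [seed]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# The output is fluent because it follows observed patterns, but it can
# splice fragments into statements never present in the data, e.g.
# "businesses in new york must post health grades ." -- plausible, yet unverified.
print(generate("businesses"))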

The dangers of AI-generated misinformation have already surfaced in court: lawyers representing a client suing an airline submitted a ChatGPT-written legal brief to a Manhattan federal court, and the brief turned out to cite fabricated quotes and nonexistent court cases, per CNBC.


(Photo : Spencer Platt/Getty Images)
Mayor Eric Adams listens during a briefing on security preparations ahead of former President Donald Trump's arrival on April 03, 2023 in New York City.

Recognizing these limitations and mistakes is crucial as AI chatbots proliferate, particularly systems that allow users to customize them. OpenAI uses a technique called "process supervision" to reduce mistakes: instead of rewarding the AI model only for a correct final answer, it also rewards each sound step of reasoning along the way.
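In schematic terms, the difference lies in where the reward signal attaches. The sketch below is a hand-coded illustration of that contrast only; in OpenAI's actual system the per-step scores come from a learned reward model, and the toy verifier used here is a hypothetical stand-in.

```python
from typing import Callable, List

def outcome_reward(final_answer: str, correct_answer: str) -> List[float]:
    """Outcome supervision: a single reward, attached to the final answer only."""
    return [1.0 if final_answer == correct_answer else 0.0]

def process_reward(steps: List[str], step_is_valid: Callable[[str], bool]) -> List[float]:
    """Process supervision: one reward per intermediate reasoning step.

    `step_is_valid` stands in for a learned reward model; the checker
    below is a toy for arithmetic strings, not OpenAI's actual method.
    """
    return [1.0 if step_is_valid(step) else 0.0 for step in steps]

# A chain-of-thought answer to "What is 3 * (2 + 4)?"
steps = ["2 + 4 = 6", "3 * 6 = 18"]

# Toy verifier: turn "a = b" into "a == b" and evaluate it (arithmetic only).
check = lambda s: bool(eval(s.replace("=", "==")))

print(outcome_reward(final_answer="18", correct_answer="18"))  # [1.0]
print(process_reward(steps, check))                            # [1.0, 1.0]
```

The payoff of the per-step signal is that a model which reaches the right answer through flawed reasoning gets penalized at the faulty step, rather than reinforced for the lucky outcome.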

OpenAI researcher Karl Cobbe stresses the necessity of recognizing and fixing logical errors, or "hallucinations," in AI models to achieve aligned artificial general intelligence.

Experts advise users to check for factual errors in replies from AI systems like ChatGPT and Google's Bard, even if they appear accurate.

Internet Data Running Out for Training AI Models

Aside from hallucinations, AI chatbots face another problem: the internet's supply of training data is running out, which threatens AI companies' ability to develop more powerful large language models. Corporations are nearing the end of the freely available text on the internet, raising concerns about a data deficit for AI model training, per an earlier TechTimes report.

In response, AI businesses are seeking other sources of training data. Some are turning to public video transcripts and AI-generated synthetic data. However, training on synthetic data raises the potential for AI model "hallucinations," since errors in the machine-manufactured data can be learned as fact.
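As a rough illustration of the synthetic-data approach, the Python sketch below uses OpenAI's client to have one model invent question-and-answer pairs for training another; the model name and prompt are placeholders, not any company's actual pipeline. Note that nothing here verifies the generated "facts," which is exactly how a generator's hallucinations can end up baked into the next model's training set.

```python
# pip install openai -- sketch assumes the official OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_synthetic_example(topic: str) -> dict:
    """Ask an existing model to invent a Q&A pair for a training dataset.

    No step checks the answer against a trusted source: if the generator
    hallucinates, the error is carried into the training data unchallenged.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Write one factual question and answer about {topic}.",
        }],
    )
    return {"topic": topic, "qa_pair": response.choices[0].message.content}

# Accumulate unverified synthetic training examples.
dataset = [make_synthetic_example(t) for t in ["NYC sanitation rules", "NYC labor law"]]
```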

Facing this data scarcity, AI industry leaders are weighing unconventional approaches to training their models. OpenAI, the creator of ChatGPT, is reportedly considering training its GPT-5 model on transcriptions of YouTube videos.


