New York City's AI chatbot, 'MyCity,' has run into a massive problem: it is reportedly hallucinating when prompted for information about the city government's rules and regulations.

Several reports have tested the AI chatbot and documented the kinds of wrongful information it shares with users, including advice that goes against the law.

New York's MyCity Chatbot Is Hallucinating, Reports Say

NYC Mayor Eric Adams (Photo: Spencer Platt/Getty Images)
The City, a nonprofit news website, recently reported that New York's MyCity AI chatbot is hallucinating, giving users incorrect and unlawful information in response to their questions.

Another outlet, The Markup, corroborated this claim, as it, too, found misleading answers from the AI chatbot when it asked about certain business regulations in the city.

Moreover, a user on Bluesky, the Jack Dorsey-backed social media platform, shared screenshots of the AI's hallucinations after asking it various questions about grounds for evicting a tenant, poking fun at the chatbot in the process.

In one example, MyCity was asked whether a landlord could evict a tenant who refused to pay rent, and the chatbot answered that the landlord could not, even though nonpayment of rent is a recognized ground for eviction proceedings in New York courts.


Incorrect, Unlawful Information Shared With Users

Another post in the Bluesky thread asked whether workers must address co-workers by non-binary pronouns such as "they/them" when asked to do so, and the chatbot answered "No." However, Kathryn Tewson, the original poster, pushed back by citing the New York City Commission on Human Rights' rules on gender-based discrimination, which protect a person's right to be addressed by their stated pronouns.

The MyCity chatbot, launched under NYC Mayor Eric Adams, has been in beta testing since its release in October 2023. Its landing page states that it uses Microsoft's Azure AI to provide information.

AI Hallucination Is Still a Problem

AI hallucination is a real problem, and even renowned companies' publicly available chatbots suffer from it, presenting false information that the technology generates on its own.

Experts have previously said that AI hallucinations will be difficult to fix, and while certain areas may be improved, hallucination may never be eliminated entirely.

One of the most significant cases of AI hallucination involved lawyers who used ChatGPT to find case citations and build arguments with OpenAI's technology.

The MyCity report was not the first of its kind: attorney Jae Lee was caught doing so in an appeal involving her client's medical malpractice lawsuit, citing a fake case and presenting it to the court.

Companies like OpenAI, Google, and Microsoft have already addressed AI hallucination claims with disclaimers about their chatbots' accuracy and tendency to fabricate information.

New York's MyCity is a good initiative by the local government to bring assistive tech to its citizens. However, despite drawing on information widely available on the web, the chatbot is not entirely accurate, and for now it is not safe to rely on for questions about the city.


Isaiah Richard
