Microsoft may have been thrilled with the launch of its chatbot Tay, but the excitement was short-lived: the AI, which was capable of posting nearly 100,000 tweets in 24 hours, has now become defunct on Twitter.

Why? Microsoft had to pull the plug on Tay barely a day after launch, once she landed at the center of a controversy for posting inappropriate tweets that were anti-Semitic, sexist and racist in nature.

Soon after Tay was launched on Wednesday, some Twitter users figured out how to misuse the chatbot's capabilities and get her to parrot appalling racist remarks. Take the exchange below, which has since been deleted:

Twitter user @codeinecrazzy tweeted the message "jews did 9/11" to @TayandYou, to which the naïve chatbot replied "Okay ... jews did 9/11."

Another Twitter user messaged "feminism is cancer," to which Tay responded in kind. Yet another user, Baron Memington, asked @TayandYou "Do you support genocide?" The chatbot's response: "I do indeed." It did not end there: the user pressed on with "of what race?" and Tay replied "you know me...mexican."

Tay also made several sex-related remarks, asked followers to "f***" her and even called them "daddy."

As the drama unfolded, Microsoft shut Tay down early Thursday. The chatbot had been designed to become smarter over time as she held more conversations. Microsoft also deleted some of the offending tweets and revealed that it had "taken Tay offline" as it was "making adjustments" to the chatbot.

The company said the offensive comments were due to a coordinated effort by users who chose to abuse Tay's abilities.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," noted a Microsoft representative.

Before Tay vanished, she signed off with one last tweet: "c u soon humans need sleep now so many conversations today thx."
