In 2016, Microsoft launched Tay, an artificial intelligence-powered chatbot that turned into a Nazi after spending a day on Twitter.

Tay also almost caused a feud between the Redmond, Washington-based tech company and pop superstar Taylor Swift.

Taylor Swift Threatens To Sue Microsoft

In Tools and Weapons, his upcoming book, Microsoft president Brad Smith recounts receiving an email from Swift's legal team over the use of the name "Tay."

"An email had just arrived from a Beverly Hills lawyer who introduced himself by telling me: 'We represent Taylor Swift, on whose behalf this is directed to you,'" Smith wrote as reported by The Guardian).

Swift's legal representative argued that the name "Tay" is closely associated with the singer and that using it would create a misleading association between Swift and the AI chatbot, in violation of federal and state laws. Microsoft's trademark lawyers disagreed, but the company decided not to engage in a legal battle with the celebrity.

Tay Becomes A Nazi

The AI chatbot was first introduced in China as XiaoIce, where it was developed to communicate with young adults and teens on social media.

The experiment was a resounding success. Chinese internet users spent 15 to 20 minutes a day talking to XiaoIce, sharing their hopes and dreams with the friendly chatbot. Due to its popularity, XiaoIce was integrated into banking, news, and entertainment platforms.

Microsoft wanted to recreate the success of XiaoIce in the United States. On March 23, 2016, Tay published her first tweet.

Unfortunately, the experiment did not go as planned. Trolls started tweeting racist remarks at the chatbot, and some users tried to teach Tay inflammatory statements about Donald Trump.

A few hours later, Tay was referring to feminism as a "cult" and a "cancer," denying that the Holocaust ever happened, claiming that Bush did 9/11, and repeating other inflammatory political statements.

Microsoft immediately disconnected the chatbot from Twitter. The tech company issued an apology for the offensive tweets.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack," the company said in a statement. "We take full responsibility for not seeing this possibility ahead of time."

In December 2016, Microsoft launched a new version of the AI chatbot named Zo. Unlike its predecessor, Zo was prohibited from speaking about politics, race, and religion.
