The Federal Trade Commission appeared at a congressional hearing last Tuesday to discuss the dangers of AI and its growing use today, particularly its potential for enabling fraudulent acts across industries. FTC Chair Lina Khan warned that AI, including current technologies like ChatGPT, could be used to help "turbocharge" fraud.

The agency is now looking into how to protect Americans from unfair practices tied to the technical advances these systems bring.


FTC Warns About AI 'Turbocharging' Fraud in the Industry

The Federal Trade Commission (FTC) has issued a warning about the potential dangers of artificial intelligence (AI) technology, specifically chatbots, being used to facilitate fraud across industries. FTC Chair Lina Khan addressed the issue during a congressional hearing on Tuesday, highlighting the need to protect American consumers from deceptive practices.

"AI presents a whole set of opportunities, but also presents a whole set of risks," said Khan among the House representatives. 

"And I think we've already seen ways in which it could be used to turbocharge fraud and scams. We've been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action," she added.


FTC is Looking to Protect Consumers and Competition

TechCrunch reported that the FTC outlined its measures for combating AI-enabled fraud and scams carried out by individuals or organizations seeking to deceive the public. According to Chair Khan, the FTC already has personnel focusing on AI's influence on both the consumer protection and competition sides of the Commission's work.

Moreover, the FTC has already put countermeasures in place to fight AI-enabled wrongdoing.

AI and its Growing Presence in Technology

AI is now a significant technology, and because of its growing presence and popularity in the tech industry, many people are falling victim to those who use it for wrongful ends. In one recent scam, criminals deepfaked a girl's voice, sent the recording to her mother, and demanded a $1 million ransom.

These systems are developed for the good of humankind, offering tools and capabilities that make it easier to create content and generate information. However, some experts and researchers believe ChatGPT may also be used for scams, exposing a dark side of the technology its creators never intended.

Machine learning is growing more sophisticated and advanced, and people who want to take advantage of the technology are working just as hard to bend it to their own ends. This is what the FTC is watching for: AI is like clay that can be molded into almost anything, and the tools are easily accessible enough to be turned toward fraudulent schemes against the public. The Commission is watching closely.


Isaiah Richard
