Author Tom Kemp believes that once a technological genie is out of the bottle, it cannot be put back. He argues that the smarter approach is to make rules to manage emerging technologies like artificial intelligence (AI).

In his new book, "Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy," Kemp discusses what these regulations might look like and what they would mean for people.

(Photo : OLIVIER MORIN/AFP via Getty Images)
This illustration photograph taken in Helsinki on June 12, 2023, shows an AI (Artificial Intelligence) logo blended with four fake Twitter accounts bearing profile pictures apparently generated by Artificial Intelligence software. 

From an Analyst's Perspective

The Silicon Valley-based author, entrepreneur, investor, and policy advisor asserted that once the genie of technology has been let loose, there's no reversing that course. Hence, he noted that the prudent approach is to focus on establishing regulatory frameworks, Engadget reported. 

In his recently published work, Kemp delves into the potential shape of these regulations, their implications for the general public, and their significance for consumer protection.

Emergence of AI

The rapidly growing AI sector has swiftly moved beyond the initial "move fast" phase of its evolution and plunged headfirst into causing disruption, even within society itself.

Since the introduction of ChatGPT in November 2022, generative AI technologies have surged in popularity across the digital landscape.

Their applications span a wide spectrum, including tasks like automated programming, industrial utilization, game development, and immersive virtual experiences.

Unfortunately, this technology has also been swiftly embraced for malicious intentions, such as facilitating extensive spam email campaigns and fabricating convincing deepfake content.

Read Also: Rules for AI? US Wants Your Suggestions on Creating Regulations for AI Models Including ChatGPT and MORE

Regulating AI

As Engadget reported, the challenge is to tap into the advantages of AI while keeping the potential harm it might bring to people sealed within a figurative Pandora's box.

Dr. Timnit Gebru, who established the Distributed Artificial Intelligence Research Institute (DAIR), shared her thoughts on tackling AI bias with the New York Times.

She suggested that addressing this issue requires a comprehensive approach: establishing principles, standards, and regulatory bodies and involving people in decision-making, similar to the role of the Food and Drug Administration (FDA).

In her view, solving the problem is more complex than just diversifying datasets, as it demands broader systemic measures.

Meanwhile, this technology could be used in the political sphere to falsely attribute statements or actions to a rival, deceiving voters.

The Federal Election Commission is currently focusing on the potential control of political ads that use AI to alter the appearance of rival candidates. This issue has gained significance in light of the upcoming 2024 US presidential election.

The heightened utilization of generative AI has contributed to a rise in the production of highly realistic fake images. Consequently, there has been a surge in deepfakes and manipulated content that strikingly resembles real individuals. 

Related Article: FEC Evaluates Regulation of AI-Powered Political Ads Amid Deepfake Concerns

Written by Inno Flores

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.