Artificial intelligence (AI) is becoming increasingly ubiquitous. It is in products, services, and processes we use daily, whether we know it or not. Businesses use it to diagnose diseases, process loans, approve claims, and even drive cars. These are all highly regulated industries, but the underlying technology, AI, is not.

That is not to say regulation is not forthcoming. The European Union (EU) proposed a legal framework for AI in 2021, building on a 2020 white paper. Japan plans to finalize its AI governance policies by the end of 2023, alongside its work with the G7 on the Hiroshima AI Process. The UK is also holding an AI summit in November 2023.

The US is coming late to the game despite many AI innovations from American companies, such as Apple. Enterprise AI companies started calling for legislation in 2021 to address the potential risks of the technology.

Congress finally took the first step a few months after the launch of ChatGPT on November 30, 2022, a release that gave rise to serious concerns over how the technology might be used. On May 16, 2023, the US Senate held a closed-door hearing on regulating AI, led by Senate Majority Leader Chuck Schumer. However, a hearing is not AI policy.

What is the importance of becoming a leader in AI regulation?

Why does it matter if the US is the leader in AI regulation? Regulations, by definition, establish control over an industry, product, service, or process to prevent harm and abuse. It would seem that whether one is first or last to the table should make no difference.

It is all about being first to market with what consumers perceive as a responsible product. Because AI technology is advancing rapidly, the potential is high for uncontrolled development that puts consumers at risk, whether through privacy breaches on social media, cyberattacks, bias, or physical harm. The rules will likely differ between the US and China, but the goals of regulation remain the same.

Becoming a leader in AI regulation signals that products and services from that country are safe. Strong regulations build trust and confidence in tech companies based in that country, giving them a competitive edge. Once Washington establishes AI regulations, enterprises in various industries can use AI to scale and take the lead in their niches.

The Debate on AI Regulation

You might think that industry leaders in AI would want strong laws regulating AI, given the benefits. However, major tech players in the US are not on the same page. Most support regulation, but others believe there should be no restrictions.

Proponents

Among the influential tech executives who believe in regulation is Sam Altman, CEO of ChatGPT developer OpenAI. He testified in the May 2023 Senate hearing, stating: "I think if this technology goes wrong, it can go quite wrong ... We want to work with the government to prevent that from happening." However, Altman believes regulations should only apply to large AI companies, not smaller ones.

Google's Sundar Pichai agrees with the wisdom of regulation, stating in an article, "AI is too important not to regulate, and too important not to regulate well." He believes the best path to meaningful regulation is international cooperation. In a 2019 interview, he also suggested looking to existing laws rather than starting from scratch to govern AI.

Tesla CEO and OpenAI co-founder Elon Musk joined other tech leaders in an open letter urging a pause in AI development. The letter's purpose was to allow AI developers to work with lawmakers to create governance systems; it addressed the concern that AI could become too advanced for proper regulation.

Brad Smith of Microsoft doubted the practicality of a six-month pause but agreed that regulation was a good idea. He proposed that the government buy AI products and services only from companies that implement safeguards.

Bill Gates weighed in on the risks of unregulated AI in a Center for AI Safety letter. These risks include civil rights violations, extinction through autonomous weapons, and cyberwarfare. In an article, he also stated that "AIs [must] be tested very carefully and properly regulated."

Opponents

In an essay, venture capitalist Marc Andreessen disagrees, arguing that it is mainly opportunists ("bootleggers") who want regulations. He believes big businesses want regulations to prevent smaller, more agile companies and startups from competing, and he wants no guardrails on AI companies so they can build as quickly and aggressively as possible.

Others in the industry agree, including Hackernoon AI Writer of the Year Adrien Book. He believes that regulation will slow down AI development, benefit big tech, and penalize resource-limited small companies and startups.

What are the challenges of AI regulation?

Many groups have been working to identify the risks of using AI, work that lawmakers can draw on to create effective regulation. However, writing bills and passing laws takes a long time. The main issue with AI regulation is the mismatch between the speed of AI development and the snail's pace of the government's response. Simply put, the US government cannot keep up with the pace of AI innovation.

Another issue is that many lawmakers lack sufficient understanding of the complexities of AI technology. There are parallels to efforts to regulate blockchain and cryptocurrency: because policymakers have incomplete knowledge of these technologies, creating effective regulations becomes much more difficult. Consulting with tech experts like Mark Zuckerberg can help, but the window is closing fast for the US to become the leader in AI regulation.

How AI Regulations Could Impact AI Companies

The most serious objections to AI regulation are not about the resulting restrictions that would slow down AI innovation. They are about the influence of big AI companies on policies that would give them an advantage over small and medium enterprises. The amicable nature of the Senate hearings with top industry leaders also suggests a risk of regulatory capture.

Companies such as OpenAI, Meta, and Google develop the foundation models behind generative AI, on which smaller developers build AI tools and apps. However, SMBs have little control over, or insight into, the algorithms that go into these models, so products built on them may fail to comply with applicable regulations. With fewer resources, SMBs may be unable to afford the cost of resolving regulatory issues.

One solution is to include all AI companies in the regulatory conversations, which might not be practical initially but could become feasible once AI companies organize into a representative coalition. Another suggestion is to adjust the requirements and sanctions based on the company's size.

The Challenges of Being a Leader in AI Regulation

The purpose of regulation is to protect the public above any other consideration. The rapid development of AI and the potential financial gains of being first to market make the task harder for policymakers: they might move too fast or too slow, or lean too heavily on the direction set by industry leaders.

However, most people agree that being a leader in AI regulation will significantly affect market penetration, and AI companies of any size will benefit from it. A regulated industry builds trust in its products, and that trust helps make a leader in AI regulation a leader in the AI space as well.
