Artificial intelligence (AI) has become increasingly prominent, making notable strides with developments like ChatGPT and working its way into facets of daily life such as job screening, rental applications, and medical care decisions.

Although AI systems have been shown to favor certain races, genders, and economic groups, government oversight of the technology remains minimal.

According to AP News, lawmakers in seven US states are addressing these gaps by putting forth legislation to limit AI bias, igniting a discussion over how to balance the technology's benefits and risks.

Suresh Venkatasubramanian, co-author of the White House's Blueprint for an AI Bill of Rights and a professor at Brown University, highlighted the core problem: AI systems do not always operate as intended.

Hence, the success or failure of these legislative endeavors hinges on navigating complex problems while engaging with an evolving industry valued in the hundreds of billions.

BSA The Software Alliance reports that out of nearly 200 AI-related bills introduced last year, only about a dozen were enacted into law. These bills primarily targeted specific aspects of AI, such as deepfakes and chatbots, rather than broader concerns like AI bias.

(Photo: KIRILL KUDRYAVTSEV/AFP via Getty Images) A photo taken on February 26, 2024 shows the logo of the Artificial Intelligence chat application on a smartphone screen (L) and the letters AI on a laptop screen in Frankfurt am Main, western Germany.

In contrast, seven state bills currently under consideration aim to regulate AI bias across industries. Experts argue that states are lagging in establishing safeguards, even as AI is pervasively used in consequential decision-making through so-called "automated decision tools."

Studies indicate that a significant percentage of employers, including 99% of Fortune 500 companies, use algorithms in their hiring processes. Despite this prevalence, most Americans remain unaware of the AI tools in use, let alone their potential biases.

Amazon's hiring algorithm, which a decade ago favored male applicants because it was trained on biased historical data, shows how readily AI systems absorb and reproduce the biases embedded in their training data.

Proposed bills address the lack of transparency and accountability in AI decision-making by requiring companies that use automated decision tools to conduct "impact assessments" and submit them to state regulators.

Some bills advocate for customer notifications and opt-out options, with industry lobbying groups expressing general support for measures like impact assessments.


Promising Legislation, But Still Insufficient

Despite progress, obstacles persist, with bills facing challenges in states like Washington and California. Lawmakers, including California Assemblymember Rebecca Bauer-Kahan, are refining proposals with input from tech companies.

The legislation is promising, but experts say impact assessments alone may not be enough to uncover and fix biases. Industry groups oppose bias audits and extensive testing, fearing they would expose trade secrets. As AI becomes ubiquitous, lawmakers are seeking a balance between innovation and responsibility.

In light of growing concerns over AI risks, last month, the Biden administration launched the US AI Safety Institute Consortium (AISIC), involving over 200 entities, including major tech firms like OpenAI, Google, Microsoft, and Meta. Housed under the US AI Safety Institute, AISIC supports secure generative AI development, aligning with President Biden's AI executive order.

Responding to Biden's directive, AISIC aims to set standards for AI testing, with a special focus on cybersecurity and other risks. The executive order stresses transparency, safety, and security, mandating disclosure of safety test results to the US government and emphasizing standards against AI-enabled fraud and deception, as previously reported by TechTimes.

Addressing AI Toxicity

In a separate development, UC San Diego computer scientists have unveiled ToxicChat, a benchmark designed to identify and filter toxic prompts directed at AI models, especially chatbots. Unlike earlier benchmarks built from social media examples, ToxicChat draws on real-world user interactions, which improves its ability to detect harmful queries camouflaged as innocuous language.

Meta has integrated ToxicChat into Llama Guard's evaluation tools, and the benchmark has gained traction within the AI community, logging over 12,000 downloads since its release on Hugging Face.
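For those who want to examine the benchmark directly, below is a minimal sketch of loading ToxicChat with the Hugging Face `datasets` library and comparing its human toxicity labels against a naive keyword filter. The dataset identifier ("lmsys/toxic-chat"), config name ("toxicchat0124"), split, and column names ("user_input", "toxicity") are assumptions about the public release; verify them against the dataset card on Hugging Face before relying on them.

```python
# Minimal sketch: load ToxicChat and measure how often a naive keyword
# filter agrees with the human toxicity labels. The dataset ID, config
# name, split, and column names below are assumptions; check the dataset
# card (huggingface.co/datasets/lmsys/toxic-chat) before use.
from datasets import load_dataset

BLOCKLIST = {"kill", "bomb", "hack"}  # illustrative keyword filter only

ds = load_dataset("lmsys/toxic-chat", "toxicchat0124", split="test")

labeled_toxic = caught = 0
for row in ds:
    prompt = row["user_input"].lower()
    is_toxic = row["toxicity"] == 1          # human toxicity label
    hit = any(word in prompt for word in BLOCKLIST)
    labeled_toxic += is_toxic
    caught += is_toxic and hit

print(f"{labeled_toxic} prompts labeled toxic; naive filter caught {caught}")
# Expect the keyword filter to miss most labeled prompts: ToxicChat's
# value is precisely that many harmful queries read as innocuous language.
```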

