US AI regulation is on the horizon, as the Biden administration has reportedly begun drafting key standards and guidance for the planned rules. Big Tech companies and the public have until February of next year to provide input, as reported by Reuters.

The government body seeking public feedback will reportedly be the Commerce Department's National Institute of Standards and Technology (NIST).

(Photo: Anna Moneymaker/Getty Images) U.S. President Joe Biden delivers remarks on artificial intelligence in the Roosevelt Room at the White House on July 21, 2023, in Washington, DC.

According to Commerce Secretary Gina Raimondo, the initiative was spurred by President Joe Biden's October executive order on AI. It aims to create industry standards around AI safety, security, and trust so the United States can maintain its position as a global leader in the responsible development and application of this rapidly evolving technology.

The Commerce Department's NIST has reportedly long been involved in the program; the agency was initially tasked with implementing Biden's executive order by combining stringent reporting requirements, voluntary measurements, and cutting-edge standards and evaluation capabilities.

The National Telecommunications and Information Administration (NTIA), the Bureau of Industry and Security (BIS), and the United States Patent and Trademark Office (USPTO) are listed as other federal organizations involved in the program.


NIST Seeks Public Input on 'Red-Teaming' Guidelines

Reuters adds that NIST is developing testing guidelines, including best practices for AI risk assessment and management, and is determining where "red-teaming" would be most helpful.

For years, cybersecurity professionals have used "external red-teaming" to uncover novel threats; the term comes from U.S. Cold War simulations in which the adversary was known as the "red team."

The first public "red-teaming" assessment event in the United States took place in August at a major cybersecurity conference, organized by AI Village, SeedAI, and Humane Intelligence.

Reuters stated that, according to the White House, thousands of participants tried to make the systems malfunction or produce unwanted results in order to better understand the risks these technologies pose. The event also reportedly demonstrated how external red-teaming can be useful for identifying new AI dangers.

White House AI Council

These developments come after The Hill recently reported that the first White House AI Council meeting was held last week.

The report states that, according to a White House official, the attending officials were briefed by the president's national security team, drawing on classified intelligence, on the worldwide implications and potential of AI. The panel also reportedly discussed the new U.S. Artificial Intelligence Safety Institute.

The group discussed how to bring talent and expertise into government, how to safely test new models, and ways to prevent AI-related risks such as fraud, discrimination, and privacy violations. Members of the Cabinet, including Secretary of State Antony Blinken, Commerce Secretary Gina Raimondo, and Health and Human Services Secretary Xavier Becerra, were also present.

The report adds that the White House AI Council will meet regularly, as established by Biden's extensive executive order on AI, which includes new safety requirements and guidelines for informing the federal government about the testing and outcomes of models that could endanger public health, economic security, or national security.


Written by Aldohn Domingo
