Eight prominent tech companies, including Adobe, IBM, Nvidia, Palantir, and Salesforce, have voluntarily pledged to develop artificial intelligence (AI) responsibly. The pledges are part of the Biden administration's broader initiative to manage the potential risks of AI while harnessing its benefits.

In July, the administration secured voluntary commitments from seven leading AI firms to support the development of safe, secure, and trustworthy AI technologies.


Biden Administration Is Working With Leading AI Companies

Representatives from eight AI companies met with US Secretary of Commerce Gina Raimondo, White House Chief of Staff Jeff Zients, and other senior administration officials at the White House on Tuesday.

The eight firms, namely Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability, joined the first seven companies that convened in July at the White House to sign on to the set of voluntary commitments aimed at advancing the development of safe, secure, and trustworthy AI.

These commitments complement government action and form an integral part of the Biden administration's comprehensive approach to harnessing the benefits of AI while managing its risks.


Commitments of AI Firms

The eight leading AI companies commit to:

1. Ensuring Product Safety Before Public Introduction

The companies pledge to subject their AI systems to rigorous internal and external security testing before release. This testing, conducted in part by independent experts, guards against significant AI risks in areas such as biosecurity and cybersecurity, as well as broader societal harms.

They also commit to sharing information on AI risk management with industry peers, governments, civil society, and academia. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.

2. Building Systems With Security as a Priority

The companies vowed to invest in cybersecurity measures and insider threat defenses to safeguard proprietary and unreleased model weights, the most critical component of an AI system. They stress that model weights should be released only when intended and only after the security risks have been thoroughly assessed.

The companies also commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Such a mechanism ensures that issues persisting after release can be promptly identified and fixed.

3. Earning the Public's Trust

The companies promise to build robust technical mechanisms, such as watermarking, that let users know when content is AI-generated.

This step fosters innovation while reducing the risk of fraud and deception. The companies also vow to publicly report their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use.
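As a loose illustration of the provenance idea behind watermarking (this is not any company's actual scheme; the key, tag format, and function names below are purely hypothetical), a provider could attach a keyed tag to generated text so downstream tools can verify where it came from:

```python
import hmac
import hashlib

# Hypothetical sketch only: real AI watermarking schemes (e.g., statistical
# watermarks embedded during token sampling) are far more sophisticated.
SECRET_KEY = b"provider-signing-key"  # assumption: a secret held by the provider

def tag_output(text: str) -> str:
    """Append a keyed provenance tag to generated text."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n[ai-provenance:{tag}]"

def verify_output(tagged: str) -> bool:
    """Check whether the provenance tag matches the text body."""
    body, sep, last = tagged.rpartition("\n[ai-provenance:")
    if not sep or not last.endswith("]"):
        return False
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(last[:-1], expected)
```

A verifier recomputes the tag over the text body, so any tampering with either the content or the tag causes verification to fail.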

These disclosures will cover a spectrum of concerns, from security and societal risks to considerations regarding fairness and bias. 

The companies are committed to prioritizing research on the societal risks that AI systems can pose, with a focus on mitigating harmful bias and discrimination and protecting privacy.

In July, leading AI firms Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI visited the White House and made the same voluntary safety commitments for AI development.

These commitments are pivotal steps toward building a framework for responsible AI development, ensuring that the emerging technology benefits society while minimizing potential risks.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.