OpenAI has announced the formation of a new team dedicated to child safety. The move comes amid growing pressure from activists and parents over the potential for AI tools to be misused or abused in ways that harm minors.

"OpenAI's Child Safety team helps to ensure that OpenAI's technologies are not misused or abused in ways harmful to underage populations," the ChatGPT maker wrote in a job listing.

"Through close partnerships with our Legal, Platform Policy, and Investigations colleagues, this team manages processes, incidents, and reviews to protect our online ecosystem. The team also handles key external engagements and relationships," it added. 

(Photo: OLIVIER DOULIERY/AFP via Getty Images)
Illustration photo produced in Arlington, Virginia on November 20, 2023, showing the OpenAI logo on a smartphone screen alongside a photo of former OpenAI CEO Sam Altman.

OpenAI's Child Safety Team

The Child Safety team came to light through a job listing on OpenAI's careers page. According to the listing, the team works closely with internal groups, including the platform policy, legal, and investigations departments, as well as external partners, to oversee processes, incidents, and reviews involving underage users.

The primary objective of the Child Safety team is to ensure that OpenAI's technologies are not exploited in ways that could be detrimental to underage populations. 

The team operates by implementing and scaling review processes for sensitive content and providing expert-level guidance on policy compliance within the context of AI-generated content.

The role involves reviewing content that breaches established policies, refining review and response procedures, and addressing escalated issues through investigation and follow-up action.

Collaboration with engineering, policy, and research teams is also emphasized to enhance tooling, policies, and understanding of abusive content.

More Details About the Job Listing

Ideal candidates are expected to bring a pragmatic approach to operational work, genuine enthusiasm for AI technology, and a track record in trust and safety or related fields.

They must also be proficient in data analysis, and familiarity with scripting languages, particularly Python, is considered a plus.

Applications will be accepted until February 13, with interviews and onboarding slated to conclude by mid-March.

Successful candidates can expect a competitive salary of $136,000 to $220,000 annually, along with benefits such as equity, medical insurance, mental health support, parental leave, and stipends for professional development.
