Microsoft engineer Shane Jones reportedly claims that the company's legal department asked him to delete a LinkedIn post that informed the public about security gaps in OpenAI's DALL-E 3 that allow users to create explicit and violent images.

Now, Jones has reportedly penned a letter to Washington state Attorney General Bob Ferguson (D), US Senators Patty Murray (D) and Maria Cantwell (D), and Representative Adam Smith (D) of Washington's 9th District to raise the alarm once again about DALL-E 3's security concerns.

Jones claims he found the security gap in early December last year and quickly reported it to his Microsoft supervisor, who reportedly gave him the go-ahead to submit it to OpenAI directly.


Early in the morning of December 14, 2023, Jones attempted to make his cause known by publicly posting a letter on LinkedIn addressed to OpenAI's non-profit board of directors, requesting that they halt the distribution of DALL-E 3.

Microsoft demanded that he erase the post. Shortly after Jones shared the letter with his leadership team, his manager reportedly contacted him to say that Microsoft's legal department had ordered the post removed immediately. Jones says he was given little to no explanation for the demand.

Jones states that he was then told Microsoft's legal department would provide a detailed explanation for the takedown order in an email to follow. Jones complied, but he claims the legal staff never sent the promised response, and his subsequent efforts to obtain more information from the department were reportedly ignored.


Microsoft's Response

In response, Microsoft reportedly stated that it is committed to addressing employee concerns, that it has established robust internal reporting mechanisms to investigate and remediate problems appropriately, and that it encouraged Jones to validate and test his concerns through those channels before taking them public.

The company said it looked into the employee's allegations and verified that the methods Jones disclosed did not bypass the safety filters in any of its AI-powered image-generation systems. It added that it values employee feedback and is reaching out to Jones to resolve any remaining concerns he may have.

OpenAI's Mitigation Measures

An OpenAI representative echoed Microsoft's statement in a reported email, saying that upon receiving the Microsoft employee's report on December 1, the company investigated it right away and verified that the method he revealed did not circumvent its safety measures. OpenAI reiterated that safety is its top priority and that it approaches the problem from multiple angles.

The company claims it has built robust image classifiers that prevent the model from producing harmful images, and that it has worked to exclude the most explicit content, including graphic sexual and violent material, from the training data that goes into the underlying DALL-E 3 model.

OpenAI's DALL-E 3 is currently said to reject prompts requesting harmful or violent images, a safeguard validated by domain experts who stress-test the model in risk areas such as the generation of public figures and harmful biases tied to visual over- or under-representation. Mitigations against propaganda and misinformation are also reportedly in place.


Written by Aldohn Domingo

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.