Based on a report from The Verge, a father was mistakenly flagged by Google for Child Sexual Abuse Material (CSAM) after using his smartphone to take photos of his toddler's infected groin for a medical consultation.

(Photo: NOAH BERGER/AFP via Getty Images) A bicyclist rides along a path at Google's Bay View campus in Mountain View, California, on June 27, 2022.

Because of the automated report, Google closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC). The report, in turn, prompted authorities to investigate whether the picture was an innocent photo or evidence of abuse.

As per The New York Times, the incident happened in February of last year, when hospitals and doctors' offices were still closed because of the COVID-19 pandemic. After noticing swelling in his son's genitals, the father took a photo at a nurse's request ahead of an online consultation. The doctor prescribed antibiotics to treat the infection.

Two days later, the father received a notification from Google stating that his account had been locked over a severe violation of the company's policies on harmful content, and that the flagged material might be illegal.

Because of this, the parent lost access to his emails, contacts, photos, and his Google Fi phone number. Google also denied his appeal of the decision. However, the San Francisco Police Department opened an investigation, which helped him recover his information and storage.

According to the investigator, the incident was harmless and no crime was depicted in the picture.


Google spokesperson Christa Muldoon said in a statement that CSAM is a sensitive topic and that the company works to prevent the spread of this type of content. She added, "We follow US law in defining what constitutes CSAM and use a combination of hash matching technology and artificial intelligence to identify it and remove it from our platforms. Additionally, our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we're able to identify instances where users may be seeking medical advice."

Google's Process

Images depicting child exploitation and sexual abuse are flagged and banned not only by Google but also by other technology companies. In 2021 alone, Google filed more than 600,000 reports of child abuse material and disabled the accounts of over 270,000 users.

Much of this detection is done with PhotoDNA, which converts photos into unique digital codes (hashes) so that known abusive images can still be recognized even if they have been altered or edited. Microsoft released the technology in 2009, and other tech companies, such as Facebook, have adopted it to prevent these kinds of situations.
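PhotoDNA itself is proprietary, but the general idea of perceptual hash matching can be sketched with the open-source Python imagehash library. This is a minimal illustration of the concept only; the filenames and the distance threshold below are assumptions for the example, not values used by Microsoft or Google:

```python
# A minimal sketch of perceptual hash matching, the general technique
# behind tools like PhotoDNA. PhotoDNA itself is proprietary; this uses
# the open-source imagehash library (pip install pillow imagehash).
from PIL import Image
import imagehash

# Convert each photo into a compact "digital code" derived from its
# visual content rather than its raw bytes.
known_hash = imagehash.phash(Image.open("known_image.png"))      # hypothetical file
candidate_hash = imagehash.phash(Image.open("edited_copy.jpg"))  # hypothetical file

# Subtracting two hashes gives the Hamming distance between them:
# the smaller the distance, the more visually similar the images.
distance = known_hash - candidate_hash

# Illustrative cutoff only; real systems tune this threshold carefully.
THRESHOLD = 8

if distance <= THRESHOLD:
    print(f"Likely the same image (distance {distance}), edits aside")
else:
    print(f"No match (distance {distance})")
```

Unlike a cryptographic hash such as SHA-256, which changes completely if a single byte of the file changes, a perceptual hash shifts only slightly under resizing, recompression, or small edits, which is what allows altered copies of a known image to still be matched.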


Written by Inno Flores