According to reports, French police are testing AI-powered surveillance cameras this week ahead of their planned deployment at the 2024 Paris Olympics.

In a first for the country, Paris police have placed six AI-equipped cameras throughout the Accor Arena to track crowd movements and spot unusual or dangerous activity.

The experiment's main goal is to prepare for the Paris Olympics, which begin in five months and are expected to pose a serious security challenge for the police. The opening ceremony is set to be guarded by more than 30,000 officers.

(Photo: Mario Tama/Getty Images)
A security camera is seen across the street from the World Trade Center site July 9, 2007 in New York City. A London-style surveillance system, dubbed the Lower Manhattan Security Initiative, is being planned to blanket the area with 3,000 security cameras and other measures in an effort to detect terrorists.

The AI-powered cameras are just one of several new security measures being introduced for the 2024 Olympics. According to reports, locals living close to Olympic venues will also have to apply for a QR code that allows them to pass through security barriers.

Residents of these restricted zones will also be required to register any guests who wish to watch the action from a houseboat, balcony, window, or rooftop.

Attendees of the Depeche Mode concerts on Sunday and Tuesday will be monitored by the security cameras. A bill authorizing the use of AI for the security of sporting and leisure events was passed by the French parliament in May of last year.

One of the grounds for the law was the chaos at the 2022 Champions League Final between Real Madrid and Liverpool at the Stade de France, where fans were pepper-sprayed by riot police and crushed in congested bottlenecks.

AI Surveillance

As part of the real-time experiment, the AI cameras at last night's concert should alert surveillance operators to any unusual or potentially dangerous activity. Once the AI reports an incident, operators will decide whether to notify the authorities and request police action.

The AI will reportedly monitor eight categories of events: traffic moving against the flow, people in forbidden areas, crowd movements, abandoned packages, the presence or use of weapons, overcrowding, a body on the ground, and fire. However, ministers have reportedly pledged not to make any arrests based on images flagged by the AI cameras during the test.

Criticism Surrounding AI-Assisted Security

Back in October 2023, 65 British lawmakers joined several nonprofits and charitable groups in calling for an end to the use of live facial recognition technology, echoing the European Union, where the technology was outlawed in the first part of 2023.

Among many others, Big Brother Watch has already criticized facial recognition surveillance cameras in the UK for collecting millions of faceprints without permission, citing this as a dangerous precedent and a threat to civil liberties and privacy.

However, such pledges have not been enough to appease the privacy advocacy group Quadrature du Net, which claims that greater reliance on AI technology would lead to more arbitrary arrests and that the proposal is a slippery slope toward legitimizing the use of expanded surveillance technologies on the general public.
