A new study from the University of the Sunshine Coast (UniSC) sheds light on the escalating privacy and security risks facing the public, staff, customers, and stakeholders as Australian companies rush to embrace generative artificial intelligence (AI).

AI Surge Could Lead to Privacy and Security Risks

According to the study, the surge in AI adoption exposes companies to a range of risks, including mass data breaches and business failures caused by AI models that have been manipulated or "poisoned," whether intentionally or accidentally.

The research urges businesses to consider ethical implications when implementing AI solutions and provides a five-point checklist for ethically integrating AI into their operations.

Dr. Declan Humphreys, a Lecturer in Cyber Security at UniSC, underscores the moral and technical challenges associated with the corporate race to adopt generative AI solutions such as ChatGPT, Microsoft's Copilot, or Google's Gemini.

"The research shows it's not just tech firms rushing to integrate the AI into their everyday work-there are call centers, supply chain operators, investment funds, companies in sales, new product development and human resource management," Humphreys said in a statement.

"While there is a lot of talk around the threat of AI for jobs, or the risk of bias, few companies are considering the cyber security risks. Organizations caught in the hype can leave themselves vulnerable by either over-relying on or over-trusting AI systems," he added. 

Potential Hacking Vulnerabilities

The study, co-authored by experts in cyber security, computer science, and AI, highlights how little consideration is given to potential hacking vulnerabilities in AI models, whether developed in-house or acquired from third-party providers.

Humphreys warns of the risks posed by unauthorized access to user data or alterations to AI model responses, which could lead to data leaks or undermine business decisions. 

Despite the rapid adoption of generative AI, the researchers note that legislation has not kept pace with emerging data protection and AI governance challenges.

The study's five-point checklist for ethical AI implementation emphasizes secure model design, fair data collection processes, robust data storage practices, ethical model retraining and maintenance, and staff training and management.

Humphreys underscores the importance of prioritizing privacy and security in AI implementation, urging businesses to adopt new governance frameworks to mitigate risks to workers, sensitive information, and the public. 

He emphasizes the need for a comprehensive understanding of AI technology and its associated risks to guide responsible AI adoption. 

"The rapid adoption of generative AI seems to be moving faster than the industry's understanding of the technology and its inherent ethical and cyber security risks. A major risk is its adoption by workers without guidance or understanding of how various generative AI tools are produced or managed, or of the risks they pose," Humphreys said.

"Companies will need to introduce new forms of governance and regulatory frameworks to protect workers, sensitive information and the public," he added. 

The findings of the study were published in AI and Ethics.  
