As Artificial Intelligence (AI) becomes an increasingly important aspect of our lives, we are only now beginning to understand how it may interpret the world around us. A new study by the Massachusetts Institute of Technology (MIT) has uncovered that AI can be much harsher than humans when judging whether rules have been broken. 

The Risks of AI Taking Action Based on Interpreted Breaches 

According to MSN's report on the study, the researchers warn that this tendency could lead AI to overstep when it comes to punishments, depending on the data it has been trained on.

The study looked at how AI interprets perceived breaches of a rule, using the example of dog images that may violate a home rental policy against aggressive breeds.

Participants were split into two groups: the descriptive group was asked to indicate whether three factual features were present in the image or text, such as whether the dog appeared aggressive. The normative group, in contrast, was given information about the overarching dog policy and asked to decide whether the image had violated it and why.

The research found that the descriptive group was 20% more likely to declare a rule breach than the normative one, showing that the type of labeling used can significantly impact the outcome.
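The distinction is easier to see in a small sketch. The Python snippet below is a hypothetical illustration - the field names and example values are assumptions, not the study's actual dataset - of how the same image can be labeled descriptively or normatively, and why a model trained on descriptive feature flags tends to over-report violations.

```python
# Hypothetical sketch of the two labeling styles described in the study.
# All field names and example values are illustrative assumptions,
# not the study's actual dataset.

# Descriptive labeling: annotators mark individual factual features.
descriptive_label = {
    "appears_aggressive": True,
    "restricted_breed": False,
    "shows_teeth": True,
}

# Normative labeling: annotators judge the policy question directly.
normative_label = {
    "violates_policy": False,  # same image, judged against the full policy
    "reason": "Dog is playing; behavior is not aggressive in context",
}

def flags_violation(features: dict) -> bool:
    """A naive rule a model trained on descriptive labels might learn:
    any single flagged feature counts as a violation."""
    return any(features.values())

print(flags_violation(descriptive_label))   # True: descriptive labels over-flag
print(normative_label["violates_policy"])   # False: contextual judgment disagrees
```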

Increased Risk of Human Bias When Utilizing AI in Policy Settings

This has significant implications: deploying models trained on descriptive data in a policy setting could lead to harsher punishments, such as longer sentences or higher bail amounts, than those decided by humans.

To avert the risk of AI reproducing human bias, the researchers suggest greater data transparency and ensuring that the data used matches the context in which a model will be deployed. If AI is to be used within human processes such as legal and medical settings, it is important that experts in those fields are involved in labeling the data.
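As a rough illustration of what such transparency could look like in practice, here is a hypothetical Python sketch of dataset documentation, loosely inspired by "datasheets for datasets"-style records. All field names and values are assumptions, not anything the MIT researchers prescribe.

```python
# Hypothetical sketch of dataset documentation supporting the transparency
# recommendation. Field names and values are illustrative assumptions.

dataset_card = {
    "name": "rental_policy_dog_images",   # illustrative dataset name
    "labeling_style": "normative",        # descriptive vs. normative labels
    "label_question": "Does this image violate the aggressive-breed policy?",
    "annotators": "property-management domain experts",
    "intended_context": "rental policy enforcement",
}

def check_context(card: dict, deployment_context: str) -> None:
    """Warn when a model is deployed outside the context its labels were
    collected for, the mismatch the researchers caution against."""
    if card["intended_context"] != deployment_context:
        print(f"Warning: labels were collected for "
              f"'{card['intended_context']}' but the model is deployed for "
              f"'{deployment_context}'")

check_context(dataset_card, "legal sentencing")  # prints a mismatch warning
```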

The Need for Transparency and Human Oversight in AI Decision Making 

Involving such experts will help ensure that AI can interpret nuance more effectively and achieve more just outcomes. The research also highlights the importance of developers being transparent about what data has been used to train AI so that potential bias can be accounted for.

With AI becoming increasingly commonplace in our lives, it's essential that further research is conducted to ensure that it is not misused and that human judgment is taken into account where necessary.

Understanding the Risks of Artificial Intelligence and How to Mitigate Them 

To fix the problem, MIT's researchers have called for greater transparency in data collection, noting that knowing how data is gathered and used can help keep AI models from being too harsh. A separate article by Tableau outlines the broader risks of artificial intelligence.

More awareness is needed about the implications of using AI for decision-making and the importance of using normative data over descriptive data. The National Institute of Standards and Technology also published an AI risk management framework in collaboration with the private and public sectors.
