(Photo: Elderly man thinking while looking at a chessboard. Credit: Pavel Danilyuk via Pexels)

As artificial intelligence (AI) continues to permeate more aspects of human life, there are growing calls for governments to create regulations to prevent misuse and abuse of the technology. Recently, the European Union's Parliament passed the first major global legislation regulating AI, which is likely to influence future regulations in other jurisdictions all over the world.

The AI Act, which the EU's member nations agreed to in December 2023, was endorsed by the members of the European Parliament, with 523 voting in favor, 46 voting against, and 49 abstaining. It is expected to come into effect in a staggered manner, beginning in May 2024.

According to the European Parliament, the regulation "aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field." It also sorts applications of AI technology into levels of risk: uses deemed to pose an unacceptable risk are banned outright, while the rest are classified as high, limited, or minimal risk.

For example, using AI for emotion recognition from a person's face is banned in schools and workplaces. Meanwhile, high-risk AI uses include critical infrastructure, law enforcement, education, employment, and essential private and public services (such as healthcare and banking). These high-risk uses have strict requirements that entities, both governmental and private, must comply with or face significant penalties.

Elin Hauge, a Norway-based AI and business strategist, says the AI Act requires leaders of businesses that use AI to have a high-level understanding of how AI works in relation to their organizations. At its core, AI is about applying mathematics and statistics, in the form of machine learning, to large amounts of data to automate or optimize decisions and outcomes. Hauge says that C-suite executives who outsource their understanding of the role of algorithms are essentially outsourcing their jobs.
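To make that concrete, here is a minimal, hypothetical sketch (not from Hauge or the AI Act) of what "applying statistics to data to automate a decision" can look like in practice: a simple statistical model is fitted to historical records and then used to make a yes/no call on a new case. The loan-approval framing, feature names, and numbers are invented purely for illustration.

```python
# Illustrative sketch only: a statistical model fitted to data, then used to automate a decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical records: [income_in_thousands, debt_ratio] per applicant.
X = rng.normal(loc=[50.0, 0.4], scale=[15.0, 0.15], size=(500, 2))
# Hypothetical past outcomes: 1 = repaid, 0 = defaulted (synthetic labels for illustration).
y = (0.05 * X[:, 0] - 8.0 * X[:, 1] + rng.normal(0.0, 1.0, 500) > 0).astype(int)

# "Applying statistics to data": fit a simple model to the historical records ...
model = LogisticRegression().fit(X, y)
# ... then use it to automate a decision on a new, unseen case.
decision = model.predict([[42.0, 0.55]])[0]
print("approve" if decision == 1 else "decline")
```

The point of the sketch is Hauge's: the automated decision is only as good as the data and the assumptions baked into the model, which is exactly what business leaders are now expected to understand.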

A major example came in 2021, when the Dutch government resigned over a scandal involving an algorithm used to detect childcare benefits fraud. The algorithm penalized people for even the slightest risk factor, imposing huge tax bills on tens of thousands of families and even driving some individuals to suicide.

Hauge, who advises company boards on technology and AI-related issues, also speaks professionally on AI's impact on business and society. She says that if a company implements AI to make decisions and a problem or mistake arises from a lack of safeguards, the AI cannot be held responsible, as it is not a human. All liability still falls on the human decision-makers of the organization.

"At the end of the day, human beings have to take responsibility for their own job. They need to understand the implications of the algorithms and AI models that they are putting in place," Hauge says. "AI doesn't 'think' the way humans do. They approximate patterns in the data that we have forced them to identify. When AI makes bad decisions, it can be difficult to scrutinize exactly why it did. Further, because it was just operating on the data and parameters it has been fed, it cannot be held responsible for those decisions in the same way as a person. The ones that need to be held liable are the people involved in designing and implementing the algorithm."

Hauge strongly agrees with the AI Act's treatment of emotion recognition, which it partly bans and partly classifies as high-risk, because the technology is highly unreliable: different people express emotion differently. For example, individuals on the autism spectrum don't show the same emotional facial responses as neurotypical individuals, cultural differences produce different responses, and personal history, such as past experiences and traumas, has a huge effect on how a person reacts to something. The likelihood of the algorithm getting it wrong is therefore very high. Banning AI in this area also protects the rights of individuals in schools and workplaces, preventing retaliation against, or singling out of, a person just because they exhibit a perceived undesirable response to something a superior said. It is also well known that facial recognition fails more often for people with dark skin than for people with white skin, and more often for women than for men.

Another aspect of the AI Act that Hauge finds interesting is the requirement for providers of large language models and image models to be transparent about the data they used for training. She believes this is going to keep a lot of attorneys very busy for a very long time, not so much for the individual companies that want to use these algorithms, but very much so for the providers of these models. This transparency can help address many of the criticisms leveled against generative AI, such as the theft of copyrighted material.

With the AI Act placing the responsibility firmly in the hands of business leaders, Hauge says that they should take more interest in understanding how AI works for their organizations. She observed that many board members haven't really tried any of their AI tools, so they don't have a clue about how these work. As such, it's time they "get their hands dirty" and try them out. "There are two aspects to this," Hauge says. "They need to understand the changes in the market dynamics as a consequence of these technologies so that they can have the right strategic discussions. And they need to understand the limitations of these technologies so they do not buy snake oil. And there is a lot of snake oil out there."

Leaders also need to understand how AI affects their core business and how to use AI as a toolbox containing many different tools for automating and optimizing it. Once they understand how AI tools can increase the profitability of the core business, they can begin looking at how to expand the use of the toolbox for more innovative purposes.

"For several years, it's been said that 'data is the new oil', but I want to qualify that as 'authentic data is the new oil.' This is because generative AI models are now filling the internet with low-quality data," Hauge says. "By training an algorithm on a stochastic model, you create a new stochastic model based on the previous one. If this continues, with a few more iterations, your stochastic model will become very narrow and very poor quality. As generators continue creating low-quality content, that content is being scraped and reused into new models, resulting in declining quality of every successive model. Thus, companies need to be aware of the data they are training their AI on. The real valuable data is the data that is either generated by machines as part of a value chain or by the humans working in your organization. Not the data generated by generative AI. The authentic data will be extremely valuable, and leaders need to understand that quality data engineering is going to be even more important moving forward."

ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.
* This is a contributed article and this content does not necessarily represent the views of techtimes.com