The Bank of England, which oversees the United Kingdom's (UK) financial stability, has announced that next year it will comprehensively assess the potential risks posed by the widespread adoption of artificial intelligence (AI) and machine learning in the financial services sector.

According to the Associated Press, the financial services sector accounts for approximately 8% of the British economy and is extensively connected to global markets.

Bank of England to Review Risks That AI Poses to UK Financial System

As outlined in its half-yearly Financial Stability Report, the Bank of England plans to seek guidance on the implications of integrating AI and machine learning technologies into financial services.

The bank's Financial Policy Committee, tasked with identifying and monitoring risks, will collaborate with other authorities to ensure the resilience of the UK financial system against risks stemming from the broader use of AI and machine learning.

Bank of England Governor Andrew Bailey emphasized the need for a cautious approach, acknowledging the transformative potential of AI while recognizing its profound implications for economic growth, productivity, and the shaping of economies in the future. 

He stressed the importance of embracing AI while underscoring the need to do so with awareness. Over the past year, the discourse on AI has shifted, with greater acknowledgment of both its advantages and its potential risks.

The growing popularity of AI models such as OpenAI's ChatGPT and other chatbots has spurred worldwide competition to establish regulatory frameworks for AI. Leaders in the European Union are collaborating to reach a consensus on groundbreaking AI regulations. 

Bailey emphasized that firms using AI are responsible for understanding the tools they employ, stressing the critical need for comprehension rather than treating the technology as a mere source of risk.

While acknowledging that he is not an AI expert, Bailey expressed optimism about the tremendous potential of these technologies.

AI Age Checks

In related news, the UK's internet regulator, Ofcom, has introduced draft guidelines outlining stringent measures for age verification on pornographic websites to comply with the Online Safety Act.  

This suggests that AI-powered age checks may be just over the horizon. Prompted by research indicating that the average age at which children first encounter online pornography is 13, the proposed regulations aim to protect children from accessing explicit content online.

Under the Online Safety Act, platforms displaying adult content must implement robust "age assurance" mechanisms to ascertain whether a user is a child. 

Ofcom's guidelines emphasize technical accuracy, reliability, and fairness in age checks, outlining various methods, including open banking, photo identification matching, mobile network operator age checks, credit card checks, digital identity wallets, and even AI-assisted webcam assessments.

The guidance sets a high standard for age verification, explicitly rejecting weaker methods such as self-declaration of age, specific online payment methods, and general disclaimers.

It also mandates that pornographic content should not be visible to users before or during the age-check process, and services must prevent content that encourages children to bypass age controls. 
