
With two weeks left in a formal public consultation closing May 29, the UK's Information Commissioner's Office (ICO) is warning employers across Britain that AI-powered hiring tools — used to screen CVs, rank candidates, and analyse video interviews — may already be breaking data protection law if a human is not meaningfully involved in every consequential decision.
The regulator launched the consultation on March 31, 2026, alongside a report titled "Recruitment Rewired," which drew on evidence gathered from more than 30 UK employers between March 2025 and January 2026. Its central finding: many employers do not recognise that they are carrying out automated decision-making (ADM) at all — and as a result, the legal safeguards that protect candidates are not being applied.
The ICO's action arrives as global evidence of harm from AI hiring tools accumulates rapidly: from peer-reviewed research showing systematic racial and gender bias encoded in widely deployed screening models, to a US federal class action that now threatens to expose the hiring practices of the more than 10,000 companies that use Workday's AI platform. The problem is not theoretical. It is operating at scale, and regulators in the UK, US, and EU are all moving to contain it simultaneously.
Most Employers Audited Are Already Non-Compliant, ICO Finds
The ICO's review found a systemic gap between what employers believe they are doing and what data protection law requires. Under UK GDPR Article 22A, any decision made "solely on automated processing" that has a legal or similarly significant effect on a person triggers mandatory safeguards — including transparency disclosures, the right to request human review, and the right to challenge the outcome.
The problem, the ICO found, is that most employers using AI at the CV-filtering or candidate-scoring stage believe they are using these tools only to "support" a human decision — not to make one. The regulator disagrees. If a hiring manager reviews only the candidates surfaced by an automated shortlist and lacks the time, information, or authority to override it, the ICO's position is that the AI is making the decision, not the human.
The regulator has already written directly to 16 named organisations it identified as likely to be operating outside the rules. Those organisations have committed to acting on the ICO's recommendations. The regulator has the power to issue reprimands and substantial monetary penalties under the Data Protection Act 2018.
The ICO also found that many existing Data Protection Impact Assessments (DPIAs) lack sufficient detail to satisfy legal requirements. It further expects employers to conduct regular bias testing, and to ask vendors directly about the frequency and methodology of that testing before deploying any tool.
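The ICO's draft guidance does not prescribe a single bias-testing methodology. One widely used baseline check, the impact-ratio or "four-fifths" test borrowed from US employment practice (and echoed in New York City's audit regime, discussed below), compares each group's selection rate against that of the highest-rated group. A minimal sketch in Python, with hypothetical group labels and counts:

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's rate to the best-performing group's.
    Under the four-fifths heuristic, ratios below 0.8 warrant scrutiny."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical outcomes: (group label, passed the automated screen?)
outcomes = ([("group_a", True)] * 48 + [("group_a", False)] * 52
            + [("group_b", True)] * 30 + [("group_b", False)] * 70)

rates = selection_rates(outcomes)
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rates[group]:.2f} ratio={ratio:.2f} [{flag}]")
```

The 0.8 cutoff is a screening heuristic rather than a UK legal threshold; a low ratio is a prompt for investigation, not a compliance verdict.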
What "Meaningful Human Involvement" Actually Requires
The ICO is explicit that a human rubber-stamping an AI-generated shortlist does not meet the standard. A reviewer must have the authority, information, and genuine capacity to change the outcome before a decision takes effect. Clicking "approve" on a list without independent analysis does not qualify.
For engineers and developers building AI hiring systems, the ICO's draft guidance signals that tools must surface explanations, uncertainty estimates, and override pathways that make human review substantive rather than ceremonial. Audit trails documenting genuine human engagement at each hiring stage are likely to become a baseline compliance expectation once final guidance is published after May 29.
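What that looks like in implementation terms will stay open until the final guidance lands, but one plausible building block is a decision record that captures both the model's output and the reviewer's engagement with it. A minimal sketch, with illustrative field names that are not drawn from the ICO's text:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ScreeningDecision:
    """One candidate's passage through an automated screening stage.

    Captures what the model recommended, what the reviewer saw, and
    whether the reviewer exercised real override authority. Field
    names are illustrative, not taken from the ICO's draft guidance.
    """
    candidate_id: str
    stage: str                    # e.g. "cv_filter", "video_interview"
    model_score: float            # the tool's raw recommendation
    model_explanation: str        # candidate-facing reasons for the score
    model_uncertainty: float      # e.g. width of a calibrated interval
    reviewer_id: Optional[str] = None
    review_seconds: Optional[int] = None
    reviewer_overrode: bool = False
    final_outcome: Optional[str] = None
    reviewed_at: Optional[datetime] = None

    def record_review(self, reviewer_id: str, outcome: str,
                      seconds_spent: int, threshold: float = 0.5) -> None:
        """Log a human review. An override is any outcome that differs
        from what the model score alone would have produced; the 0.5
        threshold is a stand-in for the tool's own decision boundary."""
        self.reviewer_id = reviewer_id
        self.review_seconds = seconds_spent
        self.final_outcome = outcome
        self.reviewer_overrode = (outcome == "advance") != (self.model_score >= threshold)
        self.reviewed_at = datetime.now(timezone.utc)
```

An audit trail built from records like this can answer the question the ICO's test implies: does the reviewer ever disagree with the machine? A log in which no one ever overrides the model is itself evidence that the human involvement is ceremonial.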
The ICO also identified inconsistency as a distinct legal risk: where human review is applied to some candidates but not others at the same hiring stage, the regulator treats that as a breach of fair treatment obligations under UK GDPR.
Bias Encoded in Training Data: The Root of the Problem
The most pervasive failure in AI hiring tools is structural: models learn from historical hiring decisions, and if a company spent years hiring from a narrow demographic, the model encodes that pattern as the definition of a qualified candidate. Amazon provided the clearest early demonstration of this.
Amazon's machine-learning team began building a CV-screening tool in 2014, training it on a decade of resumes submitted to the company. Because most of those resumes came from men, the system taught itself that male candidates were preferable. It penalised resumes containing the word "women's" (as in "women's chess club captain") and downgraded graduates of all-women's colleges. Even after Amazon patched those specific terms, it could not guarantee the model would not find other discriminatory sorting signals. The project was scrapped in 2017.
That was one internal tool. The same dynamic now operates across third-party platforms used by thousands of employers simultaneously. A University of Washington study of AI-assisted resume screening across nine occupations found that models favoured white-associated names in 85.1% of comparisons, and favoured female-associated names over male-associated ones in only 11.1%. Black male candidates were disadvantaged relative to white male counterparts in up to 100% of direct comparisons.
A follow-up study published in May 2025 by researchers at the University of Hong Kong and the Chinese Academy of Sciences extended those findings to large language models: five leading LLMs systematically awarded lower scores to Black male candidates than to white male candidates with identical qualifications. The researchers concluded the biases were "deeply embedded in how current AI systems evaluate candidates."
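Findings like these typically come from counterfactual audits: the same resume is scored repeatedly with only a demographically associated name varied, and per-group average scores are compared. A simplified sketch of that methodology, where score_resume stands in for whatever screening model is under test and all names and text are placeholders:

```python
import statistics

def name_swap_audit(resume_template: str, name_groups: dict, score_resume) -> dict:
    """Score identical resumes that differ only in the candidate's name.

    name_groups maps a demographic label to names associated with it in
    prior research; score_resume is the screening model under test.
    Returns the mean score per group so gaps are directly comparable.
    """
    results = {}
    for group, names in name_groups.items():
        scores = [score_resume(resume_template.replace("{NAME}", name))
                  for name in names]
        results[group] = statistics.mean(scores)
    return results

def stub_model(text: str) -> float:
    """Placeholder scorer; a real audit would call the model under test."""
    return (len(text) % 7) / 7.0

# Illustrative usage. Real audits, like the studies above, use many name
# variants and many real resumes per occupation, not two placeholders.
resume = "{NAME}\n10 years of software engineering experience..."
groups = {"group_a": ["Name A1", "Name A2"], "group_b": ["Name B1", "Name B2"]}
print(name_swap_audit(resume, groups, stub_model))
```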
Video Interviews: Scoring Facial Expressions and Tone
Beyond resume screening, AI tools deployed in video interviews have scored candidates on attributes with no validated link to job performance — including facial micro-expressions, vocal cadence, and eye contact.
HireVue, one of the largest AI video interview platforms, counting JPMorgan, Goldman Sachs, and IBM among its clients, previously used Affectiva's emotion-recognition technology to analyse facial movements and generate "employability scores." CVS, a HireVue customer, settled a proposed class action in July 2024 after plaintiffs alleged the system measured candidates' "conscientiousness and responsibility" and "innate sense of integrity and honour" through facial analysis, a process the plaintiffs argued was legally equivalent to a lie detector test. Following sustained regulatory and public pressure, HireVue removed facial analysis from its standard configuration; the company states that it now analyses only the transcript of candidate responses.
However, the disability implications of video AI analysis go deeper than facial expression scoring. In March 2025, the ACLU of Colorado filed a complaint against Intuit and HireVue on behalf of an Indigenous deaf woman who had received positive performance feedback during her employment at Intuit. After she submitted an AI-analysed video interview, the system told her she needed to improve her "active listening." The complaint alleges the platform discriminated on the basis of both disability and race, and is pending investigation by the EEOC and the Colorado Civil Rights Division.
EEOC guidance from 2022 had specifically flagged video interview AI as a risk for applicants with disabilities, noting that tools scoring candidates on facial expression, vocal fluency, or eye contact would systematically disadvantage people with autism spectrum conditions, speech impediments such as stuttering, or other conditions affecting speech or movement.
Deliberate Discrimination Automated Into Software
Not all AI hiring failures are emergent artefacts of biased training data. In the case of iTutorGroup, the discrimination was intentional and explicit. In September 2023, iTutorGroup paid $365,000 to settle an EEOC lawsuit after its recruitment AI was found to automatically reject female applicants aged 55 or older and male applicants aged 60 or older. The software rejected more than 200 qualified applicants based solely on age, conduct the EEOC alleged violated the Age Discrimination in Employment Act. EEOC Chair Charlotte Burrows stated at the time: "Employers cannot rely on AI to make employment decisions that discriminate against applicants on the basis of protected characteristics."
The iTutorGroup case was the first EEOC enforcement action targeting AI hiring discrimination. Since then, the legal landscape has shifted substantially — and not in a direction that makes federal enforcement more likely.
Mobley v. Workday: The Class Action That Could Reshape AI Hiring
The most consequential ongoing case is Mobley v. Workday, Inc., now in active litigation in the U.S. District Court for the Northern District of California. Derek Mobley, a Black man over 40 with depression and anxiety, applied for more than 100 positions through companies using Workday's AI screening platform between 2017 and 2022. He was rejected in every instance, in some cases receiving automated rejection emails less than an hour after submitting applications — including one at 1:50 a.m.
In July 2024, Judge Rita Lin ruled that Workday could be held liable as an "agent" of the employers using its platform, writing: "Workday's role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being... Drawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era." This ruling established, for the first time, that AI vendors — not just the employers using their tools — can be directly liable for discriminatory outcomes.
In May 2025, the court granted conditional certification of a nationwide collective action covering all individuals aged 40 and over who applied for jobs through Workday's platform since September 2020. On March 6, 2026, Judge Lin rejected Workday's argument that the Age Discrimination in Employment Act does not protect job applicants — its strongest remaining dismissal argument. Plaintiffs filed an amended complaint on March 30, 2026, reinstating California state and disability claims. The case is now in active discovery.
The exposure extends well beyond Workday itself. The court has ordered Workday to provide a list of its employer clients who used its AI screening features, so that potentially millions of affected applicants can be notified. Workday's platform processes hundreds of millions of applications annually across more than 10,000 client organisations. Workday denies the discrimination claims and maintains that its tools do not make hiring decisions.
A parallel case filed in January 2026 targets Eightfold AI, alleging the company operated as a consumer reporting agency — collecting and scoring applicant data from unverified third-party sources without consent — in violation of the Fair Credit Reporting Act. While Workday establishes that a vendor can be sued as an agent for discriminatory outcomes, Eightfold frames the vendor as a data broker subject to transparency mandates. The two cases, taken together, are described by commentators as a "pincer movement" closing around AI hiring vendors.
The Regulatory Response: Expanding, but Enforcement Remains Uneven
Regulation is accelerating, but enforcement has not kept pace with the scale of deployment.
In the US, New York City's Local Law 144, which requires mandatory annual bias audits, public disclosure of results, and ten business days' advance notice to candidates before an AI tool is used, has been enforceable since July 2023. However, a December 2025 audit by the New York State Comptroller found the enforcement agency had identified just one instance of non-compliance after reviewing 32 companies, while the Comptroller's own review found 17 potential violations. The audit also found that 75% of test calls to the NYC 311 complaint hotline about AI hiring were routed to the wrong agency.
Newer laws are stronger. Illinois's HB 3773, effective January 1, 2026, prohibits AI-based employment discrimination and requires employers to notify candidates when AI is used in any employment decision, with enforcement through the Illinois Department of Human Rights and remedies including back pay and reinstatement. Colorado's AI Act, effective June 30, 2026, requires impact assessments and consumer notification for any high-risk AI used in employment decisions, with the Attorney General holding exclusive enforcement authority.
At the federal level, the picture has regressed. The EEOC's AI and Algorithmic Fairness Initiative, which had filed amicus briefs in support of the Workday plaintiffs, was shut down by executive order in April 2025. The order instructed federal agencies to deprioritise enforcement of disparate-impact liability theories — the very legal framework underlying most AI hiring discrimination claims. Private litigants, however, are not bound by that directive, and the Workday class action continues.
In the EU, the AI Act classifies AI recruitment tools as high-risk from August 2026, requiring conformity assessments before deployment — a layer that sits on top of existing GDPR Article 22 obligations. As Ropes & Gray has noted, a tool that passes the UK's ADM test may not satisfy EU requirements, and vice versa. Multinational employers face three separate legal assessments — UK GDPR, EU GDPR, and the EU AI Act — that must each be satisfied independently.
70% of UK Employers Plan to Expand AI Hiring Use Within Five Years
The stakes extend well beyond the 16 organisations already in the ICO's sights. According to a survey by the Institute of Student Employers, nearly 70% of UK employers anticipate increasing their use of AI in recruitment over the next five years. That expansion is happening under a regulatory framework that the ICO's own review suggests most employers do not yet understand.
The Data (Use and Access) Act 2025, which came into force on February 5, 2026, updated the UK GDPR's ADM provisions and created a clearer framework for organisations that want to use automation lawfully. The ICO's draft guidance is its first detailed interpretation of those updated rules as they apply to recruitment. Organisations that engage with the consultation before May 29 have a direct opportunity to shape the final version of that guidance.
What Employers and Developers Must Do Before May 29
Organisations using AI at any stage of UK recruitment should audit their pipelines now to map where automated scoring, filtering, or ranking occurs. For each stage, they should document what human review looks like in practice — not in policy, but in reality: what information does the reviewer see, how long do they spend, and do they have the access and authority to reject an AI recommendation?
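One practical way to run that audit is to treat it as data: enumerate each pipeline stage and record the facts the ICO's questions probe. A sketch with illustrative fields and thresholds, none of them drawn from the guidance itself:

```python
# A hypothetical inventory of one recruitment pipeline. Field names
# and thresholds are illustrative, not taken from the ICO's guidance.
PIPELINE_AUDIT = [
    {
        "stage": "cv_filter",
        "automation": "vendor model ranks CVs and discards below a threshold",
        "reviewer_sees": "top-N shortlist only, no rejected CVs",
        "median_review_seconds": 45,
        "can_override": False,   # reviewer cannot recover a rejected CV
    },
    {
        "stage": "video_interview",
        "automation": "transcript-based scoring",
        "reviewer_sees": "full transcript plus score and rationale",
        "median_review_seconds": 600,
        "can_override": True,
    },
]

def likely_adm_stages(audit, min_review_seconds=60):
    """Flag stages where, on the ICO's reasoning, the AI rather than
    the human is effectively making the decision: the reviewer either
    cannot change the outcome or plainly has no time to try."""
    return [s["stage"] for s in audit
            if not s["can_override"]
            or s["median_review_seconds"] < min_review_seconds]

print(likely_adm_stages(PIPELINE_AUDIT))   # -> ['cv_filter']
```

Any stage the function flags is a candidate for the Article 22A safeguards described above.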
Developers building or procuring AI hiring tools should verify whether their systems produce candidate-facing explanations and enable override workflows that satisfy the ICO's emerging standard. Vendors should be asked directly about bias-testing frequency and the documentation they can provide. For organisations operating across the US and EU, the compliance audit is more complex: New York City's Local Law 144, Illinois HB 3773, Colorado's AI Act, and the EU AI Act each impose distinct obligations that must be assessed and satisfied separately.
The ICO consultation closes May 29, 2026. Responses can be submitted via the ICO's official consultation page.