AI and Recruiting: The Workday Case and the New Urgency of Algorithmic Governance

The adoption of artificial intelligence systems in recruitment is transforming HR, but it also exposes organizations to increasingly concrete regulatory, reputational, and operational risks.

A recent legal precedent in the United States, Mobley v. Workday, marks a turning point for anyone using AI platforms for automated candidate evaluations---even in SaaS mode.


The Workday Case: Legal and Operational Implications

In May 2025, a California federal court granted preliminary certification to a collective action against Workday, one of the world's leading HR-tech providers. The lawsuit, filed by Derek Mobley, alleges age discrimination against candidates over 40 excluded from selection processes screened by Workday's AI tools from 2020 onward; it is this age claim that the court preliminarily certified on a collective basis.

A key point: the judge acknowledged the possibility of direct liability for SaaS vendors as well---not only for the companies running the recruitment processes. Claims regarding racial and disability discrimination remain open but have not yet been certified.

The litigation could impact hundreds of millions of applications and has already attracted the attention of regulators (including the EEOC), industry stakeholders, and specialized media.


Risks for Organizations

Legal implications:

  • A precedent that extends potential liability to SaaS vendors as well as their corporate clients.

  • Exposure to class actions and regulatory investigations in the absence of independent controls and audits.

Operational and reputational implications:

  • Potential disruption of HR operations due to extraordinary audits and reviews.

  • Negative impact on corporate reputation and employer branding.

  • Increased scrutiny from investors, stakeholders, and the media.


The Causes: How Algorithmic Bias Can Spread

  • Biased historical data: AI learns from datasets that often reflect past biases in hiring processes.

  • Lack of independent audits: Many systems are validated only internally, with no third-party checks on fairness and non-discrimination metrics (a minimal fairness check is sketched after this list).

  • Poor data drift management: Lack of oversight on data changes can worsen discriminatory behavior over time.
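As an illustration of the kind of third-party check mentioned above, the sketch below computes selection rates by age band and the ratio between them, using the 0.8 "four-fifths rule" often applied as a rough screening threshold in US adverse-impact analysis. The column names, toy data, and threshold are illustrative assumptions, not details drawn from the Workday case or any specific platform.

```python
# Minimal sketch of an adverse-impact check on automated screening outcomes.
# Column names ("age_band", "advanced") and the 0.8 threshold are illustrative
# assumptions, not taken from any specific vendor or from the litigation itself.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of candidates in each group who passed the automated screen."""
    return df.groupby(group_col)[outcome_col].mean()

def adverse_impact_ratio(rates: pd.Series, reference_group: str) -> pd.Series:
    """Each group's selection rate divided by the reference group's rate."""
    return rates / rates[reference_group]

if __name__ == "__main__":
    # Toy data: one row per application, flagged by age band and screening outcome.
    applications = pd.DataFrame({
        "age_band": ["under_40", "under_40", "under_40", "40_plus", "40_plus", "40_plus"],
        "advanced": [1, 1, 0, 1, 0, 0],
    })
    rates = selection_rates(applications, "age_band", "advanced")
    ratios = adverse_impact_ratio(rates, reference_group="under_40")
    for group, ratio in ratios.items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In practice such a check would run on real application logs, cover every protected characteristic the organization is required to monitor, and be paired with appropriate statistical testing rather than a single ratio.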


The AI Act: New Standards for European Compliance

The European regulatory landscape is moving quickly in the same direction.

The AI Act classifies automated hiring systems as high-risk, introducing detailed and enforceable obligations for all organizations adopting these technologies:

  • Preliminary impact assessment on risks of bias and discrimination.

  • Documentation and traceability of data, methodologies, and automated decision-making processes (illustrated by the audit-record sketch after this list).

  • Meaningful human oversight and mechanisms to contest algorithmic decisions.

  • Mandatory regular audits and up-to-date fairness metrics.
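To make the documentation and traceability obligation concrete, the sketch below shows one possible shape for a per-decision audit record, appended to a JSON-lines log. The field names and storage format are assumptions for illustration only; the AI Act requires traceability but does not prescribe a specific schema.

```python
# Illustrative audit-trail record for an automated screening decision.
# Field names and the JSON-lines format are assumptions; the AI Act does not
# prescribe a specific schema, only that decisions remain traceable.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ScreeningDecisionRecord:
    application_id: str
    model_version: str          # exact model/configuration that produced the decision
    features_used: list[str]    # inputs considered, for later documentation requests
    score: float
    decision: str               # e.g. "advance" or "reject"
    human_reviewer: str | None  # who exercised oversight, if anyone
    timestamp: str

def log_decision(record: ScreeningDecisionRecord, path: str = "screening_audit.jsonl") -> None:
    """Append the record to an audit log so every automated decision stays traceable."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    log_decision(ScreeningDecisionRecord(
        application_id="APP-0001",
        model_version="ranker-2025.05",
        features_used=["years_experience", "skills_match"],
        score=0.42,
        decision="reject",
        human_reviewer=None,  # a gap that a compliance audit should surface
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
```

A record of this kind also supports the contestability requirement: when a candidate challenges an exclusion, the organization can reconstruct which model version, inputs, and reviewers were involved.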


Essential Compliance Checklist (AI & Recruiting)

  • Is your applicant tracking system (ATS) or AI tool regularly subject to independent audits?

  • Are training datasets balanced and free from known systemic biases? (See the balance-check sketch after this checklist.)

  • Are automated decisions fully traceable and documented?

  • Is there a structured process for candidates to challenge exclusions?

  • Does your vendor ensure compliance with the AI Act and national regulations?
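One way to approach the dataset-balance question above is to compare each group's share of the training data against a reference benchmark, such as the relevant applicant or labour-market population. The sketch below does this with illustrative group labels; the reference shares and the 20% relative tolerance are assumptions, not values defined by the AI Act or any regulator.

```python
# Minimal sketch of a training-data balance check.
# Group labels, reference shares, and the tolerance are illustrative assumptions.
from collections import Counter

def group_shares(groups: list[str]) -> dict[str, float]:
    """Fraction of training records belonging to each group."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def flag_imbalance(observed: dict[str, float],
                   reference: dict[str, float],
                   tolerance: float = 0.20) -> dict[str, str]:
    """Flag groups whose share deviates from the reference by more than the relative tolerance."""
    flags = {}
    for group, ref_share in reference.items():
        obs_share = observed.get(group, 0.0)
        deviation = abs(obs_share - ref_share) / ref_share
        flags[group] = "REVIEW" if deviation > tolerance else "ok"
    return flags

if __name__ == "__main__":
    training_groups = ["under_40"] * 800 + ["40_plus"] * 200   # toy training set
    reference_shares = {"under_40": 0.6, "40_plus": 0.4}        # assumed benchmark population
    observed = group_shares(training_groups)
    print(observed)                          # {'under_40': 0.8, '40_plus': 0.2}
    print(flag_imbalance(observed, reference_shares))
```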


Operational Recommendations

  • Recurring third-party audits and impact assessments, rather than one-off efforts.

  • Training and updates for HR, IT, and compliance teams on evolving regulations.

  • Legal review of SaaS provider contracts, with focus on liability clauses.

  • Incident response plans for promptly addressing reports of discrimination or algorithmic failures.


Conclusions

The Workday case represents a paradigm shift in managing AI-related risks in recruiting. In a context of rapidly evolving regulation and growing reputational exposure, algorithmic governance must be solid, proactive, and transparency-driven.

Addressing this now means protecting your organization from legal risks, ensuring compliance with emerging EU regulations, and reinforcing the trust of candidates, investors, and stakeholders.

To assess the compliance of your AI-based recruitment systems, GenComply consultants are available for audits, impact assessments, and targeted training.