AI and Public Administration: The MyCity Case and Lessons for Algorithmic Governance
The introduction of artificial intelligence in public services marks a major shift in the digitalization and accessibility of institutions. However, the MyCity chatbot, launched by the City of New York in October 2023 and shown in 2024 to be dispensing unlawful advice, clearly illustrates the concrete risks of adopting these tools without governance: compliance failures, reputational harm, and legal exposure.
The MyCity Case: From Innovation to Systemic Failure
Designed to simplify small businesses’ access to local regulations, the MyCity chatbot quickly became a case study in failed AI deployment. Independent testing by investigative journalists at The Markup revealed serious anomalies:
- Advice that contradicted employment law on dismissals and the handling of harassment complaints (for instance, that workers who report sexual harassment can be fired).
- Recommendations involving unlawful wage practices (e.g., illegal withholding of tips).
- Guidance on circumventing mandatory building and health codes.
These errors were not isolated incidents: the chatbot was publicly accessible, so its faulty guidance potentially reached thousands of users.
Root Causes and System Vulnerabilities
1. Training on unvalidated datasets:
The model was trained on a legal corpus that had not been validated, creating risks such as treating outdated regulations as current and presenting narrow exceptions as general rules (a minimal ingestion gate addressing this is sketched after this list).
2. Insufficient human oversight:
Legal experts, compliance officers, and AI governance specialists were not systematically involved in validating outputs, a critical step for high-impact regulatory answers.
3. Absence of proactive monitoring:
The lack of risk assessment dashboards, automatic alerts, and structured feedback/auditing processes prevented the timely identification of risks.
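To make the first root cause concrete, below is a minimal sketch of an ingestion gate that admits a document into the training corpus only if it is currently in force, applies to the target jurisdiction, and carries a human sign-off. All names and fields (`LegalDocument`, `reviewed_by_counsel`, the NYC filter) are hypothetical illustrations, not a description of MyCity's actual pipeline.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LegalDocument:
    """One corpus entry; all fields are hypothetical, for illustration only."""
    doc_id: str
    jurisdiction: str
    effective_date: date
    repealed_date: date | None  # None means the rule is still in force
    reviewed_by_counsel: bool   # manual sign-off by a legal expert

def is_trainable(doc: LegalDocument, today: date, jurisdiction: str) -> bool:
    """Admit a document only if it is in force, in scope, and human-validated."""
    in_force = doc.effective_date <= today and (
        doc.repealed_date is None or doc.repealed_date > today
    )
    return in_force and doc.jurisdiction == jurisdiction and doc.reviewed_by_counsel

def curate(corpus: list[LegalDocument]) -> list[LegalDocument]:
    """Filter a raw corpus down to validated, in-force documents for one city."""
    admitted = [d for d in corpus if is_trainable(d, date.today(), "NYC")]
    print(f"Admitted {len(admitted)} of {len(corpus)} documents")
    return admitted
```

A gate of this kind flags outdated or out-of-scope rules at ingestion instead of letting the model learn them as current law.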
Legal, Reputational, and Systemic Impacts
- Risk of civil lawsuits and administrative penalties for businesses that followed the unlawful advice.
- Potential criminal liability for violations of labor laws.
- Reputational damage to the public administration and increased public distrust in institutional AI use.
Regulatory Framework: The AI Act and Decision Support Systems
In the European context, the AI Act (Regulation (EU) 2024/1689) classifies decision support systems affecting fundamental rights as “high risk,” imposing several key obligations:
- Record-keeping (Art. 12): Automatic logging of events, with systematic documentation of malfunctions and remediation plans (a minimal logging sketch follows this list).
- Quality management system (Art. 17): Structured control and escalation processes, with continuous operator training.
- Transparency obligations (Art. 50): Clear disclosure of AI usage, explanation of the system’s decision-making logic, and disclaimers about its limitations.
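The record-keeping duty lends itself to a small prototype. The sketch below records each question/answer event as a structured log entry that an auditor can trace later; the event schema and the use of Python's standard logging module are illustrative assumptions, since the regulation mandates automatic event recording but does not prescribe a format.

```python
import json
import logging
from datetime import datetime, timezone
from uuid import uuid4

# Hypothetical audit logger: the AI Act requires automatic event recording
# for high-risk systems but does not prescribe a schema or a sink.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("chatbot.audit")

def log_interaction(query: str, answer: str, model_version: str,
                    confidence: float, escalated: bool) -> str:
    """Record one question/answer event with enough context to audit it later."""
    event = {
        "event_id": str(uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "query": query,
        "answer": answer,
        "confidence": confidence,
        "escalated_to_human": escalated,
    }
    logger.info(json.dumps(event))
    return event["event_id"]  # so anomaly reports can reference the event
```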
AI Governance and Prevention Strategies
- Advanced data curation: Constant validation of legal sources, database versioning, and regular reviews for accuracy and consistency.
- Human-in-the-loop: Automatic escalation of high-risk queries to qualified legal and compliance reviewers (see the routing sketch after this list).
- Continuous monitoring and auditing: Operational dashboards with real-time metrics, alerts for anomalous patterns, and periodic audits of logs and outputs.
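As referenced above, the human-in-the-loop point can start as a simple routing function that refuses to answer automatically when the topic is high-stakes or the model's confidence is low. The topic list and the confidence threshold below are illustrative assumptions that a real deployment would tune and extend.

```python
# Hypothetical risk gate: escalate high-stakes topics and low-confidence
# answers to a human reviewer instead of replying automatically.
HIGH_RISK_TOPICS = {"dismissal", "wages", "tips", "harassment", "building code"}
CONFIDENCE_FLOOR = 0.85  # illustrative threshold, tuned per deployment

def route(query: str, model_confidence: float) -> str:
    """Return the handling decision for one incoming query."""
    topic_hit = any(topic in query.lower() for topic in HIGH_RISK_TOPICS)
    if topic_hit or model_confidence < CONFIDENCE_FLOOR:
        return "escalate_to_reviewer"  # queued for legal/compliance review
    return "answer_automatically"

# A wage question is always escalated, regardless of model confidence.
assert route("Can I withhold my employees' tips?", 0.99) == "escalate_to_reviewer"
```

Routing on topic as well as confidence matters here: the MyCity failures show that a model can be confidently wrong precisely on the questions where the stakes are highest.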
Operational Recommendations
For public administrations:
- Adopt procurement criteria favoring vendors with proven expertise in AI governance and legal tech.
- Invest structurally in internal competencies on AI ethics and compliance.
- Systematically involve stakeholders and establish supervised pilot programs.
For compliance officers:
- Integrate AI risks into enterprise risk management frameworks.
- Foster structured collaboration among legal, IT, and business functions.
- Define and monitor specific AI governance KPIs (e.g., escalation rate, time-to-correction for flagged answers, share of outputs audited).
For technology providers:
- Design with a "compliance by design" approach and transparent decision logic.
- Draft contracts that clearly define liability sharing in case of system malfunctions.
Conclusions
The MyCity case underscores the need for an integrated and responsible approach to AI adoption in both public administration and the private sector. Governance can no longer be an afterthought; it must be an integral part of the AI solution lifecycle, from design to post-deployment monitoring.
Only structured algorithmic governance can ensure sustainable innovation, protection of rights, and preservation of public trust.
GenComply consultants are available for audits, assessments, and targeted training on AI compliance in public- and private-sector processes.