When AI Speaks for the Company: New Liability Risks for Chatbots and Automated Customer Service

The widespread adoption of chatbots and AI systems for customer service is revolutionizing client relations, offering continuous availability, cost reductions, and faster responses. However, careless management of these tools can expose companies to unprecedented legal, financial, and reputational risks. The recent Air Canada case is a clear wake-up call for any organization delegating its official voice to AI.


The Air Canada Case: When the Chatbot Becomes a Source of Corporate Liability

In 2022, a grieving Canadian customer contacted Air Canada’s chatbot to ask about bereavement fare discounts. The chatbot, with no guardrails in place, told him the discounted fare could be applied retroactively, something the airline’s actual policy did not allow. When Air Canada refused the partial refund, the company pointed to generic disclaimers and argued that the chatbot was a "separate" entity whose statements did not bind its official policies.

However, in February 2024 British Columbia’s Civil Resolution Tribunal ruled that:

  • The chatbot officially represents the company.
  • Information provided by the AI is as binding as that of a human employee.
  • Generic disclaimers do not exempt the company from liability for specific errors.

Outcome: Air Canada was ordered to compensate the customer. But the real impact extends far beyond the roughly CA$812 awarded in damages, interest, and fees.


What's Changing: Legal Precedent and the New Standard for Corporate Accountability

  • Direct liability: Companies are responsible for the information their chatbots provide, with no ability to shift blame onto developers or vendors.
  • Equivalence to human staff: AI “hallucinations” are treated as official company statements.
  • Reversed burden of proof: It’s not enough to claim a technical error; companies must prove the existence and effectiveness of safeguards.

In short: AI is no longer an "experimental channel" but an official representative in customer service and public communications.


The Problem of AI “Hallucinations”: Why Active Governance Is Necessary

Hallucinations in large language models are not exceptions, but consequences of their predictive nature:

  • Spurious pattern matching: Generation of plausible but incorrect information.
  • Confidence bias: Highly confident answers even when factually unfounded.
  • Poor uncertainty signaling: Fluent language masks the lack of fact-checking (see the sketch after this list).
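
Because the model’s fluent prose carries no built-in signal of how reliable it is, teams often add their own confidence gate. The following is a minimal sketch, assuming the serving stack exposes per-token log-probabilities (many LLM APIs do); the threshold, names, and example values are illustrative, not taken from any specific vendor:

```python
import math
from dataclasses import dataclass

@dataclass
class DraftAnswer:
    text: str
    token_logprobs: list[float]   # per-token log-probabilities returned by the model

def mean_confidence(answer: DraftAnswer) -> float:
    """Average per-token probability: a crude, model-agnostic confidence proxy."""
    if not answer.token_logprobs:
        return 0.0
    return sum(math.exp(lp) for lp in answer.token_logprobs) / len(answer.token_logprobs)

def route(answer: DraftAnswer, threshold: float = 0.80) -> str:
    """Send confident drafts onward; flag everything else for human review."""
    if mean_confidence(answer) < threshold:
        return "ESCALATE_TO_HUMAN"    # low confidence: never show unverified text to the customer
    return "SEND_TO_CUSTOMER"

# A fluent but shaky draft is escalated rather than sent.
draft = DraftAnswer("Bereavement fares can be claimed retroactively.", [-0.9, -1.2, -0.8, -1.5])
print(route(draft))   # -> ESCALATE_TO_HUMAN
```

Average token probability is a blunt proxy, but even a blunt gate keeps the most uncertain answers away from customers until a human has checked them.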

Most exposed sectors:

  • Travel and Hospitality: Constantly evolving policies and high expectations of precision.
  • Financial Services: Strict regulations, risk of incorrect advice.
  • Healthcare and Insurance: Medical advice, insurance coverage—high legal exposure.

Regulatory Landscape: From the AI Act to Customer Protection Practices

The EU AI Act already imposes several key requirements for AI systems interacting with the public:

  • Transparency (Art. 50 of the final text, Art. 52 in the original draft): Users must be told they are interacting with an AI system; in practice this also means explaining its limitations and offering alternative channels for verified information.
  • Accuracy and robustness (Art. 15, for high-risk systems): Rigorous testing, ongoing monitoring, and correction mechanisms.

Governance Strategies and Risk Mitigation

Architectural Safeguards

  • Validated knowledge bases integrated with official databases, approval workflows, and version control.
  • Automated fact-checking and discrepancy flagging for human review.
  • Template responses and fallbacks for out-of-scope queries (see the sketch after this list).
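
To make the architectural idea concrete, here is a minimal sketch of how a validated knowledge base and template fallbacks can work together. The policy entries, substring matching, and fallback wording are placeholders (a production system would use proper retrieval and an approval workflow), not a definitive design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyEntry:
    """A single approved answer, versioned so every response can be traced to a policy release."""
    topic: str
    answer: str
    version: str

# Hypothetical validated knowledge base: only content that has passed an approval workflow.
KNOWLEDGE_BASE = [
    PolicyEntry("bereavement fare",
                "Bereavement fares must be requested before travel; they cannot be applied retroactively.",
                "policy-2024.2"),
    PolicyEntry("baggage allowance",
                "Economy tickets include one checked bag of up to 23 kg.",
                "policy-2024.2"),
]

FALLBACK = ("I can't confirm that from our approved policies. "
            "Let me connect you with a human agent who can help.")

def answer_query(query: str) -> tuple[str, str | None]:
    """Return (reply, policy_version). Ungrounded queries get the fallback, never a guess."""
    q = query.lower()
    for entry in KNOWLEDGE_BASE:
        if entry.topic in q:          # placeholder matching; real systems use retrieval/embeddings
            return entry.answer, entry.version
    return FALLBACK, None             # no grounded answer -> safe template and escalation

reply, version = answer_query("Can I get a bereavement fare discount after my flight?")
print(version, "->", reply)
```

The key property is that the bot can only repeat approved, versioned policy text; anything it cannot ground falls back to a safe template and a handover, never to free-form generation.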

Process-Based Safeguards

  • Human-in-the-loop escalation for high-risk queries.
  • Quality assurance via sampling and customer feedback.
  • Complete logging and audit trails of all AI interactions (see the sketch after this list).
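
As a rough illustration of how these process safeguards combine, the sketch below routes high-risk queries to a human and writes an audit record for every interaction. The keyword list and log format are hypothetical stand-ins for a real risk classifier and logging pipeline:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("chatbot.audit")

# Placeholder list of high-risk topics; a real deployment would use a trained classifier.
HIGH_RISK_TOPICS = ("refund", "bereavement", "medical", "coverage", "legal")

def handle(query: str, draft_ai_answer: str) -> str:
    """Escalate high-risk queries to a human; log every interaction for later audit."""
    high_risk = any(topic in query.lower() for topic in HIGH_RISK_TOPICS)
    outcome = "escalated_to_human" if high_risk else "answered_by_ai"

    # Complete audit trail: what was asked, what the AI drafted, and how it was routed.
    audit_log.info(json.dumps({
        "timestamp": time.time(),
        "query": query,
        "draft_answer": draft_ai_answer,
        "outcome": outcome,
    }))

    if high_risk:
        return "Let me connect you with a colleague who can confirm this for you."
    return draft_ai_answer

print(handle("What is your bereavement refund policy?", "Refunds can be claimed retroactively."))
```

The audit trail is what later allows a company to demonstrate, as the Air Canada ruling effectively demands, that safeguards existed and were actually applied.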

Legal & Compliance

  • Continuous alignment of AI responses with official policies.
  • Regular audits of content and templates.
  • Contractual SLAs, liability clauses, and audit rights over vendor systems.

Best Practices and Success Cases

Emirates Airlines: Hybrid AI-human architecture, automatic escalation for policy-sensitive queries, real-time fact-checking.

  • Result: 97% accuracy, 60% cost reduction, zero legal incidents.

USAA: Multi-level validation, unified source of truth, prioritization of accuracy over speed.

  • Result: +40% customer satisfaction, -85% legal risk, +35% operational efficiency.

Recommendations for Management

  • Risk Assessment Framework: Audit all AI-customer touchpoints, conduct gap analysis, and map risks across jurisdictions.
  • Implementation Roadmap:
    • Immediate: Review disclaimers, set up emergency procedures, train staff.
    • Medium-term: Implement technical safeguards and redesign human-AI processes.
    • Long-term: Build a compliance-ready architecture and lead in accuracy and reliability.
  • Vendor Management: Select vendors based on track record, liability sharing, compliance support, audit rights, and performance guarantees.


Conclusion: AI Customer Service Is (Already) a Matter of Corporate Governance

The Air Canada case marks a turning point: chatbot management is no longer a purely technical issue but a matter of governance, legal oversight, and risk management.

Organizations that want to protect themselves—and unlock the value of AI—must invest in control architectures, training, audits, and strategic legal partnerships.

GenComply supports companies in building AI governance frameworks that protect brands, customers, and stakeholders.

