Airline Said It's Not Responsible For Terrible Advice From Its Own Customer Service AI Bot. The Court... Disagreed.

Not even ChatGPT could invent a case to make this stick.

Like a lot of companies, Air Canada inserted an AI chatbot into its customer service workflow. The airline no doubt trumpeted the move as “embracing cutting-edge technology as a forward-thinking company” or some such drivel, as opposed to “a bid to lay off a layer of human operators.” In fact, when the airline first instituted the AI, it admitted that the bot cost more than human operators, but considered it an investment in eventually lowering expenses because companies. Because no cost is too high to be able to say you hope to cut spending on people!

In any event, the airline’s chatbot became the first line of contact for a number of passengers seeking customer service and, predictably, it hallucinated like Timothy Leary at a Phish show.

A customer asking about bereavement fares learned that he could purchase tickets at full price and seek a refund within 90 days. When the airline refused to honor this request, he produced a screencap of the airline’s own customer service bot providing this advice, only to be told that the bot was wrong and that the airline planned to do nothing about it. A bold stance, since Canada’s national motto is “Sorry.”

The customer took the airline to small claims court where the corporation decided to double down with a wild legal strategy. Per Ars Technica:

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot’s misleading information because Air Canada essentially argued that “the chatbot is a separate legal entity that is responsible for its own actions,” a court order said.

Yeah… no. The court awarded the customer most of the refund and some associated fees.

But what were they even talking aboot?

Based on the record, the airline hadn’t taken even the basic steps needed to claim that the bot belonged to some separate legal subsidiary. You can’t invent legal entities retroactively unless you’re trying to help corporations get away with crimes against humanity. And even then, all you can do is spin existing liability onto the new entity. As the court put it:

This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.

There are actually good arguments for giving AI legal personhood. Professor David Vladeck argued a few years back that the current legal regime inefficiently allocates damages for AI run amok. There are so many points of contact where AI can go off the rails that it doesn’t make sense to blame the inventor of the algorithm if the problem is, say, the data fed into it or the dingbats using it for the wrong purpose. Making AI a separate legal entity and then creating a mandatory insurance requirement that all stakeholders have to proportionally fund is a better solution for the long run.

But not only is that not currently the law, in Canada or anywhere else, it’s also unclear how this wouldn’t be Air Canada’s fault anyway. Sure, the insurance would have to pay for the screw-up, but the insurer isn’t going to bang on OpenAI’s door when it comes time to raise premiums. This was Air Canada’s breakdown, not the algorithm’s.

Air Canada has, for now, seemingly suspended its AI endeavors.

Air Canada must honor refund policy invented by airline’s chatbot [Ars Technica]


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
