
Can I Sue AI, and Who Do I Sue? An Overview of the BC Law Institute’s New Report on Artificial Intelligence and Civil Liability

As AI technology becomes more widespread, one unintended consequence is that AI will occasionally cause harm. In April 2024, the BC Law Institute, a non-profit law reform organization that undertakes research into how the laws of B.C. can be improved, published its Report on Artificial Intelligence and Civil Liability (the “Report”). The Report explores the current state of AI and its implications for civil liability, and recommends a way forward for recognizing potential legal harms caused by AI.

Civil liability refers to the area of law in which individuals, businesses, and governments bring claims against others to obtain compensation for legal “harms” to persons or property. Think of suing a person for property damage, or suing a city, person, or business for negligence causing physical or financial harm. People, corporate entities, and governments are considered legal persons that can commit legal “wrongs” and can be sued, but the status of an AI technology as a “legal person” that can be sued is far from clear.

AI is the topic of much conversation these days. As the Report notes, pinning down a single definition of AI is difficult. Canada’s Directive on Automated Decision-Making defines AI as: “Information technology that performs tasks that would ordinarily require biological brainpower to accomplish, such as making sense of spoken language, learning behaviours, or solving problems.”1

For the most part, AI is meant to optimize and improve certain tasks or systems. As the Report notes, AI is not generally developed with an intent to do harm, but it may ultimately be used by humans to do harm, or may itself cause harm to humans. AI may also sometimes display unpredictable, original behaviour in pursuit of its objectives, called an “emergence.” Sometimes an emergence produces innovation; other times it generates harmful results. Certain types of AI, such as autonomous systems, are more prone to causing harm.

But who is responsible for harms caused by AI? The Report sets out recommendations for how liability for harm caused by AI should be assigned.

AI is not a “person” (yet), and it has no money to pay compensation. Treating AI like a human decision-maker that causes harm is challenging, because AI “fails” and causes harm differently than a human does. The Report draws on the example of a self-driving car involved in a fatal accident. The AI could not determine whether a pedestrian walking a bicycle across a crosswalk was a person or a fixed object, and only concluded the pedestrian was human at the last second. The AI likely would have correctly identified a pedestrian alone, or a person riding a bicycle alone, but when the two were combined, it made a fatal error. A human would hopefully not make that mistake; but if a human did, that person’s conduct could be reviewed under the law of negligence, and the person could be held accountable to pay damages if found to have been negligent.

After weighing the pros and cons of different ways to assign fault for harm caused by AI, the Report ultimately recommends that fault be assigned to the individual or company with managerial decision-making authority over the operation of the AI system (the “operator”). One argument in favour of holding the operator liable is that it would be unfair to always hold the creator of an AI system liable for harms committed by operators using it; operators could then cause harm without consequence. At the same time, depending on the particular AI, the creator will sometimes be just as responsible as the operator. The Report also recommends against treating an AI system that oversees other AI systems as the responsible party: the operator liable for the harm should always be an individual or a company.

However, there may still be many potential operators involved in managing an AI system, and this can become even more complicated with autonomous systems. There is also sometimes limited explanation or understanding of an AI’s emergence, and the harm the AI causes may not be foreseeable, even though foreseeability is an essential principle of civil liability.

Ultimately, civil liability for harms caused by AI will present challenges and require new legal developments. We are already seeing attempts to regulate the responsible use of AI systems, notably Canada’s Artificial Intelligence and Data Act, tabled in June 2022 but not yet passed. The Report provides an extensive overview of the complications of developing the law in this area, along with well-thought-out recommendations to guide future lawmakers. AI users and luddites alike should read the Report if they are interested in learning more about AI and its potential impact on their lives.

1 Directive on Automated Decision-Making, Canada.ca. For more information on the Directive on Automated Decision-Making, see my recent article on the topic: Are Decision-Makers Being Replaced with AI? (Pushor Mitchell LLP).
