First Case of Sanction for SRL Misusing AI in Quebec
Specter Aviation Limited c. Laprade, 2025 QCCS 3521 is a Superior Court case sanctioning the inappropriate use of AI in court proceedings under article 342 of the Quebec Code of Civil Procedure (C.p.c.), a first in Quebec law. The Court finds that the defendant’s inappropriate use of artificial intelligence constitutes a significant breach in the conduct of the proceeding and condemns him to a $5,000 fine.
So let’s unpack that:
Use of AI
The defendant, a self-represented litigant, used generative AI to prepare his brief and, in a nutshell, the AI hallucinated… at least eight times.
The use of AI in court proceedings is not prohibited in Quebec, but this permitted use calls for proper guardrails and extreme caution, as set out in the Superior Court’s 2023 notice on the use of large language models:
Check that the sources cited in the output come from courts’ websites or other official sources of legal information.
A human should review the generated analysis and verify the validity of the arguments raised.
When in doubt, don’t: you’ll be exposed to the full legal consequences.
Inappropriate use
This case of relying on “fake precedents” involves not a fellow attorney but a self-represented litigant: a citizen who invokes the right to a full defence and already struggles to navigate the complexities of our legal system alone. While I sympathize with self-represented parties, I agree with the Court fully on this point: we can’t accept what is tantamount to forgery of case law in the name of access to justice.
Then what’s appropriate?
Always follow the notices and directives of Courts on AI use, even as a self-represented party.
Think smaller. Instead of throwing the task of constructing a whole brief at ChatGPT, try a tool like CanLII’s genAI summaries to understand complex decisions (though they do not yet cover Quebec).
Generate a list of ideas about your case, then validate each one through human research.
Significant breach of conduct
Making up case law is not acceptable, by any person or machine, in any circumstance. Access to justice can’t be a shield for the misuse of AI. The message is clear: sanctions will apply where appropriate.
Quebec’s Bar Association has a guide on the use of genAI for lawyers. A lawyer relying on fake precedents can face a variety of sanctions, including civil sanctions for abuse of process. Now we know that the same is true for self-represented litigants, even where no professional liability is involved.
Understand that you are swearing to tell the truth, and the whole truth, to the court, not only as a witness but also as your own representative.
Understand that you are liable at law for the AI’s hallucinations. Full stop.
Ok, Julia, get to your point
This case brings home the urgency for legal tech developers to serve litigants directly, especially those who resort to self-representation in our legal system. Legal tech at the service of lawyers is the core of the industry, promising more efficiency for firms and lawyers looking to streamline workflows. But what about developing public interest legal tech that facilitates access to justice? Leveraging technology is part of the solution to access to justice issues in Quebec and around the world. As our government rethinks its digital transformation projects, including digital platforms for the legal system, I see a window of opportunity to help the public gain awareness and skill in self-navigating the legal process.
We need data and research on how people move through the legal system as unaccompanied individuals. If submissions were uploaded through a centralized platform, a simple popup window could remind users of the dangers of unchecked genAI and prompt them to self-categorize the submission as generated or human-made. Knowing where people manage on their own, and where they get stuck, is already half the game. There are ways of baking data collection for research into the next platform. Let’s design the next platform not only to give better access to justice, but to enable research on continuously improving access to justice.
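To make that concrete, here is a minimal sketch, in Python, of what such an intake step could record. Everything in it is a hypothetical illustration (the disclosure categories, the field names, the acknowledgment rule), not a description of any existing court platform:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class AiDisclosure(Enum):
    """Self-declared provenance of a filing (hypothetical categories)."""
    HUMAN_ONLY = "human_only"
    AI_ASSISTED = "ai_assisted"    # e.g., drafting help or summaries
    AI_GENERATED = "ai_generated"  # substantially produced by genAI


@dataclass
class Submission:
    """One uploaded filing, with the disclosure collected at upload time."""
    case_number: str
    filer_is_self_represented: bool
    ai_disclosure: AiDisclosure
    acknowledged_warning: bool  # user confirmed the genAI-risk popup
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


def intake(case_number: str, self_represented: bool,
           disclosure: AiDisclosure, acknowledged: bool) -> Submission:
    """Record a filing; refuse it until the genAI warning is acknowledged."""
    if not acknowledged:
        raise ValueError("The genAI warning must be acknowledged before filing.")
    return Submission(case_number, self_represented, disclosure, acknowledged)


# Example: a self-represented filer declaring an AI-assisted brief.
record = intake("500-17-000000-000", True, AiDisclosure.AI_ASSISTED, True)
print(record.ai_disclosure.value, record.submitted_at.isoformat())
```

The design choice that matters for research is the disclosure field itself: even a voluntary, self-reported flag, aggregated across filings, would give researchers the baseline data on genAI use by self-represented litigants that we currently lack.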
It’s costly to do things right. What court notices and rules on AI use depict as the ideal scenario translates, in technical terms, into a human-in-the-loop approach to training and evaluating ML models. I have a folder of targeted ads received over the last two years offering me remote work as a data labeler and performance evaluator for ML models, for decent compensation. Legalese, as a specialized domain of NLP, drives development costs up immensely. Lawyers and law firms have access to retrieval-augmented generation (RAG), which optimizes output by grounding retrievals in a reliable and authoritative knowledge base. RAG would likely have prevented the case I just discussed from racking up eight separate hallucinations. Unless there is a way to solve the cost issue, ordinary citizens will not have adequate tools to help them navigate the legal system.
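For readers curious what “a reliable and authoritative knowledge base behind the retrievals” looks like in code, here is a minimal RAG sketch in Python. The three-passage corpus, the query, and the stubbed-out model call are illustrative assumptions, not a production pipeline:

```python
# Minimal retrieval-augmented generation (RAG) sketch: TF-IDF retrieval over
# a small, vetted corpus, then a prompt that restricts the model to the
# retrieved passages so it cannot cite sources that do not exist.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for an authoritative knowledge base (e.g., official case law).
CORPUS = [
    "Art. 342 C.p.c. allows the court to sanction significant breaches "
    "noted in the conduct of the proceeding.",
    "The 2023 notice asks parties to verify that AI-cited sources come "
    "from official sources of legal information.",
    "Parties are responsible for the accuracy of the authorities they cite.",
]

vectorizer = TfidfVectorizer()
corpus_matrix = vectorizer.fit_transform(CORPUS)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, corpus_matrix)[0]
    return [CORPUS[i] for i in scores.argsort()[::-1][:k]]


def build_prompt(query: str) -> str:
    """Ground the model: it may only use and quote the retrieved passages."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using ONLY the passages below, quoting them verbatim.\n"
        f"{context}\n\nQuestion: {query}"
    )


# The model call itself is deliberately left out; any LLM would receive
# this grounded prompt instead of free rein to invent precedents.
print(build_prompt("Can a court sanction reliance on fake AI-generated precedents?"))
```

A production system would swap TF-IDF for dense embeddings and a curated legal database, but the principle is the same: the model answers from retrieved, verifiable sources rather than its own imagination, which is exactly what was missing in Specter Aviation.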