Court leaders in Canada and the United States are running out of time to decide how artificial intelligence enters the justice system.
A Saskatchewan judge issued a ruling last July that one lawyer called groundbreaking. It dealt with whether police officers accused of wrongdoing can investigate themselves. Six months later, the ruling still isn’t on CanLII. The public can’t read it. Other lawyers can’t easily cite it. For practical purposes, it might as well not exist.
That’s the state of court transparency in Canada in 2026. And while courts on both sides of the border argue about who decides what gets published, artificial intelligence is transforming legal practice anyway. The question for court leaders now is simple: are you going to shape how AI enters the justice system, or are you going to let it happen to you?
I run Caseway, a Vancouver company that reads millions of court decisions and helps lawyers and self-represented litigants navigate them. We were sued by CanLII in late 2024 for building tools on top of Canadian court data. The lawsuit has since been resolved. But the experience taught me something I want court leaders in Canada and the United States to understand: the rules of the road for legal AI are being written right now, in courtrooms and legislatures, by people who may or may not know what they’re building.
The US is further ahead, and the gap is growing
In the United States, PACER provides access to over a billion federal court documents. Harvard Law School built the Caselaw Access Project. Researchers, founders and journalists can actually study the system they operate in. In Canada, about half the provinces and territories have no online court search at all. British Columbia, Manitoba and Nova Scotia require a commercial business like mine to seek court approval before deploying AI tools on their databases. That’s not caution. That’s a moat.
I’m not arguing the American model is perfect. It isn’t. But the gap matters because AI runs on data. If courts in your jurisdiction don’t publish decisions in bulk, machine-readable form, you are guaranteeing that the AI tools used by lawyers in your courts will be trained on data from somewhere else. Your case law, your precedents, your jurisdiction’s nuances: missing.
Countries around the world are moving the other way. France has Judilibre. Britain has the Cambridge Law Corpus. These platforms are fairly standard in Europe now. Canada has a legal data desert.
The risks court leaders can’t delegate
I’ve spent the last two years talking to bar associations, regulators, academics and enterprise clients about AI in law. Here’s what I tell them, and it applies directly to the bench and the court administrators who support it.
AI doesn’t “know” anything. It predicts. Large language models are pattern recognition at scale. The risk isn’t that they’re wrong. The risk is that they sound confident when they’re wrong. Lawyers have already been sanctioned in both countries for filing briefs citing cases that don’t exist. That problem is solvable with the right data pipeline, but only if courts cooperate. Business in Vancouver wrote about how Vancouver AI firms are tackling hallucinations.
If your court publishes selectively, you are privatizing the law. Decisions of precedential value should be easy for anyone to find. When judges alone decide what gets published, the body of law becomes a curated collection rather than a complete record. Academics can’t measure whether like cases are decided alike. Self-represented litigants can’t see how similar disputes have resolved. Small law firms can’t compete with the firms that can afford Westlaw and LexisNexis subscriptions. Justice becomes a function of who can pay.
You should also know that the academic community wants to help. Caseway has research partnerships with Dr. Vered Shwartz at UBC, SFU’s faculty of computing, Northeastern University, and the University of the Fraser Valley. These aren’t consulting relationships. They’re researchers who want to study how legal reasoning works and whether the system delivers on its promises. Court leaders should be inviting that scrutiny, not hiding from it.
A practical starting point
If I were advising a chief justice or court administrator today, I’d say this: pick one workflow, not the whole system. Start with something boring. Timely posting of written decisions. Machine-readable formats for appellate rulings. A clear policy on when oral decisions get transcribed. Put a human in the loop for review. Make one person accountable for the outcome.
Then measure. Count the decisions published within thirty days of release. Count the decisions still sitting in a judge’s queue six months later. Count the jurisdictions where a litigant can actually find the record of their own case online. If you can’t measure it, you can’t improve it.
The real choice for court leaders
AI is a force multiplier. It is going to make lawyers faster, make research cheaper, and make it possible for regular people to understand what’s happening in their own cases. That is happening whether or not courts cooperate. The only question is whether the systems being built on top of the law will be trained on complete, authoritative data or on whatever scraps developers can assemble elsewhere.
Courts exist to serve the public. Judges are paid by taxpayers to produce decisions that shape how the country is governed. Those decisions belong to the public. Not to vendors, not to databases operating at their own pace, and not to individual judges choosing whether to hit publish.
Court leaders in Canada and the United States have a decision to make in 2026. Lead on this, or get led by it.
Alistair Vigier is the CEO of Caseway, a Vancouver-based legal AI company. He previously worked in divorce law, and served seven years in the Canadian Army.

