On December 20, 2023, the Federal Court published its Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence. The publication sets out the principles and guidelines the Federal Court will follow in its use of artificial intelligence, which we review in today’s article.
We also highlight a decision of the Court, Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464 (CanLII), in which artificial intelligence was used in assembling the input for the decision under review. The case involved a judicial review, in a citizenship and immigration matter, of a rejected work permit application. The Court held, in essence, that the use of artificial intelligence in that case did not breach the duty of procedural fairness.
The Federal Court publication is reproduced below:
Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence
December 20, 2023
The Federal Court will follow the Principles and Guidelines in this policy when using Artificial Intelligence (AI). The Court will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultations. For greater certainty, this includes the Court’s determination of the issues raised by the parties, as reflected in its Reasons for Judgment and its Reasons for Order, or any other decision made by the Court in a proceeding. For information regarding the use of AI by parties, self-represented litigants and interveners, please refer to the Notice on the Use of Artificial Intelligence in Court Proceedings.
The Federal Court’s Strategic Plan 2020-2025 references the Court’s interest in exploring the use of AI. After consultations with stakeholders, the Court has developed the following principles and guidelines to guide the potential use of AI by members of the Court and their law clerks.
The Court will begin investigating and piloting potential uses of AI for internal administrative purposes through its Technology Committee. For example, the Court will pilot a new process for translating decisions written by members of Court by using a form of AI to translate text. A translator and/or jurilinguist will review these AI-assisted translations to ensure that the translation accurately reflects the original reasons and outcome.
The Court understands the potential benefits, and risks, of using AI. In particular, the Court recognizes that AI can improve the efficiency and fairness of the legal system. For instance, it can assist with tasks such as analyzing large amounts of raw data, aiding in legal research, and performing administrative tasks. This can save time and reduce workload for judges and Court staff, just as it can for lawyers.
Other examples of potential benefits for all stakeholders in the justice system include streamlining aspects of case management, improving the accuracy and thoroughness of legal research, helping self-represented litigants to navigate Court procedures, and supporting alternative dispute resolution.
Alongside these potential benefits, the Court acknowledges the potential for AI to adversely affect judicial independence. The Court also recognizes the risk that public confidence in the administration of justice might be undermined by some uses of AI. The Court will exercise the utmost vigilance to ensure that any use of AI by the Court does not encroach upon its decision-making function.
The Court will continue to consult experts and stakeholders as its understanding of AI evolves.
The following principles will guide the potential use of AI by members of the Court and their law clerks:
Accountability: The Court will be fully accountable to the public for any potential use of AI in its decision-making function;
Respect of fundamental rights: The Court will ensure its uses of AI do not undermine judicial independence, access to justice, or fundamental rights, such as the right to a fair hearing before an impartial decision-maker;
Non-discrimination: The Court will ensure that its use of AI does not reproduce or aggravate discrimination;
Accuracy: For any processing of judicial decisions and data for purely administrative purposes, the Court will use certified or verified sources and data;
Transparency: The Court will authorize external audits of any AI-assisted data processing methods that it embraces;
Cybersecurity: The Court will store and manage its data in a secure technological environment that protects the confidentiality, privacy, provenance, and purpose of the data managed; and,
“Human in the loop”: The Court will ensure that members of the Court and their law clerks are aware of the need to verify the results of any AI-generated outputs that they may be inclined to use in their work.
For the potential use of AI by members of the Court and their law clerks, the Court will adhere to the following guidelines:
The Court will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultation. For greater certainty, this includes the Court’s determination of the issues raised by the parties, as reflected in its Reasons for Judgment and its Reasons for Order, or any other decision made by the Court in a proceeding;
The Court will embrace the Principles listed above in any internal use of AI; and,
If a specific use of AI by the Court may have an impact on the profession or public, the Court will consult the relevant stakeholders before implementing that specific use.
See Federal Court posting here.
Also referenced is the decision in Haghshenas v. Canada (Citizenship and Immigration), 2023 FC 464 (CanLII), in which the Court had the following to say:
 As to artificial intelligence, the Applicant submits the Decision is based on artificial intelligence generated by Microsoft in the form of “Chinook” software. However, the evidence is that the Decision was made by a Visa Officer and not by software. I agree the Decision had input assembled by artificial intelligence, but it seems to me the Court on judicial review is to look at the record and the Decision and determine its reasonableness in accordance with Vavilov. Whether a decision is reasonable or unreasonable will determine if it is upheld or set aside, whether or not artificial intelligence was used. To hold otherwise would elevate process over substance.
 Regarding the use of the “Chinook” software, the Applicant suggests that there are questions about its reliability and efficacy. In this way, the Applicant suggests that a decision rendered using Chinook cannot be termed reasonable until it is elaborated to all stakeholders how machine learning has replaced human input and how it affects application outcomes. I have already dealt with this argument under procedural fairness, and found the use of artificial intelligence is irrelevant given that (a) an Officer made the Decision in question, and that (b) judicial review deals with the procedural fairness and or reasonableness of the Decision as required by Vavilov.
The “Vavilov” decision noted above is the Supreme Court of Canada’s decision in Canada (Minister of Citizenship and Immigration) v Vavilov, 2019 SCC 65 [Vavilov], which sets out the standard of review applied in Haghshenas:
 A reasonable decision is one that exhibits the hallmarks of justification, transparency and intelligibility, and is justified in the context of the applicable factual and legal constraints: Canada (Minister of Citizenship and Immigration) v Vavilov, 2019 SCC 65 [Vavilov] at para 99. The party challenging an administrative decision has the burden of showing that it is unreasonable: Vavilov, above at para 100.
Finally, in Jamali v. Canada (Citizenship and Immigration), 2023 FC 1328 (CanLII), the topic of Chinook software was also raised in argument on judicial review. The arguments are noted below:
(2) Use of Chinook
 The applicant submitted that the reasons provided by the officer relied on the use of Chinook. The applicant’s written submissions described Chinook as a processing tool developed by IRCC to speed up officers’ review of the high volume of applications, review the file information, make decisions and generate notes in a fraction of the time it previously took to review the same number of applications. The applicant argued that it should be “presumed that not enough human input has gone into” the review of his file and there was a “lack of effective oversight on the decisions being generated.”
The Court adopted the reasoning in Haghshenas, and the application was dismissed.
See Disclaimer in About Page.