How artificial intelligence will change administrative law

Most media attention on artificial intelligence today focuses on efficiency gains and job-market impacts, the implications for privacy and data protection, and the commercial risks created by AI-assisted decision making.

However, the impact of artificial intelligence on administrative decisions by governments and governmental agencies is equally profound and deserves more attention.

In April 2023, the federal government updated its 2020 “Directive on Automated Decision-Making” (the “Directive”) following a stakeholder review intended to adapt the Directive to the current Canadian and global AI landscape and to the evolving risks those changes create.

The Directive is the first national policy focused on algorithmic and automated decision-making in public administration. It tells us a lot about how the government understands the impact of AI systems on decision making and highlights some of the administrative law problems created by their use.

The government developed the Directive based on AI guiding principles adopted by leading digital nations at the D9 conference in November 2018.

The principles include: (1) understanding and measuring the impact of using AI; (2) ensuring transparency about how and when AI is used; (3) providing meaningful explanations about AI decision-making; (4) remaining open to sharing source codes and relevant data while protecting personal data; and (5) providing sufficient training for the use and development of AI solutions.

However, experience reveals many areas where quality breaks down in practice. Data quality, for example, is a significant issue for many organizations and is likely to pose similar problems for government systems. Poor data quality can produce incorrect statistical analyses, spurious correlations, misleading inputs, and faulty outcomes, as the sketch below illustrates.
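To make the point concrete, here is a minimal, hypothetical sketch of how one common data-quality defect, duplicated records from a faulty data feed, can skew a statistic that an automated decision system might rely on. All names and figures are invented for illustration and are not drawn from any real government system.

```python
# Hypothetical example: duplicate application records inflate an approval
# rate that an automated decision system might use as a benchmark.
from statistics import mean

# Invented application records: (applicant_id, approved)
records = [
    ("A-001", 1), ("A-002", 0), ("A-003", 1), ("A-004", 0),
    ("A-003", 1), ("A-003", 1),  # duplicate rows from a faulty data feed
]

def approval_rate(rows):
    """Share of rows marked approved, with no data-quality checks."""
    return mean(approved for _, approved in rows)

def deduplicated_approval_rate(rows):
    """Same statistic after a basic quality check: one row per applicant."""
    unique = {applicant_id: approved for applicant_id, approved in rows}
    return mean(unique.values())

print(f"Raw approval rate:          {approval_rate(records):.2f}")           # 0.67
print(f"Deduplicated approval rate: {deduplicated_approval_rate(records):.2f}")  # 0.50
```

Even this trivial defect moves the headline figure from 50 per cent to 67 per cent; in a real system, the same kind of error could silently shift thresholds that affect individual outcomes.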

Without robust safeguards, data can also be manipulated, intentionally or not, to present a distorted picture of an issue, both by decision makers and by parties with a vested interest in a particular outcome in the regulatory process. One simple form of safeguard is sketched below.
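As one illustration of what a technical safeguard against tampering might look like, the sketch below records a cryptographic fingerprint of a dataset so that later alterations can be detected. This is an assumption-laden example, not a description of any safeguard the Directive actually mandates; a real system would pair such checks with access controls and audit logging.

```python
# Illustrative safeguard: fingerprint a dataset so later tampering is detectable.
import hashlib
import json

def dataset_fingerprint(rows):
    """Deterministic SHA-256 fingerprint of a dataset's canonical JSON form."""
    canonical = json.dumps(rows, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Invented records for illustration only.
original = [{"applicant": "A-001", "score": 72}, {"applicant": "A-002", "score": 65}]
fingerprint = dataset_fingerprint(original)

# Later, before the data feeds a decision, re-verify it has not changed.
tampered = [{"applicant": "A-001", "score": 72}, {"applicant": "A-002", "score": 85}]
assert dataset_fingerprint(original) == fingerprint
assert dataset_fingerprint(tampered) != fingerprint  # alteration is detectable
```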

Employee training will be crucial, both to protect against misleading uses of data and incorrect inferences and deductions, and to ensure effective vetting and oversight. The higher a system's Algorithmic Impact Assessment (AIA) impact level, the greater the training required of those who use it.

The Directive requires institutions to provide clients with recourse to challenge administrative decisions. It also requires publication, on a government website, of information on the effectiveness and efficiency of the automated decision system in meeting program objectives.

While the Directive is a useful starting point, the impact of AI-assisted decision making on both procedural fairness and the substantive aspects of a decision is untested in Canada.

The Directive clarifies many of the key areas for judicial scrutiny and will be a useful guide for those wishing to understand how to interrogate decisions made or assisted by automated decision systems such as AI systems.

However, there is little doubt these systems will introduce entirely new categories of risk in administrative decisions and thereby create fruitful grounds for review.