Michaela Messier
What once seemed like science fiction, or a Silicon Valley experiment, is now a real part of how governments operate. Artificial Intelligence (“AI”) is already reshaping the way public institutions make decisions that affect millions of people. From predictive policing to benefits eligibility, AI systems are now embedded in the public sector, where their impact on civil liberties, due process, and equality under the law is profound and often obscured. As these systems proliferate, public policy must catch up, not only to regulate private-sector innovation but to ensure that government uses of AI reflect democratic values and constitutional principles. In recent years, public agencies at the federal, state, and local levels have increasingly adopted algorithmic decision-making tools. These tools are used for everything from identifying children at risk of abuse, to optimizing traffic flow, to predicting the likelihood that someone will commit a future crime. In theory, these systems promise efficiency, consistency, and objectivity. In practice, however, they often entrench existing inequalities, operate without transparency, and offer few meaningful mechanisms for accountability.
For instance, the Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) algorithm used in criminal sentencing was found to exhibit significant racial bias. A 2016 investigation by ProPublica revealed that COMPAS falsely scored Black defendants as high risk nearly twice as often as white defendants with similar profiles.[1] Despite being used to inform life-altering judicial decisions, the algorithm’s internal workings are proprietary and shielded from scrutiny, raising due process concerns.[2] This evidence shows that, when flawed, these systems do not just cause inconvenience. They can cost people their freedom. Unlike private uses of AI, such as product recommendations or targeted ads, the stakes in government decision-making are uniquely high. Government action carries the weight of law. A person denied housing benefits by an AI tool doesn’t just lose out on a consumer good. They may lose the roof over their head. Constitutional protections, such as the right to due process and equal protection, carry distinct implications in the public sector that are not present in private, commercial settings.
The Supreme Court has long held that when the government deprives someone of a liberty or property interest, due process requires “the opportunity to be heard at a meaningful time and in a meaningful manner.”[3] When the decisionmaker is not required to explain its conclusion or provide access to the basis for that decision, meaningful review becomes impossible.[4] That concern is magnified when decisions stem from opaque and complex AI systems. Courts have shown increasing concern about governmental reliance on third-party algorithms without sufficient transparency. In State v. Loomis, the Wisconsin Supreme Court upheld the use of the COMPAS algorithm in sentencing, but the case sparked widespread legal debate about whether secret algorithms can meet constitutional standards.[5] Although the U.S. Supreme Court denied certiorari, the case remains a flashpoint in the discussion over due process and algorithmic accountability.
One of the foundational challenges in regulating AI is the “black box” problem. Many AI systems are so complex that even their developers struggle to explain how they reach certain conclusions. When deployed by the government, this complexity becomes a legal and ethical minefield. How can someone appeal a denial of services if they cannot understand the logic that led to the decision? And how can an ordinary citizen be expected to understand such logic when the developers themselves may not be able to? This has prompted calls for “algorithmic explainability” in public administration. While perfect transparency may not be feasible for every technical detail, policy should mandate that automated systems affecting rights and benefits come with clear, accessible explanations that allow affected individuals to understand and, more importantly, challenge decisions.
The U.S. Constitution does not recognize a “right to explanation” per se, but due process arguably requires a minimally intelligible account of how and why a decision was made.[6] Courts and legislatures should act accordingly, developing standards for procedural fairness that reflect the realities of automated governance. The European Union’s General Data Protection Regulation (“GDPR”) provides limited safeguards in this area.[7] In the American context, procedural due process provides the best doctrinal hook. Courts should expand the Mathews v. Eldridge balancing test to account for algorithmic opacity as a significant risk of erroneous deprivation.[8]
Another key reform is requiring Algorithmic Impact Assessments (“AIAs”), modeled after environmental or fiscal impact assessments. These would compel agencies to evaluate the risks and benefits of an AI system before implementation and regularly thereafter. An AIA could include an equity analysis, an audit of potential biases, and a public comment process. Some jurisdictions are already experimenting with this. For example, New York City enacted a local law establishing a task force to review the city’s use of automated decision systems.[9] Similarly, the federal Office of Management and Budget released draft guidance in 2023, finalized in 2024, urging agencies to incorporate AI risk management strategies and public input.[10] However, these efforts remain voluntary or fragmented. It’s time to move beyond voluntary frameworks. Congress should enact legislation mandating AIAs for any federal agency system that affects individual rights or welfare. States and municipalities should follow suit, creating uniform standards and independent oversight bodies.
Perhaps the most fundamental issue is who gets to decide how AI is used in the public realm. Many of today’s systems are procured from private vendors with little public input or debate. This is particularly troubling given the potentially transformative power of these tools. Democratic legitimacy requires that policies affecting people’s lives be shaped through inclusive and deliberative processes. At a minimum, agencies should be required to notify the public when adopting new AI systems, solicit community feedback, and disclose relevant documentation. Participatory governance is not only good policy, but also necessary to guard against abuse and error. Policies should empower affected communities, particularly marginalized populations who are often disproportionately harmed by automated systems. This includes funding for civil society groups to engage in oversight, legal aid for those harmed by algorithmic decisions, and community representation on AI review boards.[11]
AI in the public sector is not inherently harmful. When designed with care and deployed with oversight, it can improve service delivery, reduce human error, and allocate resources more efficiently. But these benefits will only be realized if public policy steps up to meet the challenge. A comprehensive regulatory framework tailored to the realities of government use, one that centers transparency, accountability, equity, and democratic participation, is necessary. The alternative, a future where inscrutable machines silently shape who gets what from the state, is not just unwise. It is fundamentally undemocratic.
[1] Julia Angwin et al., Machine Bias, ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[2] State v. Loomis, 881 N.W.2d 749, 761-62 (Wis. 2016).
[3] Mathews v. Eldridge, 424 U.S. 319, 333 (1976).
[4] Goldberg v. Kelly, 397 U.S. 254, 271 (1970).
[5] Loomis, 881 N.W.2d at 761.
[6] Margot E. Kaminski, The Right to Explanation, Explained, 34 Berkeley Tech. L.J. 189 (2019).
[7] Regulation (EU) 2016/679 (General Data Protection Regulation), art. 22, 2016 O.J. (L 119) 1.
[8] Mathews, 424 U.S. at 335.
[9] N.Y.C. Local Law No. 49 (2018).
[10] Office of Mgmt. & Budget, Exec. Office of the President, M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (Mar. 28, 2024).
[11] Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018).