Accountable AI: Can the Law Reduce Bias and Increase Transparency?
Governments are increasingly making use of AI tools to help with decision-making, but they face a key challenge: ensuring fairness and maintaining public trust.
May 20, 2019 at 07:00 AM
Automated decision-making frees up time and resources, two commodities that many government departments lack. The increased availability of AI decision-making tools allows government decisions to be delegated to algorithms, particularly in the resource-intensive areas of local government work that directly impact individual citizens: identifying children in need of additional care, rating schools' performance, selecting food outlets for a health and safety inspection, calculating fire risks and predicting crime.
Algorithms are useful. They save time, they save money and they can provide better outcomes. But the uptake of automated decision-making is tricky for governments, which are under even more pressure than companies to maintain accountability and transparency and to ensure that citizens trust the decisions being made about them. Things can sometimes go wrong: a study last year, for example, reported that an algorithm widely used in the US to predict criminal reoffending rates exhibited unintentional racial bias.
Even when they don't go wrong, automated decision-making tools can be a headache for local government: How can the decisions reached by such tools be explained to citizens? Do citizens trust AI to make good decisions? And do the government officials commissioning and deploying these tools understand them well enough to decide whether a system is worth investing in?
The challenge for governments is therefore to harness the enormous potential of these new technologies without alienating the people they serve. Last year saw the start of a wave of innovative legislative proposals in the US designed to enhance accountability in these technologies, a first sign that governments are engaging with the issue.
Avoiding a Black-Box Government
New York City's local government has been at the forefront of this algorithmic accountability movement. Last year, the city council passed a law requiring that algorithms used in council decision-making be assessed so that an explanation of their decision-making processes can be produced. In some cases, technical information such as source code may also be released to the public.
The city council representative who proposed the law, James Vacca, has said the legislation was introduced “not to prevent city agencies from taking advantage of cutting-edge tools, but to ensure that when they do, they remain accountable to the public”: a motto to live by for any modern administration trying to innovate while retaining the public's trust. New York Mayor Bill de Blasio has established a task force to investigate the use of AI in the city's administration, which is expected to publish its first findings at the end of this year.
Other US governments have followed suit. Earlier this year, Washington State proposed a bill to regulate the use of AI in government and to increase transparency in automated decision-making. The bill envisages that government departments using AI tools must publish algorithmic accountability reports to give the public an understanding of how decisions are made.
Algorithmic Accountability in the EU
The European Union is also grappling with these issues. The EU is trying to position itself as a hub for AI investment and development, and as part of this has been backing efforts to develop an ethical and legal framework for the use of AI in society.
In April, the European Commission published its Ethics Guidelines for Trustworthy AI, which set out key requirements, centered on fairness and explicability, that AI systems should meet to be deemed trustworthy. The underlying principles are ethical rather than technical and offer little concrete guidance on how they can be achieved in practice. The Guidelines are in a pilot phase, and the commission's High-Level Expert Group on AI plans to review them in early 2020. The commission will then evaluate the outcome and propose any next steps. For now, the requirements have no legal force.
The Council of Europe, a European human rights organisation distinct from the EU's structure, has also been assessing the impact of algorithms on human rights. It has put together an expert committee to investigate problems and solutions; transparency and accountability are key principles in the council's approach to regulating algorithms, though again no concrete means of ensuring them have yet been suggested.
The GDPR, a privacy law that came into force across the EU last year, introduced new requirements for algorithmic accountability. Under the GDPR, when automated decisions about individuals are made without a human involved in the outcome, the individuals may need to be provided with “meaningful information about the logic involved”. This requirement has generated considerable debate, both technical and legal. Explaining how an AI system reached a decision in an easily intelligible way is currently a hot topic for academic researchers and legal practitioners, yet it remains unclear when, and indeed whether, a solution will become available.
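To illustrate what “meaningful information about the logic involved” might look like in practice, here is a minimal sketch in Python. The scoring model, factor names and threshold are hypothetical, invented purely for illustration; this is not a method drawn from the GDPR or from any of the laws discussed above.

```python
# Hypothetical example: a simple linear risk-scoring model whose every
# decision can be broken down into per-factor contributions. All weights,
# factor names and the threshold are invented for illustration.

WEIGHTS = {
    "previous_inspections_failed": 2.0,
    "complaints_last_year": 1.5,
    "years_since_last_inspection": 0.8,
}
THRESHOLD = 5.0  # hypothetical cutoff for scheduling an inspection


def explain_decision(record: dict) -> str:
    """Return a plain-language account of how a decision was reached."""
    contributions = {
        factor: weight * record.get(factor, 0)
        for factor, weight in WEIGHTS.items()
    }
    score = sum(contributions.values())
    decision = "selected for inspection" if score >= THRESHOLD else "not selected"
    lines = [f"Decision: {decision} (score {score:.1f}, threshold {THRESHOLD})"]
    # Report each factor's contribution to the score, largest first.
    for factor, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        lines.append(f"  {factor}: contributed {value:.1f}")
    return "\n".join(lines)


if __name__ == "__main__":
    print(explain_decision({
        "previous_inspections_failed": 2,
        "complaints_last_year": 1,
        "years_since_last_inspection": 0.5,
    }))
```

For a transparent linear model like this, the explanation is exact. Producing an equally faithful account for the complex models used in practice is precisely the open problem that researchers and practitioners are still working on.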
While national and international organisations in Europe are lining up to emphasize the importance of accountability and transparency in algorithms, no consensus has yet emerged on how this can be achieved in practice. Users and developers of AI in the EU will be keeping a close eye on the other side of the pond to see how city and state governments in the US explain these new decision-making tools to their citizens.
Toby Bond is a Senior Associate in Bird & Bird's Intellectual Property Group, based in London, where he advises on contentious intellectual property matters involving complex technologies. Clara Clark Nevola is a Trainee Associate at Bird & Bird.