Accountable AI: Can the Law Reduce Bias and Increase Transparency?
Governments are increasingly making use of AI tools to help with decision making, but they face a key challenge: ensuring fairness and maintaining public trust.
May 20, 2019 at 07:00 AM
Automated decision-making frees up time and resources, two commodities that many government departments lack. The increased availability of AI decision-making tools allows government decisions to be delegated to algorithms, particularly in the resource-intensive areas of local government work that directly impact individual citizens: identifying children in need of additional care, rating schools' performance, selecting food outlets for a health and safety inspection, calculating fire risks and predicting crime.
Algorithms are useful. They save time, they save money and they can provide better outcomes. But the uptake of automated decision-making is tricky for governments, which are under even more pressure than companies to maintain accountability and transparency and to ensure that citizens trust the decisions being made about them. Things can sometimes go wrong: a study last year, for example, reported that an algorithm widely used in the US to predict criminal reoffending rates exhibited unintentional racial bias.
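What “racial bias” meant in that context is itself partly technical: the study compared the algorithm's error rates across demographic groups. As a loose illustration only, the short Python sketch below computes one such metric, the false positive rate, for two groups; the data are invented for this example and do not come from the study or any real tool.

```python
# A minimal sketch of one fairness check discussed in the reoffending
# debate: comparing false positive rates between demographic groups.
# All data here is hypothetical, purely to illustrate the metric.

def false_positive_rate(predictions, outcomes):
    """Share of people flagged as high risk who in fact did not re-offend."""
    false_positives = sum(1 for p, o in zip(predictions, outcomes) if p and not o)
    actual_negatives = sum(1 for o in outcomes if not o)
    return false_positives / actual_negatives if actual_negatives else 0.0

# Hypothetical predictions (True = flagged "high risk") and observed
# outcomes (True = re-offended) for two demographic groups.
group_a = ([True, True, False, True, False], [True, False, False, False, False])
group_b = ([True, False, False, False, True], [True, False, False, False, True])

print(f"Group A false positive rate: {false_positive_rate(*group_a):.2f}")
print(f"Group B false positive rate: {false_positive_rate(*group_b):.2f}")
# A persistent gap between these rates is one of the disparities the
# study reported, and one statistic an accountability review could ask for.
```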
Even when they don't go wrong, automated decision-making tools can be a headache for local government: How can the decisions reached by such tools be explained to citizens? Do citizens trust AI to make good decisions? And do the government officials commissioning and deploying these tools understand them well enough to decide whether a system is worth investing in?
The challenge for governments is therefore to harness the enormous potential of these new technologies without alienating the people they serve. Last year saw the start of a wave of innovative legislative proposals in the US designed to enhance accountability in these technologies, a first sign that governments are engaging with the issue.
Avoiding a Black-Box Government
New York City's local government has been at the forefront of this algorithmic accountability movement. Last year, the city council introduced a new law that requires algorithms used in council decision-making to be assessed so that an explanation of their decision-making processes can be produced. In some cases, technical information such as source code may also be released to the public.
The city council representative who proposed the law, James Vacca, has said the legislation was introduced “not to prevent city agencies from taking advantage of cutting-edge tools, but to ensure that when they do, they remain accountable to the public”—a motto to live by for any modern administration trying to innovate while retaining the public's trust. New York Mayor Bill de Blasio has established a task force to investigate the use of AI in the city's administration, which is expected to publish its first findings at the end of this year.
Other US governments have followed suit. Earlier this year Washington State proposed a bill to regulate the use of AI in government and to increase transparency in automated decision-making. The bill envisages that government departments using AI tools must publish algorithmic accountability reports to give the public an understanding of how decisions are made.
Algorithmic Accountability in the EU
The European Union is also grappling with these issues. The EU is trying to position itself as a hub for AI investment and development, and as part of this has been backing efforts to develop an ethical and legal framework for the use of AI in society.
In April, the European Commission published its Ethics Guidelines for Trustworthy AI, which set out key requirements, centered on fairness and explicability, that AI systems should meet in order to be deemed trustworthy. The underlying principles are ethical rather than technical and give little concrete guidance on how they can be achieved in practice. The Guidelines are in a pilot phase, and the EU Commission's High-Level Expert Group on AI plans to review them in early 2020. Based on that review, the commission will propose any next steps. The requirements currently have no legal force.
The Council of Europe, a European human rights organisation distinct from the EU's structures, has also been assessing the impact of algorithms on human rights. It has put together an expert committee to investigate problems and solutions; transparency and accountability are key principles in the council's approach to regulating algorithms, though again no concrete examples of how these can be ensured have yet been suggested.
The GDPR, a privacy law that came into force in all EU countries last year, introduced new requirements for algorithmic accountability. Under the GDPR, when automated decisions about individuals are made without a human involved in the outcome, the individuals may need to be provided with “meaningful information about the logic involved”. This requirement has generated a lot of debate, both technical and legal. Explaining how an AI system reached a decision in an easily intelligible way is currently a hot topic for academic researchers and legal practitioners, yet it remains unclear when, and indeed whether, a solution will become available.
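One strand of that research favours inherently interpretable models, whose outputs can be traced back to explicit, weighted input factors. The Python sketch below is a hypothetical illustration of that idea, loosely modeled on the fire-risk scoring mentioned earlier; the factors, weights and threshold are all invented for this example and do not reflect any real government system.

```python
# A minimal sketch of an inherently interpretable risk-scoring model.
# Every factor, weight and threshold here is hypothetical, chosen only
# to show how a decision can be traced back to its inputs.

WEIGHTS = {
    "past_violations": 2.0,    # each prior safety violation raises the score
    "building_age": 0.05,      # per year of building age
    "recent_inspection": -1.5, # a recent inspection lowers the score
}
THRESHOLD = 4.0  # scores at or above this trigger an inspection

def decide_and_explain(building):
    """Return a decision plus the 'logic involved', factor by factor."""
    contributions = {name: WEIGHTS[name] * building[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "inspect" if score >= THRESHOLD else "no inspection"
    # Ranking factors by their contribution yields the kind of
    # per-decision explanation the GDPR debate is concerned with.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    reasons = [f"{name}: {value:+.2f}" for name, value in ranked]
    return decision, score, reasons

decision, score, reasons = decide_and_explain(
    {"past_violations": 3, "building_age": 40, "recent_inspection": 1}
)
print(f"Decision: {decision} (score {score:.2f})")
for reason in reasons:
    print(" -", reason)
```

Because each factor's contribution is explicit, the “logic involved” can be reported in plain terms; producing an equivalent account for an opaque model such as a deep neural network is far harder, which is precisely why the question remains open.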
While national and international organisations in Europe are lining up to emphasize the importance of accountability and transparency in algorithms, no consensus has yet emerged on how this can be achieved in practice. Users and developers of AI in the EU will be keeping a close eye on the other side of the pond, to see how city and state governments in the US explain these new decision-making tools to their citizens.
Toby Bond is a Senior Associate in Bird & Bird's Intellectual Property Group, based in London. Toby advises on contentious intellectual property matters involving complex technologies. Clara Clark Nevola is a Trainee Associate at Bird & Bird.