Artificial Intelligence

Automated decision-making frees up time and resources, two commodities that many government departments lack. The increased availability of AI decision-making tools allows government decisions to be delegated to algorithms, particularly in the resource-intensive areas of local government work that directly impact individual citizens: identifying children in need of additional care, rating schools' performance, selecting food outlets for a health and safety inspection, calculating fire risks and predicting crime.

Algorithms are useful. They save time, they save money and they can provide better outcomes. But the uptake of automated decision-making is tricky for governments, which are under even more pressure than companies to maintain accountability and transparency, and to reassure citizens that the decisions being made about them are fair. Things can sometimes go wrong: a study last year, for example, reported that an algorithm widely used in the US to predict criminal re-offending rates exhibited unintentional racial bias.

Even when they don't go wrong, automated decision-making tools can be a headache for local government: How can the decisions reached by such tools be explained to citizens? Do citizens trust AI to make good decisions? And do the government officials commissioning and deploying these tools understand them well enough to decide whether the system is worth investing in?

The challenge for governments is therefore to harness the enormous potential of these new technologies without alienating the people they serve. Last year saw the start of a wave of innovative legislative proposals in the US designed to enhance accountability in these technologies, a first sign that governments are starting to engage with this issue.


Avoiding a Black-Box Government

New York City's local government has been at the forefront of this algorithmic accountability movement. Last year, the city council introduced a new law that requires algorithms used in council decision-making to be properly assessed, so that an explanation of how they reach their decisions can be produced. In some cases, technical information such as source code may also be released to the public.

The city council representative who proposed the law, James Vacca, has said the legislation was introduced “not to prevent city agencies from taking advantage of cutting-edge tools, but to ensure that when they do, they remain accountable to the public”, a motto to live by for any modern administration trying to innovate while retaining the public's trust. New York Mayor de Blasio established a task force to investigate the use of AI in the city's administration, which is expected to publish its first findings at the end of this year.

Other US governments have followed. Earlier this year, Washington State proposed a bill to regulate the use of AI in government and to increase transparency in automated decision-making. The bill envisages that government departments using AI tools must publish algorithmic accountability reports, to give the public an understanding of how decisions are made.


Algorithmic Accountability in the EU

The European Union is also grappling with these issues. The EU is trying to position itself as a hub for AI investment and development, and as part of this has been backing efforts to develop an ethical and legal framework for the use of AI in society.

In April, the European Commission published its Ethics Guidelines for Trustworthy AI, which set out key requirements, centered on fairness and explicability, that AI systems should meet in order to be deemed trustworthy. The underlying principles are ethical rather than technical, and don't give much concrete guidance on how they can be achieved in practice. The Guidelines are in a pilot phase, and the EU Commission's High-Level Expert Group on AI plans to review them in early 2020. Based on this review, the Commission will propose any next steps. The requirements currently have no legal force.

The Council of Europe, a European human rights organisation distinct from the EU's structure, has also been assessing the impact of algorithms on human rights. It has put together an expert committee to investigate problems and solutions; transparency and accountability are key principles in the Council's approach to regulating algorithms, though it has not yet suggested concrete examples of how these can be ensured.

The GDPR, a privacy law that came into force in all EU countries last year, introduced new requirements for algorithmic accountability. Under the GDPR, when automated decisions about individuals are made without a human involved in the outcome, the individuals may need to be provided with “meaningful information about the logic involved”. This legal requirement has generated a lot of debate, both technical and legal. Explaining how an AI system reached a decision in an easily intelligible way is currently a hot topic for academic researchers and legal practitioners, yet it remains unclear when, and indeed if, a solution will become available.
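To make the idea of “meaningful information about the logic involved” more concrete, the sketch below shows one way a simple decision score could be turned into a plain-language explanation for the person affected. It is purely illustrative: the feature names and data are invented, and a deliberately interpretable logistic regression stands in for whatever system a department might actually deploy.

```python
# A minimal, hypothetical sketch of "meaningful information about the logic
# involved" for a simple automated decision. The features and data are
# invented for illustration and do not describe any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: two facts about a premises and a yes/no outcome.
feature_names = ["number_of_prior_complaints", "months_since_last_inspection"]
X = np.array([[0, 24], [1, 6], [3, 2], [0, 36], [2, 12], [4, 1]])
y = np.array([1, 0, 0, 1, 0, 0])  # 1 = flag premises for inspection

model = LogisticRegression().fit(X, y)

def explain(case):
    """Return a plain-language summary of how each feature affected the score."""
    probability = model.predict_proba([case])[0, 1]
    lines = [f"Probability of being flagged for inspection: {probability:.0%}"]
    for name, value, weight in zip(feature_names, case, model.coef_[0]):
        contribution = weight * value
        lines.append(f"- {name} = {value}: contribution {contribution:+.2f} to the score")
    return "\n".join(lines)

print(explain([1, 18]))
```

Even a toy example like this highlights the gap the debate is about: a per-feature breakdown is easy to produce for a linear model, but far harder to generate, and to phrase intelligibly, for the more complex systems governments are actually buying.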

While national and international organisations in Europe are lining up to emphasise the importance of accountability and transparency in algorithms, no consensus has yet emerged on how this can be achieved in practice. Users and developers of AI in the EU will be keeping a close eye on the other side of the pond, to see how city and state governments in the US explain these new decision-making tools to their citizens.

Toby Bond is a Senior Associate in Bird & Bird's Intellectual Property Group, based in London. Toby advises on contentious intellectual property matters involving complex technologies. Clara Clark Nevola is a Trainee Associate at Bird & Bird.