What to consider when using AI in the public sector?

Nicolas Zahn • December 2021

The first workshop in our event series on AI & Ethics took place in Bern in collaboration with AlgorithmWatch Switzerland. Together with participants from various backgrounds, we explored how AI could be applied in the public sector, where and in what forms such applications are already in use, and what questions should be asked when considering the use of artificial intelligence in the public sector.

What are the potential issues with AI in the public sector?

Before the group work began, Dr. Anna Mätzener introduced AlgorithmWatch and the terminology for the remainder of the workshop, in particular the concept of automated decision-making, or ADM. Many bureaucratic activities come down to making decisions: should a permit be granted, should a request be answered, has a limit been exceeded, etc. Making these decisions involves the collection and analysis of data, so it is no wonder that applications of artificial intelligence in the public sector are mostly discussed in the form of ADM systems. While such use can increase efficiency, lower costs, and improve the reliability of decision-making, AlgorithmWatch points to several challenges:

  • Such systems are being introduced in ever more fields with great speed and little debate
  • The systems themselves, as well as the processes surrounding them, are often opaque
  • The potential effects on humans and their fundamental rights are not clearly understood
  • The potential effects on democratic societies are not clearly understood

This is why AlgorithmWatch Switzerland developed a checklist for the public sector that aims to raise important questions in the procurement process for ADM systems.

Systems already in use

The participants split into three groups to discuss one use case each under the guidance of AlgorithmWatch Switzerland. The first use case concerned the education system and revolved around a system that assigns children in a district to kindergartens and primary schools. The second use case involved a system used in the judicial system that tries to evaluate the risk of recidivism for offenders. The final use case showed a system in use in the field of social security that tries to identify potential welfare fraud.

All three systems are in live use in various European countries, and some of them have already led to public outrage due to issues such as opaque communication by authorities about their use or bias in data leading to discriminatory decisions.

No easy answers

The participants discussed the checklist and each use case intensely in their groups, and it quickly became apparent that while serious issues have been identified with each system, there are no easy, one-size-fits-all answers. For one thing, the ethical problems observed, such as biased decision-making, are not necessarily linked to using an ADM system but can also be an issue in “traditional” bureaucracies. In fact, in certain cases introducing ADM might even expose biases that were previously ignored or less visible.

Nevertheless, it is clear that particularly for AI applications in the public sector, questions need to be raised and answered, starting with introducing more transparency about which systems are currently used where and for what purposes. A registry of ADM systems and open access for researchers can yield important insights that help address some open questions. Given the potential benefits of ADM systems, participants agreed that assuming they will not become a relevant factor is not an option. This is why smart regulation and close monitoring of European and global developments are needed.

Concluding thoughts

The Swiss Digital Initiative agrees that questions of digital ethics and digital trust are of particular importance in the public sector (see also our recently released whitepaper). The issue isn’t necessarily that there aren’t any rules or guidance. But just as in the private sector, the practical implementation of those principles poses a significant challenge. What might sound logical and feasible on paper all too often fails when confronted with messy reality. We faced this challenge as well in developing the Digital Trust Label. By bringing in various stakeholders from academia, the public and private sectors, and civil society, we strive for a balance between practical feasibility and credible certification of the trustworthiness of a digital service. You can find more information on this project here: https://www.digitaltrust-label.swiss