Expectations for the private sector in AI ethics

Nicolas Zahn • December 2021

The second workshop of our event series on AI & Ethics took place in Zürich, where we shifted our focus from the public sector to the private sector. What do we as potential customers and designers of privately developed and sold AI use cases expect from the economy when it comes to ethics?

Taking the pulse

This was the guiding question we explored in the workshop under the direction of Johan Rochel from ethix. As a precursor to the workshop, we invited participants to share their views on AI ethics in the private sector through a public form. While the results are not statistically significant, they are nevertheless interesting, as they revealed tensions that also surfaced in the discussions during the workshop.

First, respondents assigned high importance to ethics being considered in AI applications, while also stating that they do not think the private sector is sufficiently addressing the issue. Second, respondents claimed that they themselves try hard to consider ethics in their digital behaviour. However, they also admitted that they would still likely use a service that provides benefits even if it does not take ethics into account, or is outright unethical. On top of that, they stated that the term “ethics” might not be that useful when talking about the social impacts of private sector AI applications.

Urging proactive engagement and inclusion

Working from concrete examples towards a common understanding, the 25 participants of the workshop contributed their personal, direct experiences with private sector AI applications and the corresponding ethical considerations. An important insight was that AI ethics is not only needed to address science-fiction technologies but should already be part of the design of AI applications today. From AI systems in the sharing economy putting pressure on workers and increasing their risks, to opaque health diagnoses based on biased training data, to discriminatory lending decisions: complex questions are everywhere. Vulnerable groups and individuals cannot yet rely on adequate safeguards, as policy-making processes lag behind the pace of technological progress.

This challenge actually opens a window of opportunity for companies to engage sincerely with ethics now, particularly when it comes to AI. As creators of these services, companies are well positioned to discuss ethical considerations and to make clear to their customers and the public how those considerations are put into practice. Postponing these discussions is likely to lead to reactive regulation, given the mounting pressure to act against tech companies. That dynamic would reinforce a view of ethics already widespread in the private sector: ethics as compliance. But rather than being a “party-pooper”, digital ethics, integrated early on, could actually be an enabler of innovation rather than a stopper.

Towards a “wishlist”

Starting from the observation that customers feel the private sector is not doing enough to take ethical considerations into account in the development of AI services, participants tried to come up with a “wishlist”. What is needed from the state, the private sector and society?

Participants acknowledged that it is very easy to demand that “somebody” do more, but we also need greater digital literacy in the wider population. Otherwise, the relevant questions driving consumer behaviour might never even be asked, in turn lowering the private sector's incentives to engage with digital ethics.

The public sector, on the other hand, should focus on providing the framework conditions for open innovation and on supporting interdisciplinary research into the consequences of technologies, as is already done with technology assessments.

This leaves the following “wishes” for the private sector to achieve a meaningful debate on AI ethics:

  • Increase transparency: what AI frameworks are being used for what reasons and how are they designed and trained?
  • Adjust existing risk management approaches to the new challenges raised by new technologies, so as to extend the current level of protection to them
  • Identify vulnerable groups and risk groups
  • Embed ethics into the company culture from top to bottom, using empowering strategies
  • If clear operationalisation of ethical principles can be achieved, also adjust incentive structures within the company
  • Adjust the design process of digital services to be more inclusive and to address ethical considerations early on, over the complete life-cycle of a product or service rather than as a one-off compliance step

Concluding thoughts

Through the intense debate it became clear that we are merely scratching the surface of the complex field that is AI ethics. It also became clear that it is much easier to demand that something be done than to explain what should be done. Nevertheless, the workshop helps formulate demands from current and potential future customers towards the private sector. Given the tensions between economic realities and ethics, being unethical can be very lucrative in the short term, which is why relying on the private sector alone is unlikely to improve the current state. However, a proactive and sincere engagement of the private sector with digital ethics would significantly facilitate the overall discussion, and other stakeholder groups, particularly civil society, will need to be part of it. The discussion during the second event of our series showed once more the difficulty, but also the necessity, of putting theoretical principles into practice. Our Digital Trust Label is an example of such an instrument. You can find more information on this project here: https://www.digitaltrust-label.swiss