In recent decades, digital transformation has fundamentally altered how humans interact, how companies conduct business, and how governments work. More recently, data-driven applications and algorithmic processes, from automated data processing to automated decision-making, have created unprecedented opportunities for citizens, companies, and governments around the world. These applications have great potential to increase innovation and productivity, and to further the welfare of individuals and societies.
However, such data-driven applications and algorithmic processes also present potential risks. Despite the best intentions, they have the capacity to cause unintentional harm and may affect human rights, individual autonomy, competitive market order, financial stability, democratic processes, and national sovereignty. In addition, they can increase inequality and shift control away from humans to algorithms. The deep transformation of our societies triggered by these applications has the potential to undermine trust between citizens, companies, and governments.
Thus, building trust in digital infrastructure and strengthening the responsibility of individuals and organisations will be the foundation for societal innovation in the next decade of digital transformation.
To this end, the SDI works on concrete projects that put ethical standards into practice in the digital age.
The Digital Trust Label is the Swiss Digital Initiative’s first project.
In a trend map developed for SDI, the independent think tank W.I.R.E. identified strategic areas of action where decision-makers can foster trust and responsibility. In the next decade, we will see greater individual empowerment, increased convenience, new communities, and higher security, but we must also prepare for opaque data-driven systems, algorithmic discrimination, loss of control, and the erosion of privacy. Based on these findings, SDI aims to explore new projects and measures in the future.