The Hague in the 21st Century: Responsible Innovation for Sustainable Peace, International Rule of Law and Global Justice
How does The Hague remain one of the most important cities in the world in the areas of Peace, Law, Justice and Security in the 21st century? The Hague owes its international reputation to recognizing problems and solving them pragmatically in a period of high political tension, arms races and rapid technological change at the beginning of the 20th century. The metropolitan region of The Hague is also internationally regarded as the cradle of thinking about World Peace and the International Rule of Law. In order to play a similar role in the world in the 21st century, The Hague now faces the challenge of understanding the nature of humanity's problems and of offering solutions.
The Netherlands is an international leader in ethical thinking about technology. After the Cambridge Analytica scandal, it is clear that ethics and regulation are crucial for a decent digital society. But this requires investment in responsible innovation, argue DDFV scientific director Jeroen van den Hoven and Peter Paul Verbeek (professor of philosophy of technology at the University of Twente) in the Dutch newspaper Financieel Dagblad. (Article in Dutch.)
"Values as such never conflict, only the practices in which they are embedded do." This seemingly simple claim by DDFV researcher Annemiek van Boeijen is what stuck with me most after the engaging discussion we had at the DDFV Playground Meeting of 15 March. It is an important insight, and one that is key to what the Delft Design for Values Institute does.
Economic revolutions often bring profound social change, affecting everything from jobs to family size. With the digital revolution now in full swing, humanity must recommit to building more ethical machines, or face a future in which our technologies undermine basic values like human rights and civil liberties.
Artificial Intelligence (AI) is increasingly affecting our lives in ways both small and large. To ensure that systems uphold human values, design methods are needed that incorporate ethical principles and address societal concerns. In this article, I introduce the ART design principles (Accountability, Responsibility and Transparency) for the development of AI systems sensitive to human values.
An increasing number of researchers, practitioners and policy makers are realizing that much needs to be done to deal with bias in data and algorithms, and to promote transparency of AI models. Only in this way can the proper use of AI be ensured, so that it benefits people's lives and supports fundamental human rights.
Yesterday, Sophia, a robot, was declared a citizen of Saudi Arabia. [...] Sophia is mainly a piece of software and hardware, and can therefore be cloned without much effort. If there are many identical copies of Sophia, will all of them be Saudi citizens?
A friend just asked me to comment on the article "Reboot for the AI revolution" by Yuval Noah Harari in @NatureNews. A very interesting read, but I am not convinced by many of his points.
Big data worries me. And the idea that Artificial Intelligence (AI) cannot do without big data worries me most. Just this week, a quick and by no means statistically significant mini-survey during the World Summit AI in Amsterdam revealed that around 20% of the participants expect that more and bigger data is key to the further development of AI.