Over time, norms and values change. When the internet was first developed, the idea was that all posted information should be preserved so that nothing would be lost. At the time, that was an important value. Nowadays, though, we want to give internet users the right to be forgotten, but the system is not geared up to do this. How do you deal with this discrepancy? These kinds of questions are tackled by Ibo van de Poel in his ERC project, in which he and his team are working on a philosophical theory of value change. Ibo van de Poel: "The aim is to create a theory of value change that helps designers develop technical systems that can deal with changing values."
The Dutch National AI course has released two short interviews with DDfV scientific director Jeroen van den Hoven: one in which he elaborates on the role of ethics in the development of AI, and another on what this means for Europe's competitive advantage. The Delft Design for Values Institute is one of the knowledge partners for this course.
A short video was published yesterday in which Prof. Dr. Jeroen van den Hoven speaks about design for values, responsible innovation, and governments as 'launching customers' for realizing a good information society. He believes there are many opportunities in this area for both the Netherlands and Europe.
Self-learning algorithms determine internet search results, choose which messages you see from friends on Facebook, and may eventually even decide which drugs you are prescribed and your punishment if you commit a crime. So, should we be afraid? Two researchers from Delft who work in this field answer the question: "The problem with this is that demanding full transparency will have an adverse effect on the self-learning capacity of the algorithm. This is something that needs to be weighed up very carefully indeed."