What strikes you about how the Dutch handle data?
Taylor: ‘We think very much from an organizational point of view. The assumption is often that if privacy and legal requirements are taken care of, the public interest is safeguarded. But there may still be opposition – if the public is tracked using cameras and phone data, putting up a sign isn’t enough. Having privacy officers is not enough either. So there are gaps in how data governance is organised.’
‘There are two levels in the Netherlands where public values are involved in data governance: at the top, by the board, and bottom-up. There is a hole in the middle. We try to investigate the level in between, the connecting area between the experiences of residents and what happens at the top of the board.’
You contributed to a white paper arguing that data governance mainly focuses on personal data, and that the public as such receives little protection.
‘That was based on an Urban Data Governance Clinic. We asked staff on urban technology projects questions about the technologies, how they were deployed, and possible resistance to them. We found out that there was no accountability structure for the interests of residents.’
And you research activism, for example?
‘As a way of signaling problems. If we can listen better to activism, we may be able to avoid problems. In the middle, between the top and the experiences of residents, the translation must be made. Can residents see what is happening there? Because a lot becomes invisible there.’
‘That’s how we also research technology. By looking at everyday experiences.’
Was that also reflected in the recent work on the corona crisis that you contributed to?
‘It is interesting. Things become elusive through technology. This certainly applies to pandemic technology. It occupies that space between government and residents. We see resistance to apps, monitoring and vaccines. People don’t necessarily oppose the technology, but question what kind of freedom it brings. This creates the risk that people will drop out because of things that are not done properly.’
A lot of European legislation is coming up in the digital field.
‘Yes. E-commerce legislation, the Digital Markets Act, the Data Governance Act, the Artificial Intelligence Act… The latter builds on the GDPR, but the GDPR is mainly about the data itself, not about how it is used with AI. That layer, the usage layer, is very important. It concerns, for example, facial recognition in public space.’
‘There will be three categories: banned AI, high-risk AI and other forms, divided based on risk, so organizations must assess that risk themselves. The GDPR gave clear test methods for data; hopefully they will also be used for AI. In any case, criteria are given that place the technologies in a certain category. Exploiting vulnerable people is not allowed, for example. And assigning social scores is not allowed either.’
Like China’s social credit system?
‘A lot of the Chinese methods that we are concerned about are already happening in the EU and the US; we just associate them with China. Credit ratings have been around forever, but they shouldn’t be used here to distinguish good citizens from bad ones, for example by determining who gets to live where. So the EU should use its position to avoid the situation we see with credit scores. I’m not saying the situation in the EU is perfect, but the exchange is getting better, which helps avoid the problems we see in the rest of the world.’
Last week the European Parliament expressed concerns about the use of biometric data and called for a ban on automated facial recognition in public places. How urgent is the problem?
‘The Netherlands is very innovation-friendly – we are constantly pushing the boundaries of what is possible. That is why the Netherlands is becoming an arena of friction between the law and the public and private sectors that want to move forward. The EU regulates, but before that becomes national law, there must be debate in all countries. It will be interesting. We try to contribute by looking at what technology should and should not do.’