Black-box welfare fraud detection system violates human rights, Dutch court rules

An algorithmic risk-scoring system deployed by the Dutch state to predict the likelihood that social security claimants will commit benefits or tax fraud violates human rights law, a court in the Netherlands has ruled.

The Dutch Risk Indication System (SyRI) legislation uses an undisclosed algorithmic risk model to profile citizens, and has been targeted exclusively at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a "welfare surveillance state."

Several civil society organizations in the Netherlands, along with two citizens, brought legal action against SyRI, seeking to block its use. The court today ordered an immediate halt to the use of the system.

The ruling is being hailed as a landmark judgment by human rights campaigners, with the court basing its reasoning on European human rights law, specifically the right to privacy set out in Article 8 of the European Convention on Human Rights (ECHR), rather than on a dedicated provision of the EU's data protection framework (GDPR) that relates to automated processing.

Article 22 of the GDPR includes a right for individuals not to be subject to solely automated individual decision-making where such decisions produce significant legal effects on them. But there may be some fuzziness over whether this applies if a human is somewhere in the loop, such as reviewing a decision on objection.

In this case, the court has avoided such questions by finding that SyRI directly interferes with the rights established in the ECHR.

Specifically, the court found that the SyRI legislation fails the balancing test in Article 8 of the ECHR, which requires that any societal interest be weighed against the intrusion into individuals' private lives, with a fair and reasonable balance being required.

In its current form, the automated risk assessment system failed this test, in the court's view.

Legal experts suggest the decision sets some clear limits on how the public sector in the United Kingdom can make use of AI tools, with the court taking particular issue with the lack of transparency about how the algorithmic risk-scoring system worked.

In a press release about the ruling (translated into English using Google Translate), the court writes that the use of SyRI is "insufficiently clear and controllable." According to Human Rights Watch, the Dutch government refused during the hearing to reveal "significant information" about how SyRI uses personal data to draw conclusions about possible fraud.

The court clearly took a dim view of the state attempting to evade scrutiny of human rights risk by pointing to an algorithmic "black box" and shrugging.

The UN special rapporteur on extreme poverty and human rights, Philip Alston, who intervened in the case by providing the court with a human rights analysis, welcomed the ruling, describing it as "a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose to human rights."

"This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds," he added in a press release.

In 2018, Alston warned that the UK government's rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.

Therefore, the decision of the Dutch court could have some short-term implications for UK policy in this area.

The ruling does not close the door on states' use of automated profiling systems, but it does make clear that, in Europe, human rights law must be central to the design and implementation of such risk-scoring tools.

It also comes at a key moment, as EU policymakers are working on a framework to regulate artificial intelligence, with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.

It remains to be seen whether the Commission will push for pan-European limits on specific public-sector uses of AI, such as for social security assessments. A recently leaked draft of a white paper on AI regulation suggests it is leaning toward risk assessments and a patchwork of risk-based rules.


