The Tax Authorities used controversial software to detect benefits fraud more often than previously reported. State Secretary De Vries writes in a letter to the House of Representatives that she has reported this to the Dutch Data Protection Authority.
From April 2013, the Tax Authorities used a so-called risk classification model. On the basis of various indicators, a self-learning algorithm determined whose benefits applications would be checked manually. The existence of this model came to light in the childcare benefits affair, and the model turned out to have been used for housing benefit applications as well.
The indicators included someone's nationality, the composition of their family and how much they earn. Applicants with a non-Dutch nationality had a higher chance of having their file manually assessed. The Dutch Data Protection Authority previously ruled that this constituted discrimination. The model has not been used since July 2020.
Now it appears that the risk scores generated by the computer program were also used by the team within the Benefits department that assessed signals of abuse and determined whether fraud teams should investigate further. In December, the House was told that the model had not been used for that purpose, the Secretary of State acknowledges.
“That the risk score of the model has been used more widely is worrying and that is why the Dutch Data Protection Authority has also been informed about this,” writes De Vries.
She is investigating exactly how widely the risk scores were used, with whom they were shared and what consequences this has had for citizens. The Secretary of State cannot rule out that people have "experienced an increased risk of disproportionate disadvantage".