Hi there EInsighters! Our focus this month is on the City of Rotterdam’s machine learning algorithm used to calculate risk scores for those on welfare.
As always, I’m your curator, Idil, and I’ll be taking you on a journey through one of the most talked-about tech scandals in the world, while also drawing on the EInsight of our selected EI Experts.
So let’s dive right in, shall we?
Most countries have a welfare regime to help their citizens pay for food, rent, and other necessities. What if we were to tell you that a city deployed a machine learning algorithm to determine how much aid someone should receive based on attributes such as age, gender, and marital status? Initially, that might not sound too bad. But what if we were to tell you there was a hidden agenda - that the algorithm was in fact used to determine who should be investigated for fraud? Cue fraud raids, interrogations, mental health issues resulting from distress, lost livelihoods, and discriminatory practices...
So could it have been avoided?
Well, I’ve asked just that to our EI Experts, and this is what they had to say...
Right off the bat, what are some issues that spring to mind when we refer to risk-scoring models, Flavio?
“The tool seems to be capable of identifying risk groups in the population according to several indicators, including social, economic, and cultural ones. One important – although often neglected – issue regarding indicators of this nature is that very often they implicitly connect to other attributes such as gender, age, ethnicity, etc. If these connections are not unveiled, social bias may find its way into decision procedures in ways that can be hard to uncover. From what has been reported about the tool considered here, this seems to have been the case in Rotterdam, leading us to a strong reminder that contextual and semantic analysis of data is always important when building intelligent systems, given that such systems are inevitably rooted in specific domain knowledge.
A second issue is the importance of differentiating between retrospective data analysis (which informs about the past and present, thus suggesting where to look in order to plan for the future) and prospective analysis (which tries to predict the future directly based on past observations, thus prescribing what is likely to occur in the future in order to prepare for it). Here, a powerful tool for retrospective analysis seems to have been built for the City of Rotterdam, which was later mistakenly used as a prescriptive tool. As a consequence of this mistake, indicators that could point to historical social issues and hint at public policies that could mitigate them may end up being used to generate self-fulfilling prophecies which perpetuate social problems.
Moreover, the tool seems to have derived credibility from its grounding in rigorous mathematical modelling and empirical data (“it is based on math, so it must be right”, “it is based on evidence, so it must be right” etc.). For tools like the one we are analysing here, these are necessary but not sufficient qualities; they must be complemented with attributes such as deep contextual knowledge and the humility to accept that past errors must be corrected and that the future is never fully predictable.”
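To make Flavio’s point about hidden proxies a little more concrete, here is a minimal, purely illustrative sketch in Python. The data and column names are invented for this example (they are not the actual Rotterdam model or its features); it simply shows how a seemingly neutral indicator can quietly track a protected attribute, and how a very simple audit can surface that link before any model is trained:

```python
# Illustrative sketch of the proxy problem: a "neutral" feature can carry
# information about a protected attribute even if that attribute is never
# given to the model. All data and column names below are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical protected attribute (never handed to the model).
gender = rng.integers(0, 2, n)
# A seemingly neutral indicator that happens to correlate with gender,
# e.g. weekly hours of paid work in a segregated labour market.
hours_worked = 30 + 8 * gender + rng.normal(0, 5, n)
# Another indicator with no link to gender, for contrast.
debt = rng.normal(0, 1, n)

df = pd.DataFrame({"gender": gender, "hours_worked": hours_worked, "debt": debt})

# Basic audit: how strongly does each "neutral" feature track the
# protected attribute? Large values flag potential proxies to investigate.
for col in ["hours_worked", "debt"]:
    corr = df[col].corr(df["gender"])
    print(f"{col}: correlation with gender = {corr:+.2f}")
```

A real audit would go much further (group-wise selection rates, per-group error rates, feature importance), but even a check this crude makes the kind of hidden connection Flavio warns about visible.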
It’s clear from Flavio’s statements that we need to take the outputs of these systems with a grain of salt and not accept things as they are without questioning them. Although machine learning algorithms may seem to offer an “objective” perspective, there are a multitude of different factors that should be taken into consideration when making decisions that ultimately affect people’s lives. After all, not everything is black and white.
Now that we know more about the technology behind it, let’s turn our attention to the different stakeholders involved in this case.
Anna narrowed down the three main stakeholders in this case to (1) Accenture (the actor that created the algorithm), (2) the City of Rotterdam (the actor that deployed the algorithm), and (3) citizens and consumers (the actors affected by the algorithm).
Looking at things from Accenture's perspective, Anna questioned why the company didn’t scrutinise the data it was provided with. “Through a set of simple questions and common sense, they could’ve come to the conclusion that the data provided was not enough to create a working algorithm, let alone one that is deemed fair!”
What kind of questions should they have asked?
Anna referred to ODI’s Data Ethics Canvas as a great starting point and highlighted the bare minimum that Accenture could’ve asked (a rough sketch of how a couple of these questions might be probed in code follows the list):
Are there any limitations in data sources that could influence the project's outcomes?
Is there bias in data collection, inclusion/exclusion, analysis?
Are there gaps or omissions in the data?
Who could be negatively/positively affected by this project?
Could the data be used to target, profile or prejudice people or unfairly restrict access?
How are limitations and risks communicated to people?
How could you reduce any limitations in your data sources?
How are you measuring, reporting and acting on potential negative impacts in your project?
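To ground a couple of those questions, here is a rough, hypothetical sketch in Python. The column names and data are invented for illustration (this is not Accenture’s or Rotterdam’s actual data); it shows how a team might start checking for gaps, omissions, and uneven flag rates before building anything on top of a dataset:

```python
# A rough sketch of how two of the canvas questions could be probed in code:
# "Are there gaps or omissions in the data?" and "Could the data be used to
# target or profile people unfairly?" All column names are made up.
import pandas as pd

def audit_welfare_data(df: pd.DataFrame, group_col: str, flag_col: str) -> None:
    """Print simple missingness, representation, and flag-rate checks."""
    # 1. Gaps / omissions: how much of each column is missing?
    print("Share of missing values per column:")
    print(df.isna().mean().round(3), end="\n\n")

    # 2. Representation: is any group barely present in the data?
    print("Group sizes:")
    print(df[group_col].value_counts(dropna=False), end="\n\n")

    # 3. Targeting risk: do flag rates differ sharply between groups?
    rates = df.groupby(group_col)[flag_col].mean()
    print("Flag rate per group:")
    print(rates.round(3))
    if rates.min() > 0:
        print(f"Max/min flag-rate ratio: {rates.max() / rates.min():.2f}")

# Example usage with a tiny made-up dataset.
example = pd.DataFrame({
    "language_barrier": [1, 1, 0, 0, 0, 1, 0, 0],
    "flagged_for_investigation": [1, 1, 0, 1, 0, 1, 0, 0],
})
audit_welfare_data(example, "language_barrier", "flagged_for_investigation")
```

Checks like these don’t answer the Canvas questions on their own, but they force the conversation Anna is describing to happen before deployment rather than after.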
Why do you think no one questioned the data, Anna?
“From the outside, it could be attributed to a poor company culture. A culture where you are being told what to do, where you are treated as just another piece in the machine and made to believe that your thoughts are not welcome. This is the classic treatment that technical teams have been receiving for a long time; they are just told what to do and their minds are underestimated. A transformation needs to happen here. Companies need to empower their technical teams as much as any other team, and all departments need to have their ethics thinking hats ready to be used. Of course, this doesn't mean that everybody should be an ethics expert - it just means that they are able to raise a flag when unfairness starts to show. Only then will we see fewer unfair cases like this one.”
What if we flip things a little and look at this from the City of Rotterdam’s perspective? “Because of the responsibility that public entities have towards citizens, they need to have a system to make sure that what they are buying is free of bias. All purchases need to pass a minimum standard and there needs to be regulation regarding that. Public organisations have strong regulations regarding compliance in most of their outsourced tasks. It is interesting how that does not seem to be the case here.”
So what if we were to approach the whole situation from the citizens’ and consumers’ perspectives - what can they do? “As citizens and consumers, we need to raise our voices when we see unfairness. These systems are being built as we speak and they are not perfect; they are created in a world with little to no regulation, and they affect us a great deal. It is our obligation to be active and speak up about abuses like the one in question. It is vital for our future to report such instances because only then can we find ways to improve the technology behind them.”
Talking about citizens and consumers…
As Tomas puts it, “The powerlessness with which various citizens had to endure wrong decisions is reminiscent of the way German-American philosopher Hannah Arendt describes bureaucracy:
“In a fully developed bureaucracy there is nobody left with whom one can argue, to whom one can present grievances, on whom the pressures of power can be exerted.”
Tomas, can you tell us a bit more about the rise in the number of governments worldwide opting for more technological solutions?
“The use of AI in government is spurred by the pursuit of innovation and renewal. This often proves to be a powerful driver for governments to adopt technological solutions, perhaps even more often than is really needed. The so-called “New Public Management” idea gave a push to this by bringing market solutions and terms like efficiency and innovation into the reality of government. Moreover, AI seems to be a promising reality that offers a lot of solutions to problems faced by everyday governance. The Tallinn Declaration on eGovernment points to the fact that digital transformation can strengthen the trust in governments that is necessary for policies to have effect: by increasing the transparency, responsiveness, reliability, and integrity of public governance.
But the innovative nature and many benefits of AI in government sometimes blind public policy to its complex reality. The integration of AI systems in governance is more complex than in the private sector: these systems are taxpayer-funded and call for even more scrutiny and oversight. Moreover, there are greater risks associated with their use in the public sector because of the potentially larger impact on society and the combination with the power of the State. Projects must go beyond simple cost and efficiency gains to satisfy a richer and more diverse set of stakeholders who may have conflicting agendas.”
Can you expand on that a bit further?
“Many challenges and risks are associated with implementing AI in public administration, constituting a darker side of AI. These challenges range from issues of data privacy and security to workforce replacement and ethical problems like the agency and fairness of AI. Yet we find that the use of AI systems is often given little support by the necessary accompanying measures, as was the case here. “Although regularly escaping scrutiny, the findings of AI systems often pass as ‘objective’ and ‘neutral’. It is by no means the bureaucrats’ fault to be tempted to embrace risk-based assessment. Assigning numeric value to any activity and hiding behind the machine-produced ‘evidence’ shields imperfect humans from the accusations of bias, misdemeanor and inefficiency.” This “automation bias” is something that can become part of the culture of an organisation. Therefore, a form of education is needed that puts the use of AI in perspective and makes the case for a clear policy on the use of AI in the public sector. In this case there was a system-first approach, as opposed to a human-centred approach.”
Let’s all take a moment to truly appreciate those words:
A human-centred approach.
Safe to say Abigail agrees with Tomas on the lack of a human-centred approach in this case - “This entire process amounted to racial profiling of individuals. Families were shattered, people were wrongfully accused of fraud, made to repay years' worth of benefits, placed on fraud blacklists, and some even committed suicide. Artificial intelligence used in this manner to make welfare decisions breaches human rights and worsens the over-policing, criminalisation, and exclusion of marginalised people and those in need of critical assistance”.
What other legal implications are there?
“There’s a clear violation of privacy and data protection regulations. According to the GDPR, individuals must be informed about the collection and use of their personal data. In this instance, in violation of the GDPR, personal data was gathered and evaluated for welfare fraud without the subjects' permission. Additionally, the blatant discrimination against persons based on their ethnicity, income level, language barriers, and other characteristics was a major issue. The system was falsely accusing innocent people of committing fraud, so there was also the issue of a lack of fairness.
Rule of law principles like the presumption of innocence and transparency are undermined by risk-profiling algorithms. It is crucial to make sure that such algorithms are created and put into use in a transparent and accountable manner, so that their compliance with fundamental administrative principles and the rule of law can be verified. Even though the deployment of algorithms can be beneficial in a variety of situations, it is critical that they are designed and put into practice in a way that respects fundamental rights, including non-discrimination, the right to a fair trial, equality under the law, and privacy.”
This went to court, didn’t it?
“Yes, and the court ruled that the use of AI by the government to identify welfare fraud was indeed unlawful, since it infringed on the right to respect for private and family life. This judgment serves as a precedent for other governments profiling people in a similar manner. However, although the State, not the private sector, was the main concern in this case, the private sector can still benefit from the lessons learned. Private businesses must ensure that they uphold people’s fundamental rights, or be willing to face the legal repercussions.”
I feel like our entire discussion throughout this month’s EInsight Issue is best summarised by this quote from Pepita Ceelie, a Rotterdam resident:
“They don’t know me, I’m not a number. I’m a human being.”
So what should your organisation do to avoid a similar tech scandal?
Always question the data
Take a human-centred approach when developing a technological tool
Adhere to the privacy & data protection regulations applicable in your region
Have you read our latest EQUATION Issue on “The Generative Revolution: Ethics in the New Wave of AI”? READ NOW!
A massive thank you to our incredible EI Experts for their contribution in the making of this month's issue: Dr. Flavio S. Correa da Silva, Anna Danés, Dr. Tomas Folens, and Abigail Ichoku.
That's all for now, EInsighters!
Liked this month's issue? Share it with a friend!
Have a tech scandal you want us to cover? Let me know!