Algorithms are as biased as the data they feed on. And all data are biased. Even "official" statistics cannot be assumed to stand for objective, eternal "facts." The figures that governments publish represent society as it is now, through the lens of what those assembling the data consider to be relevant and important. The categories and classifications used to make sense of the data are not neutral. Just as we measure what we see, so we tend to see only what we measure.
As algorithmic decision-making spreads to a wider range of policymaking areas, it is shedding a harsh light on the social biases that once lurked in the shadows of the data we collect. By taking existing structures and processes to their logical extremes, artificial intelligence is forcing us to confront the kind of society we have created.
The problem is not just that computers are designed to think like corporations, as my University of Cambridge colleague Jonnie Penn has argued. It is also that computers think like economists. An AI, after all, is as faithful a version of homo economicus as one can imagine. It is a rationally calculating, logically consistent, ends-oriented agent capable of achieving its desired outcomes with finite computational resources. When it comes to "maximizing utility," it is far more effective than any human.