Don't let machines make decisions

1979 had its fair share of firsts. The first female prime minister in the UK, the first McDonald’s Happy Meal, and the first death caused by a robot. Robert Williams was working at a Ford Motor Company plant when he was killed by an industrial robot, breaking Asimov’s first law of robotics for the first time ever. That same year, and completely unrelated to Williams’s death, a manager working at IBM wrote in a presentation, “A computer can never be held accountable. Therefore a computer must never make a management decision.” Eighteen years later IBM’s Deep Blue defeated the world chess champion, and in 2011 IBM’s Watson won the TV quiz show Jeopardy!. Perhaps IBM decided that machines making decisions wasn’t such a bad thing after all.

Why couldn’t a computer be held accountable for its decisions? By definition, being accountable carries the expectation of being able to justify actions or decisions. Surely we could expect a computer to explain its decision-making process better than any human could. Perhaps it is because, when we say accountable, what we really mean is punishable. Professor Charles Perrow wrote about how, when things go wrong, someone always gets blamed. Whatever the cause, we look for someone to punish. So we cannot hold computers accountable because we can’t punish them; we can’t make them suffer personal consequences if they make bad decisions. If a manager makes a bad decision we can withhold their bonus or fire them, so we expect the threat of punishment to motivate them to make good decisions. This belief from classical economic theory, that we are all motivated to maximise individual gain, informed that IBM manager’s perspective on accountability. But were they right to apply it to computers?

This theory can be tested with a thought experiment that applies equally to people and computers. Anyone who has read a critique of self-driving vehicles will have come across the trolley problem. It sets out a scenario in which the participant must choose between doing nothing, which results in the deaths of a group of people, or acting, which results in the death of one person. The experiment is meant to help us appreciate how difficult the ethics of our decision-making can be to understand and explain, and to consider how this might apply to computer decision-making in life-and-death situations.

So, which do you choose? Which option maximises your individual gain? If you choose to do nothing, more people die but you aren’t responsible. If you actively choose to kill one person in order to save the group, you are culpable for that death. We can blame you for it. Economic theory and the desire to avoid blame suggest that an individual would let the group be killed. But computers aren’t encumbered by these things, so should we expect computers to make better decisions than we would? Could we expect machines to be programmed to maximise gains for the majority, and so decide to kill one person to save the group, even knowing they won’t be held accountable, because computers can’t be?

If computers can’t be held accountable, can people ever really be? Whichever choice you make in the trolley problem, isn’t it the system that put you in that situation that is really to blame? The system that makes you choose between the interests of the individual and the group is the system of individualist, humanist, secular thinking that emerged from the Renaissance. This way of thinking replaced God with the individual as the central concern of everyday life, and gave individuals the power to think for themselves, make their own choices, and, yes, be held accountable for those choices.

Over the last six decades we have started to learn that this individualistic view has problems. It places the individual in conflict with the group, but even more problematic is that it fails to recognise the effect systems have on both. Everywhere we turn we are affected by the systems we are part of. From systems of thought to political, environmental and technological systems, they are unavoidable.

Another thought experiment: How would you reduce the 15 million general practice appointments that are booked each year but then not attended?

Would you blame an individual for missing an appointment with a doctor, contributing to the £216 million that missed appointments cost the NHS every year, and consider some penalty system to punish them? Would you still blame that individual if you knew they were a single parent on a low income, with caring responsibilities for an elderly relative, and suffering from social anxiety? To the first question we tend to answer yes: let’s blame the individual for their misdemeanour. But when we find out that they face barriers, having no one to look after their children or elderly relative, being unable to afford the taxi fare, and finding it a mental health challenge to leave their home, we start to realise that it isn’t the individual who is to blame, but that the appointments system is not set up to enable everyone to attend. If we want to reduce the cost of missed appointments, we should tackle the societal barriers that cause people to miss them rather than blaming the people.

But how do we hold systems accountable for the effects they have on individuals and groups? Can systems even be held accountable?