Automated decision making is ubiquitous in today's world, even if it's not immediately obvious.

Like it or not, computers make decisions for us (and about us) all the time. These can be relatively trivial: your washing machine decides when to start its spin cycle based on feedback from its sensors. Or the decisions can be significant: your mortgage application is accepted or rejected by a computer program today, rather than by your bank manager, as would have been the case a few decades ago.

But there are some areas where people are worried about the risks of automation. Self-driving cars are a great example: companies developing the technology know that safety is the number one priority. This is because people will be understandably nervous about handing over (or giving up) control to a machine, especially when their own lives, or those of their family or other road users, are at stake. 

Strangely, this creates a double standard. Automated technology won't be accepted even if it's safer than a human doing the same job: it has to be a LOT safer. A car crash caused by human error, while potentially tragic, is nothing new. Meanwhile, a car crash caused by a self-driving car's AI will be thoroughly investigated and probably make the news headlines! Even if you can statistically prove that self-driving cars are safer than human drivers overall, they won't be accepted until users genuinely feel safer too.
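To make the "statistically prove" point concrete, here is a minimal sketch of the kind of comparison a regulator might run: a two-proportion z-test on crashes per mile driven. The counts below are entirely hypothetical placeholders, not real crash statistics; the point is that even a clearly significant result like this one only settles the statistics, not how safe people feel.

```python
import math

# Purely hypothetical counts, for illustration only -- not real data.
human_crashes, human_miles = 1_500, 1_000_000_000
av_crashes, av_miles = 900, 1_000_000_000

p_human = human_crashes / human_miles
p_av = av_crashes / av_miles

# Pooled two-proportion z-test: is the self-driving crash rate
# significantly lower than the human crash rate?
p_pool = (human_crashes + av_crashes) / (human_miles + av_miles)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / human_miles + 1 / av_miles))
z = (p_human - p_av) / se

print(f"human rate: {p_human:.2e} crashes/mile, AV rate: {p_av:.2e} crashes/mile")
print(f"z = {z:.2f}")  # z well above 1.96 means significant at the 5% level
```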

Automated decision making also raises a host of legal questions. Who is ultimately responsible for a decision made by a computer? The company behind the decision model? The company using it? The original programmers? It's really not clear how this will work out.

All of this emphasises the importance of effective decision making, particularly when there are legal, financial and, above all, health and safety consequences. And the decisions don't just have to be good - they have to be fantastic, and they have to be shown to be fantastic.

Luckily this is definitely possible - aviation is a great example of a field where automation technology is widespread, widely accepted and demonstrably safe. Even after high-profile automation failures such as the 737 MAX crashes, there has been no push to return to fully manual control systems - and rightly so, given their far worse safety record!

How do you feel about AIs taking over safety-critical systems from you - would you trust a self-driving car? What reassurance would you need in order for you to accept technology like this?

By Ian Maddox - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=67227170