Can A Robot Sin? How Artificial Intelligence Is Challenging Christian Ethics
It used to be science fiction, often scarily so. HAL in Stanley Kubrick's 1968 film 2001: A Space Odyssey still perfectly expresses the fear that when our technology becomes smarter than we are, it won't necessarily like us. This unease reaches its peak in the Terminator films, in which Skynet takes over the world's weapons systems and tries to exterminate humanity, and in the Asimov-derived I, Robot, which features a robot rebellion.
Alongside these are more thoughtful explorations of the intersection between artificial and human intelligence, such as Spielberg's A.I. (2001), which features a robot boy programmed to have human emotions, and Her (2013), in which a lonely man develops a relationship with an intelligent operating system designed to meet his every need.
But science fiction and science fact are colliding. As artificial intelligence (AI) catches up with human intelligence and machines become more and more autonomous, roboticists are increasingly asking about ethics. If a machine is capable of making a decision, what are the moral principles that guide that decision? Can a robot have a conscience? And if so, how is that conscience to be designed and developed? And on the further fringes of the debate, can a robot sin?
According to Computer Business Review (CBR), a group of investors is putting $27 million into a fund designed to answer questions like these. They're supporting the Ethics and Governance of Artificial Intelligence Fund, which is overseen by the MIT Media Lab and the Berkman Klein Center for Internet and Society at Harvard University.
The idea, says CBR, is to apply the humanities, social sciences, and other non-tech disciplines to the development of AI. According to Reid Hoffman, founder of LinkedIn and partner at venture capital firm Greylock Partners: "There's an urgency to ensure that AI benefits society and minimizes harm.
"AI decision-making can influence many aspects of our world – education, transportation, health care, criminal justice, and the economy – yet data and code behind those decisions can be largely invisible."
In the UK, the government is launching an ethics board based at the Alan Turing Institute.
Self-driving cars, pioneered by firms like Google and Tesla, are among the most visible examples of semi-autonomous robots. They can be programmed to avoid accidents, but what happens when they're faced with a choice between two accidents? A human being might be able to make a moral decision about which is worse, but a robot doesn't have the same moral apparatus.
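To see why, consider how a programmer might actually encode such a choice. Here is a minimal sketch, assuming the decision is reduced to picking the outcome with the lowest pre-assigned "harm" score; the names and numbers are invented for illustration and reflect no real vehicle's logic:

```python
# Hypothetical sketch: an unavoidable-collision chooser reduced to
# cost minimisation. The harm scores are invented for illustration;
# assigning them is a human value judgement, not an engineering fact.

OUTCOMES = [
    {"name": "swerve left, hit barrier", "harm": 3.0},
    {"name": "brake straight, rear-end vehicle", "harm": 5.0},
]

def choose_least_harm(outcomes):
    """Pick the outcome with the lowest pre-assigned harm score."""
    return min(outcomes, key=lambda o: o["harm"])

print(choose_least_harm(OUTCOMES)["name"])
# -> "swerve left, hit barrier"
```

The machine performs the comparison flawlessly; the moral work, deciding what the numbers should be, was done in advance by a human being.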
And a further moral challenge is created by the development of semi-autonomous weapons systems – for instance, drones that can be programmed to attack particular targets without the need for a human being to issue a specific command. A report on drone warfare by the Joint Public Issues Team of the Church of Scotland, Baptist, Methodist and URC Churches notes the increasing capabilities of technology in this area, but says that "while the robots of the future might be able to demonstrate discretion, the capacity to show empathy or mercy is different altogether and maybe for this reason as much as any other the autonomous operation of weapons systems is a red line that should not be crossed".
However, talk of robots having a "conscience" or a "soul" makes some Christians uneasy. They see it as crossing a different line, the line between humans created by God in his own image and all other created things. On this view, morality and conscience belong to human beings alone. Only we can be virtuous, and only we can sin.
But while the language of 'morality' and 'conscience' might not be entirely helpful or accurate when it comes to robot behaviour, at least one Christian expert points out that in practice it might not make much difference. Robots will need to behave in ways that look like moral decisions, whether we call them that or not.
In a lecture last November for BMS World Mission, roboticist Nigel Crook from Oxford Brookes University suggested robots would need to develop "moral character". It won't be enough, he says, to provide them with a "top down" set of instructions covering every conceivable situation they might find themselves in. They will need to be given principles and then learn the moral implications of their actions in the same way that human beings do, resulting in the formation of "a robot of good character".
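Crook's contrast can be pictured in code. The toy sketch below is entirely hypothetical, not his work or any real system: it sets a fixed "top down" rule table alongside a simple learning update in which feedback after each action nudges the machine's future choices, loosely the way habituation shapes human character.

```python
# Hypothetical sketch contrasting two approaches. Neither is a real
# moral-reasoning system; both illustrate the structural difference.

# Top-down: a fixed rule for every anticipated situation.
RULES = {"person in path": "stop", "obstacle in path": "steer around"}

def top_down(situation):
    # Fails silently on anything the designers did not foresee.
    return RULES.get(situation, "no rule: undefined behaviour")

# Bottom-up: action values adjusted by feedback over time.
values = {"stop": 0.0, "steer around": 0.0}

def learn(action, feedback, rate=0.1):
    """Nudge an action's value toward the feedback it received."""
    values[action] += rate * (feedback - values[action])

def choose():
    """Prefer the action with the highest learned value."""
    return max(values, key=values.get)

learn("stop", feedback=1.0)      # praised: stopping protected someone
learn("steer around", -1.0)      # blamed: swerving caused harm
print(choose())                  # -> "stop"
```

The rule table fails on anything its designers did not foresee; the learner at least adapts, though what it learns depends entirely on the feedback it is given.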
The implications of AI are still being worked out as technology advances at a dizzying speed. Christians, like everyone else, are asking questions about what this means. But one thing people of faith want to affirm most strongly is that technology has to serve the good of humanity – all of it, not just the privileged few. Intelligence – whether artificial or not – which is divorced from a vision of the flourishing of all humankind is contrary to God's vision for humanity. We have the opportunity to create machines that can learn to do things without us, but we also have the opportunity to shape that learning in a way that blesses the world rather than harms it.
The capabilities of technology don't relieve us of responsibility for acting wisely; they make us more responsible. The worst thing we could do would be to imagine that machines can do all our thinking for us; they can't. We are moral creatures, and we can't avoid that responsibility.
Follow Mark Woods on Twitter: @RevMarkWoods