John Lennox says bright young scientists who hold a belief in God should get involved in the development of artificial intelligence.
“There’s a lot of good that can be done,” the Oxford Professor of Mathematics told John Dickson in the latest episode of the Undeceptions podcast. “But we need people who are articulate, who can look into the ethical dimensions.”
One of the world’s best-known public Christians, Lennox is also a best-selling author of science-tinged apologetics books such as God’s Undertaker and this year’s timely Where is God in a Coronavirus World?
Lennox points to the work of Professor Rosalind Picard at the Massachusetts Institute of Technology (MIT). He tells Dickson how she pioneered a field called ‘affective computing’, using facial recognition techniques to spot the signs of a seizure in children before it happens – in an effort to prevent it.
“Technology advances much more rapidly than ethical thinking,” says Lennox. “And it’s noticeable that the ethical thinking is not going very fast at all.”
In 2017, hundreds of scholars, scientists, philosophers and industry leaders hammered out a set of 23 high-level prescriptions for AI ethics, with a “do no harm” mandate for developers.
The ‘Asilomar AI Principles’ have now been endorsed by more than 1,200 AI and robotics researchers, including the late Stephen Hawking, Elon Musk and researchers from Google DeepMind, Google Brain, Facebook and Apple. They include principles like, “If an AI system causes harm, it should be possible to ascertain why” and “AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.”
While these principles are a good starting point, Lennox says there’s a long way to go.
“I’m involved in the business school here [at Oxford]. How many chief executives have said to me, ‘It’s one thing to have a mission statement on your office wall. It’s another thing to get it into the heart of your executives.'”
“There are still a sufficient number of people in the world who take the view that, if we can do these things, then they can be done without a concern for ethics. That’s a big problem.”
“I don’t buy into the notion that technology is completely neutral from a moral standpoint.” – Vicky Lorrimar
Also joining Dickson for the Undeceptions AI discussion is Vicky Lorrimar, an Australian academic at Trinity College in Queensland who focuses on theological understandings of the human being and how they relate to human enhancement technologies.
She says technology will never be neutral.
“I don’t buy into the notion that technology is completely neutral from a moral standpoint. It’s not as simple as saying technology can be used for good and used for evil, though it can be used with different motives and intentions,” she tells Dickson.
“We need to be asking the question: Who is developing the technology, and what is the vision of the ‘good life’ that sits under it?”
For many, says Lorrimar, technology has replaced any thought of a need for God.
“For people who have a religious worldview, God offers hope – a way to be redeemed or glorified. It gives us an understanding of why things aren’t the way they ought to be [here and now]. It can have the effect of lifting us out of whatever present condition we’re in.”
“If that’s the hope that has traditionally been offered by religion, then for some technology has become a substitute for that. We can make our lives better through our own ingenuity.”
Such ingenuity can blind us to the dangers, adds Lennox. Take surveillance technology, for example, he says.
“For a police force, it’s wonderful to be able to recognise terrorists and criminals in a football crowd. But unfortunately this kind of surveillance technology lends itself to oppression.”
“It is extremely intrusive artificial intelligence, and we need to be able to discriminate.”