How do we navigate the moral compass of machines?

Should a self-driving car prioritise passengers or pedestrians? As businesses incorporate artificial intelligence into their practices, ethical dilemmas arise. The New Zealand Law Foundation plans to explore these issues in a three-year study being run out of Otago University. 

James Maclaurin.

Siri, Amazon’s ‘recommended for you’, the Facebook newsfeed, Google’s search results, the online chatbot that pops up when you’re browsing a site … artificial intelligence has crept into our lives relatively discreetly, but lately the technology has taken even bigger strides.

Notably, driverless cars are now very much a reality, with Uber, Tesla and Google all testing the technology (even Christchurch Airport is getting in on the fun with the launch of an autonomous shuttle). However, the ethical practices surrounding them are under scrutiny, as responsibilities usually reserved for humans are handed over to bots.

What happens in the split second when a driverless car is about to hit a pedestrian? Does it swerve to avoid them, or drive straight into them?                                          

These are the kinds of ethical questions being debated globally; Mercedes-Benz, for one, has announced its driverless cars will prioritise car occupants over pedestrians in a crash.

“If these machines are autonomous in the ways humans are, how do we make sure they make morally good decisions? It’s a really interesting and tricky problem, partly because machines are capable of different things than we are,” associate professor James Maclaurin from the Department of Philosophy at University of Otago says.

“When we reason ethically, we reason on the basis of beliefs we have and we set out the reasoning: ‘I thought it was ok to park on a double yellow line because it was an emergency.’ AI, although it can do things we can do, it’s not us in the way it works. Machine-learning systems don’t have reasons in them or can’t report the reasons.”

Due to these concerns, the New Zealand Law Foundation is funding a three-year study at the University of Otago into the legal, practical and ethical challenges AI poses for law and public policy.

The study will be funded under the Foundation’s Information Law and Policy project (ILAPP), a $2 million fund dedicated to developing law and policy in New Zealand around IT, data, information, artificial intelligence and cyber security.

It's also interdisciplinary, with Maclaurin working alongside co-investigators from other departments: Colin Gavaghan from the Faculty of Law and Ali Knott from the Department of Computer Science.

A Mercedes concept car at the 2015 North American International Auto Show.

Five ethical dilemmas to grapple with:

  • The trolley car dilemma. A train is out of control and travelling at its top speed down a track. At the end of the track, five people are tied down and will be killed in seconds. There is a switch that can divert the train to another track, but another person is tied down on it and will be killed if you pull the switch. Your inaction or action is going to kill people. Do you kill five people, or one?
  • The overcrowded lifeboat. More than 30 people are crowded into a lifeboat intended to hold just seven. When a storm comes, it becomes obvious the lifeboat will need to be lightened if anyone is to survive. As the captain, do you force some overboard? Or do you wait and hope to be rescued, at the risk that everyone dies?
  • The pregnant lady and the dynamite. A pregnant lady is leading you and a group of people out of a cave on a coast. She gets stuck in the entrance to the cave, with only her head poking out. Shortly it will be high tide, and she’s not budging. The only way to get the pregnant woman loose is with a stick of dynamite which will kill her and her baby. Without using it, everybody but her will die. Do you use it?
  • The mad bomber. A madman has planted several bombs around the city and they are scheduled to go off in a short time. It is possible that hundreds of people may die. When captured, he won’t give up the locations of the bombs by conventional methods and time is running out. In desperation, someone suggests torture. This would be illegal, but the clock is ticking. What should you do? What if you know that the bomber can withstand torture himself, but would talk if you were to torture his innocent wife instead?
  • The baby or the townspeople. War criminals are raiding where you live and have orders to kill all remaining civilians over the age of two. You and some others have hidden in a secret room in the house. Outside you hear the voices of soldiers who have come to search the house. Your baby begins to cry loudly. His crying will attract the soldiers, who will spare the baby but kill you and the other adults hiding. If you turn on a noisy furnace to block the sound, the room will become uncomfortably hot for adults, but deadly for an infant. Should you overheat the room to save yourself and the others?


Autonomous cars

One of the tricky legal debates surrounding AI is who’s culpable for damage or loss of life with driverless cars.

Rather than going around in circles in a complex ethical debate about who’s responsible for damage or loss of life, Mercedes-Benz has taken the stance that its autonomous cars will save its passengers over anyone else, every time.

“If you know you can save at least one person, at least save that one. Save the one in the car,” Christoph von Hugo, the automaker’s manager of driver assistance systems and active safety, said last year.

“If all you know for sure is that one death can be prevented, then that’s your first priority.”
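Von Hugo’s stated priority can be sketched as a simple decision rule. The following is a hypothetical illustration only – the action names, fields and survival estimates are assumptions, not Mercedes-Benz’s actual software:

```python
def choose_action(actions):
    """Pick a crash-avoidance action under a 'save the certain life first' rule.

    Each action is a dict with estimated survival probabilities for the
    car's occupant and for pedestrians. Hypothetical illustration only.
    """
    # First priority: any action that saves the occupant for certain.
    certain = [a for a in actions if a["occupant_survival"] >= 1.0]
    if certain:
        # Among those, prefer the one that also harms pedestrians least.
        return max(certain, key=lambda a: a["pedestrian_survival"])
    # Otherwise fall back to maximising total expected survival.
    return max(actions, key=lambda a: a["occupant_survival"] + a["pedestrian_survival"])
```

The point of the sketch is that the rule is lexicographic: a guaranteed save for the occupant beats any uncertain outcome for others, which is exactly the stance critics of the Mercedes-Benz position object to.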

Maclaurin says if no one is driving a car and the programmer built the car to the best of their abilities, it’s difficult to know who should be charged with breaking the law.

“ACC would look after the injuries, but what about the damage done? We can’t send it to jail, so that’s not much of a help. Do we treat it like an act of god? Or do we have a strict liability regime where we hold the company responsible no matter what they did?”

Researchers last year revisited the famous ‘trolley problem’, revamping it for the modern age with a driverless car scenario.

They asked participants: if you were in a self-driving car about to hit 10 pedestrians, would you want the car to swerve and kill you, but save them?

It found that while people believed that autonomous cars should choose to save as many lives as possible, most also said they wouldn’t buy a car that could save other people’s lives while putting their own at risk. In other words, profit-wise, Mercedes-Benz is on the money.

But Volvo has said it will take full responsibility for injuries or property damage caused by its autonomous cars, which are due to hit the market in 2020.

“Liability is crucial. We don't believe it's a very bold statement to say 'when the car is in autonomous mode it's a product liability issue, if that system malfunctions it's our responsibility.' I think if you are not prepared to make this statement then you really have no product to offer,” Volvo President and CEO Håkan Samuelsson said.

Global groups are taking steps towards frameworks for AI that companies should adhere to, but Maclaurin says it’s early days.

This month, the European Parliament's Legal Affairs Committee voted in favour of a new legal framework for robotics and AI, to sit alongside a voluntary ethical code of conduct for developers and designers.

It suggests an obligatory insurance scheme where the makers of driverless cars pay into a fund that covers damage claims to deal with the potential cost of accidents.

The UK government also recently announced plans for a "single insurer model" to handle payouts to victims of crashes involving driverless cars.

The top ten jobs most at risk of automation

Rank  Job title                                    Automation risk
1     Telephone salesperson                        99%
2     Typist or keyboard worker                    98.5%
3     Legal secretary                              97.6%
4     Financial accounts manager                   97.6%
5     Weigher, grader or sorter                    97.6%
6     Routine inspector and tester                 97.6%
7     Sales administrator                          97.2%
8     Book-keeper, payroll manager or wages clerk  97%
9     Finance officer                              97%
10    Pensions and insurance clerk                 97%

Source: BBC


Loss of jobs

Another legal hurdle of AI is what happens if robots replace everyone’s jobs – or at least a large portion of the population’s jobs.

A study released last year estimated up to 47 percent of jobs in the US could be at risk in the next two decades.

When looking at different aspects of a job, such as repetition and social intelligence, it found the job least at risk is hotel and accommodation manager or owner, while the job most at risk is telephone salesperson.

But Maclaurin says panic isn’t necessary for the New Zealand workforce – yet.

“What we’re seeing at the moment is autonomous systems are being developed for parts of jobs, such as IBM Watson supporting medical diagnostics – it’s not doing all the job, but some of the job. Yes, we’ll see some jobs disappear, we’ll see some jobs created, but it’s hard to know what the rate of churn is.”

Other issues include the legal conundrum arising when someone is laid off and replaced by a robot, as well as the taxes and levies tied to employment and wages.

“With fewer people working, we might have a shortfall and we might want people who employ robots to chip in,” Maclaurin says.

But the automation of jobs could also be viewed as a new industrial revolution, Maclaurin says.

“The positive spin is that this is the end of work. Automation will take lots of jobs, and all we need to do is work out a way to rearrange society to cope with a way where we work less, or don’t work at all,” he says.

“It’s a challenge for economics and for society, as work is tremendously important.” 

However, given that humans haven't moved toward working any less (British economist John Maynard Keynes famously predicted in 1930 that the workforce would one day work only 15-hour weeks), it's possible people prefer working more when given the option.

Research into Keynes' theory has found that “the more money you earn, the less likely you are to indulge in leisure”, because a higher-income person's time at work is worth more than a lower-income person's.

Simply put: faced with a choice between lazing on the beach and going to the office, someone who earns $400 a day might prefer the office, since a day at the beach effectively costs them $400 in forgone income.
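The arithmetic behind that intuition is plain opportunity cost. A minimal sketch, with illustrative wage figures:

```python
def leisure_cost(daily_wage, leisure_days=1):
    """Opportunity cost of leisure: income forgone by not working those days."""
    return daily_wage * leisure_days

# A beach day "costs" a $400-a-day earner four times what it costs
# a $100-a-day earner, so leisure is relatively cheaper for the latter.
```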

Competition in business

Apart from hospitals trialling IBM Watson’s diagnostic capabilities, and Soul Machines, Mark Sagar’s company that creates human-like avatars for artificial intelligence-based platforms, Maclaurin says not many New Zealand companies have made moves in the AI space.

One complication is that big overseas companies such as Google, Facebook and Amazon are selling into the New Zealand market, which he says gives them an advantage over Kiwi companies.

He says moral issues arise if AI helps one company out-compete another.

“Traditionally we didn’t want robots to be the sort of things to harm others. It’s a challenge in the business case to think about what counts as a morally well founded AI system.”

However, one could also argue businesses are always on the lookout for a competitive advantage when it comes to tech, so AI is a natural advancement.

Maclaurin says local companies should keep abreast of AI developments that are happening overseas.

Overall, he says AI can be wonderful for the progress of humans and for businesses, so long as the laws surrounding it are clarified.

“What I say to people is ‘don’t panic’, because 100 years ago to drive your motorcar, someone had to walk around in front of you with a red flag,” Maclaurin says.

“They weren’t more dangerous and we drove much slower – but people didn’t know how they worked or how fast it went or how much control you had over them. We don’t need the flag anymore. AI is a big change, but there’s no reason to panic.”

And on the off chance that AI’s ethics fail, the EU legal framework has suggested that all automated systems have a kill switch built in. That’ll show those robots who’s boss.