It is a good time for legal scholars and ethicists interested in robotics and artificial intelligence.
Near the end of last year, a report commissioned for the UK government, Robo-rights: Utopian dream or rise of the machines?, identified robots’ legal rights as a controversy waiting to happen. A month later, just before Christmas, the story hit the news.
The BBC spun the story as Robots could demand legal rights, the Financial Times blew things out of proportion with UK report says robots will have rights, and on this side of the pond CBC’s headline read Robot rights? It could happen, U.K. government told.
While news outlets didn’t mention it, the report referenced a mock trial in California that debated this issue two years ago. The Legal Affairs article about the event makes for interesting reading:
Spinning a web of legal precedents, invoking California laws governing the care of patients dependent on life support, as well as laws against animal cruelty, Rothblatt [legal counsel for BINA48] argued that a self-conscious computer facing the prospect of an imminent unplugging should have standing to bring a claim of battery… The jury, comprised of audience members, sided overwhelmingly with the plaintiff. But the mock trial judge, played by a local lawyer who is an expert in mental health law, set aside the jury verdict and recommended letting the issue be resolved by the hypothetical legislature. [emphasis added]
(You can also download transcripts and video of the proceedings.)
That may excite the lawyers, but what’s here for ethicists? Coding ethics for artificial intelligences, of course.
Just as robots’ rights have become news, the European Robotics Research Network produced a Roboethics Roadmap, and a Korean working group has promised a Robot Ethics Charter before the end of the year. The reason, according to a Korean official quoted by Reuters, is that they expect the emergence of artificial intelligence in the near future:
We expect the day will soon come when intellectual robots can act upon their own decisions. So we are presenting this as an ethical guideline on the role and ability of robots.
Not to be outdone by their neighbours, Japan followed soon after, issuing its Draft Guidelines to Secure the Safe Performance of Next Generation Robots.
These news stories attracted a lot of attention, in part because researchers and journalists drew immediate comparisons with Isaac Asimov’s laws of robotics. Here’s a quick survey:
- Trust me, I’m a robot – The Economist
- The robots are running riot! Quick, bring out the red tape – TimesOnline
- Robot Code of Ethics to Prevent Android Abuse, Protect Humans – National Geographic
- Robotic age poses ethical dilemma – BBC
- Ethical code for robots in works, South Korea says – CBC
Don’t bother reading all of them. Even though they were selected to represent the diversity of coverage, they demonstrate a general uniformity. Once again, journalism signals the peril of a mass media fed by press releases, wire-service filler, and sound bites: information homogeneity. Even most of the quotes are the same, with special attention given to Park Hye-Young’s statement to an AFP interviewer:
Imagine if some people treat androids as if the machines were their wives.
Those 13 words reveal so much about human nature.
The Korean working group’s expertise amounts to five ‘futurists’, including one science-fiction writer; nothing in the press coverage suggests ethicists or legal scholars are on the team.
That’s a shame, as philosophers are interested in the project. See, for example, Glenn McGee’s article in The Scientist, which offers the usual mix of worries about safety, stewardship and societal taboo. (Hat tip to the AJOB Bioethics blog’s A Code of Ethics for Robots? Uh, Yes. Please.)
On the one hand, we don’t want artificial intelligences to hurt us. On the other, we know humans have a great capacity for harming robots, but we don’t know whether that really matters. Also tossed into the dystopian blender are worries about degrading whatever it means to be human.
A code of ethics is supposed to prevent all this?
For a legal scholar’s opinion on the development of a code of ethics, turn to Ian Kerr. A professor of law at the University of Ottawa, he has just written Minding the machines for a local newspaper. (Hat tip to Michael Geist for mentioning Kerr’s piece.) Distilled, his article makes two important points:
- The codes of ethics are being drafted for financial reasons: to create a market in domestic automation. Insofar as the codes promote robotics as a social good, this is an exercise in framing. (Kerr does not use the word.)
- However, codes of ethics for robots are a dangerous misdirection, because the robots themselves are not a real solution. We should instead focus on the social ills being used to generate demand for robots, and be wary of a technological fix that risks leaving the underlying problems hidden.
In Kerr’s own words:
Before we spend valuable resources commissioning Working Groups to invent “no-flirt” rules or other robotic laws to avoid inappropriate human-machine bonding, isn’t there a logically prior line of questioning about whether a declining birthrate is truly a problem and, in any event, whether intelligent service robots are the right response? … I am concerned about robotic laws, charters and other sleight-of-hand that have the potential to misdirect us from the actual domains of ethics and social justice.
Kerr’s argument contradicts that of author Robert J. Sawyer, whose essay On Asimov’s Three Laws of Robotics (1991) pointed out that the ‘laws of robotics’ popular in science fiction are not going to guide developments in artificial intelligence.
The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones.
If Kerr is right, Sawyer failed to predict that business interests would find a way to make ethics sell robots. Confronted by this technology, purchasers will need reassurance that it is safe – and social engineers will need assurance that their moral sensibilities will not be transgressed.
I think Kerr is spot on, but I doubt a Code of Ethics can work either as a sales pitch or as a real guide for moral human-robot interactions.
Even assuming artificial intelligence is an achievable goal, there are no guarantees researchers will be capable of implementing a code of ethics. Nor are there indications the public will be willing to adopt it in even one nation – let alone internationally. Besides, why should we think autonomous machines will abstain from writing over the code to make their own evolving palimpsest of ethical norms? It’s what we humans do.
A draft code of ethics for robots might make for a good discussion paper, but until we have a better understanding of how even humans come to make moral decisions, it will provide very little guidance for the practical implementation of artificial intelligences who are worthy of our moral consideration.
Anything less can be handled by ethics as we know it, and we don’t need South Korea’s Robot Ethics Charter to tell us that using robots as an excuse to forget our obligations to the sick and elderly is a bad idea.