It isn’t a stretch to say that the ethics of artificial intelligence is in its infancy. More science fiction has been published on the topic than serious papers.
Should this be a hint that, while philosophy might have plenty to mine from the topic of artificial intelligence, the codes of ethics being proposed don’t have much to do with ethics after all?
Take, for instance, the six target topics in robo-ethics identified in a conference document hosted on Roboethics.org. Most have little to do with ethics:
- warfare use of robots
- social acceptability of robotic assistants and human augmentation
- impact of humanoid robotics on job management
- morality of autonomous robots
- impact of robotics research on human identity, safety and freedom
- theological and spiritual aspects of robotics technology
As stated, the six topics above are largely red herrings. Half have nothing to do with what makes artificial life philosophically interesting, let alone with a new sub-discipline of ethics.
Let’s break it down:
- Warfare use? That’s just alarm about robots killing people by accident, killing people by going rogue, or encouraging humans to kill people. No novel ethics there.
- Social acceptability? Robot assistants? No ethics there. Cyborgs? As a philosophically interesting issue, that died on the operating table when the first artificial organ was implanted.
- Effect on jobs? That’s just labour politics. No novel ethics there, except in the background questions of how we want to order society, public duties to citizens, and so on.
- Robots’ morality? I don’t know what that even means. My guess is that it’s about including robots in the family of things we grant moral concern and legal rights. That’s philosophically fun, and a political nightmare – not just because giving them the vote means people could quite literally manufacture election results. This is the only legitimate plank in the platform put forward by the nascent field of robo-ethics.
- Effects on human identity? Expect the term ‘human dignity’ to make an appearance. While this area might be a candidate for fun philosophy, it doesn’t concern ethics except as an off-switch for the entire artificial intelligence program.
- Human theology? Expect the term ‘playing god’ to make an appearance. Again, that’s not ethics.
What happens if robots turn out to be sexy?
The puerile response should not be surprising, and it’s a moralist’s nightmare:
Ick. Unless they look really, really hot.
Whatever the case: no ethics unique to robots has made an appearance, though objections will doubtless come loaded with familiar pseudo-philosophical baggage.
Security, safety, and sex are the big concerns.
But none of these are really about ethics, let alone ethics specific to artificial intelligence.
Here are the big ethical and meta-ethical questions that ethicists should be addressing:
- Should we create autonomous, artificial intelligences?
- Should we have them behave according to human ethics? If so, do we go with something from the deontological or utilitarian catalogues?
- Should we make allowances for them to develop their own systems of ethics?
- Should a moral system created by an artificial intelligence be accepted by humans as genuine ethics? (“Thou shalt not sudo” just won’t cut it, and I’ll leave speculations about robot evangelism to Battlestar Galactica.)
- Should we include an off-switch, and under what conditions can it be used?
- Should artificial intelligences have moral and legal rights?
With all of these interesting questions available, it is a shame that the so-called codes of ethics being proposed amount to little more than heading off an imagined threat.
We can see the motivation for these codes in a statement by Gianmarco Veruggio:
> We have to manage the ethics of the scientists making the robots and the artificial ethics inside the robots… Scientists must start analysing these kinds of questions and seeing if laws or regulations are needed to protect the citizen… Robots will develop strong intelligence, and in some ways it will be better than human intelligence. But it will be alien intelligence; I would prefer to give priority to humans.
The upshot? When they aren’t about safety, these ‘codes of ethics’ are really just about making rules to stop the creation of beings who might be our equals. It hardly seems fair.
Now that is a philosophically interesting topic…