
Archive for the ‘ethics’ Category

Roboethics on YouTube

Despite my skepticism that roboethics has much to do with the ethics of robotics, it’s only fair to let roboethicists have their say. Here’s a YouTube advertisement for a website on the subject.

Read Full Post »

Am I press?

Having been a journalist in a former life, I try to keep my nose in the game by subscribing to press release services. It’s an interesting way to watch the news cycle and journalists’ judgement in action, since reportage from many media outlets can be little more than recycled or lightly augmented press releases.

For the science beat, that means EurekAlert, which is how I got hold of this little gem:

It is about a study to be released in PLoS Medicine which argues we just don’t know whether circumcision will reduce HIV transmission rates in the United States. Boiled down: Africa and America are two very different places with very different disease vectors, which means public health data from the former won’t yield predictions about the latter.

I got all that from the press release. I haven’t read the paper, which has the following citation:

The reason I haven’t read the paper, and can’t give you juicy quotes (theirs) followed up with bloated analysis (mine)? The press release links to:

PRESS-ONLY PREVIEW OF THE ARTICLE

The all-caps aren’t mine, but they carry enough emphasis that I gather a hands-off approach is encouraged if you aren’t a journalist.

This brings me to ask the big question which crosses the mind of a serious blogger at least once: Do I count as press?

I could put on my freelancing hat and say, ‘Yes, I am.’ I could even send off an email and ask permission. But that’s just avoiding the bigger question of when a blogger counts as a journalist and is entitled to the same treatment.

My intuition is that anyone who picks up a pen can be a journalist for the purposes of any given story. Amateur or professional, it matters not a whit as long as you behave responsibly. That’s why excluding bloggers from press galleries should rankle everyone – even graduates of journalism school.

I suspect Canadian courts would see it the same way, since they don’t elevate the rights of journalists over those of citizens. Here are two statements to that effect:

Journalists have no more right to information, or to disclosure or even to access to information than the ordinary citizen. Freedom of the press as a concept does not confer any special status on media people. MacLeod v. de Chastelain, [1991] 1 F.C. 114 (F.C.T.D.).

Canadian courts have stated emphatically that the press enjoys no privilege of free speech greater than enjoyed by a private individual and that the liberty of the press is no greater than the liberty of every subject. Coates v. The Citizen (1988), 44 C.C.L.T. 286 (N.S.S.C.). [ed: the irony of the newspaper’s name should not escape us]

Even so, I’ll wait to read the article. Unless the story merits confrontation, a journalist should be polite and err on the side of caution.

Read Full Post »

It is a fact of human nature that we tend to focus on conflict. Journalists and writers know this is their stock in trade: without conflict there is no story. Nowhere is this more apparent than in discussions of right and wrong. Morality is the conflict-generator, and our sense of injustice is more than willing to weigh in on the legal implications that follow from these conflicts.

But what, exactly, is our sense of justice, and how is it to be explained?

According to Jonathan Haidt and Jesse Graham, our moral intuitions arise from the interplay between 5 core psychological responses. One of these relates to justice. Unfortunately, it is not quite clear what Haidt and Graham mean by justice, except that they associate it with fairness and rights.

Contrast this with a pair of papers by Paul Robinson, Robert Kurzban, and Owen Jones, who examine a narrower conception of justice, one that pertains to punishment. They argue that we have shared intuitions about the justice of punishment, and that these arise from evolutionary biology.

It seems to me that these authors are talking about two very different conceptions of justice. Intuitions regarding justice-about-punishment are not the same as the intuitions regarding justice-about-rights.

Unfortunately, it is difficult to reconcile these views, because it isn’t clear where intuitions about punishment (criminal or otherwise) fit in Haidt and Graham’s schema. This is a strange problem – all the more so because intuitive demands for punishment figure in Haidt’s early examinations of moral disgust – and it leaves us wondering whether intuitions about punishment and moral responsibility belong in one of the existing 5 categories or in a supplementary 6th.

For further reading:

Update:

Today BoingBoing points us to a post at The Mouse Trap which references Carlsmith et al. (noted above) and a post at Do You Mind.

Read Full Post »

Following a paper published in Fertility and Sterility, the New York Times weighs in with this: As Demand for Donor Eggs Soars, High Prices Stir Ethical Concern.

The surprise for me was how Hastings Center ethicist Josephine Johnston characterized the problem: she thinks the lure of a big payment can cloud informed consent.

The real issue is whether the money can cloud someone’s judgment… We hear about egg donors being paid enormous amounts of money, $50,000 or $60,000… How much is that person actually giving informed consent about the medical procedure and really listening and thinking as it’s being described and its risks are explained?

Informed consent can’t happen when a lot of money is involved? Johnston’s assumption needs empirical evidence to back it up. Is it true that women are so tempted by money they become incapable of making an informed decision?

Informed consent is a legal concept, so its meaning varies by jurisdiction. At its core, however, it requires that the subject of a medical intervention agree to a procedure only after considering its purpose, risks, benefits and uncertainties – and the availability of alternatives.

It’s something you’ll almost never see in medical dramas like House, where doctors – not patients – make the decisions.

There are occasions when people are believed to be incapable of providing or withholding informed consent. This happens when patients are unable to make judgements about their care, either because of immaturity, cognitive infirmity or duress.

Since youth and mental health are not at issue, perhaps Johnston is thinking about duress. But since when is a profit incentive equivalent to duress? Where are the women held to ransom by student loans, who feel bad about selling eggs but do it anyway?

It may be tempting to think that decisions we ourselves would not make signal an inability to make informed judgements, but the question of a patient’s capacity to consent should have nothing to do with whether we think the decision is a good one.

A principled approach also recognizes that the type of procedure should make no difference at all to the measurement of a person’s capacity to give informed consent. This just means that if a person is capable of giving informed consent for one medical procedure, they are capable of doing so for any other. The stakes don’t matter. A woman who can give informed consent for a vaccination can give or withhold informed consent for a kidney donation, sex change, blood transfusion, or for-profit fertility treatment.

Also, why shouldn’t people consider facts external to the medical merits of a procedure? How does this threaten informed consent? Consider the following examples:

  1. When a person worries that they might lose their job if they don’t undergo a procedure, does this mean they lose the capacity to give informed consent?
  2. When a person is worried they might lose weeks of work if they undergo a procedure, does this mean they lose the capacity to withhold informed consent and refuse treatment?

Keeping in mind the second example, consider how nobody would question a woman’s capacity to consent if she decided not to donate her eggs because it cost her too much time and money.

How can it be that women lose the capacity to give or withhold informed consent when they think about monetary advantages, but keep their ability to decide when they think about monetary disadvantages?

The lesson we should take away from this is simple: the capacity to give or withhold informed consent doesn’t go away when a person has access to extra information about incentives and disincentives.

If there are good objections to the sale of human eggs, they don’t have anything to do with informed consent.

(Hat tip to Pure Pedantry for mention of the NYT article. However, he is wrong about the law in Canada and the UK. In both jurisdictions, egg sales are illegal: in Canada, buying human eggs is a serious criminal offence; in the UK, donation with some compensation is permitted, but outright sale is not.)

Read Full Post »

Joshua Knobe has an interesting post today at the group blog Philosophy Ethics and Academia (PEA) Soup about research done by David Shoemaker.

Test subjects responding to short stories gave intuitive evaluations of blameworthiness and praiseworthiness. Shoemaker’s results suggest moral ignorance attracts blame while moral knowledge does not attract praise.

  • A character who does something morally wrong, believing it to be morally right, is more blameworthy than a character who does something morally wrong even though they believe it is morally wrong. Conclusion: moral ignorance increases blameworthiness.
  • A character who does something morally right, but believes it to be morally wrong, is just as praiseworthy as a character who does something morally right, believing it to be morally right. Conclusion: moral knowledge does not increase praiseworthiness.

These results leave us with two puzzles:

  1. Why do we penalize moral ignorance contributing to bad acts?
  2. When we assess the motivations for good acts, are we failing to reward moral knowledge, or are we discounting moral ignorance (i.e. giving moral ignorance a pass)?

My explanation: we are more willing to assign blame than praise, and we are unwilling to assign praise for something we would have done.

This might have something to do with the way we see our own ideas of right action as presenting the ‘obvious choice’. Satisfying expectations is nothing special, but failing to meet our expectations creates blame.

In a mix of deontological and consequentialist thinking, we hold expectations pertaining both to the action and to the motivation for the action. A bad outcome from an incorrect action fails one expectation; two expectations are violated when the wrong action is compounded by a second, underlying mistake about ethical norms.

Might there be a way to empirically test this hypothesis?

One last thing: the results do not accord with my own intuitions. I tend to think moral ignorance decreases blameworthiness.

Read Full Post »

It isn’t a stretch to say that the ethics of artificial intelligence is in its infancy. More science fiction has been published on the topic than serious papers.

Should this be a hint that, while philosophy may have a lot to mine from the topic of artificial intelligence, the codes of ethics being proposed don’t have much to do with ethics after all?

Take, for instance, the 6 target topics in robo-ethics as identified in a conference document hosted on Roboethics.org. They don’t have much to do with ethics:

  1. warfare use of robots
  2. social acceptability of robotic assistants and human augmentation
  3. impact of humanoid robotics on job management
  4. morality of autonomous robots
  5. impact of robotics research on human identity, safety and freedom
  6. theological and spiritual aspects of robotics technology

As stated, the 6 topics noted above are red herrings. Half have nothing to do with what makes artificial life philosophically interesting, let alone with a sub-discipline of ethics.

Let’s break it down:

  1. Warfare use? That’s just alarm about robots killing people by accident, killing people by going rogue, or encouraging humans to kill people. No novel ethics there.
  2. Social acceptability? Robot assistants? No ethics there. Cyborgs? As a philosophically interesting issue, that died on the operating table when the first artificial organ was implanted.
  3. Effect on jobs? That’s just labour politics. No novel ethics there, except in the background questions of how we want to order society, public duties to citizens, and so on.
  4. Robots’ morality? I don’t know what that even means. I’ll guess it is about including robots in the family of things we give moral concern and legal rights. That’s philosophically fun, and a political nightmare – not just because giving them the vote means people could quite literally manufacture election results. This point is the only legitimate plank in the platform put forward by the nascent field of robo-ethics.
  5. Effects on human identity? Expect the term ‘human dignity’ to make an appearance. While this area might be a candidate for fun philosophy, it doesn’t concern ethics except as an off-switch for the entire artificial intelligence program.
  6. Human theology? Expect the term ‘playing god’ to make an appearance. Again, that’s not ethics.

Not mentioned in the list is the issue lurking deep in the hind-brains of moralists and taboo-busters:

What happens if robots turn out to be sexy?

The puerile response should not be surprising, and it’s a moralist’s nightmare:

Ick. Unless they look really, really hot.

Whatever the case: no ethics unique to robots has made an appearance, though objections will doubtless come loaded with familiar pseudo-philosophical baggage.

Henrik Christensen, chairman of the European Robotics Network, sums up the hot topics in robo-ethics this way:

security, safety and sex are the big concerns.

But none of these are really about ethics, let alone ethics specific to artificial intelligence.

Here are the big ethical and meta-ethical questions that ethicists should be addressing:

  • Should we create autonomous, artificial intelligences?
  • Should we have them behave according to human ethics? If so, do we go with something from the deontological or utilitarian catalogues?
  • Should we make allowances for them to develop their own systems of ethics?
  • Should a moral system created by an artificial intelligence be accepted by humans as genuine ethics? (“Thou shalt not sudo” just won’t cut it, and I’ll leave speculations about robot evangelism to Battlestar Galactica.)
  • Should we include an off-switch, and under what conditions can it be used?
  • Should artificial intelligences have moral and legal rights?

With all of these interesting questions, it is a shame the so-called codes of ethics being proposed are just about heading off an imagined threat.

We can see the motivation for these codes in a statement by Gianmarco Veruggio.

We have to manage the ethics of the scientists making the robots and the artificial ethics inside the robots… Scientists must start analysing these kinds of questions and seeing if laws or regulations are needed to protect the citizen… Robots will develop strong intelligence, and in some ways it will be better than human intelligence. But it will be alien intelligence; I would prefer to give priority to humans.

The upshot? When they aren’t about safety, these ‘codes of ethics’ are really just about making rules to stop the creation of beings who might be our equals. It hardly seems fair.

Now that is a philosophically interesting topic…

Read Full Post »

It is a good time for legal scholars and ethicists interested in robotics and artificial intelligence.

Near the end of last year, a report commissioned for the UK government, Robo-rights: Utopian dream or rise of the machines?, identified robots’ legal rights as a controversy waiting to happen. A month later, just before Christmas, this hit the news.

The BBC spun the story as Robots could demand legal rights, the Financial Times blew things out of proportion with UK report says robots will have rights, and on this side of the pond CBC’s headline read Robot rights? It could happen, U.K. government told.

While news outlets didn’t mention it, the report referenced a mock trial in California that debated this issue two years ago. The Legal Affairs article about the event makes for interesting reading:

Spinning a web of legal precedents, invoking California laws governing the care of patients dependent on life support, as well as laws against animal cruelty, Rothblatt [legal counsel for BINA48] argued that a self-conscious computer facing the prospect of an imminent unplugging should have standing to bring a claim of battery… The jury, comprised of audience members, sided overwhelmingly with the plaintiff. But the mock trial judge, played by a local lawyer who is an expert in mental health law, set aside the jury verdict and recommended letting the issue be resolved by the hypothetical legislature. [emphasis added]

(You can also download transcripts and video of the proceedings.)

That may excite the lawyers, but what’s here for ethicists? Coding ethics for artificial intelligences, of course.

Just as robots’ rights have become news, the European Robotics Research Network produced a Roboethics Roadmap, and a Korean working group has promised a Robot Ethics Charter before the end of the year. The reason, according to a Korean official quoted by Reuters, is that they expect the emergence of artificial intelligence in the near future:

We expect the day will soon come when intellectual robots can act upon their own decisions. So we are presenting this as an ethical guideline on the role and ability of robots.

Not to be outdone by their neighbours, Japan followed soon after, issuing its own Draft Guidelines to Secure the Safe Performance of Next Generation Robots.

These news stories caught a lot of attention, in part because researchers and journalists drew immediate comparisons to Isaac Asimov’s laws of robotics. Here’s a quick survey:

Don’t bother reading all of them. Even though they’ve been selected to represent the diversity of coverage, they demonstrate a general uniformity. Once again, journalism signals the peril of a mass media fed by press releases, wire-service filler, and sound bites: information homogeneity. Even most of the quotes are the same, with special attention given to Park Hye-Young’s statement to an AFP interviewer:

Imagine if some people treat androids as if the machines were their wives.

Those 13 words reveal so much about human nature.

The expertise of the Korean working group amounts to five ‘futurists’, including one science-fiction writer; none of the press suggests ethicists or legal scholars are on the team.

That’s a shame, as philosophers are interested in the project. See, for example, Glenn McGee’s article in The Scientist, which offers the usual mix of worries about safety, stewardship and societal taboo. (Hat tip to the AJOB Bioethics blog’s A Code of Ethics for Robots? Uh, Yes. Please.)

On the one hand, we don’t want artificial intelligences to hurt us. On the other, we know humans have a great capacity for harming robots, but don’t know if it really matters. Also tossed into the dystopian blender are worries about degrading whatever it is to be human.

A code of ethics is supposed to prevent all this?

For a legal scholar’s opinion of the development of a code of ethics, turn to Ian Kerr. A professor of law at the University of Ottawa, he has just written Minding the machines for a local newspaper. (Hat tip to Michael Geist for mention of Kerr’s item.) Distilled, his article makes two important points:

  1. The codes of ethics are being drafted for financial reasons, to create a market in domestic automation. Insofar as the codes of ethics are being drafted to promote robotics as a social good, this is an exercise in framing. (Kerr does not use the word.)
  2. However, codes of ethics for robots are a dangerous misdirection, because the robots themselves are not a real solution. We should instead focus on the social ills being used to generate demand for robots, and be wary of a technological fix that has the potential to leave the underlying problems hidden.

In Kerr’s own words:

Before we spend valuable resources commissioning Working Groups to invent “no-flirt” rules or other robotic laws to avoid inappropriate human-machine bonding, isn’t there a logically prior line of questioning about whether a declining birthrate is truly a problem and, in any event, whether intelligent service robots are the right response? … I am concerned about robotic laws, charters and other sleight-of-hand that have the potential to misdirect us from the actual domains of ethics and social justice.

Kerr’s argument contradicts that of author Robert J. Sawyer, whose commentary On Asimov’s Three Laws of Robotics (1991) pointed out that the ‘laws of robotics’ popular in science fiction are not going to guide developments in artificial intelligence.

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones.

If Kerr is right, Sawyer failed to predict that business interests could find a way to make ethics sell robots. Confronted with this technology, purchasers will need reassurance that it will be safe – and social engineers will need assurance that their moral sensibilities will not be transgressed.

I think Kerr is spot on, but I have my doubts a Code of Ethics can work either as a sales pitch or as a real guide for moral human-robot interactions.

Even assuming artificial intelligence is an achievable goal, there are no guarantees researchers will be capable of implementing a code of ethics. Nor are there indications the public will be willing to adopt it in even one nation – let alone internationally. Besides, why should we think autonomous machines will abstain from writing over the code to make their own evolving palimpsest of ethical norms? It’s what we humans do.

A draft code of ethics for robots might make for a good discussion paper, but until we have a better understanding of how even humans come to make moral decisions, it will provide very little guidance for the practical implementation of artificial intelligences who are worthy of our moral consideration.

Anything less can be handled by ethics as we know it, and we don’t need South Korea’s Robot Ethics Charter to tell us that using robots to forget our obligations to the sick and elderly is a bad idea.

Read Full Post »

Older Posts »