William Saletan, the columnist with the human nature beat at Slate, has a piece about the effects of brain damage on moral reasoning. It is called "Mind Makes Right: Brain damage, evolution, and the future of morality."
I’m mentioning it here because a lot goes wrong in this column. The impression I get is that Saletan has happened on a weighty and exciting topic, and is casting about for an opinion about it. Combine that with sloppy use of the words ‘ethics’ and ‘morality’, and the column lacks clarity and focus. Even worse, he makes the mistake of saying things like this:
… brain science has discredited religion and philosophy, but don’t worry: Morality won’t disappear. Brain science is offering itself as the new authority. What’s moral, in the new world, is what’s normal, natural, necessary, and neurologically fit.
Umm, no. While it may be damaging to religion, neuroscience hasn’t discredited philosophy. Why would he think so? Neuroscience isn’t telling us anything about the morally right choice in a particular situation. It is telling us how we make moral decisions.
The error Saletan makes is sometimes called the naturalistic fallacy, although people who quibble (i.e. any philosopher worth their salt) will tell you it is really an example of the is-ought distinction going awry: roughly put, you cannot derive ethical conclusions from purely factual premises.
It doesn’t get any better. Here is his quick history about the evolution of human moral psychology:
As our ancestors adapted from small, kin-based groups toward elaborate nation-states, the brain evolved from reflexive emotions toward the abstract reasoning power that gave birth, in this millennium, to utilitarianism.
I hope that this statement does not suggest to some that utilitarian thinking began with J.S. Mill’s utilitarianism. The theory is just an elegant statement of how people have decided moral problems over the very long time that is human history. Higher-order moral thinking is not something new to the human species. Neither is thinking that considers the ethical consequences of actions.
However, this does suggest an interesting avenue of thinking. Evolutionary processes have resulted in abstract thinking that allows us to consider moral problems without relying on emotions. Why is this? What evolutionary advantage is there to higher-order moral decision-making? Is this just a general aptitude for reasoning applied to moral problems on an ad hoc basis? At what point in human history did biology permit moral thinking we would recognize as such?
Those questions will be left for another day, because in his column, Saletan is more interested in the future implications of this research.
Once technology manipulates ethics, ethics can no longer judge technology. Nor can human nature discredit the mentality that shapes human nature. In a utilitarian world, what’s neurologically fit is utilitarianism. It’ll become the norm, the standard of right and wrong. Sure, a few mental relics of our primate ancestry will be lost. But it’ll be worth it. I think. (emphasis added)
Ouch. I can’t help but wonder if even Saletan knows what he means when he says, ‘nor can human nature discredit the mentality that shapes human nature.’ That’s a good example of an author trying to be pithy to the point of divorcing all meaning from a statement.
As for the substance of what he says, I think he is wrong once again.
Technology isn’t manipulating ethics, and even if it could, it would not prevent ethical judgements about technology.
The fMRI technology being used by neuroscientists does not manipulate thoughts, attitudes, beliefs or ways of thinking. It allows us to observe activities in the brain, and learn more about moral cognition. In fairness, this does invite a materialist approach to ethical reasoning, and suggests we can alter moral reasoning by altering the brain. But even if some future technology could make people reason without moral emotions, they would continue to be able to make moral judgements about that technology. It doesn’t take our moral decisions away from us.
At present, neuroscience is telling us that people with certain neurological structures decide moral questions in a particular way. We may, in the future, have access to ‘thought control’ medical interventions. But I doubt there will be a long line-up at clinics promising to make us all utilitarians by whacking our collective ventromedial prefrontal cortex with a ball-peen hammer.
Besides, as any philosophy student should know, we already have lots of choices about how we wish to think about moral problems. Neuroscience is telling us something about how we have those choices, not which choice we should make.
Update: The good folks at The Situationist point to the Slate article as well.