In “5 Thought Experiments That Will Melt Your Brain,” Evan Dashevsky says that because “some science is too big, dangerous, or weird to happen in the lab,” thought experiments “may be the most valuable experiments of all.” But notice that all five of his examples are from philosophers.
“Why is it wrong for Volkswagen to lie (if it did lie) about whether its cars meet emission standards, but uncontroversial for HBO to lie (if it is lying) about whether Jon Snow is dead?” In “Companies Lie. Some Get Away with It,” Stephen Carter explains that sometimes we expect people to lie and sometimes we don’t.
The world is full of suffering. How far should you go to prevent the suffering of others, especially the suffering of strangers? Ought you to splurge on a fancy coffee drink while children are dying of starvation? Larissa MacFarquhar discusses extreme altruism, illustrating her points with a fascinating example. “In wartime – or in a crisis so devastating that it resembles war, such as an earthquake or a hurricane – duty expands far beyond its peacetime boundaries. In wartime, it is thought dutiful rather than unnatural to leave your family for the sake of a cause. In ordinary times, to ask a person to sacrifice his life for a stranger seems outrageous, but in war it is commonplace. Acts that seem appallingly bad or appallingly good in normal circumstances become part of daily life. This is the difference between do-gooders and ordinary people: for do-gooders, it is always wartime. They always feel themselves responsible for strangers; they know that there are always those as urgently in need as the victims of battle, and they consider themselves conscripted by duty.”
Is it ever worth not knowing the truth? Sometimes there are downsides to knowing the truth. For example, the truth about health or personal relationships can sometimes produce more pain than good. But we don’t always know when the truth will be worth it and when it won’t. So what should we do? In “When the Truth Hurts,” Jess Whittlestone proposes this approach: “If I’m right here that the risks involved in seeking the truth too little are greater than those involved in seeking the truth too much, then aiming to always seek the truth might be a good general rule of thumb. This isn’t to say that the truth is ultimately valuable, or that there are no cases where we’re better off not knowing the truth. Valuing the truth doesn’t mean wasting time on understanding trivial, boring things, or asking everyone you meet what they like least about you. But given that most of the time we’re operating under a great deal of uncertainty, we might benefit overall from believing – falsely! – that the truth is what matters most.”
“One day I decided to stop lying. Don’t get me wrong, I’ve never been a big liar before in my life, but I decided to – to the best of my abilities – not lie at all. I defined some borderline case rules for myself, for example, it is ok to avoid or withhold the truth, when the effects of telling it would be harmful for myself or someone else (do I look pretty in this dress?), but not to tell a direct lie, however small.” Jacob Henricson reports the results of his experiment in “Honesty in Business – A Stoic Experiment.” Kant would approve.
Some activists for animal rights “reject any compromises with welfare-oriented groups that aim to secure incremental improvements — such as larger cages — for animals raised and slaughtered in horrific circumstances.” They think that working for more humane treatment of animals in an unjust institution like factory farming violates their moral principle that it’s wrong to use animals for food in the first place. Bob Fischer and James McWilliams question whether it makes sense to put moral principle ahead of preventing suffering.
“In the age of ISIS, can we still have ‘just wars’?” In her interview with Gary Gutting, Cecile Fabre argues that the principles underlying the “just war” tradition apply not only to “traditional” wars between nation states but also to war against ISIS. “It’s illusory to think that we can ever once and for all defeat terror — as illusory as to think that we can eliminate murder, rape, drug trafficking, and so on. As I noted earlier, human beings have always done those things to one another. Most of us don’t think that the best way to stop suspected murderers, rapists, and traffickers is to bomb into the ground the areas where we think they are hiding. The most we can do is to catch and punish them (or if necessary kill them with minimum collateral damage). We do so knowing that we will not be able to spare all likely victims. Outside of war, the price we pay for abiding by moral principles is a great deal of wrongful suffering. The same is true regarding war.”
In “The Moral Imperative for Bioethicists,” Steven Pinker argues that ethicists should not use their philosophical distinctions and niceties to slow down research. “Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence. Get out of the way. A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as ‘dignity,’ ‘sacredness,’ or ‘social justice.’ Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future.”
Are professional ethicists more moral than others? Apparently not. According to Eric Schwitzgebel, many professional ethicists tend to be “cheeseburger ethicists.” A cheeseburger ethicist is someone who reasons that it is morally wrong to eat meat and nevertheless enjoys a cheeseburger because everyone else does it. “In most cases, we already know what is good. No special effort or skill is required to figure that out. Much more interesting and practical is the question of how far short of the ideal we are comfortable being.” And professional ethicists seem more or less as comfortable as everyone else in falling short of their moral ideals. So … what is the point of philosophical reflection about how we ought to live? “Genuine philosophical thinking critiques its prior strictures, including even the assumption that we ought to be morally good. It damages almost as often as it aids, is free, wild and unpredictable, always breaks its harness. It will take you somewhere, up, down, sideways – you can’t know in advance. But you are responsible for trying to go in the right direction with it, and also for your failure when you don’t get there.”
Is imagining what it is like to be someone else a good way to make moral decisions? Paul Bloom says no in “Imagining the Lives of Others.” For one thing, we’re not very good at imagining the lives of other people. We are better off using general moral principles to make decisions about what we owe others.