Add your own egg

In “Bringing Philosophy to Life,” Nakul Krishna reflects on his introduction to philosophy by way of reading Bernard Williams. Along the way, Krishna makes many interesting points about ethics in general, utilitarianism in particular, other areas of philosophy, and what attracts people to philosophy in the first place. Above all, Williams brought “depth” to philosophy.

Williams never disdained rational argument, but he never thought it was enough by itself: “Analytic argument, the philosopher’s specialty, can certainly play a part in sharpening perception. But the aim is to sharpen perception, to make one more acutely and honestly aware of what one is saying, thinking and feeling.” Unhedged with cautious qualifications, his work goads you to distinguish what you actually think from what you think that you think. If his prose, compressed and epigrammatic, stands up to rereading today, as analytic philosophy seldom does, it’s because it leaves room for its readers to add something of themselves to it. A reader’s thought, Williams said, “cannot simply be dominated … his work in making something of this writing is also that of making something for himself.” For every reader comes to philosophy with “thoughts of his own, ways of understanding which will make something out of the writing different from anything the writer thought of putting into it. As it used to say on packets of cake mix, he will add his own egg.”

Superintelligence or superstupidity?

In “The A.I. Anxiety,” Joel Achenbach discusses the ideas of philosopher Nick Bostrom, physicist Max Tegmark, and others about artificial intelligence: “Big-name scientists worry that runaway artificial intelligence could pose a threat to humanity. Beyond the speculation is a simple question: Are we fully in control of our technology?” Which is the greater threat: that machines will become superintelligent, or superstupid?

Are these lies justified?

In “Are These 10 Lies Justified?” Gerald Dworkin listed ten lies he believes can be justified as morally permissible. He asked his readers to add their comments to begin a dialogue. The article broke all records for “hits” on the New York Times’ philosophy blog “The Stone.” As he promised he would do, in “How You Justified 10 Lies (Or Didn’t)” Dworkin has now followed up with a report on readers’ comments about those ten lies.

Altruism’s blind spot

Lisa Herzog’s “(One of) Effective Altruism’s blind spot(s)” won the 2015 third place prize awarded by 3 Quarks Daily for a philosophy blog post.

John Collins, the final judge for the contest, wrote of this post: “Moral theories that prescribe extreme versions of utilitarianism are sometimes criticized for being too demanding. Herzog’s focus is on a respect in which effective altruism appears to be not demanding enough. By taking existing social institutions and practices as simply given the effective altruist finds herself choosing from a ‘restaurant menu’ of given options, ignoring the possibility of deeper structural change. When the problem is construed as one of individual choice rather than collective action, such approaches will remain invisible.”

The winning blog posts were selected from these nine finalists.

Slow corruption

Vidar Halgunset’s “Slow Corruption” won the 2015 first place prize awarded by 3 Quarks Daily for a philosophy blog post.

John Collins, the final judge for the contest, wrote of this post: “I liked the simple humanity of this essay very much. Halgunset’s immediate topic is the recent public debate in Norway over the selective abortion of fetuses diagnosed with Down’s syndrome. His central suggestion is that we focus not on the question ‘what would be so terrible about a society without Down’s syndrome?’ but ask instead, why might it be undesirable to create a society that lacked people with Down’s syndrome? And he asks us to stop and consider the reception of this debate by those of us who have Down’s syndrome.”

The winning blog posts were selected from these nine finalists.

App addiction

Is it your fault you’re addicted to Facebook, Candy Crush Saga, or whatever? Or are the web and all those apps scientifically designed to break your will? If so, shouldn’t they be regulated? These are questions Michael Schulson addresses in “User Behaviour”: “‘Much as a user might need to exercise willpower, responsibility and self-control, and that’s great, we also have to acknowledge the other side of the street,’ said Tristan Harris, an ethical design proponent who works at Google. (He spoke outside his role at the search giant.) Major tech companies, Harris told me, ‘have 100 of the smartest statisticians and computer scientists, who went to top schools, whose job it is to break your willpower.’”

Monkeys pick your coconuts

In “The Murky Ethics of Making Monkeys Pick Our Coconuts,” Justin Wm. Moyer asks: “If a creature is smart enough to pick coconuts, is it fair to make him? This is the question at the heart of a controversy over pigtailed macaques in Thailand that excel at picking coconuts loved by Western consumers — but do so on leashes.”

See also Eliza Barclay’s NPR story on “What’s Funny About The Business Of Monkeys Picking Coconuts?”