In “Can we trust robots to make moral decisions?” Olivia Goldhill describes efforts by philosophers and computer scientists to program robots to make ethical decisions. One big reason for asking how to build an ethical machine is that “work on robotic ethics is advancing our own understanding of morality.”
When people disagree about moral issues, is there any rational way to resolve those disputes? Some think there are moral principles that any rational person must accept. But in “Can Moral Disputes Be Resolved?” Alex Rosenberg says there aren’t any such principles. The problem, according to Rosenberg, is that moral judgments are not true or false statements based on applying moral principles to particular circumstances. They are instead expressions of our responses to conduct. “Many people will not find this a satisfactory outcome. They will hope to show that even if moral judgments are expressions of our emotions, nevertheless at least some among these attitudes are objective, right, correct, well justified. But if we can’t find objective grounds for our emotional response to honor killing, our condemnation of it might turn out to just be cultural prejudice.”
In “What, Exactly, Do You Want?,” Cass Sunstein explains opting in, opting out, active choosing, and choosing not to choose. John Stuart Mill helps out along the way.
Right again. “Certainly no one has ever been so right about so many things so much of the time as John Stuart Mill, the nineteenth-century English philosopher, politician, and know-it-all nonpareil who is the subject of a fine new biography by the British journalist Richard Reeves …”