Improving morality through robots

In “Can we trust robots to make moral decisions?” Olivia Goldhill describes efforts by philosophers and computer scientists to program robots to make ethical decisions. One important reason for asking how to build an ethical machine is that, in her words, “work on robotic ethics is advancing our own understanding of morality.”

Is there any way to settle moral disagreements?

When people disagree about moral issues, is there any rational way to resolve those disputes? Some think there are moral principles that any rational person must accept. But in “Can Moral Disputes Be Resolved?” Alex Rosenberg argues that there are no such principles. The problem, according to Rosenberg, is that moral judgments are not true or false statements derived by applying moral principles to particular circumstances; they are instead expressions of our emotional responses to conduct. Rosenberg concedes that “Many people will not find this a satisfactory outcome. They will hope to show that even if moral judgments are expressions of our emotions, nevertheless at least some among these attitudes are objective, right, correct, well justified. But if we can’t find objective grounds for our emotional response to honor killing, our condemnation of it might turn out to just be cultural prejudice.”