5 thoughts on “The math of killing, letting die, and … murder”
Just wanted to point out that the link is incorrect 🙂
Anyway, ethics questions, I guess, are questions about knowledge. If I am driving and my car is about to hit a car with two passengers, it matters for the car to “know” whether I am supporting 100 kids, whether I am a very important person on whom the whole country depends, whether I am a kid driving an adult’s car, and so on. The more knowledge the car has, the better the ethical decision it can make?
Thanks for pointing out the bad link! I think it’s working now.
Knowledge is very important for making decisions about right and wrong, especially for anyone who thinks making such decisions is a matter of weighing the consequences of actions. Judgment about what to make of that knowledge is also important. In addition, as this article points out, one of the consequences of programming cars to sacrifice the driver to save more lives could be to deter people from buying a robot car in the first place.
“Knowledge is very important for making decisions about right and wrong, especially for anyone who thinks making such decisions is a matter of weighing the consequences of actions.”
Can you please tell me the other ways in which right and wrong can be weighed?
I was trying to say that if knowledge is required, then perhaps we can never have enough of it: everything is connected, so to make a correct ethical decision we would need to be omniscient! Let me know what you think of that.
Interesting points! Some philosophers think that some acts are morally right and others are morally wrong regardless of their consequences. On this approach, a right act (e.g., not lying) is right even if it has bad consequences, and a wrong act (e.g., lying) is wrong even if it has good consequences. Kant’s deontology is such a theory. According to Kant, it is always wrong to use a person merely as a means. It’s wrong even if doing so would have very good consequences or refusing to do so would have very bad consequences. Of course it is prudent to consider consequences when setting goals, but it is always wrong to use anyone (including yourself) merely as a means to achieve those goals.
Your point that omniscience would be required to make a fully informed moral decision shows one reason (or one motivation) to think consequences are not relevant to right and wrong. If an act’s morality depended on its consequences, then we couldn’t know whether it is right or wrong until we see the consequences. And even then we couldn’t be sure what the consequences of the consequences will be. As you point out, only an omniscient being could be certain about the future.
This piece about robot ethics — “the mathematics of murder”! — takes the approach of utilitarianism, the idea that calculating consequences is the only thing that matters in thinking about right and wrong.
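To make the “calculating” part a bit more concrete, here is a rough sketch of the kind of expected-harm arithmetic a strictly utilitarian car might be running. This is purely my own illustration, not anything from the article: every number, probability, and name in it is made up.

```python
# Hypothetical illustration only: a crude utilitarian "expected harm" rule.
# None of these numbers or probabilities come from the article; they are
# placeholders to show the shape of the calculation.

def expected_harm(option):
    """Sum of (probability of harming a group) x (number of people in it)."""
    return sum(p * people for p, people in option["risks"])

def choose_maneuver(options):
    """Pick whichever maneuver minimizes expected harm."""
    return min(options, key=expected_harm)

# Two maneuvers the car could take, with made-up risk estimates.
options = [
    {"name": "stay course",
     "risks": [(0.9, 2)]},          # likely hits the car with two passengers
    {"name": "swerve into barrier",
     "risks": [(0.8, 1)]},          # likely sacrifices the single driver
]

print(choose_maneuver(options)["name"])   # -> "swerve into barrier"

# If the car "knew" the driver was supporting many dependents, a utilitarian
# might add them to the tally, and the same rule could flip its answer.
options[1]["risks"] = [(0.8, 1 + 100)]    # driver plus 100 dependents (made up)
print(choose_maneuver(options)["name"])   # -> "stay course"
```

Notice how your point about knowledge shows up here: once the car “knows” more about who is involved, the tally changes and the same rule can flip its decision, which is exactly why a purely consequence-counting approach is so hungry for information.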
Thank you for clarifying that. I learnt something.