Disclaimer. Don't rely on these old notes in lieu of reading the literature, but they can jog your memory. As a grad student long ago, my peers and I collaborated to write and exchange summaries of political science research. I posted them to a wiki-style website. "Wikisum" is now dead but archived here. I cannot vouch for these notes' accuracy, nor can I even say who wrote them. If you have more recent summaries to add to this collection, send them my way I guess. Sorry for the ads; they cover the costs of keeping this online.
Parfit. 1984. Reasons and persons. Oxford: Oxford University Press.
Parfit argues that one can appeal to act-consequentialist morality even when many people are involved. He identifies five mistakes that lead people to conclude that an individual making an altruistic decision in a many-person situation has made no difference.
The five mistakes are as follows.
First. The first mistake can also be thought of as the error of ignoring opportunity costs. In Parfit's rescue example, the opportunity cost of joining the first rescue mission is that the second mission fails and fifty people die; once this is considered, the decision becomes clear. In his formulation, one must ask how many people an act benefits or harms, and by how much, not merely whether anyone is benefited or harmed at all.
Second. Parfit raises two problems with the second mistake. First, some events are over-determined, as in his example of two murderers: each murderer is culpable even though the victim would have died had that murderer refrained from acting. Second, in coordination games, coordinating on any outcome other than the best one is not moral: since all individuals together could have reached a better outcome, each individual has failed to act rightly, even if his act produced the best result given everyone else's acts. On this view, coordinating on A1 is not moral, but coordinating on B2 is. This table is from page 72:
|        | Do (1)           | Do (2)    |
|--------|------------------|-----------|
| Do (1) | A1 (second best) | A2 (bad)  |
| Do (2) | B1 (bad)         | B2 (best) |
Third. The argument here is twofold. First, even if the probability of an event is low, a sufficiently large outcome makes the expected value of the action non-negligible, because the product of the two will be significant. This is the argument most applicable to the voting paradox. Parfit argues that if one takes the B term in the rational choice model to be not just personal benefit but the net benefit for all people affected by the decision, then even though p, the probability of being decisive, is low, pB will be substantial because B is so large; pB can thus easily outweigh C, the costs of voting. A positive value of pB - C makes it rational for the rational altruist to vote even without the duty considerations of the d term. Second, unlikely events become more probable the more chances they have to occur, giving another reason not to discount the potential ramifications of rare events like nuclear meltdown. (For development of this idea, see Edlin et al. 2005.)
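The voting calculus can be sketched numerically; the figures below are hypothetical illustrations, not Parfit's or Edlin et al.'s:

```python
# Rational-altruist voting calculus: vote if p*B - C > 0.
# All numbers here are hypothetical illustrations.

p = 1e-8       # probability of casting the decisive vote (tiny)
B = 50e9       # net benefit to all affected people if the better side wins (huge)
C = 20.0       # personal cost of voting (time, travel, etc.)

expected_net = p * B - C
print(expected_net)   # positive (~480), so the rational altruist votes
```

Even with a one-in-a-hundred-million chance of being decisive, the sheer size of B makes pB dwarf the personal cost C.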
Fourth and Fifth. The fourth and fifth mistakes are similar, with a slight nuance distinguishing the fifth. Parfit uses examples of multiple torturers and multiple rescuers to show that diluting a benefit or harm does not negate the benefit or harm of the action. As for imperceptible harm, even if imperceptible harm is not transitive, it is still morally wrong: the sum of imperceptible harms or benefits adds up to perceptible effects, so participating matters. If 1000 wounded men need water and 1000 people each have a pint they can contribute, my pint makes an imperceptible contribution: 1/1000 of a pint for each wounded man. Yet my decision to give up my water is laudable, since it contributes to a perceptible benefit: 1 pint for each wounded man. (If we fail to take these imperceptible contributions into account, we run into a problem: nobody is responsible for the collective outcome, yet it occurred; who caused it?) Parfit's argument does not depend on the collective; it hinges on the individual act itself. He then places the fourth and fifth mistakes in the context of tragedy-of-the-commons situations.
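The arithmetic of the pint example, using the summary's own numbers:

```python
# 1000 imperceptible contributions sum to a perceptible benefit.
donors = 1000
wounded = 1000

# One donor's pint, spread evenly, is imperceptible per man...
per_man_from_one_donor = 1 / wounded     # 0.001 pints per wounded man

# ...but all donors together give each man a full pint.
per_man_total = donors * per_man_from_one_donor
print(per_man_from_one_donor)   # 0.001
print(per_man_total)            # 1.0
```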
At the end of the chapter, Parfit argues that the increasing interconnectedness of the world requires that old common-sense morality, which worked better in smaller societies, be revised to take these mistakes into account. Furthermore, he argues that rational altruism can be a way to solve prisoner's-dilemma and tragedy-of-the-commons type problems.
Part 3 reminds me of a philosopher (I believe Singer) who argued against certain physics experiments with exotic particles. Although the chance of a catastrophic (perhaps earth-destroying) result might be 1 in 10^12 or even more remote, given that it would eliminate all future generations (potentially totaling trillions of people if not more), the expected mortality of running such an experiment could be well over one death per trial. This is an extreme example, but it shows how even minuscule probabilities with enormous effects should be taken into account.
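The expected-mortality claim follows from simple multiplication; the figures below are rough stand-ins for the "1 in 10^12" chance and "trillions" of lives mentioned above:

```python
# Expected mortality per trial = p(catastrophe) * lives at stake.
# Both numbers are hypothetical stand-ins for the paragraph's figures.
p_catastrophe = 1e-12     # 1 in 10^12 chance per trial
lives_at_stake = 2e12     # "trillions" of present and future people

expected_deaths = p_catastrophe * lives_at_stake
print(expected_deaths)    # 2.0 -- more than one expected death per trial
```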
However, as per Meehl (1977), if error in vote recording ensures that one will never actually cast the deciding vote, a probability of 0 would render this argument worthless with regard to voting, since the product of zero and even a huge number is zero. Of course, is this really a problem? See comments and criticisms of Meehl.
Furthermore, I am concerned about the discount factor that individuals apply when calculating the net benefit to society of voting. Even the most altruistic individual weights himself more heavily than others, and likely weights family and friends more heavily than strangers; beyond that, more distant others may matter less and less, albeit by still-positive amounts. With such discounting, pB may not be large enough to explain voting. It could still be enough, but we would somehow need to estimate the size of the discount factor. Limited rational altruism seems a more useful and realistic concept.
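This worry can be made concrete with a toy calculation; the weights, group sizes, and benefit figures below are entirely hypothetical:

```python
# Social discounting of B: the altruist weights benefits to distant
# others less than benefits to self and kin. All figures hypothetical.
groups = [
    # (label, number of people, altruistic weight)
    ("self",      1,           1.0),
    ("family",    10,          0.5),
    ("strangers", 100_000_000, 0.0001),
]
benefit_per_person = 100.0   # hypothetical per-person benefit of the outcome

B_full       = sum(n * benefit_per_person for _, n, _ in groups)
B_discounted = sum(n * w * benefit_per_person for _, n, w in groups)

p, C = 1e-8, 20.0
print(p * B_full - C)         # positive: full altruism says vote
print(p * B_discounted - C)   # negative: discounting flips the verdict
```

With these (made-up) numbers, discounting strangers' benefits by a factor of 10,000 shrinks pB below C, so whether rational altruism explains turnout depends entirely on how steep the discounting is.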