
A fun 1992 paper by Tyler Cowen argues that if you accept consequentialism, a social discount rate of zero comes tumbling out. In other words, a consequentialist has to accept that a present and a future person matter equally. This has deep implications: there could be quadrillions of people yet to be born, and if they matter as much as present people, the vast majority of our obligations are to future people rather than to those alive today.

This post describes the thought experiment that forms the thrust of Cowen’s paper. But first, some definitions.

Consequentialism is the view that our actions should be judged solely on the basis of their consequences. So, for example, a consequentialist would divert Philippa Foot’s trolley to kill one person instead of five.

A social discount rate is the rate at which we care less about social goods as time goes on — a social discount rate of 1% means that giving someone a unit of value, like the pleasure of drinking a Coke or of seeing Niagara Falls, matters about 1% less for every year we delay it. Framed this way, having a social discount rate might sound like a bonkers position, but it’s really not; most people care much less about someone born in the far future than about someone currently alive. The rate only formalizes how much less.
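To see what a rate like that does over long horizons, here is a minimal sketch; the 1% rate, the 1/(1+r)^t convention, and the chosen horizons are my own illustrative picks, not anything from Cowen’s paper:

```python
# Weight of a unit of value delivered t years from now under a constant
# social discount rate r, relative to delivering it to someone today.
def discounted_weight(r: float, t: int) -> float:
    return 1.0 / (1.0 + r) ** t

for t in (1, 10, 100, 1000):
    print(t, round(discounted_weight(0.01, t), 5))  # r = 1%
# 1 0.9901
# 10 0.90529
# 100 0.36971
# 1000 5e-05
```

Even a modest-sounding 1% rate makes a person a thousand years out count for roughly nothing.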

The thought experiment

Suppose you gift Jane a unit of value. Then, you transport her to the future, paying or charging her whatever is necessary to make the whole trip a wash. That is, enough to make her indifferent between living now and in the future. For example, if it would take at least \$1 million to induce her to move to the year 2100, you pay her \$1 million and no more. On the other hand, if living in 2100 would make her feel \$1 million richer (as it would have Charles Babbage, who claimed he would give up the rest of his life to spend just three days 500 years in the future), you charge her exactly that. (Of course, dollar amounts are just handwavy ways of saying “units of value”; maybe Jane doesn’t have a million dollars, so you inflict a million dollars’ worth of trouble on her in some other way.)

Next, you transfer the unit of value you gifted Jane to someone else living in the future she just traveled to — call him John. Since you’re a consequentialist, you have no moral qualms about this transfer. Only the consequences of your actions matter; and the consequences are that one person is happier by exactly as much as another is sadder. The net change in value in the universe is zero.

Time to send Jane back home. If you paid her \$1 million to just barely convince her to travel to the future, you take the \$1 million back; if you charged her for the privilege of living in 2100, you return the charge. Either way, the return leg is a wash for her, too.

After this trip, the net result is that the unit of value we gave Jane has been transferred to John. Yet, at no point in this process did we do anything morally objectionable. We relocated Jane to the future, but we paid her for the inconvenience (or charged her for the pleasure). Then we transferred value between persons, but this doesn’t offend our consequentialist sensibilities. When we brought Jane back, we compensated her. (A catch: we didn’t pay for the inconvenience of the mode of the trip itself. Jane was never paid for the inconvenience of stepping into the time travel portal, only for the inconvenience of switching between eras. The mode of the trip is a pain both ways, so we’d need to pay Jane twice for it, instead of paying her a sum for one way and pocketing the sum back for the other. Cowen’s argument does assume a trip free of suffering, psychological or otherwise.)
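To make the bookkeeping concrete, here is a minimal ledger of the trip. It takes the case where Jane has to be paid to move (the Babbage case just flips the sign of the payment), and the unit and indifference amounts are placeholder figures of my own, not Cowen’s:

```python
# Each person's change in value across the steps of the trip.
# Positive numbers are gains, negative are losses, all in "units of value".
ledger = {"Jane": 0.0, "John": 0.0, "you": 0.0}

UNIT = 1.0          # the unit of value gifted to Jane
INDIFFERENCE = 1.0  # whatever payment makes Jane indifferent to relocating
                    # (the "$1 million" of the example, in value terms)

# Step 1: gift Jane a unit of value.
ledger["Jane"] += UNIT
ledger["you"] -= UNIT

# Step 2: send her to the future, paying her indifference price...
ledger["Jane"] += INDIFFERENCE
ledger["you"] -= INDIFFERENCE
ledger["Jane"] -= INDIFFERENCE  # ...which exactly offsets her dislike of the move.

# Step 3: transfer the gifted unit from Jane to John.
ledger["Jane"] -= UNIT
ledger["John"] += UNIT

# Step 4: bring her home. Being back in her preferred era is worth
# INDIFFERENCE to her, and you reclaim the payment, so she nets zero again.
ledger["Jane"] += INDIFFERENCE
ledger["Jane"] -= INDIFFERENCE
ledger["you"] += INDIFFERENCE

print(ledger)  # {'Jane': 0.0, 'John': 1.0, 'you': -1.0}
```

Jane ends up exactly where she started, and the unit you gave up has landed with John.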

Since no step of the trip was objectionable, the trip as a whole must be fine. So the end result — diverting social goods from present people to future people — is fine, too.

This implies that the social discount rate should be zero. Giving Jane a unit of value is no different from giving John a unit of value. So, doing the same amount of good now and doing it ten thousand years from now (you can set Jane’s destination arbitrarily far) are no different.

(Cowen’s original argument is slightly different. In it, he doesn’t gift the time traveler any value before the trip. Once the time traveler is in the future, a unit of value of the time traveler’s own is transferred to a contemporary, then the time traveler is brought back a unit of value poorer. The conclusion is the same — impoverishing present people for the benefit of future people is okay. I tweaked it to sharpen the “to what time era should we allocate social goods?” application of the thought experiment.)

I don’t suppose this argument will actually convert anyone to the “social discount rate should be zero” side of the debate — it’s too cute. Arguments I’ve outlined towards the latter half of a previous post will probably do a better job. They rely on ideas like obligations to strangers (e.g., people from the far future). Perhaps the greatest value-add of Cowen’s thought experiment is simply as an intro to social choice theory that requires less setup and has more immediately far-reaching conclusions than the standard intro — Condorcet voting.

“But I’m not a consequentialist.”

You should still care about the paper.

This is because cost-benefit analysis is fundamentally consequentialist. Transfers of utility from one person to the next don’t feature in the calculation. As Steven Landsburg writes in The Armchair Economist, for any policy proposal, cost-benefit analysts “add up the gains to the winners and the losses to the losers. If the winners gain more than the losers lose, we tend to view the policy as desirable. If the losers lose more than the winners win, we declare the difference a deadweight loss, pronounce the policy inefficient, and take the size of the deadweight loss as a measure of its unattractiveness.” (Note that in a transfer, one person’s loss cancels the other’s gain exactly; you might as well ignore transfers in cost-benefit analyses from the outset.) Beneath all the mathematical rigor that econ lectures bring to moving towards Pareto efficiency is this simple adding-up process.
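Here is a minimal sketch of that adding-up process; the evaluate function and the numbers are illustrative placeholders of mine, not anything from Landsburg or Cowen:

```python
# The adding-up rule: sum the winners' gains and the losers' losses,
# ignoring who the winners and losers are, and ignoring pure transfers
# (a $1 gain for one person cancels a $1 loss for another before the tally).
def evaluate(gains: float, losses: float) -> str:
    net = gains - losses
    if net >= 0:
        return f"desirable (net gain {net})"
    return f"inefficient (deadweight loss {-net})"

print(evaluate(gains=10, losses=5))  # desirable (net gain 5)
print(evaluate(gains=5, losses=10))  # inefficient (deadweight loss 5)
```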

You might ask: why the focus on comparing gains with losses, disregarding who wins and who loses?

In part because if the gains are bigger than the losses, you can split the difference between the winners and the losers such that both parties are happier with the policy than without it. Imagine that the winners gain \$10 million and the losers lose \$5 million. Then, you could take \$7.5 million from the winners and give it to the losers. The winners would still be winning — now to the tune of \$2.5 million — but the former losers would be net ahead \$2.5 million too. If the “share the gains” arrangement and the “don’t enact the policy” arrangement were put to a vote before the stakeholders, the first would win unanimously. That’s why the policy is “efficient” — there’s a way to distribute its effects such that everyone gains and no one loses.
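And here is that split worked out in a few lines; the variable names are placeholders, and the \$7.5 million figure is just the even split the paragraph picks (any side payment between \$5 million and \$10 million would do):

```python
winners_gain = 10.0  # $ millions gained by the winners
losers_loss = 5.0    # $ millions lost by the losers

# Any side payment strictly between 5 and 10 leaves both groups ahead;
# 7.5 splits the $5 million surplus evenly.
side_payment = 7.5

winners_net = winners_gain - side_payment  # 2.5: still winning
losers_net = side_payment - losers_loss    # 2.5: now ahead too

assert winners_net > 0 and losers_net > 0  # both prefer "policy plus transfer"
print(winners_net, losers_net)             # 2.5 2.5
```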

This is why Cowen argues that “[r]ejecting the axioms discussed above implies a rejection of policy analysis as a method, and not merely the rejection of a zero intergenerational rate of discount. The alternative to a zero intergenerational rate of discount, then, is not a positive rate of discount, but an unwillingness to evaluate outcomes by comparing costs and benefits.”