
Jones and Summers (2021) is a new working paper that attempts to calculate the social return on R&D - that is, how much value does a dollar of R&D create? The paper is like something out of another time; the argument is so simple and straightforward that it could have been made at any point in the last 60 years. It requires no new math or theoretical insights, just basic accounting and some simple data. The main insight is simply in how to frame the problem.

What I want to do in this post is walk through Jones and Summers’ simple “thought experiment.” At the end, we’ll have a new argument that the returns on R&D are quite high and that we should probably be spending much more on R&D. Next week we’ll look at some empirical data to see if it matches the intuition of the thought experiment. (Spoiler: It does)

# Taking an R&D Break

Let’s start with a model of long-run changes in material living standards that is so simple it’s hard to argue with:

1. R&D is an activity that consumes some of the economy’s resources
2. R&D is the only way new technologies come into existence
3. Growth in GDP per capita comes entirely from new technologies (at least, in the long run)

We’ll re-examine all of these points later, but for now let’s accept them and move on.

This model helps clarify what it means to compute the returns to R&D. If we do more R&D, we have to use more of the economy’s resources, but in return we’ll get more GDP per capita. So computing the returns to R&D is really about computing how much growth changes when we spend a bit more on R&D. Specifically, if we increase R&D by, say, 1%, what will the expected impact be on GDP per capita?

That’s actually a really hard question to answer! And the clever thing Jones and Summers do is they don’t ask it. Instead, they ask a different question which is much easier to answer: what would happen if we took a break from R&D for a year?

Why is this easier to answer? Because in our simple model, if we stop all R&D, we stop all growth! We no longer have to estimate “how much” growth we get for an extra dollar of R&D. We know that if we stop all R&D, we stop all growth. Simple as that!

Let’s get more specific. Suppose under normal circumstances, we spend a constant share of GDP on R&D. Let’s label the share *s*. In return, the economy grows by a long-run average that we’ll call *g*. In the USA, between 1953 and 2019, the annual share of GDP spent on R&D was about 2.5%, so *s* = 0.025. Over the same time period, GDP per capita (adjusted for inflation) grew by about 1.8% per year, so *g* = 0.018. If we hit “pause” on R&D for one year, then in that year we save 2.5% of GDP (since we don’t have to spend it on R&D), but GDP per capita stays stuck at its current level for one year, instead of growing by 1.8%.

But that’s not a full accounting of the benefits or the costs of doing R&D. In the next year, our R&D break will end and we’ll start spending 2.5% of GDP on R&D again. But because we took that break and didn’t grow in the previous year, GDP will be smaller than it would otherwise have been. Since we always spend 2.5% of GDP on R&D, we’ll be devoting a bit less money to R&D than we otherwise would (since it will be 2.5% of a smaller GDP). And because we didn’t grow in the previous year, we’ll also be growing from a lower level than we would have been if we hadn’t taken our R&D break. And that will be true in the next period, and the next, and the next: in every year until the end of time, GDP per capita will be 1.8% lower than it would have been if we had not taken that R&D break.

Adding up all these costs and benefits over time requires us to do some calculations using the interest rate *r*, which is how economists value dollars at different points in time. In the USA, a common interest rate to use might be 5%, so that *r* = 0.05. Jones and Summers show the math shakes out so that the ratio of the discounted benefits of R&D, summed from here to infinity, to the cost of R&D is:

Benefits-to-Cost Ratio = *g/(sr)*

In other words, on average the return on a dollar spent on R&D is equal to the long-run average growth rate, divided by the share of GDP spent on R&D and the interest rate. With *g* = 0.018, *s* = 0.025, and *r* = 0.05, this gives us a benefits to cost ratio of 14.4. Every dollar spent on R&D gets transformed into $14.40!
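As a sanity check, the ratio is a one-line calculation. Here is a minimal Python sketch using the US averages quoted above:

```python
def benefit_cost_ratio(g: float, s: float, r: float) -> float:
    """Jones-Summers benefit-cost ratio: the long-run growth rate
    divided by (R&D share of GDP times the interest rate)."""
    return g / (s * r)

g = 0.018  # long-run US growth in GDP per capita, 1953-2019
s = 0.025  # share of GDP spent on R&D
r = 0.05   # interest rate used to discount future dollars

print(round(benefit_cost_ratio(g, s, r), 1))  # 14.4
```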

One thing I really like about this result is that you do not need any advanced math to derive it. It’s just a consequence of algebra and the proposed model of how growth and R&D are linked. In the video below, I show how to get this result without using any math more advanced than algebra.

Can that really be all there is to it? Well, no. If we look more critically at the assumptions that went into generating this number, we can get different benefit-cost ratios. But the core result of Jones and Summers is not any exact number. It’s that whatever number you believe is most accurate, it’s much more than 1. R&D is a reliable money-printing machine: you put in a dollar, and you get back out much more than a dollar.

But let’s turn now to some objections to the simple argument I’ve made so far.

# Is there really no growth without R&D?

Starting at the beginning, we might question the assumption that R&D resources are really the only way to get improvements in per capita living standards. If that’s wrong, and growth can happen without R&D, then our thought experiment would be over-estimating the returns to R&D, since growth wouldn’t actually go to zero if we (hypothetically) stopped all R&D for that year.

There are two ways we could get growth without doing R&D. First, it may be that we can get new technologies without spending resources on R&D. Second, we could get growth without new technologies.

The latter case is basically excluded by assumption in economics, at least for countries operating at the technological frontier. In 1956, Robert Solow and Trevor Swan argued that countries cannot indefinitely increase their material living standards by investing in more and more capital. That’s because the returns to investment drop as you run out of useful things to build, until you reach the point where the returns to investment are offset by the cost of upkeep. To keep growth going, you need to discover new useful things to build. You need new technology.

On the other hand, the first objection - that we may be able to get new technologies without spending resources on R&D - has more going for it. For instance, a common understanding of innovation is that it’s about flashes of insight, serendipity, and ideas that come to you in the shower. Good ideas sometimes just come to us without being sought.

The trouble with this notion of innovation is that in almost all cases, the free idea is only part of the story. It might provide a roadmap, but there is still a long journey from the idea to the execution, and that journey typically requires resources to be expended. In terms of our thought experiment, if it still takes R&D to translate an unplanned inspiration into growth, then we are actually measuring the returns to R&D correctly. If we turned off R&D, those insights wouldn’t get realized, and so growth would freeze until we began R&D again.

But maybe that’s not always the case. In The Secret of Our Success, Joseph Henrich gives a (fictional) example of how a package of hunting techniques could evolve over several generations without diverting any economic resources to innovation. In the example, proto-humans use sticks to fish termites out of a nest to eat, but one of them mistakenly believes the stick must be sharpened (their mother taught them the technique with a stick that happened to be sharp). One day, they accidentally plunge their sharp stick into an abandoned termite mound and impale a rodent - they have “invented” a spear. The proto-humans start using the sticks to impale prey. A generation later, another proto-human sees rabbits leaving tracks in the mud and going back into their hole; he realizes he can follow tracks to the hole and use the spear, instead of just hoping he sees an animal. Bit by bit, cumulative cultural evolution can happen, leading to a steadily more technologically sophisticated society.

These kinds of processes still happen today. In learning-by-doing models of innovation, firms get more productive as they gain experience in a production process. The process by which this happens is likely another form of evolution, with workers and managers tinkering with their process and selectively retaining the changes that improve productivity. We could call this kind of tinkering R&D if we wanted, but it’s almost certainly not part of the national statistics.

But here’s the rub. With modern learning-by-doing, we typically think of firms and workers finding efficiencies and productivity hacks in production processes that are novel. And where do new and unfamiliar production processes come from? In the modern world, typically they are the result of purposeful R&D. If that’s the case, then in the long run we are once again accounting correctly for the costs and benefits of R&D. In this case, turning off R&D for one year would delay by one year the creation of new production processes that would then experience rapid learning-by-doing gains in subsequent years.

Of course, there still might be learning-by-doing with older technologies. But learning-by-doing models typically assume progress is very, very slow in mature technologies because there are not many beneficial tweaks left to discover. The process has already been optimized.

That’s pretty consistent with what we know about growth in the era before much purposeful R&D. Tinkering and cumulative cultural evolution are probably the right model for innovation before the industrial revolution, and as best as we can tell, growth during that era was painfully slow. Nearly zero by today’s standards.

All that said, if you still believe growth can happen without R&D, then you can still use Jones and Summers’ approach to compute the benefits of R&D and adjust the estimate to take all this into account. It’s just that now you need to take only the fraction of growth that comes from R&D as your benefit. I have argued that almost all long-run growth comes from R&D. But if you think it’s just 50%, then that would cut the benefit-cost ratio of R&D in half - to a still very high 7.2.

# What about other costs?

A second objection to our initial estimate of the returns to R&D makes the opposite point: R&D is not costlessly translated into growth. New ideas must be laboriously spun out into new products and infrastructure that are then disseminated across the economy, before growth benefits are realized. Focusing exclusively on the R&D costs overstates the returns to R&D by understating the full costs of getting growth.

Take the covid-19 vaccine as an example. Pfizer has said the R&D costs of developing the vaccine were nearly $1bn. But once Pfizer had an FDA-approved vaccine, the benefits were not instantly realized by society. Instead, the shots needed to be manufactured and put into arms, and the cost of building that manufacturing capacity ought to be counted as part of the cost of deriving a benefit from the R&D.

We don’t know exactly how much the US spends on “embodying” newly discovered ideas in physical form so that they can affect growth. But we do know that since 1960, the total US private sector investment in new capital (not merely upkeep or replacement of existing capital) has been about 4.0% of GDP per year. Not all of that is the upgrading of capital to incorporate new ideas. Some of it is just extending existing forms of capital over a growing population (think building new houses). But it’s a plausible upper bound on how much we spend turning ideas into tangible things.

If we add the 4.0% of GDP spent annually on net investment to the 2.5% spent explicitly on R&D, we get a revised estimate that the US spends 6.5% of GDP per year on creating and building new technologies. If we return to our original estimate for the benefit-cost ratio of R&D, but use *s* = 0.065 instead of *s* = 0.025, we get that the benefit-cost ratio is 5.5. Every dollar spent on R&D still generates $5.50 in value!
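The adjustment is just the same formula with a bigger cost share. A quick sketch, using the same illustrative numbers as above:

```python
g, r = 0.018, 0.05
s_total = 0.025 + 0.040  # R&D share plus net investment share of GDP

ratio = g / (s_total * r)
print(round(ratio, 1))  # 5.5
```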

# Does R&D Instantly Impact Growth?

OK, so it’s important to count costs correctly. By the same token, we may believe the benefits of R&D are overstated. The simple framework I laid out above assumed if you pause R&D, you pause growth at the same time. Clearly that’s incorrect.

In reality, R&D is not instantly translated into growth. About 17% of US R&D is spent on basic research - that is, science that is not necessarily directed towards any specific technological application. As I’ve argued before, this kind of investment does eventually lead to technological innovation, but it takes time: twenty years is not a bad estimate of how long it takes to go from science to technology.

Invested at 5% annually, $1 today is worth $2.65 in twenty years. Alternatively, $1 received in twenty years is only worth $0.38 today (since you can invest the $0.38 at 5% per year and end up back with $1 in twenty years). The implication is that benefits arriving in the more distant future should be discounted more heavily in our accounting framework. For example, if we believed spending R&D resources today only had an impact on growth in 20 years, then we would want to discount our estimate of the benefits to 38% of the level we came up with when we naively assumed the benefits of R&D arrived instantly. That would imply a benefit-cost ratio of 5.5, as compared to the 14.4 we initially computed.

But that’s surely a big over-estimate, since only 17% of R&D is spent on basic science. The other 83% is spent on applied science and development, both of which have much shorter time horizons. Just to illustrate, let me assume 17% of R&D has a 20-year time horizon (38% discount), 33% of R&D has a 10-year horizon (61% discount), and the remaining 50% of R&D has a 5-year horizon (78% discount). In that case, the average discount we should apply, due to the fact that R&D is not instantly translated into growth, is 66%. That implies a benefit-cost ratio of 9.5, as compared to the original 14.4. Again - the point is not any specific number. Just that under a lot of sensible assumptions, the return is a lot more than 1!
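These discount factors and their weighted average are easy to verify. In this sketch, the 17% basic-research share and 20-year lag come from the text; the 10- and 5-year horizons for applied research and development are my illustrative assumptions:

```python
def discount(years: float, r: float = 0.05) -> float:
    """Present value of $1 received `years` from now at interest rate r."""
    return 1 / (1 + r) ** years

# (share of R&D, years until it affects growth) - illustrative split
horizons = [(0.17, 20), (0.33, 10), (0.50, 5)]

avg_discount = sum(share * discount(lag) for share, lag in horizons)
print(round(avg_discount, 2))         # 0.66
print(round(14.4 * avg_discount, 1))  # 9.5
```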

# What about other benefits?

So far we have looked at some ways in which the benefit-cost ratio is over-estimated. Of these, I think the argument that we should include investment as part of the cost of getting a benefit from R&D is a good one, as well as the argument that we should discount the benefits by time since they don’t arrive instantly. Combining those adjustments gives us a benefit-cost ratio on the order of 3.6 (i.e., 0.66 × 0.018 / (0.065 × 0.05)). Every dollar spent on R&D + investment gets us at least $3.60 in value!
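Putting the two adjustments together reproduces that headline number (same illustrative lag assumptions as before):

```python
g, r = 0.018, 0.05
s_full = 0.065  # R&D (2.5% of GDP) plus net investment (4.0%)
# weighted-average discount for delayed benefits, using the
# illustrative 20-, 10-, and 5-year horizons
avg_discount = 0.17 / 1.05**20 + 0.33 / 1.05**10 + 0.50 / 1.05**5

print(round(avg_discount * g / (s_full * r), 1))  # 3.6
```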

But we also have plenty of reasons why we could argue it is inappropriate to simply use GDP per capita as our measure of the benefits of R&D. There are many benefits from R&D that may not show up in the GDP numbers: reduced carbon emissions from alternative energy sources and greater fuel efficiency; the reduction in work hours that more productive technology has allowed us to realize over the last century; the increased value of leisure time due to the internet; the years of life saved by the covid-19 vaccine; indeed, the years of life saved by biomedical innovation overall.

Jones and Summers take a stab at an estimate for the benefits of biomedical innovation that do not show up in GDP. Biomedical innovation is probably the single largest sector of our innovation system: probably 20-30% of total R&D spending. One way to try and get at the non-GDP benefits of this biomedical innovation is to estimate the value people place on longer lives using things like their spending to reduce their risk of death. I’m not sure how much confidence we want to put in those numbers, but Jones and Summers estimate that a range of reasonable estimates would lead us to increase the estimated benefits of R&D by 20-140%. Taking my tentatively favored benefit-cost ratio of 3.6 as our starting line, scaling up the benefits by 20-140% gets us a range of 4.3-8.6.

Estimating the general non-GDP benefits of innovation beyond biomedical innovation is probably an inherently subjective task. But here’s one attempt at a thought experiment to get a sense of how much value you get out of innovations that isn’t reflected in GDP. Suppose there was a magical genie (such things happen in thought experiments) who offered to set you on one of two parallel timelines.

The first is our own timeline, where innovation will happen the same as it has been for a century, and GDP per capita growth will continue to be 1.8% per year. The second timeline is a weird one where technology is frozen at our current level, but (magically) everyone gets richer at a rate of 2.25% per year, so long as society keeps spending on R&D as before - 25% faster than in our current timeline. That is, in the second timeline, you get a bit more money, but you don’t have access to new products and services that innovation would bring. If we were to compute the benefit-cost ratio of R&D in that second world, it would be 25% higher than in our timeline, since growth is 25% faster.

If GDP per capita is a good metric of the value of innovation, you’ll clearly choose the second timeline. But if you pick the first one, it means you value access to the newly invented technologies at a level that is at least 25% above their measured impact on GDP per capita.

It’s kind of hard to think what choice you would actually make in this scenario, since choosing between different growth rates is a very foreign decision to most of us. So consider an alternative formulation where the genie offers you the following choice:

1. A cash payment (right now) equal to 20% of your current income, plus the opportunity to purchase products and services developed between now and 2031
2. A cash payment (right now) equal to 25% of your current income

Which do you choose?

The first choice is basically where you can expect to be in the year 2031; 1.8% growth compounded over 10 years means you’ll have about 20% more income. And in the year 2031, you’ll also have access to all technologies invented between now and then. The second choice gives you a payment that is 25% larger, but no access to the non-monetary benefits of innovation - just the cash. Again, if you pick the first choice, you are saying GDP per capita undervalues the benefits of innovation over the next decade by at least 25%. And so you should scale up your assessment of the benefit-cost ratio of R&D by 25%.
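That 20% figure is just 1.8% growth compounded for a decade; a quick check shows it comes to roughly 20%:

```python
# Cumulative income growth from compounding 1.8% annual growth for 10 years
growth_over_decade = 1.018 ** 10 - 1
print(f"{growth_over_decade:.1%}")  # 19.5%
```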

What if option #2 were a payment equal to 30% of your current income? If you would still prefer option #1 in that case, then you think GDP per capita undervalues the benefits of innovation by 50%. And so you should scale up your assessment of the benefit-cost ratio of R&D by 50%. And so on.

# What if the average doesn’t matter?

All told, Jones and Summers’ thought experiment essentially argues that R&D is a money-printing machine. Ignoring benefits that don’t accrue to GDP, every dollar you put into your R&D machine gets you back more than $3.60 in value. Possibly much more. So why don’t we use this money-printing machine much more? Why are we only spending $2.50 out of every $100 on R&D?

There are two main reasons. First, the value created by R&D is distributed widely throughout society and does not primarily accrue to the R&D funder. If I put $1 of my own cash into the R&D machine, I’m not getting back $3.60. Very likely, I’ll get back less than the dollar I put in. The private sector funds about 70% of US R&D, and for them the average social return on R&D doesn’t really matter. What matters is the private return that the firm will receive.

But that doesn’t account for why the US government doesn’t spend more on R&D. Presumably, it should care about the social return. One obvious possibility is that decision-makers in government face incentives that don’t reward R&D spending. Maybe election cycles are too short for any politician to get credit for funding more R&D; maybe the R&D funded by government gets implemented by businesses who get all the credit; maybe government is just skeptical of academic theory. I don’t know!

But another possibility is that it’s a problem related to knowledge. The *average* return to R&D must be quite high if we buy the argument just made. But that doesn’t mean the next dollar we spend will earn the average return. Maybe we funded the best R&D ideas first, and every additional dollar is spent on a successively less promising R&D project. Maybe the supply of talented scientists and inventors is already maxed out. If we want to argue that R&D should be increased, we want to know the *marginal* return to R&D; that is, how much extra GDP we will get if we spend another dollar on R&D, given what we’re already spending. As I said at the outset of this post, that’s a much harder question to answer. But there are some attempts to answer it, and they also find a quite high rate of return. We’ll look at those next week.
