Guest post by Jim Manzi, founder and Chairman of Applied Predictive Technologies, and the author of Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics and Society.
I often criticize social scientists for making overly aggressive claims to understanding causality in complex systems by building regression and other pattern-finding models. This is not evidence of some unique weakness of social scientists. The same thing happens in business all the time, but business analyses tend not to be published, for obvious reasons. A good example of one that has been published appears in the current Harvard Business Review. This matters a lot, because HBR holds a unique position as the most important serious business publication in America.
Anne Marie Knott, professor of strategy at Washington University's Olin Business School, has written an article called "The Trillion-Dollar R&D Fix." The article proposes a new measurement of R&D effectiveness: RQ. In her words, RQ is a measure of "how effective your company is at R&D."
What is so striking to me about this article is how unvarnished Knott is in claiming that she has discovered a tool to do exactly what I say is so hard: make useful, reliable and non-obvious predictions for the effect of interventions in social systems. She writes that "Using standard regression analysis, the calculation tells us in a very precise way how productive each of the inputs is in generating output. It tells us, for instance, how much a 1% increase in R&D spending would increase a firm's revenue." Knott asserts that RQ allows the management of a company "to see how changes in your R&D expenditure affect the bottom line and, most important, your company's market value." She even names names, providing a table of what she thinks each of the top 20 public corporations in America should have spent on R&D, and how much more each would be worth if they followed her recommendations.
For example, Knott claims that she knows that Apple would have maximized its market value by spending $9.5 billion on R&D in 2010. They actually spent $1.8 billion. That's a fairly incredible claim. She thinks that what is generally conceded to be a management team that is pretty savvy about innovation underspent on R&D by more than 400 percent -- Apple ought to have quintupled its R&D spending in 2010. As another example, Knott claims that Dow Chemical could have roughly doubled its total market capitalization by increasing its R&D spending by 10 percent. That's a lot of money for them to leave on the table. And a very easy fix.
Knott claims that if just the top 20 American corporations had followed her recommendations, they would have collectively increased their market capitalization by more than $1 trillion. Consider this assertion for a moment. The current total market capitalization of the top 20 U.S. public companies is a little over $4 trillion. Knott claims that she has outsmarted the entire system of management teams, investors, equity analysts, hedge funds, large-scale private equity firms and everyone else who is trying to change management practices to increase share price, and knows how to increase the total value of the most-closely followed companies in the world by almost 25 percent ... by building a regression model using publicly-available data.
If you could rely on Knott's predictions, you could raise capital, buy these companies, change R&D spending in line with her model, and then sell them again at an enormous profit. You could start with Dow, because you know how to double its share price.
Maybe Knott has discovered an incredible, remediable market inefficiency, and somebody is about to get very, very rich. Or maybe there's a problem with her model.
The HBR article describes the calculation of RQ conceptually, and references a journal article (ungated) in which Knott describes the mechanics of it. In it, effectiveness in managing R&D is explicitly analogized to IQ; RQ refers to what businesspeople would normally call a competence for managing the R&D function. Knott contrasts her theory with an existing body of research on the topic:
Theories of innovation typically assume that firm R&D behavior is endogenously determined by industry conditions. If all firms in an industry share these conditions, and behave optimally then in equilibrium all firms should have identical R&D investment. Accordingly increases in R&D beyond the optimum should decrease market value. However the empirical record consistently demonstrates the opposite. Firms with higher R&D investment have higher market value.
We proposed that the inconsistency between theory and empirics stems from the assumption of homogenous firms. If instead firms have heterogeneous R&D elasticities (IQ), then a) the optimal levels of investment will differ across firms (firms with higher IQ invest more), and b) the market value per dollar of investment will differ across firms (firms with higher IQ have higher value per R&D dollar). This gives the empirical finding of increasing market value for increased R&D spending theoretical grounding: It is not that investing more in R&D increases market value, it is that higher IQ yields both higher returns (and market value) and therefore stimulates greater investment.
I'm sure she's correct that there are a lot of academic analyses that assume all firms have equal competence in R&D. Such studies may have some utility for some purposes. But the idea that competence in research management varies across firms is a belief that is universally held among relevant senior executives. I mean this literally. I doubt you could find three COO / CEO level executives among all large public U.S. companies who spend significant amounts on research that disagree.
So the non-obvious claim is that she has built a model which quantifies this effect with sufficient precision to reliably change the decision about how large R&D budgets ought to be, as compared to current management practices.
Knott uses a dataset of annual data for 610 publicly-traded American companies from 1981 to 2006. The primary data elements by company are annual numbers for: market value, revenues, Property, Plant and Equipment (PPE), number of employees, advertising spending, and R&D spending. (This was merged with data on patents by company in order to build a separate model.) From this primary data, Knott builds a time-series regression equation to estimate the causal effects of R&D spending.
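To make the setup concrete, here is a minimal sketch of the kind of log-linear (Cobb-Douglas-style) production-function regression this describes, in which log revenue is regressed on log inputs and the coefficient on R&D is read as an elasticity. The data, variable names, and coefficient values below are all invented for illustration; this is an assumption about the general form of such models, not a reproduction of Knott's actual specification.

```python
import numpy as np

# Simulated firm-year data (all numbers are made up for illustration).
rng = np.random.default_rng(0)
n = 5000  # firm-year observations

log_ppe = rng.normal(6.0, 1.0, n)  # log Property, Plant and Equipment
log_emp = rng.normal(4.0, 1.0, n)  # log employees
log_adv = rng.normal(3.0, 1.0, n)  # log advertising spend
log_rd  = rng.normal(3.5, 1.0, n)  # log R&D spend

# "True" elasticities used to generate revenue in this toy world.
log_rev = (1.0 + 0.35 * log_ppe + 0.30 * log_emp
           + 0.05 * log_adv + 0.15 * log_rd
           + rng.normal(0.0, 0.5, n))

# Ordinary least squares: regress log revenue on the log inputs.
X = np.column_stack([np.ones(n), log_ppe, log_emp, log_adv, log_rd])
beta, *_ = np.linalg.lstsq(X, log_rev, rcond=None)

# The R&D coefficient is read as "a 1% rise in R&D spending yields a
# beta[4]% rise in revenue" -- exactly the kind of claim quoted above.
rd_elasticity = beta[4]
print(round(rd_elasticity, 2))
```

In this clean simulated world, with no feedback or omitted variables, OLS recovers the true elasticity. The argument that follows is about why real firm data do not look like this.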
The problems with this should be apparent.
I'm confident that exactly the effect Knott describes is real. Some companies are better at managing research, and they will spend more, all else equal, and create greater returns for it.
But causality also runs in the opposite direction. For example, when management teams rationally foresee a good year coming, they tend to relax spending discipline. So we will see R&D spending go up in year X, and profits rising in year X+1. The expectation of future profits causes R&D spending to rise today. This effect is unobserved in Knott's model, because we have no data on executive anticipations. Some other variables will proxy for it, but the correlation between the actual unobserved variable and the proxy won't be close to 1.
Further, there are many confounding variables. For example, different firms that are called competitors will actually face different landscapes of potential R&D opportunities, independent of R&D effectiveness. IBM and HP, as examples, have different mixes of end-use markets, different customer bases, different installed technologies and so on that mean that each is looking at a different list of potential relevant projects when deciding what to fund. This changes over time. Who were Apple's competitors in 1995? 2000? 2010? Who will be their competitors in 2015? This is referenced conceptually in Knott's paper, but how do we segregate this effect from RQ and everything else when the model has no data on it?
As another example, firms have different "general management IQs," to use Knott's language. We could be cute and call this MQ. This will lead some firms to modify their RQ over time, and to better perceive potential opportunities than others, independent of effectiveness in going after them. This will also have some consistency and some changes over time, and the model is blind to it.
And yet further, there are also relevant interactions between each of these example variables. For example, higher MQ management teams will tend, all else equal, to get firms into a position with a better portfolio of possible R&D investments, but will also tend to lead them to exert more consistent cost discipline in good and bad years. These will tend to be persistent effects, but high MQ teams will also likely react better to changing external circumstances. MQ will correlate with RQ, but the correlation will be materially greater than 0 and materially below 1.
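The MQ confounding argument can be sketched the same way. Suppose an unobserved general management competence raises both market value directly and the R&D budget. Omitting it then inflates the estimated payoff to R&D spending well above the true effect. As before, every quantity below is invented purely to illustrate the mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Unobserved general management competence ("MQ").
mq = rng.normal(0.0, 1.0, n)

# Better-run firms spend more on R&D...
rd = 1.0 + 0.6 * mq + rng.normal(0.0, 0.5, n)

# ...and also have higher market value for reasons unrelated to R&D.
true_rd_effect = 0.2
value = 5.0 + true_rd_effect * rd + 1.0 * mq + rng.normal(0.0, 0.5, n)

# Regression of value on R&D alone, with MQ omitted.
X = np.column_stack([np.ones(n), rd])
beta, *_ = np.linalg.lstsq(X, value, rcond=None)
estimated_rd_effect = beta[1]  # inflated well above true_rd_effect
print(round(estimated_rd_effect, 2), "vs true", true_rd_effect)
```

Acting on the inflated coefficient -- "spend more on R&D to raise market value" -- would not deliver the predicted gain, because much of the measured association belongs to MQ, not to R&D spending.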
Knott's model considers none of this. I have written extensively, and more technically, here at The Atlantic about why attempts to use methods like those Knott employs (e.g., two-step Instrumental Variable models) to try to isolate the causal impact of variable X on corporate performance can't perform the magic of overcoming the problem that so much of the relevant data is never included. I summarized the conclusion as:
There's just no way out of the problem that what makes companies do well or badly is very, very complicated, and therefore isolating the impact of any one variable by lining up some descriptors for a few hundred companies and looking for patterns is like trying to grab liquid mercury.
Think of what Knott's advice means in practice. We would sit down with Apple's management team and say that they should quintuple their R&D spend. To avoid getting laughed out of the room, we would actually say "OK, we think there is high unexploited opportunity for R&D spend, so bump it up 10 percent." What this boils down to is some combination of taking the prioritized list of projects that have been considered and moving the "green light" line down further, and of rethinking our prioritization scheme somewhat, so as to increase spend by 10 percent. Presumably, afterward we would want to go back and try to evaluate whether these extra projects we therefore funded actually created market value. This is usually tricky, and requires judgment, but for many projects, we can evaluate the process cost reductions, or number of units the new product line sold at what margins, or whatever.
But all good executive teams do this already. They constantly attempt to evaluate how wide the choke on the R&D budget ought to be set by looking at actual performance after the fact.
If the advice is "in general, companies ought to try out getting a little looser with the R&D budget and see how it works," fair enough, but that's nothing new. If the advice is "set your R&D budget each year using this formula, and don't draw any subsequent conclusions based on the actual performance of the extra projects you funded," then it's not really very useful.