
Varia

The Price of Virtue: Some Hypotheses on How Tractability Has Shaped Economic Models

Le prix de la vertu : quelques pistes sur la manière dont la tractabilité (tractability) a façonné les modèles économiques
Béatrice Cherrier
p. 23-48

Abstract

This article seeks to convince historians to examine how “tractability” has shaped individual and collective modeling choices in economics. To do so, I survey the literature on the subject in economic methodology, which grew out of attempts to understand why economists make unrealistic assumptions. I then analyze the rare instances in which 20th-century economists explicitly discussed the problem of tractability. I conclude that the collective dimension of the dynamics through which economists make their models manipulable and solvable is an important and understudied element of economic analysis. Modeling choices made in response to tractability imperatives are often meant to be temporary and ad hoc, but they nonetheless become collective conventions, creating “tractability traps.” I also consider the existence of “tractability standards” that differ across periods and specialties. Finally, I suggest distinguishing three types of tractability: theoretical, empirical, and computational.


Full text

What seems terribly hard for many economists to accept is that all our models involve silly assumptions … they seem to help us produce models that are helpful metaphors for things that we think happen in the real world … all of these features were virtues, not vices. (Krugman, 1993, 28)

the principal virtue of those tools is the gain in analytic tractability and logical coherence that has been obtained precisely by abstracting from all that diversity and change … the question is, how high is the price? (Nelson and Winter, 1974, 903)

  • 1 For economists’ statements on realism in assumptions, see Friedman (1953), Gibbard and Varian (1978 (...)
  • 2 Gabaix and Laibson (2008) consider tractability as one of the “key properties of useful models,” al (...)

1How do economists make modeling choices? In this paper, I argue that answering this question requires looking beyond those assumptions and properties of models that economists have deemed critical, and that philosophers and historians have dissected: profit or welfare maximization, stable preferences, (im)perfect information, rational expectations; equilibrium unicity, convergence, or identification.1 We should also consider those model characteristics that economists discuss only briefly, or not at all, because they view them as unsatisfactory, frustrating, temporary, mundane, unimportant, or conventional. In this article, I thus propose to investigate the choices that economists make for the sake of “tractability.” The term appears frequently, even in article titles, but its definition is seldom discussed. It is ubiquitous, yet invisible. Some article titles advertise “A Tractable Model of Reciprocity and Fairness” (Cox et al., 2007), “Functional Forms for Tractable Economic Models” (Fabinger and Weyl, 2018), “A Tractable Equilibrium Search Model of Wage Dynamics” (Bagger et al., 2014) or “Tractable Likelihood-Based Estimation of Non-Linear DSGE Models” (Kollmann, 2017). Public economist Stefania Stantcheva (2020, 2) praises the sufficient statistics approach in taxation economics for being “very tractable and empirically applicable.” “Tractable” is also a term almost systematically associated with famous “workhorse” models, such as the Dixit-Stiglitz model (Brakman and Heijdra, 2004) or the iceberg transport cost model (Krugman, 1995). It is used as a casual synonym for expedience, convenience, simplicity, clarity, solvability, or manipulability, but without a clear definition or justification.
It is considered sometimes a vice, sometimes a virtue of models.2 It evokes a shared meaning: sometimes a characteristic, a property, a requirement, or a constraint of a model; it is often accompanied by, or interpreted as, an instruction to gloss over certain assumptions that are considered epistemically innocuous. Tractability characterizes whole models, but it is usually attached to specific assumptions: the number of variables in a model, a representative agent, a single firm, full information, zero profit, or specific functional forms (linearity, homogeneity, quadratic form, separability, convexity, symmetry).

  • 3 Economists have used various synonyms, as listed above, to discuss the manipulability and solvabili (...)

2The term “tractability” is pervasive enough to prompt the question of how to define it and how it has shaped individual and collective modeling choices in economics. While this paper does not provide a definitive answer, it aims to encourage historians of economics to explore this question, as some methodologists have done in the past fifteen years. To do so, I first survey the economic methodology literature on tractability, one that grew out of methodologists’ attempts to understand why economists make unrealistic assumptions. I then compare these accounts with the few instances where 20th-century economists explicitly discussed this notion, and show that such historical investigation, despite its limitations, sheds light on features of tractability that require further exploration. Rather than provide a consistent definition of tractability, I trace the changing and ambiguous ways in which economists have used the term, so as to outline research questions for historians.3 I loosely define tractability as a property of an economic model that makes it easy to manipulate and solve. I argue that documenting the collective dynamics at work when tractability motives are invoked would help explain the endurance or displacement of specific types of models. I suggest that economists moved from viewing (in)tractability as a constraint of both the world and the model to seeing it as a virtue of economic models. But as even those economists who have made tractability a virtue have pointed out, modeling strategies meant to be transitory sometimes become conventional and hinder change, progress, and diversity. This highlights the existence of “tractability traps,” in need of further investigation. I explain how distinguishing between theoretical, empirical, and computational tractability makes it possible to ask why some manipulations and types of solutions are considered valid while others remain on the margins of the discipline. This leads me to consider the existence of “tractability standards” that differ across time and fields.

1. Methodologists on Tractability

3The interest of methodologists in tractability is a relatively recent development stemming from their ongoing efforts to dissect the process of economic modeling (see for instance Morgan and Knuuttila, 2012 or Mäki, 2009). In her 2012 analysis of influential models constructed since the late 19th century, Mary Morgan outlined several key characteristics of useful models: manipulability and workability (which could be viewed as synonymous with tractability), non-triviality (a model should bring new insights), and communality/typicality (a model should be representative of a class of phenomena). However, Morgan chose not to create an overarching historical narrative or philosophical theory of economic modeling. She thus did not focus on any one of these characteristics as a subject of frustration, collective discussion, or transformation among economists. Till Grüne-Yanoff (2008, 1-2) proposed framing the flourishing literature on modeling as a debate between the isolationists, “who regard modelling as a way to isolate causal factors or capacities of the real world,” and the fictionalists, who “regard models as parallel fictional worlds, populated by fictional agents.” Such a typology highlights the close relationship between modeling, abstraction, and realism. In response to Milton Friedman’s (1953) famous methodological essay, methodologists have proposed various explanations for why economists make unrealistic assumptions. This led Alan Musgrave (1981) to propose a typology of assumptions which included “domain assumptions” (they specify the domain of applicability of a theory), “heuristic assumptions” (they simplify the logical development of a theory), and “negligibility assumptions” (they state which factors can be overlooked in studying a phenomenon).

4Further research has refined typologies of both critical and auxiliary assumptions. Uskali Mäki (2000) proposed replacing Musgrave’s “heuristic assumptions” with “early-step assumptions” (for instance, “assume that the budget is balanced”). The term conveys the idea that such assumptions are transitory, meant to be relaxed at some point. In her study of game theory and auctions, Anna Alexandrova (2006) distinguished between “situation definers” (like the number of players or the rules of the game) and “derivation facilitators” (twice-differentiable utility functions, continuously distributed probabilities, etc.). Though not devoid of empirical content, the latter are introduced not to describe a situation, but to solve the model.

  • 4 Hall needed to ensure that the correlation of productivity growth with aggregate demand could be co (...)

5It was however Frank Hindriks (2006) who offered a comprehensive case for the inclusion of “tractability assumptions” in methodologists’ classifications. He defined a tractability assumption as “a statement about the tractability of a problem ... [which] would be intractable if it were not for a particular first-order assumption” (Hindriks, 2006, 412). Developments in science, for instance mathematical or econometric techniques, new data, or new instruments, help turn problems previously deemed intractable into tractable ones, he explained. His primary case study examines how economists make tractability assumptions to measure some of the variables in their models. By focusing on measurement, he departed from hitherto more theory-based discussions of assumptions. Hindriks (2005) documented how macroeconomist Robert Hall assumed constant productivity growth when he set out to explain why total factor productivity appeared procyclical in the 1980s. Hall clarified that the Solow residual captured not only productivity, but also the effects of firms’ increasing returns and market power. Demonstrating this required measuring the mark-up of price over marginal cost, which Hall assumed was a component of the Solow residual. This, in turn, required assuming constant productivity growth, despite opposing empirical evidence.4

6Hindriks therefore argued that theoretical and empirical tractability should be distinguished from one another. Building on this work, Mäki (2009, 83) warned that tractability assumptions might become more than “auxiliaries” and “override important ontological considerations,” so that “the values of formal rigor take over in shaping the focus and strategies of research.” He cited constant returns to scale and perfect markets as examples. Jaakko Kuorikoski, Aki Lehtinen, and Caterina Marchionni (2010) proposed that examining the consistency of results across several models that share a common structure yet differ in tractability assumptions can be seen as a type of robustness analysis: if results across models with different tractability assumptions are identical, then they are not spuriously driven by those assumptions. Drawing on her core-periphery model case study, Chiara Lisciandra (2017) however cast doubt on the feasibility of such analysis. She pointed out that the assumption of convexity of the iceberg cost function is one that accommodates increasing returns to scale, imperfect competition, and constant elasticity of substitution. It thus seems impossible to relax this specific tractability assumption without changing the whole structure of the model. Overall, methodologists have defined tractability assumptions as those auxiliary, transitory, and sometimes unrealistic assumptions that economists make to solve their theoretical models. Some have also pointed to the need to measure key economic variables by abstracting from a complex world. These definitions of tractability, however, do not take into account possible transformations in the term’s meaning across the 20th century. Yet a quick survey of economists’ few discussions of tractability shows that the notion has evolved from characterizing a response to the complexities of the world to being a virtue of a model.

2. Economists on Tractability: From Vice to Virtue

  • 5 I am not asserting that Robinson was the first economist to discuss tractability. For instance, Jea (...)

7Ironically, one of the earliest and most forthcoming proponents of writing “tractable” models, Joan Robinson,5 later recanted her advocacy. As a young theorist completing her book on imperfect competition (Robinson, 1933), she penned a 15-page essay to defend her formal approach to firm behavior and markets. Reflecting on the methodological controversies she observed at Cambridge and throughout Europe, she elucidated how theorists all faced a tradeoff between the tractability and the realism of their assumptions:

the two questions to be asked of a set of assumptions in economics are these: are they tractable? and: do they correspond to the real world? The first question can only be answered by the application of analytical technique to the assumptions. Some sets of assumptions are too complicated to be manageable by the technique which is now at our disposal. But a set of assumptions that is manageable is likely to be unreal. (Robinson, 1932, 6)

  • 6 “If the pessimistic economist prefers sitting at the apex of a pyramid of completely self-consisten (...)

8The choice between the “manageable” and the “realistic” set of assumptions was one of “temperament … not one of opinion,” she continued: “the optimistic, analytical, English, economist will choose the manageable set, and the pessimistic, methodological, Continental economist will choose the realistic set” (Robinson, 1932, 6-7). Counting herself as an optimistic analytical economist favoring tractable assumptions, she expressed her hope to move from a two-dimensional to an n-dimensional technique in the future.6

  • 7 For histories of the Cambridge and British intellectual milieu and controversies of the 1930s and 1 (...)

9In her introduction, Robinson (1933, 1) again acknowledged the “arid” character of her analytical approach, one that would undoubtedly fuel “the impatience of the politician, the businessman and the statistical investigator.” She contemplated the “agonizing sense of shame” of the “analytical economist” confronted with a “practical man” (Robinson, 1932, 2). She again apologized for the “adoption of very severe simplifying assumptions” and explained that the book was “presented to the analytical economist as a box of tools,” in need of being perfected up to the point of becoming useful to the practical man (Robinson, 1932, 2). The adoption of more “tractable” yet less “realistic” assumptions associated with Cambridge was criticized by Australian economist Ronald Walker (1943, 58) in his book From Economic Theory to Policy: “It is not that the complexity of realistic assumptions makes them intractable but that they are in conflict with the assumptions used in some of the established theories and, if admitted to the economists’ system, must destroy these theories,” he wrote, before condemning what he saw as a growing “theoretical blight.” A reviewer interpreted the target of Walker’s ire as “the great Cambridge tradition” (Spiegel, 1945, 56). They both defined “tractable” in relation to the available “analytical techniques.”7

10Ronald Coase (1937) likewise interpreted the new Cambridge tradition spearheaded by Robinson’s theory of imperfect competition as shifting the balance towards tractability over realism. He opened his famous article “The Nature of the Firm” by quoting Robinson’s 1932 statement on the tractable/realistic dichotomy, only to claim that he had developed a

  • 8 In a late series of article in which he reflected on the origins and meaning of his theory of the f (...)

definition of a firm … which is not only realistic in that it corresponds to what is meant by a firm in the real world, but is tractable by two of the most powerful instruments of economic analysis developed by Marshall, the idea of the margin and that of substitution (Coase, 1937, 386).8

  • 9 See Medema (1994) for a thorough discussion of Coase’s work and epistemology.

11Coase thus explicitly tied tractability (or manageability) to the available instruments of economic analysis.9 In the 1930s and 1940s, it was primarily those British economists who shared an interest in microeconomic analytical tools, in particular monopolistic competition, who casually discussed assumptions by using the words “(in)tractable” and “manageable” interchangeably: Oxford’s Roy Harrod (1934, 336) solved a duopoly problem with linear demand functions but noted that “if more particular demand functions are allowed, the mathematics becomes less tractable”; John Hicks (1936) challenged the existence of tractable monopoly assumptions, while Fritz Machlup (1940, 296) argued that “simple monopoly is still a tractable assumption within a general equilibrium system.”

  • 10 See Boianovsky (2020) for a detailed analysis of Samuelson’s use of Medawar’s epistemological writi (...)

12Over the following decades, while Robinson began to question her previous “scholastic” approach (Robinson, 1953), many economists repeated her defense of tractability assumptions, though in a less apologetic manner. In 1970, MIT theorist Paul Samuelson (1970, 1373) reflected on his own youthful impatience with “restrictive” assumptions, and on his lack of awareness of the truth in Nobel biologist Peter Medawar’s maxim that “science must deal with that which can be managed, eschewing the intractable.”10 He would use the term time and again, for instance in a 1986 edition of his collected papers, to remind the reader that the language of science requires providing a “prosaic, tractable, describable, testable and understandable model or paradigm” (Samuelson, [1982] 1986, 859). As for Robinson, while she clearly referred to modeling assumptions in some passages, she explained elsewhere that “economics is still in infancy, more because of the intractable nature of the subject than because of the low mental caliber of economists” (Robinson, 1932, 12). Robinson and Samuelson thus both ambiguously discussed (in)tractability as a characteristic of the natural and social world, and as a characteristic of the models themselves. The slow but steady rise in uses of the term “tractable” and the ambiguity surrounding its meaning are supported by anecdotal bibliometric evidence. A search for the terms “tractable,” “tractability,” or “intractable” among the “articles” classified as “economics” in the JSTOR database suggests that the terms were used around 40 times in the 1930s, 75 in the 1940s, and 150 in the 1950s. Most of these occurrences were adjectives, indicating that “tractability” had not yet stabilized as a property of economic models. Consistent with the ambiguities found in Robinson’s and Samuelson’s work, the terms tractable and intractable were sometimes used to discuss war mobilization, book printing, clothes stock reallocation (Rostow, 1942), “economic and social problems” (Menderhausen, 1949, 672), employment and investment (Austin Robinson, 1941, 481; see also Dowdell, 1940, 24), or “business cycle forces” (Bratt, 1953, 20).

  • 11 “Modeling trick” is a term that economists often use, whether casually (see for instance Stiglitz, (...)

13In the 1960s, Samuelson defended tractable models in the context of ongoing debates on the state of development economics. He criticized the voluminous, repetitive, and intractable nature of research published by Paul Rosenstein-Rodan, Arthur Lewis, or Hollis Chenery (Boianovsky, 2020). Comparing this with how Paul Krugman criticized the same institutionalist development literature twenty years later helps pinpoint the stabilization of what economists meant by tractability. Like Samuelson, Krugman (1995, chap. 1) complained about Albert Hirschman’s unwillingness to endorse “tightly specified models.” He assumed that Hirschman favored institutionalism out of impatience “with the narrowness and seeming silliness of the economics enterprise.” Krugman proceeded to defend those “silly assumptions,” in particular in his own field, trade (Krugman, 1993). He contrasted “big untrue assumptions—constant returns, perfect competition” with new trade theory models that “avoid these big lies but make many small ones along the way to keep matters tractable; the theorist can never forget the degree of falsification involved” (Krugman, 1994, 15). He likely had in mind his own iceberg transport cost model of trade, which treats transportation costs not as a separate variable, but as the melting of part of the quantity of goods transported, a modeling trick he took from Samuelson.11 Krugman insisted that he came to realize that these simplifying assumptions made for the sake of tractability “were virtues, not vices” and that they “added up to a program that could lead to years of productive research” (Krugman, 1993).
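The iceberg device can be stated in one line (the notation below is mine, added for illustration, not Krugman’s or Samuelson’s original formulation): instead of modeling a transport sector, assume that shipping a good “melts” part of it.

```latex
% Iceberg transport cost (illustrative notation):
% to deliver one unit of a good, \tau \geq 1 units must be shipped,
% as if part of the cargo melted in transit:
x_{\text{delivered}} = \frac{x_{\text{shipped}}}{\tau}, \qquad \tau \geq 1
% Equivalently, the delivered price is a constant multiple of the
% origin price, so no separate transport sector, transport prices,
% or extra market-clearing conditions need to be modeled:
p_{\text{delivered}} = \tau \, p_{\text{origin}}
```

The entire cost of distance thus collapses into a single parameter, which is precisely what keeps the resulting trade models analytically solvable.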

  • 12 I have presented the numbers in absolute terms rather than as a percentage of a growing body of art (...)
  • 13 See for instance Romer (1986, 664) on banking models or Holmstrom and Costa (1986) on tractable inf (...)

14The use of the term “tractability” in economics rose sharply from the 1970s, coinciding with the stabilization of the meaning Krugman emphasized: a model’s ability to be solved analytically. The number of JSTOR articles in “economics” that used the terms “tractable,” “intractable,” or “tractability” increased from around 300 in the 1960s to some 800 in the 1970s and more than 1,500 in the 1980s, with the term “tractability,” and more specifically “analytic(al) tractability,” experiencing a significant rise (from 0 in the 1960s to 9 in the 1970s, 49 in the 1980s, and 84 in the 1990s).12 By the end of the 1960s, John Gould (1968, 49) and Henry Grabowski (1970, 220), then two young industrial organization specialists, were among the few to discuss the “mathematical tractability” of cost functions and the “analytical (in)tractability” of alternative functional specifications of duopoly models. By the 1980s, discussing the tractability of functional forms had become commonplace.13 The growing use of the noun may indicate the stabilization of its understanding as a property of economic models. Indeed, uses of these terms to characterize the social world, rather than the model, gradually became marginal.

3. Economists on Tractability: From Virtue to Traps

  • 14 What economists have usually called “workhorse” models is different from what philosophers of scien (...)

15Krugman was part of a small group of economists associated with MIT who exhibited “a knack for making the kinds of simplifying assumptions that rendered his models tractable but not trivial” (Warsh, 2006, 205). Other members included Samuelson, Robert Solow, and Joseph Stiglitz, all of whom produced “benchmark” or “workhorse” models that were renowned for their “tractable” nature and applicability to various real-world problems.14 The term appears multiple times in a book edited by Steven Brakman and Ben J. Heijdra (2004) on the history and legacy of the monopolistic competition model published by Avinash Dixit and Stiglitz in 1977. This model aimed to analyze the equilibrium and welfare implications of the trade-off that firms face between consumers’ taste for diversity (an incentive to produce several goods) and the existence of economies of scale (an incentive to produce fewer goods in greater quantities). These phenomena introduced non-convexities in the mathematical analysis, making models of monopolistic competition difficult to solve. To simplify the analysis, Dixit and Stiglitz used various tricks. They modeled consumers’ taste for variety by using a Constant Elasticity of Substitution utility function in which the elasticity of substitution between varieties is constant and the various products enter symmetrically in the bundle. They used fixed costs to model increasing returns at the firm level and assumed free entry. They modeled firms as making decisions on the basis of the price of their own good and of an aggregate price index summarizing all other goods’ prices. They derived an optimum in which each firm produces one good priced at a constant mark-up over marginal cost. The model proved applicable to a wide range of issues in trade, economic geography, growth, and macroeconomics. Dixit later reflected that “Joe and I knew that we were doing something new in building a tractable general equilibrium model with imperfect competition, but we didn’t recognize that it would have so many uses” (quoted in Warsh, 2006, 208). He thus linked tractability with typicality or versatility, that is, the possibility of applying the model to various research questions and teaching settings.
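The sequence of tricks described above can be condensed into a short textbook-style sketch (the notation is mine, added for illustration, not Dixit and Stiglitz’s original formulation):

```latex
% CES sub-utility over n symmetric varieties (illustrative notation):
U = \left( \sum_{i=1}^{n} x_i^{\rho} \right)^{1/\rho},
\qquad 0 < \rho < 1,
\qquad \sigma = \frac{1}{1-\rho} > 1
% \sigma is the (constant) elasticity of substitution between varieties.
% With a fixed cost F, constant marginal cost c, and each atomistic
% firm facing demand elasticity \sigma, profit maximization yields a
% constant mark-up over marginal cost:
p = \frac{\sigma}{\sigma - 1}\, c = \frac{c}{\rho}
% Free entry drives profits to zero, pinning down firm scale and the
% equilibrium number of varieties n.
```

Symmetry and the constant elasticity are what deliver the constant mark-up; the zero-profit condition then closes the model, which is why it proved so easy to embed in larger general equilibrium settings.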

  • 15 See for instance Couix (2020) on the Georgescu-Roegen/Daly vs Solow/Stiglitz controversy on growth (...)

16All these economists found themselves defending their “tractability vs. realism” tradeoff against alternative methodologies.15 However, they also disagreed among themselves about the best approach to writing models that can be easily manipulated and solved, as well as about how to apply them. “Elegant error is often preferred to messy truth. Theoretical tractability is often preferred to empirical relevance,” Richard Lipsey (2001, 169) complained as he assessed the legacy of the Dixit-Stiglitz model. Lipsey and B. Curtis Eaton had proposed an alternative model in multi-characteristic spaces, but it did not receive the same recognition because it relied on less aggregated behavior. Also reflecting on the Dixit-Stiglitz model, Peter Neary (2004, 180) emphasized “the price of tractability,” that is, a reliance on very special functional forms, identical atomistic firms, free entry, and homotheticity. James Tobin (1986, 351) complained that microfoundations had become the only game in town in macroeconomics, and that “even the individual’s optimization is simplified and specialized in the interest of analytic tractability. Utility and production functions take parametric forms. By convention, equations are linear or log linear.” A decade earlier, evolutionary economics pioneers Richard Nelson and Sidney Winter (1977, 272) had emphasized that a “major advantage” of the use of simulation in industrial organization research was the “freedom from tractability constraints of available analytical techniques.” Referring to the use of maximization and equilibrium models in macroeconomics in an earlier article, they noted that “the principal virtue of those tools is the gain in analytic tractability and logical coherence that has been obtained precisely by abstracting from all that diversity and change … the question is, how high is the price?” (Nelson and Winter, 1974, 903) High, they answered, since the “neoclassical orthodoxy” cannot properly discuss the diversity rewards and penalties that firms, sectors, and countries face as a consequence of technological progress.

  • 16 According to Sergi (2017, chap. 2), the shift towards rational expectations initially made macroeco (...)

17Furthermore, what appears as a virtuous hypothesis that allows economists to tackle new questions in one decade might become a vice later, an assumption that keeps doors closed. The representative agent made macroeconomic models with rational expectations tractable, but it precluded the examination of how macroeconomic shocks affect income and wealth distributions;16 Dixit-Stiglitz models allowed economists to study why firms engage in trade and how they choose their location, but they could not say anything about strategic behaviors such as commitment or price discrimination. In other words, the collective spread of tractability assumptions creates what I call “tractability traps.” As Robinson and others have highlighted, assumptions made for the sake of tractability are meant to be tentative; they are waiting to be lifted as mathematical techniques and computer capabilities advance. In their analysis of how and why economists build models, theorists Alan Gibbard and Hal Varian acknowledged that tractability constraints are dynamic, but not the sole driving force behind modeling:

if the purpose of economic models were simply to approximate reality in a tractable way, then, as techniques for dealing with models are refined and as more complex models become tractable, we should expect a tendency toward a better fit with complex reality through more and more complex models. (Gibbard and Varian, 1978, 674)

18Regardless, these assumptions often become “naturalized,” as Krugman (1993, 26) wrote in an autobiographical essay: “What I began to realize was that in economics we are always making silly assumptions; it’s just that some of them have been made so often that they come to seem natural.”

19Even those who advocate for tractability as a virtue acknowledge that economists can collectively fall into a trap. Solow (1986), for instance, bemoaned the lack of microfoundations based on actual microeconomic research in the macroeconomic models of the 1980s. He pointed out that “market clearing has been an automatic assumption for a century and more, and it takes time for long-established and convenient habits of thought to change” (Solow, 1986, 197). Lipsey (2001, 176) taunted “macroeconomic models that assume competitive market clearing behaviour in labour markets because it is tractable, not because it is right.” Thomas Piketty and Emmanuel Saez (2012, 1) explained the persistence of the Chamley-Judd model, in which the optimal capital tax is, unrealistically, zero, by “the absence of an alternative tractable model.” All these quotes emphasize the collective and established nature of assumptions made for tractability purposes. This indicates that methodologists’ work needs to be supplemented by a historical analysis of the collective dynamics behind the modeling choices made for tractability purposes: how they spread, stabilize, become entrenched, and are eventually challenged.

4. Three Types of Tractability?

  • 17 This definition overlaps with the one proposed by Fumagalli (2010, 622): “a cluster concept resembl (...)
  • 18 I recovered only a few instances where economists discussed empirical tractability before the 1980s (...)
  • 19 These parameters are elasticities of capital and labor supply with respect to taxes.

20To better understand the importance of documenting the collective dynamics of tractability in economic modelling, we need to refine our definitions. In line with how most economists have debated tractability in recent decades, I define it as a set of properties of models that generally includes manipulability and solvability.17 Most of the above discussions by methodologists and economists, however, focus on one specific type of tractability: the ability to solve and manipulate theoretical models. One exception is Hindriks (2005; 2006), who documents how Hall made tractability assumptions so as to measure the mark-up of price over marginal cost.18 The ability to write empirically tractable models allowing the observation, measurement, identification, and estimation of key economic variables has been at the core of the debates on the Lucas critique, on structural vs. quasi-experimental models, and on the transformation of public economics since the 1980s. In the 1990s and 2000s, optimal taxation economists sought a “tractable” middle ground between those who advocated a structural approach in which all variables are deep behavioral parameters, and those who endorsed a more rigorous empirical reduced-form approach. They developed what Raj Chetty later called the “sufficient statistics” approach. As explained by Piketty and Saez (2012, 2), “by tractable, we mean that optimal tax formulas should be expressed in terms of estimable parameters and should quantify the various trade-offs in a simple and plausible way.”19

  • 20 For instance, macroeconomist Fabio Ghironi (2018,1) defines tractability as “the requirement that I (...)

21Some reactions to Krugman’s defense of tractability as a virtue of economic models point to a third type of tractability. Macroeconomist Robert Lucas, for instance, opened his review of Krugman and Helpman’s work on trade with the statement that “the useful development of an economic idea depends critically on one’s ability to formalize it accurately and tractably” (Lucas, 1990, 664). He conceived of tractability as the ability to manipulate and solve a model, but not merely at the theoretical and empirical levels.20 His writings suggest that many of his choices of (linear and quadratic) functional forms were a response to computational affordances: “there are no theoretical reasons that most applied work has used linear models, only compelling technical reasons given today’s computer technology,” he and Tom Sargent explained (Lucas and Sargent, 1979, 13, emphasis in the original). They emphasized that many of their assumptions were made for the sake of computational tractability, that is, to bring the model to data given the contemporary state of “computer technology”:

the predominant technical requirement of econometric work which imposes rational expectations is the ability to write down analytical expressions giving agents’ decision rules as functions of the parameters of their objective functions and as functions of the parameters governing the exogenous random process they face. Dynamic stochastic maximum problems with quadratic objectives, which produce linear decision rules, do meet this essential requirement—that is their virtue … [T]heoretically, we know how to calculate with expensive recursive methods, the nonlinear decision rules that would stem from a very wide class of objective functions; no new econometric principles would be involved in estimating their parameters, only a much higher computer bill. (Lucas and Sargent, 1979, 13)

22To better analyze economists’ work, it would be useful to distinguish between theoretical, empirical, and computational tractability. However, this distinction may be difficult to apply in practice for several reasons. First, computational issues exist both at the theoretical and empirical levels. Following the introduction of rational expectations in non-linear macroeconomic models, a host of methods were developed to make up for the absence of theoretical closed-form solutions, as well as for the empirical intractability of the likelihood functions required to estimate those models (for a survey, see Fernandez-Villaverde et al., 2016). Computational tractability, for instance the ability to run simulations, has become crucial, not only for investigating the theoretical properties of models and market-design algorithms, but also for helping researchers find new proofs. In fact, the development of market-design and agent-based models has challenged the separation between theoretical and empirical work itself (Backhouse and Cherrier, 2017).

  • 21 This is the “certainty equivalence” property demonstrated by Herbert Simon and generalized by Henry (...)

23Second, some modeling choices can be seen as attempts to make models both theoretically and empirically tractable. A typical case is the ascent of the Cobb-Douglas production function. A major argument in favor of using aggregate production functions, and in particular Cobb-Douglas rather than CES functions in growth and capital theory, was their mathematical and empirical tractability (respectively, the ability to solve theoretical models and to estimate the parameters of these functions; see Biddle, 2020). Another example is the widespread use of quadratic functions in macroeconomics, for instance the quadratic loss function used since the 1960s to represent the central bank’s problem. As explained by Pedro Duarte (2009), such functions had been widely used in engineering and management since the 1950s because of their computational tractability. In monetary economics, this translated into the possibility of running simulations to assess alternative policy rules. But, as monetary economist William Poole remarked, “the linear structure with quadratic loss function is highly tractable. It is generally easy to obtain analytic solution” (quoted by Duarte, 2009, 4). This is because the decision maker only needs to know the expected value of the stochastic variables, not their full distribution.21
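The certainty-equivalence logic behind that last remark can be sketched in a stylized one-period example (my notation, not Poole’s or Duarte’s):

```latex
% Certainty equivalence in a linear-quadratic setting (stylized sketch):
% choose policy u to minimize expected quadratic loss around a target y^*,
% subject to a linear structure y = a + b u + \varepsilon.
\min_{u}\; \mathbb{E}\big[(y - y^{*})^{2}\big],
\qquad y = a + b u + \varepsilon
% The first-order condition, 2b\,(a + b u + \mathbb{E}[\varepsilon] - y^{*}) = 0,
% yields an optimal rule that depends only on \mathbb{E}[\varepsilon],
% not on any higher moment of the shock's distribution:
u^{*} = \frac{y^{*} - a - \mathbb{E}[\varepsilon]}{b}
```

The stochastic problem thus collapses into the deterministic one with the shock replaced by its mean, which is why the linear-quadratic framework is so easy to solve analytically and to simulate.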

  • 22 See Backhouse and Cherrier (2017) for further references of historical work documenting shifts in c (...)

24Third, modeling choices made for tractability purposes are largely endogenous, and depend on what economists in a given field at a given time consider crucial assumptions (for instance, rational expectations). What counts as a tractable model also depends on the available technologies, including mathematical and computational tools, algorithms, hardware and software, which are themselves endogenous to economists’ research agendas. In response to technological limits, economists have sought more powerful computers in other institutions, or borrowed analytical and numerical methods from other sciences, which they have subsequently standardized and spread through the development of software.22

5. Changing Hierarchies and Standards of Tractability?

25Disentangling theoretical, empirical and computational tractability thus helps highlight that economists have diverging conceptions of what it means to “manipulate” and “solve” a model. The same model may be considered tractable by some economists and intractable by others, revealing changing hierarchies among concepts of solution. For instance, in a series of lectures given in the 1960s, Solow (1963, 56-57) outlined a choice between either assuming immediate substitution between labor and capital goods or considering that technology is fixed ex ante and can only alter the degree of capital intensity in the growth process ex post. He noted that a model built on the former assumption “has the additional advantage of being tractable enough for pencil-and-paper calculations,” while one built on the latter had “the difficult property that its current and future behavior may depend on the precise sequential story of its recent past.” The only way to deal with the second type of model, he concluded, was through “experiments with a computing machine.”

26The hierarchy among concepts of solution (pencil-and-paper calculations, estimations, numerical approximations) and tractability assumptions that transpired in Solow’s use of the term “difficult” was also found in Stiglitz’s assessment of simulation exercises in growth models with natural resources. Stiglitz came to environmental economics in a context where the collective imagination was captured by the Limits to Growth report, one based on the kind of system dynamics methods pioneered by engineer Jay Forrester at MIT. The team commissioned by the Club of Rome used the World3 computer model to simulate the interactions of natural and social variables. It was thus these models that Stiglitz had in mind when he explained in 1979 that “simulation exercises” were ineffective in assessing whether the elasticity of substitution between natural resources and capital is greater or less than one, a question central to the determination of the long-run growth path. “Simulation may also enable us to identify the crucial parameters but, here as elsewhere, direct analytical methods are likely to be less ambiguous,” he concluded (Stiglitz, 1979, 45).

27Conflicting visions of what counts as an appropriate “solution,” and therefore a tractable model, pervade the history of growth theory since the 1960s. Stiglitz and Solow’s dissatisfaction with the approach proposed in the Limits to Growth report led them to develop variants of their growth models in which exhaustible natural resources appear in the Cobb-Douglas production function. Solow assumed substitutability between aggregate capital and resources, while Stiglitz assumed resource-augmenting technological progress. Nicholas Georgescu-Roegen (1979, 98) criticized these assumptions as “conjuring trick[s]” which disregarded the laws of thermodynamics and confused flow and fund elements (Couix, 2020). He viewed their modeling choices as reflecting economists’ “exclusive preoccupation with paper-and-pencil exercises” (Georgescu-Roegen, 1979, 97), one he wanted to break from in his own flow-fund theory of production. Despite the rise of numerical methods in economics over the last decades, the hierarchy between analytical and computational tractability persists. In 2008, Gabaix and Laibson indicated that the use of computational methods had long been seen as a second best: “models with maximal tractability can be solved with analytic methods—i.e. paper and pencil calculations … [whereas] minimally tractable models cannot be solved even with a computer.” (Gabaix and Laibson, 2008, 295)

  • 23 Around that time, Barro and Grossman’s abandoned the disequilibrium models of involuntary unemploym (...)

28The presence of multiple types of tractability and changing hierarchies raise additional questions for historians regarding the dissemination, stabilization and overturning of modeling choices aimed at making models tractable. One crucial question is whether “tractability standards” that dictate which types of solution are deemed valid by economists can be identified in the history of economics, and whether they tighten or loosen over time. For instance, the history of macroeconomics in the 1970s can be reconstructed as a process of tightening tractability standards. From the late 1930s to the 1970s, economists built large-scale macroeconometric models that often consisted of over a hundred equations and had to be estimated and simulated on slow and hard-to-access mainframe computers. Any approach that allowed them to derive numbers from the IBM 360 computer after 20 hours was acceptable, including dropping whole blocks of equations, or mixing limited information maximum likelihood, instrumental variable techniques, and recursive block estimation. Which models macroeconomists were allowed to solve, and how they were allowed to solve them, narrowed considerably in the 1970s. Empirical strategies had to be adapted to fit the requirement that agents’ decision rules exhibit rational expectations in a stochastic setting.23 This often entailed relinquishing analytical solutions and writing down models that could be “solved” with numerical methods, which approximate the true unknown solution through linearization, value function iteration, and later perturbation and projection methods. Some macroeconomists, led by Finn Kydland and Edward Prescott, abandoned estimation for calibration methods, until the implementation of new computational methods such as Markov Chain Monte Carlo fueled the development of Bayesian estimation in the 2000s.

29This raises the question of who defines “tractability standards,” i.e., which manipulations and model solutions are considered acceptable. By the late 1990s, while numerical methods flourished thanks to the increasing speed and affordability of processors and the proliferation of personal computers, computational economist Kenneth Judd complained that their acceptance was too slow, owing to the reluctance of journal editors to view them as adequate proofs (Backhouse and Cherrier, 2017, 18). Though the methods that he had pioneered were increasingly used in economics, Judd (1997) deplored the lack of acceptability of agent-based computational models. Economists, historians and methodologists explain the heterogeneous acceptance of strategies to make economic models tractable by widespread epistemic preferences for analytical solutions. Vela Velupillai (quoted in Backhouse and Cherrier, 2017, 117) singles out economists’ commitment to a Hilbertian paradigm. Jesus Fernandez-Villaverde, Juan Rubio Ramirez, and Frank Schorfheide (2016) explain that “macroeconomists have been reluctant to accept the limits imposed by analytic results.” Kuorikoski and Lehtinen (2021) argue that economists locate themselves within a philosophical tradition where simulation is defined as a set of computer-implemented methods used to handle models without analytical solutions. Simulations are viewed as computational aids that help the economist derive results from a small set of assumptions. They contrast this approach with one in which simulations are considered as processes imitating other processes, where the computer does some epistemic work by deriving conclusions. While discretizing a state space or using a Monte Carlo simulation to explore some distributional properties might be acceptable, analyzing the emergent properties resulting from interacting heterogeneous agents (as in agent-based models) is less so.

30Several intriguing questions arise: do tractability assumptions spread because of their intellectual or empirical appeal, because of institutional pressures fueled by peer review, out of necessity or frustration, as deliberate intellectual strategies, or as rhetorical and ideological devices used to shield models from criticism? Additionally, what drives communities to loosen or change their tractability standards: individual genius, imports from other disciplines, shifts in data availability or computer affordances, changing policy regimes and demands from businesses, institutional hierarchies?

31That my discussion relies mainly on examples taken from macroeconomics, growth theory, public economics, or trade suggests that these tractability standards may vary not just across time, but also across fields. For instance, how market designers think about tractability may be shaped by the existence of a stabilized definition of the term in computer science: tractable problems are those that can be solved by a computer algorithm in a reasonable amount of time, that is, in a number of steps that is a polynomial function of their size (as opposed to hard or intractable problems, whose time-to-solve grows exponentially with their size). While macroeconomists or trade economists seem concerned with the range of functional forms they can work with (as exemplified by the quote from Lucas and Sargent above; see also Fabinger and Weyl, 2018), behavioral economists might rather focus on how tractability requirements affect the number of variables that a model can accommodate: “the constraint of tractability can be satisfied with somewhat more complex models [than rational choice theory], but the number of parameters that can be added is small,” Kahneman (2003, 166) observed. Though anecdotal, these quotes indicate different ways of conceiving theoretical, empirical and computational tractability, and their interrelatedness, in various fields.
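For reference, the computer-science usage alluded to here can be stated formally (a standard complexity-theory formulation, not one specific to any of the authors discussed):

```latex
% A problem is "tractable" if some algorithm solves every instance of size n
% in a number of steps bounded by a polynomial in n:
T(n) = O(n^{k}) \quad \text{for some fixed } k,
% whereas "intractable" problems are only known to admit algorithms whose
% running time grows exponentially with instance size:
T(n) = O(c^{n}), \qquad c > 1.
```

This is the sense in which, say, finding a stable matching is tractable while computing equilibria of many combinatorial auctions is not, a distinction with no direct counterpart in the macroeconomists’ concern for functional forms.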

6. Concluding Remarks

32In this paper, I have argued that the framework proposed by methodologists to think about modeling assumptions could benefit from historical investigation into the individual and collective rationales for writing “tractable” models. Such an approach, however, presents methodological challenges for historians. Recent attempts to move from studying a small collection of key texts by individual landmark contributors to studying collective, internationalized practices in economics have often relied on quantitative techniques, such as bibliometrics (with coupling or co-citation analysis) and text-mining/natural language processing. This, however, requires working on objects that can be captured from masses of data through coded retrieval, and tractability is an elusive object. Instances where the term is used explicitly only scratch the surface of the tractability iceberg. Other terms used to describe tractability assumptions, such as convenience, simplification, or expediency, may evade text-mining. Most of the time, these modeling choices may not be discussed at all. But because tractability motives and standards matter for understanding how economists work, this should not deter historians from researching the topic. I hope that the study of tractability in economics will become more tractable in the future.


I am grateful for the valuable comments and feedback provided by Fabio Ghironi, Uskali Mäki, Chiara Lisciandra, Aurélien Goutsmedt, Jean-Sebastien Lenfant, Harro Maas, Roy Weintraub, and one angry anonymous referee. I have benefited from fruitful discussions with the participants at the 2018 INEM conference at Helsinki, the 2019 “Soul of Economics” conference in Zurich, and the 2022 ALAHPE conference in Montevideo. Steven Medema and Francesco Sergi have offered invaluable suggestions and support at every stage of this long project. I owe them a special debt of gratitude.


Bibliography

Alexandrova, Anna. 2006. Connecting Economic Models to the Real World: Game Theory and the FCC Spectrum Auctions. Philosophy of the Social Sciences, 36(2): 173-192.

Aslanbeigui, Nahid and Guy Oakes. 2009. The Provocative Joan Robinson: The Making of a Cambridge Economist. Durham: Duke University Press.

Backhouse, Roger and Beatrice Cherrier. 2017. “It’s Computers, Stupid!” The Spread of Computers and the Changing Roles of Theoretical and Applied Economics. History of Political Economy, 49(Supp.): 103-126.

Bagger, Jesper, François Fontaine, Fabien Postel-Vinay, and Jean-Marc Robin. 2014. Tenure, Experience, Human Capital, and Wages: A Tractable Equilibrium Search Model of Wage Dynamics. American Economic Review, 104(6): 1551-1596.

Biddle, Jeff. 2020. The Origins of the CES Production Function. History of Political Economy, 52(4): 921-952.

Boianovsky, Mauro. 2020. Voluminous, Repetitive and Intractable: Samuelson on Early Development Economics. CHOPE Working Paper, no. 2020-03. Durham: Center for the History of Political Economy, Duke University. http://0-dx-doi-org.catalogue.libraries.london.ac.uk/10.2139/ssrn.3548093 [retrieved 15/04/2023].

Brakman, Steven and Ben Heijdra (eds). 2004. The Monopolistic Competition Revolution in Retrospect. Cambridge: Cambridge University Press.

Bratt, Elmer C. 1953. The Future Character of the Business Cycle. The Analysts Journal, 9(2): 19-21.

Coase, Ronald. 1937. The Nature of the Firm. Economica, 4(16): 386-405.

Coase, Ronald. 1988a. The Nature of the Firm: Origin. Journal of Law, Economics, and Organization, 4(1): 3-17.

Coase, Ronald. 1988b. The Nature of the Firm: Meaning. Journal of Law, Economics, and Organization, 4(1): 19-32.

Couix, Quentin. 2020. Natural Resources in the Theory of Production: The Georgescu-Roegen/Daly versus Solow/Stiglitz Controversy. The European Journal of the History of Economic Thought, 26(6): 1341‑1378.

Cox, James C., Daniel Friedman, and Steven Gjerstad. 2007. A Tractable Model of Reciprocity and Fairness. Games and Economic Behavior, 59(1): 17-45.

Dixit, Avinash K. and Joseph E. Stiglitz. 1977. Monopolistic Competition and Optimum Product Diversity. American Economic Review, 67(3): 297-308.

Dowdell, Eric G. 1940. The Multiplier. Oxford Economic Papers, 4(1): 23-38.

Duarte, Pedro. 2009. A Feasible and Objective Concept of Optimal Monetary Policy: The Quadratic Loss Function in the Postwar Period. History of Political Economy, 41(1): 1-55.

Fabinger, Michal and Glen Weyl. 2018. Functional Forms for Tractable Economic Models and the Cost Structure of International Trade. CIRJE Working Paper F-Series, no. CIRJE-F-1092. Tokyo: Faculty of Economics, University of Tokyo.

Fernández-Villaverde, Jesús, Juan Rubio Ramírez and Frank Schorfheide. 2016. Solution and Estimation Methods for DSGE Models. NBER Working Papers, no. 21862. Cambridge, MA: National Bureau of Economic Research.

Friedman, Milton. 1953. The Methodology of Positive Economics. In Milton Friedman (ed.), Essays in Positive Economics. Chicago: University of Chicago Press, 3-43.

Fumagalli, Roberto. 2010. On the Neural Enrichment of Economic Models: Tractability, Trade-Offs and Multiple Levels of Descriptions. Biology and Philosophy, 26(5): 617-635.

Gabaix, Xavier and David Laibson. 2008. The Seven Properties of Good Models. In Andrew Caplin and Andrew Schotter (eds), The Methodologies of Modern Economics: Foundations of Positive and Normative Economics. Oxford: Oxford University Press, 292-319.

Georgescu-Roegen, Nicholas. 1979. Comments on the Papers by Daly and Stiglitz. In Kerry V. Smith (ed.), Scarcity and Growth Reconsidered. New York: Resources for the Future Press, 95-105.

Ghironi, Fabio. 2018. Ramblings on Tractability in Macroeconomics. Blog post, August 6, 2018. https://faculty.washington.edu/ghiro/GhiroTractability080618.pdf [retrieved 01/05/2023].

Gibbard, Alan and Hal Varian. 1978. Economic Models. The Journal of Philosophy, 75(11): 664-677.

Gilboa, Itzhak, Andrew Postlewaite, Larry Samuelson, and David Schmeidler. 2022. Economic Theories and Their Dueling Interpretations. PIER Working Paper, no. 22-012. Philadelphia: The Ronald O. Perelman Center for Political Science and Economics.

Gould, John P. 1968. Adjustment Costs in the Theory of Investment of the Firm. The Review of Economic Studies, 35(1): 47-55.

Grabowski, Henry G. 1970. Demand Shifting, Optimal Firm Growth, and Rule-of-Thumb Decision Making. The Quarterly Journal of Economics, 84(2): 217-235.

Grüne-Yanoff, Till. 2008. Preface to Economic Models as Credible Worlds or as Isolating Tools? Erkenntnis, 70(1-2): 1-2.

Harrod, Roy. 1934. The Equilibrium of Duopoly. The Economic Journal, 44(174): 335-337.

Hicks, John. 1936. Review of The Economics of Stationary States by A.C. Pigou. The Economic Journal, 46(181): 98-102.

Hindriks, Frank A. 2005. Unobservability, Tractability and the Battle of Assumptions. Journal of Economic Methodology, 12(3): 383-406.

Hindriks, Frank A. 2006. Tractability Assumptions and the Musgrave-Mäki Typology. Journal of Economic Methodology, 13(4): 401-423.

Holmstrom, Bengt and Joan Ricart I. Costa. 1986. Managerial Incentives and Capital Management. The Quarterly Journal of Economics, 101(4): 835-860.

Humphreys, Paul. 2002. Computational Models. Philosophy of Science, 69(S3): S1-S11.

Jacobsen, Lowell R. 2008. On Robinson, Coase and “The Nature of the Firm”. Journal of the History of Economic Thought, 30(1): 65-80.

Judd, Kenneth. L. 1997. Computational Economics and Economic Theory: Substitutes or Complements? Journal of Economic Dynamics and Control, 21(6): 907-942.

Kahneman, Daniel. 2003. A Psychological Perspective on Economics. The American Economic Review, 93(2): 162-168.

Klein, Lawrence. 1974. Issues in Econometric Studies of Investment Behavior. Journal of Economic Literature, 12(10): 43-49.

Kollmann, Robert. 2017. Tractable Likelihood-Based Estimation of Non-Linear DSGE Models. Economics Letters, 161: 90-92.

Krugman, Paul. 1993. How I Work. The American Economist, 37(2): 25-31.

Krugman, Paul. 1994. Empirical Evidence on the New Trade Theories: The Current State of Play. In Paul Krugman (ed.), New Trade Theories: A Look at the Empirical Evidence. London: CEPR, 11-32.

Krugman, Paul. 1995. Development, Geography, and Economic Theory. Cambridge: MIT Press.

Kuorikoski, Jaakko and Aki Lehtinen. 2021. Computer Simulations in Economics. In Conrad Heilmann and Julian Reiss (eds), The Routledge Handbook of Philosophy of Economics. London: Routledge, 355-369.

Kuorikoski, Jaakko, Aki Lehtinen, and Caterina Marchionni. 2010. Economic Modelling as Robustness Analysis. British Journal for the Philosophy of Science, 61(3): 541-567.

Lenfant, Jean-Sébastien. 2001. La loi de Pareto : entre équilibre social et équilibre économique. Économies et Sociétés, Œconomia Série PE, 31(11-12): 1591-1625.

Lipsey, Richard. 2001. Successes and Failures in the Transformation of Economics. Journal of Economic Methodology, 8(2): 169-201.

Lisciandra, Chiara. 2017. Robustness Analysis and Tractability in Modeling. European Journal for Philosophy of Science, 7: 79-95.

Lucas, Robert E. 1990. Review [of Trade Policy and Market Structure by Elhanan Helpman and Paul R. Krugman]. Journal of Political Economy, 98(3): 664-667.

Lucas, Robert E. and Thomas J. Sargent. 1979. After Keynesian Macroeconomics. Federal Reserve Bank of Minneapolis Quarterly Review, 3(2): 1-16.

Machlup, Fritz. 1940. Professor Hicks’ Statics. The Quarterly Journal of Economics, 54(2): 277-297.

Mäki, Uskali. 2000. Kinds of Assumptions and Their Truth: Shaking an Untwisted F-Twist. Kyklos, 53(3): 317-335.

Mäki, Uskali. 2009. Realistic Realism about Unrealistic Models. In Don Ross and Harold Kincaid (eds), The Oxford Handbook of Philosophy of Economics. Oxford: Oxford University Press, 68-98.

Marcuzzo, Maria Cristina and Annalisa Rosselli. 2012. Economists in Cambridge. A Study Through Their Correspondence, 1907-1946. London: Routledge.

Medema, Steven G. 1994. Ronald H. Coase. London and New York: Macmillan and St. Martin’s Press.

Mendershausen, Horst. 1949. Prices, Money and the Distribution of Goods in Postwar Germany. The American Economic Review, 39(3): 646-672.

Morgan, Mary S. 2012. The World in the Model. How Economists Work and Think. Cambridge: Cambridge University Press.

Morgan, Mary S. and Tarja Knuuttila. 2012. Models and Modelling in Economics. In Uskali Mäki (ed.), Philosophy of Economics. Oxford: Elsevier, 49-87.

Musgrave, Alan. 1981. Unreal Assumptions in Economic Theory: The F-Twist Untwisted. Kyklos, 34(3): 377-387.

Neary, Peter. 2004. Monopolistic Competition and International Trade Theory. In Steven Brakman and Ben Heijdra (eds), The Monopolistic Competition Revolution in Retrospect. Cambridge: Cambridge University Press, 159-184.

Nelson, Richard and Sidney Winter. 1974. Neoclassical vs Evolutionary Theories of Economic Growth: Critique and Prospectus. The Economic Journal, 84(336): 886-905.

Nelson, Richard and Sidney Winter. 1977. Simulation of Schumpeterian Competition. The American Economic Review, 67(1): 271-276.

Piketty, Thomas and Emmanuel Saez. 2012. A Theory of Optimal Capital Taxation. NBER Working Paper, no. 17989. Cambridge, MA: National Bureau of Economic Research.

Plassard, Romain. 2021. Barro, Grossman, and the Domination of Equilibrium Macroeconomics. MPRA Working Paper, no. 107201. Munich: University Library of Munich. https://mpra.ub.uni-muenchen.de/107201/ [retrieved 01/05/2023].

Robinson, Austin. 1941. Review of Production for the People by Frank Verulam. The Economic Journal, 51(204): 476-481.

Robinson, Joan. 1932. Economics Is a Serious Subject. The Apologia of an Economist to the Mathematician, the Scientist and the Plain Man. Cambridge: Heffer and Sons.

Robinson, Joan. 1933. The Economics of Imperfect Competition. London: Macmillan.

Robinson, Joan. 1953. Imperfect Competition Revisited. Economic Journal, 63(251): 579-593.

Rodrik, Dani. 2015. Economics Rules: The Rights and Wrongs of the Dismal Science. New York: W. W. Norton & Company.

Romer, David. 1986. A Simple General Equilibrium Version of the Baumol-Tobin Model. The Quarterly Journal of Economics, 101(4): 663-686.

Rostow, Walt. 1942. Some Aspects of Price Control and Rationing. The American Economic Review, 32(3): 486-500.

Samuelson, Paul A. 1970. What Makes for a Beautiful Problem in Science? Journal of Political Economy, 78(6): 1372-1377.

Samuelson, Paul A. [1982] 1986. Foreword to the Japanese Edition of The Collected Scientific Papers of Paul A. Samuelson. In Kate Crowley (ed.), The Collected Scientific Papers of Paul A. Samuelson. Cambridge: MIT Press, vol. 5, 858-875.

Sergi, Francesco. 2017. De la révolution lucasienne aux modèles DSGE : réflexions sur les développements récents de la modélisation macroéconomique. PhD dissertation, Université Paris 1 Panthéon-Sorbonne.

Solow, Robert. 1963. Capital Theory and the Rate of Return. Amsterdam: North Holland.

Solow, Robert. 1986. What Is a Nice Girl like You Doing in a Place like This? Macroeconomics after Fifty Years. Eastern Economic Journal, 12(3): 191-198.

Spiegel, Henry W. 1945. Economic Theory and Economic Policy. The Journal of Business of the University of Chicago, 18(1): 56-59.

Stantcheva, Stefania. 2020. Dynamic Taxation. NBER Working Paper, no. 26704. Cambridge, MA: National Bureau of Economic Research.

Stiglitz, Joseph E. 1979. A Neoclassical Analysis of the Economics of Natural Resources. In Kerry V. Smith (ed.), Scarcity and Growth Reconsidered. New York: Resources for the Future Press, 36-66.

Tobin, James. 1986. The Future of Keynesian Economics. Eastern Economic Journal, 13(4): 347-356.

Vickrey, William. 1951. Review of A Reconstruction of Economics by Kenneth Boulding. The American Economic Review, 41(4): 671-676.

Walker, Ronald. 1943. From Economic Theory to Policy. Chicago: University of Chicago Press.

Warsh, David. 2006. Knowledge and the Wealth of Nations: A Story of Economic Discovery. New York: W. W. Norton & Company.

Weisberg, Michael. 2007. Who Is a Modeler? British Journal for the Philosophy of Science, 58(2): 207-233.


Notes

1 For economists’ statements on realism in assumptions, see Friedman (1953), Gibbard and Varian (1978), Rodrik (2015), or Gilboa et al. (2022).

2 Gabaix and Laibson (2008) consider tractability as one of the “key properties of useful models,” alongside “parsimony, conceptual insightfulness, generalizability, falsifiability, empirical consistency, and predictive precision.”

3 Economists have used various synonyms, as listed above, to discuss the manipulability and solvability of models. However, most have not addressed these issues explicitly at all. Simply tracking the occurrence of terms such as “tractable,” “intractable,” or “tractability” in the economics literature is therefore insufficient for providing a comprehensive account. The aim of this article is not to provide exhaustive coverage. Instead, my objective is to document the growing importance of tractability as a subject for economists.

4 Hall needed to ensure that the correlation of productivity growth with aggregate demand could be considered negligible so as to be able to measure the mark-up.

5 I am not asserting that Robinson was the first economist to discuss tractability. For instance, Jean Sebastien Lenfant (2001) documents that Vilfredo Pareto believed that the law of income distribution served as an “auxiliary assumption” for general equilibrium theory. However, I do believe that Robinson was among the first economists to comprehensively articulate the tradeoff between realism and tractability with this particular terminology.

6 “If the pessimistic economist prefers sitting at the apex of a pyramid of completely self-consistent, realistic but intractable assumptions to solving unrealistic problems, there is no need to quarrel with him,” she jested.

7 For histories of the Cambridge and British intellectual milieu and controversies of the 1930s and 1940s, see Marcuzzo and Rosselli (2012) or Aslanbeigui and Oakes (2009).

8 In a late series of articles in which he reflected on the origins and meaning of his theory of the firm, Coase linked the success of Robinson’s work on competition in England to the possibility to “cover the blackboard with diagrams … without the need to find anything about what happened in the real world.” (Coase, 1988a, 22) Coase (1988b, 23-24) reaffirmed the importance of beginning his paper with the methodological statement that “the assumptions we make in economics should be realistic.” “Mrs. Robinson appears to argue that if the only assumptions we can handle are unrealistic, we have no choice but to use them … it was not a procedure that I wanted to follow in the 1930s,” he continued. See also Jacobsen (2008) on the influence of both Joan and Austin Robinson on Coase’s theory of the firm.

9 See Medema (1994) for a thorough discussion of Coase’s work and epistemology.

10 See Boianovsky (2020) for a detailed analysis of Samuelson’s use of Medawar’s epistemological writings in the context of debates on the state of development economics.

11 “Modeling trick” is a term that economists often use, whether casually (see for instance Stiglitz, 1979, 36) or critically (see Georgescu-Roegen, 1979 on Solow and Stiglitz’s “conjuring trick”).

12 I have presented the numbers in absolute terms rather than as a percentage of a growing body of articles, because what I document here is the limited rise in awareness of a set of modeling practices.

13 See for instance Romer (1986, 664) on banking models or Holmstrom and Costa (1986) on tractable information scenarios in capital management models.

14 What economists have usually called “workhorse” models is different from what philosophers of science such as Humphreys (2002) or Weisberg (2007) have called “templates.” Unlike templates, workhorse models are generally neither computational (though the next section addresses computational tractability), nor cross-disciplinary. In fact, a characteristic of the contributions of Stiglitz, Krugman, and others is precisely that they domesticate “templates” in the Humphreys-Weisberg sense, for instance new mathematical forms. They turn them into models of, say, monopolistic competition, or into dynamic models which can then be “adapted” (without the established relation between mathematical object and economic content being radically altered) to deal with various economic phenomena.

15 See for instance Couix (2020) on the Georgescu-Roegen/Daly vs Solow/Stiglitz controversy on growth models with exhaustible resources.

16 According to Sergi (2017, chap. 2), the shift towards rational expectations initially made macroeconomic models less empirically tractable. This leads to the question of how the standardization of certain fundamental assumptions, such as rational expectations, necessitates the development of a new set of additional assumptions to render models tractable.

17 This definition overlaps with the one proposed by Fumagalli (2010, 622): “a cluster concept resembling notions such as parsimony, resolvability and simplicity,” but it considers parsimony and simplicity as either altogether distinct virtues, or subordinated to tractability.

18 I recovered only a few instances where economists discussed empirical tractability before the 1980s. One is William Vickrey (1951, 627): “tractability requires that the analysis deal primarily with observables, such as quantities of physical assets and their prices,” he explained. Another is Lawrence Klein (1974, 43), who discussed “tractable statistical approximation for the estimation of … relationships.” “Convenience, ease of interpretation and tractability will often govern the specification ultimately selected” in econometric work, he concluded (Klein, 1974, 45).

19 These parameters are elasticities of capital and labor supply with respect to taxes.

20 For instance, macroeconomist Fabio Ghironi (2018, 1) defines tractability as “the requirement that I must be able to understand transparently the mechanisms and results of the model at hand.” As we have seen, solving models lies at the heart of Robinson’s defense of analytical methods. Kahneman (2003, 166) likewise explains that “whether or not psychologists find them odd and overly simple, the standard assumptions about the economic agent are in economic theory for a reason: they allow for tractable analysis.”

21 This is the “certainty equivalence” property demonstrated by Herbert Simon and generalized by Henri Theil. As Richard Bellman and Stuart Dreyfus themselves remarked in 1962, “more than ever before in the history of science, theoretical formulation [went] hand-in-hand with computational feasibility” (quoted in Duarte, 2009, 4). Duarte (2009, 5) notes that, in the following decades, macroeconomists ended up complaining about the associated “tractability trap” (my wording), with Alan Blinder writing in 1997 that “academic macroeconomists tend to use quadratic loss functions for reasons of mathematical convenience, without thinking much about their substantive implications.”

22 See Backhouse and Cherrier (2017) for further references of historical work documenting shifts in computers and mathematical techniques. I thank Aurélien Goutsmedt and Francesco Sergi for helping me articulate the endogeneity in modeling choices for tractability purposes.

23 Around that time, Barro and Grossman abandoned the disequilibrium models of involuntary unemployment that they had previously built. Romain Plassard (2021) attributes this shift to the lack of tractability of disequilibrium models compared to those with market clearing.


How to cite this article

Print reference

Béatrice Cherrier, “The Price of Virtue: Some Hypotheses on How Tractability Has Shaped Economic Models”, Œconomia, 13-1 | 2023, 23-48.

Electronic reference

Béatrice Cherrier, “The Price of Virtue: Some Hypotheses on How Tractability Has Shaped Economic Models”, Œconomia [Online], 13-1 | 2023, published online 01 March 2023, accessed 12 June 2024. URL: http://0-journals-openedition-org.catalogue.libraries.london.ac.uk/oeconomia/14116 ; DOI: https://0-doi-org.catalogue.libraries.london.ac.uk/10.4000/oeconomia.14116


Author

Béatrice Cherrier

CNRS and CREST, ENSAE and Ecole Polytechnique, IP Paris. beatrice.cherrier@gmail.com



Copyright

CC-BY-NC-ND-4.0

The text alone may be used under the CC BY-NC-ND 4.0 license. All other elements (illustrations, imported files) are “All rights reserved” unless otherwise stated.
