
Against Positive Economics: Or, If You’re Interested in Reality, Try Being Realistic

August 12, 2011

I recently re-read Milton Friedman’s “The Methodology of Positive Economics,” an odd, bad little essay that has unfortunately had a vast influence on the practice of economics. (I’m not familiar enough with the history of economics to have a sense of this influence myself, but for instance Wikipedia tells me that Friedman’s essay was described as “the most influential work on economic methodology of this century” by Daniel M. Hausman in The Philosophy of Economics.)

“The Methodology of Positive Economics” is very badly mistaken about certain important points, and I have the hunch that this has had a negative influence on the way economics is practiced. Here I’m going to focus on the essay itself, and what’s wrong with it, rather than the extent to which modern economists obey its recommendations. (I hope they do as little as possible, because those recommendations are ill-founded, as I hope to show.)

Rather than summarizing Friedman’s essay, I’m going to jump right in and start quoting passages. If you want to read the whole thing and judge for yourself, the link up at the top of this post goes to a PDF. I’m also not going to give any background about economics and science. I hope at least that this post is comprehensible to anyone who is interested in it, whatever their level of knowledge.

To get us started, here’s a quote that touches on some of Friedman’s main points:

Viewed as a body of substantive hypotheses, theory is to be judged by its predictive power for the class of phenomena which it is intended to “explain.” Only factual evidence can show whether it is “right” or “wrong” or, better, tentatively “accepted” as valid or “rejected.” As I shall argue at greater length below, the only relevant test of the validity of a hypothesis is comparison of its predictions with experience. The hypothesis is rejected if its predictions are contradicted (“frequently” or more often than predictions from an alternative hypothesis); it is accepted if its predictions are not contradicted; great confidence is attached to it if it has survived many opportunities for contradiction. Factual evidence can never “prove” a hypothesis; it can only fail to disprove it, which is what we generally mean when we say, somewhat inexactly, that the hypothesis has been “confirmed” by experience. . . .

The validity of a hypothesis in this sense is not by itself a sufficient criterion for choosing among alternative hypotheses. Observed facts are necessarily finite in number; possible hypotheses, infinite. If there is one hypothesis that is consistent with the available evidence, there are always an infinite number that are. . . . Additional evidence with which the hypothesis is to be consistent may rule out some of these possibilities; it can never reduce them to a single possibility alone capable of being consistent with the finite evidence. The choice among alternative hypotheses equally consistent with the available evidence must to some extent be arbitrary, though there is general agreement that relevant considerations are suggested by the criteria “simplicity” and “fruitfulness,” themselves notions that defy completely objective specification. A theory is “simpler” the less the initial knowledge needed to make a prediction within a given field of phenomena; it is more “fruitful” the more precise the resulting prediction, the wider the area within which the theory yields predictions, and the more additional lines for further research it suggests.

I quote this early passage at length because it provides a clear statement of the points in Friedman’s essay with which I disagree. Friedman goes on to make a number of other, seemingly more substantial claims, but the seeds of his errors are planted in this passage just quoted. (The passage also provides a taste of Friedman’s annoyingly matter-of-fact style: notice how he just asserts that hypotheses can be falsified but never verified, as though this were a bit of common knowledge, rather than a controversial position in the philosophy of science.)

“The only relevant test of the validity of a hypothesis is comparison of its predictions with experience” — this is the doctrine for which “The Methodology of Positive Economics” is remembered. It is Friedman’s response to the criticism, made frequently both from within economics and from without, that the stylized mathematical models of mainstream economics are “unrealistic.” Friedman asks: why, at the end of the day, should we care? The goal of science is to make predictions, not to be “realistic.” (Is physics “realistic”? Compared to what?) And all else being equal, it’s better to have an unrealistic theory, that is, a theory that ignores large portions of the existing world. If such a stripped-down theory can produce accurate predictions, then it teaches us something — that we can systematically ignore certain facts and still capture the essential dynamics of some phenomenon. (Plus, simpler models are easier to work with, mathematically speaking.) Friedman expresses this point in an infamous passage:

In so far as a theory can be said to have “assumptions” at all, and in so far as their “realism” can be judged independently of the validity of predictions, the relation between the significance of a theory and the “realism” of its “assumptions” is almost the opposite of that suggested by the view under criticism. Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense). The reason is simple. A hypothesis is important if it “explains” much by little, that is, if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone.

This passage is, I think, unfairly maligned. Of course Friedman isn’t actually saying that unrealistic models are predictively better (more likely to make good predictions), but rather that if they do work predictively, they are more enlightening. What is wrong is not this particular point, but the rest of the essay, in which Friedman encourages the confusion of these two types of success.

In that big quote up at the top, Friedman acknowledges that “comparison of predictions with experience” is not enough to determine, once and for all, which theories we should accept. Any given data set is compatible with many different hypotheses. He identifies two criteria that can help us choose between competing hypotheses that explain the data equally well: “simplicity” and “fruitfulness.” Note the conspicuous absence of what we might call “plausibility” — the degree to which a hypothesis accords with the rest of what we know about the world. Friedman’s goal, after all, is to deny plausibility a role in the evaluation of economic theories. But he does so, later in the essay, by again and again pushing the point that the validity of a theory’s predictions is all that matters. When the “predictive ability” criterion leaves us with multiple equally permissible theories — which always happens, as Friedman admits — we are supposed to fall back on “simplicity” and “fruitfulness.” But why not care about plausibility, at this point? Well, you see, plausibility can’t matter, because nothing except predictive ability can possibly matter. But (now you see the obvious glitch) we are talking about what criteria we are to use when predictive ability fails to decide between theories, and at this point there is no reason to exclude plausibility while including simplicity and fruitfulness, unless Friedman has an argument as to why this is the right choice. (He doesn’t.)

Okay, plausibility. Why does it matter? It matters because, as Friedman would be the first to tell you, the point of science is not just to explain existing data, but to predict new data. If you’re an economist, for instance, you will want to be able to figure out how people will respond to some particular economic policy, even if that particular policy has never been implemented before. And if you want to have the highest chance of making correct predictions about what the system will do, your best bet is to make your model as close as possible to what the actual system is doing. Friedman is right that crazy simplifications and off-the-wall ideas are interesting and informative when they are vindicated — but when the data fits equally well to something realistic and something zany, the fit is less likely to be spurious in the case of the realistic model.

Here’s Deirdre McCloskey talking about spurious fits in her wonderful essay “The Secret Sins of Economics”:

And on the other hand: It is also completely obvious that a “statistically significant” result can be insignificant for any human purpose. When you are trying to explain the rise and fall of the stock market it may be that the fit (so-called: it means how closely the data line up) is very “tight” for some crazy variable, say skirt lengths (for a long while the correlation was actually quite good). But it doesn’t matter: the variable is obviously crazy. Who cares how closely it fits? For a long time in Britain the number of ham radio operator licenses granted annually was very highly correlated with the number of people certified insane. Very funny. So?

If you are trying to predict trends in the stock market, and the data matches up equally well to some realistic economic model and to a time-series of skirt lengths, it is obvious to any sane person that the realistic economic model is the better bet. Maybe there’s some secret causal relationship between skirt lengths and the stock market, or maybe both, in their own separate domains, obey the same sorts of mathematical dynamics (nature’s fondness for unity is often startling) — but both these considerations would apply even more so to the realistic model. All things considered, the realistic model just has a better chance of capturing (if perhaps approximately) the dynamics actually obeyed by the stock market. Thus it has a better chance of continuing to follow the stock market as it evolves beyond the data we currently possess. But this is a consideration of realism/plausibility, a consideration which Friedman wants to do away with.
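
Here is a minimal sketch of what a spurious fit looks like (my toy illustration, not McCloskey’s actual data): simulate one random-walk “stock index” and a few hundred unrelated random-walk series, pick whichever unrelated series fits the index best in-sample, and then check how it does on later data.

```python
import numpy as np

rng = np.random.default_rng(0)

# One random-walk "stock index" and many unrelated random-walk candidates
# (stand-ins for skirt lengths, ham radio licenses, cat moods, ...).
n_train, n_test, n_candidates = 60, 60, 500
stock = np.cumsum(rng.normal(size=n_train + n_test))
candidates = np.cumsum(rng.normal(size=(n_candidates, n_train + n_test)), axis=1)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Pick the candidate that fits the index best over the first n_train points...
in_sample = np.array([corr(stock[:n_train], c[:n_train]) for c in candidates])
best = int(np.argmax(np.abs(in_sample)))

print(f"best in-sample correlation:  {in_sample[best]:+.2f}")
# ...and see how the same series does on the data that comes afterward.
print(f"same series, out of sample:  {corr(stock[n_train:], candidates[best, n_train:]):+.2f}")
# Typically the in-sample fit looks impressive and the out-of-sample fit is
# near zero: the tight fit was an accident of the finite sample, not a sign
# that the candidate captures the dynamics of the index.
```

None of this guarantees that a realistic model will keep working, of course; the point is just that an impressive in-sample fit is exactly what you should expect to stumble on if you search over enough implausible candidates, which is why the fit alone cannot settle the question.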

(It is possible, I guess, that the plausibility criterion could somehow be “reduced” to Friedman’s simplicity and fruitfulness. Perhaps, for instance, a realistic model of the stock market is more “simple” and/or “fruitful” than a skirt-length-based model, and perhaps the same is true for all plausibility judgments. But this seems like an awkward way to formalize the snap decision that we all make in that stock market example, which really is a judgment based on plausibility. And anyway, although such a “reduction” scheme might be interesting, Friedman doesn’t argue for it in any way — and it is not a count in this paper’s favor that it raises an interesting issue by way of ignoring that very issue, i.e., by simply assuming that simplicity and fruitfulness are all we need without trying to prove it.)

Friedman writes as though he is trying to import into economics a set of methodological points that are widely accepted in the “physical” or “exact” sciences. Friedman begins his discussion of positive (as opposed to normative) economics with the words: “The ultimate goal of a positive science is . . . ” One gets the sense that Friedman believes there is a standard methodology for “positive science” out there, already in use in the physical sciences (the paradigm case of “positive science”). Economics should adopt these conventions — that’s Friedman’s point. It is problematic, then, that Friedman argues against considerations of realism when these considerations are in fact the bread and butter of physical science.

When he rails against realism, Friedman probably has in mind the development of fundamental physics. When we are trying to improve fundamental physics — our best understanding of the basic processes that, “at the bottom,” make up physical reality — there is of course no room for realism. The founders of quantum mechanics, for instance, had no reason to care whether their new theory was “realistic.” Compared to what? When you are devising a new fundamental theory of reality, you are devising a new standard of realism. If your theory does not match up with existing fundamental theories, that is because it is supposed to improve on existing theories — so that sort of “un-realism” is not a flaw at all.

But most of physical science does not consist of devising fundamental theories. Upsets in our bottom-level view of reality are rare and, when they occur, highly significant; the combination of these two facts has probably given them more prominence in popular views of physics than they deserve. The primary job of a theoretician, in physical science, is not to devise new laws of nature but to interpret the ones we already have.

Because, you see, the laws of nature tend to be too difficult to analyze. We can write down the equations, but we don’t know how to solve them. Thus, in order to generate predictions, we must devise simplifications and approximations. In physics, a tremendous amount of energy goes into the systematic and rigorous production of approximations. Open up almost any advanced textbook on physics, engineering, or applied mathematics and I guarantee you that a large proportion of the text will be devoted to the subject of approximation. Physicists do not shy away from approximations as something provisional and embarrassing. Instead they try to make their approximations as good, as intelligent, as rigorous as possible. They try to make their answers as right as they can possibly be, given that they won’t be perfect. This is the open secret of the so-called “exact sciences”: they are really sciences of inexactness, sciences obsessed with how to quantify and reduce error, how to demonstrate that certain factors are negligible (if nonetheless present!) in certain cases, how to be intelligently imprecise.
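
To make this concrete, here is a small sketch (my example, a standard textbook one rather than anything from Friedman’s essay) of what a rigorously justified approximation looks like: the small-angle replacement of sin(θ) by θ, whose error is bounded in advance by the Taylor remainder.

```python
import numpy as np

# A "controlled approximation" in the physics sense: replace sin(theta) by theta
# (the small-angle treatment of a pendulum). The Taylor remainder guarantees,
# before any data are collected, that the error is at most theta**3 / 6.
for degrees in (1, 5, 10, 20, 45):
    theta = np.radians(degrees)
    actual_error = abs(np.sin(theta) - theta)
    a_priori_bound = theta ** 3 / 6
    print(f"theta = {degrees:2d} deg   error = {actual_error:.2e}   bound = {a_priori_bound:.2e}")
```

The approximation comes packaged with a statement of when it applies and how wrong it can possibly be, and both of those statements are derived from the theory rather than read off from data.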

Note the marked difference between this approach and Friedman’s vision of science. In Friedman’s ideal world, the “quality” of an approximation is a purely experimental, agreement-with-data issue:

. . . the relevant question to ask about the “assumptions” of a theory is not whether they are descriptively “realistic,” for they never are, but whether they are sufficiently good approximations for the purpose in hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions.

In physics, on the contrary, the quality of an approximation is determined on the theoretical side: a good approximation is one that can be derived by starting with “reality” (the scary, intractable laws of nature) and applying simplifications that can be rigorously justified. Friedman’s strange operational notion of what makes a “good approximation” leaves us powerless to distinguish models that actually approximate reality from those that simply match up to reality by accident. By Friedman’s definition, the process by which skirt lengths change is a “good approximation” of the process by which stock prices change, so long as one is well-correlated with the other. Of course this is simply wrong. In order to rightfully call something an “approximation” of something else, we must actually approximate. We must start with the complicated thing and actually show, using a real theoretical argument, that the complicated thing looks like some simpler thing in some particular case.

There are always numerous hypotheses that could conceivably explain a given data set. The process of theoretical approximation allows us to home in on the hypotheses most likely to resemble what is actually going on. The main activity of physical science is this process of “spinning off” little approximate models from a central core of physical law, models which can then be used as needed in experiments or pieces of technology.

Suppose that I am a physicist advising an aeronautical engineer. (This “advising” relationship would not happen in real life, since engineers are in the habit of knowing about the subfields of physics that are relevant to their work. I am personalizing the relationship between physics and engineering for the sake of fun and vividness.) The engineer wants to know whether or not her plane will fly. The motion of fluids — a dynamical category that includes gases like air as well as liquids — is described by a beautiful set of equations called the Navier-Stokes equations. Unfortunately, the Navier-Stokes equations, like most physical laws, cannot be directly solved. That is, we know that the fluid’s motion will be described by a function that makes the two sides of the equation equal to one another, but we don’t know a reliable method for coming up with such a function on command. What am I to tell the engineer, who wants to know how the air will flow around her plane?
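
(For reference, and purely as my own addition, the standard incompressible form of the equations is a momentum balance plus an incompressibility condition, with v the velocity field, p the pressure, ρ the density, μ the viscosity, and f any body forces such as gravity:)

```latex
% Incompressible Navier-Stokes equations (standard constant-viscosity form):
\[
  \rho\!\left(\frac{\partial \mathbf{v}}{\partial t}
    + (\mathbf{v}\cdot\nabla)\,\mathbf{v}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{v} + \mathbf{f},
  \qquad
  \nabla\cdot\mathbf{v} = 0 .
\]
```

Nothing below depends on these details; what matters is that equations like these are the “reality” that any honest model of airflow has to answer to.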

The real system is not solvable; I want to come up with a system that can be solved. Suppose I am a devotee of Milton Friedman. I thus believe that there is no reason to try to retain any of the features of the Navier-Stokes equations in my new system. That would be a misguided attempt at “realism,” and after all:

A theory or its “assumptions” cannot possibly be thoroughly “realistic” in the immediate descriptive sense so often assigned to this term. A completely “realistic” theory of the wheat market would have to include not only the conditions directly underlying the supply and demand for wheat but also the kind of coins or credit instruments used to make exchanges; the personal characteristics of wheat-traders such as the color of each trader’s hair and eyes, his antecedents and education, the number of members of his family, their characteristics, antecedents, and education, etc. . . .

. . . so long as the test of “realism” is the directly perceived descriptive accuracy of the “assumptions” – for example, the observation that “businessmen do not appear to be either as avaricious or as dynamic or as logical as marginal theory portrays them,” or that “it would be utterly impractical under present conditions for the manager of a multi-process plant to attempt . . . to work out and equate marginal costs and marginal revenues for each productive factor” there is no basis for . . . stopping short of the straw man depicted in the preceding paragraph. What is the criterion by which to judge whether a particular departure from realism is or is not acceptable?

I also believe that my new, solvable theory cannot be produced by any systematic method, for as Friedman says:

The construction of hypotheses is a creative act of inspiration, intuition, invention; its essence is the vision of something new in familiar material. The process must be discussed in psychological, not logical, categories; studied in autobiographies and biographies, not treatises on scientific method; and promoted by maxim and example, not syllogism or theorem.

Well, then. No need to bother with the headache-inducing Navier-Stokes eqs. I’ll just go home, get myself into a proper state of romantico-mystic intuitive inspiration, and stare at the velocity field graphs that my advisee has provided for me. Hmm, nothing’s coming to mind — oh, I need to feed the cat — hmm, cats — isn’t it funny how some days my cat has lots of energy and some days he has no energy? Hey, I wonder if that’s correlated with air flow! Ha, ha, maybe I have gotten a bit too mystically inspired, here. . . . well, let’s try it — (a few hours of number-crunching) — and what do you know? The fluid flow is actually well correlated with my cat’s moods! Maybe not perfectly, but the fit is pretty good (note: this imperfection of fit would be much more embarrassing in engineering than in an “inexact science” like economics, where it is the norm). I’m a genius!

To my surprise, the engineer is not happy with my suggestion. What on earth could your cat have to do with the motions of air?, she asks. Am I supposed to entrust my plane and its passengers to such a tenuous relationship? What if your cat gets ill? Would that be inevitably mirrored in the skies through some sort of invisible cosmic linkage?

Well, what am I supposed to do?, I ask.

She responds: Why can’t you use the known laws of fluid flow? Doesn’t physics know all sorts of things about the motion of fluids?

And I say: Yes, but not in this particular case, not with the particular shape of your proposed plane. And it would be misguided to try to make our model of this particular case match up with the rest of what we know about the world. That would be an interest in “realism,” and if we start being realistic, where do we stop? Certainly we can’t carry the pursuit of realism all the way to the Navier-Stokes equations, since after all those are insoluble.

Her: An approximation — any approximation — of the N-S equations would be better than this cat stuff. If you presume to tell me how air is going to move, why can’t you at least come up with something that has something to do with the motion of air?

Me: But my cat theory is a good approximation, because it fits the data in this particular case. Haven’t you read Milton Friedman?

Clearly “I” have done something very wrong in this example. It’s possible, in principle, that something crazy like the cat theory might actually work. But engineers (and economists) don’t have the time or money to test every such crazy theory. They have predictions to make, and those predictions should be made by looking at the actual system under consideration.

Of course one might object that something as crazy as the cat theory would show its true colors almost immediately, and that anything with lasting predictive value is worth investigating, no matter how crazy it looks on the surface. (Though McCloskey tells us that the skirt length thing did work “for a long while.”) For instance, when Friedman attacks the theory of imperfect competition, it is not because he thinks that the theory of perfect competition produces reliable results in all cases, but rather because he thinks it works surprisingly well a lot of the time, and that the cases where it won’t work can be easily identified beforehand. To Friedman, “perfect competition is unrealistic” is a dumb critique, since what’s so interesting about perfect competition is that it predicts so well, given that it surely doesn’t describe what people actually do.

But this is still not good enough. Friedman is OK with perfect competition because it works pretty well in a lot of cases, and after all it’s hard to do better than that in an inexact science. But Friedman’s overall method leaves us with no way to improve on its performance, and no way to invent other theories that work similarly well. If economists had actually derived perfect competition as a real, honest-to-god approximation of real (imperfect, messy) competition, the way a physicist would have done it, then we would know exactly why perfect competition works as well as it does, and exactly when it is likely to work, and exactly how well. We could prove nice things like, “under these conditions, due to the negligibility of a certain term in a certain equation, the error introduced by assuming perfect competition will be less than (some small number).” And we could use other methods to spin off all sorts of other useful approximations from the whirling confusion of reality, just the way fluid dynamicists spin off useful approximations of the intractable N-S equations (not cat models). Instead, Friedman leaves us to invent models in an unsystematic fashion, in thunderclaps of mystic inspiration, and to slavishly test those models again and again for all time. Once in a while some especially inspired theorist will come up with something that works pretty well, like perfect competition, and everyone will rejoice after years of wasted effort studying cats and skirts. But then we’re just left with our pretty-good model, sitting there disconnected from the rest of our knowledge, its agreement with the data a persistent mystery. Can we improve on it? Can we make other ones like this? Can’t we go ask the person who came up with it what they were thinking? Might they have a special system for making models that are likely to work? Forbidden, says Friedman. You don’t get to use tricks or systems to come up with your hypotheses. You don’t get to use anything else you know about the world (that’s “realism”); every time you come up with a new model you’ve got to forget everything you know. This is the sad state of Friedman’s “positive economics”: even when it succeeds, it doesn’t know why.

5 Comments
  1. Joseph Warren
    August 15, 2011 3:05 am

    I would like to apologize in advance for the lack of organization in my post. My knowledge of how economists do economics results from a haphazard array of sources, which is reflected in my concepts of these issues.

    First, I would like to say that I appreciate Friedman’s style. It is concise and straightforward, and does not require any strenuous interpretation to figure out his argument. Where he goes wrong, it is fairly simple to locate the precise logical juncture. As it is, I think his conclusions in “Methodology” are for the most part correct. The ambiguities that resulted in your concerns are likely due to a lack of knowledge of the context of his piece on your part, and a lack of depth and detail on Friedman’s, which subsequent economists have helpfully clarified.

    The most important point I think you missed was that Friedman was talking about fundamental assumptions in economics. The debate he was engaged in is one that still continues today to some degree in Political Science (for an excellent though slightly dated example, please see http://reedcollege.worldcat.org/title/pathologies-of-rational-choice-theory-a-critique-of-applications-in-political-science/oclc/30398453&referer=brief_results), related to the degree of rationality and information that economists assume market actors possess. Friedman was defending the universal use of these assumptions in microeconomics as a means for analyzing markets. Whether the specific model developed relates to the labor market, technological innovation, natural resources, etc., assumptions of utility-maximization are the means by which economists explain the actions of agents.

    The fundamental nature of assumptions of rationality in economics has bearing on two points you made. The first is how a specific model is tied to other models in the field as a whole, which I believe is made clear in the above paragraph. The other is how it is possible for predictions based upon past relationships to be extrapolated into conclusions about future events. In macroeconomics, this relates to what is called the Lucas Critique (http://en.wikipedia.org/wiki/Lucas_critique), which was first leveled as a decisive criticism of Keynesian economics. The gist of the Lucas Critique is that Keynesian models were based upon observed past relationships (“casual empiricism”) and that there was no reason to believe these relationships would stay constant in the future in the face of macroeconomic policy changes. This led to the growth of New Keynesian economics, in which the New Keynesians attempted to provide “microfoundations” to observed macroeconomic relationships. They were largely successful, but really only in confirming what economists already knew thirty years ago. Their success lay in tying macroeconomic models to key microeconomic assumptions about market behavior, the same assumptions Friedman was defending in “Methodology.”

    The reason it is vital to make these assumptions is that the social world is a complicated place. We don’t know ex ante which elements are essential and which are contingent. This ambiguity is compounded because we are forced to analyze this system from within the system. The purpose of theory is to give us information about the key relationships within a complex social reality, but the only way we know how to reliably begin to simplify is through predictive testing, as Friedman explains:

    “What is the criterion by which to judge whether a particular departure from realism is or is not acceptable? Why is it more “unrealistic” in analyzing business behavior to neglect the magnitude of businessmen’s costs than the color of their eyes? The obvious answer is because the first makes more difference to business behavior than the second; but there is no way of knowing that this is so simply by observing that businessmen do have costs of different magnitude and eyes of different color. Clearly it can only be known by comparing the effect on the discrepancy between actual and predicted behavior of taking the one factor or the other into account. Even the most extreme proponents of realistic assumptions are thus necessarily driven to reject their own criterion and to accept the test by prediction when they classify alternative assumptions as more or less realistic.”

    My knowledge of physics is extremely limited. But I do know that too many physicists-turned-economists attempt to analyze the social world in the same way they do the physical. Unfortunately, this is impossible. At this point in history, we simply do not know that much about the mechanisms at work in determining outcomes in society (and for most of what we do know you would have to talk to the psychologists). In economics, in seeking to explain social structures, we must necessarily form simplistic assumptions about the way individuals make decisions. And while economists may use what we know of businesses and agents for information (for instance, in the search for potentially correlated variables), prediction is the only reliable means we have at the moment for testing the fundamental economic assumptions.

  2. rfriel
    August 15, 2011 4:00 am

    Hey Joe!

    Friedman was defending the universal use of these assumptions in microeconomics as a means for analyzing markets. Whether the specific model developed relates to the labor market, technological innovation, natural resources, etc., assumptions of utility-maximization are the means by which economists explain the actions of agents.

    I understand that he was defending this practice. I don’t agree with his defense, though.

    Their success lay in tying macroeconomic models to key microeconomic assumptions about market behavior, the same assumptions Friedman was defending in “Methodology.”

    They were successful (in one sense or another), and they made those assumptions, but do we know that those two facts are causally related?

    Also, the Lucas Critique is itself controversial — e.g., there are some who claim that it has been empirically falsified, whatever that would mean. Yes, the Lucas Critique led some macroeconomists to start using microfoundations, and those microfoundations were typically based on assumptions about rationality. But even if this approach has been relatively successful, I don’t think we can conclude that either 1) the Lucas critique has been vindicated or 2) any given assumption about rationality has been vindicated. At the moment we’re trying to match a social reality that is (as you say) very complicated by using extremely simple mathematical models. The fact that one type of simple model outperforms another type does not mean that it’s anywhere close to modelling what’s actually going on. It’s possible to do better than something largely wrong by being also largely wrong, but slightly less so.

    The purpose of theory is to give us information about the key relationships within a complex social reality, but the only way we know how to reliably begin to simplify is through predictive testing, as Friedman explains: [quote]

    The passage you quote seems very strange to me. Friedman says: Clearly it can only be known by comparing the effect on the discrepancy between actual and predicted behavior of taking the one factor or the other into account. Apparently this comparison is supposed to inform us whether or not we can neglect things like “the magnitude of businessmen’s costs” or “the color of their eyes” when constructing our models.

    Does this mean that Friedman actually wants me to collect data on the eye colors of businessmen, and come up with some way for that to affect their behavior (we don’t care about realism, so I am free to make the behavior effects completely crazy from a common-sense standpoint), and check these models against reality? Does Friedman think we should take models like this seriously, publish them in journals, discuss them, etc.? Well, of course not. But why not? It seems to me that the real reason we do not consider models like this is a consideration that has nothing to do with formal comparison of predictions with data, but rather the sort of common-sense consideration that informs model-building in the first place. Of course businessmen don’t care about the colors of each other’s eyes, or the magnitude of the geomagnetic field, or the number of species of squid that live in the nearest body of water . . . The relevant thought is not “we can’t improve our predictions by including eye color” — that study has never been done, and I don’t think Friedman actually wants me to do it! Instead the relevant thought is “given any sane assumption about what businessmen care about, eye color can’t matter.” It’s a theoretical point, not an experimental one: “either you are committed to some utterly weird assumption about what people do, or [some factor] can be ignored with impunity.” Similar derivations could be constructed about, say, perfect competition: “the difference between imperfect competition and perfect competition is proportional to [some factor, which is negligible in a lot of cases unless you believe something zany].” But (as far as I can tell) that’s not the path Friedman argues for, in the rest of the essay.

    Or, to put the point the other way around: if we really believe that we can’t reject anything until we test it, why are we so single-minded about all these assumptions of rationality, etc.? Why not try out all sorts of strange assumptions about behavior, market structure, etc.? If someone claims that you can’t reject anything out of hand until you test it against data, that shouldn’t count as an argument in favor of using the same assumptions over and over again (which, after all, consists in rejecting everything out of hand, except for one particular framework).

    My knowledge of physics is extremely limited. But I do know that too many physicists-turned-economists attempt to analyze the social world in the same way they do the physical.

    I think that is true. But although it’s possible to imitate physics in an erroneous way, imitation of physics is not erroneous in itself.

    The reason I talk about the methodology of physics in this post is not that I think economic methodology should be identical to physics methodology, but rather that I don’t see Friedman arguing for his proposed methodology except through a sort of “this is just how science is done” pose. I wanted to point out that that is not how the more “mature” sciences do things, and while it may turn out to be the right methodology for economics, Friedman would have to actually argue that point in order to convince me of it.

    (ETA:)

    And while economists may use what we know of businesses and agents for information (for instance, in the search for potentially correlated variables), prediction is the only reliable means we have at the moment for testing the fundamental economic assumptions.

    Prediction, and also theoretical arguments that turn complicated and intractable models into simpler ones. I don’t know about you, but it makes me more confident in a simple model when it can be shown that a more realistic one reduces to it in a relevant special case. I guess I still don’t see why economists shouldn’t be making these sorts of arguments. Maybe we will never be able to climb down the whole ladder from full-on reality to simple models. Maybe even our best, most complicated pictures of “reality” will always be simplifications. But this is true in physics as well, and it does pretty well.

  3. Joseph Warren
    August 15, 2011 7:08 am

    I’m not going to respond at length right now; however, I have a few points offhand:

    1) I don’t think Friedman believes we need to test *everything* before we accept the best fit predictor; rather we should accept provisionally the most predictive assumption until something better is demonstrated. It’s Popper through and through, but I don’t think Friedman tries to hide that.

    2) We don’t necessarily know the mechanism. Some economists (like Lucas) think that people are probably actually forward-looking utility-maximizers, while others disagree. Friedman’s point (and the reason his argument is important, I think) is that whatever fits best is what we should use because absent a mechanism, that’s all we have to go with. Even the most counter-intuitive assumptions may be helpful because people act in mysterious ways. Economists don’t have four forces upon which to base all our models. Instead, they have a set of assumptions about market behavior, the results of which match reality (mostly).

    3) The reason I brought up the Lucas Critique was to demonstrate that economists are thinking systematically based upon key fundamental assumptions, contra your fluid-flow/cats caricature.

    • rfriel
      August 16, 2011 9:08 pm

      (I clicked “Reply” on your comment this time rather than just using the box at the bottom of the window. Maybe this will give you an email notification or something. It seemed like the right thing to do.)

      I guess my main point of contention with you and Friedman is that I don’t understand why a particular sort of thinking that plays an important role in physics — namely, thinking about the reason why our simple models are good at approximating more complicated systems — should have no role in economics. Maybe it should have less of a role in economics than it has in physics, though I won’t even be convinced of that until I see a compelling argument. But I don’t see why economists think that the appropriate amount of this speculation is precisely zero. Why a corner solution? What, exactly, is so undesirable about this sort of speculation? Does it have cooties?

      It occurs to me that, in a way, my point is similar to the Lucas Critique. The “observed relationships” of Old Keynesian models were good approximations to the underlying system in particular small regions of the overall parameter space of that system. The Lucas Critique says: these models are useless, because when you make the sort of policy changes that the models were built to help us understand, you will inevitably move outside of the little region in which your modelling assumptions work. So we have to go deeper, and try to understand the underlying reasons for the shifts that occur from regime to regime.

      Of course it’s hard to do that, because we don’t know what the overall structure looks like. The New Keynesian approach to “understanding” the changes is simply to take a particular, rigid set of assumptions about behavior. But I would advocate applying the Lucas Critique’s reasoning to all our assumptions, including assumptions about rational expectations (and everything else). What we observe is that our simple assumptions work very well in some cases and not very well in others — much like the “observed relationships” of old macro models. Rather than just making these assumptions rigidly all the time, and saying “oops, sorry, it’s only a social science” when they fail, why don’t we go deeper and try to investigate the underlying logic that makes these assumptions succeed sometimes and fail at other times? Sure, we may never understand that underlying logic completely, but we still may learn something valuable — so why not engage in a nonzero amount of this sort of contemplation?

      I don’t believe that “thinking systematically” is a virtue in itself. I think a shrewd, versatile human mind is smarter (and better at prediction!) than a single set of assumptions. Rather than running a single system into the ground, we should strive to gain an understanding — however provisional — of why our systems succeed when they succeed and fail when they fail.

      (P.S. In physics we still don’t understand how the four forces fit together. We don’t have a single model of reality from which we deductively produce other models; we have a bunch of little models that link up a lot of the time, but not always. And figuring out how to link them all together is one of the main tasks of theoretical physics.)

      • Joseph Warren
        August 16, 2011 11:41 pm

        I think economists do play around with assumptions. Akerlof and others have looked at the role asymmetric information plays in markets; many macro-models today incorporate both forward-looking and impatient individual behavior; a great deal of the New Keynesian project involves examining various frictions that exist in the economy. Economists are thinking about what is important for market agents that has an effect on economic outcomes. But I think it’s important to remember that economists are studying structures in society (and markets in particular). If an element in human psychology has a systemic effect on resulting social structures, then that should be included in the model; if not, then it shouldn’t be. But I agree with Friedman that the only way to determine if this is the case is through prediction, which is not to say that the work of psychologists can’t provide economists with information that they might use to develop a more predictive model (as Jeff Parker pointed out recently).

        Economics is distinct from psychology, and I think there is good reason for the separation of the two fields. Economists are seeking the essential attributes of the social system, not trying to describe how everything fits together. I don’t think any economist would ever say “it’s only a social science,” except perhaps in jest. The reason the world doesn’t always match up precisely to economic models is that people are complicated; there are all sorts of contingent factors, psychological or otherwise, but as long as these are mean-zero, it should all be fine. The role of assumptions of utility-maximization in economics isn’t to describe human agents (it’s very possible such a description fits not one person in the actual economy); instead, I think of their role as describing the constraints in which market actors operate, and explaining why we see the results that we do.

        I feel like I’ve lost a bit of focus on your original point, so I’ll refer back to what you labeled as your “main point of contention”:

        why a particular sort of thinking that plays an important role in physics — namely, thinking about the reason why our simple models are good at approximating more complicated systems — should have no role in economics.

        And I suppose I have three very brief answers: 1) we do; 2) we can’t; and 3) we shouldn’t. (1) Economists do consider various assumptions concerning market actors in studying markets (take financial markets, for example), and they do that because it’s been very difficult to explain financial crises with the traditional assumptions of rationality. But (2) how else is it possible to judge whether these additional assumptions have a systemic effect on the market under examination without testing the predictions of that model against reality? Otherwise the economist has no way of determining whether the feature observed is merely interesting or whether it’s important in explaining market outcomes. And (3) economists are not concerned with describing markets, economies, and social structures (that would be the role of historians), nor do economists want to focus on the myriad and mysterious processes of the human brain (that would be the role of psychologists); instead economists are focused on explaining why we observe certain structural outcomes, and the only way to definitively do that is to test data through correlation and prediction.

        This is my best shot at explaining my position, apologies if it’s insufficient. Beyond this I think we’ll have to discuss these issues in person.
