
February 13 2012

There's Plan A, and then there's the plan that will become your business

We need a word that captures the specific sort of pain entrepreneurs feel when their carefully developed startup ideas are met with blank indifference. All that time. All that effort. And it adds up to ... this?

"Running Lean" author Ash Maurya (@ashmaurya) doesn't have that word, but he may have something better: a method for avoiding the pain altogether. In the following interview, Maurya explains how the Running Lean process helps startups iterate from flawed "Plan A" ideas to products people want.

What is Running Lean?

Ash Maurya: Running Lean is a systematic process for quickly vetting and building successful products. Most entrepreneurs start with an initial vision: their "Plan A." Unfortunately, most Plan A ideas don't work. Running Lean helps entrepreneurs iterate from their initial Plan A to one that works — before running out of resources.

What are the early signs that a Plan A idea isn't working?

Ash Maurya: A startup is about bringing bold, new ideas to the world, and that boldness works to your advantage. Your initial goal is getting a strong signal (positive or negative) from customers, which typically doesn't require a large sample size. So, for instance, if you can't even get 10 strangers to say they want your product (or, better yet, pay for it), the problem is not going to go away by targeting 1,000 people. A strong negative signal indicates that your bold hypothesis most likely won't work, and it lets you quickly refine or abandon it.

On the other hand, a strong positive signal doesn't necessarily mean it will scale up to a significant business. But it does give you permission to move forward on the hypothesis until it can be verified later through quantitative means.
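
To put rough numbers on this small-sample logic, here is a quick back-of-the-envelope sketch in Python (our illustration, not Maurya's; the 40% and 10% "true interest rates" are assumed purely for the example):

```python
from math import comb

def prob_at_most(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): the chance of seeing k or fewer
    interested customers out of n if the true interest rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# If the true interest rate were a healthy 40%, getting at most 1 yes
# from 10 strangers would be a ~4.6% event: a strong negative signal.
print(prob_at_most(1, 10, 0.40))        # ~0.046

# Conversely, 8 or more yeses out of 10 is wildly unlikely if the true
# rate is a weak 10%: a strong positive signal from just 10 people.
print(1 - prob_at_most(7, 10, 0.10))    # ~3.7e-07
```

Either way, ten conversations are enough to move a bold hypothesis forward or send it back for refinement.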

Is there any value to writing a business plan?

Ash Maurya: Before you can start the process of iteration, you have to draw a line in the sand. You have to start by documenting your initial vision (or Plan A) and sharing it with at least one other person. Otherwise, it's too easy to endlessly iterate in your head and never be wrong.

Traditionally, business plans have been used for this purpose. But while writing a business plan is a good exercise for the entrepreneur, a business plan falls short of its intended purpose. Few people take the time to actually read business plans. More importantly, since many Plan As are likely to be proven wrong anyway, spending several weeks or months writing a 60-page business plan largely built on untested hypotheses is a form of waste.

I instead recommend using a one-page business model format called Lean Canvas. It captures the same core elements you'd find in a business plan, but because it fits on one page, it's far more concise, portable, and readable.
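
A Lean Canvas is, in effect, a small structured record. Here is a minimal sketch of it as a data structure (our illustration; the nine field names follow the canvas, everything else is arbitrary):

```python
from dataclasses import dataclass, field

@dataclass
class LeanCanvas:
    """The nine boxes of a one-page Lean Canvas."""
    problem: list = field(default_factory=list)              # top 1-3 problems
    customer_segments: list = field(default_factory=list)    # target customers, early adopters
    unique_value_proposition: str = ""                       # the single clear message
    solution: list = field(default_factory=list)             # top 3 features
    channels: list = field(default_factory=list)             # paths to customers
    revenue_streams: list = field(default_factory=list)
    cost_structure: list = field(default_factory=list)
    key_metrics: list = field(default_factory=list)          # the numbers that matter
    unfair_advantage: str = ""                               # what can't be copied or bought
```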


Why is it a bad idea to build products in stealth?

Ash Maurya: There is a fear, especially common among first-time entrepreneurs, that their great idea will be stolen by someone else. The truth is two-fold: First, most people are not capable of visualizing the potential of an idea at such an early stage; and second, they won't care. The initial challenge for most startups is getting noticed at all.

There is also a difference between stealth and obscurity. Stealth is bad because you build products in complete isolation only to find out later that you were optimizing a product no one wanted. On the other hand, obscurity is a gift. It allows you to test your product at micro-scale, getting it right, before attracting a lot of attention and scaling out.

So avoid stealth, but embrace obscurity.

Do the techniques in your book only apply to tech-centric startups?

Ash Maurya: Even though a lot of these concepts were recently popularized by tech-centric startups, I believe the principles they embody are universally applicable to products ranging from high-tech to no-tech. Several core principles in "Running Lean" date back to the last century, when Taiichi Ohno and Shigeo Shingo were laying the early groundwork for the Toyota Production System, which later became "lean manufacturing." I used these same techniques in the writing of my book, a process I share as a case study alongside several other non-tech products.

What's the connection between Running Lean and the Lean Startup?

Ash Maurya: Running Lean is a synthesis of three methodologies: Lean Startup, Customer Development, and Bootstrapping. Of the three, Running Lean draws the most from Lean Startup. While the Lean Startup, created by Eric Ries, codifies the core principles, my goal with Running Lean was to create an actionable how-to guide for putting those principles into practice.

[Note: Eric Ries is the editor of the Lean Startup Series, which includes "Running Lean."]

Why did you decide to apply Lean Startup methods to your own work?

Ash Maurya: When I was first exposed to Lean Startup, I was already running a company and on my fifth product at the time. I had built products in stealth; attempted building a platform; dabbled with open sourcing; practiced release-early, release-often; embraced "less is more"; and even tried "more is more" — all with varying degrees of success.

I saw that acting on a vision can easily consume years of your life, and I was in search of a better, faster way of vetting and building products.

The key idea from Lean Startup that resonated with me was that of rapid iteration around customer learning. Specifically, that you could almost always test the riskiest parts of a vision without having to build the product first.

As I started internalizing these principles, I had more questions than answers. That prompted my own rigorous testing and application of these principles, which led to the book "Running Lean" and several other software products I am now building.

This interview was edited and condensed.


"Running Lean" author Ash Maurya will discuss the Running Lean methods in a free webcast on Feb. 14 at 10 am PT / 1 pm ET. Register to attend.





November 09 2010

Open question: Do you trust market research surveys?

As recently as last year, actual information about ebook consumers was nearly impossible to come by. But lately, throw a virtual rock in any direction and you're likely to hit an ereading or ebook research report. Just yesterday Forrester released its latest such report, including very reportable and re-tweetable highlights such as:

  • 2010 will end with $966 million in ebooks sold to consumers.
  • By 2015, the industry will have nearly tripled to almost $3 billion, a point at which the industry will be forever altered.
  • Current ebook readers in the survey expect that an average of 51 percent of the books they read will be ebooks.
  • Four in 10 people who own or expect to buy an ereader shop at Amazon for physical books.
  • Exactly 50 percent of people who bought an ebook in the past month have bought ebooks from Amazon's Kindle store.
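
A quick back-of-the-envelope check on those headline numbers (our arithmetic, assuming the round $3 billion figure):

```python
# Implied compound annual growth rate from $966M (2010) to ~$3B (2015):
cagr = (3000 / 966) ** (1 / 5) - 1
print(f"{cagr:.1%}")   # ~25.4% per year
```

"Nearly tripled" over five years, in other words, means sustaining roughly 25 percent growth every single year.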

On the same day, management consultants Bain & Company reported that by 2015 between 15 and 20 percent of the book-reading public will own electronic devices, and up to 25 percent of books will be sold in digital form. This nice sound bite was included with the press release: "Experimenting with new formats (non-linear, hybrid, interactive or social) is where opportunity lies."


These are just the tips of the e-iceberg. The data and research are coming at an unprecedented pace from all manner of organizations, associations, private firms, and individuals.

Studies are great. Data is awesome. The ebook market has gone without much of either for far too long. But could it be that in our thirst for consumer ereading information, we're drinking down all this newly available data without stopping to check exactly what's in it? The temptation with market research is to read the cherry-picked bullet points and stop there. We need to look closer. Sometimes the small print says a whole lot more than the sound bites. It's important to understand the methodology behind the results, including factors such as:

  • How well-defined is the target population?
  • Is the sample being studied random?
  • How did the research group choose its random sample?
  • Was the sample size large enough to produce meaningful results? (A quick margin-of-error check, sketched below, helps here.)
  • Are the questions unbiased?
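
On the sample-size question in particular, a margin-of-error calculation is quick and revealing. Here is a minimal sketch, assuming a simple random sample and a 95 percent confidence level (the textbook formula, not anything specific to these reports):

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of a ~95% confidence interval for a proportion,
    assuming a simple random sample of size n (worst case p = 0.5)."""
    return z * sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(n, f"+/- {margin_of_error(n):.1%}")
# 100 -> +/- 9.8%, 400 -> +/- 4.9%, 1000 -> +/- 3.1%
```

A survey of 100 respondents can't distinguish 45 percent from 55 percent, and none of this helps if the sample wasn't actually random in the first place.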


We've banged lightly on this drum before, but given how many new study summaries have been released of late, we thought it'd be nice to open up the topic for discussion.

So here goes:

Do you trust market research surveys? What would you need to know in order to make a key business decision based on survey results?

Feel free to chime in through the comments area. You're also welcome to join us on Thursday during our first TOC LinkedIn Open House discussion, hosted by BookSwim.com's Javeen Padiyar. If you're not already a member of TOC's LinkedIn Group, it's easy to sign up.


Also of interest: "eReading Survey Findings and Research: A Look Behind the Numbers," a TOC 2011 panel to be moderated by Sarah Weinman.



January 06 2010

Unemployment, Vacancies, and Inflation Since 1951: The Movie

(Rated E: For economics audiences only)

This is from Roger Farmer's "Farewell to the natural rate: Why unemployment persists." He argues that "the relationship between unemployment and inflation is more complicated than that suggested by simple new-Keynesian models that incorporate a “natural rate” of unemployment." Why is this important?

...In two forthcoming books,... I provide a theory that explains these data. I argue that there is no natural rate of unemployment and that the economy can come to rest in a stationary equilibrium at any point on the Beveridge curve. Which equilibrium persists is decided by the confidence of households and firms that pins down asset values as reflected in housing wealth and the value of the stock market.

When households feel wealthy, that belief is self-fulfilling. Consumers spend a lot, firms hire workers, and the economy comes to rest at a point on the Beveridge curve with low unemployment and high vacancies. When the values of houses, factories, and machines fall, households spend less, firms lay off workers, and the economy comes to rest at a point on the Beveridge curve with high unemployment and low vacancies. Both situations – and anything in between – are zero-profit equilibria. ...
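
[To visualize a continuum of resting points, here is a toy sketch; the hyperbolic curve and the confidence mapping are assumed purely for illustration and are not Farmer's actual model:]

```python
# Toy Beveridge curve: u * v = k. Every point on the curve is a
# possible resting place; "confidence" selects which one the economy
# lands on.
k = 0.002   # assumed matching constant

def resting_point(confidence: float) -> tuple:
    """Map confidence in (0, 1) to a point (u, v) on the curve."""
    u = 0.12 - 0.09 * confidence   # unemployment falls as confidence rises
    return u, k / u                # vacancies implied by the curve

for c in (0.1, 0.5, 0.9):
    u, v = resting_point(c)
    print(f"confidence={c:.1f}: unemployment={u:.1%}, vacancies={v:.2%}")
```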

Policy implications

Most policymakers subscribe to the theory of the existence of a natural rate of unemployment. The data suggest that this theory is unconfirmed at best. To make the theory consistent with data, one must posit that the natural rate changes between recessions in unpredictable ways. This version of natural rate theory is difficult or impossible to refute. It is religion, not science.

For more than fifty years policy makers have been trying to hit two targets, unemployment and inflation, with one instrument, the interest rate. Recently, central bankers have discovered a second instrument – quantitative easing. I believe that quantitative easing works by influencing the value of real assets as reflected in housing wealth and the stock market and that it was successfully deployed by central banks in 2009 to maintain aggregate demand. In my two forthcoming books, I argue that quantitative easing should permanently enter the lexicon of central banking as a second instrument of monetary policy and that it will prove to be a more effective and flexible tool than fiscal policy for restoring and maintaining full employment.

I seem to be the only skeptic about the ability of quantitative easing to have a substantial impact on unemployment.

December 28 2009

'How Economics Managed to Make Amends'

Arvind Subramanian defends economics:

How economics managed to make amends, by Arvind Subramanian, Commentary, Financial Times: In 2008, as the global financial crisis unfolded, the reputation of economics as a discipline and economists as useful policy practitioners seemed to be irredeemably sunk. Queen Elizabeth captured the mood when she asked pointedly why no one (in particular economists) had seen the crisis coming. There was no doubt that, notwithstanding the few Cassandras who correctly prophesied gloom and doom, the profession had failed colossally. ...
But crises will always happen and ... their timing, form and provenance will elude prognostication. Most crises, notably the big ones, creep up on us from unsuspected quarters. ... So, if the value of economics in preventing crises will always be limited (although hopefully not non-existent), perhaps a fairer and more realistic yardstick should be its value as a guide in responding to them. Here, one year on, we can say that economics stands vindicated.
How so? Recall that the recession of the late 1920s in the US became the Great Depression, owing to a combination of three factors: overly tight monetary policy; overly cautious fiscal policy...; and dramatic recourse to beggar-thy-neighbour policies, including competitive devaluations ... and increases in trade barriers. The impact of this global financial crisis has been significantly limited because on each of these scores, the policy mistakes of the past were strenuously and knowingly avoided. ...
What is striking about the influence of economics is that similar policy responses in the fiscal and monetary areas, and non-responses in relation to competitive devaluations and protectionism, were crafted across the globe. They were evident in emerging market economies and developing countries as much as in the industrial world; in red-blooded capitalist countries as well as in communist China and still-dirigiste India. If ever there was a Great Consensus, this was it.
If the Great Depression had not happened 80 years before,...  perhaps 2009 might have turned out differently. But... We were not condemned to repeat the mistakes of history because the economics profession had learnt and distilled the right lessons from that event.
For sure, we have not learnt all the lessons; we may even have learnt some wrong ones. It is also probable that we are setting the stage for future crises... So, economics is bound to fail again. But the avoidance of the Greatest Depression that could so easily have happened in 2009 is an outcome the world owes to economics; at the least, it is the discipline’s atonement for allowing the crisis of 2008 to unfold.

December 19 2009

"How Bad Biology Killed the Economy"

Frans de Waal says many people believe that "the economy was killed by irresponsible risk-taking, a lack of regulation or a bubbling housing market, but the problem goes deeper. ... The ultimate flaw was the lure of bad biology, which resulted in a gross simplification of human nature." In particular, the reduction of human behavior to one motive, self-interest, is at fault (this is a much shortened version of the original):

How bad biology killed the economy, by Frans de Waal, RSA Journal: ...The book of nature is like the Bible: everyone reads into it what they like, from tolerance to intolerance and from altruism to greed. But it’s good to realize that, if biologists never stop talking about competition, this doesn’t mean that they advocate it, and if they call genes selfish, this doesn’t mean that genes actually are. Genes can’t be any more ‘selfish’ than a river can be ‘angry’ or sun rays ‘loving’. Genes are little chunks of DNA. At most, they are self-promoting, because successful genes help their carriers spread more copies of themselves. ...
[Many people have] fallen hook, line and sinker for the selfish-gene metaphor, thinking that if our genes are selfish, then we must be selfish, too. ... [T]oo many economists and politicians ... model human society on the perpetual struggle that they believe exists in nature, which is actually no more than a projection. Like magicians, they first throw their ideological prejudices into the hat of nature, then pull them out by their very ears to show how much nature agrees with them. It’s a trick for which we have fallen for too long. Obviously, competition is part of the picture, but humans can’t live by competition alone. ...
Lovers of open competition can’t resist invoking evolution. The e-word even slipped into the infamous ‘greed speech’ of Gordon Gekko, the corporate raider played by Michael Douglas in the 1987 movie Wall Street: “The point is, ladies and gentlemen, that ‘greed’ – for lack of a better word – is good. Greed is right. Greed works. Greed clarifies, cuts through and captures the essence of the evolutionary spirit.” ... Is the evolutionary spirit really all about greed, as Gekko claimed, or is there more to it?
This line of thinking does not just come from fictional characters. Listen to David Brooks in a 2007 New York Times column that made fun of social government programs: "From the content of our genes, the nature of our neurons and the lessons of evolutionary biology, it has become clear that nature is filled with competition and conflicts of interest." Conservatives love to believe this, yet the supreme irony of this love affair with evolution is how little most of them care for the real thing.
In a recent presidential debate, no fewer than three Republican candidates raised their hand in response to the question: “Who doesn’t believe in evolution?” American conservatives are social Darwinists rather than real Darwinists. Social Darwinism argues against helping the sick and poor, since nature intends them either to survive on their own or perish. Too bad if some people have no health insurance, so the argument goes, so long as those who can afford it do. ...
The competition-is-good-for-you logic has been extraordinarily popular ever since Reagan and Thatcher assured us that the free market would take care of all of our problems. Since the economic meltdown, this view is obviously not so hot anymore. The logic may have been great, but its connection to reality was poor. What the free-marketeers missed was the intensely social nature of our species. They like to present each individual as an island, but pure individualism is not what we have been designed for. Empathy and solidarity are part of our evolution – not just a recent part, but age-old capacities that we share with other mammals.
Many great social advances – democracy, equal rights, social security – have come about through what used to be called ‘fellow feeling’. The French revolutionaries chanted of fraternité, Abraham Lincoln appealed to the bonds of sympathy and Theodore Roosevelt glowingly spoke of fellow feeling as “the most important factor in producing a healthy political and social life”.
The ending of slavery is particularly instructive. On his trips to the south, Lincoln had seen shackled slaves, an image that kept haunting him... Such feelings motivated him and many others to fight slavery. Or take the current US healthcare debate, in which empathy plays a prominent role, influencing the way in which we respond to the misery of people who have been turned away by the system or lost their insurance. Consider the term itself – it is not called health ‘business’ but health ‘care’, thus stressing human concern for others. ...
Social creatures
Natural selection has produced highly social and cooperative animals that rely on one another for survival. On its own, a wolf cannot bring down large prey, and chimpanzees in the forest are known to slow down for companions who cannot keep up due to injuries or sick offspring. So, why accept the assumption of cut-throat nature when there is ample proof to the contrary?
Bad biology exerts an irresistible attraction. Those who think that competition is what life is all about, and who believe that it is desirable for the strong to survive at the expense of the weak, eagerly adopt Darwinism as a beautiful illustration of their ideology. They depict evolution – or at least their cardboard version of it – as almost heavenly. John D Rockefeller concluded that the growth of a large business “is merely the working out of a law of nature and a law of God”, and Lloyd Blankfein, chairman and CEO of Goldman Sachs ... recently depicted himself as merely “doing God’s work”.

We tend to think that the economy was killed by irresponsible risk-taking, a lack of regulation or a bubbling housing market, but the problem goes deeper. ... The ultimate flaw was the lure of bad biology, which resulted in a gross simplification of human nature. Confusion between how natural selection operates and what kind of creatures it has produced has led to a denial of what binds people together. Society itself has been seen as an illusion. As Margaret Thatcher put it: “There is no such thing as society – there are individual men and women, and there are families.”
Economists should reread the work of their father figure, Adam Smith, who saw society as a huge machine. Its wheels are polished by virtue, whereas vice causes them to grate. The machine just won’t run smoothly without a strong community sense in every citizen. Smith saw honesty, morality, sympathy and justice as essential companions to the invisible hand of the market. His views were based on our being a social species, born in a community with responsibilities towards the community.
Instead of falling for false ideas about nature, why not pay attention to what we actually know about human nature and the behavior of our near relatives? The message from biology is that we are group animals: intensely social, interested in fairness and cooperative enough to have taken over the world. Our great strength is precisely our ability to overcome competition. Why not design society such that this strength is expressed at every level?
Rather than pitting individuals against each other, society needs to stress mutual dependencies. This could be seen in the recent healthcare debate in the United States, where politicians played the shared-interest card by pointing out how much everybody (including the well-to-do) would lose if the nation failed to change the system, and where President Obama played the social responsibility card by calling the need for change “a core ethical and moral obligation”. Money-making cannot be allowed to become the be-all and end-all of society.
And for those who keep looking to biology for an answer, the fundamental yet rarely asked question is why natural selection designed our brains so that we’re in tune with our fellow human beings and feel distress at their distress, and pleasure at their pleasure. If the exploitation of others were all that mattered, evolution should never have got into the empathy business. But it did, and the political and economic elites had better grasp that in a hurry.

December 03 2009

"The Civil War in Development Economics"

Anything with the words "Civil War" in it is catching my attention today:

The Civil War in Development Economics, by William Easterly: Few people outside academia realize how badly Randomized Evaluation has polarized academic development economists for and against. My little debate with Sachs seems like gentle whispers by comparison.
Want to understand what’s got some so upset and others true believers? A conference volume has just come out from Brookings. At first glance, this is your typical sleepy conference volume, currently ranked on Amazon at #201,635.
But attendees at that conference realized that it was a major showdown between the two sides, and now the volume lays out in plain view the case for the prosecution and the case for the defense of Randomized Evaluation.
OK, self-promotion confession, I am one of the editors of the volume, and was one of the organizers of the conference... Angus Deaton also gave a major luncheon talk at the conference, which was already committed for publication elsewhere. A previous blog discussed his paper.
Here’s an imagined dialogue between the two sides on Randomized Evaluation (RE) based on this book:
FOR: Amazing RE power lets us identify causal effect of project treatment on the treated.
AGAINST: Congrats on finding the effect on a few hundred people under particular circumstances, too bad it doesn’t apply anywhere else.
FOR: No problem, we can replicate RE to make sure effect applies elsewhere.
AGAINST: Like that’s going to happen. Since when is there any academic incentive to replicate already published results? And how do you ever know when you have enough replications of the right kind? You can’t EVER make a generic “X works” statement for any development intervention X. Why don’t you try some theory about why things work?
FOR: We are now moving in the direction of using RE to test theory about why people behave the way they do.
AGAINST: I think we might be converging on that one. But your advertising has not yet got the message, like the JPAL ad on “best buys on the Millennium Development Goals.”
FOR: Well, at least it’s better than your crappy macro regressions that never resolve what causes what, and where even the correlations are suspect because of data mining.
AGAINST: OK, you drew some blood with that one. But you are not so holy on data mining either, because you can pick and choose after the research is finished whatever sub-samples give you results, and there is also publication bias that shows positive results but not zero results.
FOR: OK we admit we shouldn’t do that, and we should enter all REs into a registry including those with no results.
AGAINST: Good luck with that. By the way, even if you do show something “works,” is that enough to get it adopted by politicians and implemented by bureaucrats?
FOR: But voters will want to support politicians who do things that work based on rigorous evidence.
AGAINST: Now you seem naïve about voters as well as politicians. Please be clear: do RE-guided economists know something the local people do not know, or do they have different values on what is good for them? What about tacit knowledge that cannot be tested by RE? Why has RE hardly ever been used for policymaking in developed countries?
FOR: You can take as many potshots as you want, at the end we are producing solid evidence that convinces many people involved in aid.
AGAINST: Well, at least we agree on the much larger question of what is not respectable evidence, namely, most of what is currently relied on in development policy discussions. Compared to the evidence-free majority, what unites us is larger than what divides us.

[On the civil war reference: I'm at the University of Oregon, and my brother played football for Oregon State many years ago - he was a defensive end - so to the extent that either of us cares after all these years, it's a Ducks versus Beavers family war as well (the next generation seems to care more than we do).]

November 28 2009

"Catastrophe Theory and the Business Cycle"

As a follow up to the recent post on non-linear dynamics that continued the discussion on what's wrong with modern macroeconomics, here is a paper written many years ago by Hal Varian that extends the Goodwin-Kaldor model of business cycles. It is old-fashioned macro, but the interesting part is the wealth effect causing the difference between recessions and depressions. In particular, the results of the paper imply that shocks to wealth that change savings propensities -- as we are seeing now -- can cause recoveries that "may take a very long time, and differ quite substantially from the recovery pattern of a [typical] recession."

Here are a few selections from the paper:

Catastrophe Theory and the Business Cycle, by Hal Varian: In this paper we examine a variation on Kaldor's (1940) model of the business cycle using some of the methods of catastrophe theory. (Thom (1975), Zeeman (1977)). The development proceeds in several stages. Section I provides a brief outline of catastrophe theory, while Section II applies some of these techniques to a simple macroeconomic model. This model yields, as a special case, Kaldor's business cycles. ... In Section III, we describe a generalization of Kaldor's model that allows not only for cyclical recessions, but also allows for long term depressions. Section IV presents a brief review and summary.

This paper is frankly speculative. It presents, in my opinion, some interesting models concerning important macroeconomic phenomena. However, the hypotheses of the models are neither derived from microeconomic models of maximizing behavior, nor are they subjected to serious empirical testing. The hypotheses are not without economic plausibility, but they are far from being established truths. Hence, this paper can only be said to present some interesting stories of macroeconomic instability. Whether these stories have any empirical basis is an important, and much more difficult, question. ...

Applied catastrophe theory is not without its detractors (Sussman and Zahler (1978)). Some of the applied work in catastrophe theory has been criticized for being ad hoc, unscientific, and oversimplified. As with any new approach to established subjects, catastrophe theory has been to some extent oversold. In some cases, applications of the techniques may have been overly hasty. Nevertheless, the basic approach of the subject seems, to this author at least, potentially fruitful. Catastrophe theory may provide some descriptive models and some hypotheses which, when coupled with serious empirical work, may help to explain real phenomena. ...

[I]t makes sense to model the system as if the state variables adjust immediately to some "short run" equilibrium, and then the parameters adjust in some "long run" manner. In the parlance of catastrophe theory, the state variables are referred to as "fast" variables, and the "parameters" are referred to as "slow" variables. This distinction is, of course, common in economic modeling. For example, when we model short run macroeconomic processes we take certain variables, such as the capital stock, as fixed at some predetermined level. Then when we wish to examine long run macroeconomic growth processes, we imagine that the economy instantaneously adjusts to a short run equilibrium, and focus exclusively on the long run adjustment process.

Catastrophe theory is concerned with the interactions between the short run equilibria and the long term dynamic process. To be more explicit, catastrophe theory studies the movements of short run equilibria as the long run variables evolve. A particularly interesting kind of movement is when a short run equilibrium jumps from one region of the state space to another. Such jumps are known as catastrophes. Under certain assumptions catastrophes can be classified into a small number of distinct qualitative types. ... In the economic model that follows we will only utilize the two simplest catastrophes, the fold and the cusp. In these low dimensional cases, there are no restrictions on the nature of dynamical systems involved. ...
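
[A toy fold catastrophe makes the jump behavior concrete. In this illustrative sketch (ours, not the paper's), the fast variable x chases the short-run equilibrium of x' = -(x^3 - x - a) while the slow parameter a drifts; when a crosses a fold point, the branch x was sitting on vanishes and x jumps, with hysteresis on the return sweep:]

```python
dt = 0.01

def settle(x: float, a: float, steps: int = 3000) -> float:
    """Let the fast variable relax to its short-run equilibrium."""
    for _ in range(steps):
        x += dt * -(x**3 - x - a)
    return x

x = -1.0
sweep = [i / 10 for i in range(-5, 6)]   # slow parameter: -0.5 -> +0.5
for a in sweep + sweep[-2::-1]:          # ...and back down again
    x = settle(x, a)
    # x jumps to the upper branch near a = +0.4 on the way up, and only
    # jumps back near a = -0.4 on the way down (folds are at ~+/-0.385)
    print(f"a={a:+.1f}  x={x:+.3f}")
```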

[One result from the macroeconomic model] is the case considered by Kaldor (1940) and, more rigorously, by Chang and Smyth (1972). It has been shown by Chang and Smyth that when the speed of adjustment parameter is large enough, and certain technical conditions are met, there must exist a limit cycle in the phase space. In the appendix I prove a slightly simplified and modified version of this result.
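
[The limit cycle itself is easy to exhibit numerically. Here is a toy Kaldor-type simulation with assumed functional forms, in the spirit of, but not identical to, the Chang-Smyth specification:]

```python
from math import tanh

# Y' = alpha * (I(Y,K) - s*Y)   fast output adjustment
# K' = I(Y,K) - delta * K       slow capital accumulation
# Investment is S-shaped in income and declining in the capital stock.
# At these parameter values the equilibrium (Y=1, K=2) is locally
# unstable, but trajectories stay bounded, so orbits wind onto a cycle.
alpha, s, delta, beta = 10.0, 0.2, 0.1, 0.1

def investment(Y: float, K: float) -> float:
    return 0.3 * tanh(Y - 1.0) + 0.4 - beta * K

Y, K, dt = 1.05, 2.0, 0.01          # start near the unstable equilibrium
for step in range(20001):           # 200 time units of Euler integration
    if step % 2500 == 0:
        print(f"t={step * dt:5.0f}  Y={Y:6.3f}  K={K:6.3f}")
    I = investment(Y, K)
    Y, K = Y + dt * alpha * (I - s * Y), K + dt * (I - delta * K)
```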

This "business cycle" proposition is clearly the result intuited by Kaldor thirty years ago. However, the existence of a regular, periodic business cycle causes certain theoretical and empirical difficulties. Recent theoretical work involving rational expectations (Lucas (1975)) and empirical work on business cycles (McCullough (1975), (1977), Savin (1977)) have argued that (1) regular cycles seem to be incompatible with rational economic behavior, and (2) there is little statistically significant evidence for a business cycle anyway.

However, there does seem to be some evidence for a kind of "cyclic behavior" in the economy. It is commonplace to hear descriptions of how exogenous shocks may send the economy spiraling into a recession, from which it sooner or later recovers. Leijonhufvud (1973) has suggested that economies operate as if there is a kind of "corridor of stability": that is, there is a local stability of equilibrium, but a global instability. Small shocks are dampened out, but large shocks may be amplified. ...

Such a story seems to me to be a reasonable description of the functioning of the commonly described "inventory recession." [L]et us, for the sake of argument, accept such a story as providing a possible explanation of the "cyclic" behavior of an economy. Then there is yet another puzzle. Each recession in this model will behave rather similarly: First some kind of shock, then a rapid fall, followed by a slow change in some stock variables with, eventually, a rapid recovery. Although this story seems to be descriptive of some recessions, it does not describe all types of fluctuations of income. Sometimes the economy experiences depressions. That is, sometimes the return from a crash is very gradual and drawn out. ...

Here is the interesting feature of the model. Suppose as before, that there is some kind of perturbation in one of the stock variables. For definiteness let us suppose [there is] some kind of shock (a stock market crash?).... If the shock is relatively small, we have much the same story as with the inventory recession... If on the other hand the shock is relatively large, wealth may decrease so much as to significantly affect the propensity to save. In this case,... national income will remain at a relatively low level rather than experience a jump return. Eventually the gradual increase in wealth due to the increased savings will move the system slowly back towards the long run equilibrium. ...

According to this story the major difference between a recession and a depression is in the effect on consumption. If a shock affects wealth so much as to change savings propensities, recovery may take a very long time, and differ quite substantially from the recovery pattern of a recession. This explanation does not seem to be in contradiction with observed behavior, but as I have mentioned earlier, it rests on unproven (but not implausible) assumptions about savings and investment behavior.

IV. Review and summary

We have shown how nonlinearities in investment behavior can give rise to cyclic or cycle-like behavior in a simple dynamic macroeconomic model.

This behavior shares some features with empirically observed behavior. If savings behavior also exhibits nonlinearities of a plausible sort, the model can allow both for the rapid recoveries that characterize recessions and for the extended recoveries typical of a depression.

November 27 2009

On Buiter, Goodwin, and Nonlinear Dynamics

Rajiv Sethi continues the discussion on the state of modern macroeconomics:

On Buiter, Goodwin, and Nonlinear Dynamics, by Rajiv Sethi: A few months ago, Willem Buiter published a scathing attack on modern macroeconomics in the Financial Times. While a lot of attention has been paid to the column's sharp tone and rhetorical flourishes, it also contains some specific and quite constructive comments about economic theory that deserve a close reading. One of these has to do with the limitations of linearity assumptions in models of economic dynamics:
When you linearize a model, and shock it with additive random disturbances, an unfortunate by-product is that the resulting linearised model behaves either in a very strongly stabilising fashion or in a relentlessly explosive manner.  There is no ‘bounded instability’ in such models.  The dynamic stochastic general equilibrium (DSGE) crowd saw that the economy had not exploded without bound in the past, and concluded from this that it made sense to rule out, in the linearized model, the explosive solution trajectories.  What they were left with was something that, following an exogenous  random disturbance, would return to the deterministic steady state pretty smartly.  No L-shaped recessions.  No processes of cumulative causation and bounded but persistent decline or expansion.  Just nice V-shaped recessions.
Buiter is objecting here to a vision of the economy as a stable, self-correcting system in which fluctuations arise only in response to exogenous shocks or impulses. This has come to be called the Frisch-Slutsky approach to business cycles, and its intellectual origins date back to a memorable metaphor introduced by Knut Wicksell more than a century ago: "If you hit a wooden rocking horse with a club, the movement of the horse will be very different to that of the club" (translated and quoted in Frisch 1933). The key idea here is that irregular, erratic impulses can be transformed into fairly regular oscillations by the structure of the economy. This insight can be captured using linear models, but only if the oscillations are damped - in the absence of further shocks, there is convergence to a stable steady state. This is true no matter how large the initial impulse happens to be, because local and global stability are equivalent in linear models.
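
[Buiter's dichotomy is easy to see in a few lines of simulation; the maps below are illustrative toys, not any particular macro model:]

```python
def simulate(step, x0=0.1, n=60):
    """Iterate a one-dimensional law of motion from a small shock."""
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1]))
    return xs

damped    = simulate(lambda x: 0.9 * x)    # returns to steady state "pretty smartly"
explosive = simulate(lambda x: 1.1 * x)    # diverges without bound
# A nonlinear map with slope 2.5 at the origin is locally unstable, but
# the cubic term folds trajectories back: bounded, persistent fluctuation.
bounded   = simulate(lambda x: 2.5 * x - 6 * x**3)

print(round(damped[-1], 6), round(explosive[-1], 1))   # ~0.00018 vs ~30.4
print([round(x, 2) for x in bounded[-6:]])             # still bouncing around
```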

A very different approach to business cycles views fluctuations as being caused by the local instability of steady states, which leads initially to cumulative divergence away from balanced growth. Nonlinearities are then required to ensure that trajectories remain bounded. Shocks to the economy can make trajectories more erratic and unpredictable, but are not required to account for persistent fluctuations. An energetic and life-long proponent of this approach to business cycles was Richard Goodwin, who also produced one of the earliest such models in economics (Econometrica, 1951). Most of the literature in this vein has used aggregate investment functions and would not be considered properly microfounded by contemporary standards (see, for instance, Chang and Smyth 1971, Varian 1979, or Foley 1987). But endogenous bounded fluctuations can also arise in neoclassical models with overlapping generations (Benhabib and Day 1982, Grandmont 1985).

The advantage of a nonlinear approach is that it can accommodate a very broad range of phenomena. Locally stable steady states need not be globally stable, so an economy that is self-correcting in the face of small shocks may experience instability and crisis when hit by a large shock. This is Axel Leijonhufvud's corridor hypothesis, which its author has discussed in a recent column. Nonlinear models are also required to capture Hyman Minsky's financial instability hypothesis, which argues that periods of stable growth give rise to underlying behavioral changes that eventually destabilize the system. Such hypotheses cannot possibly be explored formally using linear models.

This, I think, is the point that Buiter was trying to make. It is the same point made by Goodwin in his 1951 Econometrica paper, which begins as follows:
Almost without exception economists have entertained the hypothesis of linear structural relations as a basis for cycle theory. As such it is an oversimplified special case and, for this reason, is the easiest to handle, the most readily available. Yet it is not well adapted for directing attention to the basic elements in oscillations - for these we must turn to nonlinear types. With them we are enabled to analyze a much wider range of phenomena, and in a manner at once more advanced and more elementary. 
By dropping the highly restrictive assumptions of linearity we neatly escape the rather embarrassing special conclusions which follow. Thus, whether we are dealing with difference or differential equations, so long as they are linear, they either explode or die away with the consequent disappearance of the cycle or the society. One may hope to avoid this unpleasant dilemma by choosing that case (as with the frictionless pendulum) just in between. Such a way out is helpful in the classroom, but it is nothing more than a mathematical abstraction. Therefore, economists will be led, as natural scientists have been led, to seek in nonlinearities an explanation of the maintenance of oscillation. Advice to this effect, given by Professor Le Corbeiller in one of the earliest issues of this journal, has gone largely unheeded.
And sixty years later, it remains largely unheeded.

November 22 2009

"What if a Recovery Is All in Your Head?"

Robert Shiller wonders if the recovery is based upon a self-fulfilling prophecy:

What if a Recovery Is All in Your Head?, by Robert J. Shiller, Commentary, NY Times: Beyond fiscal stimulus and government bailouts, the economic recovery that appears under way may be based on little more than self-fulfilling prophecy.
Consider this possibility: after all these months, people start to think it’s time for the recession to end. The very thought begins to renew confidence, and some people start spending again — in turn, generating visible signs of recovery. This may seem absurd, and is rarely mentioned... but economic theorists have long been fascinated by such a possibility.
The notion isn’t as farfetched as it may appear. As we all know, recessions generally last no more than a couple of years. The current recession ... is almost two years old. According to the standard schedule, we’re due for recovery. Given this knowledge, the mere passage of time may spur our confidence, though no formal statistical analysis can prove it.
Certainly, people did not always believe that there is a regular “business cycle” that starts and stops in a definite pattern. The idea began to spread in the popular consciousness in the 1920s and reached full bloom in the ’30s — with one major complication, the Great Depression... “Recession,” a kinder, gentler term, began to be used around the time of the 1937-38 contraction to refer to a normal downturn in the business cycle. ...
Recessions, as the term came to be used, implied timetables that mark their expected end. Uttering the word does not risk damaging confidence, at least not fundamentally. A diagnosis of a recession can be shrugged off as something from which you will recover... A depression came to be another matter entirely.

It wasn’t until 1948 that the Columbia University sociologist Robert K. Merton wrote an article ... titled “The Self-Fulfilling Prophecy,” using the Great Depression as his first example. He is often credited with having invented the “self-fulfilling prophecy” phrase...
In important ways, we are still using that 1930s pattern of thinking. We are instinctively fearful of reckless talk about depressions, and we try to support one another’s confidence. We like the idea that modern scientific economics seems to show that all recessions end in due course.
For now, our common efforts at building confidence appear to be working somewhat. But the economy has still not recovered, by any means. ...
The problem might be put this way: There is still a nagging doubt afloat that the current event is really just another example in that long sequence of recessions. In which mental category does the current contraction belong: recession or depression? We may still be at a tipping point. To the extent that the theory of the self-fulfilling prophecy is correct, there is a case for continued vigilance, to ensure that adverse events don’t encourage widespread talk of the second category.
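
[The tipping-point logic is easy to sketch. In this toy illustration (ours, not Shiller's model), next period's confidence is an S-shaped function of this period's, so both pessimism and optimism are self-confirming:]

```python
from math import tanh

def next_confidence(c: float) -> float:
    """Next period's confidence as a steep S-curve of today's."""
    return 0.5 + 0.5 * tanh(6 * (c - 0.5))

for c0 in (0.45, 0.55):        # start just below / just above the tipping point
    c = c0
    for _ in range(30):
        c = next_confidence(c)
    print(f"start at {c0:.2f} -> settle near {c:.2f}")
# 0.45 slides to the depressed equilibrium (~0.00); 0.55 climbs to the
# confident one (~1.00). Which story people believe decides where we land.
```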

Barry Ritholtz responds:

How Overrated is Sentiment in Economics?, by Barry Ritholtz: There is a small cadre of Economists — original thinkers, contrarians, out of the box theorists — I respect a great deal. It is a modest list ranging from Richard Thaler to David Rosenberg to Robert Shiller, with lots of econ wonks in between.

This morning, however, I find myself somewhat disagreeing with the main premise of Professor Shiller’s NYT column. ...

It is with some trepidation that I point out what I find to be flaws in Shiller’s discussion about the recovery... It's a thought-provoking but unpersuasive argument... To be fair, he uses the column to provoke a debate, rather than defend the position that the recovery is “all mental.” ...

Here are 10 items that challenge the column’s main premise:

1. Time: The typical recession lasts 8 months; we are now in month 23. If people started to spend because they sensed it was “late in the recession”..., well then, that would have been somewhere around August 2008.

2. Not Totally Irrational: One of my complaints about economics is it over-emphasizes people as rational, unemotional actors. However, when it comes to sentiment, economics seems to make the same mistake in the opposite direction — it assumes that people are foolish, unthinking creatures unable to engage in ANY rational thought whatsoever. ... The reality is quite different: Sometimes, people behave the way they do because they have figured out a problem and are responding to it intelligently.

3. Healthy Fear of Job Loss: Employed people began to spend their money more carefully when they saw coworkers getting laid off in increasing numbers. That is a rational act in the face of an increasing possibility of a loss of income. This is unlikely to change in the near future, so long as large public layoffs remain a news item.

4. Asset Deflation: Consumers cut back their spending when they saw their biggest assets (homes, stocks) lose significant value. Again, a rational response to a change in personal financial conditions.

5. False Belief System: Earlier this year, the Dow had dropped over 5,000 points in 6 months. One of the collective fallacies our culture operates under is the delusion that the market is some kind of astute forecasting machine. It is not — it represents the collective wisdom of 10 million panicked monkeys. This is not a sentiment error, but is actually a faulty belief system. That millions of slightly clever, pants wearing primates can combine their collective ignorance, their intellectual foibles, biases and false beliefs somehow into something resembling intelligence was one of the false beliefs of the era. Unfortunately, this is a condition the monkeys are prone towards.

6. Doom Warnings Began Making Sense: Many of the doomsayers have been warning of the coming apocalypse for years. ... Why did this group suddenly gain traction in 2008? Maybe it was because the population is not as stupid as the politicians believe, and saw with their own eyes the decay in the economy. Suddenly, the warnings were not as far-fetched as they previously seemed.

7. Reacting to Flat Income: Families have recognized their incomes have remained flat to negative over the past decade, while their expenses have increased. What should be the rational reaction to this realization? (Hint: a new car, a bigger house, a new vacation are not on the list of options).

8. Time to Exit the Bunkers: Ten months ago, people were betting the economic world was coming to an end. The economy was in freefall, and people had dramatically reduced spending. The freefall is now over, and while it's arguable whether the recession is over ... we can all agree the Great Recession ended in the spring of ‘09. The US consumer is no longer frozen like deer in headlights.

9. The Cheerleaders Now Look Like Fools: At the onset of a recession, we often see cheerleaders, OpEd writers, and money-losing fund managers make the argument that there is no economic slowdown — that the weakness is only in people’s minds. I call these people the Pervasive Pollyannas of Prosperity. (Think Phil Gramm, Amity Shlaes, Don Luskin). Some are partisans, others are dumb, others still merely incompetent — a few are all three. Yet despite the best efforts of the cheerleaders, the economy still went into freefall.

10. Deleveraging: We know why this recession was so deep and long — the wanton use of leverage by people and financial institutions. The deleveraging that is taking place is a long slow process. It is rational, it is intelligent, and it will be how families will restore their balance sheets — the paradox of thrift be damned . . .

I find that I have a knee-jerk, negative reaction to explanations based upon mass psychology, sentiment, story-telling, and the like. I have to consciously force myself not to dismiss them. I'm not sure why that is, though it probably has something to do with a feeling that such explanations aren't scientific, and hence have no place in serious academic investigations. That is, prior to the crisis I thought that the real economy drove sentiment, and not the other way around. Sentiment could definitely provide a feedback loop that strengthens negative or positive economic shocks, but psychology was not the prime mover. Thus, sentiment changes that did not have evidence to support them would quickly die out before having much, if any, effect.

But this crisis has caused me to reevaluate. I still find the Shiller-type animal spirits, psychology based explanations hard to swallow, but when the foundation supporting your beliefs is called into question (in this case modern macroeconomic models), it's important to open your mind and at least give alternative explanations a chance. That's particularly true when the person pushing the stories has a pretty darn good record of using them to warn of bubbles, as Shiller does. So I'm trying.

November 21 2009

Stabilities and Instabilities in the Macroeconomy

More on what's wrong with modern macro, this time from Axel Leijonhufvud:

Stabilities and instabilities in the macroeconomy, by Axel Leijonhufvud, Vox EU: Fifty-some years ago, students were taught that the private sector had no tendency to gravitate to full employment, that it was prone to undesirable fluctuations amplified by multiplier and accelerator effects, and that it was riddled with market failures of various sorts. But it was also believed that a benevolent, competent, democratic government could stabilize the macroeconomy and reduce the welfare consequences of most market failures to relative insignificance.

Fifty years later, around the beginning years of this century, students were taught that representative governments produce pointless fluctuations in prices and output but, if they can be constrained from doing so – by an independent central bank, for example – free markets are sure to produce full employment and, of course, many other blessings besides. Macroeconomic policy doctrine had shifted from stabilizing the private to constraining the public sector.

This long swing in our understanding of the economy spans a half-century of prolific technical accomplishments in economics (Blanchard 2008). But what the story shows is that, ontologically, economics has been completely at sea, drifting on the surface in currents of our own making. We lack an anchored understanding of the nature of the reality that economics is supposed to illuminate.

Neoclassical syntheses

Around the turn of the century the pendulum began to swing back – although not very far. “Freshwater” and “saltwater” macroeconomists came to a “brackish” compromise known as the New Neoclassical Synthesis. The New Keynesians adopted the dynamic stochastic general equilibrium (DSGE) framework pioneered by the New Classicals while the latter accepted the market “frictions” and capital market “imperfections” long insisted upon by the former.

This New Synthesis, like the Old Synthesis of fifty years ago, postulates that the economy behaves like a stable general equilibrium system whose equilibrating properties are somewhat hampered by frictions. Economists of this persuasion are now struggling to explain that what has just happened is actually logically possible. But the recent crisis will not fit.

The syntheses, Old and New, I believe, are wrong. They stem from a fundamental misunderstanding of the nature of a market economy. Further technical innovations in economic modeling will not bring real progress as long as “stability-with-frictions” remains the ruling paradigm. The genuine instabilities of the modern economy have to be faced.

A complex adaptive system

The economy is an adaptive dynamical system. It possesses the self-regulating, “equilibrating” properties that we usually refer to as “market mechanisms”. But these mechanisms do not always suffice to ensure the coordination of activities in the complex system. Almost forty years ago, I proposed the “corridor hypothesis”. The hypothesis suggested that the economy might show the desirable “classical” adjustment properties within some “corridor” around a hypothetical equilibrium path but that its self-regulating capabilities would be impaired in the “Keynesian” regions outside the corridor. For large displacements from equilibrium, therefore, the market system might not be able to recover unless aided by stabilization policy.
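
[The corridor is easy to caricature in one line of dynamics. In this toy sketch (ours, not Leijonhufvud's formal argument), x' = -x(1 - (x/c)^2) is self-correcting for displacements inside the corridor |x| < c and deviation-amplifying outside it:]

```python
c, dt = 1.0, 0.01   # corridor half-width and Euler step (assumed values)

def displace(x0: float, steps: int = 5000) -> float:
    """Follow a displacement of size x0 for 50 time units."""
    x = x0
    for _ in range(steps):
        x += dt * -x * (1 - (x / c) ** 2)
        if abs(x) > 100:                 # escaped: no self-recovery out here
            return float("inf")
    return x

print(displace(0.8))   # small shock: decays back toward equilibrium (~0)
print(displace(1.2))   # large shock: amplifies away (inf)
```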

The original argument for the corridor concerned the conditions under which to expect significant deviation-amplifying multiplier effects and might not be all that persuasive by itself. It is the case, however, that all other known complex dynamical systems, whether human-made or occurring in nature, have the property that their homeostatic capabilities are bounded. It is extremely unlikely that the economy would be different in this regard.

It is reasonable to believe, therefore, that the state-space of the system – in addition to regions with good equilibrating properties – has regions where deviation-amplifying processes have impaired these properties. But the story does not end there. The present crisis has shown us a whole array of destabilizing, positive feedback processes that are not as tightly bounded as the multiplier. Deleveraging by banks, for example, cuts off credit to businesses, which leads to a recession, which in turn impairs bank assets and adds to the incentive to shorten bank balance sheets. The most dangerous of these destabilizing feedback loops, which we have so far managed to avoid, is Fisherian debt-deflation. There are regions of the state-space that should be avoided at all cost.
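
[The arithmetic of such feedback loops is worth a quick sketch (illustrative numbers, not a calibrated model). Each round of asset losses forces deleveraging that cuts credit and output, which impairs assets again; everything hinges on whether the loop gain stays below one:]

```python
def total_loss(initial: float, gain: float, rounds: int = 100) -> float:
    """Sum the rounds of a loss -> credit-cut -> output-drop -> loss loop,
    where each round is `gain` times the previous one."""
    loss, hit = 0.0, initial
    for _ in range(rounds):
        loss += hit
        hit *= gain
    return loss

print(total_loss(100, 0.50))   # ~200: bounded, like an ordinary multiplier
print(total_loss(100, 0.95))   # ~1988: still bounded, but ruinous
print(total_loss(100, 1.05))   # grows without limit: debt-deflation territory
```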

This kind of impulse-propagation reasoning asks what the system's behaviour will be if it is displaced far from equilibrium. It treats the impulse as exogenous and misses, therefore, the possibility of endogenously generated instability.

We have known about the endogenous instability of fractional reserve banking for some 200 years. It is Hyman Minsky's contribution to have explained that this financial instability extends beyond just the commercial banking system. Minsky argued that a long period without crises – such as the late “Great Moderation” – would lead to an increased willingness to assume risk and thus cause the system to become financially fragile. And the fragile system will sooner or later crash.

Systemic problems

The currently pressing problems all concern instabilities that have been neglected in stable-with-frictions macro theory. They constitute three themes I discussed in more detail in previous Vox columns (Leijonhufvud, June 2007, January 2009, and July 2009).

  • Instability of leverage. Competing to achieve rates of return several times higher than returns in industry, financial institutions were at historically high levels of leverage towards the end of the boom, earning historically minimal risk spreads – and carrying large volumes of assets soon to be revealed as “toxic.”
  • Connectivity. In the US, under the Glass-Steagall regulations, the financial system had been segmented into distinct industries each characterized by the type of assets they could invest in and liabilities they could issue. Firms in different industry segments were not in direct competition with each other. Deregulation has dramatically increased connectivity in the global network of financial institutions. The crisis of the American savings and loan industry in the 1980s, although costly enough, was confined to that market segment. The present crisis also started in American home finance but has spread and amplified across the world.
  • Potential instability of the price level. Over the current decade, the American consumer goods price level has been stabilized largely through the exchange rate policies and competitive exports of China and several other export-oriented emerging countries. The Great Moderation has left a legacy of low volatility of inflation expectations. If these conditions were to change, inflation targeting with endogenous base money and the federal funds rate as the only instrument is bound to prove inadequate for monetary control.

Current issues

There are four issues to watch for:

  • Twin dangers looming ahead are Japanese-style stagnation on the one hand and Latin-American-style high inflation on the other. In more normal times, we would regard these prospects as both unlikely and very far apart on a spectrum of eventualities. High levels of public debt, large unfunded liabilities, and large current deficits mean that they are not at all far apart in the current situation. The apparent political difficulties in decisively remedying the public finances are likely to mean that this is not just a temporary predicament. The navigable channel between Scylla and Charybdis has become quite narrow.
  • One overwhelmingly important fact should guide policy over the near-term future – since current bailouts and stimulus policies have stretched public finances to the utmost, governments do not have the fiscal resources to handle another bubble bursting. Policy, therefore, should be conducted in a fail-safe mode. The current policies of extremely low interest rates are not fail-safe. They are aimed at reflating asset prices just enough to stave off a deeper recession. This is a delicate operation, not a robust, fail-safe move. It is creating strong incentives for the banks to return to the tables and resume the game of maturity transformation at high leverage that got us into our current troubles in the first place. It is evident that the banks are responding promptly to those incentives.
  • High leverage has been the big culprit in the current disaster. To reduce the risk of another crash, we must curb leverage. But governments do not want the financial sector to deleverage now because the requisite falling asset prices and curtailed credit would deepen the recession. The question, of course, is: If not now, when?
  • The central banks are planning “exit strategies” by which they mean returning their balance sheets, which are presently bloated beyond recognition with a mix of strange assets, to a condition more resembling that normal to central banks. This will not be easy. If they succeed, however, they will still face the prospect of having to engage in many of the same desperate, unconventional policies in a future crisis. Under present arrangements, the responsibilities of central banks have no well-defined limits. This problem can only be solved by regulation of the financial sector. At present, it does not seem that we know how to do it.

References

Blanchard, Olivier (2008), “The state of macro,” NBER Working Paper 14259.

Leijonhufvud, Axel (2007), “The perils of inflation targeting”, VoxEU.org, 25 June.

Leijonhufvud, Axel (2009), “Fixing the crisis: Two systemic problems”, VoxEU.org, 12 January.

Leijonhufvud, Axel (2009), “Curbing instability: policy and regulation”, VoxEU.org, 11 July.

Leijonhufvud, Axel (2009), “Macroeconomics and the Crisis: A Personal Appraisal”, CEPR Policy Insight 41, November.


November 19 2009

Top-Down versus Bottom-Up Macroeconomics

When Paul De Grauwe presented this paper at the What's Wrong with Modern Macroeconomics conference (papers here), his argument that rational expectations models are the intellectual heirs of central planning seemed to ruffle a few feathers:

Top-down versus bottom-up macroeconomics, by Paul De Grauwe, Commentary, Vox EU: There is a general perception today that the financial crisis came about as a result of inefficiencies in the financial markets and economic actors’ poor understanding of the nature of risks. Yet mainstream macroeconomic models, as exemplified by the dynamic stochastic general equilibrium (DSGE) models, are populated by agents who are maximising their utilities in an intertemporal framework using all available information including the structure of the model – see Smets and Wouters (2003), Woodford (2003), Christiano et al. (2005), and Adjemian, et al. (2007), for example. In other words, agents in these models have incredible cognitive abilities. They are able to understand the complexities of the world, and they can figure out the probability distributions of all the shocks that can hit the economy. These are extraordinary assumptions that leave the outside world perplexed about what macroeconomists have been doing during the last decades.
Evidence on rationality from other sciences
These developments in mainstream macroeconomics are surprising for other reasons. While macroeconomic theory enthusiastically embraced the view that some if not all agents fully understand the structure of the underlying models in which they operate, other sciences like psychology and neurology increasingly uncovered the cognitive limitations of individuals (see e.g. Kahneman 2002, Camerer et al. 2005, Kahneman and Thaler 2006, and Della Vigna 2007). We learn from these sciences that agents only understand small bits and pieces of the world in which they live, and instead of maximising continuously taking all available information into account, agents use simple rules (heuristics) in guiding their behaviour (Gigerenzer and Todd 1999). The recent financial crisis seems to support the view that agents have limited understanding of the big picture. If they had understood the full complexity of the financial system, they would have understood the lethal riskiness of the assets they piled into their portfolios.
Top-down and bottom-up models
In order to understand the nature of different macroeconomic models, it is useful to make a distinction between top-down and bottom-up systems.
  • In its most general definition, a top-down system is one in which one or more agents fully understand the system. These agents are capable of representing the whole system in a blueprint that they can store in their mind. Depending on their position in the system, they can use this blueprint to take command or to optimise their own private welfare. An example of such a top-down system is a building that can be represented by a blueprint and fully understood by the architect.
  • Bottom-up systems are very different in nature. These are systems in which no individual understands the whole picture. Each individual understands only a very small part of the whole. These systems function as a result of the application of simple rules by the individuals populating the system. Most living systems follow this bottom-up logic (see the beautiful description of the growth of the embryo by Dawkins 2009).
The market system is also a bottom-up system. The best description made of this bottom-up system is still the one made by Hayek (1945).
Hayek argued that no individual is capable of understanding the full complexity of a market system. Instead, individuals only understand small bits of the total information. The main function of markets consists in aggregating this diverse information. If there were individuals capable of understanding the whole picture, we would not need markets. This was in fact Hayek’s criticism of the “socialist” economists who took the view that the central planner understood the whole picture and would therefore be able to compute the whole set of optimal prices, making the market system superfluous.
Rational expectations models as intellectual heirs of central planning
My contention is that the rational expectations models are the intellectual heirs of these central-planning models. Not in the sense that individuals in these rational expectations models aim at planning the whole, but in the sense that, as the central planner, they understand the whole picture. These individuals use this superior information to obtain the “optimum optimorum” for their own private welfare. In this sense, they are top-down models.
In a recent paper, I contrast the rational expectations top-down model with a bottom-up macroeconomic model (De Grauwe 2009). The latter is a model in which agents have cognitive limitations and do not understand the whole picture (the underlying model). Instead, they only understand small bits and pieces of the whole model and use simple rules to guide their behaviour. I introduce rationality in the model through a selection mechanism in which agents evaluate the performance of the rule they are following and decide to keep or change their rule depending on how well it performs relative to other rules. Thus agents in the bottom-up model learn about the world in a “trial and error” fashion.
These two types of models produce very different insights. I mention three differences here. First, the bottom-up model creates correlations in beliefs that in turn generate waves of optimism and pessimism. The latter produce endogenous business cycles which are akin to the Keynesian “animal spirits” (see Akerlof and Shiller 2009).
Second, the bottom-up model provides a very different theory of the business cycle compared to the business cycle theory implicit in the rational expectations (DSGE) models. In the DSGE models, business cycle movements in output and prices arise because rational agents cannot adjust their optimal plans instantaneously after an exogenous disturbance. Price and wage stickiness prevent such instantaneous adjustment. As a result, these exogenous shocks (e.g. productivity shocks, or shocks in preferences) produce inertia and business cycle movements. Thus it can be said that the business cycle in DSGE models is exogenously driven. As an example, in the DSGE model, the financial crisis and the ensuing downturn in economic activity are the result of an exogenous and unpredictable increase in risk premia in August 2007.
In contrast to the rational expectations model, the bottom-up model has agents who experience an informational problem. They do not fully understand the nature of the shock or its transmission. They use a trial-and-error learning process aimed at distilling information. This process leads to waves of optimism and pessimism, which in a self-fulfilling way create business cycle movements. Booms and busts reflect the difficulties of economic agents trying to understand economic reality. The business cycle has a large endogenous component. Thus, in this bottom-up model, the financial crisis and the ensuing economic downturn should be explained by the previous boom.
Finally, the bottom-up model confirms the insight obtained from mainstream macroeconomics (including the DSGE models) that credible inflation targeting is necessary to stabilise the economy. However, it is not sufficient. In a world where waves of optimism and pessimism (animal spirits) can exert an independent influence on output and inflation, it is in the interest of the central banks not only to react to movements in inflation but also to movements in output and asset prices so as to reduce the booms and busts that free market systems produce quite naturally. ...
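To make the switching mechanism concrete, here is a minimal Python caricature of the kind of model De Grauwe describes; it is our own sketch, not a reproduction of De Grauwe (2009), and the output equation, the parameter values, and the variable names are all invented. Agents forecast with either an optimistic or a pessimistic rule, and the population share of each rule responds to its recent squared forecast errors through a discrete-choice formula:

    import math, random

    random.seed(0)
    beta = 2.0                # intensity of choice in the switching rule
    bias = 0.5                # optimists forecast +bias, pessimists -bias
    y = 0.0                   # output gap
    frac_opt = 0.5            # share of agents using the optimistic rule
    err_opt = err_pes = 1.0   # running mean squared forecast errors

    for t in range(1, 201):
        market_forecast = (2 * frac_opt - 1) * bias
        y = 0.9 * y + 0.3 * market_forecast + random.gauss(0, 0.1)
        # update each rule's forecasting record
        err_opt = 0.9 * err_opt + 0.1 * (y - bias) ** 2
        err_pes = 0.9 * err_pes + 0.1 * (y + bias) ** 2
        # discrete-choice switching: the better rule attracts followers
        w_opt = math.exp(-beta * err_opt)
        w_pes = math.exp(-beta * err_pes)
        frac_opt = w_opt / (w_opt + w_pes)
        if t % 50 == 0:
            print(f"t={t:3d}  output gap={y:+.2f}  optimist share={frac_opt:.2f}")

The self-fulfilling loop is visible in the code: a majority of optimists raises output, which improves the optimists' forecasting record, which attracts still more optimists, until a run of bad shocks flips the mood. Waves of optimism and pessimism emerge endogenously from the noise.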

November 12 2009

What's Wrong with Modern Macroeconomics: Comments

I really hope that the conversation in the comments to the post What's Wrong with Modern Macroeconomics: Conference papers will continue:

Barkley Rosser said... I should have guessed that de Grauwe might have a good paper.
So, Mark, what have you to say to the assembled masses there that you are willing to report back to us about the spot-on paper by Alan Kirman?
reason said in reply to Barkley Rosser... Barkley,
thanks for the tip. The Kirman paper is my reading on the train now.
Mark Thoma said... One thing I learned from it is that I need to read the old papers by Sonnenschein (1972), Mantel (1974), and Debreu (1974) since these papers appear to undermine representative agent models. According to this work, you cannot learn anything about the uniqueness of an equilibrium, whether an equilibrium is stable, or how agents arrive at equilibrium by looking at individual behavior (more precisely, there is no simple relationship between individual behavior and the properties of aggregated variables - someone added that the axiom of revealed preference doesn't even survive aggregating two heterogeneous agents).
I need to learn the full extent to which this work undermines the whole microfoundations approach (hence Kirman's call to study the properties of networks so as to generate endogenous cycles from phase transitions rather than trying to model individual agents from first principles - that's not his sole reason for wanting to turn to this approach, but it's part of it).
I didn't understand the extent to which representative agent models are an analytical convenience to work around this problem (the DSGE theorists who understood this kept quiet about it).
You can get interactions among agents while maintaining identical agents, i.e. network effects do not necessarily require heterogeneity, but the most interesting cases, it seems to me, do involve both heterogeneity and agents whose decisions are interdependent.
(Robert Solow said all you get is that excess demands have to sum to zero, i.e. Walras Law and a couple of other properties, but it's not much).
So that was the most important thing I learned, and it's something I should have known already. Once I learn a bit more about the results in these papers, I hope to post something about it. (A small part of my remarks wondered if learning models might not help to overcome this problem since they might give you a path to the equilibrium, and the number of equilibrium paths might be determinate and equal to one giving uniqueness, but it was pure speculation).
Barkley Rosser said... Mark,

The missing man in the critique of the representative agent model is Michael Jerison of SUNY-Albany. In his much-cited JEP paper on "Whom does the representative agent represent?", Kirman used (and cited) an example from a still unpublished paper by the sadly neglected Jerison.
Kirman is an interesting figure in all this as he was originally a general equilibrium theorist (well, originally a game theorist, and then a GE theorist). While he has differences with his old GE comrades, they all respect his critique because it does start with the SMD theorem, which is well known to all of them and very fundamentally disruptive. The DSGE modelers conveniently cover that one up, along with some other major problems.
Of course, in a world of multiple equilibria, learning may lead you anywhere.
Sebastion said... A good reference for anyone with access to the Mas-Colell et al micro textbook is their chapter 4 on aggregate demand and in particular chapter 4D on the existence of a representative consumer. Unfortunately, chapter 4 is usually NOT taught in standard micro classes.
Roberto Cruccolini said... And it might be good after having read chapter 4 to continue with chapter 17E in Mas-Colell, called "Anything goes: The Sonnenschein-Mantel-Debreu Theorem". There you are, even in the advanced-micro bible you can read about those results, which are at the very least extremely interesting for modern "microfounded" macro.
I think this phenomenon of forgetting and/or neglecting former knowledge, e.g. the whole discussion of aggregation, which is really central to the methodology of modern macro since the notion of microfoundations via optimizing agents was one of the core arguments of the New Classical macro revolution, is deeply unsettling. You could also add the oblivion of coordination & interaction problems, and of discontinuities & emergence as probably central aspects of macroeconomics, which seem to be the reasons why macro was once thought to be necessarily a different approach than micro.
There seem to be two ways to deal with this finding. One is to complain about the way modern macro has developed, and to suggest other/better solutions; this is, what we see most of the time right now.
That is of course worthwhile and understandable, but there remains a strange aspect: nearly all of today's criticisms were already made 20 or 30 years ago (recall Solow 1978 at the same conference as Lucas & Sargent, or Summers 1986 in response to Prescott, or Blinder 1987, or, which is sort of funny, Kirman 1989 & 1992 and - again - 2009, ...)
And this leads to the interesting question, why modern macro/new classical methodology & thinking was so successful in conquering the field:
why did economists think that Lucas (1976) said something new, given Marshall's reflections on how to theorize about ever-changing structures, given Haavelmo's ideas on the autonomy of economic relations, given the debates between Keynes and Tinbergen on econometrics and structural instability, and so on?
And why did they follow him in applying a Walrasian program using the representative-agent methodology, given all these challenging aggregation results (see, for the macro production function, Fisher 1969 or Fisher & Felipe 2003 & 2006 and the large literature on the Cambridge Capital Controversies, and of course the literature interpreting the Sonnenschein-Mantel-Debreu results, e.g. Rizvi 1994 or 2006)?
And in what sense does it make sense to describe modern macro models as "microfounded" if, at the same time, you need some Friedman-1953 as-if argument to justify and make plausible your way of modeling? On that argument, the inner functioning of your model is a black box, built only to generate predictions, and certainly not claimed to be reasonably realistic or to correspond to mechanisms in the real world. Why, then, is this commonly called "microfoundations"?
johnchx said... MT wrote: "One thing I learned from it is that I need to read the old papers by Sonnenschein (1972), Mantel (1974), and Debreu (1974) since these papers appear to undermine representative agent models."
Yes; I have the impression that it's been clear for a very long time that representative agent models lack microfoundations -- that is, that classical micro assumptions are insufficient to support the existence of a representative agent, that the necessary conditions are unknown, and that the known sufficient conditions are extremely restrictive (e.g. perfectly homogeneous agents). I've tended to view this as evidence that many of those advancing the banner of microfoundations simply weren't serious.
If one really cared about understanding macro fluctuations in terms of the behavior of individual households and firms, one would study the behavior of individual households and firms; a representative agent framework would be entirely unsatisfactory.
I think it's also telling that most of the famous names associated with microfoundations in macro have been theoreticians rather than microeconometricians. I suspect that we may be able to learn more about real microfoundations of macro from, say, James Heckman than from Lucas or Barro.
Am I wildly off-base?
Herman said... Mark Thoma,
May I suggest elevating Roberto Cruccolini's very interesting comment above as a separate new post.
reason said in reply to Herman... Or even a post on Steve Keen's book "Debunking Economics", which covers most of the same territory in a very readable fashion.

November 10 2009

"What Computer Science Can Teach Economics"

Can real-world agents actually find the Nash equilibrium that is used to describe their behavior in economic models?:

What computer science can teach economics, by Larry Hardesty, MIT News Office:  Computer scientists have spent decades developing techniques for answering a single question: How long does a given calculation take to perform? Constantinos Daskalakis, an assistant professor in MIT’s Computer Science and Artificial Intelligence Laboratory, has exported those techniques to game theory, a branch of mathematics with applications in economics, traffic management — on both the Internet and the interstate — and biology, among other things. By showing that some common game-theoretical problems are so hard that they’d take the lifetime of the universe to solve, Daskalakis is suggesting that they can’t accurately represent what happens in the real world.

Game theory is a way to mathematically describe strategic reasoning — of competitors in a market, or drivers on a highway or predators in a habitat. ...

In game theory, a “game” is any mathematical model that correlates different player strategies with different outcomes. One of the simplest examples is the penalty-kick game: In soccer, a penalty kick gives the offensive player a shot on goal with only the goalie defending. The goalie has so little reaction time that she has to guess which half of the goal to protect just as the ball is struck; the shooter tries to go the opposite way. In the game-theory version, the goalie always wins if both players pick the same half of the goal, and the shooter wins if they pick different halves. So each player has two strategies — go left or go right — and there are two outcomes — kicker wins or goalie wins.

It’s probably obvious that the best strategy for both players is to randomly go left or right with equal probability; that way, both will win about half the time. And indeed, that pair of strategies is what’s called the “Nash equilibrium” for the game. Named for John Nash..., the Nash equilibrium is the point in a game where the players have found strategies that none has the incentive to change unilaterally. In this case, for instance, neither player can improve her outcome by going one direction more often than the other.
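The equilibrium claim is easy to check numerically. In the Python sketch below (our own illustration; the +1/-1 payoff convention is an assumption, not something specified in the article), a kicker facing a 50/50 goalie earns the same expected payoff from every mixing probability, so no unilateral deviation helps:

    # Penalty-kick game: the kicker wins (+1) when the directions differ,
    # the goalie wins (kicker gets -1) when they match. The +1/-1 payoffs
    # are our own convention for illustration.

    def kicker_payoff(p_kick_left, p_goalie_left):
        """Kicker's expected payoff under the two mixed strategies."""
        p, q = p_kick_left, p_goalie_left
        p_win = p * (1 - q) + (1 - p) * q   # probability the directions differ
        return p_win - (1 - p_win)

    # Against a 50/50 goalie, every kicker strategy earns the same payoff.
    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"kick left with prob {p}: payoff {kicker_payoff(p, 0.5):+.2f}")

    # An unbalanced goalie, by contrast, is exploitable.
    print(f"vs a 70%-left goalie, always kick right: {kicker_payoff(0.0, 0.7):+.2f}")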

Of course, most games are more complicated than the penalty-kick game, and their Nash equilibria are more difficult to calculate. But the reason the Nash equilibrium is associated with Nash’s name  — and not the names of other mathematicians who, over the preceding century, had described Nash equilibria for particular games — is that Nash was the first to prove that every game must have a Nash equilibrium. Many economists assume that, while the Nash equilibrium for a particular market may be hard to find, once found, it will accurately describe the market’s behavior.

Daskalakis’s doctoral thesis — which won the Association for Computing Machinery’s 2008 dissertation prize — casts doubts on that assumption. Daskalakis, working with Christos Papadimitriou of the University of California, Berkeley, and the University of Liverpool’s Paul Goldberg, has shown that for some games, the Nash equilibrium is so hard to calculate that all the computers in the world couldn’t find it in the lifetime of the universe. And in those cases, Daskalakis believes, human beings playing the game probably haven’t found it either.

In the real world, competitors in a market or drivers on a highway don’t (usually) calculate the Nash equilibria for their particular games and then adopt the resulting strategies. Rather, they tend to calculate the strategies that will maximize their own outcomes given the current state of play. But if one player shifts strategies, the other players will shift strategies in response, which will drive the first player to shift strategies again, and so on. This kind of feedback will eventually converge toward equilibrium: in the penalty-kick game, for example, if the goalie tries going in one direction more than half the time, the kicker can punish her by always going the opposite direction. But, Daskalakis argues, feedback won’t find the equilibrium more rapidly than computers could calculate it.
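The feedback process described here corresponds roughly to what game theorists call fictitious play: each round, each player best-responds to the opponent's empirical frequencies so far. The sketch below (our own illustration, reusing the penalty-kick game and payoff convention from above) shows the empirical frequencies creeping toward the 50/50 equilibrium, and only slowly:

    # Fictitious play in the penalty-kick game: each round, each player
    # best-responds to the opponent's observed frequencies.

    def best_kick(goalie_left_freq):
        # the kicker goes where the goalie has dived less often
        return "L" if goalie_left_freq < 0.5 else "R"

    def best_dive(kicker_left_freq):
        # the goalie matches the kicker's more frequent side
        return "L" if kicker_left_freq > 0.5 else "R"

    kicker_left = goalie_left = 0     # counts of past "L" choices
    for t in range(1, 100001):
        n = max(t - 1, 1)             # observations available so far
        k = best_kick(goalie_left / n)
        g = best_dive(kicker_left / n)
        kicker_left += (k == "L")
        goalie_left += (g == "L")
        if t in (10, 100, 1000, 10000, 100000):
            print(f"after {t:6d} rounds: kicker left {kicker_left/t:.3f}, "
                  f"goalie left {goalie_left/t:.3f}")

In this tiny game the groping works; the argument above is that in games where the equilibrium is computationally hard, the same kind of feedback cannot get there any faster than a computer could.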

The argument has some empirical support. Approximations of the Nash equilibrium for two-player poker have been calculated, and professional poker players tend to adhere to it — particularly if they’ve read any of the many books or articles on game theory’s implications for poker. The Nash equilibrium for three-player poker, however, is intractably hard to calculate, and professional poker players don’t seem to have found it.

How can we tell? Daskalakis’s thesis showed that the Nash equilibrium belongs to a set of problems that is well studied in computer science: those whose solutions may be hard to find but are always relatively easy to verify. The canonical example of such a problem is the factoring of a large number: The solution seems to require trying out lots of different possibilities, but verifying an answer just requires multiplying a few numbers together. In the case of Nash equilibria, however, the solutions are much more complicated than a list of prime numbers. The Nash equilibrium for three-person Texas hold ’em, for instance, would consist of a huge set of strategies for any possible combination of players’ cards, dealers’ cards, and players’ bets. Exhaustively characterizing a given player’s set of strategies is complicated enough in itself, but to the extent that professional poker players’ strategies in three-player games can be characterized, they don’t appear to be in equilibrium.
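The asymmetry between finding and verifying is easy to demonstrate with the factoring example. In the toy sketch below (our own illustration; the numbers are invented and kept small enough to factor), verifying a claimed factorization is a single multiplication, while recovering a factor by blind trial division takes about a million steps here and becomes hopeless at cryptographic sizes:

    # Verifying a claimed factorization is one multiplication; finding the
    # factors from scratch is a search. The numbers are invented.

    n = 1000003 * 1000033            # a composite with two claimed factors

    def verify(p, q, n):
        """Checking the claim n = p * q: a single multiplication."""
        return p * q == n

    def find_factor(n):
        """Finding a factor with no hint: trial division."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    print(verify(1000003, 1000033, n))   # True, instantly
    print(find_factor(n))                # the smallest factor, ~10^6 divisions later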

Anyone who’s into computer science — or who read “Explained: P vs. NP” on the MIT News web site last week — will recognize the set of problems whose solutions can be verified efficiently: It’s the set that computer scientists call NP. Daskalakis proved that the Nash equilibrium belongs to a subset of NP consisting of hard problems with the property that a solution to one can be adapted to solve all the others. ...

That result “is one of the biggest yet in the roughly 10-year-old field of algorithmic game theory,” says Tim Roughgarden, an assistant professor of computer science at Stanford University. It “formalizes the suspicion that the Nash equilibrium is not likely to be an accurate predictor of rational behavior in all strategic environments.”

Given the Nash equilibrium’s unreliability, says Daskalakis, “there are three routes that one can go. One is to say, We know that there exist games that are hard, but maybe most of them are not hard.” In that case, Daskalakis says, “you can seek to identify classes of games that are easy, that are tractable.”

The second route, Daskalakis says, is to find mathematical models other than Nash equilibria to characterize markets — models that describe transition states on the way to equilibrium, for example, or other types of equilibria that aren’t so hard to calculate. Finally, he says, it may be that where the Nash equilibrium is hard to calculate, some approximation of it — where the players’ strategies are almost the best responses to their opponents’ strategies — might not be. In those cases, the approximate equilibrium could turn out to describe the behavior of real-world systems.

As for which of these three routes Daskalakis has chosen, “I’m pursuing all three,” he says.

I've got my hands full trying to figure out what's wrong with macroeconomics, so I'll let the microeconomists handle this one. But it does remind me of this passage from "The Economic Crisis is a Crisis for Economic Theory" questioning the use of representative agent macroeconomic models as a shortcut around the problems associated with the aggregation of heterogeneous agents:

The basic market model has been shown to use remarkably little information when functioning at equilibrium. But as Saari and Simon (1978) have shown, if there were a mechanism that would take a General Equilibrium, (Arrow–Debreu) economy to an equilibrium, that mechanism would require an infinite amount of information. Thus,... starting from individuals with standard preferences and adding them up allows one to show that there is an equilibrium but does not permit one to say how it could be attained.

November 07 2009

"Demystifying Social Knowledge"

Daniel Little looks at different approaches to "understanding society":

Demystifying social knowledge, by Daniel Little: There seem to be a couple of fundamentally different approaches to the problem of "understanding society." I'm not entirely happy with these labels, but perhaps "empiricist" and "critical" will suffice to characterize them.  We might think of these as styles of sociological thinking.  One emphasizes the ordinariness of the phenomena, and looks at the chief challenges of sociology as embracing the tasks of description, classification, and explanation.  The other highlights the inherent obscurity of the social world, and conceives of sociology as an exercise in philosophical theory, involving the work of presenting, clarifying and critiquing texts and abstract philosophical ideas as well as specific social circumstances.

The first approach looks at the task of social knowing as a fairly straightforward intellectual problem. It could be labeled "empiricist", or it could simply be called an application of ordinary common sense to the challenge of understanding the social world. It is grounded in the idea that the social world is fundamentally accessible to observation and causal discovery.  The elements of the social world are ordinary and visible. There are puzzles, to be sure; but there are no mysteries.  The social world is given as an object of study; it is partially orderly; and the challenge of sociology is to discover the causal processes that give rise to specific observed features of the social world.

This approach begins in the ordinariness of the objects of social knowledge.  We are interested in other people and how and why they behave, we are interested in the relationships and interactions they create, and we are interested in institutions and populations that individuals constitute. We have formulated a range of social concepts in terms of which we analyze and describe the social world and social behavior -- for example, "motive," "interest," "emotion," "aggressive," "cooperative," "patriotic," "state," "group," "ethnicity," "mobilization," "profession," "city," "religion." We know pretty much what we mean by these concepts; we can define them and relate them to ordinary observable behaviors and social formations. And when our attention shifts to larger-scale social entities (states, uprisings, empires, occupational groups), we find that we can observe many characteristics of each of these kinds of social phenomena.  We also observe various patterns and regularities in behavior, institution, and entity that we would like to understand -- the ways in which people from different groups behave towards each other, the patterns of diffusion of information that exist along a transportation system, the features of conflicts among groups in various social settings. There are myriad interesting and visible social patterns which we would like to understand, and sociologists develop a descriptive and theoretical vocabulary in terms of which to describe and explain various kinds of social phenomena.

In short, on this first approach, the social world is visible, and the task of the social scientist is simply to discover some of the observable and causal relations that obtain among social actors, actions, and composites. To be sure, there are hypothetical or theoretical beliefs we have about less observable features of the social world -- but we can relate these beliefs to expectations about more visible forms of social behavior and organization. If we refer to "social class" in an explanation, we can give a definition of what we mean ("position in the property system"), and we can give some open-ended statements about how "class" is expected to relate to observable social and political behavior. And concepts and theories for which we cannot give clear explication should be jettisoned; obscurity is a fatal defect in a theory.  In short, the task of social science research on this approach is to discover some of the visible and observable characteristics of social behavior and entities, and to attempt to answer causal questions about these characteristics.

This is a rough-and-ready empiricism about the social world. But there is another family of approaches to social understanding that looks quite different from this "empiricist" or commonsensical approach: critical theory, Marxist theory, feminist theory, Deleuzian sociology, Foucault's approach to history, the theory of dialectics, and post-modern social theory. These are each highly distinctive programs of understanding, and they are certainly different from each other in multiple ways. But they share a feature in common: they reject the idea that social facts are visible and unambiguous. Instead, they lead the theorist to try to uncover the hidden forces, meanings, and structures that are at work in the social world and that need to be brought to light through critical inquiry. Paul Ricoeur's phrase "the hermeneutics of suspicion" captures the flavor of the approach.  (See Alison Scott-Baumann's Ricoeur and the Hermeneutics of Suspicion for discussion.) Neither our concepts nor our ordinary social observations are unproblematic. There is a deep and sometimes impenetrable difference between appearance and reality in the social realm, and it is the task of the social theorist (and social critic) to lay bare the underlying social realities. The social realities of power and deception help to explain the divergence between appearance and reality: a given set of social relations -- patriarchy, racism, homophobia, class exploitation -- gives rise to systematically misleading social concepts and theories in ordinary observers.

Marx's idea of the fetishism of commodities illustrates the point of view taken by many of the theorists in this critical vein: what looks like a very ordinary social fact -- objects have use values and exchange values -- is revealed to mystify or conceal a more complex reality -- a set of relations of domination and control between bosses, workers, and consumers.  With a very different background, a book like Gaston Bachelard's The Psychoanalysis of Fire makes a similar point: the appearance represented by behavior systematically conceals the underlying human reality or meaning.  The word "critique" enters into most of Marx's titles -- for example, "Contribution to a Critique of Political Economy."  And for Marx, the idea of critique is intended to bring forward a methodology of critical reading, unmasking the assumptions about the social world that are implicit in the theorizing of a particular author (Smith, Ricardo, Say, Quesnay).  So Capital: Volume 1: A Critique of Political Economy is a book about the visible realities of capitalism, to be sure; but it is also a book intended to unmask both the deceptive appearances that capitalism presents and the erroneous assumptions that prior theorists have brought into their accounts.

The concepts of ideology and false consciousness have a key role to play in this discussion about the visibility of social reality.  And it turns out to be an ambiguous role.  Here is a paragraph from Slavoj Zizek on the concept of ideology from Mapping Ideology:
These same examples of the actuality of the notion of ideology, however, also render clear the reasons why today one hastens to renounce the notion of ideology: does not the critique of ideology involve a privileged place, somehow exempted from the turmoils of social life, which enables some subject-agent to perceive the very hidden mechanism that regulates social visibility and non-visibility? Is not the claim that we can accede to this place the most obvious case of ideology? Consequently, with reference to today's state of epistemological reflection, is not the notion of ideology self-defeating? So why should we cling to a notion with such obviously outdated epistemological implications (the relationship of 'representation' between thought and reality, etc.)? Is not its utterly ambiguous and elusive character in itself a sufficient reason to abandon it? 'Ideology' can designate anything from a contemplative attitude that misrecognizes its dependence on social reality to an action-orientated set of beliefs, from the indispensable medium in which individuals live out their relations to a social structure to false ideas which legitimate a dominant political power. It seems to pop up precisely when we attempt to avoid it, while it fails to appear where one would clearly expect it to dwell.
Zizek is essentially going a step beyond either of the two positions mentioned above.  The empiricist position says that we can perceive social reality.  The critical position says that we have to discover reality through critical theorizing.  And Zizek's position in this passage is essentially that there is no social reality; there are only a variety of texts.

So we have one style that begins in ordinary observation, hypothesis-formation, deductive explanation, and an insistence on clarity of exposition; and another style that begins in a critical stance, a hermeneutic sensibility, and a confidence in purely philosophical reasoning.  Jurgen Habermas draws attention to something like this distinction in his important text, On the Logic of the Social Sciences (1967), where he contrasts approaches to the social sciences originating in analytical philosophy of science with those originating in philosophical hermeneutics: "The analytic school dismisses the hermeneutic disciplines as prescientific, while the hermeneutic school considers the nomological sciences as characterized by a limited preunderstanding."  (This text as well as several others discussed here are available at AAARG.)  Habermas wants to help to overcome the gap between the two perspectives, and his own work actually illustrates the value of doing so.  His exposition of abstract theoretical ideas is generally rigorous and intelligible, and he makes strenuous efforts to bring his theorizing into relationship to actual social observation and experience. 

A contemporary writer (philosopher? historian? sociologist of science?) is Bruno Latour, who falls generally in the critical zone of the distinction I've drawn here.  An important recent work is Reassembling the Social: An Introduction to Actor-Network-Theory, in which he argues for a deep and critical re-reading of the ways we think the social -- the ways in which we attempt to create a social science. The book is deeply enmeshed in philosophical traditions, including especially Gilles Deleuze's writings.  The book describes "Actor-Network-Theory" and the theory of assemblages; and Latour argues that these theories provide a much better way of conceptualizing and knowing the social world.  Here is an intriguing passage that invokes both themes of visibility and invisibility marking the way I've drawn the distinction between the two styles:
Like all sciences, sociology begins in wonder.  The commotion might be registered in many different ways but it's always the paradoxical presence of something at once invisible yet tangible, taken for granted yet surprising, mundane but of baffling subtlety that triggers a passionate attempt to tame the wild beast of the social.  'We live in groups that seem firmly entrenched, and yet how is it that they transform so rapidly?'  ... 'There is something invisible that weighs on all of us that is more solid than steel and yet so incredibly labile.'  ...  It would be hard to find a social scientist not shaken by one or more of these bewildering statements.  Are not these conundrums the source of our libido sciendi? What pushes us to devote so much energy into unraveling them? (21)
What intrigues many readers of Latour's works is that he too seems to be working towards a coming-together of critical theory with empirical and historical testing of beliefs.  He seems to have a genuine interest in the concrete empirical details of the workings of the sciences or the organization of a city; so he brings both the philosophical-theoretic perspective of the critical style along with the empirical-analytical goal of observational rigor of the analytic style. 

Also interesting, from a more "analytic-empiricist" perspective, are Andrew Abbott, Methods of Discovery: Heuristics for the Social Sciences, and Ian Shapiro, The Flight from Reality in the Human Sciences.  Abbott directly addresses some of the contrasts mentioned here (chapter two); he puts the central assumption of my first style of thought in the formula, "social reality is measurable".  And Shapiro argues for reconnecting the social sciences to practical, observable problems in the contemporary world; his book is a critique of the excessive formalism and model-building of some wings of contemporary political science.

My own sympathies are with the "analytic-empirical" approach.  Positivism brings some additional assumptions that deserve fundamental criticism -- in particular, the idea that all phenomena are governed by nomothetic regularities, or the idea that the social sciences must strive for the same features of abstraction and generality that are characteristic of physics.  But the central empiricist commitments -- fidelity to observation, rigorous reasoning, clear and logical exposition of concepts and theories, and subjection of hypotheses to the test of observation -- are fundamental requirements if we are to arrive at useful and justified social knowledge.  What is intriguing is to pose the question: is there a productive way of bringing insights from both approaches together into a more adequate basis for understanding society?


November 04 2009

Greed versus Self-Interest

Political philosopher Michael Sandel:

...Citizens generally who looked at this - at the bailouts and the bonuses - and have been outraged believe there is a difference between greed and self-interest. But there's no way of capturing that intuition in economic analysis because, according to economic analysis, in any case one is deploying self-interest or greed, which is simply self-interest squared, to serve a social purpose. That's what the economic model says. And you have to introduce some normative assumption about what is excessive pursuit of gain in order to make sense of greed as a vice independent of the self-interest that all of the economic models presuppose. So I think there are intuitions in everyday life that people have that the economic models simply don't capture, and greed is one of them.

