I predict future happiness for Americans if they can prevent the government from wasting the labors of the people under the pretense of taking care of them. - Thomas Jefferson


Thursday, December 29, 2011

John L. Chapman and John A. Allison

A Return to Gold?

“Lenin is said to have declared that the best way to destroy the Capitalist System was to debauch the currency. . . . Lenin was certainly right. There is no subtler, no surer means of overturning the existing basis of society. . . . The process engages all the hidden forces of economic law on the side of destruction, and does it in a manner which not one man in a million is able to diagnose.” — John Maynard Keynes
This summer marked the 40th anniversary of President Richard M. Nixon’s decision to sever the U.S. dollar’s official link to gold. On August 15, 1971, Nixon took to the airwaves in a national address from the Oval Office to declare that the U.S. Treasury would no longer honor foreigners’ demands to redeem dollars for gold. Because the United States was then the last country in the world with a currency defined by gold, the move represented a complete and historic decoupling of the globe’s currencies—literally the money of the entire world—from the yellow metal.
For the first time in at least 2,700 years, dating to the Lydian coinage in what is now Turkey, gold was used as official money nowhere in the world. And for the first time ever the world’s monetary affairs were defined by a system of politically managed fiat currencies—that is, paper money run by governments or their central banks. The story behind Nixon’s catastrophic mistake, and the lessons it contains for today, suggest a framework for monetary policy and reforms that will induce strong and sustainable economic growth in the future.
It is important to understand what many current central bankers seem to have forgotten: the seminal importance of sound money—dependably valued, honest money whose value is not intentionally manipulated—as an institution in a modern exchange economy. Economies grow, and material wealth and welfare advance, through three interconnected phenomena, all of which are crucially supported by a well-functioning monetary unit: 1) efficient use of scarce resources via a system of prices and profit-and-loss, both of which encourage optimizing behavior on the part of all; 2) saving and the accumulation of capital for investment; and 3) the division of labor, specialization, and trade.
Regarding the last phenomenon, we would all be poor, and indeed most of us dead of starvation, if each of us had to produce all our own food, housing, clothing, and other necessities and modern luxuries. As Adam Smith explained in his famous examination of a pin factory, dividing up the metal-straightening, wire-cutting, grinding, pin-head fashioning, and fastening and bundling operations into 18 separate steps increased the productivity of labor in the factory by at least 240-fold. (This of course dramatically increased productive output and raised workers’ real incomes.) And of course for society at large this specialization was not confined to single factories but spread across industries and agriculture: The baker, the butcher, the brewer, and the cobbler could all focus on their productive specialties and produce for a market wherein they could exchange with other specialists for desired goods.
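The 240-fold figure follows directly from Smith's own numbers in The Wealth of Nations: ten specialized workers producing roughly 48,000 pins a day, against perhaps 20 pins a day for a lone worker doing every step himself. A quick back-of-the-envelope check:

```python
# Adam Smith's pin-factory figures (Wealth of Nations, Book I, Ch. 1):
# ten specialized workers made about 48,000 pins per day; a lone,
# unspecialized worker could make perhaps 20.
workers = 10
pins_per_day_specialized = 48_000
pins_per_worker_alone = 20

per_worker_specialized = pins_per_day_specialized / workers  # 4,800 pins each
productivity_gain = per_worker_specialized / pins_per_worker_alone
print(productivity_gain)  # → 240.0
```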
Via economies of scale and scope, then, specialized production and exchange help to create a material horn of plenty for all in a society that’s felicitously based on peaceful, harmonious social cooperation. And here’s the key: None of this would be possible without a dependable monetary unit that serves as a medium for this exchange. Absent sound money, in fact, a division of labor, with all its specialized knowledge and skills, could hardly be exploited, because barter would mean that, say, a neurosurgeon would have to find a grocer who coincidentally needed brain surgery every time he wanted to obtain food. A barter society is by definition a primitive and poor one.
Similarly, the explosion in human progress in the last three centuries was propelled by the accumulation of capital, the tools, machinery, and other assets that increase per capita output and dramatically increase living standards. And here again, a well-functioning monetary unit facilitates the saving that allows for capital accumulation: Income need not be consumed immediately but can be transferred to others to invest productively in return for future payment streams. Sound money, in short, greatly enhances wealth-creating exchange and transfer of resources between present and future, and in doing so often assists in the development of higher output capacity in the future.
There is a third crucial way in which sound money serves to advance civilized human progress: By providing a common denominator for the expression of all exchange prices between goods, money greatly facilitates trade among all parties, thus extending the breadth of markets as far as money’s use itself, which in turn intensifies the division of labor that increases productive output and per capita incomes. Think about it: Without a monetary unit of account there would be an infinite array of prices for one good against all other goods; for example, the bread-price of shoes, the book-price of apples, and so on. In turn, calculation of profit and loss, on which effective use of scarce resources so critically depends, would be impossible.
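Strictly, the array of barter prices is not infinite but combinatorially explosive: with n goods and no unit of account, every distinct pair of goods needs its own exchange ratio, n(n-1)/2 in all, whereas money requires only n prices. A small illustration of how fast that gap grows:

```python
from math import comb

# Number of pairwise barter prices vs. money prices for n goods.
# Without a unit of account each pair of goods needs its own ratio:
# comb(n, 2) = n*(n-1)/2. With money, only n prices are needed.
for n in (10, 100, 1000):
    print(n, comb(n, 2))  # 10→45, 100→4950, 1000→499500
```

Even a modest 1,000-good economy would need half a million distinct barter ratios, which is why profit-and-loss calculation is impossible without a common monetary denominator.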
In sum the institutional development and use of money has been an immense human achievement, every bit as important as language, property rights, the rule of law, and entrepreneurship in the advancement of human civilization. And it is important to note that while several commodities were tried as monetary exchange media over the centuries, from fish to cigarettes, the precious metals and especially gold were seen to be most effective, as they are valuable, highly divisible, durable, uniform in composition, easily assayable, transportable, and bear high value-to-bulk, along with being relatively stable in annual supply. In short, in an ever-changing world of imperfection, gold has been found to be a near-perfect, and certainly dependably valued, form of money.

Money, International Trade, and Economic Growth

To understand much about our current economic challenges and what to do to meet them, it is important to understand why gold, after several centuries of trial and error, came to be seen as sound money versus paper, other commodities, and even silver. The term sound money is especially important to grasp: It is meant to describe a reliable, dependably valued medium of exchange and account, not subject easily to manipulation, which can therefore effectively perform the three functions of money described above, all of which lead to prosperity and an advancing economy. This is critical for a civilized society whose economy is based on monetary exchange, because money is literally one-half of every transaction. So when the value of the monetary unit is volatile—when money becomes more or less unsound—it changes the intended terms of trade between parties, especially when that transaction involves exchange between present and future, as in capital investment. This in turn can cause such exchanges to break down or lead to distortions in trade that bring malinvestment of assets and waste of scarce resources.
No better illustration of this can be seen than in the German hyperinflation of 1923. German war reparations mandated by Versailles had so burdened the German economy that the German government took literally to printing the currency known as the papiermark in massive quantities. This rapidly depreciated the value of the currency until in the fall of 1923 workers were paid in wheelbarrows of cash twice daily. The velocity of spending skyrocketed, as workers immediately rushed to trade the quickly worthless paper money for anything of tangible value, buying commodities they often did not need. Saving and investment were stunted, price inflation soared out of control, and civil society lurched toward a complete breakdown by the end of 1923, when $1, which had bought 5.21 marks in 1918, now bought 4.2 trillion of them.
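Taking the article's own endpoints (5.21 marks per dollar in 1918, 4.2 trillion in late 1923) and assuming, for rough illustration, a five-year span, the implied depreciation works out as follows:

```python
# Marks per U.S. dollar, using the article's figures.
rate_1918 = 5.21
rate_1923 = 4.2e12

depreciation_factor = rate_1923 / rate_1918        # ≈ 8.06e11
months = 5 * 12                                    # assume roughly 1918 to late 1923
avg_monthly = depreciation_factor ** (1 / months) - 1

print(f"{depreciation_factor:.2e}")  # → 8.06e+11
print(f"{avg_monthly:.0%}")          # → 58% average monthly depreciation
```

An average depreciation of more than half the currency's value every month, compounded for years, is what made hoarding goods rational and saving in marks suicidal.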
Seen another way, the German hyperinflation is an example of a “virus” infecting the economy, distorting prices in every transaction, every entrepreneurial investment decision, and the value of every bank account. Every calculation of profit and loss was changed in real terms as well, thus causing resources to be inefficiently used or traded—that is, wasted. While the harm caused by unsound money is usually less severe than what occurred in Germany in 1923, it is no less real in a 1970s-style inflation, a 1930s-style deflation, or a 2000s-style housing bubble fueled by interest rates falsified by the Fed’s over-creation of money.
Conversely it was sound money, based on the international gold standard, that greatly impelled the fantastic rise in living standards across the nineteenth century in many parts of the globe. Gold as a common medium facilitated dramatic increases in trade and the international division of labor. With a dependably valued international medium of exchange and unit of account, long-term investment could be undertaken, and ever-increasing volumes of mutually profitable trading developed between nations, increasing jobs, output, and living standards dramatically. The century up to 1914 was a golden age of prosperity and harmony among nations, and while not devoid of all war, recessions, or panics, it was comparatively more peaceful and productive than any other period in human history.

The Rise of Central Banking

While the Bank of England was created in 1694, the United States did not get a central bank until the creation of the Federal Reserve System in 1913; by 1935, with the creation of the Bank of Canada, all modern nations had central banks. In theory a central bank, through monopoly banknote issue and effective control of a nation’s money supply, serves as a stabilizing influence in an economy by acting as a banker’s bank, a lender of last resort providing liquidity in panics, and a regulator of commercial banks and thus governor of their excesses. (However, in a recent exhaustive study, economists George Selgin and William Lastrapes of the University of Georgia and Lawrence White of George Mason University show that recessions were shorter and less severe, inflation and unemployment lower, and economic growth stronger and more durable in the century before 1913 than since the Fed’s creation). At the least, the central bank’s mandate included—and seemed to assure—maintenance of the value of the currency.
Beginning with World War I, and continuing through the Great Depression and World War II, the links to gold were for the most part effectively severed from most nations’ currencies, including the U.S. dollar. In the summer of 1944 economists (led by John Maynard Keynes and Harry Dexter White) met at Bretton Woods, New Hampshire, to design a postwar monetary system conducive to international trade. The resulting mechanism, known as the gold-exchange standard, tried to resurrect the beneficial aspects of the nineteenth century’s classical gold standard and lasted until Nixon scrapped it in 1971. In short the Bretton Woods agreement charged the U.S. government with defining the dollar in gold ($35 per ounce) and maintaining convertibility at this rate only with foreign governments and central banks. (Pointedly, there was no similar obligation to U.S. banks or citizens; gold had disappeared from circulation in the United States after Franklin Roosevelt’s 1933 decree.) In turn all foreign nations were to peg their currencies to the dollar, thereby preserving a regime (however illusory) of fixed exchange rates so as to promote certainty in international exchange and encourage cross-border trade and investment.
By the 1960s this system was beginning to break down on all sides. Foreign governments announced periodic devaluations against the gold-linked dollar to promote exports and allow for domestic government spending, and the United States ramped up “guns-and-butter” federal spending on both the Great Society and the Vietnam War. Inflation slowly crept into the U.S. economy, and gold-redemption requests spiked by the late 1960s at the U.S. Treasury’s gold window.
Nixon thus made his fateful decision in the summer of 1971, freeing the government from any redemption obligations. This had two immediate effects: It amounted to an automatic, if stealthy, repudiation of U.S. debt in real terms because it devalued all dollar-denominated assets and currency at once. It also allowed the U.S. government, in concert with a technically independent Federal Reserve, to manage the U.S. money supply for its own political ends indefinitely.

The Predictable Aftermath of 1971

In developing his theory of money and credit a century ago, the great economist Ludwig von Mises explained why a system of fiat currencies was bound to break down: The politicians’ urge to inflate the money supply in order to commandeer the resources of the real economy via expanded government spending would prove too great. Further, because the dollar was the de facto reserve currency of the globe post-Nixon (replacing gold itself), any U.S. inflation would encourage other nations’ monetary expansions and competitive devaluations in tandem. And indeed, an era of predictable instability has been the result: A trenchant stagflation in the 1970s was followed by banking and S&L crises in the 1980s; Russian, Asian, and Latin American banking crises in the 1980s–90s; overleveraged financial institutions and moral hazard-based bailouts of too-big-to-fail institutions in the 1990s–2000s; and in the last decade or so two Fed-induced bubbles and subsequent crashes. The second of those, based in the housing sector, “went viral” across the world thanks to the huge nominal amount of funds plus leverage of U.S.-based mortgage debt, coupled with the expectation on the part of investors that the U.S. government would guarantee any mortgage-bond losses.
This instability has starkly proven another tenet of Mises’s seminal work: Fiat currencies managed by central banks with a monopoly on note issue, rather than being a source of macro stability, are themselves the causal agents of repeated boom-and-bust business cycles. By increasing the money supply at zero effective cost, central banks encourage government spending and cause interest rates to fall below their natural rate, which induces private investment and a temporary boom. But this boom, usually in capital-equipment sectors or long-term durables, is not based on real individual and institutional savings. That is, the accumulation of capital is not “backed” by the real resources of society. By definition such a boom is inherently unsustainable and unstable, and must end in a bust and painful retrenchment. The greater and longer the creation of fiat money by the central bank, the harder and longer will be the ensuing recession.

A Path to Reform

The best solution to the myriad problems caused by the Fed’s post-Nixon fiat currency management is to return to sound money generated by private markets and intermediated by freely competing banks issuing their own notes. These notes could be backed by any commodity but most likely would involve a return to gold. Banks would compete for customer deposits and loan business on the basis of the soundness of their balance sheets and thus could not over-issue—or else they’d face redemption of their outstanding notes and potential collapse from a bank run. Such a system is far more stable than a monopoly central bank without constraints, subject to the inexorable pull of political designs (that is, malfeasance).
But there are many challenges to developing and implementing such a free-banking system with commodity money; this is the subject of work to be published in the future. Meanwhile a second-best solution would be for the Federal Reserve to cease and desist with any further fiat money creation—in essence, freeze the monetary base where it is, permanently. The Fed could then announce an intent to return to full gold convertibility, and any new notes it issued (and used by Fed member banks) would be 100 percent backed by gold. Any maturing securities held as assets on the Fed’s balance sheet would be used to purchase gold to build the Fed’s reserves. The permanent price of gold would be set over a period of months after the announcement of the new regime, as gold itself and competing currencies traded at new (lower) levels based on the U.S. government’s new commitment to dollar stability.
The results of this reform program would be electric and dramatic. Capital investment would soar in the United States, as America became a haven for high-productivity ventures once again. The entire U.S. economy would in effect be recapitalized. While an end to activist Fed monetary policy would raise the short end of the yield curve, over time real interest rates would revert to historic low levels due to dollar stability. Such monetary reform implies pro-growth fiscal reforms as well; the U.S. government’s profligacy would have to end because fiscal laxity would no longer be supported by an accommodating Fed. A new, sound dollar and a passive Fed would also engender other pro-growth reforms in banking, such as a reduction in or end to deposit insurance and a lower burden of regulations that stunt growth. The banking sector would at once be more competitive, better capitalized, less brittle, and on sounder footing itself.
To bring this about monetary policy must again become a big political issue—the dominating political issue—in a way it has not been since the presidential election of 1896, when William Jennings Bryan railed against a “cross of gold.” Indeed this can happen if people come to understand that the main culprit of U.S. booms and busts since 1971, and indeed the primary progenitor of the global disaster of 2008—from which we have yet to recover—is the political management of money by the Federal Reserve. Sound money, honest money, besides being a necessary cause of sustainable economic growth itself, is the antidote to the tragically unnecessary torpor of our modern world.

Some Additional Reflections on the Economic Crisis and the Theory of the Cycle

Mises Daily: Thursday, December 29, 2011

The three years that have passed since the world financial crisis and subsequent economic recession hit have provided Austrian economists with a golden opportunity to popularize their theory of the economic cycle and their dynamic analysis of social conditions. In my own case, when the first edition of my book Money, Bank Credit, and Economic Cycles appeared at the beginning of 1998, I could never have imagined that 12 years later the book would be translated into 14 languages and published (so far) in nine countries and several editions (two in the United States and four in Spain), due undoubtedly to a financial crisis and economic recession unparalleled since the Great Depression of 1929, and one that no other economic paradigm managed to predict and adequately explain. Moreover, in recent years I have been invited to and have participated in many meetings, seminars, and lectures devoted to presenting my book and discussing its content and main assertions. On these occasions, some matters have come up repeatedly, and though most are duly covered in my book, perhaps a brief review of them is called for at this time. Among these matters, we will touch on the following:

1. The Relationship between Credit Expansion and Environmental Damage

"Free-market-environmentalism" theorists (Anderson and Leal 2001) have shown that the best way to preserve the environment is to extend entrepreneurial creativity and the principles of the free market to all natural resources, which requires their complete privatization and the efficient definition and defense of the property rights that pertain to them. In the absence of these rights, economic calculation becomes impossible, the appropriate allocation of resources to the most highly valued uses is prevented, and all sorts of irresponsible behaviors are encouraged, as is the unjustified consumption and destruction of many natural resources.
Nevertheless, free-market-environmentalism theorists have overlooked another major cause of the poor use of natural resources: the credit expansion that central banks orchestrate and cyclically inject into the economic process through the private banking system, which operates with the privilege of using a fractional reserve. In fact, the artificial expansion of fiduciary media triggers a speculative-bubble phase of "irrational exuberance." This phase ends up placing an unwarranted strain on the real economy by making many unprofitable projects appear profitable (Huerta de Soto 2009). The result is unnecessary pressure on the entire natural environment: trees that should not be cut down are cut down; the atmosphere is polluted; rivers are contaminated; mountains are drilled; cement is produced; and minerals, gas, oil, etc., are extracted in an attempt to complete overly ambitious projects that in reality consumers are not willing to demand.
Eventually the market will impose the judgment of consumers, and many capital goods will remain idle, thus revealing that they have been produced in error (that is, distributed incorrectly in space and time), because entrepreneurs have allowed themselves to be deceived by the easy-credit terms and low interest rates decreed by monetary authorities. The result is that the natural environment is harmed needlessly, since consumers' standard of living has not increased at all. On the contrary, consumers become poorer with the malinvestment of society's scarce real savings in nonviable, excessively ambitious projects (for example, 1 million homes in Spain without buyers). Hence, credit expansion hinders sustainable economic development and needlessly damages the natural environment.
This brief analysis points to an obvious conclusion: nature lovers should defend a free monetary system, without a central bank, a system in which private bankers operate with a 100 percent reserve requirement on demand deposits and equivalents, a system that rests on a pure gold standard. This is the only way to eradicate the recurring stages of artificial boom, financial crisis, and economic recession, which do so much harm to the economic environment, mankind, and the process of social cooperation.

2. Then Is Credit Expansion Really Necessary to Boost Economic Growth?

A popular argument (employed and nourished by more than a few prestigious economists like Schumpeter) holds that credit expansion and low interest rates facilitate the introduction of technological and entrepreneurial innovations, which foster economic development. The argument is contemptible. In a market economy it is as important to provide financing for solvent, viable entrepreneurial projects as it is to deny it for nonviable, harebrained ones: many "entrepreneurs" are like runaway horses, and we must limit their chances of trampling on society's scarce resources.
The problem is that only the market is capable of distinguishing between these two types of projects, and it does so through a social process in which the key element is precisely the interest rate, the indicator of the real amount of saved resources and of the social rate of time preference, which helps separate the projects that should be financed from those whose time has not yet come and which therefore must remain "in the pipeline." It is true that every artificial expansion of credit and of the fiduciary media that back it provokes a redistribution of income in favor of those who first receive the newly available funds, and that this does not permit us to theorize about the net effects the process will have on society's real saving. (That will depend on how the time preference of those who come out ahead compares with that of those who come out behind.) However, there are more than enough signs that inflation discourages real saving, if only because it generates an illusion of wealth, which stimulates spending on consumer goods and capital consumption.
Furthermore, in the end ("ex post") it is clear that only what has been previously saved can be invested. Even then, what has been previously saved can be invested wisely or foolishly. Credit expansion promotes the waste and malinvestment of scarce factors of production in unsustainable and unprofitable investment projects. This means that the model of economic development based on artificial credit expansion cyclically destroys a high volume of capital goods, which leaves society substantially poorer (compared with the standard of living that could be reached in the long term with sustainable growth unforced by credit expansion and more in keeping with the true wishes of consumers with respect to their valuations of time preference).
Moreover, let it not be said that fiduciary inflation at least serves to employ idle resources, since the same effect can be achieved without malinvestment and waste by making the corresponding labor and factor markets more flexible. In the long run, credit expansion generates unsustainable jobs, erroneous investments, and therefore, less economic growth.

3. Is It True that Banks Caused the Crisis by Incurring Risks Disproportionate to Their Capital?

To attribute the crisis to the bad conduct of bankers is to confuse the symptoms with the causes. After all, during the stage of speculative euphoria, bankers merely responded to the incentives (null or negative real interest rates and the artificial expansion of credit) created by central banks. Now, in a display of hypocrisy and manipulation of the citizenry, central bankers throw up their hands in horror, blame others for the consequences of their own unsound policies, and try to appear as saviors to whom we must be grateful for the fact that we are not in the grip of an even more severe depression. Nor should we forget that it was precisely during the boom stage that inflation in the prices of financial assets ran so high that bankers could show considerable equity capital on their balance sheets, which, at least in appearance, justified their substantial leverage and permitted them to incur risks with little difficulty. This was all in an environment of null or even negative real interest rates and an extraordinary abundance of liquidity deliberately promoted by central banks. Under such conditions, no one should be surprised that, at the margin, financing was granted for investment projects that were increasingly risky and less and less certain of producing a profit.

4. So the Problem with the Banking System Is that Bankers Did Not Manage to Properly Harmonize the Deadlines of Loans Granted with Those of Deposits Received?

No, the problem is that banks have operated with a fractional reserve; i.e., they have not maintained a 100 percent reserve with respect to demand deposits and their equivalents. The requirement of a 100 percent reserve on demand deposits avoids credit expansion and liquidity problems in the banking system, because it permits the investment of only what has been previously saved; and if investors make a mistake concerning the term of maturity and their projects are viable, they can request new loans (based on prior, real saving) to repay those that fall due. In contrast, credit expansion derived from fractional-reserve banking gives rise to a widespread malinvestment of resources that is confused by many with a failure to harmonize terms of maturity, when the problem is much deeper: investments that are unsustainable due to a lack of real saving. The fundamental economic problem does not stem from an error in the matching up of terms but from the absence of a 100 percent reserve requirement; in other words, it stems from fractional-reserve banking.

5. Can an Isolated Bank Escape Unscathed in the Case of Widespread Credit Expansion?

Those in charge of an individual bank may hope it will emerge unharmed from a process of credit expansion if (a) they believe they will be able to lend the new money, at the margin, to the most profitable and secure projects (those that will be least affected when the crisis hits); and (b) they believe that once they begin their credit expansion on these projects, the rest of the banks will follow the same expansionary policy at least at the same pace, and thus the bank will not end up alone nor will it lose reserves.
In practice, (b) usually happens (generalized credit expansion orchestrated by the central bank itself); but (a) is highly unlikely ever to happen and amounts to a mere illusion: the new fiduciary media (created deposits) can only be lent at relatively reduced interest rates and can only be placed in the market as loans for projects that are increasingly lengthy (i.e., that mature in a more distant future) and risky (uncertain). These projects merely appear to be profitable at reduced rates, but as soon as rates increase, they immediately cease to be viable due to insufficient real saving.
Furthermore, any bank whose directors tenaciously decide to keep it out of the credit-expansion process will see its market share dwindle and will run the risk of becoming an exotic irrelevance. This should make it obvious that fractional-reserve banking exerts a corrupting effect on the entire banking system (an argument already put forward by Longfield in the 19th century). In addition, banking practice has continually offered confirmation of this phenomenon. (For instance, several presidents of Spanish banks have told me that during the boom stage they knew a large percentage of the real-estate loans they were granting were unlikely to be viable in the long term and were very risky, but they were "forced" to participate in many syndicated loans and questionable transactions under pressure from analysts, market agents, and their bank's need to grow or at least maintain its market share.)

6. Savings as a "Flow" Magnitude versus Cash Balances in the Form of Deposits as a "Stock" Magnitude

Money is not a consumer good (except for greedy Scrooge McDuck), nor is it a factor of production. It is a third type of good: a commonly accepted medium of exchange. Moreover, only as a present good does money fulfill its function as a medium of exchange. However, it can be lent, in which case it becomes a financial asset for the lender, for whom it ceases to provide services as a medium of exchange.
Therefore, it is absurd to claim that deposited money that forms part of an actor's cash balances has been "saved." The deposit is a cash balance and thus a stock magnitude. The flow of unconsumed income gives rise to the flow of savings that is invested in financial assets or directly in capital goods, unless someone decides to indefinitely increase his or her cash balances (a rise in the demand for money). Furthermore, cash balances can be increased not only by reducing the flow of consumption but also by reducing the flow of investment (or both).
The problem is that with a specific, stable flow of savings, if someone decides to channel his or her cash balances into demand deposits in a fractional-reserve bank, the flow of loans and investment swells without any increase in the flow of real savings, and this is precisely what sets the economic cycle in motion.
Only free banking with a 100 percent reserve requirement prevents the above anomaly by making it impossible for bankers to make the following accounting entry:
Loans to Deposits
It is this kind of entry that reflects banks' main activity, but in a free banking system with a 100 percent reserve requirement, all deposits would be backed in cash by the corresponding balance, in keeping with general legal principles:
Cash to Deposits 
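The contrast between these two accounting entries can be sketched numerically. The following is an illustrative sketch with hypothetical figures (not from the text): under fractional reserves, a deposit that is lent out and re-deposited expands total deposits by the familiar 1/r multiplier, while a 100 percent reserve leaves the money supply unchanged.

```python
# Illustrative sketch (hypothetical figures): how one new demand deposit
# expands total deposits under fractional-reserve banking versus a
# 100 percent reserve requirement.

def total_deposits_created(initial_deposit, reserve_ratio):
    """Sum of the geometric series D + D(1-r) + D(1-r)^2 + ... = D / r,
    assuming banks lend all excess reserves and every loan is re-deposited."""
    assert 0 < reserve_ratio <= 1
    return initial_deposit / reserve_ratio

fractional = total_deposits_created(1_000, reserve_ratio=0.10)  # 10% reserves
full = total_deposits_created(1_000, reserve_ratio=1.00)        # 100% reserves

print(f"10% reserve:  {fractional:,.0f} in total deposits")  # 10,000
print(f"100% reserve: {full:,.0f} in total deposits")        # 1,000
```

Under the 100 percent rule the "Cash to Deposits" entry leaves credit unchanged; only the fractional entry multiplies the original deposit.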

7. Does Leland Yeager Offer a Sound Argument when He Asserts that It Is Impossible to Distinguish between Demand Deposits and Very Short-Term Loans?

When the principles and theory are well understood (that demand deposits and their equivalents must be backed at all times by a 100 percent reserve), the market invariably finds the most practical and operational solutions.
In an ideal banking system, with a 100 percent reserve requirement, short-term loans (from one to three months) would definitely be easy to distinguish from demand deposits, and the agents involved would undertake the usual operations necessary to match flows, operations that are so efficiently carried out in the free market, based on well-proven and deep-rooted principles of prudence.
"False" loans that disguise deposits would be easy to identify, especially if we take into account that on the edge of the very short term (from one week to one month), the demand for true loans is very weak (except under highly exceptional circumstances, and assuming the matching of flows is properly carried out).
In short, what is important is whether or not an actor subjectively considers that a "time" deposit or a (false) "loan" forms part of his or her immediately available cash balances. If so, we are dealing with true "demand" deposits, which require a 100 percent reserve.

8. What Are the Possible Scenarios in the Event of a Crisis like the Present One?

There are basically four:
  1. The bubble is created again, with massive doses of new expansion. (This is practically the worst-case scenario, since the depression is only postponed at the cost of making it much more severe later: this is what happened in 2001–2002, when the expansionary stage was prolonged six additional years, but at the cost of a financial crisis and an economic recession unlike any in the world since 1929.)
  2. The opposite extreme: the failure, as if by the domino effect, of all fractional-reserve banks and the disappearance of the financial system (a tragedy that has been avoided "in extremis" with the bailout of the banking system in the entire world).
  3. The "Japanization" of the economy: government intervention (in the fiscal and credit spheres) is so intense that it blocks the spontaneous market processes that tend to rectify the investment errors committed in the bubble stage, and hence the economy remains in a recession indefinitely.
  4. The most probable course of events: with great difficulty, the market, which is very dynamically efficient, ends up rectifying investment errors. Companies and households put their balance sheets on a sound footing by reducing costs (particularly labor costs) and repaying loans. The companies that remain become "healthy," and the increase in saving permits the financing of new investment projects that are actually sustainable in the long term. Climbing unemployment reaches its peak when the reorganization has concluded, and at that point the top priority is to liberalize the labor market as much as possible (hiring, wages, dismissal, and collective bargaining) so that the unemployed can again enter the (now healthy) production chain to work on viable projects. Furthermore, maximum budget austerity is necessary in the public sector; it is important to avoid tax increases and to reduce bureaucracy and government intervention in the economy.

9. What Measures in the Right Direction Could Now Be Adopted to Bring Us Closer, Even If Timidly, to the Ideal Financial System of a True Free-Market Economy?

The following table offers an answer to this question:
Ideal Monetary Model versus (Very) Timid Measures in the Right Direction:

Ideal model: A pure gold standard (growth in the world's stock of gold ≤ 2% per year); fixed exchange rates.
Timid measure: Rigorous compliance with a limit of 2% per year on growth in the money supply, M; the euro.

Ideal model: A 100 percent reserve requirement (bank crises are not possible); the abolition of the central bank.
Timid measure: The central bank limits itself to providing liquidity to banks in trouble to avoid bank crises.

Ideal model: What is deposited is not lent, and there is a proper matching of the flows of savings and investment; the business of providing liquidity is separate from that of financial intermediation.
Timid measure: A radical separation between commercial banking and investment banking (the Glass-Steagall Act of 1933).
As in many other areas of the intervened market that must be reformed (the privatization of streets, free immigration, etc.), it is a grave error to believe that all regulation must be eliminated before the ideal reform takes place. Quite the opposite is true. Until the reform occurs, a minimum of regulation is needed to simulate, as far as possible, the results of the ideal system: in the monetary sphere, a pure gold standard with a 100 percent reserve requirement and no central bank. Nevertheless, we must repeat again and again that, instead of awkwardly trying to replicate with doubtful half measures what the market would achieve, the best and ultimately unavoidable line of action is without a doubt to implement the definitive, radical reform demanded by the ideal monetary model.

10. Conclusion: Bewilderment among Theorists and Citizens

Society is confused and bewildered by the crisis. The gulf between people and politicians is nearly unbridgeable. Moreover, the ignorance and confusion of the latter is also spectacular. However, the worst part of the situation is that most economic theorists themselves are drawing a blank in terms of theory, and they are not managing to grasp what is happening, why it has happened, and what could happen in the future.
The loss of prestige suffered by neoclassical economics (the hypothesis of market efficiency, the theory of rational expectations, faith in "self-regulation," the principle of agent rationality, etc.) is complete and is mistakenly interpreted as a market failure that justifies more state intervention. (Keynesians attribute the crisis to the sudden financial "panic" and to a lack of aggregate demand for which the state must compensate.) Theorists of different camps fail in their understanding of the market, and thus in their analyses and prescriptions. Well into the 21st century, the theoretical void is enormous. Fortunately, the Austrian theory of the cycle, in general, and my book Money, Bank Credit, and Economic Cycles in particular, are there to fill this void and clear up the present confusion.
Anderson, T.L. and D.R. Leal. 2001. Free Market Environmentalism. Rev. ed. New York: Palgrave Macmillan.
Huerta de Soto, J. 2009. Money, Bank Credit, and Economic Cycles. 2d ed. Auburn, Alabama: Ludwig von Mises Institute.

Thursday, December 22, 2011

Property Rights Are Human Rights

It is often asserted by critics of the free-market economy that they are interested in preserving “human rights” rather than property rights. This artificial dichotomy between human and property rights has often been refuted by libertarians, who have pointed out (a) that property rights of course accrue to humans and to humans alone, and (b) that the “human right” to life requires the right to keep what one has produced to sustain and advance life. In short, they have shown that property rights are indissolubly also human rights. They have, besides, pointed out that the “human right” of a free press would be only a mockery in a socialist country, where the State owns and decides upon the allocation of newsprint and other newspaper capital.[29]

There are other points that should be made, however. For not only are property rights also human rights, but in the most profound sense there are no rights but property rights. The only human rights, in short, are property rights. There are several senses in which this is true. In the first place, each individual, as a natural fact, is the owner of himself, the ruler of his own person. The “human” rights of the person that are defended in the purely free-market society are, in effect, each man’s property right in his own being, and from this property right stems his right to the material goods that he has produced.

In the second place, alleged “human rights” can be boiled down to property rights, although in many cases this fact is obscured. Take, for example, the “human right” of free speech. Freedom of speech is supposed to mean the right of everyone to say whatever he likes. But the neglected question is: Where? Where does a man have this right? He certainly does not have it on property on which he is trespassing. In short, he has this right only either on his own property or on the property of someone who has agreed, as a gift or in a rental contract, to allow him on the premises. In fact, then, there is no such thing as a separate “right to free speech”; there is only a man’s property right: the right to do as he wills with his own or to make voluntary agreements with other property owners.

The concentration on vague and wholly “human” rights has not only obscured this fact but has led to the belief that there are, of necessity, all sorts of conflicts between individual rights and alleged “public policy” or the “public good.” These conflicts have, in turn, led people to contend that no rights can be absolute, that they must all be relative and tentative. Take, for example, the human right of “freedom of assembly.” Suppose that a citizens’ group wishes to demonstrate for a certain measure. It uses a street for this purpose. The police, on the other hand, break up the meeting on the ground that it obstructs traffic. Now, the point is that there is no way of resolving this conflict, except arbitrarily, because the government owns the streets. Government ownership, as we have seen, inevitably breeds insoluble conflicts. For, on the one hand, the citizens’ group can argue that they are taxpayers and are therefore entitled to use the streets for assembly, while, on the other hand, the police are right that traffic is obstructed. There is no rational way to resolve the conflict because there is as yet no true ownership of the valuable street-resource. In a purely free society, where the streets are privately owned, the question would be simple: it would be for the streetowner to decide, and it would be the concern of the citizens’ group to try to rent the street space voluntarily from the owner. If all ownership were private, it would be quite clear that the citizens did not have any nebulous “right of assembly.” Their right would be the property right of using their money in an effort to buy or rent space on which to make their demonstration, and they could do so only if the owner of the street agreed to the deal.

Let us consider, finally, the classic case that is supposed to demonstrate that individual rights can never be absolute but must be limited by “public policy”: Justice Holmes’ famous dictum that no man can have the right to cry “fire” in a crowded theater. This is supposed to show that freedom of speech cannot be absolute. But if we cease dealing with this alleged human right and seek for the property rights involved, the solution becomes clear, and we see that there is no need at all to weaken the absolute nature of rights. For the person who falsely cries “fire” must be either the owner (or the owner’s agent) or a guest or paying patron. If he is the owner, then he has committed fraud upon his customers. He has taken their money in exchange for a promise to put on a motion picture, and now, instead, he disrupts the performance by falsely shouting “fire” and creating a disturbance among the patrons. He has thus willfully defaulted on his contractual obligation and has therefore violated the property rights of his patrons.

Suppose, on the other hand, that the shouter is not the owner, but a patron. In that case, he is obviously violating the property right of the theater owner (as well as the other patrons). As a guest, he is on the property on certain terms, and he has the obligation of not violating the owner’s property rights by disrupting the performance that the owner is putting on for the patrons. The person who maliciously cries “fire” in a crowded theater, therefore, is a criminal, not because his so-called “right of free speech” must be pragmatically restricted on behalf of the so-called “public good,” but because he has clearly and obviously violated the property rights of another human being. There is no need, therefore, of placing limits upon these rights.

Since this is a praxeological and not an ethical treatise, the aim of this discussion has not been to convince the reader that property rights should be upheld. Rather, we have attempted to show that the person who does wish to construct his political theory on the basis of “rights” must not only discard the spurious distinction between human rights and property rights, but also realize that the former must all be absorbed into the latter. - Murray Rothbard

Tuesday, December 20, 2011

How Fascism Kills the American Dream

Everyone knows that the term fascist is a pejorative, often used to describe any political position a speaker doesn’t like. There isn’t anyone around who is willing to stand up and say, “I’m a fascist; I think fascism is a great social and economic system.”
But I submit that if they were honest, the vast majority of politicians, intellectuals and political activists would have to say just that.
Fascism is the system of government that cartelizes the private sector, centrally plans the economy to subsidize producers, exalts the police state as the source of order, denies fundamental rights and liberties to individuals and makes the executive state the unlimited master of society.
This describes mainstream politics in America today. And not just in America. It’s true in Europe, too. It is so much part of the mainstream that it is hardly noticed anymore.
It is true that fascism has no overarching theoretical apparatus. There is no grand theorist like Marx. That makes it no less real and distinct as a social, economic and political system. Fascism also thrives as a distinct style of social and economic management. And it is as much of a threat to civilization as full-blown socialism, or more.
This is because its traits are so much a part of life — and have been for so long — that they are nearly invisible to us.
If fascism is invisible to us, it is truly the silent killer. It fastens a huge, violent, lumbering state on the free market that drains its capital and productivity like a deadly parasite on a host. This is why the fascist state has been called the vampire economy. It sucks the economic life out of a nation and brings about a slow death of a once-thriving economy.
Let me just provide a recent example.
The Decline
The papers last week were filled with the first sets of data from the 2010 U.S. Census. The headline story concerned the huge increase in the poverty rate. It is the largest increase in 20 years, and now up to 15%.
But most people hear this and dismiss it, probably for good reason. The poor in this country are not poor by any historical standard. They have cell phones, cable TV, cars, lots of food and plenty of disposable income. What’s more, there is no such thing as a fixed class called the poor. People come and go, depending on age and life circumstances. Plus, in American politics, when you hear kvetching about the poor, everyone knows what you’re supposed to do: Hand the government your wallet.
Buried in the report is another fact that has much more profound significance. It concerns median household income in real terms.
What the data have revealed is devastating. Since 1999, median household income has fallen 7.1%. Since 1989, median family income has been largely flat. And since 1973, when the gold standard ended, it has hardly risen at all. The great wealth-generating machine that was once America is failing.
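To make the phrase "in real terms" concrete, here is a hedged sketch with made-up numbers (these are illustrative, not the actual Census or CPI figures): nominal income is deflated by a price index before comparing across years, so income can rise on paper while falling in purchasing power.

```python
# Hypothetical sketch of a real-income comparison. All numbers are
# invented for illustration; they are not Census Bureau data.

def real_income(nominal, cpi, base_cpi):
    """Express a nominal income in base-year dollars."""
    return nominal * base_cpi / cpi

# Suppose nominal income rose from 42,000 to 50,000 while the price
# index rose from 100 to 128 over the same period.
base = real_income(42_000, cpi=100, base_cpi=100)   # 42,000.0
later = real_income(50_000, cpi=128, base_cpi=100)  # 39,062.5

change = (later - base) / base
print(f"Real change: {change:+.1%}")  # Real change: -7.0%
```

Despite a nominal gain of 8,000, purchasing power in this toy example falls about 7 percent, which is the sense in which the census figures are "devastating."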
No longer can one generation expect to live a better life than the previous one. The fascist economic model has killed what was once called the American Dream. And the truth is, of course, even worse than the statistic reveals. You have to consider how many incomes exist within a single household to make up the total income. After World War II, the single-income family became the norm. Then the money was destroyed, and American savings were wiped out, and the capital base of the economy was devastated.
It was at this point that households began to struggle to stay above water. The year 1985 was the turning point. This was the year that it became more common than not for a household to have two incomes, rather than one. Mothers entered the work force to keep family income floating.
The intellectuals cheered this trend, as if it represented liberation, shouting hosannas that all women everywhere are now added to the tax rolls as valuable contributors to the state’s coffers. The real cause is the rise of fiat money that depreciated the currency, robbed savings and shoved people into the work force as taxpayers.
This story is not told in the data alone. You have to look at the demographics to discover it.
This huge demographic shift, essentially, bought the American household another 20 years of seeming prosperity, though it is hard to call it that, since there was no longer any choice about the matter. If you wanted to keep living the dream, the household could no longer get by on a single income.
But this huge shift was merely an escape hatch. It bought 20 years of slight increases before the income trend flattened again. Over the last decade, we are back to falling. Today, median family income is only slightly above where it was when Nixon wrecked the dollar, imposed wage and price controls, created the EPA, and entrenched the whole apparatus of the parasitic welfare-warfare state.
Yes, this is fascism, and we are paying the price. The dream is being destroyed.
The talk in Washington about reform, whether from Democrats or Republicans, is like a bad joke. They talk of small changes, small cuts, commissions they will establish, curbs they will make in 10 years. It is all white noise. None of this will fix the problem. Not even close.
The problem is more fundamental. It is the quality of the money. It is the very existence of 10,000 regulatory agencies. It is the whole assumption that you have to pay the state for the privilege to work. It is the presumption that the government must manage every aspect of the capitalist economic order. In short, it is the total state that is the problem, and the suffering and decline will continue so long as the total state exists.
Lew Rockwell

Mineweb.com - The world's premier mining and mining investment website Tsunami intact: gold to rise to $3,000+ by mid-year - Goldrunner - INDEPENDENT VIEWPOINT | Mineweb


Saturday, December 17, 2011

"Wages Must Fall!": What All Good Keynesians Should Say

When Keynesians want to gloat, they often point to the overwhelming empirical evidence in favor of nominal wage rigidity. For the latest example, see Krugman on the Irish labor market. Their unemployment is 14.5%, but the nominal wage index has only fallen by about 2.5%. Krugman's conclusion:
It is really, really hard to cut nominal wages, which is why reliance on "internal devaluation" is a recipe for stagnation and disaster.
The gloating is easy to understand. After all, nominal wage rigidity is the driving assumption of the Keynesian model. Unemployment is just a labor surplus; since wages are the price of labor, the fundamental cause of unemployment has to be excessive wages. And as long as the wage rigidity is nominal, you can neutralize it by printing money or otherwise boosting demand.

What's hard to understand, though, is Keynesian neglect of - if not outright hostility to - the logical implication of their argument: Wages must fall! If they're right about nominal wage rigidity, it seems like "Wages must fall!" would be the mantra of all good Keynesians. But few words are less likely to escape their lips.

Why would this be so?

1. Keynesians could say that nominal wage rigidity is such an intractable problem there's no point discussing it. That's why Krugman emphasizes that "Ireland is supposed to have flexible markets -- remember, before the crisis it was hailed as an example of successful structural reform." If wages won't even fall in laissez-faire Ireland, what hope does the rest of the world have?

There are two big problems with this story. (a) Even if it's true, Keynesians should still militantly oppose any government policy - like the employer health care mandate - that increases labor costs. (b) Government doesn't face a binary choice between conventional labor market regulation and laissez-faire. There's a third choice: Low-wage interventionism. If wages won't adjust on their own, why don't Keynesians ask government to actively push them down? If that sounds too brutal, see Singapore for clever ways to numb the blow.

2. Keynesians could say that monetary and fiscal policy are easier to promote than wage cuts. But Keynesians are the first to insist that fiscal policy is a valuable supplement to monetary policy. Why not hail wage cuts as a valuable supplement to both? At minimum, Keynesians should heatedly resist any government policy that pushes labor costs in the wrong direction - and remind us that "wrong" = up.

3. Keynesians could - and often do - retreat to the view that wage flexibility is a self-defeating solution to the problem of wage rigidity. The idea is that wage cuts reduce demand, which in turn exacerbates unemployment.

But this argument is full of holes. As I've pointed out before, there are strong reasons to think that wage cuts will increase aggregate demand, making this solution doubly attractive. Consider: Labor income equals wages multiplied by hours worked, so the effect on labor income is ambiguous; and as a matter of pure arithmetic, lower wages imply higher profit income. In any case, if nominal wage cuts really are as rare as a blue moon, what makes Keynesians so sure that wage cuts would backfire if tried? Without lots of empirical counter-examples, they have every reason to stick to the common sense position: "If wage rigidity is the cause of unemployment, wage flexibility is the cure."
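The arithmetic in this paragraph can be sketched with hypothetical numbers (the wage, hours, and demand responses below are invented for illustration): labor income is wage times hours, so whether a wage cut lowers labor income depends entirely on how strongly employment responds.

```python
# Sketch of the labor-income arithmetic above. Hypothetical numbers:
# labor income = wage x hours, so a wage cut's effect on labor income
# is ambiguous, depending on the elasticity of labor demand.

def labor_income(wage, hours):
    return wage * hours

# Baseline: wage of 20, employment of 1,000 hours.
before = labor_income(20, 1_000)      # 20,000

# Elastic labor demand: a 10% wage cut raises hours by 15%.
elastic = labor_income(18, 1_150)     # 20,700 -> labor income rises

# Inelastic labor demand: the same cut raises hours by only 5%.
inelastic = labor_income(18, 1_050)   # 18,900 -> labor income falls

print(before, elastic, inelastic)     # 20000 20700 18900
```

And in either case output is produced at lower unit labor cost, so profit income rises as a matter of arithmetic, which is the "doubly attractive" point in the text.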

At this point, Keynesians could just bite the bullet: "Wages must fall!" But in my experience they don't - and I don't think they're going to start now. The reason, I'm afraid, is politics. Keynesians lean left. They don't want to say, "Wages must fall!" They don't want to think it. "Wages must fall!" sounds reactionary - a thinly-veiled reproach to centuries of anti-capitalist intellectuals and militant unions. After all, doesn't it mean that every "pro-labor" regulation and "victory for the workers" has an ugly downside - more workers unable to find any job at all?

Keynesians are right to ridicule people who deny the reality of nominal wage rigidity. But they'd be a lot more persuasive if they put leftist qualms aside and focused on the logic of their own model. Keynesians have every reason to rant against excessive wages. They have every reason to rant against regulation that increases labor costs. They have every reason to rant against unions. And there hasn't been a better time to rant since the Great Depression. Oh my Keynesian brothers and sisters, let us rant together.

Monday, December 12, 2011

How Politicians Wreck the World: Lesson #425,689,231

One of the coolest aspects of modern life is about to come to an end. I’m speaking of the great innovation in the last 10 years in which vending machines accept credit and debit cards. No more fishing in our pockets for quarters, dimes and nickels. No more having to flatten out your $1 bills so that they will fit in just the right way and not be coughed up by the bill acceptor. Instead, you feed your card in, it is charged and you get a handy statement at the end of the month that makes sense of your spending habits.
Merchants have grown ever more willing to accept credit and debit cards for small transactions. This is how Redbox movies has made its profits and put new movies in high quality at nearly everyone’s fingertips. It is easier to get a movie than a value meal. It’s true at the convenience store too. You don’t have cash, so you pull out the card to pay for the bottle of juice or the Snickers bar.
Online it is true too: Buy a small thing or donate to a small charity with micropayments. This is just part of modern life — something we have learned to love and take for granted. It’s wonderful and innovative. It makes our lives better and the world a more beautiful place.
How could it end? Get the politicians involved. And they have gotten involved and it is all coming to an end. It is ending through a circuitous route that takes a few minutes to understand. So let me explain this.
As part of a political "crackdown" on the financial world, the Dodd-Frank Act passed last year caps the fees that credit card companies can charge merchants at 21 cents per transaction. The geniuses in Congress figured that this would save money for consumers, because the average fee is actually 44 cents. The masters of the universe figured that passing a law could make the world right. Oh, how necessary are their powers and privileges!
But you know what? The world is a bit more complex than that. What actually happened to make small transactions profitable was that companies would charge a high fee for large tickets and use that revenue to subsidize the costs of smaller transactions. Smaller transactions were not averaging 21 cents. Instead, they cost 6 or 7 cents. This was made possible by the high fees that the politicos decided to legislate out of existence.
This makes financial sense in many ways. Card companies have every reason to maximize the number of institutions that use their services. Spreading out the variable costs according to a heterogeneous model does precisely this. It works, as all things in the market tend to work.
Maybe you can figure out already what is happening. If companies can no longer subsidize small transactions, they have to spread the revenue model among everyone equally. No longer able to charge 44 cents, they can now charge only 21 cents. But transactions that used to cost 6 cents are also going up to 21 cents. This is intolerable for small merchants.
An immediate effect is that renting from Redbox will go up 20 cents, to $1.20. That is politicians picking your pocket. Many merchants are already declining to accept credit cards at all for purchases under $10. It’s their right. It’s their choice. If they can’t make it work financially, they will end the practice. For other small merchants, like convenience stores and candy stores, this is nothing short of catastrophic.
Vending machines? Enjoy it while it lasts. Many of those machines will be shut down, or else the price of a can of Coke will have to go up 20%. You might soon have to go back to fishing for change and flattening out dollar bills.
Incredible, isn’t it? Yes it is. Here is a piece of legislation that was supposedly designed to help you and me. Heaven protect us from their help!
Look what they have done. They have smashed one of the great emerging conveniences of modern life. How many small donations will no longer be made? How many micropayment widgets and systems will now go belly up? What kind of charitable projects and small web-based businesses will not come into existence because they can no longer afford the costs of doing business?
And all of this is happening in these terrible economic times. It's been blow after blow, delivered by the political class that claims to be stimulating the economy. To be sure, the credit card companies warned that this would happen. They said it would kill their business model. But the politicians are inclined to treat every complaint by business as a lie. So they dismissed it. They figured, hey, we put a price cap on something and the world obeys our dictate, so what's the downside?
The downside emerges only later. And hardly anyone will really understand the cause. All they know is that the price of movie rentals will go up, and they blame business. Everyone blames business for everything. Meanwhile, these puffed-up mini-dictators are hiding in the corner hoping that no one notices the mess that they make of the world.
Here is an archetype of what is often called “unintended consequences” of government legislation. This is just one tiny piece of a much-larger puzzle. Multiply this many millions of times and you gain some insight into why the world is such a mess. The politicians try to fix it and it only gets worse. Here is a general rule: Politicians should do nothing, ever, except repeal their rotten attempts to make our lives better.
Jeffrey Tucker

Thursday, December 8, 2011

The Accelerator and Say's Law

Economists, like women, are not immune to the dictates of fashion. One such dictate in vogue among post-Keynesians is the accelerator, which enjoyed similar popularity in the early 1920s. At least a partial reason for the renewed popularity of the accelerator is that it forms an integral part of the General Theory.[1]
The acceleration doctrine holds that a temporary increase in consumer demand sets in motion an accelerated "derived demand" for capital goods. This action, according to adherents of the doctrine, explains at least part of the causation of the business cycle. As evidence supporting this theory, accelerationists point to boom-and-bust, feast-and-famine conditions prevalent in capital-goods industries.
A typical illustration of the acceleration principle follows. Assume a "normal" annual demand for a certain consumer good at 500,000 units. Production is accomplished through 1,000 durable units of capital goods; capacity of each capital unit: 500 consumer units per year; life of each unit: 10 years. Then assume a 10 percent increase in consumer demand. Thus:
                          Annual Consumer   Capital Goods   Annual Capital-Goods
                          Demand            (stock)         Demand ("derived")
  "normal year"           500,000           1,000           100 (replacements)
  next yr. +10%           550,000           1,100           200 (replacements plus new)
  3rd yr. (new "normal")  550,000           1,100           100 (replacements)
Conclusion: 10 percent increase in consumer demand led to 100 percent increase in capital demand in same year but to 50 percent decrease in capital demand in following year.
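The table's arithmetic can be restated as a short sketch (same hypothetical numbers as the illustration above): each year's capital-goods orders are the replacements falling due plus any net new units needed to serve consumer demand.

```python
# The accelerator illustration in code, using the article's numbers:
# a one-time 10% rise in consumer demand doubles capital-goods orders,
# then halves them again once demand merely stabilizes.

CAPACITY_PER_UNIT = 500  # consumer units each capital unit produces per year
REPLACEMENTS = 100       # one tenth of the original 1,000-unit stock wears
                         # out each year (10-year life); the new units added
                         # in year 2 do not fall due for another decade

def capital_goods_orders(consumer_demand, existing_stock):
    required_stock = consumer_demand // CAPACITY_PER_UNIT
    net_new = max(0, required_stock - existing_stock)
    return REPLACEMENTS + net_new, max(existing_stock, required_stock)

orders, stock = [], 1_000
for demand in (500_000, 550_000, 550_000):  # normal, +10%, new normal
    order, stock = capital_goods_orders(demand, stock)
    orders.append(order)

print(orders)  # [100, 200, 100]
```

Orders go 100, 200, 100: a 100 percent rise followed by a 50 percent fall, exactly the feast-and-famine pattern the accelerationists cite.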
The argument against the acceleration doctrine simply shows so many unreal assumptions and a vital non sequitur as to nullify any validity in the doctrine whatsoever. An analysis of these objections follows.

1. Rigid Specialization in Capital-Goods Industries

Accelerationists pose their doctrine on the basis of a given capital-goods industry supplying equipment for a given consumer-goods industry and no other. Thus a decrease in consumer demand or even a falling-off in its rate of growth immediately cuts off part of the capital-goods market, and the "famine" phase of the capital-goods industry begins.
Yet where is the capital-goods industry so rigidly specialized as to preclude its serving other markets, with or without some conversion of its facilities? Are we to presume that businessmen under the pressure of overhead and profit maximization will twiddle their thumbs waiting for their consumer demand to "reaccelerate"? It is clear that accelerationists deny or ignore convertibility of facilities and substitutability of markets.
Within many capital-goods industries, trends of diversification and complementarity are evident. Examples: A machine tool manufacturer that has undertaken lines of construction and textile equipment; a basic chemical producer that has engaged in the manufacture of home clotheswasher and dishwasher detergents. These trends break down the "industry" classifications on which the accelerator is based.

2. No Unutilized Capacity in the Consumer-Goods Industry

Holders of the acceleration doctrine assume that the consumer-goods industry is operating at the extensive margin of production and that no intensive possibilities for greater production exist.
But very few consumer-goods industries, typically, operate at constant peak capacity. To do so is generally to operate beyond the point of optimum efficiency as well as beyond the point of maximum profit. The usual case then, other than during wartime, is that an industry operates with some unutilized capacity, some "slack." Normally this unutilized capacity is to be found among the marginal and submarginal producers, and it is these producers which could and probably would absorb any increase in consumer demand — without, of course, the purchase of new equipment.
Yet even the successful and efficient producer would likely consider other means of absorbing higher consumer demand before committing himself to more equipment and greater overhead. For example, he could expand the existing labor force, resort to overtime, add one or two additional shifts, subcontract work in overloaded departments, and so on. That such alternatives are feasible without more equipment is evidenced by the experience of even the most efficient firms in the utilization of their capital equipment. Examples: A West Coast airplane manufacturer found his gear-cutting equipment in use only 16 percent of the time; a New York newspaper plant utilized its presses only 11 percent of the time. The concept of 100 percent utilization of all capital equipment is not tenable.

3. Automaton Role for Entrepreneurs

Accelerationists share a danger common to all holistic and macro approaches to economic problems — namely, the submergence of individual entrepreneurial decisions (human action) into a constant factor within a pat formula. Such treatment implies irrationality or sheer impulsiveness on the part of entrepreneurs. Boulding described the situation thus:
The picture of the firm on which much of our analysis is built is crude in the extreme, and in spite of recent refinements there remains a vast gap between the elegant curves of the economist and the daily problems of a flesh-and-blood executive.[2]
Accelerationists argue that a temporary rise in consumer demand automatically calls into being additional capital goods. If this were true, it would follow that entrepreneurs in capital-goods industries witlessly expand their capacity and thereby commit themselves to greater overhead without regard to future capital-goods demand.
True, entrepreneurs can and do err in gauging future demand. But the notion of automatic response to any rise in demand, on the order of the conditioned-reflex salivation of Pavlov's dogs, is not warranted. Added capacity is a calculated risk undertaken less in response to increased current demand than in anticipation of future demand. That anticipation, in turn, is likely to be based on market research, price comparison, population studies, cost analysis, political stability, and so on, rather than on impulse.

4. Static Technology

It is not surprising that the accelerator perhaps reached the zenith of its popularity when professional journals were replete with terms like "secular stagnation" and "technological frontier." (Nowadays the term is "automation." Apparently we have moved from the one extreme of too little technology to the opposite extreme of too much.) Such heavy-handed treatment of technology does not coincide with experience. Science and invention do not hibernate during depressions. Du Pont introduced both Nylon and Cellophane during the 1930s.
Adherents of the acceleration principle must either minimize or ignore the impact of technology on rising productivity, for, after all, a strict ratio of capital-goods to consumer-goods output must be maintained to substantiate the action of the accelerator. Technology, however, can and does obviate such ratios. Technological advances increase the unit output of given capital goods not only through superior technical design but also through improved fuels, refined raw materials, time-and-motion studies, rearranged layout and production flow, and so on.
While the growth of technology is somewhat irregular, there can be no question of its progression. Progression tends to "accelerate" the obsolescence component of depreciation and thereby crimps the acceleration model, which, ceteris paribus, ignores the unpredictable dynamics of technology.[3]

5. Arbitrary Time Periods

Accelerationists must use time as a frame of reference for their doctrine. The most frequent time period used is a year. Such a time period, however, implies an even spread of the increase (or the decrease) of consumer demand in the time period. Thus a spasmodic strengthening and weakening of demand within the time period could distort the artificial taxonomies of the accelerator.
For example, a January–December period may carry one peak demand, whereas a July–June period may yield two peak demands. An accelerationist may read the first period as having an 8 percent increase and the second as having a 10 percent increase, which, in the long run, may average out to 9 percent or some other figure.
Moreover, within a time period the accelerationist assumes a fixed relationship between consumer goods and capital goods. Quite apart from the problem of technological advances, were such a fixed relationship to exist, it would necessarily mean that the production cycles of both sets of goods were perfectly synchronized. This, however, is rarely the case. Consumer goods generally have a short cycle; capital goods, a long cycle. Thus, current capital-goods production may be based on orders originating in an earlier "period." Two consecutive increases in consumer demand could conceivably be followed by a decrease, which may well mean that the latest order for capital goods would be cancelled. The flow of goods from the capital pipeline is not irrevocable.
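The period-choice problem can be made concrete with a small sketch. The monthly figures below are invented purely for illustration (a steady trend plus one seasonal bump); the point is only that the same underlying series yields a different measured "annual growth" depending on where the 12-month window is drawn.

```python
# Invented monthly consumer-demand series: a steady trend plus a
# seasonal bump at month indices 10 and 22. Thirty months allow two
# full 12-month windows at either offset.
demand = [100 + m + (20 if m % 12 == 10 else 0) for m in range(30)]

def annual_growth(series, start):
    """Percent growth between consecutive 12-month windows beginning
    at `start`, rounded to one decimal place."""
    year1 = sum(series[start:start + 12])
    year2 = sum(series[start + 12:start + 24])
    return round((year2 / year1 - 1) * 100, 1)

# Same data, two accounting conventions:
print(annual_growth(demand, 0))  # "calendar-year" windows -> 11.2
print(annual_growth(demand, 6))  # "mid-year" windows      -> 10.6
```

Neither figure is more "correct" than the other; the difference is an artifact of the arbitrary period boundary, which is precisely the objection raised above.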

6. Implicit Denial of Say's Law

Previous objections to the acceleration doctrine were of the "other-things-are-not-equal" variety. In short, with so many independent variables, ceteris paribus would not hold.
This objection — the implicit denial of Say's law of markets — is more fundamental. If it is valid, it would strike at the heart of the acceleration principle and reduce it to a non sequitur.
According to Say's law, the source of purchasing power lies within production — i.e., supply creates its own demand — and therefore generalized overproduction or underconsumption is not possible. Barring external distortions to the economy, such as war or drought, Say's law is operative under two conditions — the flexibility of prices and the neutrality of money. Thus it is not astonishing that a major accelerationist like Keynes, who shunned price flexibility and upheld inflation, should attempt a refutation of Say's law and resurrect the dead body of underconsumption, rebaptized as the "consumption function" or "the propensity to consume."
If it is true, as accelerationists claim, that a rise in consumer demand will thereby create a demand for capital goods, then it must be explained what causes the rise in consumer demand in the first place. Should accelerationists concede that the rise is due to capital — or as Böhm-Bawerk put it, "the technical superiority of roundabout production" — they would then be forced to admit, logically, that they have put the cart before the horse, that the growth of capital preceded the growth of demand.
Indeed, if demand could arise without prior production to give it effectiveness, then we should witness the overnight industrialization of India, where such astronomical "consumer demand" exists as to induce the full flowering of the accelerator.
Say's law not only points to the fallacy of the accelerator but to its corollary, "derived demand." There is a germ of truth in "derived demand" — "primary" consumer demand does affect "secondary" capital demand. But the consecutive sequence should be reversed. The effect of consumer demand on capital is not demand for capital per se. Capital is always in demand as long as time preference exists — as long as capital yields the reward of interest. Rather, the effect of "derived demand" will be, if strong enough, merely to change the form of capital goods, no more. If not otherwise impeded, capital will always flow to the most urgent of the least satisfied demands. The point is that capital accumulation — saving and investment — must come before "derived demand." So-called derived demand merely shifts already existing productive resources from present applications to alternative but more rewarding applications.
Insofar as the acceleration explanation of the business cycle is concerned, accelerationists view deceleration with as much alarm as acceleration. The dilemma was stated by Samuelson:
It is easy to see that in the acceleration principle we have a powerful factor for economic instability. We have all heard of situations where people have to keep running in order to stand still. In the economic world, matters may be worse still: the system may have to be kept running at an ever faster pace just in order to stand still.[4]
To maintain such an argument, Samuelson and other accelerationists must discount the fact that a cut in consumer demand in one line releases consumer demand for other lines. Thus, the change in the composition of consumer demand releases factors engaged in certain suspended lines of capital-goods production for new lines of endeavor. That this would cause frictional unemployment of factors is not denied, but frictional unemployment is far less of a problem than generalized unemployment. The notion of ever-accelerating consumer demand to achieve stability within its related capital-goods industry thus loses sight of the interchangeability of factors.
The essence of capitalism, as in life, is change. While some industries may be in decline, others will be in ascendancy. Capital is not eternally fixed; it can be liquidated and "recirculated." Nor does capital idly wait for consumer demand to "reaccelerate." Disinvestment and reinvestment, business mortality and business birth, industry expansion and industry contraction constantly adjust the supply and form of capital to the demand for consumer goods. Samuelson overlooks the dynamics of capital in his essentially static, timeless acceleration thesis.
Say's law places production as the controlling factor over consumption. The accelerator reverses this order. Thus accelerationist Keynes sought to accelerate consumer demand by having the unemployed uselessly dig holes or build pyramids, the important thing being to put "purchasing power" in the hands of spenders. Productionless "purchasing power," according to Say's law, is a contradiction in terms; it is nothing but inflation. In short, the false premise of "derived demand" in the acceleration principle has led to other false premises.
Four findings spring from this article. One, the accelerator is groundless as a tool of economic analysis. Two, Say's law has yet to meet an effective refutation. Three, acceptance of the acceleration doctrine leads to false conclusions in other areas of economics. And four, accelerationists must look elsewhere for an answer to the business cycle.
While there is evidence that capital-goods industries do suffer wide extremes of business activity during the course of the business cycle, it is also true that consumer-goods industries undergo much the same cycle, even if their amplitudes are smaller. That there is correlation between the two phenomena is not denied. But correlation is not causation. This is the heart of the error in the accelerator.

Wednesday, December 7, 2011

Tax Rates, Inequality and the 1%

A recent report from the Congressional Budget Office (CBO) says, "The share of income received by the top 1% grew from about 8% in 1979 to over 17% in 2007."
This news caused quite a stir, feeding the left's obsession with inequality. Washington Post columnist Eugene Robinson, for example, said this "jaw-dropping report" shows "why the Occupy Wall Street protests have struck such a nerve." The New York Times opined that the study is "likely to have a major impact on the debate in Congress over the fairness of federal tax and spending policies."
But here's a question: Why did the report stop at 2007? The CBO didn't say, although its report briefly acknowledged—in a footnote—that "high income taxpayers had especially large declines in adjusted gross income between 2007 and 2009."
No kidding. Once these two years are brought into the picture, the share of after-tax income of the top 1% by my estimate fell to 11.3% in 2009 from the 17.3% that the CBO reported for 2007.
The larger truth is that recessions always destroy wealth and small business incomes at the top. Perhaps those who obsess over income shares should welcome stock market crashes and deep recessions because such calamities invariably reduce "inequality." Of course, the same recessions also increase poverty and unemployment.
The latest cyclical destruction of top incomes has been unusually deep and persistent, because fully 43.7% of top earners' incomes in 2007 were from capital gains, dividends and interest, with another 17.1% from small business. Since 2007, capital gains on stocks and real estate have often turned to losses, dividends on financial stocks were slashed, interest income nearly disappeared, and many small businesses remain unprofitable.
The incomes that top earners report to the IRS have long been tightly linked to the ups and downs of capital gains. Changes in the tax law in 1986, for example, evoked a remarkable response—with capital gains accounting for an extraordinary 47.7% of top earners' reported income as investors rushed to cash in gains before the capital gains tax rose to 28%.
That was obviously temporary, but the subsequent slowdown in realized gains lasted a decade. Taxable gains accounted for only 16.7% of the top earners' income between 1987 and 1996. And the paucity of realized capital gains kept the top earners' share of income flat.
When the top capital gains tax fell to 20% in 1997 and remained there until 2002, realized capital gains rose to 25.4% of the top earners' income, and it explained much of the surge of their income share to 15.5% in 2000. Stock gains were more modest from 2003 to 2007, yet the tax rate on profitable trades was down to 15%, so realized capital gains rose to 26.7% of income reported by the top 1%.
True enough, capital gains are not the whole story, and the CBO's report, "Trends in the Distribution of Household Income Between 1979 and 2007," notes that "business income was the fastest growing source of income for the top 1 percent." But that too was a behavioral response to lower tax rates.
In 1988, business income jumped to 16.5% of the reported income of the top 1%, from 8.2% in 1986. Why? As the CBO explains, "many C corporations . . . were converted to S corporations which pass corporate income through to their shareholders where it is taxed under the individual income tax."
The CBO estimates top incomes from individual tax returns. So it looked like a big spurt in top income in 1988 when thousands of businesses switched to reporting income on individual rather than corporate returns as the top individual tax rate dropped to 28% from 50%.
In reality, it was just a switching between tax forms to take advantage of the lower individual tax rate. Such tax-induced switching from corporate to individual tax forms in 1986-1988 makes it illegitimate to compare top income shares between 1979 and 2007.
After the tax rate on dividends fell to 15% in 2003 from 35%, the share of income reported by top earners from dividends doubled to 8.4% in 2007 from 4.2% in 2002, according to similar tax-based estimates from economists Thomas Piketty and Emmanuel Saez. Top earners held more dividend-paying stocks in taxable accounts rather than in tax-exempt bonds, or they kept dividends in tax-free retirement accounts.
In short, what the Congressional Budget Office presents as increased inequality from 2003 to 2007 was actually evidence that the top 1% of earners report more taxable income when tax rates are reduced on dividends, capital gains and businesses filing under the individual tax code.
If Congress raises top individual tax rates much above the corporate rate, many billions in business income would rapidly vanish from the individual tax returns the CBO uses to measure the income of the top 1%. Small businesses and professionals would revert to reporting most income on corporate tax returns as they did in 1979.
If Congress raises top tax rates on capital gains and dividends, the highest income earners would report less income from capital gains and dividends and hold more tax-exempt bonds. Such tax policies would reduce the share of reported income of the top earners almost as effectively as the recession the policies would likely provoke. The top 1% would then pay a much smaller portion of federal income taxes, just as they did in 1979. And the other 99% would pay more. As the CBO found, "the federal income tax was notably more progressive in 2007 than in 1979."
Mr. Reynolds is a senior fellow with the Cato Institute. This op-ed is adapted from a forthcoming Cato Institute working paper, "The Mismeasurement of Inequality."

Tuesday, December 6, 2011

The Culture of Violence in the American West: Myth versus Reality
By Thomas J. DiLorenzo
This article appeared in the Fall 2010 issue of The Independent Review

Contrary to popular perception, the Old West was much more peaceful than American cities are today. The real culture of violence on the frontier during the latter half of the nineteenth century sprang from the U.S. government’s policies toward the Plains Indians.

The Not-So-Wild, Wild West
In a thorough review of the “West was violent” literature, Bruce Benson (1998) discovered that many historians simply assume that violence was pervasive—even more so than in modern-day America—and then theorize about its likely causes. In addition, some authors assume that the West was very violent and then assert, as Joe Franz does, that “American violence today reflects our frontier heritage” (Franz 1969, qtd. in Benson 1998, 98). Thus, an allegedly violent and stateless society of the nineteenth century is blamed for at least some of the violence in the United States today.
In a book-length survey of the “West was violent” literature, historian Roger McGrath echoes Benson’s skepticism about this theory when he writes that “the frontier-was-violent authors are not, for the most part, attempting to prove that the frontier was violent. Rather, they assume that it was violent and then proffer explanations for that alleged violence” (1984, 270).
In contrast, an alternative literature based on actual history concludes that the civil society of the American West in the nineteenth century was not very violent. Eugene Hollon writes that the western frontier “was a far more civilized, more peaceful and safer place than American society today” (1974, x). Terry Anderson and P. J. Hill affirm that although “[t]he West . . . is perceived as a place of great chaos, with little respect for property or life,” their research “indicates that this was not the case; property rights were protected and civil order prevailed. Private agencies provided the necessary basis for an orderly society in which property was protected and conflicts were resolved” (1979, 10).
What were these private protective agencies? They were not governments because they did not have a legal monopoly on keeping order. Instead, they included such organizations as land clubs, cattlemen’s associations, mining camps, and wagon trains.
So-called land clubs were organizations established by settlers before the U.S. government even surveyed the land, let alone started to sell it or give it away. Because disputes over land titles are inevitable, the land clubs adopted their own constitutions, laying out the “laws” that would define and protect property rights in land (Anderson and Hill 1979, 15). They administered land claims, protected them from outsiders, and arbitrated disputes. Social ostracism was used effectively against those who violated the rules. Establishing property rights in this way minimized disputes—and violence.
The wagon trains that transported thousands of people to the California gold fields and other parts of the West usually established their own constitutions before setting out. These constitutions often included detailed judicial systems. As a consequence, writes Benson, “[t]here were few instances of violence on the wagon trains even when food became extremely scarce and starvation threatened. When crimes against persons or their property were committed, the judicial system . . . would take effect” (1998, 102). Ostracism and threats of banishment from the group, instead of threats of violence, were usually sufficient to correct rule breakers’ behavior.
Dozens of movies have portrayed the nineteenth-century mining camps in the West as hotbeds of anarchy and violence, but John Umbeck discovered that, beginning in 1848, the miners began forming contracts with one another to restrain their own behavior (1981, 51). There was no government authority in California at the time, apart from a few military posts. The miners’ contracts established property rights in land (and in any gold found on the land) that the miners themselves enforced. Miners who did not accept the rules the majority adopted were free to mine elsewhere or to set up their own contractual arrangements with other miners. The rules that were adopted were often consequently established with unanimous consent (Anderson and Hill 1979, 19). As long as a miner abided by the rules, the other miners defended his rights under the community contract. If he did not abide by the agreed-on rules, his claim would be regarded as “open to any [claim] jumpers” (Umbeck 1981, 53).
The mining camps hired “enforcement specialists”—justices of the peace and arbitrators—and developed an extensive body of property and criminal law. As a result, there was very little violence and theft. The fact that the miners were usually armed also helps to explain why crime was relatively infrequent. Benson concludes, “The contractual system of law effectively generated cooperation rather than conflict, and on those occasions when conflict arose it was, by and large, effectively quelled through nonviolent means” (1998, 105).
When government bureaucrats failed to police cattle rustling effectively, ranchers established cattlemen’s associations that drew up their own constitutions and hired private “protection agencies” that were often staffed by expert gunmen. This action deterred cattle rustling. Some of these “gunmen” did “drift in and out of a life of crime,” write Anderson and Hill (1979, 18), but they were usually dealt with by the cattlemen’s associations and never created any kind of large-scale criminal organization, as some have predicted would occur under a regime of private law enforcement.
In sum, this work by Benson, Anderson and Hill, Umbeck, and others challenges with solid historical research the claims made by the “West was violent” authors. The civil society of the American West in the nineteenth century was much more peaceful than American cities are today, and the evidence suggests that in fact the Old West was not a very violent place at all. History also reveals that the expanded presence of the U.S. government was the real cause of a culture of violence in the American West. If there is anything to the idea that a nineteenth-century culture of violence on the American frontier is the genesis of much of the violence in the United States today, the main source of that culture is therefore government, not civil society.
The Real Cause of Violence in the American West
The real culture of violence in the American West of the latter half of the nineteenth century sprang from the U.S. government’s policies toward the Plains Indians. It is untrue that white European settlers were always at war with Indians, as popular folklore contends. After all, Indians assisted the Pilgrims and celebrated the first Thanksgiving with them; John Rolfe married Pocahontas; a white man (mostly Scots, with some Cherokee), John Ross, was the chief of the Cherokees of Tennessee and North Carolina; and there was always a great deal of trade with Indians, as opposed to violence. As Jennifer Roback has written, “Europeans generally acknowledged that the Indians retained possessory rights to their lands. More important, the English recognized the advantage of being on friendly terms with the Indians. Trade with the Indians, especially the fur trade, was profitable. War was costly” (1992, 9). Trade and cooperation with the Indians were much more common than conflict and violence during the first half of the nineteenth century.
Terry Anderson and Fred McChesney relate how Thomas Jefferson found that during his time negotiation was the Europeans’ predominant means of acquiring land from Indians (1994, 56). By the twentieth century, some $800 million had been paid for Indian lands. These authors also argue that various factors can alter the incentives for trade, as opposed to waging a war of conquest as a means of acquiring land. One of the most important factors is the existence of a standing army, as opposed to militias, which were used in the American West prior to the War Between the States. On this point, Anderson and McChesney quote Adam Smith, who wrote that “‘[i]n a militia, the character of the labourer, artificer, or tradesman, predominates over that of the soldier: in a standing army, that of the soldier predominates over every other character.’” (1994, 52). A standing army, according to Anderson and McChesney, “creates a class of professional soldiers whose personal welfare increases with warfare, even if fighting is a negative-sum act for the population as a whole” (52).
The change from militia to a standing army took place in the American West immediately upon the conclusion of the War Between the States. The result, say Anderson and McChesney, was that white settlers and railroad corporations were able to socialize the costs of stealing Indian lands by using violence supplied by the U.S. Army. On their own, they were much more likely to negotiate peacefully. Thus, “raid” replaced “trade” in white–Indian relations. Congress even voted in 1871 not to ratify any more Indian treaties, effectively announcing that it no longer sought peaceful relations with the Plains Indians.
Anderson and McChesney do not consider why a standing army replaced militias in 1865, but the reason is not difficult to discern. One has only to read the official pronouncements of the soldiers and political figures who launched a campaign of extermination against the Plains Indians.
On June 27, 1865, General William Tecumseh Sherman was given command of the Military Division of the Missouri, which was one of the five military divisions into which the U.S. government had divided the country. Sherman received this command for the purpose of commencing the twenty-five-year war against the Plains Indians, primarily as a form of veiled subsidy to the government-subsidized railroad corporations and other politically connected corporations involved in building the transcontinental railroads. These corporations were the financial backbone of the Republican Party. Indeed, in June 1861, Abraham Lincoln, former legal counsel of the Illinois Central Railroad, called a special emergency session of Congress not to deal with the two-month-old Civil War, but to commence work on the Pacific Railway Act. Subsidizing the transcontinental railroads was a primary (if not the primary) objective of the new Republican Party. As Dee Brown writes in Hear That Lonesome Whistle Blow, a history of the building of the transcontinental railroads, Lincoln’s 1862 Pacific Railway Act “assured the fortunes of a dynasty of American families . . . the Brewsters, Bushnells, Olcotts, Harkers, Harrisons, Trowbridges, Lanworthys, Reids, Ogdens, Bradfords, Noyeses, Brooks, Cornells, and dozens of others” (2001, 49), all of whom were tied to the Republican Party.
The federal railroad subsidies enriched many Republican members of Congress. Congressman Thaddeus Stevens of Pennsylvania “received a block of [Union Pacific] stock in exchange for his vote” on the Pacific Railroad bill, writes Brown (2001, 58). The Pennsylvania iron manufacturer and congressman also demanded a legal requirement that all iron used in constructing the railroad be made in the United States.
Republican congressman Oakes Ames of Massachusetts was a shovel manufacturer who became “a loyal ally” of the legislation after he was promised shovel contracts (Brown 2001, 58). A great many shovels must have been required to dig railroad beds from Iowa to California.
Sherman wrote in his memoirs that as soon as the war ended, “My thoughts and feelings at once reverted to the construction of the great Pacific Railway. . . . I put myself in communication with the parties engaged in the work, visiting them in person, and assured them that I would afford them all possible assistance and encouragement” (2005, 775). “We are not going to let a few thieving, ragged Indians check and stop the progress [of the railroads],” Sherman wrote to Ulysses S. Grant in 1867 (qtd. in Fellman 1995, 264).
The chief engineer of the government-subsidized transcontinental railroads was Grenville Dodge, another of Lincoln’s generals during the war with whom Sherman worked closely afterward. As Murray Rothbard points out, Dodge “helped swing the Iowa delegation to Lincoln” at the 1860 Republican National Convention, and “[i]n return, early in the Civil War, Lincoln appointed Dodge to army general. Dodge’s task was to clear the Indians from the designated path of the country’s first heavily subsidized federally chartered trans-continental railroad, the Union Pacific.” In this way, Rothbard concludes, “conscripted Union troops and hapless taxpayers were coerced into socializing the costs of constructing and operating the Union Pacific” (1997, 130).
Immediately after the war, Dodge proposed enslaving the Plains Indians and forcing them “to do the grading” on the railroad beds, “with the Army furnishing a guard to make the Indians work, and keep them from running away” (Brown 2001, 64). Union army veterans were to be the “overseers” of this new class of slaves. Dodge’s proposal was rejected; the U.S. government decided instead to try to kill as many Indians as possible.
In his memoirs, Sherman has high praise for Thomas Clark Durant, the vice president of the Union Pacific Railroad, as “a person of ardent nature, of great ability and energy, enthusiastic in his undertaking” (2005, 775). Durant was also the chief instigator of the infamous Credit Mobilier scandal, one of the most shocking examples of political corruption in U.S. history. Sherman himself had invested in railroads before the war, and he was a consummate political insider, along with Durant, Dodge, and his brother, Senator John Sherman.
President Grant made his old friend Sherman the army’s commanding general, and another Civil War luminary, General Philip Sheridan, assumed command on the ground in the West. “Thus the great triumvirate of the Union Civil War effort,” writes Sherman biographer Michael Fellman, “formulated and enacted military Indian policy until reaching, by the 1880s, what Sherman sometimes referred to as ‘the final solution of the Indian problem’” (1995, 260).
What Sherman called the “final solution of the Indian problem” involved “killing hostile Indians and segregating their pauperized survivors in remote places.” “These men,” writes Fellman, “applied their shared ruthlessness, born of their Civil War experiences, against a people all three [men] despised. . . . Sherman’s overall policy was never accommodation and compromise, but vigorous war against the Indians,” whom he regarded as “a less-than-human and savage race” (1995, 260).
All of the other generals who took part in the Indian Wars were “like Sherman [and Sheridan], Civil War luminaries,” writes Sherman biographer John Marszalek. “Their names were familiar from Civil War battles: John Pope, O. O. Howard, Nelson A. Miles, Alfred H. Terry, E. O. C. Ord, C. C. Augur . . . Edward Canby . . . George Armstrong Custer and Benjamin Grierson” (1993, 380). General Winfield Scott Hancock also belongs on this list.
Sherman and Sheridan’s biographers frequently point out that these men apparently viewed the Indian Wars as a continuation of the job they had performed during the Civil War. “Sherman viewed Indians as he viewed recalcitrant Southerners during the war and newly freed people after: resisters to the legitimate forces of an ordered society” (Marszalek 1993, 380). Marszalek might well have written also that Southerners, former slaves, and Indians were not so much opposed to an “ordered society,” but to being ordered around by politicians in Washington, D.C., primarily for the benefit of the politicians’ corporate benefactors.
“During the Civil War, Sherman and Sheridan had practiced a total war of destruction of property. . . . Now the army, in its Indian warfare, often wiped out entire villages” (Marszalek 1993, 382). Fellman writes that Sherman charged Sheridan “to act with all the vigor he had shown in the Shenandoah Valley during the final months of the Civil War” (1995, 270). Sheridan’s troops had burned and plundered the Shenandoah Valley after the Confederate army had evacuated the area and only women, children, and elderly men remained there (Morris 1992, 183). Even Prussian army officers are said to have been shocked when, after the war, Sheridan boasted to them of his exploits in the Shenandoah Valley.
“[Sherman] insisted that the only answer to the Indian problem was all-out war—of the kind he had utilized against the Confederacy,” writes Marszalek. “Since the inferior Indians refused to step aside so superior American culture could create success and progress, they had to be driven out of the way as the Confederates had been driven back into the Union” (1993, 380).
Sherman’s compulsion for the “extermination” of anyone opposed to turning the U.S. state into an empire reflected the same reasoning he had expressed earlier with regard to his role in the War Between the States. In a letter to his wife early in the war, he declared that his ultimate purpose was “extermination, not of soldiers alone, that is the least part of the trouble, but the people.” Mrs. Sherman responded by expressing her similar wish that the conflict would be a “war of extermination, and that all [Southerners] would be driven like the swine into the sea. May we carry fire and sword into their states till not one habitation is left standing” (qtd. in Walters 1973, 61). Sherman did his best to take his wife’s advice, especially during his famous “march to the sea.” It is little wonder that Indian Wars historian S. L. A. Marshall observes, “[M]ost of the Plains Indian bands were in sympathy with the Southern cause” during the war (1972, 24).
A theme common to all of these Union Civil War veterans is that they considered Indians subhuman and racially inferior to whites, and therefore deserving of extermination if they could not be “controlled” by the white population. Sherman himself thought of the former slaves in exactly the same way. “The Indians give a fair illustration of the fate of the negroes if they are released from the control of the whites,” he once said (qtd. in Kennett 2001, 296). He believed that intermarriage of whites and Indians would be disastrous, as he claimed it was in New Mexico, where “the blending of races had produced general equality, which led inevitably to Mexican anarchy” (qtd. in Kennett 2001, 297).
Sherman described the inhabitants of New Mexico, many of whom were part Mexican (Spanish), part Indian, and part Negro, as “mongrels.” His goal was to eliminate the possibility that such racial amalgamation might occur elsewhere in the United States, by undertaking to effect what Michael Fellman called a “racial cleansing of the land” (1995, 264), beginning with extermination of the Indians.
Sherman, Sheridan, and the other top military commanders were not shy about announcing that their objective was extermination, a term that Sherman used literally on a number of occasions, as he had in reference to Southerners only a few years earlier. He and Sheridan are forever associated with the slogan “the only good Indian is a dead Indian.” “All the Indians will have to be killed or be maintained as a species of paupers,” he said. Sherman announced his objective as being “to prosecute the war with vindictive earnestness . . . till [the Indians] are obliterated or beg for mercy” (qtd. in Fellman 1995, 270). According to Fellman, Sherman gave “Sheridan prior authorization to slaughter as many women and children as well as men Sheridan or his subordinates felt was necessary when they attacked Indian villages” (1995, 271).
In case the media back east got wind of such atrocities, Sherman promised Sheridan that he would run interference against any complaints: “I will back you with my whole authority, and stand between you and any efforts that may be attempted in your rear to restrain your purpose or check your troops” (qtd. in Fellman 1995, 271). In later correspondence, Sherman wrote to Sheridan, “I am charmed at the handsome conduct of our troops in the field. They go in with the relish that used to make our hearts glad in 1864–5” (qtd. in Fellman 1995, 272).
Sherman and Sheridan’s troops conducted more than one thousand attacks on Indian villages, mostly in the winter months, when families were together. The U.S. army’s actions matched its leaders’ rhetoric of extermination. As mentioned earlier, Sherman gave orders to kill everyone and everything, including dogs, and to burn everything that would burn so as to increase the likelihood that any survivors would starve or freeze to death. The soldiers also waged a war of extermination on the buffalo, which was the Indians’ chief source of food, winter clothing, and other goods (the Indians even made fish hooks out of dried buffalo bones and bow strings out of sinews).
By 1882, the buffalo were all but extinct, and the cause was not just the tragedy of the commons. Because buffalo hides could be sold for as much as $3.50 each, an individual hunter would kill more than a hundred a day for as many days as he cared to hunt on the open plain. This exploitation of a “common property resource” decimated the buffalo herds, but the decimation was also an integral part of U.S. military policy aimed at starving the Plains Indians. When a group of Texans asked Sheridan if he could not do something to stop the extermination of the buffalo, he said: “Let them kill, skin, and sell until the buffalo is exterminated, as it is the only way to bring lasting peace and allow civilization to advance” (qtd. in Brown 1970, 265).
The escalation of violence against the Plains Indians actually began in earnest during the War Between the States. Sherman and Sheridan’s Indian policy was a continuation and escalation of a policy that General Grenville Dodge, among others, had already commenced. In 1851, the Santee Sioux Indians in Minnesota sold 24 million acres of land to the U.S. government for $1,410,000 in a typical “trade” (as opposed to raid) scenario. The federal government once again did not keep its side of the bargain, however, reneging on its payment to the Indians (Nichols 1978). By 1862, thousands of white settlers were moving onto the Indians’ land, and a crop failure that year left the Santee Sioux desperate for food. They attempted to take back their land by force in a brief “war,” the suppression of which President Lincoln placed under the charge of General John Pope. Pope announced, “It is my purpose to utterly exterminate the Sioux. . . . They are to be treated as maniacs or wild beasts, and by no means as people with whom treaties or compromises can be made” (qtd. in Nichols 1978, 87).
At the end of the month-long conflict, hundreds of Indians who had been taken prisoner were subjected to military “trials” lasting about ten minutes each, according to Nichols (1978). Most of the adult male prisoners were found guilty and sentenced to death—not based on evidence of the commission of a crime, but on their mere presence at the end of the fighting. Minnesota authorities wanted to execute all 303 who were convicted, but the Lincoln administration feared that the European powers would not view such an act favorably and did not want to give them an excuse to assist the Confederacy in any way. Therefore, “only” 38 of the Indians were hanged, making this travesty of justice still the largest mass execution in U.S. history (Nichols 1978). To appease the Minnesotans who wanted to execute all 303, Lincoln promised them $2 million and pledged that the U.S. Army would remove all Indians from the state at some future date.
One of the most famous incidents of Indian extermination, known as the Sand Creek Massacre, took place on November 29, 1864. There was a Cheyenne and Arapaho village located on Sand Creek in southeastern Colorado. These Indians had been assured by the U.S. government that they would be safe in Colorado. The government instructed them to fly a U.S. flag over their village, which they did, to assure their safety. However, another Civil War “luminary,” Colonel John Chivington, had other plans for them as he raided the village with 750 heavily armed soldiers. One account of what happened appears in the book Crimsoned Prairie: The Indian Wars (1972) by the renowned military historian S. L. A. Marshall, who held the title of chief historian of the European Theater in World War II and authored thirty books on American military history.
Chivington’s orders were: “I want you to kill and scalp all, big and little; nits make lice” (qtd. in Marshall 1972, 37). Then, despite the display of the U.S. flag and white surrender flags by these peaceful Indians, Chivington’s troops “began a full day given over to blood-lust, orgiastic mutilation, rapine, and destruction—with Chivington . . . looking on and approving” (Marshall 1972, 38). Marshall notes that the most reliable estimate of the number of Indians killed is “163, of which 110 were women and children” (39).
Upon returning to his fort, Chivington “and his raiders demonstrated around Denver, waving their trophies, more than one hundred drying scalps. They were acclaimed as conquering heroes, which was what they had sought mainly.” One Republican Party newspaper announced, “Colorado soldiers have once again covered themselves with glory” (qtd. in Marshall 1972, 39).
An even more detailed account of the Sand Creek Massacre, based on U.S. Army records, biographies, and firsthand accounts, appears in Dee Brown’s classic Bury My Heart at Wounded Knee: An Indian History of the American West: “When the troops came up to [the squaws,] they ran out and showed their persons to let the soldiers know they were squaws and begged for mercy, but the soldiers shot them all. . . . There seemed to be indiscriminate slaughter of men, women and children. . . . The squaws offered no resistance. Every one . . . was scalped” (1970, 89). Brown’s narrative gets much more graphic. The effect of such behavior was to eliminate forever the possibility of peaceful relations with these Indian tribes. They understood that they had become the objects of a campaign of extermination. As Brown writes, “In a few hours of madness at Sand Creek, Chivington and his soldiers destroyed the lives or the power of every Cheyenne and Arapaho chief who had held out for peace with the white men” (92). For the next two decades, the Plains Indians would do their best to return the barbarism in kind.
The books by Brown and Marshall show that the kind of barbarism that occurred at Sand Creek, Colorado, was repeated many times during the next two decades. For example, in 1867 General Winfield Scott Hancock ordered Custer to attack a Cheyenne camp with infantry, which Custer did. The attack led Superintendent of Indian Affairs Thomas Murphy to report to Washington that “General Hancock’s expedition . . . has resulted in no good, but, on the contrary, has been productive of much evil” (qtd. in Brown 1970, 157). A report of the attack prepared for the U.S. secretary of the interior concluded: “For a mighty nation like us to be carrying on a war with a few straggling nomads, under such circumstances, is a spectacle most humiliating, and injustice unparalleled, a national crime most revolting, that must, sooner or later, bring down upon us or our posterity the judgment of Heaven” (qtd. in Brown 1970, 157).
As the war on the Cheyenne continued, Custer and his troops apparently decided that to “kill or hang all the warriors,” as General Sheridan had ordered, “meant separating them from the old men, women, and children. This work was too slow and dangerous for the cavalrymen; they found it much more efficient and safe to kill indiscriminately. They killed 103 Cheyenne, but only eleven of them were warriors” (Brown 1970, 169).
Marshall calls Sheridan’s orders to Custer “the most brutal orders ever published to American troops” (1972, 106). This is a powerful statement coming from a man who wrote thirty books on American military history. In addition to ordering Custer to shoot or hang all warriors, even those that surrendered, Sheridan commanded him to slaughter all ponies and to burn all tepees and their contents. “Sheridan held with but one solution to the Indian problem—extermination—and Custer was his quite pliable instrument,” writes Marshall (1972, 106).
One of the oddest facts about the Indian Wars is that Custer famously instructed a band to play an Irish jig called “Garryowen” during the attacks on Indian villages. “This was Custer’s way of gentling war. It made killing more rhythmic,” writes Marshall (1972, 107).
During an attack on a Kiowa village on September 26, 1874, soldiers killed more than one thousand horses and forced 252 Kiowas to surrender. They were thrown into prison cells, where “each day their captors threw chunks of raw meat to them as if they were animals in a cage” (Brown 1970, 270). On numerous occasions, fleeing Indians sought refuge in Canada, where they knew they would be unmolested. Canadians built their own transcontinental railroad in the late nineteenth century, but they did not commence a campaign of extermination against the Indians living in that country as the government did in the United States.
No one denies that the U.S. government killed tens of thousands of Indians, including women and children, during the years from 1862 to 1890. There are various estimates of the number of Indians killed, the highest being that of historian Russell Thornton (1990), who used mostly military records to estimate that about forty-five thousand Indians, including women and children, were killed during the wars on the Plains Indians. It is reasonable to assume that thousands more were maimed and disabled for life and received little or no medical assistance. The thousands of soldiers who participated in the Indian Wars lived in a culture of violence and death that was cultivated by the U.S. government for a quarter of a century.
The culture of violence in the American West of the late nineteenth century was created almost entirely by the U.S. government’s military interventions, which were primarily a veiled subsidy to the government-subsidized transcontinental railroad corporations. As scandals go, the war on the Plains Indians makes the Credit Mobilier affair seem inconsequential.
There is such a thing as a culture of war, especially in connection with a war as gruesome and bloody as the war on the Plains Indians. On this topic, World War II combat veteran Paul Fussell has written: “The culture of war . . . is not like the culture of ordinary peace-time life. It is a culture dominated by fear, blood, and sadism, by irrational actions and preposterous . . . results. It has more relation to science fiction or to absurdist theater than to actual life” (1997, 354). Such was the “culture” the U.S. Army created throughout much of the American West for the quarter century after the War Between the States. It is the “culture” that all military interventions at all times have created, and it contrasts sharply with the predominantly peaceful culture of the stateless civil society on the American frontier during much of the nineteenth century.
Fussell made this statement based on his personal experiences in combat, but it echoes the scholarly writing of Ludwig von Mises (who, let us remember, was also an Austrian army officer with substantial combat experience in World War I): “What distinguishes man from animals is the insight into the advantages that can be derived from cooperation under the division of labor. Man curbs his innate instinct of aggression in order to cooperate with other human beings. The more he wants to improve his material well-being, the more he must expand the system of the division of labor. Concomitantly he must more and more restrict the sphere in which he resorts to military action.” Human cooperation under the division of labor in the civil society “bursts asunder,” Mises wrote, whenever “citizens turn into warriors” and resort to war (1998, 827).
It is not true that all whites waged a war of extermination against the Plains Indians. As noted earlier and as noted throughout the literature of the Indian Wars, many whites preferred the continuation of the peaceful trade and relations with Indians that had been the norm during the first half of the nineteenth century. (Conflicts sometimes occurred, of course, but “trade” dominated “raid” during that era.) Canadians built a transcontinental railroad without a Shermanesque campaign of “extermination” against the Indians in Canada. It is telling that the Plains Indians often sought refuge in Canada when the U.S. Army had them on the run.
The U.S. government dehumanized the Plains Indians, describing them as “wild beasts,” in order to justify slaughtering them, just as Sherman and his wife, among many others, dehumanized Southerners during and after the War Between the States. The same dehumanization by the government’s propaganda machine would eventually target Filipinos, who were killed by the hundreds of thousands at the hands of the U.S. Army during their 1899–1902 revolt against the U.S. conquest of their country barely a decade after the Indian Wars had finally ended. President Theodore Roosevelt “justified” the slaughter of hundreds of thousands of Filipinos by calling them “savages, half-breeds, a wild and ignorant people” (qtd. in Powell 2006, 64). Dehumanization of certain groups of “resisters” at the hands of the state’s propaganda apparatus is a prerequisite for the culture of war and violence that has long been the main preoccupation of the U.S. state.
It was not necessary to kill tens of thousands of Indians and imprison thousands more in concentration camps (“reservations”) for generations in order to build a transcontinental railroad. Nor were the wars on the Plains Indians a matter of “the white population’s” waging a war of extermination. This war stemmed from the policy of the relatively small group of white men who ran the Republican Party (with assistance from some Democrats), which effectively monopolized national politics for most of that time.
These men utilized the state’s latest technologies of mass killing developed during the Civil War and its mercenary soldiers (including the former slaves known as “buffalo soldiers”) to wage their war because they were in a hurry to shovel subsidies to the railroad corporations and other related business enterprises. Many of them profited handsomely, as the Credit Mobilier scandal revealed. The railroad corporations were the Microsofts and IBMs of their day, and the doctrines of neomercantilism defined the Republican Party’s reason for existing (DiLorenzo 2006). The Republican Party was, after all, the “Party of Lincoln,” the great railroad lawyer and a lobbyist for the Illinois Central and other midwestern railroads during his day.
Anderson, Terry, and P. J. Hill. 1979. An American Experiment in Anarcho-capitalism: The Not So Wild, Wild West. Journal of Libertarian Studies 3: 9–29.
Anderson, Terry, and Fred L. McChesney. 1994. Raid or Trade? An Economic Model of Indian-White Relations. Journal of Law and Economics 37: 39–74.
Benson, Bruce. 1998. To Serve and Protect: Privatization and Community in Criminal Justice. New York: New York University Press for The Independent Institute.
Brown, Dee. 1970. Bury My Heart at Wounded Knee: An Indian History of the American West. New York: Holt.
———. 2001. Hear That Lonesome Whistle Blow. New York: Owl Books.
DiLorenzo, Thomas J. 2006. Lincoln Unmasked: What You’re Not Supposed to Know about Dishonest Abe. New York: Crown Forum.
Fellman, Michael. 1995. Citizen Sherman: A Life of William Tecumseh Sherman. Lawrence: University Press of Kansas.
Frantz, Joe B. 1969. The Frontier Tradition: An Invitation to Violence. In The History of Violence in America, edited by Hugh D. Graham and Ted R. Gurr, 127–54. New York: New York Times Books.
Fussell, Paul. 1997. The Culture of War. In The Costs of War: America’s Pyrrhic Victories, edited by John Denson, 351–57. New Brunswick, N.J.: Transaction.
Hollon, W. Eugene. 1974. Frontier Violence: Another Look. New York: Oxford University Press.
Kennett, Lee B. 2001. Sherman: A Soldier’s Life. New York: HarperCollins.
Marshall, S. L. A. 1972. Crimsoned Prairie: The Indian Wars. New York: Da Capo Press.
Marszalek, John F. 1993. Sherman: A Soldier’s Passion for Order. New York: Vintage Books.
McGrath, Roger. 1984. Gunfighters, Highwaymen, and Vigilantes: Violence on the Frontier. Berkeley and Los Angeles: University of California Press.
Mises, Ludwig von. 1998. Human Action. Scholar’s Edition. Auburn, Ala.: Ludwig von Mises Institute.
Morris, Roy. 1992. Sheridan: The Life & Wars of General Phil Sheridan. New York: Vintage Books.
Nichols, David A. 1978. Lincoln and the Indians: Civil War Policy and Politics. Columbia: University of Missouri Press.
Powell, Jim. 2006. Bully Boy: The Truth about Theodore Roosevelt’s Legacy. New York: Crown Forum.
Roback, Jennifer. 1992. Exchange, Sovereignty, and Indian-Anglo Relations. In Property Rights and Indian Economies, edited by Terry Anderson, 5–26. Savage, Md.: Rowman & Littlefield.
Rothbard, Murray N. 1997. America’s Two Just Wars: 1775 and 1861. In The Costs of War: America’s Pyrrhic Victories, edited by John Denson, 119–33. New Brunswick, N.J.: Transaction.
Sherman, William T. 2005. Memoirs. New York: Barnes & Noble.
Thornton, Russell. 1990. American Indian Holocaust and Survival: A Population History Since 1492. Norman: University of Oklahoma Press.
Umbeck, John. 1981. Might Makes Rights: A Theory of the Formation and Initial Distribution of Property Rights. Economic Inquiry 19: 38–59.
Walters, John Bennett. 1973. Merchant of Terror: General Sherman and Total War. New York: Bobbs-Merrill.