Friday, March 30, 2012

The Myth of Natural Monopoly

[Originally published in The Review of Austrian Economics 9 (2), 1996.]
The very term "public utility" … is an absurd one. Every good is useful "to the public," and almost every good … may be considered "necessary." Any designation of a few industries as "public utilities" is completely arbitrary and unjustified.
— Murray Rothbard, Power and Market
Most so-called public utilities have been granted governmental franchise monopolies because they are thought to be "natural monopolies." Put simply, a natural monopoly is said to occur when production technology, such as relatively high fixed costs, causes long-run average total costs to decline as output expands. In such industries, the theory goes, a single producer will eventually be able to produce at a lower cost than any two other producers, thereby creating a "natural" monopoly. Higher prices will result if more than one producer supplies the market.
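The cost structure behind this claim can be illustrated with a small numerical sketch. The figures below are hypothetical, chosen only to show the shape of the curve: when a large fixed cost (say, a distribution network) is spread over more units of output, average total cost falls as output expands.

```python
# Hypothetical cost figures illustrating declining long-run average total cost.
# A large fixed cost (e.g., a distribution network) is spread over more units,
# so average total cost falls as output expands.

FIXED_COST = 1_000_000   # one-time network cost (hypothetical)
MARGINAL_COST = 2.0      # cost per additional unit delivered (hypothetical)

def average_total_cost(quantity: int) -> float:
    """Average total cost = (fixed + variable costs) / output."""
    return (FIXED_COST + MARGINAL_COST * quantity) / quantity

for q in (10_000, 100_000, 1_000_000):
    print(f"output {q:>9,}: ATC = {average_total_cost(q):.2f}")
# prints ATC of 102.00, then 12.00, then 3.00
```

On these stylized numbers, one firm serving the whole market at 1,000,000 units has a far lower unit cost than two firms splitting it at 500,000 each. That is the engineering core of the natural-monopoly claim that the rest of this article disputes.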
Furthermore, competition is said to cause consumer inconvenience because of the construction of duplicative facilities, e.g., digging up the streets to put in dual gas or water lines. Avoiding such inconveniences is another reason offered for government franchise monopolies for industries with declining long-run average total costs.
It is a myth that natural-monopoly theory was developed first by economists, and then used by legislators to "justify" franchise monopolies. The truth is that the monopolies were created decades before the theory was formalized by intervention-minded economists, who then used the theory as an ex post rationale for government intervention. At the time when the first government franchise monopolies were being granted, the large majority of economists understood that large-scale, capital-intensive production did not lead to monopoly, but was an absolutely desirable aspect of the competitive process.
The word "process" is important here. If competition is viewed as a dynamic, rivalrous process of entrepreneurship, then the fact that a single producer happens to have the lowest costs at any one point in time is of little or no consequence. The enduring forces of competition — including potential competition — will render free-market monopoly an impossibility.
The theory of natural monopoly is also ahistorical. There is no evidence of the "natural-monopoly" story ever having been carried out — of one producer achieving lower long-run average total costs than everyone else in the industry and thereby establishing a permanent monopoly. As discussed below, in many of the so-called public-utility industries of the late 19th and early 20th centuries, there were often literally dozens of competitors.

Economies of Scale During the Franchise Monopoly Era

During the late 19th century, when local governments were beginning to grant franchise monopolies, the general economic understanding was that "monopoly" was caused by government intervention, not the free market, through franchises, protectionism, and other means. Large-scale production and economies of scale were seen as a competitive virtue, not a monopolistic vice. For example, Richard T. Ely, cofounder of the American Economic Association, wrote that "large scale production is a thing which by no means necessarily signifies monopolized production."[1] John Bates Clark, Ely's cofounder, wrote in 1888 that the notion that industrial combinations would "destroy competition" should "not be too hastily accepted."[2]
Herbert Davenport of the University of Chicago advised in 1919 that the existence of only a few firms in an industry where there are economies of scale does not "require the elimination of competition,"[3] and his colleague, James Laughlin, noted that even when "a combination is large, a rival combination may give the most spirited competition."[4] Irving Fisher[5] and Edwin R.A. Seligman[6] both agreed that large-scale production produced competitive benefits through cost savings in advertising, selling, and less cross-shipping.
Large-scale production units unequivocally benefited the consumer, according to turn-of-the-century economists. For without large-scale production, according to Seligman, "the world would revert to a more primitive state of well being, and would virtually renounce the inestimable benefits of the best utilization of capital."[7] Simon Patten of the Wharton School expressed a similar view that "the combination of capital does not cause any economic disadvantage to the community. … Combinations are much more efficient than were the small producers whom they displaced."[8]
Like virtually every other economist of the day, Columbia's Franklin Giddings viewed competition much like the modern-day Austrian economists do, as a dynamic, rivalrous process. Consequently, he observed that
competition in some form is a permanent economic process. … Therefore, when market competition seems to have been suppressed, we should inquire what has become of the forces by which it was generated. We should inquire, further, to what degree market competition actually is suppressed or converted into other forms.[9]
In other words, a "dominant" firm that underprices all its rivals at any one point in time has not suppressed competition, for competition is "a permanent economic process."
David A. Wells, one of the most popular economic writers of the late 19th century, wrote that "the world demands abundance of commodities, and demands them cheaply; and experience shows that it can have them only by the employment of great capital upon extensive scale."[10] And George Gunton believed that
concentration of capital does not drive small capitalists out of business, but simply integrates them into larger and more complex systems of production, in which they are enabled to produce … more cheaply for the community and obtain a larger income for themselves. … Instead of concentration of capital tending to destroy competition the reverse is true. … By the use of large capital, improved machinery and better facilities the trust can and does undersell the corporation.[11]
The above quotations are not a selective sample, but rather a comprehensive list. It may seem odd by today's standards, but as A.W. Coats pointed out, by the late 1880s there were only ten men who had attained full-time professional status as economists in the United States.[12] Thus, the above quotations cover virtually every professional economist who had anything to say about the relationship between economies of scale and competitiveness at the turn of the century.
The significance of these views is that these men observed firsthand the advent of large-scale production and did not see it leading to monopoly, "natural" or otherwise. In the spirit of the Austrian School, they understood that competition was an ongoing process, and that market dominance was always necessarily temporary in the absence of monopoly-creating government regulation. This view is also consistent with my own research findings that the "trusts" of the late 19th century were in fact dropping their prices and expanding output faster than the rest of the economy — they were the most dynamic and competitive of all industries, not monopolists.[13] Perhaps this is why they were targeted by protectionist legislators and subjected to "antitrust" laws.
The economics profession came to embrace the theory of natural monopoly after the 1920s, when it became infatuated with "scientism" and adopted a more or less engineering theory of competition that categorized industries in terms of constant, decreasing, and increasing returns to scale (declining average total costs). According to this way of thinking, engineering relationships determined market structure and, consequently, competitiveness. The meaning of competition was no longer viewed as a behavioral phenomenon, but an engineering relationship. With the exception of such economists as Joseph Schumpeter, Ludwig von Mises, Friedrich Hayek, and other members of the Austrian School, the ongoing process of competitive rivalry and entrepreneurship was largely ignored.

How "Natural" Were the Early Natural Monopolies?

There is no evidence at all that at the outset of public-utility regulation there existed any such phenomenon as a "natural monopoly." As Harold Demsetz has pointed out:
Six electric light companies were organized in the one year of 1887 in New York City. Forty-five electric light enterprises had the legal right to operate in Chicago in 1907. Prior to 1895, Duluth, Minnesota, was served by five electric lighting companies, and Scranton, Pennsylvania, had four in 1906. … During the latter part of the 19th century, competition was the usual situation in the gas industry in this country. Before 1884, six competing companies were operating in New York City … competition was common and especially persistent in the telephone industry … Baltimore, Chicago, Cleveland, Columbus, Detroit, Kansas City, Minneapolis, Philadelphia, Pittsburgh, and St. Louis, among the larger cities, had at least two telephone services in 1905.[14]
In an extreme understatement, Demsetz concludes that "one begins to doubt that scale economies characterized the utility industry at the time when regulation replaced market competition."[15]
A most instructive example of the non-existence of natural monopoly in the utility industries is provided in a 1936 book by economist George T. Brown entitled "The Gas Light Company of Baltimore," which bears the misleading subtitle, "A Study of Natural Monopoly."[16] The book presents "the study of the evolutionary character of utilities" in general, with special emphasis on the Gas Light Company of Baltimore, the problems of which "are not peculiar either to the Baltimore company or the State of Maryland, but are typical of those met everywhere in the public utility industry."[17]
The history of the Gas Light Company of Baltimore figures prominently in the whole history of natural monopoly, in theory and in practice, for the influential Richard T. Ely, who was a professor of economics at Johns Hopkins University in Baltimore, chronicled the company's problems in a series of articles in the Baltimore Sun that were later published as a widely sold book. Much of Ely's analysis came to be the accepted economic dogma with regard to the theory of natural monopoly.
The history of the Gas Light Company of Baltimore is that, from its founding in 1816, it constantly struggled with new competitors. Its response was not only to try to compete in the marketplace, but also to lobby the state and local government authorities to refrain from granting corporate charters to its competitors. The company operated with economies of scale, but that did not prevent numerous competitors from cropping up.
"Competition is the life of business," the Baltimore Sun editorialized in 1851 as it welcomed news of new competitors in the gas light business.[18] The Gas Light Company of Baltimore, however, "objected to the granting of franchise rights to the new company."[19]
Brown states that "gas companies in other cities were exposed to ruinous competition," and then catalogues how those same companies sought desperately to enter the Baltimore market. But if such competition was so "ruinous," why would these companies enter new — and presumably just as "ruinous" — markets? Either Brown's theory of "ruinous competition" — which soon came to be the generally accepted one — was incorrect, or those companies were irrational gluttons for financial punishment.
By ignoring the dynamic nature of the competitive process, Brown made the same mistake that many other economists still make: believing that "excessive" competition can be "destructive" if low-cost producers drive their less efficient rivals from the market.[20] Such competition may be "destructive" to high-cost competitors, but it is beneficial to consumers.
In 1880 there were three gas companies in Baltimore that competed fiercely with one another. They tried to merge and operate as a monopolist in 1888, but a new competitor foiled their plans: "Thomas Alva Edison introduced the electric light which threatened the existence of all gas companies."[21] From that point on there was competition between gas and electric companies, all of which incurred heavy fixed costs that led to economies of scale. Nevertheless, no free-market or "natural" monopoly ever materialized.
When monopoly did appear, it was solely because of government intervention. For example, in 1890 a bill was introduced into the Maryland legislature that "called for an annual payment to the city from the Consolidated [Gas Company] of $10,000 a year and 3 percent of all dividends declared in return for the privilege of enjoying a 25-year monopoly."[22] This is the now-familiar approach of government officials colluding with industry executives to establish a monopoly that will gouge the consumers, and then sharing the loot with the politicians in the form of franchise fees and taxes on monopoly revenues. This approach is especially pervasive today in the cable TV industry.
Legislative "regulation" of gas and electric companies produced the predictable result of monopoly prices, which the public complained bitterly about. Rather than deregulating the industry and letting competition control prices, however, public utility regulation was adopted to supposedly appease the consumers who, according to Brown, "felt that the negligent manner in which their interests were being served [by legislative control of gas and electric prices] resulted in high rates and monopoly privileges. The development of utility regulation in Maryland typified the experience of other states."[23]
Not all economists were fooled by the "natural-monopoly" theory advocated by utility industry monopolists and their paid economic advisers. In 1940 economist Horace M. Gray, an assistant dean of the graduate school at the University of Illinois, surveyed the history of "the public utility concept," including the theory of "natural" monopoly. "During the 19th century," Gray observed, it was widely believed that "the public interest would be best promoted by grants of special privilege to private persons and to corporations" in many industries.[24] This included patents, subsidies, tariffs, land grants to the railroads, and monopoly franchises for "public" utilities. "The final result was monopoly, exploitation, and political corruption."[25]
With regard to "public" utilities, Gray records that "between 1907 and 1938, the policy of state-created, state-protected monopoly became firmly established over a significant portion of the economy and became the keystone of modern public utility regulation."[26] From that time on, "the public utility status was to be the haven of refuge for all aspiring monopolists who found it too difficult, too costly, or too precarious to secure and maintain monopoly by private action alone."[27]
In support of this contention, Gray pointed out how virtually every aspiring monopolist in the country tried to be designated a "public utility," including the radio, real estate, milk, air transport, coal, oil, and agricultural industries, to name but a few. Along these same lines, "the whole NRA experiment may be regarded as an effort by big business to secure legal sanction for its monopolistic practices."[28] Those lucky industries that were able to be politically designated as "public utilities" also used the public utility concept to keep out the competition.
The role of economists in this scheme was to construct what Gray called a "confused rationalization" for "the sinister forces of private privilege and monopoly," i.e., the theory of "natural" monopoly. "The protection of consumers faded into the background."[29]
More recent economic research supports Gray's analysis. In one of the first statistical studies of the effects of rate regulation in the electric utilities industry, published in 1962, George Stigler and Claire Friedland found no significant differences in prices and profits of utilities with and without regulatory commissions from 1917 to 1932.[30] Early rate regulators did not benefit the consumer, but were rather "captured" by the industry, as happened in so many other industries, from trucking to airlines to cable television. It is noteworthy — but not very laudable — that it took economists almost 50 years to begin studying the actual, as opposed to the theoretical, effects of rate regulation.
Sixteen years after the Stigler-Friedland study, Gregg Jarrell observed that 25 states substituted state for municipal regulation of electric power ratemaking between 1912 and 1917, the effects of which were to raise prices by 46 percent and profits by 38 percent, while reducing the level of output by 23 percent.[31] Thus, municipal regulation failed to hold prices down. But the utilities wanted an even more rapid increase in their prices, so they successfully lobbied for state regulation under the theory that state regulators would be less pressured by local customer groups than mayors and city councils would be.
These research results are consistent with Horace Gray's earlier interpretation of public utility rate regulation as an anticonsumer, monopolistic, price-fixing scheme.

The Problem of "Excessive Duplication"

In addition to the economies of scale canard, another reason that has been given for granting monopoly franchises to "natural monopolies" is that allowing too many competitors is too disruptive. It is too costly to a community, the argument goes, to allow several different water suppliers, electric power producers, or cable TV operators to dig up the streets. But as Harold Demsetz has observed:
[T]he problem of excessive duplication of distribution systems is attributable to the failure of communities to set a proper price on the use of these scarce resources. The right to use publicly owned thoroughfares is the right to use a scarce resource. The absence of a price for the use of these resources, a price high enough to reflect the opportunity costs of such alternative uses as the servicing of uninterrupted traffic and unmarred views, will lead to their overutilization. The setting of an appropriate fee for the use of these resources would reduce the degree of duplication to optimal levels.[32]
Thus, just as the problem with "natural" monopolies is actually caused by government intervention, so is the "duplication of facilities" problem. It is created by the failure of governments to put a price on scarce urban resources. More precisely, the problem is really caused by the fact that governments own the streets under which utility lines are placed, and that the impossibility of rational economic calculation within socialistic institutions precludes them from pricing these resources appropriately, as they would under a private-property competitive-market regime.
Contrary to Demsetz's claim, rational economic pricing in this case is impossible precisely because of government ownership of roads and streets. Benevolent and enlightened politicians, even ones who have studied at the feet of Harold Demsetz, would have no rational way of determining what prices to charge. Murray Rothbard explained all this more than 25 years ago:
The fact that the government must give permission for the use of its streets has been cited to justify stringent government regulations of 'public utilities,' many of which (like water or electric companies) must make use of the streets. The regulations are then treated as a voluntary quid pro quo. But to do so overlooks the fact that governmental ownership of the streets is itself a permanent act of intervention. Regulation of public utilities or of any other industry discourages investment in these industries, thereby depriving consumers of the best satisfaction of their wants. For it distorts the resource allocations of the free market.[33]
The so-called "limited-space monopoly" argument for franchise monopolies, Rothbard further argued, is a red herring, for how many firms will be profitable in any line of production
is an institutional question and depends on such concrete data as the degree of consumer demand, the type of product sold, the physical productivity of the processes, the supply and pricing of factors, the forecasting of entrepreneurs, etc. Spatial limitations may be unimportant.[34]
In fact, even if spatial limitations do allow only one firm to operate in a particular geographical market, that does not necessitate monopoly, for "monopoly" is "a meaningless appellation, unless monopoly price is achieved," and "all prices on a free market are competitive."[35] Only government intervention can generate monopolistic prices.
The only way to achieve a free-market price that reflects true opportunity costs and leads to optimal levels of "duplication" is through free exchange in a genuinely free market, a sheer impossibility without private property and free markets.[36] Political fiat is simply not a feasible substitute for the prices that are determined by the free market because rational economic calculation is impossible without markets.
Under private ownership of streets and sidewalks, individual owners are offered a tradeoff of lower utility prices for the temporary inconvenience of having a utility company run a trench through their property. If "duplication" occurs under such a system, it is because freely choosing individuals value the extra service or lower prices or both more highly than the cost imposed on them by the inconvenience of a temporary construction project on their property. Free markets necessitate neither monopoly nor "excessive duplication" in any economically meaningful sense.

Competition for the Field

The existence of economies of scale in water, gas, electricity, or other "public utilities" in no way necessitates either monopoly or monopoly pricing. As Edwin Chadwick wrote in 1859, a system of competitive bidding for the services of private utility franchises can eliminate monopoly pricing as long as there is competition "for the field."[37] As long as there is vigorous bidding for the franchise, the results can be both avoidance of duplication of facilities and competitive pricing of the product or service. That is, bidding for the franchise can take place in the form of awarding the franchise to the utility that offers consumers the lowest price for some constant quality of service (as opposed to the highest price for the franchise).
Harold Demsetz revived interest in the concept of "competition for the field" in a 1968 article.[38] The theory of natural monopoly, Demsetz pointed out, fails to "reveal the logical steps that carry it from scale economies in production to monopoly price in the market place."[39] If one bidder can do the job at less cost than two or more,
then the bidder with the lowest bid price for the entire job will be awarded the contract, whether the good be cement, electricity, stamp vending machines, or whatever, but the lowest bid price need not be a monopoly price. … The natural monopoly theory provides no logical basis for monopoly prices.[40]
There is no reason to believe that the bidding process will not be competitive. Hanke and Walters have shown that such a franchise bidding process operates very efficiently in the French water supply industry.[41]
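Chadwick's and Demsetz's point can be sketched in a few lines. This is a stylized simulation with made-up bidder costs, not a model of any actual franchise auction: even when economies of scale mean only one firm will serve the market, rivalry among sealed bids for the exclusive right to serve "the field" drives the winning price down toward the most efficient bidder's cost.

```python
# Stylized sketch of "competition for the field" (hypothetical figures).
# Each bidder names the per-unit price at which it is willing to serve the
# entire market; the franchise goes to the lowest consumer price offered,
# not to the firm paying the highest franchise fee.

bidders = {
    "Firm A": 11.50,  # each firm's lowest sustainable per-unit price,
    "Firm B": 10.75,  # i.e., its average total cost at full-market output
    "Firm C": 12.20,  # (all numbers hypothetical)
}

def award_franchise(bids: dict[str, float]) -> tuple[str, float]:
    """Award the field to the bidder offering consumers the lowest price."""
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

winner, price = award_franchise(bidders)
print(f"{winner} wins the field at a consumer price of {price:.2f}")
```

The design point is that only one firm ends up serving the market, yet the price it may charge was set by rivalry among all the bidders; adding more bidders can only push the winning price down, never up.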

The Natural-Monopoly Myth: Electric Utilities

According to natural-monopoly theory, competition cannot persist in the electric-utility industry. But the theory is contradicted by the fact that competition has in fact persisted for decades in dozens of US cities. Economist Walter J. Primeaux has studied electric utility competition for more than 20 years. In his 1986 book, Direct Utility Competition: The Natural Monopoly Myth, he concludes that in those cities where there is direct competition in the electric utility industry:
  • Direct rivalry between two competing firms has existed for very long periods of time — for over 80 years in some cities;
  • The rival electric utilities compete vigorously through prices and services;
  • Customers have gained substantial benefits from the competition, compared to cities where there are electric utility monopolies;
  • Contrary to natural-monopoly theory, costs are actually lower where there are two firms operating;
  • Contrary to natural-monopoly theory, there is no more excess capacity under competition than under monopoly in the electric utility industry;
  • The theory of natural monopoly fails on every count: competition exists, price wars are not "serious," there is better consumer service and lower prices with competition, competition persists for very long periods of time, and consumers themselves prefer competition to regulated monopoly; and
  • Any consumer satisfaction problems caused by dual power lines are considered by consumers to be less significant than the benefits from competition.[42]
Primeaux also found that although electric utility executives generally recognized the consumer benefits of competition, they personally preferred monopoly!
Ten years after the publication of Primeaux's book, at least one state — California — is transforming its electric utility industry "from a monopoly controlled by a handful of publicly held utilities to an open market."[43] Other states are moving in the same direction, finally abandoning the baseless theory of natural monopoly in favor of natural competition:[44]
  • The Ormet Corporation, an aluminum smelter in West Virginia, obtained state permission to solicit competitive bids from 40 electric utilities;
  • Alcan Aluminum Corp. in Oswego, New York has taken advantage of technological breakthroughs that allowed it to build a new power generating plant next to its mill, cutting its power costs by two-thirds. Niagara Mohawk, its previous (and higher-priced) power supplier, is suing the state to prohibit Alcan from using its own power;
  • Arizona political authorities allowed Cargill, Inc. to buy power from anywhere in the West; the company expects to save $8 million per year;
  • New federal laws permit utilities to import lower-priced power, using the power lines of other companies to transport it;
  • Wisconsin Public Service commissioner Scott Neitzel recently declared, "free markets are the best mechanism for delivering to the consumer … the best service at the lowest cost";
  • The prospect of future competition is already forcing some electric utility monopolies to cut their costs and prices. When the TVA was faced with competition from Duke Power in 1988, it held its rates steady for the next several years.
The potential benefits to the US economy from demonopolization of the electric utility industry are enormous. Competition will initially save consumers at least $40 billion per year, according to utility economist Robert Michaels.[45] It will also spawn the development of new technologies that will be economical to develop because of lower energy costs. For example, "automakers and other metal benders would make much more intensive use of laser cutting tools and laser welding machines, both of which are electron guzzlers."[46]

The Natural-Monopoly Myth: Cable TV

Cable television is also a franchise monopoly in most cities because of the theory of natural monopoly. But the monopoly in this industry is anything but "natural." As with electricity, there are dozens of cities in the United States with competing cable firms. "Direct competition … currently occurs in at least three dozen jurisdictions nationally."[47]
The existence of longstanding competition in the cable industry gives the lie to the notion that that industry is a "natural monopoly" and is therefore in need of franchise monopoly regulation. The cause of monopoly in cable TV is government regulation, not economies of scale. Although cable operators complain of "duplication," it is important to keep in mind that "while over-building an existing cable system can lower the profitability of the incumbent operator, it unambiguously improves the position of consumers who face prices determined not by historical costs, but by the interplay of supply and demand."[48]
As in the case of electric power, researchers have found that in those cities where there are competing cable companies, prices are about 23 percent below those of monopolistic cable operators.[49] Cablevision of Central Florida, for example, reduced its basic prices from $12.95 to $6.50 per month in "duopoly" areas in order to compete. When Telestat entered Riviera Beach, Florida, it offered 26 channels of basic service for $5.75, compared to Comcast's 12-channel offering for $8.40 per month. Comcast responded by upgrading its service and dropping its prices.[50] In Presque Isle, Maine, when the city government invited competition, the incumbent firm quickly upgraded its service from only 12 to 54 channels.[51]
In 1987 the Pacific West Cable Company sued the city of Sacramento, California on First Amendment grounds for blocking its entry into the cable market. A jury found that "the Sacramento cable market was not a natural monopoly and that the claim of natural monopoly was a sham used by defendants as a pretext for granting a single cable television franchise … to promote the making of cash payments and provision of 'in-kind' services … and to obtain increased campaign contribution."[52] The city was forced to adopt a competitive cable policy, the result of which was that the incumbent cable operator, Scripps Howard, dropped its monthly price from $14.50 to $10 to meet a competitor's price. The company also offered free installation and three months free service in every area where it had competition.
Still, the big majority of cable systems in the U.S. are franchise monopolies for precisely the reasons stated by the Sacramento jury: they are mercantilistic schemes whereby a monopoly is created to the benefit of cable companies, who share the loot with the politicians through campaign contributions, free air time on "community service programming," contributions to local foundations favored by the politicians, stock equity and consulting contracts to the politically well connected, and various gifts to the franchise authorities.
In some cities, politicians collect these indirect bribes for five to ten years or longer from multiple companies before finally granting a franchise. They then benefit from part of the monopoly rents earned by the monopoly franchisee. As former FCC chief economist Thomas Hazlett, who is perhaps the nation's foremost authority on the economics of the cable TV industry, has concluded, "we may characterize the franchising process as nakedly inefficient from a welfare perspective, although it does produce benefits for municipal franchiser."[53] The barrier to entry in the cable TV industry is not economies of scale, but the political price-fixing conspiracy that exists between local politicians and cable operators.

The Natural-Monopoly Myth: Telephone Services

The biggest myth of all in this regard is the notion that telephone service is a natural monopoly. Economists have taught generations of students that telephone service is a "classic" example of market failure and that government regulation in the "public interest" was necessary. But as Adam D. Thierer recently proved, there is nothing at all "natural" about the telephone monopoly enjoyed by AT&T for so many decades; it was purely a creation of government intervention.[54]
Once AT&T's initial patents expired in 1893, dozens of competitors sprang up. "By the end of 1894 over 80 new independent competitors had already grabbed 5 percent of total market share … after the turn of the century, over 3,000 competitors existed."[55] In some states there were over 200 telephone companies operating simultaneously. By 1907, AT&T's competitors had captured 51 percent of the telephone market and prices were being driven sharply down by the competition. Moreover, there was no evidence of economies of scale, and entry barriers were obviously almost nonexistent, contrary to the standard account of the theory of natural monopoly as applied to the telephone industry.[56]
The eventual creation of the telephone monopoly was the result of a conspiracy between AT&T and politicians who wanted to offer "universal telephone service" as a pork-barrel entitlement to their constituents. Politicians began denouncing competition as "duplicative," "destructive," and "wasteful," and various economists were paid to attend congressional hearings in which they somberly declared telephony a natural monopoly. "There is nothing to be gained by competition in the local telephone business," one congressional hearing concluded.[57]
The crusade to create a monopolistic telephone industry by government fiat finally succeeded when the federal government used World War I as an excuse to nationalize the industry in 1918. AT&T still operated its phone system, but it was controlled by a government commission headed by the postmaster general. Like so many other instances of government regulation, AT&T quickly "captured" the regulators and used the regulatory apparatus to eliminate its competitors. "By 1925 not only had virtually every state established strict rate regulation guidelines, but local telephone competition was either discouraged or explicitly prohibited within many of those jurisdictions."[58]


The theory of natural monopoly is an economic fiction. No such thing as a "natural" monopoly has ever existed. The history of the so-called public utility concept is that the late 19th and early 20th century "utilities" competed vigorously and, like all other industries, they did not like competition. They first secured government-sanctioned monopolies, and then, with the help of a few influential economists, constructed an ex post rationalization for their monopoly power.
This has to be one of the greatest corporate public relations coups of all time. "By a soothing process of rationalization," wrote Horace M. Gray more than 50 years ago, "men are able to oppose monopolies in general but to approve certain types of monopolies. … Since these monopolies were 'natural' and since nature is beneficent, it followed that they were 'good' monopolies. … Government was therefore justified in establishing 'good' monopolies."[59]
In industry after industry, the natural monopoly concept is finally eroding. Electric power, cable TV, telephone services, and the mail, are all on the verge of being deregulated, either legislatively or de facto, due to technological change. Introduced in the United States at about the same time communism was introduced to the former Soviet Union, franchise monopolies are about to become just as defunct. Like all monopolists, they will use every last resource to lobby to maintain their monopolistic privileges, but the potential gains to consumers of free markets are too great to justify them. The theory of natural monopoly is a 19th century economic fiction that defends 19th century (or 18th century, in the case of the US Postal Service) monopolistic privileges, and has no useful place in the 21st century American economy.

Tuesday, March 27, 2012

A paper published in Cell is a tour de force of 'omics in one individual -- the senior investigator, Dr. Michael Snyder. Here we discuss the findings and implications of such a comprehensive 'omic assessment.
Chen R, et al. Personal omics profiling reveals dynamic molecular and medical phenotypes. Cell. 2012;148:1293-1307.
Below is a transcript of Dr. Topol's post "A Landmark N of 1 'Panor-omic' Study." We look forward to your feedback.
Eric Topol here to discuss a landmark paper in the journal Cell. This is the first time we've actually reviewed a paper in Cell on the Genomic Medicine site. It's a singularly unusual paper. It is an N of 1, "panor-omic," comprehensive, very detailed 'omic study of a single individual. In this case, the individual is Michael Snyder, a geneticist from Stanford University, with 39 other collaborators, predominantly from Stanford, but also from Yale and Spain.

Basically, what this entailed was a serial examination of 20 different blood draws that Michael Snyder had over a 14-month period. During that time, virtually everything you could imagine was assessed. Not only was there DNA sequencing at very high, deep coverage, high accuracy and resolution, but also there was gene expression. There was RNA-seq to detect any issues in RNA. There were protein and metabolite assays that were comprehensive, along with autoantibody and microRNA profiling -- all of this over a 14-month period.

As you would expect, susceptibility to some diseases was detected through not just common variants but rare variants, including a key rare variant associated with diabetes mellitus and another rare variant with high penetrance for aplastic anemia.

But what was interesting during this study that spanned over 14 months was that Michael Snyder had two viral infections. Right around 300 days, he had a viral infection that led to a marked increase in genes that were associated with inflammation, interferon, and the conventional serum CRP that we measure. With that, his glucoses shot up, as well as his HbA1C, even up to about 6.7% from what had been normal, with fasting glucoses that were in the mid-100s. Then he went on to a lifestyle program to lose weight and exercise more, and was able to reverse the clinical manifestations of diabetes.

This is a remarkable paper. It is an N of 1 study with an exceptional amount of data -- billions and billions of data points across all the different 'omics, even expanding into autoantibody formation. It also tells us how gene pathways and gene expression change over time. It's not just a measurement once, it's dynamic -- the variants in one's genome, as they can be expressed differently in different tissues, can also be expressed differently as a function of time. It's highly instructive.

The question, of course, is: can this type of study, with its amazing amount of bioinformatics and data, be done in the real world? Should it be done in the real world? Well, certainly, as we have discussed in prior segments, this is something that could be of immense value in patients with rare, idiopathic, we-don't-know-the-cause conditions. Certainly for serious cancers, some type of "panor-omic" view could be helpful if we could do this quickly before therapies were started, or, of course, in refractory or relapsed cases.

Ultimately, when all of this can be processed by algorithmic software and when it can be markedly reduced in expense, some of these components will be useful for prevention, as was the case here.

For example, with Michael Snyder being at significant risk of developing aplastic anemia, he can go into prevention mode and surveillance, just as he did with the known risk of diabetes. In fact, much of this I had written about in the book Creative Destruction of Medicine -- but now it's already been actualized as of March 15, 2012. When you combine all these 'omics with wireless sensor data and anatomical data through high-resolution imaging, like the ultrasound pocket echo, you get an N of 1 that is truly unprecedented.

I'll be interested in your views about this "panor-omic" N of 1 landmark study, a tour de force. It will be interesting to see what you have to say. Thanks very much for your attention.

Wednesday, March 21, 2012

The Vampire Economy and the Market

Mises Daily: Wednesday, March 21, 2012 by
  • [This article was originally published in New Perspectives in Political Economy, the academic journal of CEVRO Institute (School of Legal and Social Studies), vol. 7(1), pp. 141–154.]

1. Authoritarian Capitalism (Fascism) and Liberal Capitalism (the Free Market)

What is sometimes referred to as "authoritarian capitalism," or fascism, is in fact a variety of statism, specifically socialism, the system of political economy in which the prerogatives of ownership over the means of production and distribution are vested in the state. Under the fascist economic system, private capitalists are nominally regarded as the owners of the means of production, meaning that they hold property titles to these assets and are referred to as "owners" of these assets. However, this so-called ownership is merely illusory. The actual prerogatives of ownership are vested, not in the private capitalist, but in the state and its bureaucracy.[1] It is the state that tells the private capitalist how he must use "his" property, under the threat of confiscation or even imprisonment. In the words of economist Ludwig von Mises, it is "socialism in the outward guise of capitalism."[2]
This is a very different political-economic system from "liberal capitalism," also known as "free-market capitalism." Free-market capitalism is an authentically capitalist system, in which the prerogatives of ownership over the means of production are vested in private citizens, not in the state. Under this system, the means of production are genuinely privately owned, and the private-property owner holds, not just a property title, but, more importantly, the actual prerogatives of ownership and ultimate control. In the system of free-market capitalism, the private-property owner is regarded as having property rights (i.e., an enforceable moral claim to the prerogatives of ownership) that must be respected by all others, including the state and its functionaries.
In their purest forms, these two systems of political economy are fundamentally different in kind; in fact, they are polar opposites. However, this opposing nature stems from the degree to which the prerogatives of ownership of ostensibly private property are arrogated to the state — i.e., the degree of state intervention. On the one extreme we have the free market, in which there is no — or at least little — state interference with private-property ownership (which is therefore genuine); on the other extreme we have fascism, in which there is plentiful or total state interference with private-property ownership (which is therefore illusory).
Since fascism and the free market are distinguished by the degree of state intervention, we can see that the two systems are separated by a connecting bridge of interventionism through the system of the "mixed economy." The fascist system can be viewed as a system of hyperinterventionism, arising when state interference with private-property rights is so extensive that the alleged private ownership of property becomes a mere farce, and the state may properly be regarded as the de facto owner of the means of production and distribution — i.e., there is de facto socialism. For this reason, the analysis of fascism and its long-term viability is very similar to the analysis of interventionism in the mixed economy, and the same kinds of economic and political insights apply.

2. Fascism and the Fusion of Business and State

Fascism is unlike other forms of socialism. Its expropriation of the means of production is done without overt nationalization and is not directed toward an egalitarian goal. It is far more subtle than this, and far more insidious. Fascism can arise by revolution, but it can also arise by gradual measures toward state control in the mixed economy. While noting the similarities between fascism and communism, philosopher Roderick Long observes that
there is a difference in emphasis and in strategy between fascism and Communism.… When faced with existing institutions that threaten the power of the state — be they corporations, churches, the family, tradition — the Communist impulse is by and large to abolish them, while the fascist impulse is by and large to absorb them.[3]
The fascist economic strategy is also one of absorption: the regime attempts to secure economic growth and prosperity by fusing a "partnership" between business and the state, absorbing business into the state in this process. Such a strategy appeals to those who correctly judge that private business is the locus of production and economic growth but who incorrectly believe that this productivity is enhanced by partnership with government and central planning of production. The fascists, like interventionists more generally, seek to get the "best of both worlds" from the productive powers of private business under capitalism and the central planning of the state under socialism.
Of course, the "partnership" between business and state that occurs under fascism is of a coercive nature: the state determines its requirements from business and orders private entrepreneurs to meet these requirements, lest they be expropriated of their remaining property (nominally held), or even imprisoned. In describing the fusion of business and state in Nazi Germany, economist Günter Reimann explains the process as follows:
The State orders private capital to produce and does not itself function as a producer. Insofar as the State owns enterprises which participate in production, this can be regarded as an exception rather than a general rule. The fascist State does not merely grant the private entrepreneur the right to produce for the market, but insists on production as a duty which must be fulfilled even though there be no profit. The businessman cannot close down his factory or shop because he finds it unprofitable. To do this requires a special permit issued by the authorities.[4]
This basic conception of the role of the private entrepreneur puts him at the service of the state, and destroys any notion of self-ownership, including any genuine property rights. He exists, not to pursue his own happiness and satisfy his own personal desires, as is the case under liberal capitalism, but rather to produce for the fascist state. From here, the remaining regulations on his business affairs under this "partnership" are similarly directed toward the ends determined by the state: the state regulates the prices he can charge for his goods; the amount he can buy and sell; whom he can employ or dismiss from employment; the wages he must pay; how much of his profit he may keep (if there is any profit produced); and whether or not he will continue his business or shut it down.[5]
In tandem with the enormous body of arbitrary state regulations is the ever-present threat of expropriation. Without any overt nationalization of property the state may send its auditors to scrutinize a business for breaches of regulations, using minor infractions as a pretext for massive fines, amounting essentially to a confiscation of assets.[6]

3. Breakdown of the Rule of Law

Even the fact that every aspect of his business is regulated by the state does not give full appreciation for the perilous situation of the titular owners of property under fascism. In fact, it is not the specific content of regulations, but rather the inevitable breakdown of the rule of law that poses the greatest danger under a system of central planning.[7]
The rule of law under the fascist system is replaced with the arbitrary and unconstrained power of the political elite in the state apparatus.
The capitalist under fascism has to be not merely a law-abiding citizen, he must be servile to the representatives of the State. He must not insist on "rights" and must not behave as if his private property rights were still sacred. He should be grateful to the Fuehrer that he still has private property.[8]
It is the arbitrary power of the fascist regime that is the most important determinant of the relationship between the titular private-property owners and the state. However, it affects not only this relationship, but also the relationship between private citizens themselves.
As a rule, the relations between businessmen are still regulated by laws and customs. But customs have changed and modified law, and law has, in turn, been largely replaced by a vague conception of "honor." It is easier for a businessman to win a case in the German courts by appealing to "National-Socialist honor" than by referring to the exact text of the law.[9]
Like other citizens, the businessman cannot find justice or challenge the predations of the state, even on sound legal grounds under the prescribed regulations. This is because the courts are themselves a mere cog in the workings of the ruling regime, which claims total power over the economy. Any private-property owner who is foolish enough to seek judicial relief from the impositions of the state quickly arouses the ire of state functionaries who have unlimited means to retaliate for any fleeting victories he might obtain.

4. Fascism and the Motivation Problem

Although enforceable property rights are nonexistent, and titular "ownership" is insecure, the fascist system still avoids the crude problems of motivation experienced under egalitarian variants of socialism (e.g., communism). By allowing inequalities in the nominal ownership of property and the consumption that is contingent on this nominal ownership, the state allows incentives for the acquisition of private property to remain, even though this ownership is subordinate to the whims of the state rulers.
This observation may seem to contradict the previous assertion that the private capitalist is only the illusory owner of the property to which he holds title. However, no contradiction exists: although the prerogatives of ownership ultimately accrue to the state under fascism, this does not prevent the private capitalist from enjoying additional consumption if he is the nominal owner of property. Consumption is consumption, and once a resource is consumed by its nominal owner, or otherwise used for his immediate benefit, the state cannot exercise its de facto ownership to prevent this, no matter how authoritarian it may be.
In fact, the acquisition of private property under fascism, even while subordinated to the state, offers more than just consumption benefits. Although all private capitalists are subject to the political power of the state rulers, large capitalists can use the residual economic power they maintain to capture smaller units of political power, particularly in the lower echelons of the bureaucratic apparatus. Reimann explains the interaction between political and economic power in Nazi Germany as follows:
The authoritarian position of the provincial and local bureaucrats — and the degree to which the local Party bureaucracy is independent of industrialists and businessmen — varies with the social structure in different sections of the country. In districts where big industrial magnates have direct relations with the top flight of Party leaders, the local bureaucracy is largely dependent on — in some cases, a tool of — the big concern or trust. In districts where only small and medium-sized firms exist, however, the Party bureaucracy is much more authoritarian and independent. A dual power exists under fascism: the indirect power of money and the direct power of the Party leader.[10]
Thus, under fascism, there remains a large incentive for the acquisition of private property. Although the private capitalist has no enforceable property rights against the state, he can protect his titular ownership and subsidiary control of property by acquiring political power. His control over property, even though it is at the mercy of the state, can allow him to capture some of the political power of the state, which can in turn protect his control. If he is a small private capitalist, the local bureaucrats will be his masters, and he will be forced to pay endless tribute to them merely to survive. However, if his business concern is large and profitable, he may be able to form relationships with more powerful political figures, thereby acquiring political influence, and bringing himself within the ambit of the state apparatus.
The motivation problem in fascism is therefore of a different and more subtle form than the motivation problem in egalitarian socialist systems. Under fascism, the private citizen is at the mercy of the state, which can take his nominally held property from him at any time. He is therefore motivated to consume more of his property than he otherwise would, and to use his savings to buy political influence, rather than engaging in productive endeavors. He is motivated, in short, to engage in political rather than economic entrepreneurialism.

5. The Rise of Political Entrepreneurialism

Under fascism, businessmen may continue to work within the regulatory regime, eking out whatever living they can maintain under the arbitrary decrees of the state bureaucracies. But in order to do so they must seek to obtain influence over the state functionaries in order to survive unmolested. Under fascist regimes that have historically existed, this has given rise to large investments in maintaining good relations with the state, employing "contact men" with connections to politically powerful members of the fascist regime. For example, under the fascist economic system of Nazi Germany such "contact men" became a crucial part of any business concern:
The business organization of private enterprise has had to be reorganized in accordance with the new state of things. Departments which previously were the heart of a firm have become of minor importance. Other departments which either did not exist or which had only auxiliary functions have become dominant and have usurped the real functions of management.
Formerly the purchasing agent and the salesmanager were among the most important members of a business organization. Today the emphasis has shifted and a curious new business aide, a sort of combination "go between" and public relations counsel, is now all-important. His job — not the least interesting outgrowth of the Nazi economic system — is to maintain good personal relations with officials in the Economic Ministry, where he is an almost daily caller … [11]
As with political lobbying in the mixed economy, this heavy investment in influence over the state bureaucracies is used by businesses both for protection from the state itself and to obtain special privilege. Having invested successfully in political influence, a successful business enterprise will seek to use the state as a buyer of its products or services, and will seek to use state power to destroy its competitors. Economic and political powers jostle for control in this system, and large business entities can come to dominate smaller political units, with businessmen becoming powerful political entrepreneurs in the regime.
This interaction between political and economic power under fascism is very similar to that which exists in highly interventionist industries in the mixed economy. In the latter case, problems of regulatory capture are well known, and it is common for large firms to use their connections with the state to obtain special privileges. This leads to a concentration of economic power in a few large firms, who are able to rely on government contracts to boost their income, while at the same time using captured regulatory bodies as a means to block smaller competitors from their market.[12]
If the level of state intervention in such a system increases, government contracts and captured regulatory bodies become more and more valuable, and more effort is shifted away from productive activities and toward the capture of political power. In short, as interventionism grows, and the economic system moves toward fascism, firms will shift their efforts away from economic entrepreneurialism and toward political entrepreneurialism.
Under the pure fascist system, state intervention is ubiquitous, and connections and influence in the state apparatus become all important for business. Instead of productive success and economic entrepreneurialism, political entrepreneurialism becomes the means to acquiring wealth, and protecting it from state predation. Any firm that fails to forge state connections or find an adequate contact man will be forced out of business, while a few big firms with strong political connections will come to dominate the market.[13]
At the same time, political figures in the regime take advantage of their political power to become wealthy private capitalists themselves. High-ranking members of the ruling regime are able to exercise their political power to favor their own business interests and expand their economic power as private capitalists.[14]
Over a period of time, this process means that productive firms and economic entrepreneurs are destroyed, while unproductive (parasitic) enterprises run by political entrepreneurs take their place. Reimann explains the outcome in Nazi Germany:
[The genuinely independent businessman] is disappearing but another type is prospering. He enriches himself through his Party ties; he is himself a Party member devoted to the Fuehrer, favoured by the bureaucracy, entrenched because of family connections and political affiliations. In a number of cases, the wealth of these Party capitalists has been created through the Party's exercise of naked power. It is to the advantage of these capitalists to strengthen the Party which has strengthened them.[15]
The fascist economic system causes a convergence of economic and political power, both through the politicization of existing private capitalists, and the enrichment of political figures. The attempt to form a partnership between business and state eventually leads to a situation where business is the state, and the state is business. The resulting system is fittingly described by what philosopher Ayn Rand called the "aristocracy of pull."[16] Under this system, business enterprises are run by an entrenched class of politically privileged capitalists, with little prospect of outside competition.[17]

6. Why Corruption Is Not the Problem

It is worth noting that the breakdown of the rule of law under the fascist system means that corruption of the legal and bureaucratic system is likely to be rampant. However, it is not lawbreaking that is the problem — the problem is the law itself.
The fascist system empowers the state to intervene in all aspects of business, violating property rights at will. Its repudiation of free-market capitalism means that central planners are expected to take an active part in running the economy and cannot merely stand back and leave business alone (at least not without implicitly repudiating the fascist system). This interventionism means that considerations of property rights must necessarily be replaced by the amorphous notion of the "public good" (however this happens to be expressed), creating conditions where business success is determined primarily by influencing the judgment of bureaucrats and powerful political figures.
Because property rights have been discarded, political entrepreneurialism becomes crucial to success, regardless of whether bureaucrats are "corrupt." It occurs whether bureaucrats exercise their judgment in a transparent and impartial manner, or sell their power directly to wealthy business entities. It is not the corruption of bureaucrats that is the problem; it is the fact that there is no honest way to dole out special favors to business under a system in which the state has total control.[18]

7. Information and Calculation Problems in the Fascist Commonwealth

The rise of political entrepreneurialism is not the only problem with the fascist economy. It is augmented by the standard information and calculation problems of socialism, stemming from the lack of any genuine private ownership and the extensive price and wage controls imposed by the state.[19] (Even if price and wage controls are absent, prices and wages will be heavily distorted by state interventions in the economy, so that these prices are not commensurate to the true costs of resources.)
As with other variants of socialism, the economic exchanges in the fascist economy are not driven by the preferences of consumers or the requirements of productive entrepreneurs. Instead, the exchange of goods proceeds, mimicking the market economy in some respects, but the price system reflects the extensive price and wage controls of the fascist state, or, in the absence of price controls, the distorting effects of its other interventions. This means that the central-planning bureaucrats in the fascist state are unable to determine the true value of resources. They distort the prices of goods to such an extent that rational allocation of resources becomes impossible. Misallocations of resources occur as prices of goods are artificially suppressed or inflated.
At best, the central planners can increase output for favored businesses or areas of the economy at the expense of other businesses and areas of the economy, while at the same time destroying the very price system that allows entrepreneurs to calculate rationally under the free market. Since they have no method to objectively value competing projects, their interventions will involve a misallocation of resources compared with the free-market case, and will frequently involve an aggregate loss of resources even ignoring opportunity costs. Thus, despite any pretensions to the contrary, the state is unable to increase total economic output through its central planning; instead, it destroys the price system and causes loss.[20] This gradually leads to economic decline.

8. Economic Decline and the Incentives of the Ruling Elite

The foregoing analysis of the motivations of businessmen and the economic ineptitude of the central-planning apparatus is pregnant with obvious economic conclusions. The more authoritarian the economic system becomes, the more valuable is the capture of political power and the less valuable is the expansion of productive capacity. All other things being equal, the authoritarian system will lead businessmen (and others) to shift their efforts away from production and toward the acquisition of political power.[21]
The result is obvious: under an authoritarian system, political entrepreneurialism increases, and production decreases. This further politicizes the economy and leads to ever-greater distortions of prices, making rational calculation impossible. As authority over the means of production grows, more and more people compete more and more ferociously through the political process for a smaller total economic output. With no genuine conception of property rights to guide them, there is no moral impediment to the coveting of property that is "owned" by others, and there is no legal impediment to its capture.
It is again worth noting that this is merely the most extreme manifestation of the economic effects of interventionism in the mixed economy. Since fascism is, in essence, a system of hyperinterventionism, the economic effects of the fascist system are merely the logical extremes of smaller "pragmatic" interventionist programs. Each intervention in a mixed economy distorts prices, misallocates resources to unproductive endeavors, and results in a net loss of production.[22] At the same time intervention increases the value of political influence and thereby shifts effort from production to political lobbying.
With enough political intervention in the economy, this culminates in economic stagnation, then net capital consumption, and, finally, economic collapse, occurring when capital supplies become insufficient to sustain basic services. As this process occurs, parasitic groups in the system suck as much as possible from the dying economy, with their parasitic activities becoming increasingly frantic as the economy collapses and the resources available for capture become scarcer.
The problems with the fascist economic system become more and more clear, but there is no incentive for those in control of the state apparatus to avoid the approaching disaster. Since the only antidote to the problem is liberalization of the economy from state control, the cure for the economic decline threatens the personal livelihoods of the state bureaucrats and the ideological program of the higher-level members of the ruling regime.
Of course, it is true that sustained economic decline will eventually threaten the position of the ruling elite, particularly since they must make some appeals to the "public good" in their efforts to maintain their own power. However, their situation is threatened far more directly and far more immediately by the cure for economic decline than by the decline itself.
The authoritarian State breeds irresponsibility on the part of this ever-growing and legally privileged group. Their position is secure — unless they are purged by their own friends, often as a result of rivalries — whereas the general economy is insecure. They do no work which adds goods or social services to the market. Their job is: to hold their job. The rest of the community finds itself serving as the hardworking host upon which the bureaucratic clique is feeding and fattening.[23]
We therefore see the most terrifying aspect of the fascist system. The problem is not merely that its authoritarian controls destroy the economy in the long run. The greater problem is that as this process occurs, the authoritarian system undermines the human capital of the society it operates on. In particular, it creates a privileged ruling elite who have wrested all economic and political power from the productive capitalists they have expropriated, by means of impossible promises to the masses. Their sole incentive is to maintain the parasitic system that gives them power, prestige, and money — and they will do anything to keep it, even as they watch the general economy collapse into ruin.

9. The Drive to War

The economic decline ensuing from state intervention, misallocation of resources, and rising political entrepreneurialism must eventually lead to a crisis of confidence in the state, unless it is deflected by some nationalistic endeavor that rouses the support of the public and instills in it some alternative fear. Even the most authoritarian regime must rely on compliance from the public to maintain its power, and so it is natural that the fascist state will turn to war and conquest as its economic problems become a threat to its rule.
War and conquest serve three main purposes for the fascist state. Firstly, notwithstanding its risks, war promises the possibility of conquered territories to serve as resource cash cows for the declining economy. Secondly, the presence of an external military threat allows the ruling elite to rationalize their authoritarian rule and expand their domestic power over the public, while imbuing them with nationalist fervor. Finally, the threat of death and ruin from real or alleged foreign enemies makes the predations of the state look to many of its citizens like the lesser of two evils, and so the discontent of the public is directed to an alternative source.
This drive to war is a logical consequence of the ideology and economic program of fascism and interventionism more generally. It is no accident that fascist ideology promotes war as an energizing and righteous endeavor. Because the domestic policies of the authoritarian state revolve around appeals to nationalistic ideals (e.g., the "public good"), militarism is a natural corollary, and it is easy for the state to rouse the public to war.[24]
Of course, war is economically destructive, and more rapidly so than domestic intervention. It involves a massive reallocation of resources to military projects, a full or partial withdrawal from the international division of labor,[25] and the direct destruction of resources by enemy forces. Moreover, war involves the risk of military defeat, a prospect that usually ends the rule of the existing political elite. Nevertheless, it is the only option for a ruling class that has repudiated liberalism and hitched its reputation to the fascist system of authoritarian control. In describing the motivation of the Nazis in World War II, Reimann explains that
Nazi leaders in Germany do not fear possible national economic ruin in wartime. They feel that, whatever happens, they will remain on top, that the worse matters become, the more dependent on them will be the propertied classes. And if the worst comes to the worst, they are prepared to sacrifice all other interests to maintain their hold on the State. If they themselves go, they are ready to pull the temple down with them.[26]
Or, as Nazi propaganda minister Joseph Goebbels expressed it in his diary,
The war made possible for us the solution of a whole series of problems that could never have been solved in normal times.[27]
For those outside the ruling elite, there is a sense of inevitability to the whole process, from economic decline to war. They are stripped of any genuine property rights and exist at the mercy of the state and its functionaries. They are devoid of economic or political power, and are mere pawns in the machinations of the fascist state and its leaders.
The fatalism which was typical of the spirit of the German businessman before Europe was plunged into [World War II] was not due to economic difficulties alone, but far more to a feeling that he had become part of a machine inexorably leading him to disaster.[28]

10. Concluding Remarks

The economic system of fascism is economically unviable in the long run, and what is true of this most extreme manifestation of hyperinterventionism is true, to a lesser extent, of any interventionist system of government. The central planning of the state and the concomitant destruction of private-property rights destroy the independent businessman and replace him with a parasitic impostor, the political entrepreneur, who succeeds by special privilege rather than by economic production.
The vast power of the state leads to a convergence of all economic and political power into a small elite of political entrepreneurs, who will hold on to their power and privilege at the expense of the general economy. Combined with all-pervading regulations, price and wage controls, and other distortions of prices under state central planning, this leads to economic stagnation, then economic decline and collapse.
The long-run result of the fascist or interventionist economic system is the drive toward war and conquest, with the ruling class desperately seeking to maintain its power at all costs, even if the cost is the complete destruction of the nation. The endpoint is tyranny, death, and destruction.

Tuesday, March 20, 2012

Why Is the Story About Malia Obama Vacationing in Mexico Disappearing from the Web?

UPDATE: The Administration has just responded to the disappearing stories.
Read it here.

Have you heard that Malia Obama, the president’s daughter, is reportedly spending her spring break in Oaxaca, Mexico? Allegedly, she’s jetting off with some of her classmates and 25 Secret Service agents to a country that the State Department has said all Americans should avoid. But something is different about the latest “Obama vacation controversy”: references to it are disappearing from the Internet, and fast.
Around 3:00 EST, a Telegraph story reporting on the event was the first to vanish (note how the url remains the same in the “before” and “after”):
Malia Obama Oaxaca, Mexico Vacation Story Disappearing from the Web
Then, the related Huffington Post article was found to be linking back to a completely unrelated Yahoo News page titled “Senegal Music Star Youssou Ndour Hits Campaign Trail.”
The Huffington Post article:
Malia Obama Oaxaca, Mexico Vacation Story Disappearing from the Web
Links to this site:
Malia Obama Oaxaca, Mexico Vacation Story Disappearing from the Web
The Yahoo News story that HuffPo links to makes no mention of Malia Obama or her Mexican vacation. This raises two possibilities: either HuffPo has made an error in its link, or Yahoo has also removed its “Malia in Mexico” story. The latter is more likely, considering that the “-obamas-daughter-spends-springbreak-in-Mexico” url is still present in the Yahoo story.
And now, the link to the Huffington Post article on Google redirects to the site’s main page; the page itself is gone.
In addition to larger news organizations, smaller sites are also removing their stories.

Free Republic removed a related discussion thread:
Malia Obama Oaxaca, Mexico Vacation Story Disappearing from the Web
And “Global Grind” removed its related article:
Malia Obama Oaxaca, Mexico Vacation Story Disappearing from the Web
Of these sites, the only one to state a reason for the change was Free Republic, where the Admin wrote “Leave the kids alone.”
So that raises the question: why were all of these stories taken down? Is the story false? Were they removed for security reasons?
Consider that the story still lives (as of this publication)* on the site of The Australian, which uses a story from the well-respected AFP (a sort of Associated Press for France):
Malia Obama Oaxaca, Mexico Vacation Story Disappearing from the Web
So far, no outlets have explained why the stories have been removed. It will be interesting to see if they do.
The Blaze’s Jonathon M. Seidl contributed to this report.
*The Australian has since removed its article.
Buzzfeed is now reporting that it is a “long tradition” not to report on presidential kids’ vacation plans, citing this as the possible reason for the many unexplained retractions.
If this is the case, it still raises questions as to why Malia was allowed to vacation in a country that the State Department advises Americans not to travel to.
Neither AFP nor the White House responded to Buzzfeed’s request for comment.
The Montreal Gazette has now posted the story of the vacation. It’s one of the only sites still reporting on it. Interestingly, the story is attributed to the AFP (mentioned above). You can read it here.

Thursday, March 15, 2012

The Skeptics Case

We check the main predictions of the climate models against the best and latest data. Fortunately the climate models got all their major predictions wrong. Why? Every serious skeptical scientist has been consistently saying essentially the same thing for over 20 years, yet most people have never heard the message. Here it is, put simply enough for any lay reader willing to pay attention.

What the Government Climate Scientists Say

Figure 1
The climate models. If the CO2 level doubles (as it is on course to do by about 2070 to 2100), the climate models estimate the temperature increase due to that extra CO2 will be about 1.1°C × 3 = 3.3°C.[1]
The direct effect of CO2 is well-established physics, based on laboratory results, and known for over a century.[2]
Feedbacks are due to the ways the Earth reacts to the direct warming effect of the CO2. The threefold amplification by feedbacks is based on the assumption, or guess, made around 1980, that more warming due to CO2 will cause more evaporation from the oceans and that this extra water vapor will in turn lead to even more heat trapping because water vapor is the main greenhouse gas. And extra heat will cause even more evaporation, and so on. This amplification is built into all the climate models.[3] The amount of amplification is estimated by assuming that nearly all the industrial-age warming is due to our CO2.
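The amplification described above can be sketched as a geometric series: if each degree of direct warming induces a further fraction f of a degree through extra water vapor, the successive rounds of feedback sum to 1/(1 − f) times the direct effect. A minimal sketch in Python; the fraction f = 2/3 is chosen here only because it reproduces the threefold amplification the models assume, and is not a figure from the source:

```python
# Feedback as a geometric series: direct warming dT0 triggers f*dT0
# of further warming, which triggers f*(f*dT0), and so on.
# The total converges to dT0 / (1 - f) when |f| < 1.

def total_warming(dT0, f, rounds=200):
    """Sum the feedback series dT0 * (1 + f + f^2 + ...)."""
    return sum(dT0 * f**n for n in range(rounds))

direct = 1.1          # direct effect of doubled CO2, deg C
f = 2.0 / 3.0         # assumed feedback fraction; 1/(1 - 2/3) = 3

print(round(total_warming(direct, f), 2))   # the models' 1.1 x 3 = 3.3
print(round(direct / (1.0 - f), 2))         # closed form gives the same
```

The closed form makes clear why the debate centers on f: small changes in the assumed feedback fraction move the projected warming by whole degrees.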
The government climate scientists and the media often tell us about the direct effect of the CO2, but rarely admit that two-thirds of their projected temperature increases are due to amplification by feedbacks.

What the Skeptics Say

Figure 2
The skeptic's view. If the CO2 level doubles, skeptics estimate that the temperature increase due to that extra CO2 will be about 1.1°C × 0.5 ≈ 0.6°C.[4]
The serious skeptical scientists have always agreed with the government climate scientists about the direct effect of CO2. The argument is entirely about the feedbacks.
The feedbacks dampen or reduce the direct effect of the extra CO2, cutting it roughly in half.[5] The main feedbacks involve evaporation, water vapor, and clouds. In particular, water vapor condenses into clouds, so extra water vapor due to the direct warming effect of extra CO2 will cause extra clouds, which reflect sunlight back out to space and cool the earth, thereby reducing the overall warming.
There are literally thousands of feedbacks, each of which either reinforces or opposes the direct-warming effect of the extra CO2. Almost every long-lived system is governed by net feedback that dampens its response to a perturbation. If a system instead reacts to a perturbation by amplifying it, the system is likely to reach a tipping point and become unstable (like the electronic squeal that erupts when a microphone gets too close to its speakers). The earth's climate is long-lived and stable — it has never gone into runaway greenhouse, unlike Venus — which strongly suggests that the feedbacks dampen temperature perturbations such as that from extra CO2.
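The stability argument above can be illustrated with the same closed form: the net response multiplier is 1/(1 − f), so negative net feedback (f < 0) damps the response, positive net feedback amplifies it, and as f approaches 1 the response blows up, which is the microphone-squeal case. A sketch with illustrative f values only (none of them taken from the source):

```python
def multiplier(f):
    """Net response multiplier for net feedback fraction f (valid for f < 1)."""
    return 1.0 / (1.0 - f)

# f = -1 halves the direct effect (the damping of figure 2);
# f = 2/3 triples it (the amplification of figure 1);
# f close to 1 is the runaway, tipping-point case.
for f in (-1.0, 0.0, 2.0 / 3.0, 0.9, 0.99):
    print(f, round(multiplier(f), 2))
```

A long-lived stable system, on this argument, is one whose net f sits safely below zero rather than near the runaway region.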

What the Data Says

The climate models have been essentially the same for 30 years now, maintaining roughly the same sensitivity to extra CO2 even while they got more detailed with more computer power.
· How well have the climate models predicted the temperature?
· Does the data better support the climate models or the skeptic's view?

Air Temperatures

One of the earliest and most important predictions was presented to the US Congress in 1988 by Dr James Hansen, the "father of global warming":
Figure 3
Hansen's predictions to the US Congress in 1988,[6] compared to the subsequent temperatures as measured by NASA satellites.[7]
Hansen's climate model clearly exaggerated future temperature rises.
In particular, his climate model predicted that if human CO2 emissions were cut back drastically starting in 1988, such that by year 2000 the CO2 level was not rising at all, we would get his scenario C. But in reality the temperature did not even rise this much, even though our CO2 emissions strongly increased — which suggests that the climate models greatly overestimate the effect of CO2 emissions.
A more considered prediction by the climate models was made in 1990 in the IPCC's First Assessment Report:[8]
Figure 4
Predictions of the IPCC's First Assessment Report in 1990, compared to the subsequent temperatures as measured by NASA satellites.
It has now been 20 years, and the average rate of increase in reality is below the lowest trend in the range predicted by the IPCC.

Ocean Temperatures

The oceans hold the vast bulk of the heat in the climate system. We've only been measuring ocean temperature properly since mid-2003, when the Argo system became operational.[9][10] In Argo, a buoy duck-dives down to a depth of 2,000 meters, measures temperatures as it very slowly ascends, then radios the results back to headquarters via satellite. Over 3,000 Argo buoys constantly patrol all the oceans of the world.
Figure 5
Climate model predictions of ocean temperature,[11] versus the measurements by Argo.[12] The unit of the vertical axis is 10^22 Joules (about 0.01°C).
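The "(about 0.01°C)" conversion on the vertical axis can be checked with rough figures: the upper 700 m of ocean (the layer the Argo heat-content series covers) has a heat capacity of roughly 10^24 J per °C, so 10^22 J corresponds to about 0.01°C. A back-of-envelope sketch; the ocean area, depth, density, and specific heat below are round textbook values assumed for illustration, not figures from the source:

```python
# Rough heat capacity of the upper ocean, to check the
# "10^22 Joules is about 0.01 deg C" conversion on the axis.
ocean_area = 3.6e14      # m^2, roughly 71% of Earth's surface
depth = 700.0            # m, the layer Argo heat content covers
density = 1025.0         # kg/m^3, seawater
c_p = 4000.0             # J/(kg K), approximate specific heat of seawater

mass = ocean_area * depth * density      # kg of water in the layer
heat_capacity = mass * c_p               # J per deg C, about 1e24
dT_per_1e22J = 1e22 / heat_capacity      # deg C per 10^22 J
print(round(dT_per_1e22J, 3))            # roughly 0.01 deg C
```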
The ocean temperature has been basically flat since we started measuring it properly, and not warming as quickly as the climate models predict.

Atmospheric Hotspot

The climate models predict a particular pattern of atmospheric warming during periods of global warming; the most prominent change they predict is a warming in the tropics about 10 km up, the "hotspot."
The hotspot is the sign of the amplification in their theory (see figure 1). The theory says the hotspot is caused by extra evaporation, and by extra water vapor pushing the warmer, wetter lower troposphere up into volume previously occupied by cool dry air. The presence of a hotspot would indicate that amplification is occurring; its absence would indicate that it is not.
We have been measuring atmospheric temperatures with weather balloons since the 1960s. Millions of weather balloons have built up a good picture of atmospheric temperatures over the last few decades, including the warming period from the late 1970s to the late '90s. This important and pivotal data was not released publicly by the climate establishment until 2006, and then in an obscure place.[13] Here it is:
Figure 6
On the left is the data collected by millions of weather balloons.[14] On the right is what the climate models say was happening.[15] The theory (as per the climate models) is incompatible with the observations. In both diagrams the horizontal axis shows latitude, and the right vertical axis shows height in kilometers.
In reality there was no hotspot, not even a small one. So in reality there is no amplification — the amplification shown in figure 1 does not exist.[16]

Outgoing Radiation

The climate models predict that when the surface of the earth warms, less heat is radiated from the earth into space (on a weekly or monthly time scale). This is because, according to the theory, the warmer surface causes more evaporation and thus there is more heat-trapping water vapor. This is the heat-trapping mechanism that is responsible for the assumed amplification in figure 1.
Satellites have been measuring the radiation emitted from the earth for the last two decades. A major study has linked the changes in temperature on the earth's surface with the changes in the outgoing radiation. Here are the results:
Figure 7
Outgoing radiation from earth (vertical axis) against sea-surface temperature (horizontal), as measured by the ERBE satellites (upper-left graph) and as "predicted" by 11 climate models (the other graphs).[17] Notice that the slopes of the graphs for the climate models are opposite to the slope of the graph for the observed data.
This shows that in reality the earth gives off more heat when its surface is warmer. This is the opposite of what the climate models predict. This shows that the climate models trap heat too aggressively, and that their assumed amplification shown in figure 1 does not exist.


All the data here is impeccably sourced — satellites, Argo, and weather balloons.[18]
The air and ocean temperature data show that the climate models overestimate temperature rises. The climate establishment suggests that cooling due to undetected aerosols might be responsible for the failure of the models to date, but this excuse is wearing thin: it continues not to warm as much as they said it would, or in the way they said it would. On the other hand, the rise in air temperature has been greater than the skeptics say could be due to CO2 alone. The skeptics' explanation is that the rise is mainly due to other forces; they point out that the world has been in a fairly steady warming trend of 0.5°C per century since 1680 (with alternating ~30-year periods of warming and mild cooling), whereas the vast bulk of all human CO2 emissions came after 1945.
We've checked all the main predictions of the climate models against the best data:
Test: what the climate models predicted
· Air temperatures from 1988: overestimated the rise, even for the case of drastically cut CO2
· Air temperatures from 1990: overestimated the trend rise
· Ocean temperatures from 2003: greatly overestimated the trend rise
· Atmospheric hotspot: completely missing → no amplification
· Outgoing radiation: opposite to reality → no amplification
The climate models get them all wrong. The missing hotspot and outgoing radiation data both, independently, prove that the amplification in the climate models is not present. Without the amplification, the climate model temperature predictions would be cut by at least two-thirds, which would explain why they overestimated the recent air and ocean temperature increases. Therefore,
  1. The climate models are fundamentally flawed. Their assumed threefold amplification by feedbacks does not in fact exist.
  2. The climate models overestimate temperature rises due to CO2 by at least a factor of three.
The skeptical view is compatible with the data.

Some Political Points

The data presented here is impeccably sourced, very relevant, publicly available, and from our best instruments. Yet it never appears in the mainstream media — have you ever seen anything like any of the figures here in the mainstream media? That alone tells you that the "debate" is about politics and power, and not about science or truth.
This is an unusual political issue, because there is a right and a wrong answer, and everyone will know which it is eventually. People are going ahead and emitting CO2 anyway, so we are doing the experiment: either the world heats up by several degrees by 2050 or so, or it doesn't.
Notice that the skeptics agree with the government climate scientists about the direct effect of CO2; they just disagree about the feedbacks. The climate debate is all about the feedbacks; everything else is merely a sideshow. Yet hardly anyone knows that. The government climate scientists and the mainstream media have framed the debate in terms of the direct effect of CO2 and sideshows such as arctic ice, bad weather, or psychology. They almost never mention the feedbacks. Why is that? Who has the power to make that happen?
[1] More generally, if the CO2 level is x (in parts per million) then the climate models estimate the temperature increase due to the extra CO2 over the preindustrial level of 280 ppm as 4.33 ln(x / 280). For example, this model attributes a temperature rise of 4.33 ln(392/280) = 1.46°C to the increase from preindustrial to the current CO2 level of 392 ppm.
[2] The direct effect of CO2 is the same for each doubling of the CO2 level (that is, logarithmic). Calculations of the increased surface temperature due to a doubling of the CO2 level vary from 1.0°C to 1.2°C. In this document we use the midpoint value 1.1°C; which value you use does not affect the arguments made here.
[3] The IPCC, in their last Assessment Report in 2007, project a temperature increase for a doubling of CO2 (called the climate sensitivity) in the range 2.0°C to 4.5°C. The central point of their model estimates is 3.3°C, which is 3.0 times the direct CO2 effect of 1.1°C, so we simply say their amplification is threefold. To be more precise, each climate model has a slightly different effective amplification, but they are generally around 3.0.
[4] More generally, if the CO2 level is x (in parts per million) then skeptics estimate the temperature increase due to the extra CO2 over the preindustrial level of 280 ppm as 0.72 ln(x / 280). For example, skeptics attribute a temperature rise of 0.72 ln(392/280) = 0.24°C to the increase from preindustrial to the current CO2 level of 392 ppm.
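The formulas in notes [1] and [4] differ only in their coefficient, so evaluating both at the same CO2 level reproduces the figures quoted in those notes. A quick check in Python:

```python
import math

def model_warming(ppm, baseline=280.0):
    """Warming the climate models attribute to CO2 above preindustrial (note [1])."""
    return 4.33 * math.log(ppm / baseline)

def skeptic_warming(ppm, baseline=280.0):
    """Warming the skeptics attribute to CO2 above preindustrial (note [4])."""
    return 0.72 * math.log(ppm / baseline)

print(round(model_warming(392), 2))    # ~1.46 deg C, as in note [1]
print(round(skeptic_warming(392), 2))  # ~0.24 deg C, as in note [4]
print(round(model_warming(560), 1))    # a doubling from 280 ppm: ~3.0 deg C
```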
[5] The effect of feedbacks is hard to pin down with empirical evidence because there are more forces affecting the temperature than just changes in CO2 level, but seems to be multiplication by something between 0.25 and 0.9. We have used 0.5 here for simplicity.
[6] Hansen's predictions were made in Hansen et al, Journal of Geophysical Research, vol. 93, no. D8 (August 20, 1988), fig. 3a, p. 9,347: pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf. In the graph here, Hansen's three scenarios are graphed to start from the same point in mid-1987 — we are only interested in changes (anomalies).
[7] The earth's temperature shown here is as measured by the NASA satellites that have been measuring the earth's temperature since 1979, managed at the University of Alabama, Huntsville (UAH). Satellites measure the temperature 24/7 over broad swathes of land and ocean, across the whole world except the poles. While satellites had some initial calibration problems, those have long since been fully fixed to everyone's satisfaction. Satellites are mankind's most reliable, extensive, and unbiased method for measuring the earth's air temperatures since 1979. This is an impeccable source of data, and you can download the data yourself from vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt (save it as a .txt file then open it in Microsoft Excel; the numbers in the "Globe" column are the changes in MSU Global Monthly Mean Lower Troposphere Temperatures in °C).
[8] IPCC First Assessment Report, 1990, page xxii (www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_full_report.pdf) in the Policymakers Summary, figure 8 and surrounding text, for the business-as-usual scenario (which is what in fact occurred, there being no significant controls or decrease in the rate of increase of emissions to date). "Under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, the average rate of increase of global mean temperature during the next century is estimated to be about 0.3°C per decade (with an uncertainty range of 0.2°C to 0.5°C)."
[9] "Argo," MetOffice.uk.gov.
[10] Ocean temperature measurements before Argo are nearly worthless. Before Argo, ocean temperature was measured with buckets or with bathythermographs (XBTs) — which are expendable probes lowered into the water, transmitting temperature and pressure data back along a pair of thin wires. Nearly all measurements were from ships along the main commercial shipping lanes, so geographical coverage of the world's oceans was poor — for example the huge southern oceans were not monitored. XBTs do not go as deep as Argo floats, and their data is much less precise and much less accurate (for one thing, they move too quickly through the water to come to thermal equilibrium with the water they are trying to measure).
[11] The climate models project ocean heat content increasing at about 0.7 × 10^22 Joules per year. See Hansen et al., 2005: "Earth's Energy Imbalance: Confirmation and Implications," Science, 308, 1431–1435, p. 1432, where the increase in ocean heat content per square meter of surface, in the upper 750m, according to typical models, is 6.0 Watt·years/m2 per decade (about 0.6 W/m2), which converts to 0.7 × 10^22 Joules per year for the entire ocean, as explained here.
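The unit conversion in this note can be checked directly: a sustained flux of about 0.6 W per m² of ocean surface, integrated over a year, comes to roughly 0.7 × 10^22 J. A sketch; the ocean surface area and seconds-per-year are round figures assumed here for illustration, not taken from the source:

```python
# Convert an average ocean heat-uptake flux to Joules per year,
# to check the "0.7 x 10^22 Joules per year" figure in note [11].
flux = 0.6               # W/m^2, average modeled ocean heat uptake
ocean_area = 3.6e14      # m^2, approximate total ocean surface
seconds_per_year = 3.156e7

joules_per_year = flux * ocean_area * seconds_per_year
print(round(joules_per_year / 1e22, 2))   # about 0.7 x 10^22 J/yr
```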
[12] The ocean heat content down to 700m as measured by Argo is now available; you can download it from here as a CSV file. The numbers are the changes in average heat for the three months, in units of 10^22 Joules, seasonally adjusted. The Argo system started in mid-2003, so we started the data at 2003–6.
[13] The weather-balloon data showing the atmospheric warming pattern was finally released in 2006, in the US Climate Change Science Program, 2006, part E of figure 5.7, on page 116.
There is no other data for this period, and we cannot collect more data on atmospheric warming during global warming until global warming resumes. This is the only data there is. By the way, isn't this an obscure place to release such important and pivotal data — you don't suppose they are trying to hide something, do you?
[14] See previous note.
[15] Any climate model, for example, IPCC Assessment Report 4, 2007, ch. 9, p. 675, which is also on the web (figure 9.1, parts c and f). There was little warming 1959–1977, so the commonly available 1959–1999 simulations work as well.
[16] So the multiplier in the second box in figures 1 and 2 is at most 1.0.
[17] Lindzen and Choi 2009, Geophysical Research Letters, vol. 36. The paper was corrected after some criticism, coming to essentially the same result again in 2011.
[18] In particular, we have not quoted results from land thermometers, or from sparse sampling by buckets and XBTs at sea. Land thermometers are notoriously susceptible to localized effects — see Is the Western Climate Establishment Corrupt? by the same author.