January 12, 2022

Analysis

Controlled Prices

The history and politics of price controls and economic management in the United States

In the decades after the Civil War, Andrew Carnegie captured the American steel industry by pushing down prices. So effective was the Scottish-born telegraph operator at reducing costs, breaking cartels, and driving competitors into bankruptcy during the downturns of the 1880s and 1890s that J.P. Morgan bought out the 65-year-old Carnegie to protect the profitability of his holdings and stabilize the nation’s industrial life. When Morgan incorporated U.S. Steel in 1901, the unprecedented combine controlled two-thirds of the nation’s steelmaking capacity. For the next six decades, the company set the price of steel in the American market, anchoring industry prices by cutting last in recessions and raising last in expansions. Under this “price umbrella,” the other dozen companies owning steel factories in the US remained profitable and expanded healthily. Industry-wide, ingot capacity expanded from 21.5 million to 71.6 million tons between the firm’s creation and the eve of World War II.

Ever since Alfred Marshall first popularized a rough graph of the crossing schedules of supply and demand in 1890, a strain of Anglo-American thinking has fixated on the disruptions caused by the ancient practice of controlling prices. Impose a ceiling too low, theory teaches, and the quantity demanded will outpace the quantity supplied; a floor too high, and producers will build up surpluses above what consumers are willing to absorb at current prices. Because wants are always changing, any ceiling or floor will eventually produce such “distortions.” Limit any price and the whole society may begin to convulse.
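The textbook mechanics can be made concrete in a few lines. The following is a minimal sketch of a linear supply-and-demand market with a ceiling and a floor; the schedules and parameter values are hypothetical, chosen only to illustrate the distortion logic described above.

```python
# Toy linear market: quantity demanded and supplied as functions of price.
# All parameters are hypothetical, chosen only to illustrate the mechanics.

def demand(p):           # quantity demanded falls as price rises
    return 100 - 2 * p

def supply(p):           # quantity supplied rises as price rises
    return -20 + 4 * p

# Unconstrained equilibrium: demand(p) == supply(p)  ->  100 - 2p = -20 + 4p
p_star = 120 / 6         # p* = 20, q* = 60

# A ceiling below p* forces trade to the short side of the market.
ceiling = 15
shortage = demand(ceiling) - supply(ceiling)   # 70 - 40 = 30

# A floor above p* produces the mirror-image surplus.
floor = 25
surplus = supply(floor) - demand(floor)        # 80 - 50 = 30

print(p_star, shortage, surplus)
```

Any ceiling below the market-clearing price leaves quantity rationed by the short side of the market; any floor above it accumulates the mirror-image surplus.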

Marshall, Carnegie, and Morgan were contemporaries. But a glaring gap stands between the theories of the economist and the practices of the industrialist. At its quintessential moment, American capitalism was defined not by the laws of supply and demand, but by concentrations of power within the marketplace.

Before macro

Before the rise of macroeconomics that accompanied World War II, price determination was a central problem of economic thought. For a world in which the tempestuous fluctuations of the business cycle brought runaway speculation, widespread bankruptcies, and growing labor militancy, the relationship of price to cost was a critical question. Under classical theory, competitive prices would yield revenues to the firm that covered costs and a normal profit. Firms took rather than gave prices in the market. And competition would check profiteering: where earnings rose well above costs, new producers would enter the market and, with lower prices, bid away customers. In fact, theories about the interrelated movements of price and cost lay at the root of some of the earliest empirical studies of the totalizing phenomenon of the business cycle.

Despite this Marshallian orthodoxy, public intervention into price determination was at the heart of the business world during the period between the Belle Epoque and the Great Depression. To many business-friendly Progressives like Herbert Hoover or Gerard Swope, president of General Electric, government-sanctioned cartels had come to appear as the inevitable tide of history. Even Congress recognized this trend. Both the 1922 Capper-Volstead Act and the 1929 Agricultural Marketing Act, for example, provided federal financing for farmers’ marketing cooperatives and exempted them from antitrust prosecution, allowing them to collude in order to raise selling prices and stem farm bankruptcies. Herbert Hoover’s tenure in Washington was defined by his tolerance for cartel behavior; he even encouraged the Department of Commerce to collect and circulate industry prices and costs to help coordinate business behavior. When Franklin Roosevelt entered the White House in the spring of 1933, among his administration’s first legislative priorities were the Agricultural Adjustment Act and the National Industrial Recovery Act: each placed a central authority in Washington in charge of planning supply and prices. (When it was completed in 1936, the new US Department of Agriculture (USDA) South Building in Washington was the largest office building in the world.) During the first two years of the New Deal, tens of thousands of businessmen flocked to Washington from across the nation to sit on over 550 “code authorities,” government-sanctioned cartels established by the National Recovery Administration (NRA) to fix prices in the form of “industry codes.”

Such comprehensive price control proved unworkable under the eighteenth-century constitution. Like the Estates General of the Bourbon king, Roosevelt’s convention of the nation’s bourgeoisie for an experiment in “industrial self-government” under the NRA quickly erupted into polarizing conflict, not only over the controls themselves but more importantly over who would determine them. Large and small firms turned to the federal courts for relief against prices set by their more powerful or better-organized competitors, but the most explosive conflict was between employers and workers over the price of labor: under the NRA, employees too could elect representatives of their own choosing to sit on the “industry code authorities” that were now planning production. In 1934 alone, general strikes for union recognition erupted into police riots in the cities of San Francisco, Toledo, and Minneapolis. The experiment in industrial cartels lasted twenty-three months, until the Supreme Court in May 1935 unanimously declared the NRA unconstitutional on behalf of a middle-sized Brooklyn poultry firm.

The demise of the NRA did not, however, reverse the trend toward planning or its instrument in price control. The upshot for the New Dealers was recourse to regulatory boards for “sick industries”—trucking, bituminous coal, or agriculture, those in which competition drove bankruptcies—where price fixing on a sectoral basis could achieve what the historian Ellis Hawley describes as “partial planning.” In sectors where numerous firms had previously competed on price, competition, negotiation, and entry remained licensed by the new, industry-specific regulatory powers of the “Second New Deal”: the Populist-era Interstate Commerce Commission, expanded over the trucking industry; the Civil Aeronautics Board, for airlines; the Home Loan Bank Board, for housing finance; the USDA, for agriculture; and the National Labor Relations Board, for wages. Many of these industry boards remained in operation until 1980, enduring legacies of the New Deal principle of extending public-utility regulation and price control over areas of economic life considered too vital to be left to market forces alone.

Yet the central thrust of the New Deal project to protect agriculture and raise working-class incomes ran against those areas of concentrated corporate power excluded from industry-specific regulatory boards. These were the core manufacturing firms: U.S. Steel, General Motors, General Electric, the Standard Oil companies. In 1935, administration economist Gardiner Means, toiling away in the Department of Agriculture, had found that prices fluctuated less in these industries in both magnitude and frequency compared to less concentrated sectors such as agriculture, bituminous coal, and light manufacturing. “The essential difference between the two types of prices,” wrote Means, “is that one represents an equating of supply and demand by price while the other represents the equating of production and demand at a price.” The analytical distinction turned on whether an industry was organized in a way to control its own prices.
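Means’s distinction can be restated as a toy comparison. In the sketch below, which uses hypothetical parameters rather than anything from Means’s data, a “market” sector lets price adjust to equate supply and demand, while an “administered” sector holds price fixed and lets production absorb a demand contraction.

```python
# Two stylized sectors facing the same fall in demand.
# Parameters are hypothetical; the point is which variable adjusts.

def demand(p, shift=0.0):
    return 100 - 2 * p - shift          # demand schedule, shiftable

# Market-price sector: price adjusts until demand equals the supply curve.
def market_clearing_price(shift):
    # supply(p) = -20 + 4p; solve 100 - 2p - shift = -20 + 4p
    return (120 - shift) / 6

# Administered-price sector: price is set by the firm; output follows demand.
administered_price = 20.0

for shift in (0, 30):                   # before and after a demand contraction
    p_mkt = market_clearing_price(shift)
    q_mkt = demand(p_mkt, shift)
    q_adm = demand(administered_price, shift)
    print(f"shift={shift}: market p={p_mkt:.1f}, q={q_mkt:.1f}; "
          f"administered p={administered_price:.1f}, q={q_adm:.1f}")

# Market sector: price falls (20 -> 15) and output falls (60 -> 40).
# Administered sector: price holds at 20; output bears the whole cut (60 -> 30).
```

In the first sector, price equates supply and demand; in the second, production is equated to demand at a price, exactly the contrast Means drew.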

“Nothing sacred”

The eventual postwar acceptance of the Keynesian prescription for government spending has tended to eclipse the more adventurous Depression-era policy interventions. Our understanding of such interventions has enormous implications for the balance between public power and private autonomy, as well as the relationship between democracy, capitalism, and representative government. As President Roosevelt told Congress in 1938, “Private enterprise is ceasing to be free enterprise and is becoming a cluster of private collectivisms; masking itself as a system of free enterprise after the American model, it is in fact becoming a concealed cartel system after the European model.”

Such problems were intractable under the US Constitution. Forcing down manufacturing prices to expand sales, output, and employment was the order of the day. But how was this to be done? On the one hand were the proponents of antitrust prosecution, who argued that an active competition policy through the courts might restore flexibility to prices grown rigid through consolidated ownership. Yet even industries such as cement, with five or six major national producers, exhibited administered price behavior. “Price leadership” by the most efficient firms was endemic in countless markets. How many companies would the courts require to consider price competition effective?

On the other hand, those groups left to bargain with the centers of corporate power—labor unions, purchasing cooperatives, chain-store retailers, and various groups of small actors exempted from the antitrust laws—increasingly organized themselves to administer their own incomes, just as the corporate managers did.

For much of the remainder of the twentieth century, historians described this phase of the late New Deal as a stalemate quickly overtaken, and then forgotten, amid the German invasions of Poland and France. Alan Brinkley has described the shift toward expanded government spending, and the legitimacy in Washington it brought to Keynesian thinking, as the “end of reform,” while multiple generations of historians have described the period that followed, the late 1940s and the 1950s, as the high-water mark of a “liberal” or “Keynesian consensus.” Yet the shift toward government spending exacerbated rather than alleviated the problem of relating prices to costs. Against the programmatic crosscurrents of antitrust and collective bargaining in the second Franklin Roosevelt administration, prices and costs began to rise on their own, often well before production met capacity, while idle plants and resources lay across the nation.

The incipient emergency “relief” and public works programs had already signaled this possibility. During the recession of 1937–38, Montana Senator James Murray spoke on the floor of Congress to decry the fact that companies had responded to the government contracts and expanded consumer demand of 1933–36 by increasing prices rather than expanding output and employment. “The Government was spending enormous sums of money for the purpose of putting people back to work and stimulating industry. The industries should have recognized the situation as an unusual and an abnormal one and should have been satisfied with lower prices and should have cooperated with the Government in putting men to work.” As Murray explained, “[T]he royal road to prosperity in this country is for the corporations to allow wages to be raised without raising prices, and that in that way, and only in that way, are we going to have prosperity in the country. We have to spread purchasing power…” The Senator repeated the sentiment throughout his career, urging greater postwar planning in 1944 on the grounds that, while “Government was spending billions of dollars to bring about reemployment, the corporations advanced their prices and skimmed the cream off the Government’s spending.”

The problem was this: before production met capacity, firms began to raise prices, and even when production did meet capacity, there was no guarantee that higher prices would elicit investments in capacity expansion. Despite the popular memory of US businesses leading the nation to victory in wartime, throughout World War II and into the Korean War major producers such as U.S. Steel resisted expanding capacity. They did so on the grounds that the level of demand required for commercial profitability was an unreasonable basis for private industry planning. During 1940 and 1941, prices rose far out of line with costs as managers, following the lean years of the Depression, exploited the mobilization period to earn back lost profits. After the war, despite 14 million new ingot tons of government-built capacity (about 14 percent of the total), the steel industry continued its high-price policy, arguing that markets could never absorb the volume of production required for a low-price, high-utilization management strategy.

What about a situation in which capacity was already constrained? During the late 1940s and early 1950s, many economists understood that the composition of spending required to achieve full employment was unlikely to match the available industrial capacity. Keynes himself wrote in 1943, “[T]he inducement to invest is likely to lead, if unchecked, to a volume of investment greater than the indicated level of savings in the absence of rationing and other controls.” In a period of total industrial war, the need for prompt expansion of output and employment entailed a dramatic shift in the composition of demand away from civilian consumption. In industries where demand outpaced full-capacity operation, threatening shortages, wartime price control passed into government rationing as the instrument used to limit the growth of civilian demand. Harvard economist John D. Black, one of the authors of the Agricultural Adjustment Act (AAA) and an advisor to the USDA, explained during the war that “[t]here is nothing sacred about supply-and-demand determined prices at any time.”

The war experience taught that such public control of prices could also serve civilian purposes in peace. And the experience of accelerating inflation in 1933–36, 1939–42, and after the war in 1946–50 taught that government spending, to be effective, might require such controls. Senator Murray’s New Deal sentiments remained near the heart of the Truman-era Democratic Party. On the basis of this experience, Richard Gilbert, Harvard economist and adviser to President Truman’s 1948 reelection campaign, wrote that “If we restrict government intervention to fiscal policy, therefore, we must face the unpalatable alternatives of partial employment on the one hand and continued and perhaps violent economic instability on the other. Some form of control of prices and wages, to supplement fiscal policy, naturally suggests itself…”

Lawrence Klein, one of the founders of modern econometrics, wrote in 1949 that since the powers over fiscal policy in the United States were given only to Congress, it was:

inevitable that the Congressional debating techniques will be much too slow and cumbersome to provide the flexibility needed for fiscal policy in a full-employment program. An alternative would be to maintain a skeleton O.P.A. [Office of Price Administration] always ready to step in when the price level shows inflationary tendencies…. We must have a planning agency always ready with a backlog of socially useful public works to fill any deflationary gap that may arise; similarly, we must have a price-control board always ready with directives and enforcement officers to wipe out any inflationary gap that may arise.

If firms held some discretion over their prices, they might choose to meet an expanding demand with higher prices in addition to—or even instead of—higher output and investments in capacity expansion. The drift of Fair Deal intellectuals was toward disabling this power of employers to protect owners’ incomes through higher prices rather than higher output. Where public authorities considered expanding supply indispensable to national priorities, target sectors expanded under the influence of government investment in plants and equipment. The result was what many began to describe in the late 1940s as the “politically managed economy.”

The politically managed economy and its discontents

Insofar as the New Deal dispensation was indigenously distinct from the Soviet-aligned socialism then flourishing around the planet, price control was central to the project. Throughout the late 1950s, for example, the US Congress held numerous hearings, led by Senators Estes Kefauver of Tennessee and Paul Douglas of Illinois and Representative Wright Patman of Texas, among others, to demonstrate that profits and prices in heavy industry—such as steel, petroleum, and auto—and in pharmaceuticals rose independently of the two general recessions of the period (1957–58 and 1960–61).

But demonstrating that some prices moved upward in recession was far different from drafting a statute and defining the regulatory jurisdiction of new price-control powers. During the Eisenhower administration, labor-aligned politicians in Congress suggested voluntary “price advisory” boards, to which regulated industries would be required to submit prior notice of price increases, but majorities for even such non-control forms of industrial surveillance could not be found in the Cold War atmosphere created by McCarthyism and the new Western insurgents in the Republican Party. 

Once corporations ceased to be controlled exclusively by shareholder-appointed managers, who was to say what they might accomplish? Throughout the late 1940s and into the 1960s—the period when price controls were most extensively used in the industrial sectors of the North Atlantic—anti-Communist propagandists and economic thinkers began to argue that meeting any particular set of price targets would require an elaborate and inexorably growing set of rules over supply and demand. “Price control,” Joseph Schumpeter wrote in 1946, “unless intended to enforce surrender of private enterprise, is irrational and inimical to prompt expansion of output.” Ohio Senator Robert Taft—the small-business, anti-interventionist candidate for the Republican Party nomination in 1952—wrote to President Eisenhower in January 1953 to complain: “In recent years the people have come to feel that price control is part of the ordinary operation of a free economy, and that belief is very dangerous to continued liberty.” Milton Friedman distilled the argument of the late 1950s and early 1960s. “If prices are not allowed to ration goods and workers, there must be some other means to do so,” he explained in Capitalism and Freedom in 1962. “Can the alternative rationing schemes be private? […] Price controls, whether legal or voluntary, if effectively enforced would eventually lead to the destruction of the free-enterprise system and its replacement by a centrally controlled system.” It was the effort to control supply lurking within the project of price stabilization that produced the late-twentieth- and early twenty-first-century taboo against overtly political control of markets. 

In lieu of a new independent federal agency, part of the regular responsibilities of the White House between the Eisenhower and the Carter administrations included informal “jawboning”—like Samson, flexing the power of the Office to slay Philistines and sway market participants—against high-profile price announcements, usually those accompanying union wage negotiations. In January and February of 1957, for example, President Eisenhower went so far as to publicly threaten to impose wage-and-price controls if companies and unions did not restrain themselves in the marketplace.

Enthroning the Fed

Notwithstanding the ideological protest of the Cold War, we never escaped the necessity of economic planning. Despite all odds, the American public today still holds the government responsible for the future. To the person in the street, the agency for such planning is undoubtedly the Federal Reserve, which has won for itself not only the power to veto macroeconomic expansions but the emergency powers to rescue financial markets through targeted asset purchases to prevent securities’ price declines. There is a deep historical reason for popular focus on the Treasury and the Federal Reserve in the current debate over how to respond to inflation: during the last period of sustained inflation, it was central bank policy that took the final decisive initiative in 1979 in the political struggle over public control of prices. This is the implicit context of the recent intellectual controversy over how to control inflation: the fact that new tools would weaken the central bank’s monopoly over economic planning.

Before Paul Volcker, few labor-friendly policymakers considered the central bank the appropriate tool for short-term economic planning. But in the context of the Vietnam War boom and the Nixon wage-and-price controls, the businessmen’s taboo against government spending and price control began to lose its ideological edge. First, the informal Democratic Party jawboning in the full-employment environment failed to keep the price level flat: the annual change in the price level rose from less than 2 percent in 1962–65 to 3 percent in 1966 and 4 percent in 1968. When Nixon entered office in 1969, he publicly disclaimed any effort to influence prices, formally or informally, instead promising to pursue a “gradualist” policy of fiscal-monetary contraction and laissez faire. The result was to exacerbate price-and-wage increases, which accelerated the annual change in the price level to more than 5 percent for most of 1969–71—confirming the widespread belief that interest-rate policy could do little for short-term high-employment stabilization.

After the Democratic-controlled Congress twice passed legislation authorizing presidential orders to control wages and prices, Nixon finally imposed controls in August 1971. Over the next two years, inflation slowed considerably. But three planning errors undermined the controls program and helped to discredit it later, during the recessions of 1974 and 1979, when prices again did not fall and when recourse to monetary policy acquired bipartisan appeal. The first planning problem was in agricultural supply for the years 1972 and 1973, when the United States entered into export agreements with the Soviet Union and China. Rather than allowing the USDA to follow the Price Commission recommendation to expand acreage plantings to keep farm prices down, Cost of Living Council Director Donald Rumsfeld and Agriculture Secretary Earl Butz urged higher farm prices to secure Republican votes in the Midwest during the 1972 election. The second planning problem was how to handle organized labor, which was so internally divided and disorganized that there was no prospect of effective cooperation in wage stabilization. The third planning problem was the announcement in January 1973, just months after the election, that the controls would be immediately scaled back and converted into a voluntary system of corporate self-reporting.

With neither gradual fiscal-monetary contraction nor continued price control able to stabilize the economy by 1973–74, the alternative lay in inducing a deep recession. This was left to the Federal Reserve, which pursued monetary contraction first under Arthur Burns in 1974 and then, famously, under Paul Volcker in 1979. Two further interventions into price-making accompanied the money squeeze. Trade liberalization, debated throughout 1969–74, was finally achieved in fast-track legislation signed by President Ford in 1975. “Deregulation” made a similar advance, on the grounds that it would weaken what Carter adviser Alfred Kahn referred to as “market power inflation.” Why guarantee stability in the old regulated markets of the New Deal when employment and profits in the core industrial sector were falling? Between 1978 and 1980, Congress weakened or dismantled controls over prices, subsidies, and market entry in airlines, natural gas, trucking, and mortgage banking. In the context of Paul Volcker’s massive interest rate hike, new price competition and the Reagan administration’s stand against organized labor worked to end the struggle over incomes by taking labor out of the picture.

The fact that the public looks to the Federal Reserve for assurances about the future is obvious to the financial press. It is also evident in the collection of economic statistics. The Federal Reserve is the only agency today responsible for collecting statistics on capacity utilization by industry, a critical metric for estimating unit costs and determining whether a given price increase results from increasing profit margins or from constant markups on rising costs. The previous White House bodies charged with monitoring cost-price relationships—the Council on Wage-Price Stability, the Cost of Living Council, the Council of Economic Advisers, the Office of Price Stabilization, the Office of Price Administration—either became objects of McCarthyist persecution or accommodated themselves to a world in which such microeconomic analysis was considered irresponsible for a public agency to conduct, on the grounds that the only potential use of such data was public control of prices. The origin of the one public data source of earnings and costs by industry that might be used as an alternative to calculate unit costs reveals this history. The Quarterly Financial Reports of the Federal Trade Commission (FTC) originated in the forms required by the World War II-era Office of Price Administration (OPA). When Congress defunded the OPA, the dismantled agency handed its quarterly reporting program to the FTC, where it has remained ever since.
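The analytical use of such statistics is worth spelling out. If price is a markup over unit cost, the logarithm of a price change splits exactly into a cost component and a margin component. The sketch below illustrates the decomposition with hypothetical utilization, cost, and price figures; it is not drawn from the Federal Reserve or FTC series mentioned above.

```python
# Decompose a price increase into unit-cost growth and markup growth.
# price = markup * unit_cost, so  dln(price) = dln(markup) + dln(unit_cost).
# Unit cost combines variable cost with fixed overhead spread over utilization.
# All numbers are hypothetical, for illustration only.

import math

def unit_cost(variable_cost, overhead, capacity, utilization):
    """Cost per unit when fixed overhead is spread over actual output."""
    output = capacity * utilization
    return variable_cost + overhead / output

# Period 0 and period 1 observations (hypothetical).
uc0 = unit_cost(variable_cost=50.0, overhead=2000.0, capacity=100, utilization=0.8)
uc1 = unit_cost(variable_cost=55.0, overhead=2000.0, capacity=100, utilization=0.9)
p0, p1 = 90.0, 110.0

markup0, markup1 = p0 / uc0, p1 / uc1

cost_part   = math.log(uc1 / uc0)          # price growth explained by costs
margin_part = math.log(markup1 / markup0)  # price growth explained by margins
total       = math.log(p1 / p0)            # equals cost_part + margin_part

print(f"unit cost: {uc0:.1f} -> {uc1:.1f}; markup: {markup0:.2f} -> {markup1:.2f}")
print(f"log price change {total:.3f} = costs {cost_part:.3f} + margins {margin_part:.3f}")
```

In this contrived example most of the price increase is a margin story, which is precisely the question capacity-utilization and unit-cost data exist to settle.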

The post-Nixon taboo

Despite the post-Nixon taboo, price setting continues to characterize the economy of the United States. The consequence of central bank planning has been a stark lack of precision in growth policy. The Federal Reserve is capable of financing growth in only the most general sense, and its planning powers today are inadequate for the type of guided sectoral expansion required for growing the renewable-energy industry, or for disciplining monopolistic organizations in health care, higher education, or housing.

Take medical care and medicine. When the government expands demand for hospital services, the result is rising prices rather than access. In current dollars, according to the Centers for Medicare and Medicaid Services (CMS), per-capita national health expenditures in the US have increased from $355 in 1970 to $10,739 in 2019, or about three thousand percent. The largest price increases have been in prescription drugs, hospital care, and nursing home facilities, which have increased by 525 percent, 317 percent, and 300 percent, respectively, since 1980.1 Recognizing the inability of central bank policy to plan effectively in such markets, every administration from Reagan to Obama has attempted some intervention into pricing, short of controlling supply. Reagan increased hospital competition and encouraged the consolidation that now characterizes urban healthcare markets. Obama responded to increased hospital market power by regulating the insurers’ premiums and expenditures, attempting to control the ultimate source of revenues.
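The percentage increases follow mechanically from the rebased indexes cited in footnote 1: with each index set to 100 in 2013, a 1980 level of 16 implies a 525 percent rise, and so on, as a quick check shows.

```python
# Percent increases implied by price indexes rebased to 100 in 2013
# (1980 levels from footnote 1 of the text).
levels_1980 = {"prescription drugs": 16, "hospital care": 24, "nursing homes": 25}

for category, level in levels_1980.items():
    pct_increase = (100 / level - 1) * 100   # e.g. 100/16 - 1 = 5.25 -> 525%
    print(f"{category}: {pct_increase:.0f}% increase")
# prescription drugs: 525%, hospital care: 317%, nursing homes: 300%
```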

Private market power in medicine is unlikely to continue to escape control. HR3, the drug-price reform bill passed by the House of Representatives in 2019 and currently included in the “Build Back Better” social-spending bill, imposes price controls over the pharmaceutical industry. Most journalists focus on the bill’s Title I, which eliminates the George W. Bush administration’s Medicare Part D prohibition against negotiating prices with pharmaceutical companies. But even apart from this Title I authority, HR3 would bring price ceilings to US pharmaceutical manufacturers. The bill’s Title III caps the prices of the 125 drugs purchased under Medicare’s Parts B and D, prohibiting them from rising faster than the Consumer Price Index.
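Mechanically, an inflation-indexed ceiling of the sort described works by growing a base price at the rate of the CPI. The sketch below is a stylized illustration only; the function name, base price, and index values are hypothetical and are not drawn from the bill’s text.

```python
# Stylized CPI-indexed price ceiling, of the kind described for HR3's Title III.
# A covered drug's price may not rise faster than the Consumer Price Index.
# Base price and CPI values are hypothetical, for illustration only.

def ceiling_price(base_price: float, cpi_base: float, cpi_now: float) -> float:
    """Maximum permitted price: the base price grown at the rate of the CPI."""
    return base_price * (cpi_now / cpi_base)

base_price = 100.0            # hypothetical base-year price
cpi = [258.8, 271.0, 292.7]   # illustrative index levels for three years

for year, cpi_now in enumerate(cpi):
    cap = ceiling_price(base_price, cpi[0], cpi_now)
    print(f"year {year}: permitted price <= ${cap:.2f}")
# Any announced increase above this path would breach the ceiling.
```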

Without authority for discretionary controls over basic commodities such as steel, lumber, and energy, our use of fiscal policy for employment goals is likely to carry with it the prospect of at least gradually rising prices. Today’s steel industry is a case in point. For decades, American firms have argued for higher prices as a critical protection in a market where, today, half of global capacity is in lower-cost Chinese plants.2 To protect the profits of the American steel companies, President Trump used the authority of the 1962 Trade Expansion Act to impose 25-percent tariffs on the metal. Recently, under President Biden, China ratified the shift to a high global steel price by announcing its own program of output cuts under the rubric of reducing carbon emissions. From $560 per ton in 2019 and 2020, steel prices ballooned to near $2,000 per ton this past autumn. The world price has more than tripled, and many US free traders are now urging the repeal of the Trump steel tariffs, which the Biden administration has maintained. Despite such profiteering prices, capacity utilization in the US remains around 80 percent, the industry managers’ apparent preference. Capacity expansion in the industry took years and only accelerated after China’s announcement of production cuts. Expensive steel, it seems, benefits both the labor rebalancing program of Biden’s US and the carbon rebalancing program of Xi’s PRC.

Such a price spike of a general commodity basic to capital equipment and buildings is certain to influence the structure of costs throughout the economy. It is clear in both the economic aggregates and industry-specific surveys that producers across industries are taking the opportunity of the Trump and Biden emergency relief stimulus, and the prospect of sustained development spending on both physical infrastructure and social policy, to raise profit margins. “It hardly seems as though businesses are being forced by costs to push up prices,” writes Dean Baker. “It instead looks like they are taking advantage of presumably temporary shortages to increase their profit margins.” The question of the moment—and the question raised by the history of controls—need not be whether or when to administer price ceilings, but rather whether and why our policy toolkit limits our capacities.

As a share of Gross Domestic Product, corporate profits corrected for inventories and depreciation were 12.6 percent in Q3 2021, up from 11.1 percent in 2019 and 10.9 percent in 2016. According to the FTC’s Quarterly Financial Reports, operating income for US iron, steel, and ferroalloys corporations was $10.1 billion in Q3 2021, up from a quarterly average of $1.2 billion in 2019 and $849 million in 2016. For US wood products corporations, the same figures were $9.4 billion in Q2 2021, $1.7 billion in 2019, and $1.7 billion in 2016; for US wholesale corporations they were $38.2 billion (Q2), $18 billion, and $16.3 billion. Some might claim these earnings are due only to expanding volumes, but operating income has grown several times faster than any plausible growth in shipments; it is rising prices relative to costs that account for the difference. Others might argue that such profits are necessary to induce entry and expand capacity in these industries, but the same FTC data show few departures in the growth trend of property, plant, and equipment in these sectors comparable to the explosion in their income from operations. It does not seem as if entrepreneurs in these markets expect the current level of demand will persist long enough to validate any dramatic expansions in capacity. But if not, then what is the current round of price increases really for?
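The volume objection can be posed as simple accounting. Since revenue is price times quantity, volume growth at constant prices and margins raises operating income only in proportion to shipments; the sketch below, with hypothetical figures loosely echoing the steel numbers above, shows why a roughly eightfold jump in income implies wider margins.

```python
# Was the jump in operating income a volume story or a price story?
# Revenue = price * quantity; operating income = (price - unit cost) * quantity.
# Figures below are hypothetical, for illustration only.

def operating_income(price, quantity, unit_cost):
    return (price - unit_cost) * quantity

# Baseline year (hypothetical).
base = operating_income(price=700, quantity=10.0, unit_cost=580)          # 1,200
# Counterfactual: volumes grow 10%, prices and unit costs unchanged.
volume_only = operating_income(price=700, quantity=11.0, unit_cost=580)   # 1,320
# Observed: prices nearly triple while unit costs rise far less.
observed = operating_income(price=1900, quantity=11.0, unit_cost=980)     # 10,120

print(f"baseline {base:,.0f}; volume-only {volume_only:,.0f}; observed {observed:,.0f}")
# Volume growth alone raises income about 10%; the roughly eightfold observed
# jump can only come from prices rising relative to costs, i.e., wider margins.
```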

  1. To an index of 100 for 2013, price indexes for these categories rose from a level of 16, 24, and 25, respectively, in 1980.

  2. “Global excess steel capacity and overproduction in China threaten to unleash new import surges, putting good-paying jobs and investments at risk,” warned the American Iron and Steel Institute, the research and lobbying group for US steel firms.

