Category Archive: Analysis

  1. Rating Sovereigns


    As dark clouds gather on the horizon of the global economy in the third year of the pandemic—with debt stocks swollen, interest costs rising, and growth undermined by energy insecurity and war—policy makers and pundits are anxiously watching sovereign credit ratings as the harbingers of the storm. What will it take to save Italy from the fatal downgrade to junk status that could bring down not just the fourth largest economy in Europe, but the entire eurozone with it? Could much of the developing world be a few downgrades away from devastating debt crises? Such concerns take history as their guide. In the late 1990s, a series of rapid-fire sovereign downgrades deepened the Asian financial crisis. In the late 2000s and early 2010s, sovereign downgrades played similar havoc with European countries—pushing Greece into default, and forcing eurozone authorities to take desperate measures to save Cyprus, Ireland, Italy, Portugal, and Spain from the same fate. In both instances, sovereign ratings were blamed not only for having failed to predict the debt servicing difficulties of the countries in trouble, but also for exacerbating the crises via belated and panicked downgrades.1 Sovereign ratings exercise spectacular—and often disastrous—power in times of crisis.

    At the same time, sovereign ratings also exert subtler, but no less problematic influence over the fate of countries in “normal times.” The judgment that sovereign ratings pass about a country’s creditworthiness consistently affects the interest rate that the country has to pay on its debt. Lower ratings increase the burden of interest costs on the budget, shrinking the amount of money available to provide public services, address social needs, manage the economy or fulfill any other important political, social and economic objectives. The larger the outstanding debt, the greater the budgetary impact of ratings. Given the high and growing indebtedness of governments around the world over the past decades, even a couple of dozen basis points of increase in interest costs associated with adverse rating changes can have a tangible effect on the budget. Furthermore, sovereign ratings affect the financing costs of all economic actors across the domestic economy, thereby indirectly influencing the functioning of equity markets, growth, unemployment and competitiveness.2 Therefore, governments face very strong incentives to try to stay in the good graces of the rating agencies.
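    To see the scale of the effect, consider a back-of-the-envelope sketch in Python. Every figure below is invented for illustration—none describes an actual country—but the arithmetic shows why a few dozen basis points matter more the larger the debt stock is.

    ```python
    # Back-of-the-envelope sketch of the budgetary impact of a rating-driven
    # rise in borrowing costs. All figures are illustrative assumptions.

    debt_to_gdp = 1.20        # outstanding debt as a share of GDP (assumed)
    spread_increase_bp = 25   # rating-related rise in borrowing costs, basis points
    avg_maturity_years = 7    # average maturity of the debt stock (assumed)

    # Extra interest cost once the whole stock has rolled over at the higher rate:
    extra_cost = debt_to_gdp * spread_increase_bp / 10_000
    print(f"Steady-state extra interest cost: {extra_cost:.2%} of GDP")

    # Only maturing debt reprices each year, so the burden phases in gradually:
    first_year = extra_cost / avg_maturity_years
    print(f"First-year extra cost (rollover only): {first_year:.3%} of GDP")
    ```

    On these assumptions, a 25-basis-point penalty on a debt stock of 120 percent of GDP eventually costs 0.30 percent of GDP every year, before counting the knock-on effects on private borrowing costs.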

    Indeed, rating agencies have been called the “new superpowers” of our globalized world, whose influence over governments’ financial elbow room rivals that of the best-known wielders of financial power, such as the International Monetary Fund or the World Bank. And just as the concern about international financial institutions is that they make funding conditional on policy choices that countries would not make on their own, the worry about credit-rating agencies is that they interfere with the sovereignty of democratically elected governments by tying favorable funding conditions to specific policy choices. In fact, scholarly research has shown that the Big Three regularly comment on the politics and policy choices of the countries they rate,3 and they assign lower ratings to countries with center-left governments4 and large welfare commitments.5 The fact that unelected, unappointed, for-profit commercial organizations can put a price tag on electoral and policy choices in democracies around the world should give us pause.

    Sovereign ratings continue to exercise immense power over the fate of governments—in good times and bad—despite vigorous attempts in the wake of the last crisis to “break the power” of the “Big Three” credit rating agencies—Fitch, Moody’s and Standard and Poor’s—in Europe and the US alike. Incensed by what they claimed were unfair, aggressive, and harmful downgrades in the midst of financial, fiscal and economic troubles that the Big Three themselves were partially responsible for, policy makers committed to sidelining ratings from public regulatory systems, and promised to create new, better (public) rating institutions. US authorities brought billion-dollar lawsuits against the Big Three. Congress created extensive regulatory oversight over the credit-rating business, and eliminated the public use of credit ratings under the Dodd-Frank Wall Street Reform and Consumer Protection Act. In Europe, the Parliament and Council created the European Securities and Markets Authority with regulation of the credit-rating industry as one of its main responsibilities. Yet, more than ten years later, the Big Three are just as influential (and profitable!) as ever, and once more the public braces itself for a potential new wave of sovereign debt crises triggered by downgrades.

    The resilience of sovereign ratings in general and the Big Three in particular is all the more remarkable because—quite apart from regulatory action—investors themselves had strong incentives to break with ratings and the Big Three, given the grave losses they suffered in successive crises due to rating failures. The fact that markets continue to rely on the ratings of the Big Three, instead of trusting their own risk assessment, experimenting with alternative risk metrics, or availing themselves of the services of other rating agencies, has profoundly puzzled expert observers. The staying power of the Big Three is truly striking.

    A common language

    Understanding why the Big Three became such influential, and apparently unassailable, gatekeepers of sovereign-debt markets starts with debunking a common (intuitive but misleading) conception about the role that ratings play in financial markets today. Ratings—with their well-known alphanumeric code—denote categories of credit risk, ranging from virtually riskless ‘AAA’ all the way to default-prone ‘C’. As indicators of risk, ratings are commonly construed as a decision-support tool for investors: a due-diligence review of investment options that identifies and highlights risks investors might otherwise be unaware of. This conception of ratings is reflected, for example, in the lawsuits and legislative changes that sought to make rating agencies liable for the losses suffered by investors as bond prices collapsed in the wake of successive rating failures, implying that drastic rating changes exposed fraudulent, negligent or incompetent handling of the due diligence task that the Big Three took upon themselves.6

    Interpreting ratings as a due diligence mechanism is intuitive but anachronistic. Historically, rating agencies did start out as fact-finding and -analyzing enterprises. Emerging in the late 19th and early 20th centuries in the United States, they helped investors assess the bonds of railroad companies and provided businessmen with basic information about little-known clients they contemplated extending commercial credit to.7 But the compilation and analysis of difficult-to-access information declined in utility in the 1970s, with the advent of the information age and the ascendancy of institutional investors in bond markets. Armed with practically limitless (often privileged) access to information, modern computing power and armies of highly trained analysts and portfolio managers, contemporary institutional investors no longer need to rely on rating agencies for information, insight or judgment. 

    Yet, rather than make them redundant, the rise of the institutional investor has made credit ratings an indispensable component of contemporary financial markets. Ratings became a crucial coordination mechanism to facilitate the interactions of institutional investors with one another and with their stakeholders. Instead of unearthing novel information, ratings now serve as commonly accepted, third-party indicators of credit risk to enable institutional investors, their clients and regulators to negotiate about risk, enter transactions, and manage relationships characterized by asymmetric information. 

    Since institutional investors invest money on behalf of others, they may be inclined to take higher risk than would be acceptable to their clients in the hope of realizing higher returns. Managing this moral hazard is of central concern to clients and to regulators, but devising private or public rules to regulate the risk-taking of institutional investors is predicated upon a common understanding of how risky each credit instrument is. Since clients and regulators cannot possibly keep track of the riskiness of innumerable bonds from across the globe in the portfolios of institutional investors, portfolio mandates and official regulations depend on ratings as commonly trusted third-party indicators of credit risk to define limits on risk-taking. Without an independent measure of credit risk, institutional investment on behalf of millions of investors would be unfeasible. Ratings serve to mitigate information asymmetries between institutional investors, clients, and regulators through independent risk estimates.
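    A minimal sketch (with a hypothetical mandate and made-up holdings) illustrates how such rules lean on ratings: the client or regulator only needs to write “investment grade or better” into the mandate, and the commonly trusted rating scale does the rest.

    ```python
    # Sketch of how a portfolio mandate can cap risk-taking via third-party
    # ratings. The mandate and the holdings are hypothetical.

    INVESTMENT_GRADE = {"AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
                        "BBB+", "BBB", "BBB-"}

    def mandate_violations(holdings, allowed=INVESTMENT_GRADE):
        """Return the bonds an 'investment-grade only' mandate no longer permits."""
        return [bond for bond, rating in holdings.items() if rating not in allowed]

    holdings = {"sovereign_A": "AAA", "sovereign_B": "BBB-", "sovereign_C": "BB+"}
    print(mandate_violations(holdings))  # ['sovereign_C']: a rating below BBB-
                                         # mechanically forces the manager to sell
    ```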

    In dealings among institutional investors, ratings serve as a shorthand for defining minimum quality standards for collateral used to secure against counterparty risk in the myriad transactions (mostly repos) that make up a large part of the volume of activity in modern financial markets. Rather than mitigating information asymmetry, in this case ratings allow for efficient transactions without having to negotiate (and repeatedly renegotiate, as circumstances change) standards for acceptable collateral in each specific deal. Without a common standard for denoting risk, transactions among institutional investors would be too cumbersome to be practical.

    What ratings do in both of these settings is—as David Beers, Standard and Poor’s Global Head of Sovereign and International Public Finance, once put it—provide a “common language of credit risk”8 that actors enmeshed in a complex web of financial market transactions can use to negotiate and regulate their relationships. By supplying a common language, ratings play a crucial role in supporting the relationships and transactions that constitute contemporary financial markets. They play an infrastructural function comparable in significance to that of SWIFT, the interbank messaging system. Whereas SWIFT merely transmits messages about transactions, ratings provide the vocabulary for defining vital terms of transactions and relationships. Sovereign ratings—and particularly sovereign ratings on the upper end of the rating scale—play a particularly important part in this vocabulary, because highly rated sovereign bonds make up the bulk of the so-called safe assets that are most commonly used to safeguard client-investor relationships and inter-investor transactions. Without a common language for what constitutes safe enough assets to reassure clients, regulators and counterparties, these transactions and relationships—i.e. modern financial markets—could not function.

    Empowered by convention and failure

    Why do the Big Three retain a monopoly over this medium of communication? Although several dozen credit-rating agencies are registered across the US and Europe, the Big Three continue to overwhelmingly dominate the rating market, with a practically 100 percent share in sovereign ratings.9 (A curious feature of the rating market is that it is a joint monopoly of Fitch, Moody’s and Standard and Poor’s—with Moody’s and Standard and Poor’s each holding 100 percent of the market and Fitch a somewhat lower share. That is because the overwhelming majority of collateral standards and private or public regulatory documents call for at least two ratings and specifically require ratings by Fitch, Moody’s and Standard and Poor’s.)10 In the wake of the last crisis, new contenders threw their hats into the ring with some fanfare, but their initiatives petered out before they could properly set up their operations, while existing competitors continued to be constrained to narrow niches of the rating market.

    The Big Three gained first-mover advantage in the market for a global lingua franca of credit risk between the mid-1970s and the early 2000s, when they were the only “nationally recognized statistical rating organizations” in the United States.11 Obliged to use the Big Three for public regulatory purposes, institutional investors fell back on the same method of risk certification in their own private portfolio mandates and contracts, too. Thus, the ratings of the Big Three became entrenched in both public and private use in the United States just as modern financial structures were born. Given the dominance of American financial markets and investors within the international financial system, the practice of using the ratings of the Big Three as commonly accepted indicators of risk was exported as globalization progressed. The world settled on the convention that when market actors need to discuss credit risk, they universally speak the language provided by the Big Three. This convention goes a long way towards explaining why no competitors organically grew into serious challengers even after the Securities and Exchange Commission opened up the status of “nationally recognized statistical rating organizations” to new applicants, and the European Securities and Markets Authority registered several dozen new rating agencies.

    Convention alone cannot explain why the Big Three retained their monopoly over the global language of risk, after the language they provided repeatedly proved so fragile. “AAA” is supposed to mean practically zero credit risk, whereas “BBB” reflects tangible, if moderate, doubts about payment capacity. If a AAA-rated sovereign can become BBB-rated within a relatively short period of time (as Ireland and Spain did in the European debt crisis), “AAA-rated” no longer denotes the kind of safety that market actors meant to enshrine in their contracts and portfolio regulations. When ratings drastically change, market actors are forced to make hasty changes to their portfolios to adhere to the stipulations of the contracts and regulations that bind them. Such forced adjustments have the potential not only to impose losses on individual investors but also to destabilize entire markets by triggering simultaneous adjustments across innumerable portfolios and placing enormous pressure on the prices of the affected bonds.

    In light of the deleterious effects of failing ratings, any competitor able to offer more reliably accurate indicators of credit risk should have the potential to supplant the Big Three. But reliably accurate indicators of credit risk—especially sovereign credit risk—are a mirage. Credit relations are characterized by uncertainty, not risk: it is not possible to accurately account for all possible contingencies that might affect the future ability and willingness of debtors to service their debt over the lifetime of bonds and assign “realistic” probabilities to all conceivable outcomes. This is especially true for sovereign debtors whose debt servicing capacity is determined by the complex interaction of political, economic, and fiscal factors. Absent a crystal ball, sovereign ratings (or any other forms of sovereign credit analysis) are no more than educated guesses about the future credit standing of countries, liable to be proven wrong from time to time by adverse surprises. 

    Furthermore, even if analysts compensate for the limits of their foresight by preemptively lowering ratings to reflect threats to debt servicing capacity from conceivable negative surprises, they cannot insure ratings against market panics. Whereas even the most dramatic political, economic or fiscal blows rarely cause a country with previously strong debt servicing capacity to renege on its debt commitments, a “sudden stop” of funding, caused by panic, might do that. “Sudden stops” make it impossible for countries to roll over their expiring debt at an interest rate that they can afford to pay, forcing them to default even if they otherwise have good credit standing. Maintaining an investment-grade rating on a sovereign that defaults is the ultimate disgrace that can befall a rating agency. Therefore, at any signs of panic brewing, ratings have to precipitously fall (towards speculative grade) to reflect the growing possibility that the given country is careening into default—irrespective of what the country’s economic, political and fiscal fundamentals warrant. But, of course, falling ratings fuel further sell-offs—as investors try to comply with margin calls on collateral and rating requirements in regulated portfolios—triggering further downgrades. Once panic starts, ratings fail.

    Indeed, sovereign ratings have so far always failed by falling victim to panic. The Irish example is illustrative of this dynamic. Even though the shock that hit Ireland’s public finances was tremendous—by bailing out domestic banks, the government increased the country’s debt by almost a hundred percent of GDP—the country’s debt servicing capacity was not fundamentally undermined. Once the dust settled after the crisis, Irish sovereign ratings settled on A+. This was not the coveted AAA that Ireland used to hold, but a decent investment-grade rating. However, during the crisis, as investor panic drove yields on Irish debt through the roof, there was a very real possibility that Ireland would lose access to market funding altogether. This triggered a rapid succession of downgrades that plunged Ireland from AAA in 2009 into non-investment-grade territory by mid-2011, when markets were finally calmed by a bailout package from the troika of the International Monetary Fund, the European Commission and the European Central Bank.12 Whereas some downward adjustment in the longer term was clearly warranted by the shock to public finances, the mayhem of the 10-notch drop within two years (a manifest rating failure) was driven by market panic incommensurate with the original shock that triggered it, as evidenced by the five-notch correction after the crisis. Other country cases in the Asian financial crisis and the European debt crisis illustrate the exact same pattern.13
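    For readers keeping count, notches are simply steps on the agencies’ alphanumeric scale. A small sketch (using the standard S&P/Fitch-style ordering; exact notch counts can differ slightly across agencies’ scales) makes the Irish trajectory cited above concrete:

    ```python
    # Counting rating "notches" on an S&P/Fitch-style scale. The Irish levels
    # cited in the text: AAA in 2009, speculative grade by mid-2011, A+ after
    # the crisis. BBB- is the lowest investment-grade rating.

    SCALE = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
             "BBB+", "BBB", "BBB-",
             "BB+", "BB", "BB-", "B+", "B", "B-",
             "CCC+", "CCC", "CCC-", "CC", "C"]

    def notches(from_rating: str, to_rating: str) -> int:
        """Steps moved down the scale (positive = downgrade)."""
        return SCALE.index(to_rating) - SCALE.index(from_rating)

    print(notches("AAA", "BB+"))  # 10: from AAA to the first speculative-grade notch
    print(notches("AAA", "A+"))   # 4: distance to where Irish ratings later settled
    ```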

    Given the universality of this dynamic in times of crisis, there is little reason to hope that similar dynamics will not unfold in the future, or that alternative indicators of credit risk would not be vulnerable to this mechanism. Failures of credit risk indicators are baked into the markets. Consequently, even if the Big Three make mistakes in their credit-risk analysis (which they certainly do), any improvement on their performance is ultimately bound to be marginal given the inevitability of failure of any indicators of credit risk in the face of bad news and market panic. The measurement of credit risk is an essentially illusory undertaking, predestining anyone that attempts it to recurrent failure. This fact explains not only the petering out of private initiatives for supplanting the Big Three, but also the manifest reluctance of public regulators to get involved in the production of risk indicators.14

    The futility of attempts to produce reliable estimates of credit risk does not undermine the vital need of contemporary financial markets for independent, authoritative, third-party indicators of credit risk, to be used in negotiating the relationships of institutional investors with their clients, regulators, and one another. What it does is militate strongly against destabilizing the current entrenched convention around the Big Three as providers of the crucial common language of risk for the sake of experimenting with alternatives. Reluctance to rock the boat in the absence of better options helps to account for the timidity of authorities in enforcing regulations that would have undermined the willingness or ability of the Big Three to continue to provide their services. The most conspicuous example of this was the reversal of the Dodd-Frank Act’s decision to make rating agencies legally liable for their opinions. The legislative change was promptly revoked when the Big Three threatened not to authorize the use of their ratings in prospectuses and debt registration statements, causing severe dislocations in the market.15 Other provisions of the Dodd-Frank reforms—like the tasking of the Securities and Exchange Commission with either changing the business model of rating agencies or creating a board that randomly distributes rating assignments among the nine nationally recognized statistical rating organizations—were quietly shelved without any attempt to implement them. Similarly, in Europe, regulations never went beyond technical details like requiring a set rating calendar or rules for disclosure. Regulatory forbearance underscores that, for all their faults and failures, the ratings of the Big Three are the only game in town.

    Reflexivity

    The tenacity of the Big Three is full of paradoxes. The firms derive immense power and spectacular profits from having a joint monopoly on an impossible task. The fact that they are bound to repeatedly fail at that task—with grave consequences for financial markets and entire national economies—neither diminishes the demand for their services, nor threatens their monopoly. In fact, inevitable failure further cements their position as pivotal actors in financial markets whose authoritative judgment on risk governs portfolios and contracts around the globe. Nevertheless, for all their authority, the Big Three are also hostages to markets, given the immense vulnerability of their ratings to sudden adverse changes in market sentiment. When markets respond to unexpected bad news with signs of panic, foreshadowing the possibility of a “sudden stop,” ratings have to tumble to reflect impending disaster. But exactly by doing that, they unleash a meltdown of bond prices, fulfilling the prophecy of disaster. Caught in a predictable, yet unavoidable, vicious cycle, ratings drive themselves and markets into failure—becoming both the victims and the culprits of crises.

    While intriguing in its own right, the logic of credit ratings also highlights crucial vulnerabilities in contemporary global financial architecture. Beyond the fact that the transactions that make up the bulk of financial market activity around the globe are governed by a shaky illusion (that credit risk can be measured), the logic of ratings also calls attention to the heightened danger of intensified market reflexivity.16 Financial markets have always been complex systems of interconnected decisions in which investors try to anticipate what the market at large might do, generating potentially destabilizing movements. However, in contemporary global financial architecture, the proliferation of relationships and transactions that require a common language of risk has amplified the coordination of destabilizing developments around the globe. Investors were always watching the market, but there used to be some room for differences in perceptions that would dampen herd effects. With a vast share of portfolios legally forced to follow changes in ratings en masse, ratings became the focal point of reflexivity, overpowering any stabilizing effects of potential differences in opinions. The more one contemplates ratings, the more reasons emerge to eliminate them altogether. Whether that could be done without dismantling the broader global financial architecture that encases them, however, is doubtful.

  2. Pragmatic Prices


    The following is an adapted excerpt from How China Escaped Shock Therapy: The Market Reform Debate.

    Price setting in the Second World War

    European and American traditions of economic theorizing on price control are intimately connected with war. The experience of the First World War had been one of inflation and limited price controls throughout its duration, followed by a sudden liberalization after the war’s end. The transition from war to peace gave rise to a sharp boom–bust cycle and eventually led to the Great Depression.

    The experience stood as a warning for policymakers during World War II. With the exception of China, which was experiencing hyperinflation,1 all major powers implemented price and wage controls that were far more comprehensive than those of the First World War: “Controls over prices and wages were the rule; freedom from such regulation was the exception.”2 Despite their scale, the policies were not based on elaborate theoretical principles. In fact, some of the most sophisticated 20th-century economists—Keynes, Hansen, Galbraith—were unable to reconcile their theories with the realities of the war economy. John Kenneth Galbraith, the most prominent American price-fixer of the Second World War, described inflation control as an evolutionary development “in the sense that the final structure was influenced less by an effort to build to an overall design than by a series of individual decisions.”3

    In the following extract, I survey the theory, practice, and outcomes of price setting during the Second World War and its aftermath. In doing so, I argue that the implementation and practice of American price controls was less a result of ideology, economic theory or calculation than of pragmatic decision-making according to economic needs. In fact, the most beautiful theoretical plans for inflation policies crumbled when confronted with the reality of the war economy. The pragmatism that ultimately served to stabilize prices was made possible through the surrender of theoretical dogmas in the face of emergency.

    Three views on wartime inflation

    In 1940, Keynes’s How to Pay for the War set the tone for reflections by professional economists on war finance and economic stabilization. His considerations of the war economy radically departed from the principle of effective demand—the concept for which he is most famous. Keynes had derived the importance of effective demand for the general case of peacetime in his General Theory. But he did not believe that it applied to the special case of war: “[I]n war time, the size of the cake is fixed,” he argued. “If we work harder, we can fight better. But we must not consume more.”4

    According to Keynes, the reason for this difference was that, in times of war, all expansion of production is to supply the goods needed for the war: “The war effort is to pay for the war; it cannot also supply increased consumption.”5 Under conditions of war, Keynes argued, the government had to expand its spending; even before full employment of labor and capacity was reached, demand for consumption goods would exceed their supply. This was the case because “[e]ven if there were no increases in the rates of money-wages, the total of money-earnings [would] be considerably increased.”6 Keynes observed that people previously unemployed or not in paid employment were drawn into military service or civilian war production and hence received a money wage. The aggregate wage fund increased, but it encountered roughly the same amount of consumption goods that had been available before that increase.

    In principle, Keynes indicated three alternatives for solving this problem: a laissez-faire solution; comprehensive price controls and rationing; and his preferred scheme of deferred pay combined with price controls for selected essentials. The first was based on the assumption that peacetime economic policies and taxation were, in essence, fit to serve the war economy and needed only to be complemented with propaganda to encourage voluntary saving.7 Keynes found it highly unlikely that voluntary savings would be sufficient both to limit purchasing power enough to match the constrained supply of consumption goods and to provide the necessary finance for the war effort.8 This scheme would imply a repetition of the policy of the First World War, which entailed a “sufficient degree of inflation to raise the yield of taxes and voluntary savings.”9 But this would not actually be voluntary saving. It would, rather, be “a method of compulsory saving, converting the appropriate part of the earnings of the worker which he does not save voluntarily into the voluntary savings (and taxation) of the entrepreneur.”10

    Keynes warned that inflation would be fueled by a price-wage spiral. As the economy approached full employment and the wage fund increased, prices would rise. If workers were compensated for the rising prices, this would put renewed pressure on the price level.11 As a result of accelerating inflation, workers would be left without savings and without increased consumption enjoyments. Capitalists, on the other hand, would profit from the increasing prices and would become creditors to the government. Keynes cautioned that, after the war, the state would be left with very high debt in the hands of a powerful and rich few. The workers would have paid for the war with their labor but would have been left with nothing.

    If Keynes’s first alternative was somewhat laissez-faire, the second alternative was the opposite. It was to “control the cost of living by a combination of rationing and price-fixing.”12 Keynes took an equally critical stance toward this approach. It might be “a valuable adjunct” to his main proposal, but it would be “a dangerous delusion to suppose that equilibrium [could] be reached by these measures alone.”13

    In Keynes’s eyes, price controls would not serve to effectively limit the excess purchasing power that resulted from the war-driven expansion of employment. “[I]t would never be practicable to cover every conceivable article by a rationing coupon” and, by the same token, to control all prices.14 Keynes therefore argued that purchasing power would be redirected toward those commodities that remained uncontrolled because they attracted relatively little demand. The consumer would end up receiving what was “least desirable,” while overall excess demand remained and drove up the prices of these uncontrolled commodities.

    In sum, for Keynes, comprehensive price controls and rationing were unlikely to be effective in containing the excess purchasing power. If, against his prediction, they were to be effective, the result would be an undesirable allocation of the limited supply of consumption goods.

    Instead, Keynes promoted a third alternative: “a scheme of deferred pay” combined with carefully selected price controls to the extent necessary. He argued that the excess purchasing power of the wage earners should be withdrawn during wartime by forced saving with a government-run bank. This way, they would be rewarded after the war by “a share in the claims on the future which would otherwise belong to the entrepreneurs.”15 Keynes thought that inflation would be contained by the resulting temporary reduction in aggregate demand. The same could be achieved by taxation, but in this case, the wage earner would be left without any individual claim on future wealth.

    The plan of deferred pay was to be assisted by limited price controls and rationing, which would serve “to divert consumption in as fair a way as possible from an article, the supply of which has to be restricted for special reasons,” such as the interruption of foreign trade.16 Such a diversion of demand should, according to Keynes, take the form of rationing and price control only “if this article is a necessary, an exceptional rise in the price of which [was] undesirable.”17 For all other goods, demand should be checked by “the natural method” of a rising price in response to a limited supply.

    Keynes’s plan for deferred pay was the most prominent theoretical contribution to the question of war finance, not only in the United Kingdom but also in the United States.18 Even Hayek acknowledged that Keynes had set the standard for thinking about the problem of wartime inflation.19 Another important contribution more tailored to the specific conditions in the United States came from Alvin Hansen, an institutionalist and one of America’s leading teachers of Keynesian thought at Harvard University.20

    Hansen departed to some extent from Keynes in his vision of price controls. For him, “until an approach to full employment [was] reached … the main danger of inflation [was] in the development of bottlenecks”21 and, hence, these bottlenecks should be the primary target of anti-inflation policy. Hansen recommended that “the weapon of specific price increases where these may help eliminate bottlenecks” should be used when “provision of adequate plant and equipment capacity and … an adequate supply of skilled mechanics” was readily forthcoming.22 In the event of a lack of such capacity and labor with regard to a specific bottleneck, however, direct price control and rationing could help prevent inflation. Hansen saw the most serious of these bottlenecks in steel.23 Thus, he differed from Keynes by shifting attention from price controls on necessary consumption goods to those on production goods. While Keynes’s plan relied almost entirely on the relationships between homogeneously perceived aggregates, Hansen considered the great asymmetries in sectoral production capacities and demand pressures. In this view, shortages could exist side by side with oversupply, driving inflation up long before full capacity utilization was reached.

    Galbraith—then a young economist—made his name by deepening Hansen’s analysis of such heterogeneities. He too emphasized the pressing need to prevent inflation, which, as a result of the First World War experience, had been an “almost paranoiac concern in 1940 and 1941.”24 But neither Hansen’s peacetime-inspired notion of bottlenecks nor Keynesian considerations of some aggregate relations would serve to capture the dramatic change in requirements resulting from war and, hence, to understand the problem of war inflation. According to Galbraith, the problem of the war economy was “progressively more difficult” than these two notions suggested, and it entailed nothing less than a reorganization of the resources in the whole economy. Under the circumstances of such restructuring, one would “encounter a steadily increasing number of industries where the supply function [would] be inelastic.” As a result of these rigidities, Galbraith argued, there would be “price advances in the interim” and “[f]ull employment will have little or no relation to the appearance of inflation.”25

    Nevertheless, Galbraith remained optimistic that “[r]easonably full use of resources without serious inflation [could] be achieved.” But, contrary to Keynes, in Galbraith’s view, it was impossible to “rely entirely, or even in major part, upon measures which reduce[d] the general volume of spending in the economy.”26 Such a reduction in aggregate demand had to be combined with direct measures to facilitate the industrial reorganization. This, according to Galbraith, had to include two main tasks: first, it was necessary to develop capacity, skills, and domestic sources of material supply to smooth the expansion of anticipated pressure points; second, “in the areas where resistance develop[ed] … specific price controls or price-fixing” supported by a degree of rationing was needed. Thus, Galbraith believed in a role for Keynes’s investment policies during wartime, while Keynes himself had declared his own theory inapplicable to this special case.

    Practical attempts at control

    While the arguments of Keynes, Hansen, and Galbraith were all convincing on their own terms, they all proved impractical in reality. At the time when Galbraith caught the attention of the economics profession with his bold argument in favor of greater price controls, Leon Henderson was heading the newly created Office of Price Administration (OPA). He was a prominent economist in the Roosevelt administration, and his approach to economic policy “had the characteristic New Deal qualities of public activism, brash experimentalism.”27 Henderson had become involved in attempts at price stabilization when the prices of raw materials and industrial goods shot up after the German invasion of Poland, which he thought would impede the recovery of the American economy still suffering from the Great Depression.28 When the United States entered the Second World War in December of 1941, Henderson immediately pushed for far-ranging controls to stabilize prices during the war effort and to prevent a boom–bust cycle upon its end.29 Conscious of Galbraith’s writing, Henderson decided to offer him what would come to be called “the most powerful civilian post in the management of the wartime economy.” Galbraith was put in command of all prices in the United States.

    Thus began “the longest and most comprehensive trial [of price controls] in America’s history.”30 The American price-fixers had to defend their selective price schedules on multiple fronts. They had to debate the high-powered economists with their theoretical considerations, negotiate fiercely with the industry bosses, and fight to be granted the necessary legal power from Congress. Yet, to the surprise of the price-fixers, for those commodities for which they published specific price schedules, the ceiling prices were well observed even before penalties could be imposed.31 By the fall of 1941, their informal controls effectively restrained about 40 percent of wholesale prices.32 But the task of determining commodity-specific prices for all relevant products, while allowing for a degree of flexible price adjustments, proved impossible.

    The price-fixers were challenged by the complexity of input-output relations and “began to realize for the first time what an unreasonably large number of products and prices there were in the American economy.”33 Further, they had overlooked the importance of “wage–push inflation.”34 The OPA, in line with the New Deal legacy, remained committed to the cause of the poor, the workers, and the farmers and initially tried to abstain from wage controls and strict price controls for agricultural goods. As employment expanded, upward pressure on wages mounted and, in a mutually reinforcing spiral, drove up agricultural and industrial prices. This proved to be a great challenge to the effort of price stabilization.35

    Despite the OPA’s efforts, consumer prices had risen by 11.9 percent, and wholesale prices by 17.2 percent, from April 1941 to April 1942 alone.36 This immediate inflationary pressure was much higher than most economists had expected at the beginning of the war. As a result, the OPA and Galbraith became amenable to the ideas of Bernard Baruch.

    Baruch’s voice was very different from that of most of the professional economists. Born in 1870, he had become rich in his early years on Wall Street and had gained influence as a political advisor to President Woodrow Wilson. He became chairman of the War Industries Board during the First World War and, as such, was also a member of the Price-Fixing Committee. Baruch experienced the challenges of controlling inflation firsthand.37 He lobbied for an overall price freeze in the Second World War. Instead of selective price controls, he believed that all prices should be pegged where they stood. This became known as the Baruch Plan.38

    Henderson, Galbraith, and their team, joined by celebrated economists such as Irving Fisher, had initially lobbied against the Baruch Plan. But about a year into Galbraith’s work at the OPA, they had to admit that their own attempt at scientific price-fixing had failed to prevent the price level from rising rapidly.39 On April 28, 1942, the General Maximum Price Regulation was imposed under the lead of the OPA. All “prices legally within reach” were given a ceiling, defined as “the highest charged [price] in March by that seller for the same item,”40 and wages were brought under government control.41 The new policy was based on given prices observed in the market within a certain time span, rather than trying to determine abstractly what the price for each commodity should be.

    The General Maximum Price Regulation was more effective than selective controls, but it failed to stabilize the overall price level sustainably due to a lack of control over wages and agricultural prices. Following Henderson’s initiative, Roosevelt finally received backing from Congress for legislation to freeze farm prices at parity and to stabilize wages in September of 1942.42 This proved unpopular with both workers and farmers and caused severe losses for the Democrats in the congressional elections. As a consequence, Henderson had to resign.

    On April 8, 1943, President Roosevelt issued an executive order to force an effective general price freeze. This policy was called “hold the line” and enforced that no “further increases in prices affecting the cost of living or further increases in general wage or salary rates” would be tolerated, “except where clearly necessary to correct substandard living conditions.”43

    With the executive backing of the “hold-the-line” order—and thanks to a great popularization effort to make dollar-and-cents price ceilings known to sellers and consumers—the OPA, under Henderson’s successors Prentiss Brown and Chester Bowles, managed to halt inflation. The Bureau of Labor Statistics cost-of-living index increased by less than 2 percent annually between spring 1943 and April 1945, or one-sixth the rate of the preceding two years.

    The outcome of wartime controls

    The benchmark for the OPA’s work was to achieve greater price stability and higher output growth than occurred during the First World War. Year-by-year comparisons of the macroeconomic performance during the Second World War, relative to that of the First World War, were a common tool for evaluation as well as a public demonstration of the effectiveness of price controls.44 At the end of the Second World War, the Harvard economist Seymour Harris delivered a careful empirical analysis of the detailed workings of price controls in the United States; his analysis documents the superior stabilization record of the Second World War compared to the First. Harris suggests that “the most significant test of the success of any price-control program is its effects on production”45 (see Figure 2.2). As they were slowly put in place in the later years of the war effort, the partial price controls of the First World War did help to some extent to stop inflation from rising further. Nevertheless, the experience of the First World War was one of inflation under loose price controls, while production stagnated; in the Second World War, price controls became strict, price rises were low, while the increase in output was almost beyond imagination.

    In contrast with Keynes’s assumption of a fixed “size of the cake” during wartime, the gross national product (GNP) of the United States almost doubled from 1940 to 1944: private capital formation declined but was more than compensated for by a dramatic expansion of government expenditures and a slight increase in consumer expenditures (see Figure 2.3). Instead of crowding out private expenditures, government expansion drastically increased the size of the cake.

    The consumption of durable goods such as automobiles and furniture declined, but personal savings rates more than tripled during the Second World War, from the prewar level of about 6 percent of GNP to more than 20 percent in the years 1942–1944.46 Personal savings, which peaked at about USD 30 billion in 1945, were critical to closing the gap between the growth in purchasing power and that in the supply of consumption goods.47 In the context of an increasing supply of consumption goods, effective rationing of scarce goods, and extremely high saving rates, black markets were far less pervasive than many had predicted, and quality deterioration was limited to a few products.48 At the same time, corporate profits might not have skyrocketed in the ways some business leaders had hoped, and the profit rate fell during the war. But annual profits after taxes still more than doubled from 1939 to 1943, from USD 4 billion to USD 8.5 billion, as a result of the rapid expansion of GNP.49

    In 1952, Galbraith wrote A Theory of Price Control to summarize his reflections on the workings of price controls. He argued:

    [T]he strategy of control must involve a two-way move. Along with the controls over the growth of income from the side of taxation and savings there must be direct market controls. On this side the role of price control per se … is strategic. No more than the economist ever supposed will it stop inflation. But it both establishes the base and gains the time for the measures that do.50

    The postwar transition

    On August 15th, 1945, Japan announced its surrender; on the 18th, the new President of the United States, Harry S. Truman, issued an executive order “for the orderly Modification of Wartime Controls.”51 It stated the aim to “move as rapidly as possible without endangering the stability of the economy toward the removal of price, wage, production and other controls and toward the restoration of collective bargaining and the free market.”

    As part of this order, direct wage controls were removed.52 Truman was aware of the dangers of a hasty price increase as a result of decontrol, but he hoped to be able to relieve the direct controls and “hold the line” through voluntary cooperation with business leaders and trade unions.53 When this failed, he changed course and sought support to resist a policy of decontrol.

    Truman was not alone. In 1946, fifty-four economists published a letter in the New York Times urging the extension of the Price Control Act for another year. Opinion polls showed that the general public also strongly supported the plan of keeping price and wage controls.54 But the eventual bill extending price controls contained so many “crippling amendments” that on June 29, Truman vetoed it, hoping for a bill that would keep price and wage controls in place more effectively. Such a bill never passed. As a result, wartime price controls reached an abrupt end.

    The sudden end to almost all price controls did, in fact, cause the inflationary rise in prices that the President, the OPA staff, and the economists’ letter had predicted (see Figure 2.1). Some key input commodities, such as steel scrap, copper, tin, and rubber, still had their prices set by the government, and rents, sugar, and rice remained controlled. But, irrespective of this, prices rose rapidly after the failure to renew the Price Control Act.

    Michael Kalecki analyzed postwar inflationary tendencies in the United States and other countries on behalf of the United Nations Department of Economic Affairs. He showed that, in particular, the prices of essential raw materials and foodstuffs, for which the respective demand by producers and consumers is inelastic, shot up.55 Wage increases did not compensate for the increase in the cost of living, so price decontrols resulted in a decline in real incomes for workers. Real labor incomes had declined 8 percent by the first half of 1947, as compared with the first half of 1946.56 At the same time, profit margins increased, and there was a shift of gross private income from labor to capital. In the course of the war, labor had increased its share. By the first half of 1946, labor still accounted for 61.5 percent, which declined to 58.8 percent by the first half of 1947. Conversely, gross corporate profits rose from 11.6 to 14.0 percent.57 The rapid decrease in workers’ purchasing power, the redistribution of income, and the devaluation of their wartime savings unleashed the greatest wave of labor strikes of the US postwar decades.58 The immediate postwar years saw a short, inflationary boom paired with labor unrest, followed by a sharp downturn. The United States thus experienced the boom–bust cycle Truman had feared.59 Yet, a Great Depression was avoided, possibly thanks to spending on the Korean War.

    Pragmatic prices

    In “Reflections on the Invisible Hand,” mathematical economist Frank Hahn warned that both those believing in the omnipotent power of the visible and the invisible hand “take it for granted that somewhere there is a theory, that is a body of logically connected propositions based on postulates not wildly at variance with what is the case, which support their policies.”60 During the Second World War, even the self-declared pioneer of free enterprise, the United States, retreated to a pragmatic approach in its use of the visible hand and controlled, among other things, most prices and wages to finance the war while achieving low inflation. Not only had the ideal of the invisible hand been abandoned, but, more fundamentally, it had been acknowledged that there is no such “theory, that is a body of logically connected propositions” that serves to draw up a comprehensive blueprint for economic policy. Instead, one had to apply “the wishy-washy, step by step, case by case approach,” which Hahn recommended as “the only reasonable one in economic policy.” In our current moment of crisis and instability, we would do well to heed this advice.

  3. Odious Debts


    In the aftermath of its 2003 invasion of Iraq, the United States was eager to restructure the ailing country’s sovereign debt. International sanctions since the Gulf War meant that Iraq was economically isolated, yet the country had a large stock of unpaid debts issued to governments, financial institutions, and commercial trading partners that dated back to its weapons purchases during the 1980–1988 Iran-Iraq war.

    In the early 2000s, Iraq was the most indebted nation in the world, with a debt totaling 573 percent of GDP. This debt, and the war it financed, transformed the country from a prosperous nation into a war-torn, insolvent one in just a few years. Scandals surrounded Iraqi creditors, who were charged with everything from illicit money flows to covert weapons sales. In the long run, this debt was a significant obstacle to international trade. In the absence of restructuring, old creditors could attach commercial or sovereign assets to recoup their money, as they had in cases of earlier defaults.

    The Iraqi debt stock is a glaring example of “odious” or illegitimate debt—a notion that dates back to the 1898 peace negotiations of the Spanish-American War. Back then, the Americans claimed that neither they nor the Cubans were responsible for Cuba’s outstanding sovereign debt. Rather, they argued, since the debt had been forced on Cuba by Spanish colonial rulers, it should be understood as illegitimate. 

    Since then, there have been attempts to define an explicit legal doctrine of odious debt, but such a doctrine has rarely been adopted in international negotiations. Today, the notion of odious debt is best thought of as a principle rather than a defined set of international laws. It holds that debt issued without public consent and serving no public purpose at the time of issuance ought not to be inherited by successive governments. This, however, runs against key tenets of international law, which maintain that debt issuance depends on the apolitical durability of obligations.

    But even within the existing international framework, countries do not always pay back their debt. The number of sovereign-debt defaults and restructurings has been increasing alongside debt levels. Debt crises are still dealt with on an ad-hoc basis, and the international monetary system is unprepared for the cluster of economic and political crises that continue to unfold. In this context, the notion of odious debt gains renewed urgency—both in conceptualizing sovereign debt crises, and in combating their severe consequences. Iraq and Haiti present two key examples.

    Restructuring Iraqi debt

    Iraqi sovereign debt was issued to fight a geopolitical war that bore little relevance to the vast majority of Iraqis. By the time of the US invasion in 2003, there was a strong case not only to restructure the debt, but to simply write it off. This was the position of many grassroots groups who called for a debt jubilee. Surprisingly, it’s a proposal that even received support from the Department of Defense and the White House, institutions that rarely interfere in issues of sovereign-debt management. While the US government never put forward a coherent view, many US officials recognized that success in Iraq depended on its integration in international trade.

    Though they don’t follow a template, sovereign-debt restructurings always include distinct players: the debtor country’s government, normally represented by the finance minister or central bank governor, aims to streamline the process and maximize relief. On the creditor side is a patchwork of committees: Western countries negotiate collectively in the Paris Club, while commercial creditors tend to form working groups based on industry, geography or type of claim. Bankers and lawyers represent each group, and the IMF provides credibility (debt sustainability analyses and balance of payment data) as well as bridge financing. In the Iraqi case, the US Treasury was heavily involved in discussions thanks to the US government’s political engagement. 

    Despite the moral and economic justification for debt cancellation, the IMF, finance ministries around the world, lawyers, and bankers all had an incentive to resist the invocation of odious debt for two reasons. First, it would upset the normal order of sovereign-debt restructurings, potentially rendering other debt claims subject to dispute. Second, in the case of Iraq, political backing from the US meant that a normal debt restructuring could be done swiftly and powerfully. 

    The global financial community argued that a standard restructuring would be easier than developing a new doctrine. Because there was powerful political backing from the coalition governments to write off Iraqi debt, they turned out to be right—the subsequent restructuring was successful, and Iraqi debt was reduced by almost 90 percent in net present value terms. Most governments signed off, and commercial creditors accepted the deal. The agreement offered creditors 20 percent of their nominal claims plus interest. Because the debt was so old at the time of restructuring, interest had accumulated and creditors received high returns. Loans that had been written off were held on balance sheets at the same rate as non-performing loans.
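    How can a deal that pays 20 percent of nominal claims amount to a near-90 percent reduction in net present value? A rough sketch shows the mechanics. Only the 20 percent recovery and the roughly 90 percent NPV figure come from the deal described above; the discount rate, maturity, and coupon below are invented for illustration.

    ```python
    # Illustrative only: how a small nominal recovery, stretched out over time
    # at a below-market coupon, translates into a large NPV haircut.

    def npv(cashflows, rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

    face_value = 100.0
    recovery = 0.20        # 20 percent of nominal claims (from the text)
    discount_rate = 0.10   # assumed market discount rate for Iraqi risk
    maturity = 20          # assumed: new claims repaid in year 20
    coupon = 0.03          # assumed below-market coupon on the new claims

    new_claims = recovery * face_value
    flows = [coupon * new_claims] * maturity
    flows[-1] += new_claims  # principal repaid at maturity

    relief = 1 - npv(flows, discount_rate) / face_value
    print(f"NPV debt reduction: {relief:.0%} of original face value")  # ~92%
    ```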

    The standard restructuring allowed creditors to settle their debts and avoid the uncomfortable questions surrounding the debt’s issuance. Crucially, however, the Iraqi government was not at the table. The case demonstrates the complexities of officially recognizing a debt as illegitimate—even if odious debt is identified and agreed upon, the political will to write it off tends to be lacking.

    Haiti’s independence debt

    Between 1825 and 1950, Haiti paid 150 million francs to France. In the wake of the slave revolt that led to Haiti’s declaration of independence in 1804, the fledgling former colony was forced to pay an enormous “indemnity” fee to its former oppressor. It was also obliged to give preferential treatment to French exports. To repay its creditors, Haiti had to borrow extensively from French banks, ensuring its indebtedness for generations. War reparations are normally imposed by the victor, but in the case of Haiti, a different logic prevailed. Rather than the Haitian people or government demanding reparations from their former French masters, it was France that sought compensation for the loss of its former colony and slaves. According to an estimate by Kim Oosterlinck, Ugo Panizza, Mark Weidemaier, and Mitu Gulati, the indemnity payments were equivalent to 270 percent of Haitian GDP.

    Though Haiti had won its independence, it was quickly crushed by foreign debt. The exorbitant sums crippled the Haitian economy, hampering domestic and foreign investment, while cementing the status of a small elite. Oosterlinck and coauthors calculate that had the indemnity been invested with a real return of 1 percent since 1825, it would be equivalent to 22 percent of Haitian GDP today. 
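    The compounding behind such counterfactuals is simple to check. A sketch, with the 1 percent real return taken from the estimate above and everything else purely illustrative:

    ```python
    # Compounding sketch for the counterfactual cited above. The 1 percent
    # real return is from Oosterlinck and coauthors; the horizon is approximate.

    real_return = 0.01
    years = 2022 - 1825  # roughly two centuries

    growth_factor = (1 + real_return) ** years
    print(f"One franc paid out in 1825 forgoes {growth_factor:.1f} francs today")
    # ~7.1x: even at a very conservative return, the indemnity compounds into
    # a multiple of the original transfer, which is how a 19th-century payment
    # can still amount to a meaningful share of today's GDP.
    ```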

    Today, Haiti is one of the poorest nations in the world, and calls for reparations are growing. Before the 2004 overthrow of President Aristide, Haiti prepared a lawsuit against France for restitution of the debt. The claim was and is substantial; the indemnity and associated loans were not made for the benefit of the Haitian people, but for that of Parisian bankers, French exporters, and (later) New York banks. Haitian elites gained recognition of their property rights, but the vast majority of Haitians suffered. 

    The cases of Iraq and Haiti illuminate the persisting importance of odious debt, from colonization to modern-day warfare. In Iraq, creditors lent money for weapons purchases, financing a war fought on someone else's behalf. In Haiti, French gunboats were stationed outside the capital until the indemnity agreement was signed, ensuring deep indebtedness for generations. In both cases, coercion and force were central to the imposition of odious debt burdens. As with other cases, a small cadre of elites benefitted from the loans, but the funds never reached the broader public.

    What comes next?

    The issue of odious debt puts forward a fundamental question: how do we design a system in which corrupt loans can be written off without destroying the foundations of sovereign-debt lending altogether? It's been almost twenty years since the Iraqi sovereign-debt restructuring, but no usable doctrine has been put forward. Similarly, more than 200 years after the imposition of its independence debt, Haiti has halted efforts to bring a claim against France. If these two egregious cases of odious debt cannot be used as the basis for reform, there seems to be little hope of tackling the problem of illegitimate debt.

    Central to devising an odious debt doctrine is ensuring the ability to distinguish between illegitimate and legitimate debts. Not all loans should be at risk of being called illegitimate—debtors' ability to tear up loan agreements without proper justification would pose a problem for the sovereign debt market overall. Bond contracts are written with cross-default provisions on a country's “external indebtedness,” often governed by English or New York law. If a country chooses to default on one creditor, it almost always results in an automatic default on all other creditors as well. This means that if a country borrowed money from a bank to build torture facilities for dissidents of its previous regime, but also borrowed money from a pension fund to build windmills, it is difficult to keep paying the pension fund while repudiating only the bank's loan. Reform is clearly needed in this regard. Recognizing odious debts would mean that a court could designate the loan to build torture facilities as illegitimate, thereby allowing the country to renege on that specific loan and remain in good legal standing without any second-order contractual problems.
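
    To make the cross-default mechanics concrete, here is a toy model with invented loan purposes. The odious-debt carve-out in the function models the proposed reform described above, not current law.

    ```python
    # Toy model of cross-default cascades. Loan purposes are invented; the
    # odious-debt carve-out sketches the proposed reform, not current law.

    from dataclasses import dataclass

    @dataclass
    class Loan:
        purpose: str
        odious: bool = False      # would require a court designation
        in_default: bool = False

    def renege(portfolio, target):
        """Stop paying one loan; cross-default pulls in every other loan
        unless the target has been designated odious."""
        target.in_default = True
        if not target.odious:
            for loan in portfolio:
                loan.in_default = True

    bank = Loan("torture facilities")
    pension = Loan("windmills")
    renege([bank, pension], bank)
    print(pension.in_default)     # True under current contracts;
                                  # False if the bank loan were odious
    ```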

    But even this reform faces major political obstacles. Powerful agents in the global economy have no interest in developing a working doctrine of odious debt; many investors and banks have given loans to corrupt regimes and despots, and creditors like to be repaid. A norm that relies on the Paulie Rule of repayment is obviously a norm that creditors want to maintain. So far, very few sovereign-debt restructurings have involved any discussion of the moral dimension of paying creditors. But as the global debt crisis worsens in the coming years, such considerations are likely to become indispensable. 

  4. Development Engines

    Comments Off on Development Engines

    In December 2021, President Joe Biden announced a proposed consumer tax incentive for electric vehicles (EV) made in the US by unionized autoworkers.1 The tax incentive promises to support the transition to “green technologies,” curtail dependence on fossil fuels, and “decarbonize” the economy, while strengthening collective bargaining after decades of state-led efforts to weaken unions.

    Unsurprisingly, the proposal encountered opposition. In the US, Honda, Kia, Nissan, Hyundai, and Tesla opposed the bill on the basis of its union-made incentive; it only benefits Ford, General Motors, and Chrysler—Detroit's Big Three, whose workforces are famously unionized. If the concern is climate change, business analysts asked, why not expand the credit to companies that are already manufacturing electric cars? The incentive also sparked accusations of protectionism. The Mexican government argued that the tax incentive contravenes the USMCA trade deal “by granting undue advantage to US-built vehicles.” Since the EV tax credit is a type of subsidy, it arguably runs afoul of World Trade Organization rules as well. The European Commission also opposed the incentive, arguing that it discriminates against EU car and car-component manufacturers, which employ 420,000 workers within the US.

    The row over the EV tax incentive exposed the tensions between domestic policies and free trade rules in a highly integrated regional economy. Policies to secure jobs for a particular workforce potentially make jobs even more uncertain for other workforces employed in the North American automotive sector. This includes Mexico, where the automotive sector represents nearly 20 percent of manufacturing GDP.

    Prior to the North American Free Trade Agreement (NAFTA), Mexico's automotive sector was directed by domestic policy. National and local policies aimed to build a capital- and labor-intensive manufacturing sector, thus serving as a source of job creation. This changed in the 1990s. With the formation of the North American geo-economic region in 1994, car production came to implicate foreign policy, trade agreement rules, transnational corporations, and international unions. Today, car production is far more complex than simply setting up a factory in a particular locale. Tracing Mexico's automotive industry over six decades illuminates how the North American automotive sector of the present combines local, domestic, and global interests, each carrying distinct consequences for labor, industrial policy, and now, climate strategy.

    Building Mexico’s automotive sector 

    In a bid to spur industrial development, in 1962 the Mexican government decreed that 60 percent of each car sold in Mexico must be manufactured domestically.2 This required multinational automobile corporations to partner with local businesses to establish car factories, strengthening Mexico's oligarchy while developing the domestic car industry.3 A national automobile industry was integral to Mexican industrialization, as car production was closely linked to the iron, steel, rubber, plastic, glass, and oil industries.4

    Domestic policies gave preference to industry over small-scale agricultural production. State-led industrialization via the economic model of import-substitution industrialization (ISI) benefited car corporations by way of protectionist policies, tax concessions, and infrastructure (such as highways). The government also gave manufacturers cheap access to land expropriated from peasants, as it considered factories to be of public interest. Volkswagen Mexico, located in Puebla, 81 miles south of Mexico City, benefited from three such land expropriations.5 The state also played a major role in directing production: in 1969, for example, the government required car companies to increase their exports to balance out the imported content of their vehicles.6

    The automotive industry propelled the rise of a waged, skilled, and mostly male working class in Mexico. The industry's development in the 1960s coincided with the rise of state-provided social services and infrastructure—public healthcare, credit for buying houses, and subsidized food stores—which were geared toward autoworkers and their dependents. Whereas public education is a right based on citizenship, healthcare and credit for housing are only accessible to those formally employed in the public or private sectors.7 This system of public services compensated for low wages—workers regularly worked double or triple shifts in order to meet the cost of living—and still enabled autoworkers' social mobility. The social safety net had a clear preference for industrial workers over other sections of the working class, such as peasants. Under ISI, corporations, state-owned companies, and a mostly male industrial workforce in the oil, automotive, and electric sectors benefited from national policy.

    Labor had little influence over industrial policy. Since the 1930s, state-allied unions have dominated Mexico's labor movement, and when it came to the automotive industry, they endorsed contracts that primarily served company interests and suppressed whatever labor unrest resulted. Independent unionism emerged in the automotive sector in the 1970s, encouraging internal union democracy and union control over the labor process at the factory level, but these unions were excluded from the policy debates that concerned the automotive sector.8 Nevertheless, organized labor won victories under pro-labor governments.9

    In the 1970s, Mexico shifted from ISI to an export-oriented industrialization (EOI) strategy. Factories owned by transnational corporations, known as maquiladoras, still benefited from the ISI practice of providing facilities to companies moving to Mexico. What changed, however, was that at maquiladoras, materials brought from the US were assembled in Mexico and then exported back to the US for sale.10 The rise of EOI was simultaneous with austerity measures imposed by the IMF in the aftermath of Mexico's debt crisis in the 1980s. As a result, in the 1980s and 1990s, Mexico cut state subsidies, privatizing state enterprises, education, and healthcare. Domestic policy moved towards the creation of low-paying jobs contingent on market conditions, while simultaneously undoing the system of social services that defined the ISI era.

    The NAFTA shift

    Although the development of an export-oriented strategy de facto ended ISI through the 1970s, it was only formally terminated by decree in 1989, in order to create a legal structure for NAFTA at the national level. At the time of signing in 1992, NAFTA created the largest free trade zone in the world, part of an emergent multilateral trade order in which free trade agreements effectively wrote the “constitution of a single global economy.” By this point, the maquiladora model dominated the auto industry.11 NAFTA and the introduction of just-in-time production in the automotive sector lowered costs by increasing flexible specialization. The industry was thus fragmented into three sharply differentiated processes: auto-parts manufacturing, distribution, and car assembling.12

    So-called just-in-time production drastically restructured production and labor. Car factories continued to do assembly but otherwise outsourced much of the production process to hundreds of companies that handled auto-parts manufacturing, distribution, and tasks only indirectly connected to car production, such as janitorial and cafeteria services, and the preparation of cars for shipment. Today, the segments in the production chain that employ the largest workforces are manufacturing and distribution. The production practice has opened the door to increased labor flexibilization and precaritization: during the past ten years, industrial robots—requiring no human operator—have slowly displaced autoworkers. According to a 2018 report by Stephen Woodman for the Center for International Governance Innovation, “In 2011, there were 83 Mexican autoworkers to every one robot.… By 2015, the ratio had dropped to 19 to 1.”

    With the implementation of just-in-time production, independent unions lost control over the production process, especially in their ability to influence production rates, promotion procedures, and employment security. At the national level, independent unions and their state-allied counterparts lost their already constrained role in negotiating agreements,13 though independent unions were more likely to strike. 

    Nevertheless, Mexico's automotive sector is often considered a major success of NAFTA. The automotive sector has indeed generated employment and increased the labor participation of a younger generation, absorbing it into the ranks of the formal economy. The post-NAFTA era has led to new employment opportunities in engineering, administration, and management. In areas with a high concentration of transnational corporations, local policies attempt to prepare aspiring workers for the standardized test required for employment. Local governments also pay a small bonus to companies to send Mexican workers abroad for training. High-paying jobs, however, are not created at the same pace, and the majority of the jobs created by the automotive sector remain low-paying and precarious.

    ISI laid the foundation for a development model dependent on private corporations for wage labor and the state for land expropriation, and NAFTA consolidated this model. Under NAFTA, the Mexican government continued granting tax exemptions and incentives to entice automotive investments.14 Federal and state governments have given millions of dollars to Toyota, Kia, Mazda, Honda, Volkswagen, Audi and Pirelli, a multinational tire manufacturer, according to a 2016 report prepared by the Automotive Policy Research Center. Kia and Audi also received land donations from the government—533 hectares and 460 hectares respectively. In Nuevo Leon, the government allocated Kia US$115 million in direct incentives, a 20-year waiver for a payroll tax, and US$197 million in infrastructure spending to support the installation of the plant. The land for Audi’s factory, an industrial park that houses its suppliers, and a new city next to the factory, was expropriated from peasants. Along with land, car factories receive unprecedented amounts of water, diverted from peasants to Mexico’s car economy.  

    At the national level, corporations remain key players in socioeconomic development. They are the main providers of wage labor, and therefore of access to healthcare and housing. Corporations had similar dominance over wage labor under ISI, but during this era the Mexican state had a greater role in delineating industrial policy. At a global scale, corporations exercise a power granted by the laws structuring and regulating free trade. NAFTA was the first trade agreement to establish an investor-state dispute settlement (ISDS) mechanism, allowing investors to sue a country for “breaching obligations” and causing damages to investors as a result.15 Under NAFTA, the government's role was in large part to promote investment while keeping labor in check.16

    Labor under the United States-Mexico-Canada Agreement (USMCA)

    In 2020, NAFTA was replaced by USMCA. Unlike its predecessor, USMCA addresses labor issues directly, with broad effects for labor and organizing in Mexico. Chapter 23 of the agreement includes a mechanism to enforce workers' individual and collective rights, which has already had positive effects for workers. In 2021, the Mexican government was asked to review potential labor rights violations in a GM factory in Silao and an auto-parts factory in the border city of Matamoros. In February 2022, GM workers elected a new independent union in northern Mexico. UAW President Ray Curry voiced his support for the victory, along with Unifor, Canada's largest private-sector union.17 USMCA also includes a clause to improve wages, requiring that “40–45 percent of auto content be made by workers earning at least $16 per hour.” While promising in theory, it is not clear how the requirement will work in practice for those on the shopfloor. So far, Jesús Seade Kuri, Mexico's chief negotiator on the USMCA, has declared that the $16-per-hour requirement is being met through the salaries of engineers and administrative staff.18

    USMCA preserves the integration of the automotive sector across Mexico, the US, and Canada that was emblematic of the NAFTA era—a system where, according to a 2021 report from the US Congressional Research Service, “hundreds of suppliers make parts that cross borders seven or eight times before being assembled into a finished car.” Previously, NAFTA required that 62.5 percent of a vehicle's net cost and 60 percent of the cost of its parts originate in the North American geo-economic region to obtain free-trade benefits. USMCA increases the regional content value to a range of 70 to 75 percent, depending on the vehicle, and divides that content requirement into three groups: core parts, main parts, and complementary parts. Through changing the content values, USMCA extends further into the organization of car production and makes the three countries' automotive sectors mutually dependent.
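
    The content rule amounts to a ratio test. Below is a minimal sketch assuming a net-cost method, where regional value content (RVC) is the share of net cost not attributable to non-originating materials; the 62.5, 70, and 75 percent thresholds come from the text, while the per-vehicle cost figures are invented.

    ```python
    # Regional value content (RVC) under a net-cost method:
    # RVC = (net cost - non-originating materials) / net cost.
    # Thresholds are from the text; the cost figures are hypothetical.

    def rvc_net_cost(net_cost, non_regional_value):
        return (net_cost - non_regional_value) / net_cost

    net_cost = 30_000       # hypothetical per-vehicle net cost (USD)
    non_regional = 8_000    # hypothetical non-North-American content

    rvc = rvc_net_cost(net_cost, non_regional)
    print(f"RVC: {rvc:.1%}")                        # 73.3%
    print("meets USMCA 70% floor:", rvc >= 0.70)    # True
    print("met NAFTA 62.5% rule:", rvc >= 0.625)    # True
    ```

    A vehicle that cleared the old 62.5 percent bar could thus fail the new one, which is how the higher thresholds pull more production into the region.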

    NAFTA, and now USMCA, have transformed the automotive sector into a significant engine of jobs and economic growth in the three signing countries. By 2020, the sector represented 3 percent of US GDP and employed an average of 4.1 million people. In Mexico, it represented US$78 billion in annual revenues and employed over 1 million people across the country, making it the country's largest manufacturing sector, led by original equipment manufacturers (OEMs). In Canada, the sector contributed CAN$12.5 billion to GDP and employed 117,200 people directly, as well as 371,400 people indirectly. The three countries are among the largest automotive producers worldwide: the United States is the second largest global car producer and the second largest auto-part manufacturer and exporter; Mexico is the sixth and the fifth respectively; and Canada is among the world's top twelve producers.

    The era of electric cars

    Today, emissions-free cars have opened up a new chapter in the North American automotive sector, but competition around clean energy has challenged the era of free trade. With electric cars seen as a path to curtail fossil-fuel dependence, car manufacturers located in Mexico have pledged to phase out internal-combustion engines and hybrid cars over the next two decades. These promises have led to major investments: Ford invested $420 million in a factory in Cuautitlán which currently assembles electric cars; in April 2022, GM announced a US$1 billion investment to upgrade its factory in Ramos Arizpe, Coahuila, to begin manufacturing all-electric vehicles in 2023; in March 2022, the Volkswagen group announced a US$7.1 billion investment to produce battery-electric cars throughout North America. GM also announced a $9 billion investment in its US factories to manufacture electric vehicles or battery cells.19 Volkswagen will upgrade factories on both sides of the border. With these simultaneous investments, corporations seem committed to distributing car production throughout the North American corridor rather than moving production entirely to a single locale. Auto-parts manufacturing in Mexico has also seen a bump in production; in 2019, it produced over US$8 billion worth of electrical automotive parts, an 8.3 percent increase from the previous year. The investments have extended down the supply chain. The Chinese company Ganfeng Lithium, a supplier for Tesla, announced the construction of a lithium-ion battery recycling factory in Sonora. The Chinese corporation Contemporary Amperex Technology Co. (CATL)—the world's largest producer of EV batteries—is considering investing US$5 billion in Mexico, Canada, or the US. The construction of battery production and recycling factories is key for complying with USMCA content requirements, given that, so far, most batteries are manufactured in Asia.

    For Mexico, the shift to emissions-free cars means an opportunity to regain some control over natural resources that fell out of public hands during the twenty-five years of NAFTA. In March 2022, President López Obrador nationalized lithium—required to produce EV batteries—declaring it a “strategic mineral” for Mexico. By nationalizing lithium, the Mexican government aims to build a strong and affordable public energy sector. The announcement was controversial. Kenneth Smith Ramos, who headed the technical negotiations to create USMCA, argued that the bill contravenes USMCA.20 Katherine Tai, the US Trade Representative, stated that Mexico's legislation in relation to lithium is “anticompetitive and counter to USMCA protections and provisions” and stymies climate responses by preventing the three signing countries from working together to develop clean energy.

    Biden’s EV tax incentive and López Obrador’s lithium nationalization are evidence of nascent competition between the US and Mexico around billions in potential green investments in the automotive sector. In an era of free trade, however, this competition not only involves granting tax concessions, infrastructure, and the availability of a skilled workforce, it also complicates trade agreement rules. What’s revealed is an uneven field of power relations, shaped by the history of foreign relations and corporate power in each country. 

    Efforts to “re-nationalize” elements of car production demonstrate how the automotive sector negotiates local, national, and global interests—policies aiming to support labor in one country implicate labor and corporate interests in another. Still, there are continuities between the origins of Mexico’s automotive sector and its current state. The Mexican government’s manufacturing-centric growth model, reliance on corporate investment, and divestment from social services have enabled the precaritization increasingly prevalent in the sector today.

    At the same time, a strengthened labor force—as evidenced by recent victories of independent unions—may suggest a potential new direction. USMCA’s chapter 23, along with new labor reforms in Mexico requiring direct union elections and public contracts, have allowed Mexican workers to challenge corrupt unions. Although it remains to be seen how different national interests can be reconciled, shifts in labor legislation across levels of governance are hopeful signs of emerging possibilities in labor unionism.

  5. A New Labor Regime

    Comments Off on A New Labor Regime

    Since coming to power in 2014, India’s right-wing government led by Prime Minister Narendra Modi has introduced sweeping reforms aimed at strengthening the union government at the expense of the states, and catering to large corporations over smaller establishments and workers. 

    The 2017 Goods and Services Tax (GST) and the 2020 Farm Laws represent two distinct ways in which these reforms have been contested. The former, which centralized India's indirect tax regime, was initially met with stiff opposition from the states. Over time, however, this opposition ebbed due to the difficulty of unifying against the government-controlled GST council. By contrast, farmer-led opposition to the agrarian reforms, which would have likely weakened farmers' bargaining power and left them at the mercy of corporations, was so vociferous that even the proud Modi government was forced to repeal the laws.

    Which of these paths will characterize the fate of the recently passed labor reforms is yet to be determined. Like the tax and agrarian legislation, the four new Labour Codes—the Labour Code on Wages of 2019, the Labour Code on Industrial Relations of 2020, the Labour Code on Social Security of 2020, and the Occupational Safety, Health and Working Conditions Code of 2020—strip workers of bargaining power through over-centralization. Their passage, which consolidated 44 different pieces of labor legislation, came without any meaningful debate or discussion.

    Together, the reforms illustrate the Modi government’s attempt to weaken existing structures of formal employment and loosen regulations, with the ultimate effect of shifting power to large employers. Despite their enormous significance, opposition to the reforms has thus far remained lukewarm. In what follows, I take the construction industry as one avenue for studying the context and consequences of the Labour Codes. Reflecting on the experience of class among India’s building workers, I then consider the likelihood of successful resistance.

    Background

    After agriculture and domestic work, the construction industry employs the largest proportion of India's workforce: 10 percent of all registrations in the union government's social scheme portal declared their occupation as construction in March of this year.1 Of those workers, 75 percent are male, approximately 90 percent earn less than Rs. 10,000 (roughly $132 USD) per month, and nearly 77 percent come from lower-caste or tribal backgrounds. These features have characterized India's construction industry for decades.

    Labor laws governing the industry to date have been forced to adapt to effective labor organizing. Over the course of decades, building workers' unions agitated tirelessly against the Contract Labour (Regulation and Abolition) Act of 1970 and the Inter-State Migrant Workers Act of 1979, arguing that the measures failed to provide an adequate social security net, ensure safe working conditions, and regulate wages and hours. These efforts came to fruition in 1996, with the passage of the Building and Other Construction Workers Act (BOCW).

    The BOCW Act had two key aims: firstly, to regulate the employment conditions of building workers, and secondly, to construct a system of social security benefits specific to their needs. This system was to be administered by the state-run Building and Other Construction Workers Boards (BOCW Boards). While its practical implementation was far from perfect, the BOCW Act has nevertheless empowered vulnerable workers to make demands on their employers, and equipped them with pension schemes, maternity grants, marriage grants, education scholarships, and other essential benefits.

    The Act does this by implementing a 1 percent fee on the total project costs of all construction sites, making it one of the rare labor-welfare laws with a built-in financing mechanism. While the new codes retain this mechanism, they modify nearly everything else—welfare benefits, employer obligations, working conditions, and the very definition of a building worker itself.
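
    The financing mechanism itself is straightforward arithmetic, sketched below with a hypothetical project figure.

    ```python
    # The BOCW financing mechanism described above: a 1 percent fee on
    # total project cost, earmarked for the welfare boards. The project
    # figure is hypothetical.

    FEE_RATE = 0.01

    def bocw_fee(total_project_cost_rs):
        return total_project_cost_rs * FEE_RATE

    # A hypothetical Rs. 50-crore (Rs. 500 million) construction project:
    print(f"Rs. {bocw_fee(500_000_000):,.0f}")  # Rs. 5,000,000 to the board
    ```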

    Centralizing the welfare regime

    Echoing the Modi administration's reforms in other arenas, the new codes increase the role of the central government in the administration of welfare schemes. Hitherto, local BOCW Boards designed welfare schemes alongside state governments, ensuring that schemes reflected the needs of particular jurisdictions. For instance, while the southern union territory of Puducherry passed a scheme for minimum basic pay due to loss of work during the monsoon season, the Delhi government passed commensurate legislation for loss of work due to poor air quality. The states are also able to adjust benefits to local economies—while Delhi's pension scheme amounts to Rs. 36,300 per year, Bihar's is just Rs. 12,000 annually.

    Now, under the Code on Social Security 2020 (SS Code), welfare schemes can only be designed by or with the central government. Beyond the political attack on state governments, the reforms neglect financial disparities between states and the specific needs of local populations. The SS Code also broadens the eligibility criteria for BOCW benefits to include government employees, and enables funds to be diverted into holding all forms of property. In these ways, it weakens the utility of the BOCW for construction workers.2

    The subtler impacts of the reforms emerge from their emphasis on digitization. Usurping the power of states to register workers with the BOCW boards, the central government has replaced statutory identity cards with digital ones, and digitized the registration process. Under the BOCW Act, statutory cards empowered workers who may have lacked language and technical skills to assert their rights and receive benefits. Additionally, local registration ensured that claims could be made by vulnerable workers. The digital requirements for cards and registration ignore enormous disparities in digital infrastructure between states and in digital skills between workers. Construction workers rarely have the means to pay for a phone, and regular access to the internet is well out of reach. In Delhi, the digital registration process reportedly took workers more than an hour and a half to complete.

    Uneven and arbitrary categories

    The government claimed that consolidating 44 labor laws into four new codes would introduce uniformity and simplicity. But as demonstrated, the codes have obfuscated more than they have clarified. Key to this is the disparity between the definition of a building worker in the employment regulations and that referenced in the welfare regime. In the former, the category of building worker does not depend on the size of the construction site. But in the latter, only a worker employed at a site with more than ten workers is eligible for benefits, with the rest considered “unorganized workers.” This definition is especially problematic given the “floating” nature of the construction industry—workers move not just across project sites, but from one contractor to another.

    The codes outline multiple worker categories, each entitled to a different set of rights and protections. A person who qualifies as an “employee” has the widest variety of rights and protections, a “worker” has fewer, and a “contract laborer” fewer still. Even these definitions of “employee,” “worker,” and “contract labor” change across the codes. Under the SS Code, a worker employed through a contractor would fall under the definition of “employee,” but this same worker would not qualify as an employee under the remaining three codes.

    In fragmenting the legal category of the worker, the new codes weaken protections and limit access to welfare benefits. Workers struggle to understand which rights, duties, and protections cover them, and are consequently prevented from making demands. At the same time, the codes are rather clear for employers. As a result, we can anticipate a string of arduous court battles, each with multiple rounds of litigation, to clarify the provisions, categorizations, and definitions of a worker under the new regime. These definitions become even more muddled when considering the nature of subcontracting in the industry.

    Contractualization

    The problem of categorization is especially acute in the construction industry, where the majority of the work is distributed from the main contractor to subcontractors, who contract to petty-contractors, who contract to pettier-contractors still, and who, finally, contract to the munshis or the head workers. Even munshis occasionally rely on jamadars to supply unskilled labor. In almost all cases, the munshi and the jamadar belong to the same socio-economically vulnerable background as the laborers. Under the SS Code, the munshi and the team would qualify as “employees” of the main contractor, but for the purposes of the remaining three codes, the team may equally be employees of the munshi or jamadar only. Who, then, is responsible for worker protections? Though the BOCW Act was equally ambiguous, it assigned co-responsibility to contractors and establishment owners to cater to the needs of workers. By eliminating the responsibility of the latter, the Labour Codes place responsibility on the poorer elements of the hierarchy and relieve the larger players.3

    While the government claims to limit contractualization, the OSH Code effectively promotes it—the principal employer need only show that the activities performed by contract labor are part of its normal functioning. The new codes allow employers to classify even those working continuous, regular hours as contract labor, stripping them of the rights and protections of permanent workers. The increased reliance on contractualization—largely cheaper and thus advantageous for employers—furthers the divide between permanent and contract laborers, a dynamic that has plagued India's trade unions over the past three decades.

    A receding regulatory and supervisory state

    In addition to digitizing registration, blurring worker categorization, and encouraging contractualization, the new codes reduce government capacity for regulation and enforcement. While the old regime was not known for its strict enforcement of regulations at construction sites, its powers to inspect and enforce compliance remained intact. Under the new regime, compliance enforcement can only be undertaken with prior notice given to the employer, and the inspector's powers to seek attendance and gather evidence regarding non-compliance are severely reduced. Physical inspections will largely be replaced with virtual ones, limited to scrutinizing company-filed documents.4

    The power of states to prescribe local safety conditions has been exchanged for standards set by the central government, though no record of these standards exists to date. Similarly, labor unions are no longer entitled to report violations—this power is reserved for government-commissioned labor inspectors.

    The new inspection regime serves the interests of the bigger players in the industry—main contractors and subcontractors. With a digitized enforcement system, even the pettiest contractors will struggle to comply given technology limitations, while the largest (and richest) employers will be able to get away with blatant violations. As a result, workers will be forced to confront an increasingly unregulated and dangerous industry.

    The future of sectoral demands

    The new Labour Codes illustrate the dangers of consolidation: under the guise of integration and inclusion, the reforms bulldoze decades of labor victories and replace them with minimal rights and protections. Why, then, have the codes seen so little resistance?

    India's largest central unions have voiced their total rejection of the reforms and advocated for their complete repeal. In doing so, however, they ignore the woeful inadequacy of existing regulations. Some, for example, have commented on the new codes' improvements for gig workers, who are now recognized by the laws for the first time (though many argue the benefits for gig workers have been greatly misrepresented5). Moreover, the divided and varied nature of Indian labor markets makes it difficult to formulate a representative set of demands. While building workers may advocate for the entire hierarchy of employers—from principal employers to petty contractors—to be equally responsible for safe working conditions, gig workers may prefer to be employed directly by the platform, whether Uber, Zomato, or another web-based application. Sales workers and those who work on a “piece-rate” basis are fighting to expand the definition of wages to include commissions, granting them access to the rights, entitlements, and protections in current legislation.

    In a nation with high rates of fluctuating employment and informal work, making demands specific to each sector and the unique characteristics of its labor-market structure may be the ideal strategy. The BOCW Act of 1996—where building workers organized around the specific floating nature of their industry to gain access to formal benefits and rights—is a case in point. But even in this single industry, the reforms threaten the ability to organize. By fragmenting workers into various categories of “employee,” “worker,” and “contract laborer,” and dividing workers' rights and protections based on these inconsistent groupings, even those working on a single construction site will now face increased difficulties in building and sustaining power. The challenge for the labor movement, then, is to confront the codes' distinct impacts on each industry and segment of the labor market. So far, the wider coalition led by central unions—notably lacking sectoral demands—has been unable to mount a meaningful opposition. But by recognizing and articulating these divisions, a common path can be forged.

  6. Geographies in Transition

    Comments Off on Geographies in Transition

    Though it failed to resolve a number of contentious issues, the COP26 meeting in Glasgow solidified a consensus around the need for a global transition to clean energy. Implicated in this transition is the wide-scale adoption of renewables: we must build larger wind turbines, produce more electric vehicles, and phase down coal plants while electrifying rapidly growing cities. Climate negotiations often refer to the “common but differentiated responsibility” that countries bear in promoting this transformation. But in reality, its protagonists are European governments and high-tech manufacturing companies involved in the production of renewable goods. And their policies have a cost—if the world meets the targets of the Paris Agreement, demand is likely to increase by 40 percent for copper and rare earth elements (REEs), 60–70 percent for cobalt and nickel, and almost 90 percent for lithium over the next two decades.

    The EU's proposed Green Deal secures critical minerals through open international markets, necessitating mineral extraction at a faster and more intense pace. But if it is to mitigate or overturn historical imbalances between North and South, the clean-energy transition cannot reproduce the same extractive relations underpinning industrial production. In what follows, I examine the green transition both as an opportunity and a challenge for resource-rich countries in the Global South. Importantly, I argue that we need to look beyond traditional growth-oriented industrial policies and the successful “catch up” of East Asian economies to develop inclusive and sustainable green development.

    New Geographies of Extraction

    The clean-energy transition is a historically significant moment for resource producers. It involves the intensive and extensive extraction of “rare metals”—metal minerals produced in low quantities and utilized as intermediate inputs in the manufacturing of digital, renewable, and energy technologies. These have also been identified by the US and EU as “critical raw materials” (CRMs), which are both strategically important for industrial competitiveness and at high risk of supply disruption due to changes in world markets. The US lists thirty-five critical minerals in order of their importance for national security and wider supply-chain vulnerabilities. EU Commission reports identify thirty critical raw materials, found in Table 1. In this ordering, critical minerals are grouped into three major categories: heavy rare earths (HREEs), light rare earths (LREEs), and platinum group metals (PGMs). The list also includes various ferrous and non-ferrous metals.

    Table 1: Current Global Share of Production and Processing of Critical Minerals


    Material                Stage   Main global supplier   Share
    Antimony                E       China                  74%
    Baryte                  E       China                  38%
    Bauxite                 E       Australia              28%
    Beryllium               E       USA                    88%
    Bismuth                 P       China                  80%
    Borate                  E       Turkey                 42%
    Cerium                  E       China                  86%
    Cobalt                  E       Congo, DR              59%
    Coking coal             E       China                  55%
    Dysprosium              E       China                  86%
    Erbium                  E       China                  86%
    Europium                E       China                  86%
    Fluorspar               E       China                  65%
    Gadolinium              E       China                  86%
    Gallium                 P       China                  80%
    Germanium               P       China                  80%
    Hafnium                 P       France                 49%
    Ho, Tm, Lu, Yb          E       China                  86%
    Indium                  P       China                  48%
    Iridium                 P       S. Africa              92%
    Lanthanum               E       China                  86%
    Lithium                 P       Chile                  44%
    Magnesium               P       China                  89%
    Natural graphite        E       China                  69%
    Natural rubber          E       Thailand               33%
    Neodymium               E       China                  86%
    Niobium                 P       Brazil                 92%
    Palladium               P       Russia                 40%
    Phosphate rock          E       China                  48%
    Phosphorus              P       China                  74%
    Platinum                P       S. Africa              71%
    Praseodymium            E       China                  86%
    Rhodium                 P       S. Africa              80%
    Ruthenium               P       S. Africa              93%
    Samarium                E       China                  86%
    Scandium                P       China                  66%
    Silicon metal           P       China                  66%
    Strontium               E       Spain                  31%
    Tantalum                E       Congo, DR              33%
    Terbium                 E       China                  86%
    Titanium                P       China                  45%
    Tungsten                P       China                  69%
    Vanadium                E       China                  39%
    Yttrium                 E       China                  86%

    Legend:
    Stage: E = Extraction stage, P = Processing stage
    HREEs: Dysprosium, erbium, europium, gadolinium, holmium, lutetium, terbium, thulium, ytterbium, yttrium
    LREEs: Cerium, lanthanum, neodymium, praseodymium, and samarium
    PGMs: Iridium, palladium, platinum, rhodium, ruthenium

    Among the minerals listed above, the most significant are rare earth elements (REEs). Table 2 summarizes these seventeen chemically similar metals and their applications across various industries. These metals share a strategic importance due to their numerous industrial applications, mostly as intermediate outputs like alloys and components which are then assembled into higher value-added industrial goods such as electric motors and drones. Rare metals are essential inputs for clean technologies, notably in the mass production of wind turbines, photovoltaic panels, as well as hybrid and electric vehicles.1

    Table 2: Rare Earths Elements and their Industrial Applications

    Name           Symbol   Atomic No.   Applications and products
    Scandium       Sc       21           Aerospace materials, consumer electronics, lasers, magnets, lighting, sporting goods
    Yttrium        Y        39           Ceramics, communications systems, lighting, frequency meters, fuel additives, jet engine turbines, televisions, microwave communications, satellites, vehicle oxygen sensors
    Lanthanum      La       57           Catalysts in petroleum refining, televisions, energy storage, fuel cells, night vision instruments, rechargeable batteries
    Cerium         Ce       58           Catalytic converters, catalysts in petroleum refining, glass, diesel fuel additives, polishing agents, pollution control systems
    Praseodymium   Pr       59           Aircraft engine alloys, airport signal lenses, catalysts, ceramics, coloring pigments, electric vehicles, fiber optic cables, lighter flints, magnets, wind turbines, photographic filters, welder's glasses
    Neodymium      Nd       60           Anti-lock brakes, air bags, anti-glare glass, cellphones, computers, electric vehicles, lasers, MRI machines, magnets, wind turbines
    Promethium     Pm       61           Beta sources for thickness gauges, lasers for submarines, nuclear-powered batteries
    Samarium       Sm       62           Aircraft electrical systems, electronic countermeasure equipment, electric vehicles, flight control surfaces, missile and radar systems, optical glass, permanent magnets, precision guided munitions, stealth technology, wind turbines
    Europium       Eu       63           CFLs, lasers, televisions, tag complexes for the medical field
    Gadolinium     Gd       64           Computer data technology, magneto-optic recording technology, microwave applications, MRI machines, power plant radiation leak detectors
    Terbium        Tb       65           CFLs, electric vehicles, fuel cells, televisions, optic data recording, permanent magnets, wind turbines
    Dysprosium     Dy       66           Electric vehicles, home electronics, lasers, permanent magnets, wind turbines
    Holmium        Ho       67           Microwave equipment, colored glass
    Erbium         Er       68           Colored glass, fiber optic data transmission, lasers
    Thulium        Tm       69           X-ray phosphors
    Ytterbium      Yb       70           Improving stainless steel properties, stress gauges
    Lutetium       Lu       71           Catalysts, positron emission tomography (PET) detectors

    Although mining has long been a staple input for industrialization, mineral extraction for renewables generates new patterns of trade in the energy sector. For example, resource-poor but industrialized East Asian countries like Japan, Korea, and Taiwan compete directly with European and American capital when it comes to renewables. Their car manufacturing, digital and ICT, and intermediate sectors like permanent magnets and semiconductors all rely on access to raw materials from China and the rest of the developing world. By contrast, EU and US governments have been principally concerned with the fact that non-democratic regimes like China and Russia hold such significant market power in controlling CRM reserves and production. As the clean-energy transition accelerates, East Asia, the EU, and the US are all repositioning themselves in a rapidly changing global value chain, deploying various diplomatic, trade, and industrial strategies to maintain, if not strengthen, their competitive advantage in a new global economy marked by competition for raw materials, technological innovation, and new manufacturing capabilities.

    The Logic of Europe’s CRM Strategy

    The EU views the challenge of accessing critical minerals through a market lens: resource-producing countries simply need to sell their commodities at a higher price on world markets in order to resolve the supply constraint. While this means that the bloc might pay higher prices during commodity swings, critical minerals will remain available in the global market. Such a perspective assumes that market players will separate their economic and political interests. This, indeed, is what has happened since the 1980s, when European governments managed to secure gas from Russia and oil from the Middle East.

    But as Figure 1 shows, the current energy transition involves more veto players—key among them China. As a result, the EU is forced to reassess its relationship with Africa and Latin America, where substantial reserves of base metals, lithium, cobalt, and platinum are located. Nickel—a major metal needed for the construction of bigger wind turbines, electric vehicles, and solar panels—is concentrated in the Philippines and Indonesia. Consequently, some strategic repositioning is required for the EU to secure future minerals for green technology.

    Of all the strategic considerations, a growing dependence on China remains the most important. As Table 1 and Figure 1 illustrate, China not only holds the most extensive rare earth reserves, it also exercises control over REE production and processing. China's share of global production of other key metals—graphite, magnesium, tungsten, vanadium—further reveals the EU's dependence. Moreover, recent waves of resource nationalization suggest that emerging markets are willing and able to bargain for better terms of trade and investment.

    The EU's CRM scarcity is an indicator of its potential vulnerability. This is compounded by the concentration of energy-transition minerals. For cobalt, lithium, and REEs, the top three producing countries control about three-quarters of current global output. The costs of extracting transition minerals have also increased in response to environmental pressures. For example, Chilean copper ore quality has declined by 30 percent over the past fifteen years. These trends place significant pressure on the EU's response. Its Strategic Foresight Report recognizes the need to address the redistribution of global power as the geo-economic center of gravity moves definitively eastwards. Consequently, the EU is charting its path to coping with increased competition for resources and rivalry for influence from emerging economies.

    Perhaps unsurprisingly, EU strategy thus appears to be a defense of the current world order. In the past, European powers defended international trade under the guise of comparative advantage. Agricultural and mineral production in developing countries was taken for granted; African and Latin American states were expected to sustain mineral production for the world economy while industrialized countries would sell their manufactured goods. The EIT Raw Materials Conference, held in Brussels in November 2021, reflects a similar free-trade logic. The Commission clearly articulated its intention to promote multilateralism, open access to world markets, and economic globalization. The panel on EU-Africa relations, for instance, emphasized the need to promote “European values” as a public good that African leaders might choose over China's debt diplomacy. The report cogently captures this:

    Openness, as well as rules-based international and multilateral cooperation, are strategic choices…The history of the European project demonstrates the benefits of well-managed interdependence and open strategic autonomy based on shared values, cohesion, strong multilateral governance and rules-based cooperation [emphasis added].

    But the benefits of the open-market strategy for the Global South are dubious at best. As dependency theorists have argued since the 1960s, trade with global hegemons has often denied countries in the Global South the policy space to transcend their role as primary-commodity producers. Accordingly, commodity exporters are likely to lose out to manufacturing-based economies as the terms of trade move against primary products.

    As one of my interviewees suggested, the EU's lack of knowledge regarding the minerals within its own borders is a direct reflection of its colonial past. Reluctant to embrace mineral production in their own territories, European governments have leveraged the language of comparative advantage to place the responsibility of mineral production on the shoulders of poor, resource-rich countries. The EU's current framework similarly seeks to justify extractive relationships with world markets through a liberal language. In the 2021 State of the Union speech, President Ursula von der Leyen presented the bloc's Indo-Pacific strategy as a preventative measure against the influence of autocratic regimes. The EU Global Gateway—its answer to China's Belt and Road Initiative (BRI)—emphasized the importance of European values:

    We want investments in quality infrastructure, connecting goods, people and services around the world. We will take a values-based approach, offering transparency and good governance to our partners…We want to create links and not dependencies! [emphasis added].

    To seriously engage with the historical legacy of colonialism, the EU’s future cooperation and infrastructure strategy must recognize Europe’s historical responsibility for the lack of industrialization in the developing world. This would legitimate demands for more value added from resource-rich states. Crucially, it might help correct the colonial logic of extraction that is deep-seated and widely embedded in European institutions and public debates. If anything, the current shift eastwards could tame the intrinsically Eurocentric perspective dominant in mainstream social sciences.

    Critical Minerals: Curse or Blessing?

    Resource producers in Africa, Latin America, and Southeast Asia are well-positioned to take advantage of soaring demand for CRMs. For instance, Mongolia, the Philippines, and Indonesia hold significant nickel and copper reserves which can complement the contribution of Latin American producers like Bolivia, Peru, and Chile. The danger lies in the political and economic dynamics of resource dependence. Scholars of the “resource curse” have argued that increased reliance on natural resources for exports and revenues yields slower economic growth, a higher propensity for corruption and rent-seeking, and an increased likelihood of political violence. 2 Though its theoretical and methodological assumptions have come under scrutiny,3 the resource-curse thesis continues to persuade donor agencies, who associate corruption, economic mismanagement, and predatory behavior with resource-rich national governments.

    Unable to develop indigenous technology, build inter-sectoral linkages, or mitigate the social conflicts arising from unchecked resource exploitation, resource-rich countries also suffer from commodity specialization. Some have attributed this development malaise to the detrimental effects of external linkages with advanced industrialized countries: as long as resource producers remain commodity exporters, there are very few incentives to promote export diversification and to craft ambitious industrial plans that would break the cycle of extractivist growth. 4 The historical legacy of extractive-led growth heavily weighs upon the politics, institutions, and interests of these countries. 5

    Nevertheless, there are important advantages to mining-based development. First, demand for base metals like nickel, copper, and iron is expected to increase steadily, granting governments time to protect their economies from cyclical booms and busts, design financial infrastructures that maximize mineral rents, and direct investment towards communities at the frontiers of extraction who face its grave socio-ecological consequences. Despite the preponderance of weak states in the Global South, there are notable examples of institutional innovation. For example, Peru has implemented comprehensive mining reforms which better distribute revenue between central and regional governments, alleviating the distributional pressures and socio-environmental consequences of a localized resource curse.6 Brazil, with its long history of institution-building, was able to promote renewable energy, impose flexible local content requirements to support domestic industrialists, and establish strict environmental licensing procedures in the oil and gas sectors.7

    Second, there is evidence for the success of so-called ‘resource-based industrialization’ strategies. Brazil's Petrobras, Norway's Statoil, and Venezuela's PDVSA prior to Chavez demonstrated how state enterprises can be utilized to achieve the ‘big push’ of natural-resource-intensive industrialization. Their emphasis was on developing indigenous technology to create a niche market and to obtain access to downstream networks of consumers, thereby creating vertically integrated firms capable of competing against international oil companies.

    Mineral states can pursue resource-based industrialization by promoting industrial policies aimed at processing critical minerals within their domestic markets. Indonesia's ban on exporting raw nickel, which forces international companies to refine and process mineral ores inside the country, is one example. Another method has been the strategic deployment of state-owned enterprises (SOEs) in managing natural resource assets—a policy often adopted by countries with limited production capacity. In the best possible conditions, states might also encourage domestic companies to directly compete with international mining companies in the exploration and production stages, as in the example of Brazil's national oil company Petrobras and regional SOE Codemge. However, this approach requires sophisticated industrial strategies that only countries with sufficient planning and coordination capabilities can implement. Irrespective of the policy choice, the fundamental logic remains: mining is a high-risk industry, requiring enormous capital investments and regularly subject to market failures. States, taking a longer-term developmental horizon, must accept these risks to promote sectoral development.

  7. Leapfrog Logistics

    Leave a Comment

    In spring 2018, two significant labor disputes broke out at opposite ends of the earth. The first, in Brazil, was an 11-day mass strike of 400,000 truckers in response to successive price increases unleashed when the state oil company, Petrobras, liberalized diesel prices. The second, in China, was a large strike which spread across the nation in response to low fees paid by digital trucking platforms, which account for a growing share of China's road freight market.

    Neither action ended in decisive victory. Brazilian truckers won temporary diesel price reductions and minimum transport fees, while China is only belatedly witnessing regulatory and trade-union organizing moves in the trucking sector. But these two incidents—amongst the largest labor actions in the logistics sector in recent years—highlight the increasing importance of labor struggles in sectors associated with commodity distribution. The disputes also call attention to two crucial features of the contemporary global economy: the reliance on logistical planning required to facilitate spatially fragmented production, and the use of digital platforms to mediate socioeconomic life.

    The growing economic significance of infrastructures and logistics, on the one hand, and digital platforms on the other, are increasingly interdependent trends. Since the logistics revolution of the 1970s, highways, ports, road networks, utility systems and advanced logistical mapping and forecasting have become central to securing the huge volumes of commodities flowing through the global economy—especially as production processes have fragmented and trade in parts and components has expanded.1 Global shipping volumes nearly doubled from 5,984 million tons in 2000 to 11,071 million tons in 2019.2 Moreover, digital platform firms increasingly control physical logistics infrastructure, by taking ownership of cloud computing and data centers, internet cabling, telecoms networks, transport systems and satellites.
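
    As a quick check on the growth implied by those figures, the sketch below computes the compound annual growth rate between the two data points given in the text.

    ```python
    # Implied growth from the shipping figures cited above (million tons).

    v_2000, v_2019 = 5_984, 11_071
    years = 2019 - 2000

    factor = v_2019 / v_2000
    cagr = factor ** (1 / years) - 1
    print(f"{factor:.2f}x over {years} years, CAGR: {cagr:.1%}")  # 1.85x, ~3.3%
    ```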

    Advances in digitalization and platformization enable technological leapfrogging in the logistics sectors of large Global South economies. The road freight industry has traditionally been dominated by owner-drivers and small-scale informal operators, which together made up 70 percent of fleets in 2009.3 These small-scale operators often drive older vehicles that carry less weight per truck than the new, larger trucks operated by transport companies. Unlike ports, waterways and railways, trucking is one of the most labor-intensive parts of the global logistics sector, employing tens of millions of workers worldwide. 

    A dramatic rise in tonne-kilometres of freight moved by road during the past decade4 has precipitated the emergence of digital trucking platforms across the Global South. Often nationally based firms, these trucking platforms consolidate markets, drive cost efficiencies, enhance information flows,5 discipline labor, and centralize investment capital in a sector traditionally dominated by petty capital. Given the global slowdown in productivity growth and profit rates,6 and the challenges for catch-up development in the context of global value chains,7 cutting costs in logistics is one alternative means for Global South economies to enhance competitiveness. In the long run, this may entail a shift towards more rail- and water-based transport. In the short to medium term, though, digital platforms enable more efficient and transparent coordination of older, more labor-intensive modes of logistics like trucking. As a result, the penetration of digital platforms into logistics offers a way to improve the competitive position of individual producers, economic sectors, and national economies.

    Brazil and China constitute two of the world’s largest logistics markets. Both experienced booms in road-based infrastructure investment during the past two decades.8 And in their road freight sectors, digitalization has advanced far more quickly than in the Global North, as platforms have come to play an increasingly critical role in transport infrastructure. This may present economic opportunities—for instance, despite lower production costs for soy, Brazilian agribusiness currently suffers from higher logistics costs than its competitors, owing to expensive and inefficient road freight transport. Through rationalization, investment, and downward pressure on wages, platforms could ameliorate this situation. Similarly, in China, more efficient logistics systems may compensate in part for the rising wages of factory workers. A closer look at the digital trucking platforms operating in Brazil and China thus provides a lens through which to observe the transformations that the platformization of logistics may deliver in large economies of the Global South more broadly. These upper-middle-income economies have been able to leap to the technological frontier: trucking platforms are now restructuring road freight markets, and in the process, they are reshaping the working lives of millions of truck drivers.

    Digital freight platforms in Brazil

    The Brazilian trucking sector has traditionally consisted largely of self-employed drivers. Since regulations prevent these drivers from concluding freight contracts directly with shippers, many transport companies hire a small number of formally employed drivers in addition to self-employed truckers, who are often disguised wage workers. Official statistics indicate the existence of 546,499 self-employed drivers and 344,231 formally employed drivers in Brazil.9 But media reports estimate a total of 1.5 million truck drivers in Brazil—implying an additional 600,000 informal, unregistered drivers.

    The general characteristics of the industry, and the grievances of truckers (low freight prices and high diesel prices), have not changed much since the 1970s. But the conditions of self-employed truckers have deteriorated since 2014, when Brazil entered an economic crisis that persists to the present. From 2010 on, the Brazilian government facilitated the purchase of new trucks, and the number of truckers started to grow faster than the economy. When the crisis hit in 2014, transport volumes decreased, but the number of truckers remained constant due to a lack of alternatives on the labor market. With this excess labor in the sector, drivers have been unable to obtain higher freight prices even as the costs of vehicle repair, tires, and diesel have risen.

    In Brazil, road freight accounts for 65 percent of all commodity transport, against an average of 30 percent in developed countries and 42 percent across the Global South. Rail transport carries just 15 percent of all commodities in the country, compared to an average of 40 percent in the Global North and South. The high proportion of road freight drives up costs for commodity producers in Brazil, since buyers pay for transport from the port of shipment. Road freight is more expensive than other modes once distances exceed 100 kilometers, and, given Brazil’s overcrowded and low-quality roads, it also makes for slower transport times. About 40 percent of road freight trucking in Brazil is for agribusiness: the largest single commodity transported in Brazilian road freight is fertilizer, followed by soy.

    The Brazilian road freight market has an annual turnover of 160 billion reais (as of 2020), of which digital platforms are intermediating a rapidly growing share. App-based trucking started in Brazil in the mid-2000s, but the Covid-19 pandemic ushered in the explosive growth of trucking apps. The CEO of a major platform, Cargo X (whose board members overlap with those of Uber), reported a 75 percent growth in turnover for 2020,10 while Fretebras, a competitor, reported 62 percent growth in cargo volumes in 2020.11 The Fretebras platform now dominates the market, with 50 billion reais of freight shipped over the platform in 2020 and an estimated volume of 80 billion reais for 2021. Cargo X, meanwhile, claimed a total turnover of around 875 million reais in 2020. The financial newspaper Valor Econômico estimates that Fretebras covers 80 percent of the market for trucking apps, and that competitor TruckPad, one of the first such platforms, online since 2013, controls a further 10 percent.12 This means that in 2020 all trucking apps combined mediated about half of all road freight transactions, perhaps the largest share among countries with large logistics markets, and far ahead of the advanced economies of North America and Europe.

    Foreign and local players continue to enter the market. The Chinese platform Manbang Group, known internationally as Full Truck Alliance, invested an undisclosed amount of capital in TruckPad at the end of 2019. Among the new entrants are large agrotrading companies, mostly representing non-Brazilian transnational capital, which have recently launched their own apps: US-based multinational Bunge launched the platform Vector in 2020 in a joint venture with the Argentinian logistics operator Target, and Chinese state-owned agrotrading company Cofco is set to join the enterprise.13 Meanwhile, four other large agricultural traders, Amaggi, Louis Dreyfus, Cargill, and ADM, joined forces in 2019 to integrate their logistics data into the trucking app Carguero, which mediated 2 billion reais in freight in 2020 and is still running as a pilot project awaiting a full launch.

    In November 2021, Fretebras merged with Cargo X to form the new company Frete.com. It received $220 million in investment, primarily from Japan’s SoftBank Latin America fund and Chinese tech giant Tencent. The Brazilian investment bank BTG Pactual and former Florida Governor Jeb Bush were also counted among the investors. The merger integrates Fretebras’s unparalleled user numbers with Cargo X’s deep knowledge of digitalization services. The latter’s key advantage is its algorithmic price-setting system, which had been difficult to adapt to the freight market, where long unloading times delay payment. Thanks to this merger, more sophisticated Uber-style models for the trucking industry are likely to emerge.

    The rapid advance of Brazil’s digital freight platforms can be explained by the possibilities for leapfrog development in a sector that has long been informal, undercapitalized, and poorly organized. Self-employed truckers represent more than half of the 1.5-million-strong trucking workforce, and they resent the long waiting times and burdensome paperwork. Existing intermediaries also cut into truck drivers’ earnings by collecting 20 to 40 percent of freight prices. Normally, they pay truckers 80 percent of the freight price in advance and 20 percent upon completion, though this latter portion frequently goes unpaid. Before the arrival of trucking platforms, obtaining a freight contract typically required a self-employed driver to wait anywhere from several hours to several days at the office of an intermediary or a transport company to pick up an order, and then to complete more than twenty official forms.

    Apps address some of these issues by displaying demand for jobs in real time. However, there are limits to the platform-based rationalization of Brazil’s freight industry. Most of the country’s freight platforms, including Fretebras, do not set prices, and instead function as bidding marketplaces. Those posting bids are rarely shippers—instead, they remain the same intermediary transport companies that dominate the non-platform trucking sector. This is because most shippers demand a service package not yet offered by platforms, including functions like insurance, tracking, and route planning. As intermediary transport companies increasingly rely on platforms to hire self-employed truckers, they also reduce their formally employed workforce. Fretebras openly advertises that it is 23 percent cheaper to hire self-employed truckers via its platform than to employ drivers directly. Rising platform usage thus seems likely to lead to a further reduction in the share of formally employed truckers, undermining any progression towards formal employment relations.

    Should this trend hold, it will carry significant consequences for the broader labor market. Unlike other platform-based work, trucking is the primary occupation and long-term profession for most self-employed Brazilian truckers. While platforms reduce some pecuniary and time costs for truckers, they also place different segments of the labor market into direct competition with one another. For instance, many truckers operate in specific regions or on specific routes, since they do not want to spend too much time away from home. But enhanced access to information via platforms motivates some truckers to alter their routes or expand their field of operations, thus altering the supply of labor in different regions. Freight prices in Brazil remained more or less flat during 2021 while diesel prices rose 40 percent, producing an enormous drop in the real earnings of self-employed truckers. As in earlier years, self-employed truckers lack the power to demand higher prices from transport companies, so any inflation in inputs cuts into their earnings. Until now, platforms have failed to limit truckers’ vulnerability to the rising cost of inputs like tires, vehicle parts, and fuel.

    Brazil has seen little in the way of institutional responses to the digitalization of road freight. A simplified electronic transport document, DT-e, was approved by the government in September 2021 and is awaiting implementation. Once fully implemented, DT-e has the potential to eliminate functions currently performed by transport companies (which act as intermediaries), as well as to reduce the relevance of platforms, because it would allow truckers’ associations and trade unions to assume intermediary functions and legal responsibility for transport. Whether this poses a major challenge to platform business models remains to be seen. One result of the truckers’ strike in 2018 was the establishment of legally defined minimum freight prices. Implementation has been patchy, and employers’ organizations have petitioned the Supreme Court to rule on the minimum prices; fines for violating them have been suspended since 2020. The Supreme Court has dragged its feet on the issue, as it is motivated to confront neither truckers nor employers.

    The growth of online truck-hailing in China

    China is the world’s largest employer of truckers by some margin, with around 15 million road freight trucks and about 30 million truck drivers.14 Most truck drivers are self-employed and work on a freelance basis. Most also own their vehicles, often financed with heavy debts. To take more orders and pay off those debts, drivers are constantly on the road, living, eating, and sleeping in their trucks. As in Brazil, significant rigidities and bureaucratization have made working life difficult for truckers. The traditional way for them to secure jobs is to physically attend information stations in local logistics parks, where they pay an information service fee, obtain order information, and bargain over prices.

    The rapid development of internet technology and huge flows of capital into the sector in the mid-2010s initiated the platformization of road freight operations in China. A watershed moment came in 2017, when the merger of the two largest players in the online truck-hailing sector, Yunmanman and Truck Alliance, created the largest truck-hailing platform in China, the Manbang Group. Among Manbang’s biggest backers are global private equity and venture funds like SoftBank and Sequoia Capital, alongside Chinese funds like Tencent Holdings and Yunfeng Capital. Manbang’s platforms match truck drivers with shippers directly to improve freight efficiency and reduce empty loads. In 2021, the firm listed on the New York Stock Exchange and was valued at US$21 billion. It claims RMB 173.8 billion (US$26.6 billion) in gross transaction value, accounting for 64 percent of the GTV of Chinese digital freight platforms, and over 2.8 million truck shipping orders fulfilled over the previous twelve months.15

    New business models have also emerged with the growth of platforms. The platform For-U Smart Freight, established in 2015, provides end-to-end, fully digitalized services, including pricing, order placing, dispatching, and transportation. It focuses on the full truckload (FTL) market rather than the less-than-truckload (LTL) market, and its main customers are large enterprise shippers such as Deppon Logistics, JD Logistics, and SF Express. Other platform companies, such as LaHuoBao and Kuaicheng, target particular goods classifications, such as hazardous chemicals and coal.

    In contrast to For-U Smart Freight, Manbang’s platforms are more popular among gig drivers and small to medium enterprises. Of 2,055 drivers surveyed in 2019,16 more than 90 percent reported having used such platforms, including apps like Yunmanman and Truck Alliance, and about 38 percent used them to obtain more than one-third of their orders. The platformization of road freight reduces information asymmetry and breaks down some geographical constraints on information: drivers can access order information more easily across broader areas, or even across provinces, and no longer need to physically attend local information stations for hours or days at a time to pick up jobs. Truckers can also avoid carrying empty loads on return trips.

    But due to the predominance of Manbang in the truck-hailing sector, platforms have also established growing rule-setting power. At first, platforms provided a free matching service and allowed drivers and shippers to negotiate prices. Eventually, however, they introduced fee schemes for both drivers and shippers. In June 2018, they initiated a policy preventing truck drivers and shippers from contacting each other, to ensure that transactions and prices were set exclusively via the apps. Platforms also restricted price negotiations through automated pricing. The resulting squeeze on earnings, combined with rising fuel costs and arbitrary fines, instigated the strike wave of 2018.17

    Through referral schemes, bonuses, and zero-down-payment truck loans, platforms incentivize market expansion. But while they offer benefits, these incentives also burden new drivers with heavy debts. To meet their monthly loan payments, drivers often need to take more orders at extremely low prices and work longer hours. As a result, drivers bid against each other, lowering prices in a vicious race to the bottom. Interviewed drivers claimed that prices have fallen by as much as 30 to 50 percent over the last five years. One driver warned, “for those who just entered this sector and need to pay the monthly mortgage to get a truck, I would strongly recommend they go back to factories and make screws. There is no space to survive in this sector now.”

    Social and policy responses to the platformization of critical logistics infrastructure are more evident in the Chinese case. Informal networks of workers are growing: WeChat channels are widely used among drivers to share information on pricing and pay and to publicize negative experiences such as arbitrary decommissioning, often with powerful effects. The state has also played a growing role in scrutinizing labor conditions in the gig economy. In January 2022, Manbang and three other road freight platforms (Huolala, Didi Freight, and Kuaigou) were summoned for interviews by the Ministry of Transport over concerns about arbitrary pricing power, increased membership fees, the low-price spiral, and the illegal operation of oversized and overloaded vehicles.18 The All-China Federation of Trade Unions (ACFTU) has also attempted new ways of organizing such workers: since October 2021, truck drivers in Anhui Province have been able to join the trade union by registering on an app that provides a range of services for truck drivers, such as communication and insurance.19 Whether these measures will have any material effect on drivers’ conditions and wages remains to be seen. Finally, as part of the broader crackdown on the tech sector in the summer of 2021, the Chinese Cybersecurity Review Office launched an investigation into the data security of Manbang. During the investigation, Manbang’s apps were removed from app stores and restricted to existing users.

    Infrastructure platformization and the global economy 

    While China is now at the technological forefront of platform logistics, even Brazil’s less technologically advanced bidding apps outstrip the technologies deployed in Europe and North America. While the logistics revolution of the 1970s primarily shaped the Global North, trucking apps allow us to study the contemporary leapfrogging of digital logistics in the Global South. Plagued by underinvestment and fragmentation, dominated by intermediaries, and largely shielded from the competitive pressures of the global economy, the trucking industry of the late twentieth and early twenty-first centuries sustained business practices that largely harmed workers through long waiting times, dense bureaucracies, and high fees. But despite the efficiency gains they offer, digital trucking platforms intensify labor discipline for drivers, who increasingly work in “sweatshops on wheels.”20

    In contrast to the already flexible logistics labor markets of the Global North,21 the rigidities and inefficiencies of Global South logistics present outsized productivity and profitability rewards for platforms able to engage in leapfrog development, using technology to consolidate the market, integrate services, and eliminate barriers to circulation. This mirrors other forms of digital Taylorism sweeping service sectors in the Global South, as work in sectors like parcel and food delivery, ridehailing, and even white-collar internet work is subjected to intense datafication, performance management, and control-by-algorithm.22 What is more, these productivity gains—even where platforms demonstrate aspects of rentierism—are likely to accrue to firms across the economy, improving national competitiveness by lowering logistics costs.

    As on-location platform services increasingly absorb the tasks of critical transport infrastructure, governments will be incentivized to control and regulate them in new ways to guarantee regularity and universality of supply.23 China demonstrates the possibilities for governments to exercise substantial sovereign and territorial control over digital platforms.24 But the emergence of Brazilian firms at the technological frontier further demonstrates the possibility for national varieties of digital platforms to emerge in the Global South. These firms can flourish even in an economy like Brazil’s, historically controlled by a comprador bourgeoisie. These new digital national champions signify a deepening techno-economic multipolarity, generalizing the gains, contradictions, and agonies of rapid capitalist development across the continental-scale economies of the Global South.

  8. Farmland Assets

    Comments Off on Farmland Assets

    The election of Jair Bolsonaro in 2018 set in motion a sweeping agenda of environmental destruction in Brazil. Before taking office, Bolsonaro had openly threatened Indigenous communities with racist attacks, commenting that Indigenous peoples should not have “an inch of land” and that “Indians in reserves are like animals in zoos.”1 Targeting the land rights of Indigenous peoples, peasants, and quilombolas (rural Afro-Brazilian communities), he articulated a neocolonial agenda to control the land and natural resources of rural communities.

    Such encroachment on rural communities predates Bolsonaro, and it intersects closely with global crises of climate change and food production. Industrial agriculture has been a driver of climate change,2 with chemical inputs destroying local soil and water sources. In Brazil, widespread fires in the Amazon, the Pantanal wetlands, and the Cerrado, the most biodiverse savanna in the world, have been unprecedented in their number and scale in recent years.3

    The expansion of industrial agriculture in Brazil has been an international affair, linking pension funds, university endowments, and major financial actors across the world. After the global financial crisis of 2008, international agribusiness and financial corporations formed alliances with rural oligarchies to operate in the Brazilian farmland market. In recent years, mergers and joint ventures have changed the profile of agribusiness in the country. Land purchases are no longer limited to large-scale Brazilian companies, or even foreign agricultural corporations: financial firms and oil companies now play prominent roles in the market as major sources of capital.4 The entrance of these firms has unleashed new forms of speculation within land markets, expanding access to credit and, in turn, land purchases. These practices have led to devastating new realities for those living through the changes.

    Land grabbing in the Cerrado

    The Cerrado, the vast savanna region in the central part of the country, has an area of approximately two million square kilometers, representing almost a quarter of Brazil’s territory. Its vegetation feeds important underground water reserves, such as the Guarani, Bambuí and Urucuia aquifers, which contribute to the formation of two-thirds of Brazil’s hydrographic regions.5

    Central to water and food production, the Cerrado has been the target of an especially predatory form of agricultural real estate speculation. Many of the industrial farms now operating in the chapadas (high plains) were established on lands that were once public, home to peasant, Indigenous, and quilombola communities. Upon moving into these regions, agribusiness corporations make use of local operators6 who illegally forge ownership documents, fence off the farmland, and displace local communities. Recent legislation introduced by Bolsonaro aims to further facilitate the privatization of public lands. This process, which local activists call land grabbing, also affects the baixões (lowlands), which likewise have a long history of housing rural communities.7

    Drastic environmental changes have accompanied land grabbing in the region, with major implications for land use. Land previously used to cultivate food crops for local consumption now grows commodities; in many cases, sugarcane for ethanol production and soybeans are grown in monocropping plantations. As a resident of a quilombola community in the northeastern Brazilian state of Piauí explained in a Network for Social Justice and Human Rights interview, “We used to live off of fishing and farming. I can still remember the smell of rice when it was being harvested. But now we can no longer grow our crops.”8

    Water pollution and deforestation are now widespread; the heavy use of pesticides and other chemical inputs sprayed by tractors and airplanes has destroyed local water and food supplies. “The water from the highlands flows down and fills our streams with agro-toxins. The water gets muddy, it stinks. In the river, we see the little fish floating on top of the water, dead. I didn’t see the little fish dead before. When we go fishing now, if we go in the morning, it takes us until noon to catch a little fish. There are no more fish in the river because of the poison,” stated one resident. Deforestation has driven endangered species to extinction and polluted river springs, affecting biodiversity and rainfall patterns.

    Soybean plantations have destroyed many water sources, leading rivers to dry up across the region. With monocropping extending into forest areas, fires have become more frequent. A resident of one local community described the changes: “We are very concerned with the burnings because the fire destroys all the flora, the pequi flower burns, the cashew burns, it burns the trees that provide food. The fires also cause damage to streams, our streams are no longer filling.”

    Financialization of farmland

    The devastation in the Cerrado is not simply the result of the outsized presence of domestic agribusiness. The industry’s financial support can be traced to an intricate web of capital flows, connecting funds in New York, London, Germany, and elsewhere in the Global North to Brazil-based firms.

    Beginning in 2002, agribusiness corporations in Brazil started to take advantage of high international prices for commodities such as sugarcane. Companies began contracting debt in US dollars with the expectation of increasing future exports, and the mills negotiated export contracts on futures markets to justify their territorial expansion and mechanization. This inflated the price of agricultural land. At the same time, promises of future production by companies that carried existing debts fueled new indebtedness and further territorial expansion. The cycle continued—the strategy of financing existing debts with new funds raised on futures markets intensified the scale of industrial agriculture, and therefore the use of natural resources.9 Brazil is already home to one of the highest concentrations of land ownership in the world—one percent of large landowners control 90 percent of agricultural land—and industrial agriculture has led to even greater concentration of rural property ownership.

    The global financial crisis of 2008 marked a watershed for agricultural expansion. The price of sugar began to fall along with agriculture commodity prices, and several Brazilian sugarcane companies went bankrupt.10 While government support for commodities aimed to achieve a positive balance of trade, the fall in commodity prices led to trade deficits. But this drop did not hurt the land market, as some would have expected. Brazilian agricultural companies began to court international markets for access to credit. Land prices continued to rise, attracting the attention of international finance. The flow of financial capital into farmland markets generated speculative bubbles and sharp escalations in land and food prices, with the burdens shifted mostly to rural communities.

    Before 2008, the prospect of growing global demand for ethanol was used to justify the territorial expansion and monocropping of the sugar-energy industry—sugarcane increasingly replaced local food crops as large corporations took over smaller farms. The global financial crisis, however, changed these forecasts. Ethanol’s economic viability on a large scale depends on the market conditions of two major international commodities—sugar and oil—making ethanol function not as a “leading” but a “supporting” commodity in this context.11 Given the instability of financial and commodity markets, ethanol was no longer projected to provide energy security in Brazil or internationally.

    Despite these consequential market shifts, agribusiness corporations still present themselves as engines of economic development in a commodity-oriented economy. In return, they demand state subsidies for expanding monocropping plantations. The large corporations of the industry commonly measure their contribution to national development by estimating the role of industrial agriculture in Brazilian GDP, usually placing that number between 30 and 40 percent. But this percentage is inflated by the inclusion of “productive chains,” which encompass agrochemical, industrial, commercialization, and retail activities outside of agricultural production itself. These estimates also leave out the various forms of public subsidies, unpaid debt, and other economic, social, and environmental costs.

    Over several decades, the agribusiness lobby has continued to receive large amounts of state-subsidized credit12 to expand plantations in areas with access to infrastructure, vast hydrographic basins, and biodiversity. Even though small farmers produce 70 percent of the food for local markets, large agribusiness corporations received the vast majority of the approximately US$46 billion in subsidized credit available to farmers in 2021.

    Even while receiving a steady flow of subsidized credit, the industry has historically generated public debt. Back in 1980, the Brazilian government forgave US$13 billion in debts owed by agricultural corporations, twice the agricultural trade balance at that time. Though indebtedness has persisted, access to various types of subsidies and tax incentives has endured. In 1999, the Brazilian government forgave another US$18 billion in debt, in a year when the announced trade surplus for the agribusiness sector was US$10 billion. Sixteen years later, in 2015, while the country was in the midst of an economic crisis, subsidized credit provided by the state program Plano Safra increased by 20 percent compared to the previous year, reaching R$180 billion.13 Data from the Ministry of Agriculture show that this amount was roughly equivalent to the trade balance of agribusiness in 2014—US$80 billion at an average exchange rate of 2.5—excluding agribusiness debts. In the 2014–2015 harvest period, the debt of sugar and ethanol corporations alone exceeded R$50 billion, a 12 percent increase over the previous year.14

    The role of TIAA

    A key case for understanding this trend in farmland speculation—and its transformation following the global financial crisis—is Radar Agricultural Properties. The rural real estate company was founded in 2008 as a joint venture between the largest sugarcane corporation in Brazil, Cosan (with 18.9 percent of shares), and a financial company, Mansilla, which was the main shareholder at the time.15 Data from 2012 indicate that Radar controlled 151,468 hectares of farmland in Brazil, valued at R$2.35 billion (about US$1 billion).16

    Radar has a surprising source of capital: TIAA, a pension fund manager for employees in academic, government, non-profit, and research fields in the United States. Assets under TIAA’s management are valued at approximately US$1 trillion. For its speculation in global farmland, TIAA collects capital from other sources in the United States, Europe, and Canada, such as Sweden’s AP2 fund, the Caisse de dépôt et placement du Québec, the British Columbia Investment Management Corporation (bcIMC), Stichting Pensioenfonds AEP in the Netherlands, Ärzteversorgung Westfalen-Lippe in Germany, Cummins UK Pension Plan Trustee Ltd., the Environment Agency Pension Fund and the Greater Manchester Pension Fund in England, and the New Mexico State Investment Council in the United States.

    To operate in Brazil, TIAA works through the Brazilian subsidiaries Mansilla, Tellus, and Nova Gaia Brasil Participações, a structure designed to get around a Brazilian law that limits foreign ownership of land. In 2020, the National Land Institute, INCRA, investigated TIAA’s acquisitions of farmland and argued that TIAA had violated Brazilian ownership laws, because TIAA and its Brazilian subsidiaries are part of the same economic group. INCRA recommended that all of the land purchases made via TIAA’s subsidiaries since 2010, covering more than 150,000 hectares, be declared null and void.17

    Through TIAA’s investment of their pension savings, staff and faculty at US institutions are tied to environmental destruction in Brazil. The role of US higher education in the country broadens further when considering the activities of university endowments. Harvard University, for example, acquired agricultural land using subsidiaries including Terracal, Caracol Agropecuária, and Insolo Agroindustrial to operate in Brazil’s rural real estate markets. By 2016, the university had acquired more than 405,000 hectares—twice the size of Massachusetts’ total farmland area. In October 2020, the State Court of Bahia issued a ruling blocking the registration of lands for one of Harvard’s largest farmland acquisitions in Brazil, a 107,000-hectare agglomeration of lands known as Gleba Campo Largo. To avoid legal consequences, Harvard transferred its farmland division to a private equity corporation called Solum Partners, a partner of the insurance group AIG. It’s likely that INCRA’s criticism applies to Harvard as well: the university’s new subsidiaries may not actually comply with Brazilian law.

    Through this mechanism, international corporations can “outsource” their land deals, forming several companies with the same administrators and making it appear that they belong to different owners. The “outsourcing” mechanism has other implications for transparency: with several interrelated subsidiaries, companies obscure the locations of the farms they control, as well as their social and environmental impacts.

    The sheer scale of foreign investment carries major consequences for the price of rural real estate. By creating specific funds to invest in farmland markets, a large pension fund like TIAA can inflate land prices until they no longer reflect commodity prices. In 2012, land prices rose by an average of 56 percent,18 and Radar’s portfolio increased in value by 93 percent compared to 2011. With TIAA’s backing, Radar currently owns 555 properties in Brazil, comprising approximately 270,000 hectares of land at a declared value of R$5.2 billion.19

    The price of land

    The entrance of pension funds and endowments as financial backers has revealed a stark disconnect between land and commodities. Commodities such as sugarcane, soybean, corn, cotton, and eucalyptus continue to serve as the justification for land prices, but in fact it is Brazil’s massive land area—the fifth largest in the world—that has become the main financial asset. The trajectory of the sugarcane industry is indicative. Though sugarcane plantations saw a decline in productivity after 2010, their territorial expansion, and the concentration of that expansion among few landowners, persisted. In 2011–2012, the industry acquired even more territory while sugar production stayed the same.20

    The commodity production that justifies these land acquisitions has upended life for many rural workers. The industrialization of the sugarcane sector has increased unemployment, and for those who remain employed, it has lowered wages. In the state of São Paulo, for instance, the rural worker population declined from 440,000 in 1986 to 94,000 in 2014.21 In the early 2000s, sugarcane cutters saw both their pay per ton and their daily wage slashed as a result of mechanization. The “ethanol boom” of this period was accompanied by an alarming increase in the deaths of sugar workers, mostly attributed to poor labor conditions.

    In areas where monocropping has been introduced, frequent assassinations target Indigenous residents, particularly those seeking to protect public lands from encroachment. Rural activists, who for generations have struggled to secure collective land rights,22 are now organizing with greater urgency against constant threats of violence. The very presence of these communities has become an obstacle to the financialization, and thus the devastation, of land.

  9. The Price of Oil

    Comments Off on The Price of Oil

    In October 2021, the price of gasoline in the United States rose to its highest level in seven years. There were many reasons for this: surging demand following a year-and-a-half of lockdown, a slower than expected recovery of oil production, and imbalances in products inventories due to energy shortages in Europe and East Asia. Experts believed prices would fall in the new year. Instead, Russia’s invasion of Ukraine in February 2022 sent them to new and historic heights, rattling markets and increasing the US price of gasoline to more than $4 a gallon.

    It is easy to see that the price of oil is one of Joe Biden’s biggest problems, but harder to figure out whether Biden can do much about it. If he can’t, who can? An entire industry exists to predict future changes in the price of oil. Oil companies themselves try to anticipate where the price will be so that they can schedule capital expenditures to meet future demand, often without much success.

    Today, the volatility of oil prices is taken for granted. But this was not always the case. From the early 1930s to the 1970s, the price of oil in the United States was managed through a combination of voluntary action by private actors and regulatory oversight from state agencies. Crisis in the early 1970s motivated the federal government to institute formal controls over the price of crude oil. While these controls were inexpertly administered and pursued contradictory goals, they succeeded in absorbing the impact of the shock in global oil prices and ensuring access to energy on affordable terms to most Americans. Where the policy did not succeed, however, was in increasing domestic production and making a meaningful impact on oil imports. A second oil shock in 1979 convinced policymakers to do away with controls and helped spawn a militarized commitment to “securing” the oil of the Middle East, the blowback of which we continue to confront today. US power and a free market, many leaders argued, would produce abundant and affordable energy. 

    Examining the history of oil prices reveals two persistent features. The industry has often been subject to various forms of price-setting, both by government and by firms with market power; and at the same time, the entanglement of market forces with geopolitics has meant that true control over the price of energy remains elusive, absent agreements on a planetary scale. Such coordination now appears necessary to counteract the spiraling volatility endangering access to energy and complicating the energy transition away from fossil fuels. To arrest volatility and avoid the worst effects of climate change, it’s time to get oil back under control.

    Out of control

    During the early days of the US petroleum industry in the late nineteenth century, growing demand for oil products drove a constant search for new deposits. Oil produced from the ground belonged to whoever obtained it first according to the so-called “rule of capture,” a concept summarized in the closing scene of Paul Thomas Anderson’s 2007 film There Will Be Blood. With production fragmented among many different companies and without a central authority imposing order, the market went through a constant boom-and-bust cycle.

    To secure their investments, companies managed volatility through vertical integration and concentrated ownership. Oil magnate John D. Rockefeller sought price stability through monopolization. At its peak, Rockefeller’s Standard Oil controlled roughly 90 percent of the domestic industry. While Standard was broadly unpopular, oil economist George Stocking argued that “unrestricted competition” led to waste and depressed prices, necessitating a degree of concentration and cooperation to keep profits at a level that allowed for investment in new production.1 After the US Supreme Court broke up the Standard Oil monopoly in 1911, both the large vertically integrated “majors” and smaller “independents” sought measures to limit competition and prevent overproduction. “The inescapable fact,” according to one editorial in 1928, “is that the oil industry has just so much of a market to supply… too much crude produced means oversupply of products and this results in inadequate returns.”2

    But managing competition was different from managing demand. Amidst the global economic depression of the early 1930s, prices in Texas collapsed from $1 per barrel to as little as $0.06 per barrel. To eliminate rampant price cutting, the majors lobbied state authorities to rein in smaller producers, who they contended were driving the market glut by producing “hot” oil that could not find a market.3 In August 1931, the Governor of Texas ordered National Guard troops to the oil fields to shut down production. By 1934, state agencies like the Texas Railroad Commission (TRC) imposed limits on how much oil drillers could produce through centrally organized “prorationing” schemes—in effect placing American oil production under government control to ensure stable prices.

    While the TRC restricted American domestic production, the majors worked to control competition and prevent oversupply on the global market. Experts regarded it as “oligopolization”; critics called it a cartel. An initial private agreement made in 1928 recognized that “excessive competition” had led to “tremendous overproduction.” After World War II, the majors established an informal oligopoly, the infamous “Seven Sisters” (BP, Shell, Exxon, Mobil, Chevron, Gulf, and Texaco), to control the flow of oil internationally, particularly new production from the fields of the Middle East.

    The companies linked their commercial priorities to the emerging Cold War, delivering oil to Western Europe through the Marshall Plan and using their market power to establish a “delivered price” based on freight rates from the Gulf of Mexico. This new “posted price” was stable, though high compared to the cost of production: Middle East oil at the wellhead cost as little as $0.10 per barrel while the majors sold it for $1.20. The stability of the system was maintained by the market power of the oligopoly, which managed more than 80 percent of the global oil trade through its own vertically integrated corporate structures. For much of the 1950s, the price for oil shipped from the Middle East was fixed between $1.75 and $2.20 per barrel, depending on its point of departure.4 Acting collectively, the majors restrained production in the Middle East, while the TRC did the same in the United States. In each area, the problem was not preventing scarcity but managing abundance—there was always more supply than demand, necessitating cooperative and restrictive measures.
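    The mechanics of that pricing rule are simple enough to sketch. The snippet below is an illustrative reconstruction of basing-point pricing of this kind (often called “Gulf plus”), not a model drawn from this essay’s sources: the $1.20 posted price and $0.10 wellhead cost appear above, while the freight rate and the Rotterdam destination are invented for the example.

    ```python
    # Illustrative sketch of basing-point ("Gulf plus") pricing as described above.
    # The posted price and wellhead cost come from the text; the freight rate and
    # destination are assumptions made for this example.

    US_GULF_POSTED = 1.20        # $/bbl posted price at the US Gulf of Mexico
    MIDDLE_EAST_WELLHEAD = 0.10  # $/bbl approximate production cost at the wellhead

    # Hypothetical freight rate ($/bbl) from the Gulf of Mexico to a European port.
    FREIGHT_TO_ROTTERDAM = 0.40

    def delivered_price(freight_from_us_gulf: float) -> float:
        """Billed price at destination: US Gulf posted price plus freight from the
        Gulf of Mexico, regardless of where the cargo actually originated."""
        return US_GULF_POSTED + freight_from_us_gulf

    # A cargo produced in the Persian Gulf for ~$0.10/bbl is billed in Rotterdam
    # as if it had sailed from Texas, preserving the oligopoly's margin.
    price = delivered_price(FREIGHT_TO_ROTTERDAM)
    print(f"delivered price: ${price:.2f}/bbl; "
          f"margin over wellhead cost: ${price - MIDDLE_EAST_WELLHEAD:.2f}/bbl")
    ```

    The design point is that every buyer pays as if the oil had traveled from the US Gulf, so cheap Middle East crude never undercuts the posted price structure.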

    While the majors controlled the global market through an oligopoly that restricted competition, restrained production, and kept prices high, smaller independents leveraged political power within the United States to push through protectionist policies, including quotas on the majors’ imports.5 For smaller refinery customers, the Kennedy administration introduced a system of “import tickets” to ration imported crude to the highest bidder. Oil industry advocates argued that the quotas were justified on national security grounds, claiming a strong domestic industry would be needed in the event of war with the Soviet Union.6 In practical terms, however, the quotas raised prices from $2.50 per barrel to $3.50 per barrel between 1959 and 1969 and transferred $6 billion from consumers to oil companies every year.7 Protectionism in the name of the Cold War kept the price of oil high, benefitting independents and forcing consumers to subsidize domestic oil production.

    Despite subsidies like the import quotas, generous tax breaks, and support from allies in Congress, the domestic American oil industry entered a slow decline in the late 1960s. While domestic production continued to increase, costs rose among older wells and squeezed profits, resulting in a decline in drilling activity and refinery construction after 1964. The majors chose to invest in their fields in the Middle East, where oil could be produced much more cheaply. Oil consumption throughout the Western world grew at a rapid rate, rising to 34 million barrels per day in 1970, and most of this demand was met with Middle East oil as output from the United States stagnated. 

    The choice to control

    Transformations in the global oil market during the 1960s opened the way for a series of debates on the future of domestic oil production. Though the US Department of the Interior confidently predicted continued abundance up until 1980, geologist M. King Hubbert foresaw an inevitable decline in domestic production—rising costs and the slowing rate of reserve replacement were expected to generate a peak around 1970 and a decline thereafter. 

    Among economists, the question of maintaining oil abundance turned on price. Keynesian theorists who had been influential during wartime advocated controlling inflation through price controls, which alleviated inflationary pressure on prices and wages while managing aggregate demand. During World War II, the Roosevelt administration instituted price and wage controls to limit the cost of the war and to keep the wartime economy from spiraling out of control. Private industry, including oil companies, supported the controls both as measures necessary to win the war and as effective means of maximizing production.8 As inflation began to tick upward in the late 1960s, progressive Democrats and some economists again advocated wage and price controls to offset the “cost-push” nexus driving up wages and prices.

    On the other end of the spectrum were economists like Morris A. Adelman, who, drawing on the work of Harold Hotelling and Erich W. Zimmerman, contended that the price mechanism, once suitably unshackled, would guarantee adequate supplies. Milton Friedman chastised the major oil companies for relying “so heavily on special governmental favors,” and argued that a deregulated industry would cause prices to decline, as greater competition and less protection led to investment and increased production. Opposition to price controls depended on a “quantity theory” of monetary policy, which held that changes in the money supply were the main cause of inflation. “Direct control of prices,” argued Friedman, doyen of the new monetarist school, “does not eliminate inflationary pressures. It simply shifts the pressure elsewhere.”9 According to the Adelman school, oil was theoretically limitless so long as the price was permitted to rise during times of shortage, encouraging investment in new and more expensive fields and methods. Abandoning efforts to stabilize the market would unlock oil companies’ potential to satisfy the nation’s energy needs and ultimately serve consumers in the long-term.
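    For reference, the quantity theory invoked here is conventionally summarized by the textbook equation of exchange (a standard formulation, not one drawn from the sources cited in this essay):

    MV = PY

    where M is the money supply, V its velocity of circulation, P the price level, and Y real output. If V and Y are roughly stable, sustained growth in M shows up as growth in P: hence the monetarist claim that controls merely shift, rather than eliminate, inflationary pressure.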

    Advocates of “decontrolling” the price of oil found a home in the Nixon administration, which came to power in 1969 determined to shake up American energy policy and combat rising inflation. Budding neoliberal policymakers like William E. Simon, Nixon’s advisor on energy, favored deregulating the price of natural gas and ending the import quotas, allowing both to be governed by market forces, which Simon (in line with the Adelman school) believed would ensure long-term supply.

    Despite the presence of Friedman acolytes like Simon and George P. Shultz in his administration, Nixon chose price controls out of concern that managing inflation through fiscal-monetary policy alone would sap economic vitality and endanger his chances of reelection. Price controls, which polled at over 60 percent approval in early 1971, were linked to his simultaneous decision to end the dollar’s convertibility into gold and embrace an expansionary fiscal-monetary policy that would stimulate economic activity.10 In August 1971, leaning on advice from Treasury Secretary John Connally, Nixon ordered a general wage and price freeze, including on the price of domestic crude oil.11 For Nixon, embracing controls was a domestic political decision. Yet domestic prices would soon be influenced by events and forces that lay outside the President’s power, as the oligopoly’s system for managing global prices and the TRC’s sway over domestic production began to fall apart.

    Adjusting to crisis

    Price controls, while politically popular, produced complications for normal industry operations. Refiners, for example, had relied on seasonal price changes to adjust their throughputs from gasoline to home heating and residual fuel oil. Price controls were imposed in August 1971, and price levels were adjusted monthly. Without clear signals from price changes, refiners’ output lagged behind demand, producing shortages of petroleum products in the winter of 1971–1972 and again, more markedly, in 1972–1973.12 Between 1969 and 1972, the supply glut that had persisted since the 1950s vanished as world consumption boomed under the military stimulus of the Vietnam War and the internal political imperatives for full employment among the NATO powers and France. Years of falling investment within the United States produced a decline in production, and in 1970 domestic production peaked, just as M. King Hubbert had predicted. By 1972, the TRC allowed producers to pump as much as they could to meet demand at federally controlled prices, ending nearly forty years of prorationing.13

    With Middle East oil supplying a large and growing portion of the market—especially in Western Europe and Japan, which had become entirely dependent on imported oil—producer states secured considerable leverage over the majors by the late 1960s, wielding threats to nationalize their industries or shut off production if their demands were not met. Apart from bringing their oil industries under state control, the OPEC states wanted to raise the price of oil, which the majors had kept stable since the late 1950s. OPEC protested the unequal terms of trade that limited the value of capital goods—vital for economic growth—that member states like Iran, Venezuela, and Algeria could import with their oil export revenues. A stable price eroded oil’s purchasing power. By raising the dollar price of oil, OPEC hoped to gain advantage against American corporations while keeping pace with rising inflation in the West.14

    In 1971 the OPEC states secured major price increases from the companies. As energy historian Richard Vietor noted, Nixon’s decision to control the price of oil “coincided with the collapse of the ten-year-old system for petroleum-market stabilization.”15

    The shift in the balance of power between OPEC and the majors undermined the efficacy of Nixon’s controls regime. As the price of imported oil rose past that of domestic crude, smaller refiners in the United States selling at controlled prices were unable to purchase imports as they had done in the past. Refiners had trouble obtaining crude for the 1973 summer driving season, resulting in gasoline shortages and lines at pumping stations—months before the embargo of October, it should be noted—as gasoline retailers did what they could to meet public need by rationing their limited supplies.16

    In October 1973, OPEC pressed the majors to accept a doubling in their rate of taxation. This, in effect, doubled the price of oil outside the United States, from $2.50 per barrel to $5.10 per barrel. OPEC more than doubled the price again in January 1974, to $11.65. At the same time, Arab oil-producing states declared an embargo on oil shipments to the United States in response to the US resupply of Israel during the October War against Egypt and Syria, while reducing their total production by 25 percent. The result was the first “oil shock”—a simultaneous price increase and supply shortage.

    The Nixon administration responded to the crisis of October 1973 by expanding the controls program, producing the most ambitious energy management system in American history. The Emergency Petroleum Allocation Act (EPAA), passed in November, allocated available crude supplies to refiners all over the country. Imbalances between refiners obtaining old oil and those with access only to new oil were corrected through “entitlements,” introduced in late 1974, which equalized costs across refiners based on their access to the cheapest crude.17 While “old” oil produced from wells that were active in 1972 (roughly 60 percent of domestic production) was priced at $5.25 per barrel, “new” oil was set at the price of imported oil, roughly $12 per barrel.18 The tiered approach was designed to encourage investment in new production under controlled prices, alleviating dependence on imports and offsetting the risk of another embargo. By the time Vice President Gerald Ford assumed the helm of the disgraced Nixon White House in August 1974, the US market in crude oil had become thoroughly regulated by priority-allocation rationing, a tiered system of government-fixed sales prices, and refinery subsidies to offset the resulting cost differences.
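    The entitlements arithmetic can be made concrete with a stylized example. In the sketch below, the $5.25 and roughly $12 tier prices come from the text, while the refiner names and barrel volumes are invented for illustration; the program’s actual accounting was far more granular.

    ```python
    # Stylized sketch of EPAA "entitlements" (late 1974). Tier prices follow the
    # text; refiner names and barrel volumes are invented for illustration.

    OLD_PRICE, NEW_PRICE = 5.25, 12.00  # $/bbl: controlled "old" vs. "new"/imported crude

    refiners = {
        "refiner_a": {"old": 500_000, "new": 100_000},  # favored access to cheap old oil
        "refiner_b": {"old": 100_000, "new": 300_000},  # mostly expensive new oil
    }

    total_old = sum(r["old"] for r in refiners.values())
    total_new = sum(r["new"] for r in refiners.values())
    # National average acquisition cost; a 60/40 old/new mix echoes the rough
    # "old" share cited in the text.
    avg_cost = (total_old * OLD_PRICE + total_new * NEW_PRICE) / (total_old + total_new)

    for name, r in refiners.items():
        barrels = r["old"] + r["new"]
        own_cost = (r["old"] * OLD_PRICE + r["new"] * NEW_PRICE) / barrels
        # Refiners with below-average crude costs buy entitlements (pay in);
        # those with above-average costs receive, equalizing effective costs.
        net_receipt = (own_cost - avg_cost) * barrels
        print(f"{name}: own cost ${own_cost:.2f}/bbl -> "
              f"{'receives' if net_receipt > 0 else 'pays'} ${abs(net_receipt):,.0f}")
    print(f"equalized cost: ${avg_cost:.2f}/bbl")
    ```

    Run as written, the refiner favored with cheap “old” oil pays $945,000 and the import-dependent refiner receives the same amount, leaving both at an equalized cost of $7.95 per barrel, close to (though slightly above) the $6.80–$7.80 domestic averages cited below.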

    Measuring success

    The allocation system enforced by the EPAA and its successor, the Energy Policy and Conservation Act (EPCA) of 1975, was incredibly complex. It had goals—reducing imports, propping up small producers, and encouraging domestic production, all while protecting consumers from sudden price spikes—that were in some instances contradictory.19 The controls tried to increase production while weaning Americans off imported oil, which policymakers regarded in the post-embargo context as a threat to national security.

    Given the program’s different goals, as well as its changing nature from 1971 through the period after the price shock of 1973–1974, judging the relative success of the controls was difficult. While crude oil products were used for many different applications, for planners the chief product to worry over was gasoline. One study found that controls lowered refiners’ costs and encouraged them to increase throughput, which increased stocks of gasoline and likely contributed to lower prices at the pump (while the price of crude was controlled, the price of products like gasoline derived from crude prices and was managed less rigidly by the EPAA/EPCA).20 Keeping pre-1973 output at the “old” level of $5.25 meant the average domestic price of crude oil hovered between $6.80 and $7.80 from 1974 until January 1976, when new regulations on imported oil raised the price to over $8.00 per barrel.21 That was significantly lower than the $12.50–$13.00 that persisted on global markets, though the price did slowly rise as “new” oil and imports displaced “old” oil priced at the lower level.

    A study by C. E. Phelps and R. T. Smith for the Rand Corporation argued that controls had little impact on the price of gasoline or fuel oil. Crediting Morris Adelman as a key influence, the authors contended that imported petroleum products, and thus trade policy shaped by global trends, were the chief influences on the price of gasoline—not the domestic planning program. “Market forces,” argued the Rand economists, “impose a greater discipline on refined products prices than do the FEA controls.”22 Others disputed Phelps and Smith’s findings, citing the relatively small volume of product imports (around 1 percent of total products consumption).23 Finding refinery margins to vary markedly from market to market, Robert T. Deacon concluded that controls on crude actually lowered gasoline prices by as much as $0.03 per gallon compared to international prices, evidence that the controls were effective and a rebuttal of the Phelps and Smith thesis.24

    Admittedly, given the profit-seeking ownership of the industry, there was no way for federal policy to increase domestic production while maintaining lower prices. While there were proposals for a “public option,” the US government did not nationalize the oil industry or pursue public oil production, preferring instead to leave the job of investing in new production with the companies. Any “new” oil that came online was sold at the higher-tier price. This produced upward pressure on domestic prices, as refiners were allowed to pass on the cost increases in higher gasoline prices.

    These contradictory impulses to satisfy both the public demand for affordable fuel and the industry owners’ condition for greater investment undermined the government’s stabilization apparatus. Despite tariffs on imported oil, domestic production could not compete with imports, which remained cheaper to produce and more abundant, and supplied a growing share of the US market. Though the US government hoped to reduce imports both for balance-of-payments reasons and to limit US exposure to another embargo, there simply was not enough domestic oil to meet demand. By 1978, the average price of oil within the United States had risen to equal the price on the global market, marking the effective end of the controls regime. Imports accounted for a quarter of total consumption.25 

    While the controls remained politically popular, critics argued they were unworkable and unfairly targeted producing companies. Policymakers had drawn advice from geologists and oil executives early in the crisis, but by the late 1970s economists had emerged as a vocal and powerful group influencing policy, with decontrol-minded and neoliberal voices the loudest and best organized. An efficient market, they argued, obviated the need for controls or import quotas, which inevitably produced inefficiencies and sowed the seeds of future shortages.26 The government could best manage energy policy by getting out of the way. By the time President Jimmy Carter confronted his own inflationary spiral, such price-theory arguments had become a major rhetorical weapon against returning to a more state-directed system of energy planning, particularly when a new geopolitical shock undid what was left of the decade’s energy order.

    Domestic deregulation and global militarization 

    In 1979, the price of oil worldwide tripled. The cause was not coordinated action by OPEC, but a revolution in Iran that shut down that country’s oil production, removing 4.8 million barrels per day, or 7 percent of total supply, from the global market. Responding to an anticipated shortage, OPEC producers raised their prices from $13 per barrel to more than $30 per barrel. 

    Rather than satisfy consumer demands for lower prices, President Jimmy Carter advocated cutting consumption. As American consumers adapted to the new volatility of the world market, Carter focused on the national security concerns linked to imported oil. It was under Carter, a candidate favored by environmentalist and progressive Democrats, that a commitment to deregulating energy at home and militarizing US efforts to secure oil abroad became ingrained in US policy. On June 1, 1979, Carter committed to ending price controls through phased price increases, to conclude in 1981.27 Following the seizure of the US embassy in Tehran in November 1979 and the Soviet invasion of Afghanistan in December, Carter codified the US commitment to station military forces astride the world’s oil fields permanently, in what became known as the Carter Doctrine: “An attempt by any outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States.” 

    Government planning failed for a variety of reasons, not least geopolitical uncertainties. The upheaval of the Iranian Revolution punctuated an ideological shift away from planning in the United States, a shift felt across the world. As historians Rüdiger Graf and Giuliano Garavini have argued, politicians in Western Europe during the 1980s moved toward market solutions to energy problems after decades of experimenting with state intervention. President Ronald Reagan signed an executive order formally ending price controls on January 28, 1981, after only a week in office, despite continued opposition from Congress.28 In 1982, Reagan vetoed a bill that would have provided stand-by authority to reinstitute price controls in the event of a major disruption in imports. OPEC had raised prices and the Arab states had embargoed the United States, but they had not caused gas lines and shortages: “It took government to do that,” Reagan argued.29

    Reagan’s surrender to market forces did not resolve the energy crisis. It was not a change in government energy policy but the aggressive disinflationary policies pursued by the Federal Reserve under Chairman Paul Volcker between 1979 and 1981 that resolved the imbalance between supply and demand which had persisted throughout the previous decade. As Volcker’s policies raised interest rates to cut inflation, the United States fell into a deep recession, and consumption fell from 18.5 million barrels per day to 15.2 million barrels per day between 1978 and 1982. Reagan boasted in 1983 that decontrol had caused gas prices to drop, when in reality it was the slowing economy and a growing global oversupply that caused prices to dip. 

    In 1985, prices collapsed further and the American oil industry suffered one of the worst downturns in its history. The Carter Doctrine, meanwhile, was demonstrated by the US efforts to protect tankers traveling through the Persian Gulf during the Iran-Iraq War, as well as the enormous demonstration of US military power against Iraq in 1990–1991. The militarized commitment to “securing” Middle East oil remains a feature of US foreign policy, even as its justification grows more nebulous in light of rising non-Middle East oil production.

    The failure of decontrol

    In the 1980s, the US replaced a public-private framework of price controls with a new framework that combined deregulation at home with muscular foreign policy in the world’s major oil-producing region. Post-oil shock instability has become the norm rather than the exception, as prices oscillate and the industry proceeds through a boom-and-bust cycle. Oil-producing states attempt to move markets, but the effects of their measures are imprecise and often limited. It is commodities traders, not state bureaucrats, who now wield the most power over the prices of key goods like oil, which float on an ever-changing market beyond the control of Western politicians, oil executives, or OPEC’s oil ministers. But this power is not the power to control: procyclical trades exacerbate scarcity pricing or short-sell glutted markets. 

    By some measures, decontrol seemed to meet the promises its advocates had made in the 1970s. Between 1985 and 2019 global oil production increased from 60 million barrels per day to 100 million barrels per day, driven by an explosion in demand in the developing world, particularly East Asia. Despite fears of scarcity resurfacing in the early 2000s, high prices and new technology spurred investment in American fossil fuels. Between 2010 and 2019, US oil production doubled from 5 million to more than 12 million barrels per day.

    But despite this new abundance, oil today exists in a nearly constant state of volatility. Surging US production, driven in large part by aggressive investment from Wall Street, glutted the market and caused the collapse in prices of 2014–2016. As OPEC attempted to right markets with coordinated production cuts, capital expenditure dried up as companies tightened their belts, unsure whether future demand justified increased spending. In April 2020, the price of West Texas Intermediate crude dropped below -$30 per barrel as the market glutted in the face of the demand shock stemming from the COVID pandemic. A year later, as economies reopened, those past investment decisions caught up with suppliers: uncertainty over future prices left investors doubting whether further spending was justified. In late 2021 the market tightened, in part due to past investment shortfalls, changing weather conditions, and the demand surge of the post-COVID recovery. Geopolitical volatility returned with tremendous force in February 2022, when Russia’s invasion of Ukraine sent prices over $100 per barrel for the first time since 2014. Throughout this instability, the geopolitical imperative of control over supply has fueled an expensive, destructive, and ultimately futile US mission in the Middle East.

    New management

    A decade ago, even as concern about climate change gained force, there was little audience for debate about the limits of markets. Today, there is increasing recognition that the transition away from fossil fuels may require “abandoning the fetish of the price mechanism in order to plan.” An examination of the history of oil price controls reveals a diverse mix of potential solutions to the current market-driven and geopolitically influenced volatility, a “chaos” and “bedlam” in need of mitigation to smooth the transition away from fossil fuels.

    From the 1930s to the 1970s, private companies and state regulatory agencies managed the price of oil. The result was an artificially inflated price which nevertheless allowed for a rise in consumption and matched supply with demand. The controls program of the 1970s, though criticized for its inefficiencies, succeeded in allocating energy at a time of profound crisis. It is doubtful that the United States could have avoided dramatic dislocations in 1973–1974 without government intervention in the oil market. 

    The current crisis is of a different variety from the last: rather than allocating limited supply, controls would ensure steady access to abundant supplies while reducing demand as consumers transition, a necessary component of reaching decarbonization targets. It is not at all clear that the price mechanism offers the most efficient means of accomplishing a smooth energy transition away from fossil fuels. Under current geopolitical conditions, prices are volatile, and volatility leads to a bunching of capital investments and the persistence of a speculative fossil fuel industry.

    As they did during the two World Wars, controls would offer producers assured profits, removing the uncertainty of the boom-and-bust cycle with which the oil industry has struggled since the 1980s. A system of controls could also be tied to incentives and tax breaks, some of which lie within the scope of President Biden’s proposed (though, it would appear, defunct) Build Back Better plan. More limited means exist within current policy instruments, including the Strategic Petroleum Reserve, which allows the United States to influence the supply-demand balance both in the short term, by adding supply, and in the medium to long term, through purchases to refill the reserve.

    Though they are certain to resist such regulations, energy companies have already publicly acknowledged the need to decarbonize. Assurances that oil prices will remain stable could be tied to policies designed to compel these companies to pursue decarbonization more aggressively. This transition is already taking place, but it should be accelerated in order to avoid a temperature increase of 3 degrees Celsius, as outlined in the 2021 report of the Intergovernmental Panel on Climate Change.

    While the Biden administration is committed to an energy transition, the United States believes the private sector will manage it without state intervention. This is a self-limiting and dangerous presumption. The volatility of the global oil market presents an obstacle to a smooth and rapid energy transition. Policymakers should consider deploying price controls for oil and oil products, in a manner that would aid the energy transition while protecting consumers and facilitating continued economic growth.

  10. Politics and the Price Level

    Leave a Comment

    In 1959, the leaders of the Organization for European Economic Cooperation (OEEC, now the OECD) appointed a Group of Independent Experts “to study the experience of rising prices” in the recent history of the advanced capitalist countries. Between the end of World War II and the end of the Korean War, economic planners had tolerated rising prices as an unavoidable consequence of postwar reconstruction and war-induced commodity speculation. These governments expected inflation to end as economies readjusted following the stalemate in Korea. “In the event, however,” the Group of Independent Experts wrote in their final report, “rising prices proved to be a continuing problem.” 

    The OECD report identified four causes of the inflation of the 1950s: rising wages, monopolistic pricing, excess demand, and what it called “special prices”—those influenced, for example, by foreign governments, bad harvests, or the lifting of government price controls. 

    At a moment when inflation control is back on the agenda, it is worth observing just how little consensus existed—during a period remembered for its supposed social cohesion and intellectual conformity—on these ostensibly technical concerns. By 1961, when the study group released its final report, its members were not even able to agree on concrete findings regarding the causes of inflation. Richard Kahn, the British economist appointed to the group, insisted on including his objection in the final report: “One of us (Richard Kahn) does not believe that the concept of ‘excess demand’ provides a satisfactory method of analyzing the processes of inflation.”1

    If unemployment defined the capitalist epoch from its triumph in the Victorian era to its etiolation during the Great Depression, the invention of the managed economy during the 1920s brought a parallel struggle with persistent inflation. Despite the specialized terms of the debate, the historical dissensus over how governments should respond to rising prices has always betrayed other criteria for evaluating techniques of inflation control.2 These are overwhelmingly political, in that they represent conflicting interests. Akin to the problem of war, when the national interest comes most fully into public scrutiny, conflicts over prices historically persist until particular interests become strong enough to establish as reasons of state their own prerogatives for social change or conservation. The process by which this transcending of politics takes place, and a set of prerogatives becomes common policy, is rarely acknowledged, but it is key to the success of any stabilization. 

    The contributions to this series, largely historical in nature, provide a starting point for considering the politics of the price level. As they show, it is these political criteria that determine the selection and timing of techniques for inflation control. By moderating or exacerbating particular increases in prices or incomes, often with the assistance of public subsidies to demand or to producers, price policies reflect the underlying balance of interests in a nation and their competition for control of the state. Because the composition of incomes ultimately shapes the profile of spending, its demands on the existing level and organization of capacity, and the level of investment in an economy dominated by private enterprise, inflation cannot be properly understood without attention to the institutions that determine incomes. Inflation control policies are thus another lens onto the debate over which private incentives should be protected by the cloak of public interest, and which should not. 

    Explaining the politics of prices 

    Price control was long considered an important instrument of sovereignty.3 The Restoration-era judge Lord Hale, on whose foundations Blackstone constructed his Commentaries, considered the power to regulate excessive prices, along with the coinage of currency, one of the fundamental powers of the king. And it was Hale’s phrase “affected with a public interest” that emerged in the United States after the Civil War as the controlling standard for the constitutionality of industry regulation. 

    During the eighteenth century, when the Absolutist kingdoms of Europe grappled with the efficiencies to be gained by allowing the emergence of independent profit centers from which to draw tax revenue, the popular assault on an increasingly illegitimate kingly power included this ancient prerogative. Removing state control of prices was understood to further diversify the international division of labor, expand the ranks of low-price producers, increase productivity, and drive prices down further through the production of greater volumes. Fostering such private initiative was the very essence of radical politics, inseparable from the reinvention of government as popular sovereignty. Enterprise, long understood to be subject to the sovereign, became a civil liberty. 

    No sooner were the kings’ powers broken than republican governments confronted similar problems of production, distribution, stability, and change. Following Louis XVI’s flight to Varennes, the National Convention, under threat of foreign invasion and an embargo on colonial trade, confronted immediate shortages of bread and American commodities—coffee, candles, and sugar among them. Prices rose. To ensure order during the defense of the republic, in 1793 the National Convention passed the Law of the General Maximum, launching an enduring interpretive debate over the economics of price control and money.4

    Control of prices has also been a key feature of American law since the early Republic, when state governments employed licensing laws to fix minimum charges for such occupations as porters, carters, wagoners, draymen, and wood sawyers. After the Civil War, the political economy of the United States was largely redefined by competition over real income among three classes: farmers; large corporations in manufacturing, processing, and distribution; and industrial workers. Responding to the challenge, state governments in the 1870s fixed the prices charged by grain elevators and railroad corporations. Statute by statute, they enlarged the remit of public rate regulation over the next five decades to include fuel, meat, insurance, housing, and human labor. Under theories of “natural monopoly” elaborated by the founders of the American Economic Association (AEA), the states established public utility commissions. Congress established the Interstate Commerce Commission in 1887, the Federal Reserve Board in 1913, and the Federal Trade Commission in 1914. 

    Following World War I, the US Department of Agriculture undertook a decades-long exploratory program on the problem of declining farm prices and the resulting redistribution of resources from country to town, rural to urban communities. Wealthy organized farmers lobbied repeatedly for the McNary-Haugen bills of 1926–1927, which proposed a government export corporation to purchase surplus commodities at guaranteed high prices for dumping in foreign markets. With the coming of the New Deal, the general suspension of antitrust laws under the National Recovery Administration, and the Supreme Court’s eventual expansive reading of the Constitution’s interstate commerce clause, the taboo against price policy was openly challenged. During World War II, an unprecedented government-financed mobilization program under comprehensive price control brought production to capacity, with inflation almost entirely suppressed.

    Contemporary debates on inflation control remain loaded with political conflict. When, by the summer of 2021, the prospect of an accelerating increase in the price level became apparent, economists and journalists turned with expectation to the meetings of the Federal Reserve. The Fed, to its credit, initially announced it would not begin raising interest rates until 2023: the pandemic-induced inflation, Chairman Jerome Powell assured his financial constituency, was “transitory.” This explanation for inaction betrayed the weakness of programmatic full-employment policy after decades of official inattention. Every month of continued rising prices undermined the public basis of the central bank’s program.

    When a few voices attuned to the interests served by the ostensibly apolitical decisions of monetary policy openly questioned the government’s response, the economics priesthood closed ranks in condemnation. Were rising prices and record profits not in part a sign of exploitative corporate prowess? Was an interest rate hike not striking bluntly at the wrong sources of rising prices? NPR gave voice to former Obama CEA chair Jason Furman, who reassured millions of commuters that “Companies always want to maximize their profits. I don’t think they’re doing it any more this year than any other year.” The Wall Street Journal opinion page was more direct: “The White House and congressional Democrats have decided that bashing businesses for rising prices is more politically useful than admitting Washington [is at] fault for the debt-fueled federal spending, misguided incentives and epic money creation that are driving inflation.” So sacred are the tenets of government-by-interest-rate that the possibility of discussing alternatives was rapidly foreclosed.

    Meanwhile, business reporters openly cited the perspective of corporate executives to explain rising prices. The Journal’s news desk reported that energy companies were using lucrative earnings from record fuel prices to purchase their own stock, rather than to invest in productive capacity. Airline executives boast: “We are very, very confident of our ability to recapture over 100% of the fuel price run up.” Business journalists conclude matter-of-factly: “Firms flex pricing power.” Asked in May about the cause of rising prices, Powell himself explained: “We see that companies have the ability to raise prices and they’re doing that.” Unremarked is the possibility that such decisions could be restrained, or at least subjected to public scrutiny. 

    Despite early assurances to the contrary, the Federal Reserve has raised its target federal funds rate in two increases totaling 0.75 percentage points since March 2022. This can only moderate price increases by reducing real investment, employment growth, and consumption. “It is time to raise rates,” the New York Times tells its readers. “Though He slay me, yet will I trust in Him.”

    The case of the Cold War

    No period in the history of price setting has more clearly defined present debates than the politics of the Cold War. Writing about the Kennedy-Johnson planning efforts, George Soule, former editor of the New Republic and director-at-large of the National Bureau of Economic Research, explained that “It is the aim of the planners, as a rule, to expand production and to avoid increase in prices.” The Kennedy and Johnson administrations did not have the legal tools for this task. Concentrated points of control in supply chains—“processors and distributors,” as Soule described them—were able, in periods of rising demand, to charge more for their products than the rise in unit costs.5 Rising sales on stable or declining unit costs mean rising productivity. The Kennedy-Johnson economists’ goal was to ensure the equitable distribution of the gains of rising productivity through a system of wage-price guidelines. But as productivity rose in the 1960s boom, the majority of these gains accrued to corporate balance sheets through higher prices. 
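    The arithmetic behind the guidelines can be made concrete with a stylized sketch. The numbers below are invented for illustration, not the Council of Economic Advisers’ actual figures: if money wages rise no faster than trend productivity, unit labor costs stay flat, and any price increase on top of flat costs flows straight into margins.

```python
# Stylized wage-price guidepost arithmetic (invented numbers).
# Unit labor cost = hourly wage / output per hour, so its growth rate is
# approximately wage growth minus productivity growth.
productivity_growth = 0.032  # assumed trend growth of output per hour
wage_growth = 0.032          # guidepost rule: wages track trend productivity

unit_labor_cost_change = (1 + wage_growth) / (1 + productivity_growth) - 1
print(f"Unit labor cost change: {unit_labor_cost_change:+.2%}")  # +0.00%

# Soule's "processors and distributors": a firm that raises prices 2%
# on top of flat unit costs converts the whole productivity gain into margin.
price_increase = 0.02
margin_gain = (1 + price_increase) / (1 + unit_labor_cost_change) - 1
print(f"Margin gained from pricing power: {margin_gain:+.2%}")  # +2.00%
```

    Under the guidelines, holding prices steady would have passed the productivity gain on to wage earners and consumers; pricing above stable unit costs captured it for the firm instead, which is precisely the pattern Soule described.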

    During the Cold War, the manufacturing industry proved most resistant to guidelines on prices and profits. As Soule wrote, “when there are only a few producers, agreement on prices or the tacit following of a price leader may more easily occur.” The legal scheme bequeathed by the New Deal-cum-Cold War state exempted the power of comprehensive price control from the general police power. Though the Korean War Defense Production Act established military stockpiles and direct subsidies for producers, manufacturers and food processors had lobbied tirelessly to ensure the repeal of its price control titles. 

    In 1961, the Group of Independent Experts of the OECD understood this exemption of manufacturing prices from regulation as a source of inflationary pressure. “For the United States,” they wrote, “there is reason to believe that attempts in a few industries to raise rates of profit at high levels of sales, or to have a lower break-even point, contributed to the rise of industrial prices that occurred” during the 1953–60 period.6 Accounting for rising costs, the OECD group considered the size of the price rise during the 1950s excessive: “gross profits in certain industries would be very high if something like capacity operations in the economy were attained.” As Lyndon Johnson and the Eighty-Ninth Congress would soon discover, corporate incomes were indeed engorged by the rising capacity utilization stimulated by the Vietnam War. Under the inflation these profits produced, the government was incapable of holding wages to guideline norms. 

    In pursuit of full employment, various governments across the North Atlantic attempted to restrain private pricing and wage-making through productivity guidelines on the American model. Rather than the “price controls” of the wartime decades, economists came to discuss the problem of controlling sectoral and class income claims in terms of “incomes policies,” directed primarily at organized labor. In addition to the annual Economic Report of the President, published by the Council of Economic Advisers, the Federal Republic of Germany published reports recommending guidelines; the French Commissariat général du Plan included guidelines and formal controls under the Fourth, Fifth, and Sixth Plans; the Austrian government established a Price-Wage Commission; and the British published guidelines through the National Economic Development Council and the National Incomes Commission, both established in 1962. 

    The economic theory behind these new inflation control programs placed excessive demands on the organized politics of the mixed economies. Within the new councils and commissions for planning the macroeconomy there remained the vestigial rights and privileges of private property. After some success in the early 1960s, such guideline policies almost uniformly failed against the accelerating inflation at the decade’s turn. By the 1970s, economists and administrators sought to give incomes policies greater force. Denmark had frozen wages in 1963; Great Britain did so in 1966 and again in 1972. The French froze prices in 1963, the Swedes in 1970, and the Norwegians in 1972. In 1971, the Nixon administration froze both wages and prices for three months, followed by a partial system of wage-and-price controls. 

    None of these efforts succeeded in halting world inflation, for two reasons. The first was that, given private control of investment, profit limitation often came at the price of capacity expansion. Of all the sectors affected by the Nixon freeze, none was more important than energy: while the controls insulated the US from the geopolitical price shock emanating from the Middle East, under their restraint on profits the oil companies nevertheless reduced output, and shortages developed. “I have yet to see… any firm evidence that efforts of the sector… have produced any significant increase in investment or in employment, and that is the test,” said Jack Jones of the British Trades Union Congress (TUC) of British incomes policies in 1977. “In my view, an industrial strategy which relies only on polite talks with industrialists and trade associations… is not a strategy at all.”7

    So long as investment and production remained privately controlled by law, investment decisions were held hostage to private requirements on profit rates. The political scientist Gerhard Lehmbruch appraised the failure of such experiments in West Germany and Austria: “Enlarging the field of corporatist economic decision-making beyond incomes policies (or, more exactly, control of wage policies) would have meant, among others, control of profits and of investment…” The late Leo Panitch described these 1970s experiments in inflation control as the “specific form of state-induced class collaboration in capitalist democracies” characterized by a “belief in the neutral state and its promulgation of ‘planning.’” To Panitch, faith in the state’s neutrality in the class struggle revealed “the emptiness of this planning.”8

    The social and the national in price stabilization

    Popular consent was the second cause for the failure of incomes policies. During the 1970s, North Atlantic governments exhorted their publics to participate in incomes policies on the grounds of “growth,” but continuing inflation revealed the weakness of this purpose as a cause in itself capable of securing popular consent to planning. If macroeconomic stability required falling real wages or controls on employer profits, what good was it? A general expansion, in which all lines of industry are stimulated by an undirected expansion of spending, proved impossible without inflation. Particular public goals, such as affordable housing, increased income for disadvantaged groups, a growing renewable energy sector, or greater social equality, compete for resources with other private goals, such as luxury housing, corporate profits, or fossil fuel company operations. Some sanctions must come into play to consciously allocate resources toward a capacity profile geared toward the desired composition of full-employment demand. 

    War, NBER founder Wesley Mitchell wrote, alters economic calculation. Rather than figuring goods in terms of money to direct production by the profit motive, managers must figure money in terms of goods to plan the financing necessary for the social project. Prices are controlled; the government enters the market as a purchaser and as a distributor, frequently distributing on a non-price basis. Resources must be allocated wherever they can expand the necessary composition of final goods required by the program. Unless some particular composition of output is defined, government efforts to break any particular set of capacity bottlenecks in the course of a general expansion will only reveal other points of resistance up and down the supply chain. “The factor that sets the effective limit to accomplishments shifts from month to month, still more from year to year, and from country to country,” Mitchell wrote.9

    What those committed to the principle of incomes policy discovered during the last crisis of capitalism was that the objective of national planning had to be spelled out to the public in terms more specific than “economic growth” if private participation was to be ensured. For such a strategy to have any chance of success, the bottleneck-breaking authorities must have some sense of the desired composition of final spending which their efforts are intended to satisfy. In a representative government, only elected leaders can provide a moral sanction for such national planning. “What was the collective purpose that could weld individuals together and whose expression could be the object of politics?” the historian Richard Adelstein asked of the Progressive-era legal reforms. “What… was the public analogue to corporate profits?”10 The question has confronted peacetime government ever since. In the United States, only the armaments program of World War II, and the particular composition of procurement contracts and materials required to meet them, has supplied such a collective purpose. It has had no parallel in America’s national history. 

    The debate over incomes policies during the 1970s turned on this underlying theoretical point. As Great-Society economist Gardiner Ackley wrote during the Nixon controls: 

    Many believe that the ‘consent’ of the great economic interest groups—which, in the long run, is the only possible basis for a successful system of inflation control—can only be secured and maintained if the system of wage-price restraints is coordinated with the other tools of government policy in order quite consciously to promote a progressive redistribution of income in specific directions which society approves. Indeed, to the extent that the source of existing inflationary pressure lies in a fundamental dissatisfaction with the existing income-distribution on the part of one or more powerful groups, while other groups resist any significant change in that distribution, there can probably be no real ‘consent’ to an incomes policy.11

    The unhappy search for popular consent during the 1970s exhausted the Cold War-liberal faith in what had become of the welfare state. 

    The need for a larger social vision brought Ackley, the consummate American Keynesian technocrat, to a belated realization reached earlier by countless leaders of movements for social transformation—from the “industrial democracy” of the Gilded-Age socialists and the Populist Party’s “cooperative commonwealth” to Martin Luther King Jr.’s “Arc of History”—the realization that those who hope to self-consciously open new chapters in human experience must of necessity turn the page of their own history. 

    Nixon, in his cynical way, attempted this renaissance: it won him a second inauguration, but at the cost of the prestige and moral authority of the federal government. Weakened by the Nixon administration’s exploitation of the situation, public officials’ role in controlling prices came under coordinated business attack during the 1970s. Employer organizations such as the Business Roundtable; research institutes funded by business grants and private fortunes, such as the Cato Institute, the American Enterprise Institute, and the Heritage Foundation; and professional associations such as the Federalist Society remade the legal scheme bequeathed by the New Deal. In the process, the courts and Congress pared government rate regulation considerably. As interests recomposed, under deindustrialization and growing trade liberalization, into a competition for national income between the energy sector and the growing service industries, health care the most refractory among them, a popular understanding of the responsibility for prices was displaced from the institutions of production and distribution to the Federal Reserve and the Treasury. Prices fixed privately by giants in the market are now the norm: hence the crisis when the sovereign interest requires an expansion of effective demand. 

    The legal scheme and the intellectual priority given to inflation’s macroeconomic causes shape our social world. With public budgets restrained, and employer power over production and prices unchecked, we turn to interest rate policy in the breach. Within the labor market, this intellectual prohibition against structural reform protects and enforces the largely autonomous structures of race, sex, gender, ethnicity, and citizenship shaping the distribution of income. Low wages for public school teachers in many states, for example, are the legacy of an era when the profession was dominated by women whom many expected to work only partial careers, earning second incomes to supplement their husbands’. The growth of open-shop construction is a perverse example of the revolution in ethnic and racial norms: enabled by the chiseling of federal labor and employment law, the right to work for poverty wages has been made into a positive case for diversity in the industry. In our own time, the persistence of differences in income and prestige by gender and race has been a boon to a beleaguered labor movement, which rightly sees them as a basis for new organizing. But until wage policies are coordinated with both the macroeconomic throttle and the control of other incomes, the growth of a new labor movement will be vetoed by the public preference for recession over inflation. In a country where elections reign supreme, only a broad social vision will command the popular consent required to coordinate this national program. 

    In the experience of the United States, only victory in war has supplied that vision. When American planners sought to use the symbols of patriotism and growth to shoehorn a stabilization policy into the legal schema of the Cold War, the result was a profound disorganization of national life. The inadequacy of these symbols alone for the project of renovating that legal scheme is even greater than for pursuing stabilization within it. Building up institutions to serve this function toward ends more distinguished than human destruction today is a task of historical importance. As the quest for inflation control policies returns from its historical sojourn, and the conflict of social classes is reconvened, these essays offer new perspectives on one intractable problem of our modern forms of social organization.