In early May, in the wake of the Trump administration’s evolving immigration policies, the Murphy Institute convened national and local experts and leaders in a daylong conference to discuss the implications of these changing positions for immigrant communities. In this excerpt, Muzaffar Chishti, Director of the Migration Policy Institute, discusses the most insidious and least well-known aspects of Donald Trump’s progress toward fulfilling his campaign’s anti-immigrant rhetoric.
Remaking the Rust Belt: The Postindustrial Transformation of North America
By Tracy Neumann
University of Pennsylvania Press, 2016
From Steel to Slots: Casino Capitalism in the Postindustrial City
By Chloe Taft
Harvard University Press, 2016
In the aftermath of the 2016 election, all eyes turned to the Rust Belt. We heard stories of Youngstown and Erie, of the misery of coal country, and of how the anger of laid-off factory workers drove them into the arms of Donald Trump. Two new books suggest we have a good deal more to learn about what has been happening in what used to be the arsenal of democracy. Tracy Neumann’s Remaking the Rust Belt: The Postindustrial Transformation of North America and Chloe Taft’s From Steel to Slots: Casino Capitalism in the Postindustrial City examine postindustrialism in the former steel towns of Pittsburgh; Hamilton, Ontario; and Bethlehem, Pennsylvania. Things are not so simple after the factory gates close, both argue.
Neumann and Taft both complicate and expand on the definitions and geography of deindustrialization offered in seminal works like Barry Bluestone and Bennett Harrison’s The Deindustrialization of America and Daniel Bell’s optimistic The Coming of Post-Industrial Society. These well-researched, passionately argued books each show, in the words of Neumann, “the primacy of local place in understanding how global and national social, political, and economic processes that constituted postindustrialism were worked out on the ground.” In their close attention to the particularities, processes, and context of how local communities grappled with these large-scale transformations, they complicate not just existing definitions of postindustrialism, but also neoliberalism. Neumann and Taft’s approaches reveal the value in exploring what geographers Neil Brenner and Nik Theodore have deemed “actually existing neoliberalism” rather than simply critiquing neoliberal ideology.
While Remaking the Rust Belt and From Steel to Slots seem to cover similar ground, they are neither in conflict nor redundant. Both authors expand the parameters of the “community study” approach to scholarship, breathing new life into the method. Yet they are quite different books and offer distinct perspectives and approaches.
Remaking the Rust Belt redraws the geography of the Rust Belt, drawing in Canada and Western Europe. Neumann challenges the widespread assumption that postindustrial transformation was historically inevitable and the by-product of “natural business cycles” and “neutral market forces.” Instead, she reveals how it emerged from the deliberate efforts of public-private partnerships between politicians and corporate elites. She defines postindustrialism simultaneously as a “pervasive ideology that privileged white-collar jobs and middle-class residents” and “a set of pragmatic tactics” of public-private partnerships, which “included financial incentives, branding campaigns and physical redevelopments.”
Pittsburgh is routinely celebrated as the success story in narratives of urban rebirth, while Hamilton—its smaller Canadian counterpart—is seen as Toronto’s unsuccessful sibling. The comparison between the two steel cities enables Neumann to show both that the Rust Belt was a transnational phenomenon and how postindustrialism developed unevenly not just within cities, but among them.
Neumann demonstrates that while postindustrial ideology doggedly emphasized the future, it had roots firmly in the growth coalitions that had dominated postwar cities. Upending many treatments of postindustrial and neoliberal urbanism, she contends that public-private partnerships did not emerge de novo in the 1970s, but were intensifications of arrangements forged during the era of urban renewal. The postindustrial city was not a form of rupture, but rather continuity.
Pittsburgh offers an effective case in point. In the 1950s and 1960s the city initiated a massive urban renewal program called the Renaissance aimed at revitalizing the central business district (“the Golden Triangle”), which created key alliances between the public and private sector. In the 1970s and 1980s, a series of mayors in the New Democrat mold joined corporate leaders to reinvigorate that model. The members of this growth coalition saw in the decline of steelmaking not an impediment, but an opportunity. They initiated “Renaissance II,” mobilizing public subsidies to draw corporate headquarters downtown, swapping smokestacks for skyscrapers. Renaissance II developed new retail, entertainment and leisure spaces beyond the Golden Triangle to transform the city into “a postindustrial utopia for young well-educated professionals.”
Neumann provides an insightful analysis of how the city’s branding campaign served as a “material and symbolic” effort to create a new mental map of urban space. Mayor Richard Caliguiri’s goal was a population of “less people with high incomes than more people with relatively low earning and spending power.” He wanted “to tear every picture of Pittsburgh’s smokestacks out of the country’s textbooks.” While branding and other tactics stopped short of that ambition, they transformed Pittsburgh’s reputation and physical landscape. By the mid-1980s, the city had earned the designation of “America’s Most Livable City.” The mid-1980s, of course, simultaneously saw a Depression-scale social and economic collapse in the old blue-collar neighborhoods along the Monongahela River. “Livable” for whom?
Pittsburgh, nevertheless, became an international model for other cities across the Rust Belt, including Ontario’s Hamilton. That steel center had its own growth coalition, which sought to remake it into a headquarters for the service and financial sectors. Hamilton’s bureaucrats emulated Pittsburgh, exchanging ideas and taking “policy tours” of their southern neighbor. However, Canadian policy constrained Hamilton’s leaders. The federal and provincial governments imposed a growth policy requiring the city to remain a manufacturing center and preventing it from taking steps that might threaten Toronto’s position as the center of the postindustrial economy. Despite the efforts of the growth coalition and the consultants it hired, Hamilton had difficulty transcending its “lunch bucket” image. Ultimately city leaders had no choice but to embrace that reputation in an urban branding campaign rather less successful than Pittsburgh’s.
Rather than examine the hard-luck workers, Neumann focuses primarily on the efforts of the politicians, corporate leaders, technocrats, policy officials, and urban branders who together produced these new visions of the postindustrial landscape. She is careful to neither celebrate nor revile them. Neumann instead contends that these figures pursued such a vision because they saw it as their only politically viable option. This was less “neoliberalism by design” and more “neoliberalism by default.” Such choices often emerged from forces beyond policymakers’ control; nonetheless, they inscribed inequality even deeper into the urban landscape.
Neumann’s attention to urban policy is important for understanding the construction of key structures and systems of inequality. The book gives concrete meaning to abstractions like neoliberalism and postindustrialism. Her approach, nevertheless, demonstrates a tradeoff not only for policymakers but also for the scholars who study them. Neumann’s emphasis means that she gives less voice to the blue-collar and poor residents who absorbed the brunt of the urban transformation. She describes how branding campaigns glossed over Pittsburgh’s tradition of labor unrest, but at times she unintentionally replicates that tendency. She does describe the valiant and often dramatic efforts of activists against urban growth coalitions, though to little avail. In the epilogue, Neumann notes that by 2010 Pittsburgh had the highest rate of poverty among working-age African Americans in the forty largest metropolitan areas in the United States. While the book explains some of the policy that produced this calamity, there is little discussion of the people who experienced it.
While Neumann might not pay enough heed to how ordinary people made sense of the remaking of the urban economy and landscape, this is the central focus of From Steel to Slots. Taft provides an ethnographic analysis of Bethlehem, Pennsylvania, which sits 300 miles east of Pittsburgh. Throughout the twentieth century it served as headquarters of Bethlehem Steel, at one point the world’s second-largest steelmaker. Following the closing of “the Steel,” Sheldon Adelson’s Sands Casino Corporation—the world’s largest casino operator—opened an outpost on the former site of the mill in 2009. This transformation reveals an alternative model of public-private partnership and economic redevelopment rooted in the gaming industry. Yet the relationship between Adelson and local bureaucrats is not what interests Taft. Rather, she concentrates on “how locals have variously embraced and grappled with the remaking of their steel town as a postindustrial city.”
While Neumann defines postindustrialism in terms of urban policy and development, Taft is more invested in its cultural dimensions—what postindustrialism means to those who live it. She offers a literally fine-grained analysis, showing how particles of dust, Christmas lights, and mailboxes all became sites where residents grappled with the transformation of Bethlehem. In her examination of the texture of the city, Taft illustrates how economic restructuring left its mark on a range of spaces and experiences, from churches and local festivals to heritage tourism exhibits. Where Neumann grounds her argument in planning reports and other evidence culled from municipal archives, Taft relies on interviews with 76 residents. These lend the book a conversational tone.
Taft’s source base also helps her show that deindustrialization was “not a finite moment or breaking point.” She finds instead a “diversity of experiences and interpretations of ongoing economic change.” Taft’s informants transcend class, racial and spatial boundaries and include representatives from Bethlehem’s sizable white-collar workforce and significant Latino population. These perspectives help her dispel the assumption that white steelworkers were the only people who lived in Bethlehem or who experienced deindustrialization.
Taft avoids declaring Bethlehem’s transformation a success or a failure. She does not offer a nostalgic view of the city’s bygone industrial era or castigate Bethlehem’s links to a global gaming network extending from Las Vegas to Macau. Bethlehem has long been enmeshed in a wider world. The Steel’s products were always part of the global market. The same roadway built to carry Bethlehem’s products to market now brings Chinese immigrants from New York and New Jersey to test their luck at the Sands’ baccarat tables. “Lived from day to day,” she writes, “postindustrialism reflects an ongoing process marked by complicated, and at times paradoxical, continuities that also challenge well-worn categories of ‘before’ and ‘after.’”
The casino itself highlights this duality. The Sands Corporation decided to abandon the Venetian-themed aesthetic of its other casinos for an industrial style, designing the building to evoke a steel mill in 1942. Taft suggests that by selecting that particular year, the apex of the mill’s production, the casino’s design offers as much of an escapist fantasy as the gondolas in Las Vegas. The decision of the Sands to embrace an industrial aesthetic might appear perverse—even cruel. Taft, though, finds in it a sign that the process of creative destruction was never as complete as it might seem.
Taft provides a fascinating and detailed discussion of the ways in which the casino operates as a “postindustrial factory”—a phrase that succinctly collapses the dualities of before and after. While the dealers’ jobs reproduce the routinization of factory work without the midcentury social contract, many of the dealers have absorbed the company ideology about the value of entrepreneurship, individual responsibility, and flexibility. She suggests the dealers’ labor and attitude stand in for the broader experience of postindustrial employment insecurity. However, if casino workers have indeed bought into management ideology, it seems doubtful that they would join in the kind of collective action that Taft repeatedly suggests the built environment and processes of “place-making” could bring.
From Steel to Slots suggests that the casino itself reflects the larger financialization of the “new economy.” While Taft’s focus on Bethlehem allows her argument an intimate scale, her discussion of economic and political structures is much vaguer than Neumann’s. Taft frequently alludes to neoliberal and free-market logics without explaining what she means. This type of analysis ultimately makes the forces of the market seem inevitable and natural rather than the product of policies, deliberate decisions by politicians and corporate leaders, or even identifiable economic processes. Despite a few references to Adelson, managers, bureaucrats, and politicians play a minor role in the book, and there is little attention to specific policies—those that Neumann draws out so well. (For example, what process led Pennsylvania to loosen regulation of the gaming industry in the 2000s, allowing the casino to be built in the first place?) Taft’s attention to how residents made cultural and social meanings out of economic restructuring is compelling; ultimately, though, without more context about local, state, and international political economy, it is hard to grasp in material terms how such meanings provide building blocks for the “more equitable future” she calls for. It may leave some readers skeptical that such a future is possible.
As Neumann points out, policy decisions shape “the material possibilities and daily lives of urban dwellers.” By now it has become clear that our national politics are being whipsawed by the retribution for decades of elite control over such material possibilities. In facing the new political order emerging throughout the North Atlantic, sensitivity to the unevenness of postindustrial development—to “actually existing neoliberalism”—is needed now more than ever. Neumann and Taft’s collective analysis is extremely important for demonstrating that there is no one definition or uniform set of policy prescriptions that will work in all communities. There is no singular Rust Belt space, resident, or experience; nor is there a singular postindustrial city. Economic change has gnawed away quickly here, slowly there, creating a variegated map of deprivation and prosperity. Such understanding will be crucial in organizing to resist policies that do not take such unevenness into account and in proposing ones that do.
In 2016, Uber deployed its first self-driving cars in Pittsburgh. The high-tech car service giant has come to represent both the transformation of American cities into “livable” playgrounds for the affluent and, at the same time, the worst kind of gig economy working conditions. The growing tech boom in Pittsburgh—reflected in Uber’s presence—is the fruit of the urban redevelopment efforts of the postwar years and particularly the 1970s and 1980s. This approach to postindustrial renewal has, on its face, been successful in bringing about a new economy. But we should not overlook the irony that even cab drivers are now at risk of replacement by automation. Professional-class Democrats counting on “retraining” to make the Rust Belt working class vanish and drop its grievances would do well to take note, and to heed the lesson of these books: the Rust Belt did not just happen, but was made. Those who live through economic restructuring do not always have the same experiences, or interpret those experiences in ways that are predictable. No one is immune to creative destruction, but we are not helpless before it either.
In years past, it has often been difficult to find anti-militarist beacons in Congress – Democrats included. Particularly since Dennis Kucinich’s 2013 departure from the House, it’s sometimes seemed that the only prominent national political figure willing to oppose the latest White House military venture was the somewhat-libertarian Senator Rand Paul. And today, with a Democratic Party left struggling to emerge and define itself in the midst of the Trump opposition, the imperative to create a sane foreign policy – distinct from that of politicians whose domestic policies often verge on the insane – has never been greater. A Democratic left cannot claim to offer a thorough-going alternative to business-as-usual Washington politics until and unless we break with the conventional bipartisan wisdom on foreign policy. All of which lends particular significance to Bernie Sanders’s prominent opposition to Donald Trump’s Syria bombing.
Much of the mainstream response to the Syria raid was, of course, familiarly tragicomic – at times almost laughable – with one network newscaster sufficiently moved by the “beautiful pictures of fearsome armaments” as to quote “the great Leonard Cohen: ‘I am guided by the beauty of our weapons.’” Another opined that by launching the attack, “Donald Trump became president of the United States.” More significantly, Congressional leaders who had previously vowed to fight the man’s administration tooth and nail hastened to back him as he violated American law by usurping Congress’s exclusive right to declare war and violated international law by attacking a country that has not attacked us.
The really tragic aspect of the overall reaction, however, lies in the presumption that with this latest act of war, we have actually “done something” in response to the horrific circumstances of the Syrian war – “done something,” that is, in the sense of doing something positive. And it is precisely on this point that the Sanders response is most important, as he called on the Trump administration to “explain to the American people exactly what this military escalation in Syria is intended to achieve, and how it fits into the broader goal of a political solution, which is the only way Syria’s devastating civil war ends.” Now this in itself hardly qualifies as a radical statement. And the fact that it stands out as in any way unusual is itself an indictment of the current environment, in which “doing something” meaningful for the Syrian people is presumed to require dropping bombs and/or sending troops somewhere – and little else. But given our country’s history of liberal leaders who talk tough about taking on the powers that be, only to rush to join the parade and salute the commander-in-chief when he plays the war card, the Sanders statement stands out as an all too rare example of a leader on domestic issues proving equal to a foreign policy challenge.
To be fair – and frank – about the current situation, let’s not ignore the fact that when Sanders says, “we should’ve learned from the wars in Iraq and Afghanistan … that it’s easier to get into a war than get out of one,” and that if “the last 15 years have shown anything, it’s that such engagements are disastrous for American security, for the American economy and for the American people,” it was Barack Obama who was in the White House for most of those years. To put it bluntly, the Obama presidency largely anesthetized the American antiwar movement. Again, to be fair, antiwar activists weren’t the only ones lulled into complacency – let’s not forget that the Nobel Peace Prize Committee gave him the world’s most prestigious prize early in an administration that went on to bomb seven countries. But if it takes a figure like Donald Trump to restore the American left’s mojo, well, so be it.
A couple of short years ago, it was a fair question whether there really was such a thing as an American left – outside of college lecture halls and countercultural institutions. No more. Post-Sanders campaign, we now find millions seeking a government not dominated by Goldman Sachs and its Wall Street peers. Millions viewing it as absurd that the richest nation on earth is unwilling to guarantee health care to all of its people. Millions considering the pursuit of corporate profit an inadequate governing principle for meeting twenty-first-century global environmental challenges. Millions looking for leaders who will reverse the growing divide of wealth and power – across the nation and worldwide. And, likewise, there are millions who recognize that the nation – and the planet itself – cannot indefinitely sustain our current delusionary policy of pursuing world peace through ever-increasing armament and intervention.
No one in recent politics has been more insistent on the point that “It’s not me, it’s us” than Bernie Sanders. But at the same time, there is no getting around the fact that individual politicians are sometimes required to rise to the occasion. And Sanders has done so at a particularly important juncture. Frankness also requires that we recognize that he has not always shone in this area throughout his career: after starting out as a mayor with a foreign policy – meeting with Ronald Reagan-nemesis Daniel Ortega, the Sandinista President of Nicaragua, when he held the top job in Burlington, Vermont – his focus shifted to domestic economic issues when he went to Congress, and on occasion he seemingly fell into orthodox foreign policy voting.
He did, however, unquestionably break new ground in the history of presidential debates when he called climate change the greatest threat to our national security, excoriated the policy of overthrowing legitimately elected governments dating back to Iran and Guatemala in the 1950s, and took Hillary Clinton to task for her association with Henry Kissinger. And now, in standing up against the tradition of critical American political thinking ceasing once the president gets violent, he nurtures our chances to really develop an alternative to the bipartisan endless war consensus. And yes, in the long run that is a job for us, not just him.
I carry numbers. Ernest Hemingway carried numbers too. In his case, it was the numbers of roads and regiments. He didn’t care much for platitudes about glory, sacrifice, honor, or courage. He found them obscene. So do I. But my numbers are different from his. The numbers I’m most conscious of – that claw at me – are the numbers of the dead. Twenty-five. That’s the number of
Single-payer health care is back on the radar after the collapse of Trump’s attempt to “repeal and replace” the ACA. Senator Bernie Sanders announced that he would soon be introducing a “Medicare for All” bill. While pollsters have known for years that a majority of Americans, including many Republicans, support single-payer, universal health care, the conservative case for it hasn’t received much attention.
Conservative pundit David Frum writes:
“Whatever else the 2016 election has done, it has emancipated Republicans from one of their own worst self-inflicted blind spots. Health care may not be a human right, but the lack of universal health coverage in a wealthy democracy is a severe, unjustifiable, and unnecessary human wrong. As Americans lift this worry from their fellow citizens, they’ll discover that they have addressed some other important problems too.” (http://pnhp.org/blog/2017/03/28/can-more-republicans-support-single-payer/)
The problems Frum lists range from hindered entrepreneurship to the struggles of the white working class to a lack of racial equity. While not all of these problems weigh equally on the minds of conservatives, the common thread is the understanding that universal health care coverage would make other goals easier to achieve.
Avik Roy of National Review argues that Republicans must “come to agree that it’s a legitimate goal of public policy to ensure that all Americans have access to quality health care” and that it is a mistake to “cede this moral ground to the Left”. He continues: “To credibly advance this approach, conservatives must make one change to their stance: They have to agree that universal coverage is a morally worthy goal…Ensuring that every American has access to quality health coverage is a legitimate goal of public policy, and it can be done in a way that expands freedom and reduces the burden on American taxpayers.” (http://www.nationalreview.com/corner/368772/conservative-case-universal-coverage-avik-roy)
Many past arguments against universal health care have revolved around a dislike of larger government and the burden on taxpayers that Roy mentions. In his article for IVN, Craig Burlin argues that neither of these has to be a reality in order to accomplish something along the lines of “Medicare for All.” He points out that:
“Unless someone is very poor or disabled and likely receiving disability or Medicaid benefits already, the tax base can be broad. This could be via a transaction tax, meaning everyone would pay including the underground economy and those who are at an age where they might forego coverage. The insurance pool would therefore be 100%, an actuarial benefit.” (https://ivn.us/2016/03/04/conservative-solutions-to-universal-health-care/)
He also references Australia’s system, in which nearly half of the population retains private health insurance despite being entitled to free treatment, noting, “Those of greater means can always afford things others cannot.” Burlin also makes a moral argument similar to Frum’s and Roy’s: “There is a compelling argument to be made that basing medical care entirely on the profit motive is likely going to produce the kinds of winners and losers that are hard to justify on an ethical basis.”
While such arguments from the right are unlikely to convert Republican Congress members any time soon, there’s evidence to suggest that movement is possible.
Despite all of the parallels drawn between President Obama and Franklin Roosevelt, the new administration initially responded to the health care crisis as though it were 1993, not 1933. Obama sought a minimalist health care reform solution, rather than seizing on the exceptional political moment to strike out in a bold new direction.
Held captive for so long by neoliberal ideas about how best to organize the U.S. economy and society, Obama and many other would-be reformers put competition and consumer choice at the center of their efforts to reform the U.S. health care system. Dozens of major organizations close to the Democratic Party, ranging from the AFL-CIO to MoveOn.org to the Children’s Defense Fund, mobilized over the last year or so on behalf of a breathtakingly modest solution: creation of a public health care plan—essentially a nonprofit insurance company—to compete with the commercial health care insurers. They largely abandoned the call for a single-payer health care system (modeled after Canada’s) around which many progressives have rallied since the demise of the Clinton administration’s Health Security Act. This faith in market-led solutions for health care remained largely unshaken in spite of the recent financial collapse.
The Public Plan Panacea
The centerpiece of their efforts was the creation of a new government-sponsored health care plan for uninsured Americans under age sixty-five who lack employer-based health benefits and do not qualify for Medicaid. This group would be able to choose between a standard package of benefits offered by the public plan or a comparable one provided by private insurers.
Private insurers insisted that a public plan would not compete on a level playing field and would ultimately drive them out of business. Their contention subtly recast the debate over health care reform. The focus shifted to how to make the public plan a “fair” competitor and away from the enormous inequities of the under-regulated private insurance market that have contributed so significantly to the country’s health care crisis. In order to rally support for a public plan and neutralize charges of unfair advantage, some supporters of the public plan watered down the original proposal beyond recognition or bargained away (or shunned) key reforms needed to rein in insurers and providers.1
Supporters of a new government-sponsored health care plan extolled the public sector for its reported superior ability to contain costs and pursue innovations that improve the quality of care.2 They heralded Medicare in particular for retaining wide access while containing expenditures on health care through cost-saving innovations like the prospective payment system introduced in 1983 and fee schedules for doctors introduced in the 1990s.3 Left out of the story is that, for many years, Medicare was largely an unregulated cash cow for providers. The quid pro quo to get physicians and hospitals to end their jihad against Medicare in the mid-1960s was an agreement to reimburse them on a fee-for-service basis and eschew imposing serious cost or budget controls.
For public programs, the devil is in the details. Medicare has been able to spread risks broadly and maintain wide access for the simple reason that the government bluntly requires it to do so. Nearly everyone qualifies for Medicare upon reaching age sixty-five, regardless of health status or income level. This has created at least some sense of social solidarity and given older Americans (across the board) a stake in defending a generous health care system for the elderly. Medicaid, the means-tested health care program for low-income Americans, has had a strikingly different trajectory. It has been far easier to starve Medicaid for funding because lower-income Americans do not enjoy the political clout of the elderly. Also, Medicaid is a mixed state-federal program, while Medicare is primarily a federal program with benefits not varying significantly from state to state.
The public plan that reformers envisioned differed from Medicare in key ways that reinforced the current pathologies of the U.S. health care system. First, supporters talked about the need for competition and choice. Yet employees (and their dependents) who receive health insurance through their workplaces (nearly 160 million Americans) would likely not be free to choose the public plan. These captive consumers might only have the option to go public if their employers decided to switch over to the public system or gave up providing benefits altogether and paid the penalty tax. If that penalty tax is set too low, employers might stop providing health insurance, forcing more of the health care coverage costs onto the government and, ultimately, taxpayers.
Even if the public plan turned out to be cheaper and better than private insurance plans, employers who continue to provide health care coverage would not necessarily offer their employees the public option. For some employers, this would be like drinking the Kool-Aid. In the history of the development of U.S. social policy, business leaders have repeatedly allowed their visceral ideological opposition to governmental programs to trump their immediate bottom-line calculations. The fear is that permitting an expansion of the public sector in one area opens the door to more governmental expansion in other areas. A number of large employers, most notably General Electric (GE), walked away from Clinton’s Health Security Act for precisely this reason.
Another crucial factor is that many large employers are, not surprisingly, large manufacturers of medical devices and other medical products. Is GE ready to funnel its employees into a government-sponsored plan with potentially enormous power to, say, reduce the costs and utilization of MRI machines, a multibillion-dollar-a-year business for the company? Furthermore, as much as employers begrudge the cost of employees’ health care coverage, many of them do not want to relinquish the paternalistic control that employer-based benefits give them over their workers.
In theory, the public plan should be able to provide better benefits and services at lower costs because it presumably would not be saddled with high administrative costs and pressure to turn a profit for shareholders. But the public program, with its superior benefits and initially lower costs, could end up becoming a magnet for sicker patients in need of costlier care. This would drive up the costs of the public plan, prompting healthier people to flock to less expensive private insurance options.
It is not obvious that the public plan could compete with private plans in terms of costs, quality, and services alone if U.S. insurance companies (unlike those in Europe) remain free to market and advertise their products with few restrictions. One can imagine driving down the highway and seeing massive billboards paid for by private insurers with slogans like: “Should Uncle Sam’s plan tell your doctor what to do?” This could erode public confidence in the government’s ability to solve pressing problems.
The public plan option could undermine public support for governmental intervention in other realms of social policy. It might end up pitting the captive consumers of employer-based private insurance against people enrolled in the public plan. It would be politically explosive if employees covered by private health insurance came to believe that they were providing huge subsidies to a superior public plan in which they were not permitted to enroll. Private insurers would presumably frame their marketing and political strategies around allegations of unfair cost-shifting, putting the public plan further on the defensive.
In short, public plans are not necessarily innately superior when it comes to developing cost-saving innovations. The real question is: under what conditions do the political stars line up to the point where both the government and the public are willing to use their considerable powers as the prime purchasers of health care to rein in providers and insurers? The new public plan could look like the largely unregulated Medicare program in 1965, or the semi-regulated Medicare program in 2009, or today’s underfunded Medicaid program, or the health care equivalent of Fannie Mae and Freddie Mac (the quasi-public mortgage companies that were leading culprits in the recent subprime fiasco and foreclosure crisis).
The Single-Payer Alternative
The public plan option has split organized labor and other key groups. Just like fifteen years ago during the debacle over the Clinton proposal, supporters of a single-payer plan are some of today’s fiercest opponents of a minimalist approach to health care reform. They essentially advocate vaporizing the U.S. health insurance industry and replacing it with a government-run program modeled after Canada’s system. The government would pay most medical bills directly; doctors, hospitals, and other providers would operate within global budgets but remain in the private sector; and everyone would be entitled to a basic package of health benefits. The single-payer message has not changed much from the early 1990s, although supporters invested more effort this time around in mobilizing organized labor and other groups to endorse their position. Hundreds of union locals and dozens of central labor councils and state labor federations passed symbolic resolutions in favor of single-payer legislation, as did the international chapters of many major unions.
A single-payer system has a lot going for it. Single-payer advocates have drawn public attention to the extraordinary pathologies of the U.S. health care system, notably the enormous costs amidst gross lapses in care and coverage and the billions squandered on administrative costs. They also have offered the most progressive tax proposals to finance universal health care. When the Congressional Budget Office (CBO) analyzed all the major health care reform proposals then under consideration in 1993-1994, it concluded that a single-payer plan was the only one likely to achieve universal coverage while saving money. This time around, single-payer advocates have been pushing legislators to conduct hearings on the latest single-payer legislation and to have the CBO cost it out.
Earlier in his political career, Obama spoke strongly in favor of a single-payer system. Today he acknowledges that if he were starting from scratch, single-payer would be preferable but that the best option now is to build on the current system. In the opening months of the health care reform debate, the president, Senator Max Baucus (D-MT)—the chairman of the pivotal Senate Finance Committee—and other leading political players consciously sought to exile or delegitimize single-payer advocates. Meanwhile, they surrounded themselves at the March 2009 health care summit and other leading forums with the “men and women who made their careers killing health care reform,” in the words of the Washington Post.4
Some key labor leaders publicly made polite noises about a single-payer system, while disparaging it behind the scenes. Most labor leaders focused their energy and resources on backing whatever Obama favored, even though the president was stunningly vague on key issues. Some rallied around the public plan after convincing themselves that it really was a Trojan horse that would ultimately unleash a single-payer plan after enfeebling the private insurance industry. Others signed up because they consider themselves political realists and view the single-payer option as politically dead on arrival.
The Insurance Industry
President Obama and other would-be reformers attempted to skirt an axiom of medical economics that is at the heart of health care politics: “A dollar spent on medical care is a dollar of income for someone.”5 Obama attempted to finesse the politically explosive cost-containment issue by focusing on what one critic called “faith-based savings,” like expanding the use of electronic health care records, and prevention and disease management programs.6 But most experts doubt that these measures will yield sizable savings any time soon.
Health care reform that achieves universal, high-quality, affordable care is fundamentally a redistributive issue with high political and economic stakes. Meaningful cost control will require strong governmental leadership that sets targets or caps on medical spending. This can be done directly, as Canada does with a single-payer system operating within global budgets and that accords private insurers a relatively minor role, or as Britain does with its government-run National Health Service. The alternative is to retain a large private insurance sector, as many European countries do, but keep it (and the medical industry) tightly regulated.
Competition is a weak, indirect way to contain costs in the absence of strong regulatory institutions. Historically, the United States has been shockingly unwilling to seriously regulate its private insurance industry. U.S. health insurance companies are not just underregulated compared to private insurers overseas, but also compared to many other major industries in the United States. A hodgepodge of loose regulations at the state level, enforced by ineffectual and sometimes corrupt state insurance departments, governs the health insurance industry.
Today’s insurance industry is gung-ho on serving as the stick that prods doctors and hospitals to adopt pay-for-performance standards and other cost-cutting and quality control measures. Insurers are outspoken advocates of greater transparency for physicians and hospitals, so that the public is better able to scrutinize their performance and costs. But insurance companies stridently defend their right to keep key information about their own operations confidential. As long as the private insurance industry is allowed to hide behind the cloak of business trade secrets, informed consumer choice—an important ingredient of successful market competition to contain costs—is a myth.7
Beginning in late 2008, U.S. health care insurers made what many commentators have billed as sweeping regulatory concessions. They signaled their willingness to accept all individual applicants, regardless of pre-existing health conditions. They also expressed their willingness to discontinue setting premium rates that are based on health status or gender, but only if Congress mandated that all Americans carry health insurance—i.e., if all Americans were forced to buy their products. These only look like major concessions in the American context, because U.S. insurers have included some whopping caveats. First, they would retain the option of setting rates based on age, geography, and family size in the individual market. This means that premium rates would continue to vary enormously, pricing many people out of the market. Insurers would also retain subtler means to attract healthier subscribers and discourage sicker people from seeking coverage, notably via their extensive marketing budgets and ingenious tactics—like locating their information offices on the upper floors of buildings without elevators. Insurers also made no promises to forego considering health status and other key factors in setting rates for small employers, one of the most profitable segments of the health insurance market.8
Reformers who bemoan the state of the U.S. health care system often decry the billions wasted each year on administrative costs, especially on medical underwriting, which separates the sick from the healthy so as to deny less healthy people insurance policies or charge them exorbitant rates for coverage. But other countries that depend on private insurers to deliver health care benefits—notably Germany, the Netherlands, and Switzerland—engage in medical underwriting to determine which subscribers present the greatest health risks. The difference is that this is a joint endeavor that requires insurers to make their operations far more transparent to governmental regulators who manage elaborate risk adjustment systems. For example, Germany’s hundreds of sickness funds, or private insurers, are required to participate in a risk adjustment mechanism that helps equalize premiums by taking into account dozens of risk factors, not just health status and gender, so that insurers do not cherry-pick people who will use the health care system the least.
The public plan was supposed to force private insurers to become more aggressive with providers in order to hold down costs and prices, or else risk losing customers to the public plan. But why should private insurers be accorded such a preeminent role in defining the public interest in the allocation of health care resources and imposing it on physicians and other providers? Other countries have created formal institutional mechanisms that provide the public and a broad range of stakeholders with a meaningful voice in how to divide up the limited health care pie and monitor health care quality. These formal institutions have real clout and are a long way from the vague and largely unenforceable voluntary promises to cut costs that the U.S. insurance industry and medical providers announced in May 2009, and that President Obama hailed as a watershed moment in health care reform.
Supporters of the public plan solution conceded that the insurance industry needs to be regulated more tightly, but this was not their main focus. Their emphasis on competition reinforced the idea that health care should be treated primarily as a private consumer good distributed by market principles. This undermined the idea of health care as a social good that needs to be organized around underlying principles of social solidarity, not market competition.
Advocates of the public plan jeopardized enormous political capital to get so little. They bent over backwards to convince the public and critics in the insurance industry that they would create a level playing field. This fostered the impression that the insurance industry has been playing fair and square all along. The terms of the debate shifted to the imaginary injustices that a mammoth public plan would inflict on a Lilliputian insurance industry that has historically been too weak and fragmented (or too disinterested) to put the cost-containment screws on providers. This revisionist portrait was at odds with the insurance industry’s real role in the U.S. health care crisis, past and present.
The U.S. insurance industry has been a shrewd behind-the-scenes political operator for well over a century. Each time health care reform has moved to center stage, outcries for more federal action have repeatedly ended up further entrenching the private insurance industry.9 This time may be no different.
Harry and Louise
The public plan solution emerged from the doldrums of the defeated Clinton proposal and out of a very particular reading of what went wrong fifteen years ago. In the revisionist account, Harry and Louise killed health care reform. Harry and Louise starred in a series of infamous commercials funded by the insurance industry. The fictional Harry and Louise sat around their kitchen table fretting that the Clinton plan would force them to change their current health care benefits and maybe even switch doctors.
The ghosts of Harry and Louise have had a striking hold on the current health care debate. The mantra from President Obama, Senator Baucus, Service Employees International Union (SEIU) President Andrew Stern, and other would-be reformers is that most Americans are basically content with their health care coverage and seek a uniquely American solution that keeps the current system of employer-sponsored benefits largely untouched. The biggest impact of the ad campaign (then and now) appears to have been on elite policy and opinion makers, who have persistently overestimated just how much Harry and Louise represented popular sentiment and how satisfied Americans are with their health care coverage.10
Evidence continues to mount that Americans are profoundly dissatisfied with their health care system and are ready for major changes. The United States is nearly last in public satisfaction compared with other developed countries (and dead last among polled public health experts), and it’s no wonder why. Since the demise of the Clinton plan, the wheels have come off job-based benefits. Some employers have eliminated health benefits altogether while others are doggedly whittling them away.
It is no longer possible for most Americans to have six degrees of separation from the uninsured. With the official unemployment rate surpassing 8 percent in February 2009, a Kaiser Family Foundation survey found that 52 percent of people with employer-sponsored coverage were worried about losing it. Nearly eighty-seven million Americans were uninsured at some point in the last two years. The foreclosure crisis has riveted public attention on the enormous number of Americans who go bankrupt and risk losing their homes because of medical debts.
The minimalist approach to health care reform did not tap into this smoldering public anger over the health care system, or into the explosive public outrage at the financial industry, the business sector, and their congressional patrons in the wake of the economic meltdown. The political futures of several Democratic barons in Congress—including Senators Christopher Dodd and Charles Schumer, and Representative Charles Rangel—are clouded because of their close, see-no-evil ties to the banking and insurance industries nourished over the years by enormous campaign donations from these sectors. The time was ripe for an ambitious health care reform agenda that fundamentally challenged these special interests because the economic meltdown has made legislators on both sides of the congressional aisle particularly vulnerable to charges of shilling for the business sector. Obama’s decision to seed his administration with many free market protégés of Citigroup’s Robert Rubin also made him vulnerable on this score. So did the choice of Nancy-Ann DeParle, who has served as a director of many large health care companies, to be his health care czar.
We are in the midst of an economic meltdown widely understood to be the result of breathtaking malfeasance by the financial sector and its political patrons. Yet Obama and key advisers repeatedly singled out health care expenditures as the leading threat to the country’s long-term economic health. Characterizing health care as primarily an economic issue is costly. It fosters an exaggerated faith in the possibilities of forging productive coalitions with the business and insurance sectors, and diminishes interest in cultivating a wider social movement on behalf of universal health care. This is exactly what happened in 1993-1994.11 It also distracts political and public attention away from arguably more dire threats to the economy, including the opaque bailout of the financial sector, the gargantuan military budget, and the grossly inequitable tax system. It also stokes public hysteria over the costs of Medicare and Social Security, paving the way for major retrenchments in these two central pillars of the U.S. welfare state.
The Obama administration and most other Democratic Party leaders have responded to the health care crisis in the same way that they have responded to the financial crisis. They have taken extreme care not to upset the basic interests of the powerful insurance industry and segments of the medical industry, and not to raise fundamental questions about the political and economic interests that have perpetuated such a dysfunctional health care system. The biggest surprise is how the leadership of organized labor and many supposedly progressive groups has unquestioningly followed Obama and congressional Democrats on health care reform. As a consequence, they may be squandering an exceptional political moment. If the U.S. government can essentially seize control of its automobile sector and contemplate the nationalization of some banks, the beginning of the end of the for-profit health insurance industry seems less far-fetched than it once did.
If the Obama administration and leading Democrats calculated that the current political conditions were not favorable enough to secure a single-payer plan, they should at least have pushed for a seriously regulated insurance system of the kind that has predominated in Western Europe (and is now under siege by a push for more privatization there). Failure to attempt even that is perilous for the cause of universal health care and for their political futures. The president and the Democrats risk looking (in a couple of years) like Herbert Hoover and the Republicans on the eve of their historic 1932 defeat, rather than FDR and the Democrats on their march to a triumphant re-election in 1936.
There are not many times in American history when the previous administration and ruling party have been so thoroughly discredited, as have former President George W. Bush and the Republican Party; or when the princes of the financial sector have been “stripped naked as leaders and strategists,” in the words of Simon Johnson, former chief economist at the International Monetary Fund.12 Would-be reformers who recently fought so doggedly to essentially create a nonprofit health insurance company did not recognize the potential of this political moment. Under the spell of the Stockholm Syndrome, they identified too closely with their captors—the insurers, the medical industry, and the lure of market-led solutions. Identifying too closely with one’s captors is risky. When the window opens, you don’t make a run for it; indeed, you may not even notice the opening.
1. Robert Pear, “Schumer Offers Middle Ground on Health Care,” New York Times, May 5, 2009; Len M. Nichols and John M. Bertko, “A Modest Proposal for a Competing Public Health Plan” (Washington, D.C.: New America Foundation, March 2009).
2. Jacob S. Hacker, “The Case for Public Plan Choice in National Health Reform: Key to Cost Control and Quality Coverage” (Washington, D.C.: Institute for America’s Future, 2008).
3. Instead of reimbursing hospitals for their itemized costs after the fact, under the prospective payment system hospitals receive a predetermined payment based on fee schedules for the specific diagnoses (the so-called diagnosis related groups, or DRGs).
4. Ceci Connolly, “Ex-Foes of Health-Care Reform Emerge as Supporters,” Washington Post, March 6, 2009.
5. Theodore Marmor, Jonathan Oberlander, and Joseph White, “The Obama Administration’s Options for Health Care Cost Control,” Annals of Internal Medicine 150 (April 7, 2009): 485.
6. Jonathan Oberlander, “Miracle or Mirage? Health Care Reform and the 2008 Election” (lecture, Leonard Davis Institute, University of Pennsylvania, October 10, 2008).
7. Diane Archer, “Making Health Care Work for American Families: Saving Money, Saving Lives,” statement before the U.S. House Committee on Energy and Commerce, Subcommittee on Health, April 2, 2009.
8. Reed Abelson, “Health Insurers Balk at Some Changes,” New York Times, June 3, 2009.
9. Jill Quadagno, One Nation Uninsured: Why the U.S. Has No National Health Insurance (New York: Oxford University Press, 2005), 75; and Jennifer Klein, For All These Rights: Business, Labor, and the Shaping of America’s Public-Private Welfare State (Princeton: Princeton University Press, 2003).
10. Mollyann Brodie, “Impact of Issue Advertisements and the Legacy of Harry and Louise,” Journal of Health Politics, Policy, and Law 26, no. 6 (December 2001): 1353-60.
11. For more on the 1993-1994 debate, see Marie Gottschalk, The Shadow Welfare State: Labor, Business, and the Politics of Health Care in the United States (Ithaca: Cornell University Press, 2000).
12. Simon Johnson, “The Quiet Coup,” Atlantic Online, May 2009, http://www.theatlantic.com/doc/print/200905/imf-advice (accessed April 4, 2009).
One thing to keep in mind about the recent Thomas Perez–Keith Ellison race for Democratic National Committee chair is that it was pretty much an only-in-America sort of thing. Were we in any kind of parliamentary system – like most countries have – the two sides would probably be in different parties – the Bernie Sanders core of the Ellison
In response to the closer alliance between Wall Street and the Trump Administration, protestors staged a marriage between Donald Trump and Wall Street on Valentine’s Day, complete with giant puppets and fake money raining down on the assembled crowd. Act.tv documented Bernie Sanders’ appearance to denounce Trump’s pretense of “draining the swamp” and his lies about his ties to Wall Street.
The Consumer Financial Protection Bureau was created in the face of industry resistance. With a Republican Congress and President Trump in the White House, it is facing an attack that could be the final blow.
In a joint statement last week, Sen. Ted Cruz and Rep. John Ratcliffe introduced the Repeal CFPB Act, which targets Title X of Dodd-Frank. The lawmakers claim that if passed, the Act would “free consumers and small businesses from the CFPB’s regulatory blockades and financial activism.” In fact, it would completely eliminate the CFPB as an agency. (Consumerist, Lawmakers Introduce Legislation that would Abolish the CFPB)
Should the efforts of Cruz and Ratcliffe fail, there is still House Financial Services Committee Chairman Jeb Hensarling’s plan to revise his earlier bill, with the aim of weakening, if not completely destroying, the CFPB and pacifying banks by removing some key components of the annual stress tests that evaluate how a bank would perform in a financial crisis.
In a memo to lawmakers, Hensarling outlined the details of his plan. In an article last week, Bloomberg News reported some of the specifics from the memo:
Hensarling is seeking to eliminate much of the Consumer Financial Protection Bureau’s regulatory powers, and transform it into a law enforcement agency. The memo proposes the controversial regulator only be able to pass rules that have been mandated by Congress. The plan would further restrict the agency by eliminating its authority to supervise financial firms and doing away with a public database documenting consumer complaints. (Bloomberg News, New GOP Memo Targets Stress Tests, CFPB in Dodd-Frank Changes)
Hensarling’s other proposal addresses the role of the CFPB director. He has asserted his belief that the bureau should be under the supervision of a single director, and that the director should “be removable by the President at-will.” (Bloomberg News) Hensarling has also announced his opinion that the current CFPB director should be fired by Trump before his term ends next summer. The Bureau is presently engaged in appealing a court ruling that would permit the President to do exactly that without cause.
You know things have gotten bad for the banking industry when even the bankers themselves are beating up on their own. After the Consumer Financial Protection Bureau (CFPB) announced back in early September that it was fining Wells Fargo nearly $200 million—the largest fine ever levied by the Bureau—for the “widespread illegal practice” of opening dummy accounts, filling them with depositors’ funds without their knowledge or authorization, and then cashing in on the accounts by assessing consumers’ fees and other charges, you could almost hear the bankers of the world collectively throwing up their hands. “Not that Senator Elizabeth Warren needed more ammunition to protect the CFPB,” grumbled Jaret Seiberg of the Cowen Group, a leading financial services company, “but she has it now.” Camden Fine, president of the Independent Community Bankers of America (ICBA), put it even more bluntly. “Wells’ greed has made it much more difficult for ICBA to get much needed regulatory relief,” Fine groused.
An invention of the Dodd-Frank Wall Street Reform and Consumer Protection Act, the CFPB was created to rein in the bottom-feeders of the financial community. But the revelation that the largest bank in the world by market capitalization had just been caught with its hand in the cookie jar was not what really bothered Fine and Seiberg. Rather, it was the extraordinarily poor timing of the news that rankled them most of all. On the very same day the CFPB issued its consent order against Wells Fargo, the House Financial Services Committee announced that it would begin hearings on a new piece of legislation, the Financial Choice Act, introduced by Republican Committee Chairman Jeb Hensarling of Texas. Hensarling, one of Wall Street’s most contentedly kept men on Capitol Hill (to the tune of $1.2 million in campaign contributions during the 2016 election cycle alone), drafted the Financial Choice Act in effect to gut or destroy the regulatory provisions put in place by Dodd-Frank, and especially Elizabeth Warren’s brainchild, the CFPB (no wonder, then, that Warren has called the bill “Congressman Hensarling’s wet-kiss to the Wall Street banks”). So when Wells Fargo CEO John Stumpf found himself hauled before the public to confess sheepishly that “we make mistakes,” just as Hensarling was beginning the campaign to bring the Financial Choice Act to a floor vote, the ICBA’s Fine grimly rued the inopportune coincidence: “Much-needed CFPB reform is basically DOA,” he acknowledged by way of a postmortem. A little more than a month later, Stumpf himself was DOA—victim of an unceremonious early retirement package that included a “claw back” of nearly $41 million in previously awarded stock bonuses.
If that turns out to be the fate of the Financial Choice Act as well—which, among other things, would replace the CFPB’s existing “consumer watchdog” executive structure with a bipartisan commission subject to the unpredictable currents of congressional appropriations, while repealing its ability to ban financial products deemed to be abusive—it will not have been the first attempt to curtail the Bureau’s regulatory effectiveness. Nor is it likely to be the last. Ever since the 2010 passage of Dodd-Frank, the CFPB has been a perennial target for the banks and financing companies that fall under its umbrella. And with the ready assistance of hired guns like Hensarling, a proliferating array of industries that manufacture increasingly novel (and frequently quite predatory) financial products, which often remain just outside of that umbrella, have been fighting tooth and nail to keep the CFPB out of their turf.
Take, for instance, the world of for-profit educational companies. A state-subsidized boondoggle of alarming proportions, the degree-mill industry has raked in as much as $32 billion annually in taxpayer dollars in recent years, even as the Department of Education reports that 72 percent of for-profit colleges produce graduates who earn less on average than the typical high school dropout in the overall population. For that dubious honor, another study concluded, the typical University of Phoenix or DeVry graduate leaves school “ripped off, unemployed, and deep in debt,” and six times more likely to default on her student loan than a graduate of a non-profit or public college.
This kind of track record of consumer abuse is exactly what the CFPB was created to address, so it was a salutary development when the Bureau secured a $530 million judgment against Corinthian Colleges, one of the industry’s worst offenders, in the fall of 2015, while forcing Corinthian out of business in the process. A similar outcome resulted when the CFPB went after the ITT Technical Institute, which announced it was closing its doors in September 2016. But when the CFPB attempted to push its regulatory authority still further into the workings of the for-profit education industry, the industry pushed back. Noting that for-profit institutions become eligible to participate in massively lucrative federal financial aid programs by being accredited by a recognized agency, the Bureau issued a civil investigative demand—effectively a comprehensive subpoena preliminary to a full investigation—to the century-old Accrediting Council for Independent Colleges and Schools (ACICS), requesting information on how the process had worked for low-performers like Corinthian.
The ACICS responded by refusing to turn over the requested information, while appealing the CFPB’s demand on the legal grounds that the Bureau was “delving into accreditation oversight, not consumer financing, and is overstepping its bounds into an area that is exclusively under the control of the Department of Education.” Meanwhile, the for-profit educational industry’s political allies swung into action as well. Two powerful Congressional Republicans—Tennessee’s Lamar Alexander, chair of the Senate’s Committee on Health, Education, Labor & Pensions; and Minnesota’s John Kline, chair of the House Committee on Education and the Workforce—sent a scathing letter to CFPB Director Richard Cordray, attacking the Bureau’s “unprecedented overreach” and insisting that it “immediately rescind” the information request.
Alexander and Kline explained their opposition to the CFPB’s move by arguing that “determining the role of accreditors for federal purposes is a congressional responsibility, not yours.” But their unacknowledged connections to the worst offenders in the for-profit college industry surely mattered. Eight of the leading for-profit education corporations, including Corinthian and ITT Tech, have been the subject of state- or federal-level investigations in recent years; Lamar Alexander has received substantial campaign contributions from all eight of them. During the last complete campaign cycle, in 2014, John Kline was among the top three congressional recipients of campaign dollars from five of the eight colleges (or from their ownership groups, if privately held). From three of them, Kline received more campaign contributions than any other individual member of Congress.
The political pressure had its intended effect. In April 2016, a conservative federal judge appointed by George W. Bush denied the CFPB’s civil investigative demand, throwing a significant obstacle in the way of the Bureau’s attempts to extend its watchdog duties into the accreditation process.
The for-profit education racket is not the only place where predatory financial companies and their lackeys on the Hill have been working to impede the Bureau’s more expansive ambitions. In recent months, the CFPB has also been trying to shine a light into the shadowy world of structured settlement purchasing—and with remarkably similar results.
Structured settlement purchasing is one of those ubiquitous industries you have probably never heard of. Recognizable for their familiar “get cash now!” television and radio ads, structured settlement purchasers are companies that give consumers immediate lump sum cash payments at discounted rates, in exchange for future streams of payments usually associated with a settlement in a lawsuit. For instance, Freddie Gray—the Baltimore man whose death while in police custody led to weeks of unrest and trials for six Baltimore police officers—had been the beneficiary of a lead-poisoning settlement when he was a child; by the time he was twenty-three, the Washington Post reported, Gray had sold a Maryland-based financial company called Access Funding $146,000 in future payments, in exchange for just $18,300 in lump sum payouts. And Freddie Gray is far from alone—according to the National Structured Settlements Trade Association, between $8 and $10 billion worth of settlement payments are purchased by companies like Access Funding every year, often at deeply discounted prices that leave desperate consumers like Gray with just pennies on the dollar for what their original settlements would have been worth. After the Post story broke, most of Maryland’s judiciary and the state’s attorney general joined Congressman Elijah Cummings in demanding a deeper investigation into “how private companies make profits buying and selling settlements that are meant to ensure victims have reliable incomes, and how we can best protect vulnerable individuals from predatory and abusive practices.”
The largest player in the structured settlement market, by far, is J. G. Wentworth, a Pennsylvania company that does business across the country under a few different brand names. In September 2015, the CFPB issued a civil investigative demand to J. G. Wentworth, which the company refused to comply with on the dubious grounds that, because the company does not lend money to consumers (i.e., provide them with credit) but rather purchases outright their future payment streams, structured settlement payouts are “not a consumer financial product” and, therefore, outside of the CFPB’s purview. The CFPB rejected J. G. Wentworth’s petition to set aside the information request, but the company still refused to comply, throwing the decision to the courts once again and hoping for a favorable ruling like the one that spared the ACICS.
At this writing, the outcome of the CFPB’s efforts to extend its regulatory oversight into the market for structured settlement purchasing also remains up in the air. And that is exactly why so much of the financial community was so frustrated with the poor timing of the Wells Fargo fiasco. By cutting the CFPB off at the knees, the Financial Choice Act represents the banking lobby’s most significant effort yet to roll back the regulatory reforms that emerged from the financial crisis of 2008—a crisis that began, remember, with the lawless and predatory behavior of an industry selling a particularly ubiquitous consumer product. No wonder, then, that JLL Partners, the private equity firm that has owned a majority stake in J. G. Wentworth since 2006, has given more money to Jeb Hensarling than to any other member of Congress in each of the last two election cycles.
Assuming Wells Fargo and their ilk can avoid tripping over their own feet, Hensarling, Alexander, Kline, and the rest of their cronies on the Hill know exactly what they need to do to keep the money flowing.