
Contesting the Idea of Progress: Labor’s AI Challenge

Image credit (cover and Resnikoff article): Pete Linforth, via Pixabay. The image was created using AI.


The material changes ushered in under the aegis of artificial intelligence (AI) are not leading to the abolition of human labor but rather to its degradation. This is typical of the history of mechanization since the dawn of the industrial revolution. Instead of relieving people of work, employers have deployed technology—even the mere idea of technology—to turn relatively good jobs into bad jobs by breaking up craft work into semi-skilled labor and by obscuring the labor of human beings behind a technological apparatus so that it can be had more cheaply. Employers invoke the term AI to tell a story in which technological progress, union busting, and labor degradation are synonymous. However, this degradation is not a quality of the technology itself but rather of the relationship between capital and labor. The current discussion around AI and the future of work is the latest development in a longer history of employers seeking to undermine worker power by claiming that human labor is losing its value and that technological progress, rather than human agents, is responsible.

AI Is Not a Specific Technology
When tech entrepreneurs speak of AI doing this or AI doing that—as when Elon Musk promised the British Prime Minister a coming age of abundance in which no one will need to work because “AI will be able to do it all”—they are using the term AI in a way that occludes more than it clarifies.[1] Academic researchers in the field of AI, for example, do not generally use the term AI to describe a specific technology. It is, quite simply, the practice of making “computers do the sorts of things that minds do,” as defined by Margaret A. Boden, an authority in the field.[2] In other words, AI is less a technology and more a desire to build a machine that acts as though it is intelligent. There is no single technology that distinguishes AI from the rest of computer science.

. . . [T]he current discussion around AI centers on . . . machine learning [which is] the use of algorithms to find patterns in large data sets . . . to make statistical predictions.

Much of the current discussion around AI centers on the application of what are known as artificial neural networks to machine learning. Machine learning refers to the use of algorithms to find patterns in large data sets in order to make statistical predictions. Chatbots like ChatGPT are a good example. (A chatbot is a computer program that mimics human conversation so that people can interact with a digital device as if they were communicating with a human being.) Chatbots work by using an immense amount of computational power and very large amounts of data to weigh the statistical likelihood that one word will appear next to another word.
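To make the statistical idea concrete, here is a minimal sketch in Python, written for this article rather than drawn from any actual chatbot, of next-word prediction by counting how often one word follows another in a tiny invented corpus. Real systems operate on billions of words and far more elaborate models, but the underlying logic is the same kind of probabilistic guess.

from collections import Counter, defaultdict

# A toy corpus; real models are trained on billions of words.
corpus = "the union won the vote and the union won the strike".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most likely to follow, judging only by these counts."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # -> 'union' (the word seen most often after 'the')
print(predict_next("union"))  # -> 'won'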

Machine learning generally relies on designers to help the system interpret data. This is where artificial neural networks come into play. (Machine learning and artificial neural networks are only two tools under the general umbrella of AI.) Artificial neural networks are linked software programs (each individual program is called a node) that are each able to compute one thing. In the case of something like ChatGPT (which belongs to the category of Large Language Models), each node is a program running a mathematical model (called a linear regression model) that is fed data, predicts a statistical likelihood, and then issues an output.[3] These nodes are linked together and each link has a varying weight, that is, a numerical rating indicating how important it is, so that each node will influence the final output to a different degree. Basically, neural networks are a complex way of taking in many factors simultaneously while making a prediction to produce an output, such as a string of words as the appropriate response to a question entered into a chatbot.[4]
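A toy illustration may help. In the sketch below, which is a simplification invented for this article and not the code of any real system, each node computes a weighted sum of its inputs (a simple linear model), and the links between nodes carry their own weights, so that each node influences the final output to a different degree. All of the numbers are made up; a real network would learn them from data.

import math

def node(inputs, weights, bias):
    # One node: a weighted sum of its inputs (a simple linear model),
    # squashed into the 0-1 range so it can serve as an activation.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two hidden nodes weigh the same made-up inputs differently.
inputs = [0.5, 0.8]
hidden = [
    node(inputs, weights=[0.9, -0.4], bias=0.1),
    node(inputs, weights=[0.2, 0.7], bias=-0.3),
]

# The links to the output node carry their own weights, so each hidden
# node influences the final prediction to a different degree.
output = node(hidden, weights=[1.5, -0.8], bias=0.0)
print(round(output, 3))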

This imitation is a far cry from human consciousness. Researchers do not understand the mind well enough to actually encode the rules of language into a machine, so they have instead chosen what Kate Crawford, a researcher at Microsoft Research, calls “probabilistic or brute force approaches.”[5] No human being thinks this way. Children, for example, do not learn language by reading all of Wikipedia and tallying up how many times one word or phrase appears next to another.[6] In addition, these systems are particularly energy intensive and expensive. The cost of training GPT-4 came in at around $78 million; for Gemini Ultra, Google’s answer to ChatGPT, the price tag was $191 million.[7] Human beings, it should be noted, acquire and use language much more cheaply.

In standard machine learning, human beings label different inputs to teach the machine how to organize data and weigh its importance in determining the final output. For example, many people (paid very poorly) “pre-train” or teach computer programs what things look like, labeling pictures so that a program can differentiate between, say, a vase and a mug. (In a system doing “deep learning,” human beings play a much smaller programming role. With deep learning, the artificial neural networks in use have more layers than in classical machine learning, and human beings do much less labeling of the elements in a dataset. In other words, such a system can be fed much rawer, unprocessed data and still organize it.) The GPT in ChatGPT, it is important to note, stands for Generative Pre-trained Transformer, a transformer being a kind of neural network. In the case of ChatGPT, human beings taught and corrected the program as it was fed astronomical amounts of data, mostly written text. In fact, according to the Guardian, contract workers in Kenya employed by OpenAI to train ChatGPT earned between $1.46 and $3.74 an hour to label text and images featuring “violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest.” Several workers claimed that these working conditions were exploitative and requested that the Kenyan government launch an investigation into OpenAI.[8]
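To illustrate what that human labeling supplies, consider the following toy classifier, a hypothetical sketch rather than anything resembling a production system. The feature values and the “vase”/“mug” labels are invented, but the point stands: the program’s apparent judgment is assembled out of judgments that human workers made first.

# Each example pairs measurements with a label written by a human worker.
labeled_examples = [
    ((25.0, 0), "vase"),   # (height in cm, has a handle? 0 = no, 1 = yes)
    ((30.0, 0), "vase"),
    ((9.0, 1), "mug"),
    ((10.5, 1), "mug"),
]

def classify(item):
    # Copy the label of the most similar labeled example (nearest neighbor).
    def distance(features):
        return sum((a - b) ** 2 for a, b in zip(item, features))
    _, label = min(labeled_examples, key=lambda pair: distance(pair[0]))
    return label

print(classify((11.0, 1)))  # -> 'mug'
print(classify((28.0, 0)))  # -> 'vase'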

Thus, AI, as Boden elaborates, “offers a profusion of virtual machines, doing many different kinds of information processing. There’s no key secret here, no core technique unifying the field: AI practitioners work in highly diverse areas, sharing little in terms of goals and methods.”[9] Contemporary use of the term AI, however, tends to black-box discussions of material changes, mystifying the technology in question while also homogenizing many distinct technologies into a single revolutionary mechanism—a deus ex machina that is monolithic and obscure. This effect is not accidental. It serves the interests of capital, and it has a history.

Contemporary use of the term AI [mystifies] the technology . . . This effect is not accidental. It serves the interests of capital . . .

AI and Labor Degradation
AI, in other words, is not a revolutionary technology, but rather a story about technology.[10] Over the course of the past century, unions have struggled to counter employers’ use of the ideological power of technological utopianism, or the idea that technology itself will produce an ideal, frictionless society. (Just one telling example of this is the name General Motors gave its pavilion at the 1939 World’s Fair: “Futurama.”) AI is yet another chapter in this story of technological utopianism, one used to degrade labor by rhetorically obscuring it.[11] If labor unions can understand changes to the means of production outside the terms of technological progress, it will become easier for them to negotiate over those changes here and now, rather than debate what effect they might have in a vague, all too speculative future.

The uses that employers have made of machine learning and artificial neural networks conform to the long history of the mechanization of work. The Marxist political economist Harry Braverman’s labor degradation thesis, in which industrial capitalist development tends toward the break-up of craft work, the broader diffusion of the detailed division of labor, and the application of factory regimes to ever more kinds of work, still holds.[12] If anything, managerial use of digital technologies has only accelerated this tendency. Moritz Altenried, a scholar of political economy, recently referred to this as the rise of the “digital factory,” combining the most over-determined, even carceral, elements of traditional factory work with flexible labor contracts and worker precarity.[13] Employers have deployed algorithms to exert immense control over the labor process, using digital platforms to break up jobs and surveil how quickly workers complete those tasks, as with Amazon’s use of algorithms to push warehouse workers, or ride-hailing apps that speed up drivers. Digital platforms have allowed employers to extend factory logic practically anywhere. Here, we can see the most “revolutionary” aspect of the technological changes referred to as AI: the mass diffusion of worker surveillance. While digital platforms are not particularly good workers, they are very effective bosses, tracking, quantifying, and compelling workers to labor according to the designs of their employers.

Arguing that machine learning is not categorically different from earlier forms of mechanization is not to say that everything will be fine for workers. Machine learning will continue to aid employers in their project to degrade work. And as with earlier forms of mechanization—including the computer-mechanization of white-collar office work since the 1950s—employers have set their sights on turning skilled, white-collar jobs into cheaper, semiskilled jobs. In the second half of the twentieth century, computer manufacturers and employers introduced the electronic digital computer with the aim of reducing clerical payroll costs. They replaced the skilled secretary or clerk with large numbers of poorly paid women operating key-punch machines who produced punch cards to be fed into large, batch-processing computers. The result was more, not fewer, clerical workers, but the new jobs were worse than what had existed before. The jobs were more monotonous and the work was sped up. In the last quarter of the twentieth century, employers successfully persuaded middle managers to do clerical labor for themselves (what one consultant called the “bourgeoisification” of clerical work) by giving them desktop computers to do their own typing, filing, and correspondence—work that the company once paid clerical workers to do. This style of job degradation remains typical in white-collar work today.[14]

While digital platforms are not particularly good workers, they are very effective bosses, tracking, quantifying, and compelling workers to labor . . .

While technologies like ChatGPT might seem poised to replace ostensibly white-collar workers like screenwriters, employers are far more likely to use machine learning to break up and deskill jobs in the same way that they deployed older forms of mechanization. Last year, Google pitched a machine learning chatbot named Genesis to the New York Times, the Washington Post, and NewsCorp. A spokesperson for Google acknowledged that the program could not replace journalists or write articles on its own. It would instead compose headlines and, according to the New York Times, provide “options” for “other writing styles.”[15] This is precisely the kind of tool that, marketed as a convenience, would also be useful for an employer who wished to deskill a job.

Like older forms of mechanization, Large Language Models do increase worker productivity, which is to say that the greater output still runs through human labor and does not depend on the technology alone. Microsoft recently aggregated a selection of studies and found that Microsoft Copilot and GitHub’s Copilot—Large Language Models similar to ChatGPT—increased worker productivity between 26 and 73 percent. Harvard Business School concluded that “consultants” using GPT-4 increased their productivity by 12.2 percent, while the National Bureau of Economic Research found that call-center workers using “AI” processed 14.2 percent more calls than their colleagues who did not. However, the machines are not simply picking up the work once performed by people. Instead, these systems compel workers to work faster, or they deskill the work so that it can be performed by people who fall outside the studies’ frame.[16]

For example, in their recent strike, members of the Writers Guild of America (WGA) demanded that movie and television studios be forbidden from imposing “AI” on writers. Chatbots are not currently capable of bodily replacing writers. Rather, it seems more likely that studios would deploy machine learning systems to break up their jobs into a series of discrete tasks, and through the division of labor turn the job of “writer” into smaller, more cheaply paid positions in which writers would be either prompt engineers feeding scenarios into the machine or finishers polishing machine-made scripts into a final product.[17] The WGA’s recent contractual wins regarding AI are limited to the protection of credits and pay, although the union had initially set out to reject the use of Large Language Models completely.[18] That bargaining position was actually quite rare; since the middle of the twentieth century, unions have generally been unable—due either to weakness or ideological blinders—to treat technology as something open to negotiation.

Examples also abound of employers deploying “AI” not only to break up jobs but also to obscure the presence of poorly paid human workers, many of them based in the Global South. In the words of sociologist Janet Vertesi, “AI is just today’s buzzword for ‘outsourcing.’” Take, for example, Amazon’s “Just Walk Out” system at its brick-and-mortar stores, where customers shopped and walked out without having to go to the cash register because the payment was processed digitally. Amazon has admitted that the “generative AI” that it used to tally up customer receipts actually consisted of workers in India watching surveillance footage and manually drafting itemized bills.[19] In a similar case, several major French supermarket chains boasted that they were using “AI” to spot shoplifters when the surveillance was being conducted by workers in Madagascar watching security footage and earning between 90 and 100 euros a month.[20] The same was true of the so-called “Voice in Action” technology (whose manufacturer claims it is an “AI-driven” system) that took customers’ drive-through orders at U.S. fast food restaurants; more than 70 percent of the orders were in fact processed by workers in the Philippines.[21] The anthropologist Mary Gray and Siddharth Suri, a senior principal researcher at Microsoft, have usefully dubbed this practice of hiding human labor behind a digital front “ghost work.”[22]

. . . [U]nions have generally been unable . . . to treat technology as something open to negotiation.

AI and Ideology—Automation Discourse Redux
But, as mentioned earlier, it would be a mistake to think of AI in primarily technological terms—either as machine learning or even as digital platforms. This brings us to the automation discourse, of which the recent AI hype is the latest iteration. Ideas of technological progress certainly predate the postwar period, but it was only in the years after World War II that those ideas congealed into an ideology that has generally functioned to disempower working people.

The ur-version of this ideology was the automation discourse, which arose in the United States in the years following World War II and held that all technological change bent toward the inevitable abolition of human labor, in particular, blue-collar industrial labor. It was the immediate product of two interlocking phenomena: first, the new institutional strength of organized labor coming out of the militant 1930s, which posed a threat to capital; and second, the remarkable technological enthusiasm of the postwar era. Since the 1930s, corporate America had sought to portray itself and its products as of themselves producing the kind of utopian future that left radicals had long associated with political revolution. (For example, the DuPont corporation promised “revolutionary” changes and “better things for better living . . . through chemistry,” instead of, say, the redistribution of property.)[23] Victory in World War II, government-funded technological breakthroughs, and the resulting economic boom seemed to ratify this argument. In the words of Business Week in 1955, there was “a sense that something new and revolutionary was being born in the laboratories and the factories.”[24] It therefore seemed reasonable to actors from across the political spectrum—from industry leaders, to union officials, to members of the student movement, and even some radical feminists—to think that perhaps American technology could overcome those most painful hallmarks of industrial capitalist production: class struggle and workplace alienation.[25]

Since the 1930s, corporate America had sought to portray itself and its products as of themselves producing the kind of utopian future that left radicals had long associated with political revolution.

Playing into this general sense, a vice president of production at the Ford Motor Company coined the word “automation” to depict the company’s policy of fighting unions and degrading working conditions while it retooled as the product of the apolitical and inevitable development of industrial society itself.[26] Ford, and soon practically everyone, depicted “automation” as a revolutionary technology that would fundamentally (and inexorably) change the industrial workplace. The definition of automation was notoriously vague, but many Americans still genuinely believed it would, entirely of its own accord, usher in abundance, doing away with the proletariat and, in the words of sociologist and celebrated public intellectual Daniel Bell, replacing it with a highly skilled white-collar “salariat.”[27]

Across industries, however, what managers and workers referred to as automation just as often resulted in degraded and sped-up work as in the substitution of human labor with machine action. And yet, for the most part, labor found itself both rhetorically, and to a certain extent intellectually, cowed by the automation discourse. At a 1957 meeting of senior officials representing ten of the largest unions in the United States at the time, Sylvia Gottlieb, the Director of Education and Research for the Communications Workers of America (CWA), summed up the problem: they were unsure whether or not automation was the technological revolution that capital said it was, and they needed to take care against “the labor movement becoming identified as ‘weepers’ on this subject,” that is, prophets of doom opposed to technological progress, or, even worse, Luddites. Gottlieb concluded that it made sense “to point not only to the problems and difficulties of automation but to acknowledge the tremendous benefits it provides.”[28]

. . . [L]abor found itself both rhetorically, and to a certain extent intellectually, cowed by the automation discourse.

Part of the power of the automation discourse was that it spoke to a techno-progressivism that, even to this day, appeals to certain tendencies on the left, like the so-called Marxist accelerationists who believed that the development of industrialization itself would produce the conditions for a proletarian revolution.[29] At the very least, in the years immediately following World War II, the idea of autonomous technological progress offered Walter Reuther’s administration of the United Auto Workers (UAW) cover for the Treaty of Detroit’s retreat on the question of “production standards,” that is, a say over which machines would exist on the shop floor and how workers would use them.[30] Union officials did not know what “automation” would bring, and they largely failed to disentangle teleological stories of technological progress from management’s attempts to control the labor process. The International Longshore and Warehouse Union (ILWU) under Harry Bridges was unique among postwar unions in that it managed to operate within the confines of postwar technological optimism and still get something for its members, letting containerizing shippers buy the union out of dockworker jobs in exchange for generous retirement benefits. Yet this buyout came at the price of a generation of dockworkers (the so-called B-men) who were not eligible for those benefits but whose labor remained particularly sweated.[31] Still, the ILWU was the exception. More typical was the fate of the United Packinghouse Workers of America (UPWA), which at first allowed companies to “automate” (i.e., to bring in power tools) in exchange for somewhat improved retirement benefits and the right to transfer jobs. Workers laid off as a result of labor speedup were advised to take part in job-training programs that the UPWA’s president would later condemn. “What you were doing,” he said, “was training people so that they could be unemployed at a higher level of skill, because they couldn’t get jobs.”[32] As the industry restructured in the second half of the twentieth century, the union disintegrated. Today, meatpacking remains a labor-intensive industry, although now much of it is non-union.

Practically speaking, “AI” has become a synonym for automation, along with a similar if not identical set of unwarranted claims about technological progress and the future of work.[33] Workers over the better part of the past century, like most members of the general public, have had a great deal of difficulty talking about changes to the means of production outside the terms of technological progress, and that has played overwhelmingly to the advantage of employers. The notion of technology as inevitable and, ultimately, a benefit to all, even as civilization itself, has made it difficult to criticize. If history is any guide, workers need to reject the teleological claims that capital makes about technology; they themselves must see technological change not as the organic unfolding of civilization but as just another aspect of the workplace that should in principle be subject to democratic governance.

AI is not a specific technology. Often enough, it is a story about technology, one that serves to disempower working people. Workers have reason to fear AI, but not because it is in and of itself revolutionary. Rather, workers and organizers should worry because the idea of AI allows employers to pursue some of the oldest methods of industrial labor degradation. In the past, unions have suffered when they took the technological claims of their employers as fact. For labor, it might quite literally pay to refuse to be impressed by technological utopianism. It behooves labor to divorce specific material changes to the labor process from grand narratives of technological progress. Working people should have a say in what kinds of machines they use on the job; they should have some control. The first step in that direction requires that they be able, at the very least, to say, “No,” to the material changes employers seek to make to their workplaces, and to say it without thinking of themselves as impediments to progress.


Notes
1. Chloe Taylor, “Elon Musk Says AI Will Create a Future Where ‘No Job Is Needed’: ‘The AI Will Be Able to Do Everything,’” Fortune, November 3, 2023, available at https://fortune.com/2023/11/03/elon-musk-ai-nojob-needed-work/.
2. Margaret A. Boden, Artificial Intelligence: A Very Short Introduction (Oxford: Oxford University Press, 2018), 1. For a similar definition, see Michael Wooldridge, A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going (New York: Flatiron Books, 2020). For a discussion of the debate within the field of artificial intelligence concerning its definition, see Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Second Edition (Upper Saddle River, NJ: Pearson Education Inc., 2003), 2. Strangely, Russell and Norvig exclude this particular discussion of definitions from later editions of their textbook, widely understood as the textbook in the field.
3. Large Language Models (LLMs) are programs that generate language by performing a statistical analysis of large amounts of text.
4. For an overview of machine learning and artificial neural networks, in particular as they apply to ChatGPT, consider the University of Central Arkansas, “ChatGPT: What Is It?” available at https://uca.edu/cetal/chat-gpt/; or Stephen Wolfram, “What Is ChatGPT Doing . . . and Why Does It Work?” available at https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/. For a particularly useful series of primers, readers may enjoy the educational series released by IBM on YouTube, available at https://www.youtube.com/@IBMTechnology.
5. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven, CT: Yale University Press, 2021), 99. For a provocative Marxist interpretation whose discussion of “connectionist” as opposed to “symbolic” AI is quite useful: Matteo Pasquinelli, The Eye of the Master: A Social History of Artificial Intelligence (London: Verso Books, 2023), 14-15.
6. Noam Chomsky, Ian Roberts, and Jeffrey Watumull, “Noam Chomsky: The False Promise of ChatGPT,” New York Times, March 8, 2023, available at https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html.
7. Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024, 5.
8. Niamh Rowe, “‘It’s Destroyed Me Completely’: Kenyan Moderators Decry Toll of Training of AI Models,” The Guardian, August 2, 2023, available at https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-open-ai.
9. Boden, Artificial Intelligence, 18.
10. Lucy Suchman, “The Uncontroversial ‘Thingness’ of AI,” Big Data & Society, July-December 2023: 1-5.
11. The classic argument for an understanding of technology as a politically charged term specific to industrial capitalism: Leo Marx, “‘Technology’: The Emergence of a Hazardous Concept,” Social Research 64, no. 3 (Fall 1997): 965-88.
12. Harry Braverman, Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century (New York: Monthly Review Press, 1974).
13. Moritz Altenried, The Digital Factory: The Human Labor of Automation (Chicago: University of Chicago Press, 2022), 7. His use of the concept “digital Taylorism” is quite useful. See also: Alexandra J. Ravenelle, Hustle and Gig: Struggling and Surviving in the Sharing Economy (Oakland: University of California Press, 2019).
14. For the use of the word “bourgeoisification,” see: Connie Winkler, “Office of Future May Not Work, Poppel Warns,” Computerworld, May 21, 1979, 12. On the degradation of clerical work and the desktop computer: Jason Resnikoff, “The Paradox of Automation: QWERTY and the Neuter Keyboard,” Labor, vol. 18, no. 4 (2021), 9-39.
15. Benjamin Mullin and Nico Grant, “Google Tests AI Tool That Is Able to Write News Articles,” New York Times, July 19, 2023, available at https://www.nytimes.com/2023/07/19/business/google-artificial-intelligence-newsarticles. html.
16. “The AI Index 2024 Annual Report,” 272-3.
17. “What Does the Writers’ Strike Tell Us About the Future of A.I. and Jobs?” New York Times Audio, June 1, 2023. https://www.nytimes.com/audio/app/2023/06/06/what-does-the-writers-strike-tell-us-about-the-future-of-ai-and-jobs.html?referringSource=sharing.
18. Drew Richardson, “Hollywood’s AI Issues Are Far From Settled After Writers’ Labor Deal With Studios,” CNBC, October 16, 2023, available at https://www.cnbc.com/2023/10/16/hollywoods-ai-issues-are-far-from-settled-after-wga-deal.html.
19. Janet Vertesi, “Don’t Be Fooled: Much ‘AI’ is Just Outsourcing, Redux,” Tech Policy Press, April 4, 2024, available at https://www.techpolicy.press/dont-be-fooled-much-ai-is-just-outsourcing-redux/.
20. Pierric Marissal, “Derrière l’intelligence artificielle « made in France », des exploités à Madagascar” [“Behind ‘Made in France’ Artificial Intelligence, Exploited Workers in Madagascar”], l’Humanité, December 9, 2022, available at https://www.humanite.fr/social-et-economie/intelligence-artificielle/derriere-lintelligence-artificielle-made-in-francedes-exploites-a-madagascar.
21. Mia Sato, “An ‘AI’ Fast Food Drive-Thru Is Mostly Just Human Workers in the Philippines,” The Verge, December 8, 2023, available at https://www.theverge.com/2023/12/8/23993427/artificial-intelligence-presto-automation-fast-food-drive-thru-philippines-workers. For the claims of the company, Presto Automation, available at https://presto.com/.
22. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (New York: Houghton Mifflin Harcourt, 2019); see also Antonio A. Casilli, “Waiting for Robots: The Ever-Elusive Myth of Automation and the Global Exploitation of Digital Labor,” Sociologias, vol. 23, no. 57 (May-August, 2021), 112-33.
23. Sean Callahan, “DuPont Replaces 1935 Tagline to Reflect Corporate Change,” Ad Age, June 1, 1999. For the use of the word “revolutionary” by DuPont: The Atlanta Constitution, August 8, 1936, 21C.
24. “Special Report to Readers on: Automation,” Business Week, October 1, 1955.
25. For an overview of the broad political appeal of the automation discourse in the postwar period, Jason Resnikoff, Labor’s End: How the Promise of Automation Degraded Work (Urbana: University of Illinois Press, 2021).
26. Resnikoff, Labor’s End, 22.
27. Daniel Bell, Work and Its Discontents: The Cult of Efficiency in America (Boston: Beacon, 1956), 49-53.
28. Sylvia B. Gottlieb to J. A. Beirne, Subject: Automation Sub-Committee—AFL-CIO Economic Policy Committee, January 16, 1957, folder 8, box 100, Communications Workers of America Records, Wag. 124, Tamiment Library, New York University.
29. For a particularly bold example of this today: Aaron Bastani, Fully Automated Luxury Communism: A Manifesto (London: Verso, 2019).
30. Robert Asher, “The 1949 Ford Speedup Strike and the Post War Social Compact, 1946-1961,” in Autowork, ed. Robert Asher and Ronald Edsforth with the assistance of Stephen Merlino (Albany: State University of New York Press, 1995), 127-54.
31. Seonghee Lim, “Automation and San Francisco Class ‘B’ Longshoremen: Power, Race, and Workplace Democracy, 1958-1981” (PhD diss., University of California, Santa Barbara, 2015).
32. Roger Horowitz, “Negro and White, Unite and Fight!”: A Social History of Industrial Unionism in Meatpacking, 1930-90 (Urbana: University of Illinois Press, 1997), 256.
33. Louis Hyman, “It’s Not the End of Work. It’s the End of Boring Work,” New York Times, April 22, 2023, available at https://www.nytimes.com/2023/04/22/opinion/jobs-ai-chatgpt.html.


Author Biography
Jason Resnikoff is assistant professor of contemporary history at the University of Groningen in the Netherlands (Rijksuniversiteit Groningen). His book, Labor’s End: How the Promise of Automation Degraded Work (University of Illinois Press, 2022), explores the ideological origins of automation in the United States in the middle of the twentieth century. He was formerly an organizer for the UAW.
