Katja Bruisch writes: I recently completed a scholarly monograph – an environmental, economic and energy history of peat in imperial and Soviet Russia. After years of thinking and writing, I approached Cambridge University Press. CUP is a leading academic press and their series Studies in Environment and History has a high reputation in my field. Everyone at CUP was extremely friendly and forthcoming. I am delighted with the book and curious how readers will respond to it. But the process of what publishers call ‘production’ amid the current AI hype raised questions for me about how academic presses value the work of the many people involved in the long chain from manuscript to book. It also made me realise that we need a proper critique of AI in academic publishing and that authors should help develop it.
My thinking about AI developed slowly, meandering through periods of ignorance, hesitation and irritation towards a position of active resistance. Like many others, I read a number of disturbing reports about AI, and particularly about GenAI chatbots – how they hallucinate, undermine learning and critical thinking, make users dependent, and negatively affect people’s brain activity. Over time, I also grew concerned by reports demonstrating how the proliferation of AI intersects with wider issues of precarity in the workplace. By the spring of 2025, when the production of my book entered its intensive phase, I had learned enough to feel certain that I should cultivate rather than overcome my scepticism about the technology. I also understood that any valuable critique should go beyond the moral panic about student essays that prevails in Higher Education and scrutinise the political economy driving GenAI.
In academic publishing, the adoption of AI tools is part of a broader trend towards casualisation and a reliance on contractors instead of in-house expertise to keep publishing costs low. Algorithmic analysis and automation have become normalised in the workflow of academic journals hosted by for-profit publishers such as Elsevier, where the input of reviewers and editors is slowly declining. During the production of academic monographs, which take years to write, many authors discover that their copy editors and proof-readers are freelancers for whom publishers act as providers of professional gigs rather than regular employers, while typesetting and other processes are routinely outsourced to external companies, often based in countries with high levels of inequality and low wages. Indicative of this wider trend was a viral Bluesky thread that caught my attention in early June. In it, a historian described spending considerable time rectifying mistakes that had been inserted into their text during copy-editing, suggesting that the work had been done, at least in part, by a machine. Several responses indicated that this was not an uncommon experience, while professional copy editors joined the thread offering to do the work instead.
While I never suspected AI involvement when my book was being copy-edited, I had my own reasons for irritation. At some point after the manuscript moved into production, the press notified me that they would produce ALT-text image descriptions for the e-book. This ALT-text, capped at 250 characters per image, would be invisible to anyone reading a print or standard electronic copy, but a screen reader would voice it to give readers a basic sense of the images. CUP needed to include ALT-text in response to the European Accessibility Act, a new legislative framework that came into force at the end of June 2025 and requires creators of digital and electronic products to take the needs of people with disabilities into account.
The ALT-text I received was of poor quality: awkwardly worded, inaccurate and in places plainly wrong. It contained generic phrases like ‘the legend is in a foreign language’. Strangely, one of the descriptions reproduced untranslated German specialist vocabulary exactly as it featured in the inscriptions of the historical photograph it referred to (Torfkraftwerk, which translates as peat-fuelled power plant). Even stranger, the ALT-text for a graph consisted of five lines of raw numbers. How would any of this help a reader relying on assistive technology?
The image descriptions had been provided by CUP’s contractor Straive (sic!), which handled the production of my book. Headquartered in Singapore, Straive offers digital solutions to ‘operationalize data analytics and AI for global enterprises’. In an email to my publisher, I expressed my unease about the ALT-text and my suspicion that it might have been produced by AI. I got an apology. The publisher had not wanted to burden authors with the requirements of the new legislation at short notice; delegating the task to the external supplier was a temporary compromise until they set up better processes. I was also reassured that the text had not been produced by AI. Fair enough. I could not prove otherwise, but I decided to write the descriptions myself.
Proof of how entrenched AI has already become in academic publishing, and how much further it will become entrenched, arrived soon after this episode, when CUP asked me to sign an ‘AI subsidiary aggregation addendum’ to my original contract. My signature would authorise the press to make my work available to GenAI companies in return for a blanket royalty rate. CUP presents this as a way for authors to receive some remuneration if their work is used to develop LLMs (large language models) – an improvement on the unauthorised scraping of content that made headlines earlier this year. But signing promised further benefits: there ‘may also be opportunities for your content to have greater visibility and impact if it is properly cited and attributed by AI tools’.
Alongside this positive incentivising, the message subtly introduced pressure, shifting from a tentative ‘may’ to a definite ‘will’: ‘Additionally, our existing routes to disseminate your research through third parties such as specialist content platforms or indexers and discovery services will increasingly involve AI components, meaning that opting into AI will be crucial for our long term ability to sell and maximise the impact of your book.’ This was serious. By not signing, I would not only forgo the extra visibility that AI tools might generate; I risked my book losing visibility altogether. I nearly succumbed. So much effort, only to see my book marginalised by an algorithm?
Agreeing seemed the obvious thing to do. But was it? As an environmental historian, I know that the simultaneous exploitation of labour and nature has underpinned modern economic growth. In fact, this story is key to my book. I show that the hardships of peat extraction, often endured by female peat workers, were rendered invisible by large technological systems and by a politics of marginalisation that left workers with little say, while peatlands turned into industrial wastelands. Digital extractivism operates similarly, but it is harder to detect, as our screens, keyboards and web interfaces give us no clues about the social and material contexts from which they emerge.
The evidence for AI’s extractive nature is overwhelming. Not only are LLMs built on content stolen from writers, artists, journalists and scholars. The industry’s reliance on massive data sets also requires content moderation that exposes poorly paid workers to the worst content online; journalists have documented the impact on these workers’ mental health, personal relationships and communities. An ecological perspective is no more comforting. Data centres powering AI consume enormous amounts of energy and threaten decarbonisation efforts. They also use vast quantities of water, often in places where water is increasingly scarce. It could be said, of course, that most technologies, digital or otherwise, embody injustice and extractivism. But this does not make ethical and ecological concerns about AI as the new default any less valid. In a world on fire, how can we concede so much power to an exploitative technology that produces questionable results, operates in opaque ways, deepens social precarity and fuels the planetary emergency?
Suggesting that AI-assisted tools ‘are increasingly going to be used in everyday life’, CUP’s messaging followed a linear notion of progress, casting AI as an inevitable, external force when it is, in fact, a new commercial product, designed to make a profit (though still barely making one) and released without a proper assessment of its social and environmental impact. Inevitability is at the heart of the industry’s myth. It is, as Emily M Bender and Alex Hanna write in The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, a misleading idea that ‘serves to obscure, rather than illuminate, what’s at stake when it comes to the current AI boom’. Tech companies do everything to turn inevitability into a lived experience, as they pollute their applications with ‘AI assistants’ and make opting out a cumbersome task for technology users. The inevitability myth is powerful. So powerful that I nearly got trapped.
Once I understood that my anxiety about losing impact and visibility only affirmed the narrative of the tech industry, I felt ready to refuse. When I did, a text box opened on my screen asking for my reasons. I typed: ‘Publishers like CUP should use their power and authority to ally with other actors in the publishing business to collectively push back against the interests of the tech industry rather than normalizing GenAI via royalties. For ecological and ethical reasons, I am deeply concerned by CUP’s willingness to work with GenAI companies and by the increasing reliance on AI solutions.’ Much more could be said, and others have said it far more eloquently. But I sensed I had regained some agency. And it felt right.
I am not naive. My decision will not change my publisher’s approach. In fact, other leading academic presses have signed or intend to sign contracts with AI companies too, presenting these as the best one can get in our brave new world. Neither will my refusal shield my work from being appropriated by LLMs. But within the bigger picture, this is certainly not my greatest worry. My far greater concern is that academic publishers help normalise AI as tech companies use their power to insert their dubious tools into key areas of producing and selling scholarly work.
Many people in academic publishing are probably as unsettled as I am about the new direction of the digital world. And I trust that publishers believe the content-for-royalties deals are in their authors’ best interest. CUP’s opt-in model is certainly more respectful than the opt-out models in place elsewhere. Still, I doubt that individual licensing agreements are the most effective way to approach the broader issue. Instead of letting the tech industry set the pace, why don’t publishers (particularly large ones) step up collectively to push for rigorous regulation and transparency? Why doesn’t the publishing industry resist AI companies’ thirst for ever more data? As the authors of the Artificial Power 2025 Landscape Report have argued, it is time to ‘make AI a fight about power, not progress’. Academic publishing is one of the many arenas where this fight needs to get off the ground.
29/10/2025
Katja Bruisch is an environmental historian at Trinity College Dublin. She is the author of Burning Swamps: Peat and the Forgotten Margins of Russia’s Fossil Economy (Cambridge University Press, 2025) and Als das Dorf noch Zukunft war: Agrarismus und Expertise zwischen Zarenreich und Sowjetunion (Böhlau, 2014).

