Strategies for Integrating AI into Publishing Operations


There are few things that publishing companies are less comfortable doing than integrating complex digital technologies into their day-to-day operations. That’s understandable. AI, in particular, is causing anxiety for everyone, and not just in publishing. It’s new, it’s mysterious, it’s personalized, it’s powerful. People are threatened by AI for numerous reasons. Changing attitudes takes time.

But this is not a great time to be timid with technology.

There are calm and rational approaches available for integrating AI into your publishing operation, regardless of the size of your organization.

The impetus must come from the top. The very top. Senior executives need to embrace a vision of AI’s potentially transformative impact and communicate a program to staff across the organization. The program may be little more than “experiment, document your experiments, and share.” That’s a good start.

The Economist recently referenced an organizational tactic for new technology adoption called “the lighthouse approach.” You create a beacon by selecting one high-profile proof of concept that can be implemented quickly, that everyone can relate to.

Publishing companies are handicapped by the hubbub surrounding copyright: authors are up in arms. A May 2023 Authors Guild survey found that “90 percent of writers believe that authors should be compensated if their work is used to train generative AI technologies,” and 67 percent said they “were not sure whether their publishing contracts or platform terms of service include permissions or grant of rights to use their work for any AI-related purposes.” Those uncertain authors are now asking their publishers if AI is being used in the editing or production of their work, and some powerful authors are insisting that it not be. They’re looking for the AI equivalent of a peanut-free bakery.

This is a thorny problem for publishers—if you can’t use AI on the books you’re planning to publish, what can you use it for?

Developing and communicating AI policies

Despite its widespread use, few publishers have publicly defined their AI policies or communicated their approaches to AI to the public. The term ‘the public’ has a slippery significance here, when you consider the different publics addressed by trade, scholarly and educational publishers.

For trade publishers the most important audience is authors and their agents. Scholarly publishers face different obstacles when they weigh AI’s promising impact on research against its more problematic impact on converting research into narrative (Avi Staiman wrote a thoughtful post on this topic). For educational publishers, establishing policies is tricky, as AI’s encroachment on the practice of teaching, of education, is multifaceted and complex.

I think that publishers face two big challenges as they move forward with AI technologies. The first is to develop a company position on how to approach AI generally and how to incorporate it into their workflows. The second challenge is communicating that position, clearly and unambiguously, to their constituents.

The publisher policies I have seen are mostly flawed. Some of them are in fact policies directed externally, at authors, with a range of admonitions about what is acceptable practice (not much) and what is not acceptable (lots). O’Reilly’s “AI Use Policy for Talent Developing Content for O’Reilly” goes on for pages and pages, with esoteric guidance, such as “DO NOT use any OSS GenAI Models that produce software Output that is subject to the terms of a copyleft or network viral open source license.”

On the other hand, scholarly publisher Elsevier, in the “Elsevier Policies” section of its website, includes statements on “Responsible AI Principles,” “Text and Data Mining,” and “The use of generative AI and AI-assisted technologies in writing for Elsevier.”

The few internal, unpublished, publisher policies that I’ve seen are conservative, excessively so. These publishers reacted too quickly to the range of perceived and possible threats, and to their authors’ anxieties, and have hamstrung their own ability to engage robustly with this fast-developing, fast-changing technology.

It’s a given that they will use AI ‘responsibly,’ whatever that means. It’s a given that they have the utmost concern for authors’ intellectual property and for aggressively protecting authors’ copyrighted work. (Although, of course, these broad principles must be declared publicly, and often reiterated.)

But what else?

  • Will they allow AI to have a role in editorial acquisitions? Can AI take a look at the slush pile?

  • Will they allow AI to have a role in developmental editing, line editing and copyediting?

  • Will they allow AI to have a role in determining print runs and allocations?

  • In creating accessible ebook files, including alt-text?

  • In aiding audiobook creation in cases where it’s not economically realistic to hire talented human narrators?

  • In aiding foreign language translation into markets where rights would never be sold?

  • In developing marketing material at scale?

  • In communicating with resellers?

If so, they must make this clear and explain the thinking behind these policies. Publishers must be brave in countering the many objections of most authors at this time of fear and doubt.

Job considerations


Only the largest publishers will be able to hire dedicated staff to work with AI software and systems. The average publisher will want to expose all of their staff to AI tools, expecting that each might explore using AI to find efficiencies in their work.

At the February 2024 PubWest conference in Arizona a speaker from outside the publishing industry suggested that one of the uses for AI will be replacing interns. The room burst into flames. She meant well—indeed an April 10, 2024 report in the New York Times describes how Wall Street investment banks are looking to replace many of their interns with AI. As in publishing, an obvious concern arises: how do you find senior analysts if they can’t start off as junior analysts?

The publishing industry has always relied on internships. A 2019 study found that 80 percent of the people who had worked in publishing for less than fifteen years had previously interned.

In part it’s a way to get the grunt work dispatched at a reasonable cost. But that pales against the larger reality that no publishing school can equip someone to join a publishing company at the level of middle-manager. The only way to develop the skilled staff of tomorrow is to train interns and apprentices today.

The objective here is not to seek to replace interns with AI, but instead to make their work more productive and rewarding using AI tools, benefitting both the intern and the publishing company.