12 December 2025
Authors: Ravi Venkataramani, Sowmya Mahadevan, Jason De Boer, Janani Krishnan, Louise Russell, Amy Jones, Shehnaz Ahmed, Lou Peck, Josephine Weisflog
Bringing industry peers together for sustained, focused discussion is rare. On 20 May 2025, Publisherspeak UK convened expert speakers and publishing professionals in London for a one-day, unconference-style forum that used design-thinking workshops to produce tangible outputs.
This year’s event centred on two core themes: the growing role of AI in scholarly publishing, and the evolving dynamic between publishers and authors. Participants included professionals from scholarly societies, university presses, and commercial publishers. The goal of the breakout sessions was to reach consensus on how the industry can positively effect change.
This article provides a detailed overview of each of the four breakout sessions, covering what the groups discussed, the challenges they identified, and the solutions they proposed.
Session 1: “Ensuring trust, equity, and unbiased distribution in an AI-driven world.”
With Artificial Intelligence (AI) increasingly central to how research is conducted, communicated, and consumed, publishers need to adapt to this new reality. This breakout group, chaired by Amy Jones (Chief Transformation Officer at Emerald Publishing), focused on a key concern: trust. Specifically, how to maintain the quality, accuracy, and reliability of scholarly information in a landscape where AI tools are reshaping research discovery and visibility.
The group explored ways to help users recognise trustworthy content, and reduce the impact of misleading or low-quality outputs from AI systems. One idea was a verified marker, similar to a certification stamp. This would give users a clear signal that a piece of content had gone through a recognised peer review process. It would help researchers and readers identify content they could trust. In turn, publishers would benefit by reinforcing their role as gatekeepers of quality. Success could be measured by greater usage and higher citations of such content.
Another solution discussed by the group was the creation of a hierarchy of sources. Each publisher, or ideally a wider industry standard, could define which types of content ought to be considered more trustworthy than others. This might include peer-reviewed journals, books, or specific repositories, as well as grey literature and preprints. These hierarchies could also include recommendations for which AI tools are most appropriate for different kinds of users or research needs. This approach could guide both expert and general users towards high-quality knowledge, while allowing publishers to define and shape the standards being used.
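To make the idea concrete, here is a minimal sketch of how a publisher-defined source hierarchy might be encoded and used to rank retrieved content. The tier names, ordering, and field names are purely illustrative assumptions, not an agreed industry standard.

```python
# Hypothetical trust tiers: lower number = higher trust. These labels are
# illustrative only; a real scheme would need community-agreed definitions.
TRUST_TIERS = {
    "peer_reviewed_journal": 1,
    "scholarly_book": 2,
    "curated_repository": 3,
    "preprint": 4,
    "grey_literature": 5,
}

def rank_sources(sources):
    """Order retrieved items so higher-trust content surfaces first."""
    return sorted(sources, key=lambda s: TRUST_TIERS.get(s["type"], len(TRUST_TIERS) + 1))

results = [
    {"title": "Conference slides", "type": "grey_literature"},
    {"title": "Clinical trial report", "type": "peer_reviewed_journal"},
    {"title": "Working paper", "type": "preprint"},
]

for item in rank_sources(results):
    print(f'{item["type"]:>24}  {item["title"]}')
```

An AI discovery tool could apply such a ranking at retrieval time, weighting rather than excluding lower tiers, so that preprints and grey literature remain findable but are clearly contextualised.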
The group also discussed the idea of dynamic metadata. Instead of relying on fixed data sets, AI systems could draw from live metadata, including citations, usage statistics, or notices of corrections. An AI system working this way would have more context when presenting results, support more accurate outputs, and improve over time as large language models (LLMs) are trained on better-quality content and metadata. Better training data should, in turn, help GenAI vendors develop better tools at lower cost.
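As a rough illustration, the sketch below assembles a live metadata record that an AI system could read at query time. The fields, values, and DOI are hypothetical placeholders; a real implementation would pull these signals from publisher or aggregator APIs.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LiveMetadata:
    """A point-in-time snapshot of live signals for one article (illustrative)."""
    doi: str
    citation_count: int
    monthly_downloads: int
    correction_notices: list[str] = field(default_factory=list)
    retrieved_on: date = field(default_factory=date.today)

    def context_note(self) -> str:
        """Summarise live signals for an AI system presenting this article."""
        note = (f"{self.doi}: {self.citation_count} citations, "
                f"{self.monthly_downloads} downloads/month")
        if self.correction_notices:
            note += f"; {len(self.correction_notices)} correction notice(s) issued"
        return note

# Hypothetical record: the DOI and numbers are invented for illustration.
record = LiveMetadata(
    doi="10.1234/example.5678",
    citation_count=42,
    monthly_downloads=310,
    correction_notices=["Erratum issued 2025-03-01"],
)
print(record.context_note())
```

The key design point is the `retrieved_on` timestamp: because the record is fetched live rather than baked into training data, a correction issued yesterday can change how an article is presented today.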
Another idea was to build interactive bots based on journals. These bots could respond to user behaviour and guide readers to relevant, high-quality articles. Rather than letting AI platforms control the user journey completely, these bots would enable publishers to connect with readers directly. They would also provide a more human-like experience, which might be particularly beneficial for those unfamiliar with accessing and navigating scholarly publications.
Lastly, the team discussed the use of small language models (SLMs). These would act as a buffer between large AI systems and the end user. An SLM could test the quality of an output and screen it before it is presented, helping to reduce errors, improve trust, and give publishers more control over how their content is represented within AI environments.
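The screening step could look something like the sketch below, where a lightweight check sits between a large model's draft answer and the user. The heuristics here are simple placeholders standing in for a trained SLM; the phrases and function names are invented for illustration.

```python
# Phrases that often signal unsupported claims (illustrative placeholders).
UNSUPPORTED_PHRASES = ["according to an unnamed study", "it is widely known"]

def slm_screen(draft: str, cited_dois: list[str]) -> tuple[bool, str]:
    """Return (approved, reason). A trained SLM would score the draft instead."""
    if not cited_dois:
        return False, "no verifiable sources cited"
    for phrase in UNSUPPORTED_PHRASES:
        if phrase in draft.lower():
            return False, f"unsupported claim marker found: '{phrase}'"
    return True, "passed screening"

def answer_user(draft: str, cited_dois: list[str]) -> str:
    approved, reason = slm_screen(draft, cited_dois)
    if approved:
        return draft
    # Fall back rather than present an unvetted answer to the reader.
    return f"[Withheld pending review: {reason}]"

print(answer_user("It is widely known that X cures Y.", []))
print(answer_user("Trial ABC showed a 12% reduction (doi:10.1234/abc).", ["10.1234/abc"]))
```

Because the screening model is small, it can run cheaply on every response, which is what makes it practical as a buffer rather than a bottleneck.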
In summary, the group’s ideas focused on building confidence in the systems that are rapidly reshaping scholarly communication. The goal is to create a more transparent and informed space for knowledge sharing, where users can trust the knowledge they find and publishers continue to play an active and visible role.
Session 2: “The evolving role of AI in academic work: biases, limitations, speed vs. accuracy.”
This session, chaired by Shehnaz Ahmed (Director of Research and Publishing at the British Association of Dermatologists), explored the growing presence of AI tools in academic research and the risks they pose to quality, integrity, and transparency. Trust is the central premise of research, so how do we maintain it in the system? While researchers are increasingly using AI to manage large volumes of information or streamline writing, this shift brings important questions:
- Are these tools being used responsibly?
- Do researchers understand and trust these tools, and how can they validate their outputs or identify biases?
- What role should publishers play in setting expectations around the usage of these tools?
The group focused on quality issues that stem from unchecked or inappropriate use of AI. These include poorly reviewed submissions, questions around authorship when AI tools are used in the writing process, and reliance on tools that may introduce bias or factual errors. The consensus among participants was that without a human in the loop to check and validate AI outputs, such issues could significantly damage trust in the research process.
To address this, the group proposed two solutions. The first was an AI audit and badging tool, which would help verify responsible use of AI in research workflows. The tool could flag signs of poor-quality content, such as text generated without proper human oversight. It could also serve as a visible marker of trust, similar to existing metadata tags or author identifiers. If implemented successfully, such a solution might bring fewer retractions, stronger peer review, and greater confidence among readers, funders, and publishers.
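One way such a badge might be derived is sketched below, from a structured declaration of AI use submitted alongside a manuscript. The fields, badge labels, and decision rules are hypothetical assumptions; a real scheme would need community-agreed criteria.

```python
from dataclasses import dataclass

@dataclass
class AIUseDeclaration:
    """A structured self-declaration of AI use (fields are illustrative)."""
    tools_used: list[str]          # e.g. ["language polishing assistant"]
    human_reviewed_output: bool    # was every AI-assisted passage checked?
    used_for_core_claims: bool     # did AI generate substantive findings?

def assign_badge(decl: AIUseDeclaration) -> str:
    """Map a declaration to a hypothetical badge label."""
    if not decl.tools_used:
        return "no-ai-use-declared"
    if decl.human_reviewed_output and not decl.used_for_core_claims:
        return "responsible-ai-use"        # assistive use with human oversight
    return "flag-for-editorial-review"     # needs a closer human look

decl = AIUseDeclaration(
    tools_used=["language polishing assistant"],
    human_reviewed_output=True,
    used_for_core_claims=False,
)
print(assign_badge(decl))  # responsible-ai-use
```

As with existing author identifiers and metadata tags, the value of such a badge would come less from the mechanics than from its consistent, visible use across publishers.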
The second solution highlighted the need to keep humans in the loop. While AI can provide assistance and guidance, responsibility for decision-making should ultimately rest with the human user. Additionally, the group encouraged authors to use AI tools only to refine content they have already written, while ensuring that the tools preserve accuracy and intent.
Both solutions showed the need for publishers to be active in this space rather than passive observers. Since researchers are already using AI tools, the industry needs to provide clear guidance, training, and standards to direct their use. Participants saw value in developing shared practices, promoting recommended tools, and making this guidance visible across conferences, workflows, and editorial policies.
Session 3: “ECRs are disconnected from publishers and either do not know where to submit or are told where to submit.”
One of the key topics discussed during this breakout session, chaired by Lou Peck (Chief Executive Officer and Founder of The International Bunch), was the experience of early career researchers (ECRs), particularly those based in lower- and middle-income countries, who often feel disconnected from the publishing process, publishers, and journals. The group identified a persistent lack of understanding of journal submission procedures and of the value publishers bring, which can leave ECRs uncertain about where to submit their work or overly reliant on the preferences of supervisors and senior colleagues.
The challenge, as framed by this group, was to make publishing easier and more transparent for ECRs. Participants highlighted how ECRs are often not confident about where to publish and encounter a confusing system with minimal guidance on how to prepare a manuscript, choose an appropriate journal, or understand what happens to their manuscript after submission, with wide variation across journals. This disconnect can delay publication and deter capable researchers from engaging more fully with academic publishing.
To address this, the group developed two solutions. The first focused on signposting and education. They recommended developing more accessible author guidelines, clearly outlining journal scope, the goals of the publication, what editorial offices do, and what authors can expect at every stage of the workflow, ideally as a central industry resource rather than organisation-specific documentation. This could be supported by targeted author workshops aimed at demystifying the process and setting clearer expectations. Success indicators for this solution include fewer misdirected submissions, an improvement in the overall quality of manuscripts, and a better experience for authors and editorial teams alike.
The second solution targeted institutions and focused on upskilling through shared toolkits. These toolkits would help institutions and research bodies introduce standard publishing practices as part of ECR development programmes. The idea is to embed a shared understanding between institutions, authors, and publishers, giving ECRs the confidence to engage directly with journals and make informed choices. It also encourages a shift away from dependency on senior researchers when navigating decisions in the publishing process.
Both solutions proposed by this group aim to build stronger, more informed relationships between ECRs and the publishing ecosystem. If successful, the outcomes would include fewer desk rejections, quicker editorial decisions, better alignment between submissions and journal scopes, and a greater sense of inclusion for ECRs, leading to better retention throughout the researcher lifecycle. Perhaps most importantly, these efforts could help cultivate the next generation of editors, reviewers, and scholarly contributors.
Session 4: “Researchers don’t think publishers add value in the publishing process.”
This session, chaired by Josephine Weisflog (Senior Product Manager at BMJ Group), tackled a clear but complex challenge: the belief held by many researchers that publishers do not add value to the publishing process. The group began by questioning the assumption behind the problem. They considered whether publishers truly bring value, or whether that value needs to be better understood and communicated. The group then focused on identifying what publishers contribute and how that contribution can be made clearer to researchers.
The group’s first solution focused on transparency and communication. Many researchers submit a paper and then feel disconnected from the process. They may not know what is happening or why certain steps in the process are required. The group recommended that publishers clearly explain each part of the process, covering what happens to a manuscript after submission, who is involved, and why those steps matter. Communicating this more openly could help manage expectations and reduce frustration. Publishers could also explain the expertise involved in handling submissions. This could improve author satisfaction and potentially increase the number of repeat submissions.
The second solution looked at how publishers can demonstrate their role in maintaining trust in research. This involves showing the decisions made behind the scenes, such as rejecting papers that do not meet ethical standards or retracting flawed articles. The group suggested creating identity markers or even a trustworthiness index that highlights this work. By doing so, publishers can make visible the work they do to protect the quality and reliability of the research they publish.
The third solution focused on reach and impact. Authors want to know their work is visible and that it is making a real-world difference. The group discussed how publishers can support this by building on existing citation-based indicators and complementing them with alternative measures, such as Altmetrics, to highlight how research is being used outside academic circles. These efforts would help authors understand the broader value of being published in a trusted journal.
This group highlighted the need for publishers to speak more directly to the needs and concerns of researchers. Clear communication, strong ethical standards, and demonstrated impact all help close the gap between what publishers do and what researchers see.
Shared challenges, collective progress
Across the four breakout sessions at Publisherspeak UK 2025, a common message came through: the need for better connection, clearer communication, visible trust markers, and the adoption of shared industry standards across the publishing process. Each group took on a different challenge, but in identifying solutions, all returned to the importance of making publishing work better for the communities it serves.
Whether the focus was on helping ECRs, showing the value publishers bring, or improving how research is discovered, there was a strong sense that while the challenges are complex, there are real steps the industry can take, collectively and collaboratively, to make progress. We are excited to see how these initiatives develop, and we encourage members of the scholarly community to take them forward.