Our experts on AI’s impact: Discovery, consumption, and infrastructure

March 9, 2026
Tech Trends report

Scholarly publishing has gotten serious about using AI. But what does that mean practically for publishers’ consumption models and tech stacks? Drawing from the latest Tech Trends Report, our experts Charles Hemenway and Martin Rosén-Lidholm weigh in on where the industry is headed and how publishers can keep up.

ChatGPT was released to the public in late 2022. There was little fanfare, no snazzy product launch. But within weeks, millions of us were hooked. The little chatbot that could became the fastest-growing app in history, kicking off a “brave new AI era”.

Three and a bit years later, big, knotty questions about AI’s impact on our jobs, democracy, and innate human creativity remain. But its reshaping of our daily lives and work continues apace.

How has this transformation played out in scholarly publishing? The latest Tech Trends Report from Silverchair and Hum is a reliably great snapshot, featuring perspectives from publishers, institutions, funders, and vendors. It makes clear that the industry’s thinking about AI has fundamentally shifted.

Initial alarm and curiosity gave way to ad-hoc pilots with AI tools, and now, in 2026, there’s a desire to integrate AI deeply into core workflows. As Silverchair’s Jeremy Little puts it, “It’s time to move from experimentation to implementation.”

The report hits a lot of fascinating topics, from integrations to research integrity. They’re all topics we care deeply about, and build our platform around, every day here at ChronosHub. So we wanted our experts’ takes on the report’s big questions.

What area of scholarly publishing do you predict will be most transformed by AI in the coming year?

Tackling this one is Charles Hemenway, our Director of Publisher Relations. Chuck’s been helping publishers chart a course through policy and tech disruptions for decades. Here’s how he sees 2026 going:

“The primary impacts will be centered on the discovery and consumption of research.

Authors aren’t just using AI to find the articles and data they need. They’re using it to actually answer their research’s fundamental questions, distilling insights and building hypotheses that would have taken a huge human effort in the past. It’s a shift that seems inevitable given the surge in scholarly outputs we’re seeing.

This means that publishers’ primary consumers are no longer humans but LLMs and AI agents. And given that these technologies scrape the scholarly record largely indiscriminately, that record needs everyone’s protection.

Publishers must build more scrutiny and fraud detection into their workflows, ensuring that they’re “feeding the machine” accurate, trustworthy, and on-brand content.

At the same time, publishers must rethink their production and consumption models to maintain quality and financial viability. AI will be transforming things internally as well as externally. Any problem that was classically solved by people alone will need to be rigorously rethought.

Will the AI consumption model drive market consolidation? Without a doubt. But small and medium-sized publishers shouldn’t despair. The vendor landscape is brimming with tooling that can empower them.

Every publisher must continue to rigorously explore, evaluate, and experiment with vendors and partners, selecting the ones that deliver real, documentable ROI.”

How will data shape technology trends in 2026?

Answering this question is Martin Rosén-Lidholm, ChronosHub’s VP of Product and Engineering. Martin has forgotten more about AI than most of us will ever know. He sees the interplay between data and agentic AI as key.

“The question is no longer whether publishers have enough data or good enough data. It’s whether their data architectures can answer temporal questions (what happened, in what order, by whom) at the speed that AI-augmented workflows demand.

Today, most publishing systems store only a manuscript’s current status: “accepted” or “rejected”. Who reviewed it, what they flagged, what editorial decisions were made: all of that is scattered across disconnected systems. Or lost entirely.

Publishers are beginning to embed AI agents in their workflows. And these agents aren’t just add-ons; they’re becoming the main consumers of modular APIs, pulling what they need in real time and exchanging data with one another.

These agents need to know things instantly if they’re to be effective. For example, an integrity agent evaluating a submission needs to know right now whether this author has submitted fake papers before. A compliance agent needs to compare funder policy at the time of the grant against the policy at acceptance.
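To make that compliance check concrete, here’s a minimal sketch of the point-in-time lookup such an agent would run. The `PolicyVersion` record and its fields are hypothetical, invented for illustration; real funder-policy data would be richer:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record: one row per published version of a funder's OA policy.
@dataclass
class PolicyVersion:
    effective_from: date
    text: str

def policy_as_of(versions: list[PolicyVersion], when: date) -> PolicyVersion:
    """Return the policy version that was in force on a given date."""
    in_force = [v for v in versions if v.effective_from <= when]
    if not in_force:
        raise ValueError(f"No policy version in force on {when}")
    return max(in_force, key=lambda v: v.effective_from)

# A compliance agent compares the policy at grant time with the policy
# at acceptance: two different answers from the same funder's history.
versions = [
    PolicyVersion(date(2023, 1, 1), "Green OA permitted, 12-month embargo"),
    PolicyVersion(date(2025, 7, 1), "Immediate OA required, CC BY only"),
]
at_grant = policy_as_of(versions, date(2024, 3, 15))
at_acceptance = policy_as_of(versions, date(2026, 1, 20))
print(at_grant.text != at_acceptance.text)  # True: the policy changed mid-project
```

The point is that the question isn’t “what is the policy?” but “what was the policy then?”, and a status-only database simply can’t answer it.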

This is also where AI stops being abstract. When an editor asks a conversational assistant, “Show me the full decision history of this manuscript”, the assistant needs a reliable trail of events to respond accurately.

This is why event sourcing, recording actions as immutable, ordered facts rather than constantly overwriting rows in a database, will shape the next generation of publishing software.
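As a rough illustration of the pattern (a sketch, not any particular vendor’s system; the event names and in-memory store are invented for the example), an event-sourced manuscript record could look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical event: an immutable fact about what happened, when, and by whom.
@dataclass(frozen=True)
class Event:
    manuscript_id: str
    actor: str
    action: str  # e.g. "submitted", "review_flagged", "accepted"
    at: datetime

class EventStore:
    """Append-only log: events are recorded, never updated or deleted."""
    def __init__(self) -> None:
        self._log: list[Event] = []

    def append(self, event: Event) -> None:
        self._log.append(event)

    def history(self, manuscript_id: str) -> list[Event]:
        # The full, ordered decision trail: exactly what the editor's
        # "show me the decision history" question needs.
        return [e for e in self._log if e.manuscript_id == manuscript_id]

    def current_status(self, manuscript_id: str) -> str:
        # Current state is derived by replaying events, not stored as a
        # mutable row that overwrites its own past.
        events = self.history(manuscript_id)
        return events[-1].action if events else "unknown"

store = EventStore()
now = datetime.now(timezone.utc)
store.append(Event("ms-42", "author:chen", "submitted", now))
store.append(Event("ms-42", "reviewer:r1", "review_flagged", now))
store.append(Event("ms-42", "editor:diaz", "accepted", now))
print(store.current_status("ms-42"))               # "accepted"
print([e.action for e in store.history("ms-42")])  # whole trail, in order
```

Because the log is append-only, the “accepted” status and the audit trail that justifies it come from the same source of truth, which is what the integrity and compliance agents above depend on.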

Publishers whose architecture captures the full temporal narrative of their content will hold the most valuable asset in the AI era: not the content itself, but the provable, auditable, monetizable record of how that content was created, vetted, and used.”

It’s time to explore flexible infrastructure

The Tech Trends Report echoes much of Chuck’s and Martin’s thinking on infrastructure. As Hum’s John Challice puts it:

“We’re on the cusp of a massive change in the core publishing infrastructure, and publishers are going to have to make some hard calls—what to adapt, what to add, what to abandon.”

At ChronosHub, we help publishers make those calls without gutting their existing tech stacks or suffering change-management nightmares. With our platform, they can connect the systems that work, and test and adopt new ones, on their own terms. Reach out if you want to know more.
