
Highlights from the STM conference in Frankfurt

October 21, 2024

After the STM Conference, my head was buzzing with all things AI. I had written nearly 3,000 words for a post-conference blog post: a short overview of the different AI use cases in research publishing. And then the Frankfurt Book Fair came along, with busy days of meetings, dinners, and catching up with colleagues. So the below is a slightly delayed recap of my key takeaways from the STM Conference.

 

Producing research content with the use of AI

This bucket focuses on content-generating AI, i.e. LLMs like ChatGPT, and includes discussions on what is ethical for authors and researchers to use, and what they should declare.

Non-native English speakers putting a draft manuscript through an LLM to improve the language, using it as an editing service, seems totally fine, but asking an LLM to write an entire paper is not. One step further in that direction, we have entire papers created and spat out by paper mills. What about the grey area in the middle?

A lot of education is needed in this space, I believe. I loved Lisa Janicke Hinchliffe's example of asking students why they used a certain AI tool, to encourage critical thinking around these tools rather than just using them blindly.

AI models using published research as source material

This bucket covers generative AI like ChatGPT making use of scholarly content to train the underlying models. This is where the discussions on attribution, licensing, open access vs. closed access, copyright exemptions, and so on sit. The conference had a dedicated session on exactly this.

During the day there was a lot of discussion around attribution: as long as LLMs provide attribution about where they got the content from, and what the original source is, that's fine. There was also some discussion about LLMs becoming the new way for researchers to search for and consume content, instead of just googling it. Over dinner, someone even shared an anecdote about a doctor using ChatGPT to look up medical questions before treatment!

But it doesn't seem to be as easy as just including a source and attribution in these existing, openly available models. During one of the breaks, ChronosHub's founder Christian Grubak explained to me that LLMs like ChatGPT can't actually provide a list of references: these systems are based on predicting the next word, not on reproducing information from a specific source. Daniel Ebneter, CEO at Karger Publishers, made the same point in the closing Executive Panel, noting that LLMs only produce exact quotes by accident. My question: why didn't we start the day by laying down some ground knowledge, so we were all on the same page to begin with?

I do wonder if these models, and spin-offs, will come up with other ways to solve this problem, such as by using RAG (Retrieval Augmented Generation), so that the system is equally focused on retrieving content from specific sources (and listing them) as on generating content based on predictions. The difference between LLMs and RAG models was also something I wasn't familiar with, and an explanation could have been a helpful start to the day for me.
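Since the distinction was new to me, here is a toy sketch of the retrieval-plus-attribution idea. Everything in it is made up for illustration: the mini corpus, the DOIs, and the naive word-overlap scoring, which stands in for the vector embeddings and actual LLM generation step a real RAG system would use.

```python
# Toy RAG sketch: retrieve passages from a known corpus, then answer
# with explicit attribution. Corpus and DOIs are invented for illustration.

corpus = {
    "doi:10.1000/example.1": "Open access licensing lets readers reuse articles.",
    "doi:10.1000/example.2": "Peer review checks methods and data before publication.",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank documents by naive word overlap with the query
    (a real system would use vector embeddings instead)."""
    query_words = set(query.lower().split())
    scored = [
        (len(query_words & set(text.lower().split())), source, text)
        for source, text in corpus.items()
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [(source, text) for _, source, text in scored[:k]]

def answer(query: str) -> str:
    """Answer a query from retrieved passages, listing the sources."""
    hits = retrieve(query)
    context = " ".join(text for _, text in hits)
    sources = ", ".join(source for source, _ in hits)
    # A real RAG system would pass `context` to an LLM to generate prose;
    # the key point is that the sources are known and can be cited.
    return f"{context} [Sources: {sources}]"

print(answer("How does peer review work?"))
# -> Peer review checks methods and data before publication. [Sources: doi:10.1000/example.2]
```

Because the generation step is grounded in retrieved documents, attribution falls out naturally, which is exactly what a pure next-word predictor can't offer.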

A RAG model based solely on publisher-produced source material, with sources listed, sounds like an interesting idea to me, and it seems that licensing content for 'closed' use to specific organizations is already happening. Shouldn't publishers work together on this, and create a 'closed' LLM that is based on research publications only, so the content doesn't sit alongside general web content? Surely there'd be interest from libraries in access to something like that, and a shared need to collaborate against open systems, for the greater benefit of protecting the integrity of the scholarly record.

AI-driven tools to optimize workflows and other parts of the research publishing process

Again, a rather 'wide' bucket, and one that was in use long before the launch of ChatGPT. It also includes AI-driven tools that try to figure out whether content has been written by an AI, tools that rely on AI-based metadata analysis to make predictions around this, such as research integrity tools, and tools to help with peer reviewer identification and recommendations. I find this space quite interesting, not least because that's where ChronosHub fits in. Over dinner I learned about an interesting upcoming start-up called Blueberg.ai, which is looking to use AI to match manuscripts to submission guidelines.

 

In conclusion

For all of the use cases outlined above, as Linda S. Bishai, Research Staff Member at the Institute for Defense Analyses, said in the keynote presentation, we must ask ourselves what the benefit is to us as humans, and what the ethical, legal, and societal implications could be, especially if misused. In all cases, we must ensure that "the human is in the loop," as Miriam Maus from IOP Publishing put it.

If you're using AI tools to write content, be honest about it, and check the facts, because ultimately your name is on it: the AI is a mere assistant, and the human carries full responsibility and accountability. If you use AI-driven tools to make decisions, use them to generate data, and then check the facts and make your own decisions. One example I found interesting was the discussion around peer review: AI is very far from being able to carry out peer review, apply intelligence, and make a judgement, but it might be helpful in surfacing warning signs, verifying data, and so on.


Romy Beard
Head of Publisher Relations at ChronosHub

Romy specializes in the academic online publishing industry, with a focus on publisher relations, and is one of our key experts in Open Access publishing terms.
