In the rapidly evolving landscape of scholarly publishing, the advent of AI and other new technologies has sparked considerable discourse. While the buzz around tools like ChatGPT may seem recent, it’s crucial to recognize AI’s longstanding presence in software dating back to the late ’90s. The challenges and opportunities it brings have long been a consideration in scholarly publishing, which now contemplates how artificial intelligence might reshape centuries-old processes. That conversation has been reignited as the challenges have become more pronounced, particularly the challenge of keeping pace with the technology’s rapid evolution.

We explored all of this in our recent Karger in Conversation panel, where experts from the scholarly publishing, librarianship, and research sectors highlighted the inevitable impact of AI on the industry, presenting opportunities for efficiency, fraud prevention, and even the reduction of language barriers for researchers. Each speaker brought a unique perspective, revealing both common ground and differing viewpoints. In this brief article we summarize a few key takeaways, but to catch the in-depth discussion of one of today’s most critical and fascinating issues, be sure to watch the full conversation.

AI is Here to Stay

Michelle Kraft, Library Director at Cleveland Clinic, set the stage, pointing out that “there are so many AIs out there that it’s hard to keep track of” and clarifying that “if people just limit their focus to that kind of AI (ChatGPT), (they) are really limiting themselves overall.” This sentiment was echoed by tech consultant Phill Jones, who celebrated ChatGPT’s success over the past year, emphasizing its groundbreaking ability to produce more human-like responses, a significant leap in text generation. However, Phill acknowledged that this very strength may fuel overblown concerns about ChatGPT, attributing the fear to its convincing mimicry of human language. His remarks set up a deeper exploration of AI’s implications, balancing excitement with a note of caution. The underlying message was clear: AI’s journey has been and will continue to be a long one, and its trajectory is one of perpetual advancement.

Working in Tandem with AI

Researcher, Clinician, and Karger Ambassador Francesco Andrea Causio joined the conversation, emphasizing the role of AI in assisting researchers like himself and underscoring its importance in his everyday work. Andrea highlighted the power of AI-generated plain language summaries as valuable tools for understanding intricate subjects beyond a researcher’s immediate expertise. However, like Michelle and Phill, he cautioned against relying on them blindly, advocating cross-verification with experts to ensure accuracy.

These technologies have become integral to practical tasks such as summarizing content and synthesizing information. However, as Michelle pointed out, although gathering and sorting information is something AI does well, “we still need…people to actually analyze the data.” She also raised a red flag about relying on AI to generate complex searches and citations. Michelle stressed the necessity of human oversight to counter the inherent limitations of language models, pointing out the risks of inaccuracies, “hollow citations,” and “hallucinations,” and emphasized the indispensable role librarians play in guiding researchers through the complex information landscape.

Ethics in AI

ImageTwin CEO Patrick Starke rounded out the conversation. He noted that if creating content, especially in another language, and summarizing texts are among the main tasks we currently ask of AI, there is another important side of the coin, the one he focused on when he co-founded his image-validation company: “the application of it to actually identify whether that text (or image) was generated by an artificial intelligence in the first place”. This brings us to the crucial question of transparency and ethics, which we will explore in a future blog post.

As we navigate the AI frontier, expert consensus points to responsible use, ethical considerations, and collaborative efforts as key pillars for ensuring the reliability and integrity of information in health sciences.

Stay tuned for future blog posts where we will explore the intricacies of our discussion in greater detail and address additional points. We’ll also be addressing questions from the audience, so be on the lookout for more insights.

