Can AI Be Responsible? The Case for Elsevier's Scopus AI

If the scholarly publishing community has learned nothing else over the last five years, it's that, for better or worse, AI is here to stay. Peer reviewers are using it. Authors are using it. We've talked at length about the use of AI in scholarly publishing and the negative connotations it carries, but is there a way to use AI "responsibly"? Elsevier seems to think so.


The publishing giant announced in August 2023 that it would be launching Scopus AI, a new tool described as "a next-generation tool that combines generative artificial intelligence with Scopus' trusted content and data to help researchers get deeper insights faster, support collaboration and societal impact of research." Functions described by Elsevier at the time included summaries based on abstracts within the Scopus database, "Go Deeper Links" that let researchers easily explore related topics, and natural-language queries that allow users to ask questions in a conversational manner, as they would with an AI chatbot. The tool is designed to help scientists (especially early-career researchers) navigate the ever-growing amount of data within their fields and find the relevant sources and references needed to improve their manuscripts before and after submission.
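To make that workflow concrete, here is a minimal, purely illustrative Python sketch of the "ask a question, get a summary plus follow-up topics" flow described above. The toy data, matching logic, and function names are all invented for this example; Elsevier has not published how Scopus AI works internally.

```python
# Illustrative sketch only: a toy version of a "natural language query over
# abstracts" flow. Nothing here is Elsevier's code; the data is invented.

abstracts = {
    "Smith 2022": "Machine learning accelerates literature screening in systematic reviews.",
    "Lee 2023": "Generative models summarize research abstracts with varying accuracy.",
}

def answer(query: str) -> str:
    """Build a 'summary' only from matching abstracts, plus follow-up topics."""
    terms = set(query.lower().split())
    hits = {ref: text for ref, text in abstracts.items()
            if terms & set(text.lower().split())}
    if not hits:
        return "No matching abstracts found."
    summary = " ".join(f"{text} ({ref})" for ref, text in hits.items())
    # "Go Deeper"-style suggestions: other terms that appear in the matched abstracts.
    topics = sorted({w.strip(".").lower() for t in hits.values() for w in t.split()} - terms)[:5]
    return f"Summary: {summary}\nGo deeper: {', '.join(topics)}"

print(answer("How do generative models summarize research?"))
```

A production system would of course use a large language model and a real index rather than keyword overlap, but the shape of the pipeline is the same: retrieve first, then summarize only what was retrieved.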

After months of testing with thousands of global users, Elsevier officially launched the tool to all of its customers in early 2024. It draws on "trusted content" from more than 29,000 academic journals and 300,000 books, totaling more than 1.8 billion citations and 17 million author profiles. In launching the tool, Elsevier put its biggest emphasis on the word "responsible" in its promotional material, noting that the company has "used AI and machine learning responsibly in our products" for over 10 years.

One of the key claims in the promotional material is that the tool has "legal and technology protections to ensure zero data exchange or use of Elsevier data to train OpenAI's public model." Fear of data harvesting and misuse is one of the major concerns associated with AI tools. The tool also claims to have source transparency built in, guaranteeing a reference for any information it surfaces, which in theory should result in fewer "hallucinations" (plausible-sounding results generated by AI that are not grounded in real sources).
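That "source transparency" claim amounts to grounding: the system is only allowed to answer from retrieved records, and every statement carries a citation back to one of them. Here is a hedged sketch of that idea, again with invented data, identifiers, and function names rather than anything from Elsevier:

```python
# Minimal sketch of citation-grounded answering (hypothetical, not Scopus AI's code).
# The idea: answer only from retrieved documents and attach a reference to each
# snippet, so every claim can be traced back to a real source.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str   # e.g., a DOI or database record ID (placeholder values here)
    title: str
    abstract: str

# Toy stand-in for an indexed literature database.
CORPUS = [
    Document("doi:10.0000/a1", "Peer review and AI",
             "Large language models are increasingly used to screen submissions."),
    Document("doi:10.0000/b2", "Hallucination in LLMs",
             "Generative models can produce fluent but unsupported statements."),
]

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d.abstract.lower().split())), d) for d in corpus]
    return [d for score, d in sorted(scored, key=lambda x: -x[0]) if score > 0][:k]

def grounded_answer(query: str) -> str:
    """Answer only from retrieved sources; decline rather than guess."""
    hits = retrieve(query, CORPUS)
    if not hits:
        return "No supporting sources found; declining to answer."
    lines = [f"- {d.abstract} [{d.doc_id}]" for d in hits]
    return "Based on the retrieved literature:\n" + "\n".join(lines)

print(grounded_answer("How are language models used in peer review?"))
print(grounded_answer("What is the boiling point of unobtainium?"))
```

The key design choice is the refusal branch: when retrieval comes back empty, the system says so instead of letting a generative model improvise, which is exactly where hallucinations creep in.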

Nonetheless, Elsevier is responsible enough to admit that its tool has limitations, stating that "it's currently impossible to entirely eliminate inaccurate responses." The company notes that it is continually refining the tool and developing new technology to further reduce the chances of hallucinations and false results for its customers.

The tool is available to Elsevier customers with sign-in credentials, and some researchers may have access through their institutions. Do you think AI can be used by publishers in a "responsible" way to assist researchers? Is there any way to completely remove "hallucinations" from the equation when it comes to generative AI research tools? Let us know in the comments below.

By: Chris Moffitt
Chris is a Managing Editor at Technica Editorial
