If the scholarly publishing community has learned nothing else over the last 5 years, it’s that, for better or worse, AI is here to stay. Peer reviewers are using it. Authors are using it. We’ve talked at length about the use of AI in scholarly publishing and the negative connotations it carries, but is there a way to use AI “responsibly”? Elsevier seems to think so.
The publishing giant announced in August of 2023 that it would be launching Scopus AI, a new tool built on its Scopus database and described as “a next-generation tool that combines generative artificial intelligence with Scopus’ trusted content and data to help researchers get deeper insights faster, support collaboration and societal impact of research.” Functions of Scopus AI described by Elsevier at the time included summaries based on abstracts within the Scopus database, “Go Deeper Links” that let researchers easily navigate further into a topic, and natural language queries that allow users to ask questions in a conversational manner, as they would with an AI chatbot. The tool is designed to help scientists (especially early-career researchers) navigate the ever-growing volume of literature in their field and find the relevant sources and references needed to improve their manuscripts before and after submission.
After months of testing with thousands of global users, Elsevier officially launched the tool to all of its users in early 2024. The tool draws “trusted content” from more than 29,000 academic journals and 300,000 books, encompassing more than 1.8 billion citations and 17 million author profiles. In launching the tool, Elsevier placed the greatest emphasis on the word “responsible” in its promotional material, noting that the company has “used AI and machine learning responsibly in our products” for over 10 years.
One of the key claims in the promotional material for the tool is that it includes “legal and technology protections to ensure zero data exchange or use of Elsevier data to train OpenAI’s public model.” Fear of data harvesting and misuse is one of the major negative connotations attached to AI tools. The tool also claims to have source transparency built in, guaranteeing a reference for every answer it returns, which in theory should result in fewer instances of “hallucinations” (plausible-sounding but false results that an AI generates without grounding in real sources).
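To give a sense of how source-grounded answering can limit hallucinations in principle, here is a minimal Python sketch. Elsevier has not published the internals of Scopus AI, so this is purely an illustrative assumption: the records, the naive keyword scoring, and the output format are all made up for the example. The idea it demonstrates is simply that the system may only answer from abstracts it has indexed, attaches a reference to every line it returns, and declines to answer when it finds nothing relevant.

```python
# Illustrative sketch of citation-grounded answering: the system may only quote
# indexed records and must attach a reference to everything it returns.
# The records and scoring below are placeholder assumptions, not Scopus data
# or Elsevier's actual implementation.

ABSTRACTS = [
    {"doi": "10.0000/example-1",
     "title": "Machine learning in peer review",
     "abstract": "We survey machine learning tools used to screen submissions."},
    {"doi": "10.0000/example-2",
     "title": "Generative AI and research integrity",
     "abstract": "Generative AI raises questions about hallucinated citations."},
]

def retrieve(query, records, top_k=2):
    """Rank records by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = []
    for rec in records:
        text = (rec["title"] + " " + rec["abstract"]).lower()
        score = sum(term in text for term in q_terms)
        if score:
            scored.append((score, rec))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [rec for _, rec in scored[:top_k]]

def answer(query, records):
    """Answer only from retrieved sources, citing each one, or decline."""
    hits = retrieve(query, records)
    if not hits:
        return "No indexed sources found; declining to answer without a reference."
    lines = [f'- "{rec["abstract"]}" [{rec["doi"]}]' for rec in hits]
    return "Grounded summary (every line cites its source):\n" + "\n".join(lines)

if __name__ == "__main__":
    print(answer("hallucinated citations in generative AI", ABSTRACTS))
```

The point of this design is that an answer with a citation attached can be checked against the original source, while an answer conjured without one cannot, which is why source transparency is pitched as a hedge against hallucination rather than a cure for it.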
Nonetheless, Elsevier is responsible enough to admit that its tool has limitations, stating that “it’s currently impossible to entirely eliminate inaccurate responses.” The company notes that it is continually refining the tool and developing new technology to further reduce the chances of hallucinations and false results for its customers.
The tool is available to Elsevier customers with sign-in credentials, and some researchers may have access through their research institution. Do you think AI can be used by publishers in a “responsible” way to assist researchers? Is there any way to completely remove “hallucinations” from the equation when it comes to AI-generative research tools? Let us know in the comments below.
By: Chris Moffitt
Chris is a Managing Editor at Technica Editorial