Integrity in the age of AI: navigating authorship, accountability and ethical publishing
The rapid emergence of generative artificial intelligence (AI) is reshaping the landscape of academic writing and scholarly publishing, offering both opportunities and ethical challenges. This editorial reflects on the growing implications of AI-assisted writing, particularly regarding authorship, accountability, and research integrity. While AI tools can improve clarity and accessibility, their misuse raises concerns about the erosion of critical thinking and the reliability of scientific results.
A key issue highlighted in this discussion is the increasing incidence of false or unverifiable references generated through AI-assisted processes. These practices threaten the credibility of scholarly communication and increase the burden on editors and reviewers. In response, this editorial emphasizes the importance of responsible AI use, transparency in authorship, and rigorous source verification.
Furthermore, the article outlines the journal's commitment to maintaining high standards of quality and ethical publishing. This includes strengthening editorial policies, promoting integrity in manuscript preparation, and encouraging accountability among authors. The editorial also calls on the academic community to adopt responsible practices in the use of emerging technologies.
In line with international publishing standards, this editorial is designed to provide critical perspectives, articulate editorial responsibilities, and offer a forward-looking stance on ethical scientific communication. Ultimately, maintaining trust and credibility in the AI era requires a collective commitment to transparency, rigor, and integrity at every stage of the publication process.
Copyright (c) 2026 Yulingga Nanda Hanief

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.