Generative AI Chatbot for Engineering Scientific Journal
Abstract
This study explores the potential of artificial intelligence as an effective, user-preferred alternative for answering inquiries, compared with traditional methods such as FAQs, email, and ticketing systems. It highlights how AI can improve the efficiency and accuracy of processing and responding to inquiries from readers, authors, and reviewers by providing immediate, customized answers based on an analysis of the information available on the journal's website and the data fed to the chatbot. Through in-depth discussions and an analysis of the inquiries received over a full six months, totaling about 3000 inquiries, the study demonstrates the chatbot's ability to understand complex inquiries and provide satisfactory answers. The study indicates that chatbots can reduce the workload on the editorial teams of scientific journals by automating responses to routine inquiries, allowing staff to dedicate more time to editorial and academic tasks. A key aspect of training is teaching the chatbot to provide correct answers to diverse inquiries and to avoid responding to negative or redundant ones. The research also examines the challenges of applying AI in this context, including the need to train models to understand specific academic language and ensure accuracy in responses, as well as addressing privacy and data-security concerns. It emphasizes the importance of designing flexible, adaptable AI systems that can meet the diverse requirements of different scientific journals and their users. The study concludes that artificial intelligence is a promising tool for improving the interaction between academic journals and their communities, offering an effective alternative to traditional systems, and it highlights the need for ongoing research and development to enhance AI capabilities. Notably, the AI tool currently lacks a direct method for correcting its wrong answers, a feedback mechanism analogous to the way parents correct their children's answers, which is among the most effective learning tools. A key recommendation of the study is that AI training should be conducted in stages.
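The staged-training and scope-control ideas summarized above can be illustrated with a small retrieval-style sketch. The snippet below is not the study's actual tool; it is a minimal, hypothetical Python example (the faq.csv file, the answer_inquiry function, and the similarity threshold are all assumptions) showing how a chatbot can answer routine journal inquiries from curated question-answer pairs and decline out-of-scope or redundant ones instead of guessing.

```python
# Minimal, hypothetical sketch of an FAQ-style journal chatbot.
# Not the study's implementation: the file name, threshold, and
# function names below are illustrative assumptions.
import csv
from difflib import SequenceMatcher


def load_faq(path: str) -> list[tuple[str, str]]:
    """Load curated (question, answer) pairs, e.g. one training 'stage'."""
    with open(path, newline="", encoding="utf-8") as f:
        return [(row["question"], row["answer"]) for row in csv.DictReader(f)]


def similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1]; production systems would use embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def answer_inquiry(inquiry: str, faq: list[tuple[str, str]],
                   threshold: float = 0.6) -> str:
    """Return the best-matching answer, or decline if nothing is close enough.

    Declining below the threshold mirrors the training goal of not
    responding to negative or redundant inquiries with invented answers.
    """
    best_question, best_answer = max(
        faq, key=lambda pair: similarity(inquiry, pair[0]), default=("", "")
    )
    if best_answer and similarity(inquiry, best_question) >= threshold:
        return best_answer
    return "I am not sure about this one; I will forward it to the editorial team."


if __name__ == "__main__":
    # faq.csv is assumed to contain 'question' and 'answer' columns.
    faq_pairs = load_faq("faq.csv")
    print(answer_inquiry("What is the average review time for a submission?", faq_pairs))
```

Under this sketch, conducting the training "in stages" would simply mean curating and appending additional batches of question-answer pairs to the CSV after reviewing the inquiries the bot could not answer in the previous period.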
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License. This is an open access article under the CC BY license: http://creativecommons.org/licenses/by/4.0/