GENERATIVE AI POLICIES

INTRODUCTION 

This policy is developed with reference to established international guidelines on the use of generative artificial intelligence (AI) in scholarly publishing, including:

  • STM      : Recommendations for classifying AI use in academic manuscript preparation
  • Elsevier : The application of generative AI and AI-assisted technologies within the review process
  • WAME     : Chatbots, generative AI, and scholarly manuscripts

Jurnal Online Informatika (JOIN) recognizes the growing role of artificial intelligence in supporting research and academic writing. The journal acknowledges the opportunities offered by generative AI tools, particularly for developing ideas, accelerating research workflows, assisting data analysis, improving writing quality, organizing submissions, helping authors who are not native English speakers, and expediting research dissemination. In response to the rapid evolution of AI technologies, JOIN provides guidance for authors, editors, and reviewers on the appropriate use of such tools, with the understanding that these guidelines may be updated as the field continues to develop.

Generative AI technologies are developing rapidly, especially in consumer and professional applications. Although these tools can enhance creativity and productivity, they also introduce significant risks that must be carefully managed. Generative AI can produce diverse outputs, including text, images, audio, and synthetic data. Common examples include ChatGPT, Copilot, Gemini, Claude, NovelAI, and Jasper AI.

Key risks associated with current generative AI tools include:

  1. Inaccuracy and bias: As probabilistic systems rather than factual authorities, generative AI tools may generate incorrect information, fabricated content (hallucinations), or biased outputs that are difficult to identify and correct.

  2. Insufficient attribution: Generative AI often does not follow accepted scholarly standards for accurately attributing sources, ideas, quotations, or citations.

  3. Confidentiality and intellectual property concerns: Many generative AI platforms operate through third-party services that may not guarantee adequate data protection, confidentiality, or copyright safeguards.

  4. Unintended secondary use: AI providers may reuse user inputs or outputs (for example, for model training), potentially infringing upon the rights of authors, publishers, or other stakeholders.

 

AUTHORS 

Authors are permitted to use generative AI tools (such as ChatGPT or other GPT-based models) for limited purposes, including improving grammar, language quality, and readability. However, full responsibility for the originality, accuracy, validity, and ethical integrity of the manuscript remains with the authors. Any use of generative AI must comply with JOIN’s editorial policies on authorship and publication ethics, and authors must carefully review and verify all AI-generated content.

JOIN supports the responsible use of generative AI tools, provided that appropriate standards for confidentiality, data security, and copyright protection are upheld. Acceptable uses include:

  • Idea generation and conceptual exploration

  • Language and writing enhancement

  • Interactive literature searching using LLM-based tools

  • Literature categorization

  • Coding support

Authors must ensure that all submitted content meets rigorous scholarly and scientific standards and that the work is fundamentally authored by humans.

Generative AI tools must not be listed as authors, as they cannot assume responsibility for the content, consent to publication, or enter into copyright and licensing agreements. Authorship requires accountability, ethical responsibility, and legal assurances: obligations that only humans can fulfill.

Any use of generative AI tools must be transparently disclosed within the manuscript. Authors are required to provide a statement specifying the tool’s name and version, the manner of its use, and the purpose for which it was employed. This disclosure should appear in the Methods or Acknowledgements section. Such transparency enables editors to assess whether the tool has been used appropriately. JOIN retains the authority to decide on publication to ensure adherence to ethical and editorial standards.

Authors must also ensure that the selected AI tools are appropriate for their intended use and that their terms of service provide sufficient protections related to intellectual property, confidentiality, and data security.

Manuscripts must not rely on generative AI in ways that undermine core scholarly responsibilities. Unacceptable practices include:

  • Submitting AI-generated text or code that has not been reviewed and edited

  • Using synthetic data to substitute for missing empirical data without sound methodological justification

  • Generating inaccurate content, including abstracts or supplementary materials

Such practices may lead to editorial review or investigation.

JOIN currently prohibits the use of generative AI to create or alter images, figures, or original research data intended for publication, including charts, tables, medical images, image fragments, computer code, and formulas. In this context, alteration means any manipulation such as adding, removing, obscuring, or relocating elements within visual materials.

Human oversight and transparency must guide the use of AI-assisted technologies throughout the entire research lifecycle. As ethical standards and technologies evolve, JOIN will continue to revise its editorial policies accordingly.

 

EDITORS AND PEER REVIEWERS 

JOIN upholds strict standards of editorial integrity, confidentiality, and transparency. Entering unpublished manuscripts into generative AI systems may compromise confidentiality, intellectual property rights, and personal data protection. Editors and peer reviewers are therefore strictly prohibited from uploading unpublished manuscripts, images, files, or related information into generative AI tools. Violations of this policy may constitute breaches of intellectual property rights.

Editors

Editors are responsible for safeguarding the confidentiality of submissions and peer review processes. Uploading unpublished manuscripts or associated materials into generative AI platforms poses significant risks to data security, confidentiality, and proprietary rights. Accordingly, such practices are strictly forbidden.

Peer Reviewers

Peer reviewers, as subject-matter experts, must not use generative AI tools to evaluate, summarize, or assess submitted manuscripts or proposals. Unpublished content, including any related files or images, must not be uploaded into AI systems.

Generative AI may only be used to support language improvement in review reports. Nevertheless, peer reviewers remain fully responsible for the accuracy, judgment, and integrity of their evaluations.