Artificial Intelligence Content Generation (AICG) Policies

Scientia et PRAXIS acknowledges the increasing use of Artificial Intelligence Content Generation (AICG) tools such as ChatGPT, Perplexity, Elicit, Iris.ai, and other tools based on Large Language Models (LLMs). These tools may assist in the early stages of drafting, summarizing, or structuring information, but their use must adhere to the principles of academic integrity, transparency, and human accountability.

In line with the Committee on Publication Ethics (COPE) recommendations, the journal adopts the following guidelines:

  1. No authorship by AICG tools
    AICG tools cannot be credited as authors. They have no legal status, cannot bear intellectual responsibility, and cannot be held accountable for the content or approve the final version of a manuscript.

  2. Disclosure requirement
    If authors use AICG tools to generate text, summarize sources, or structure arguments, this use must be fully and transparently disclosed in the Methods or Acknowledgments section. Authors must indicate the name of the tool, the purpose for which it was used, and the extent of its contribution.

  3. Human responsibility
    Authors retain full responsibility for the accuracy, originality, and proper citation of any material produced with the help of AICG tools. Authors must verify all AI-generated content and ensure that no fabricated references or plagiarized material are introduced.

  4. Permissible uses
    The use of AICG tools for purely technical purposes (such as grammar checks, stylistic suggestions, or spelling corrections) does not require disclosure, provided no substantive content is generated.

  5. Editorial review
    Editors and reviewers must not rely solely on AICG tools to assess manuscripts. Editorial decisions must be based on critical human evaluation and follow recognized ethical standards.

  6. Editorial oversight
    Reviewer selection, article editing, and publication decisions are the sole responsibility of the human editorial team.