AI Policy

Policy on the Use of Artificial Intelligence (AI) and Automated Tools in Academic Publishing

Aligned with the guidelines of the Committee on Publication Ethics (COPE)

1. Introduction and Scope
Revista Inclusiones and its publisher recognize that artificial intelligence (AI) tools, including Large Language Models (LLMs) and other generative technologies, are transforming research and academic publishing processes. While these tools can offer legitimate opportunities to optimize certain aspects of scholarly work, they do not replace the critical thinking, intellectual creativity, or ethical responsibility that characterize scientific production.

This policy establishes the rules governing the use of generative and automated AI tools by authors, reviewers, and editors in all editorial processes of Revista Inclusiones. It is based on the Committee on Publication Ethics (COPE) Position Statement on Authorship and AI Tools (2023) and on the principles of transparency and integrity that guide our editorial work.

For the purposes of this policy, "generative AI tools" are understood as technologies that create new content (text, images, code, data) from training data, such as ChatGPT, Gemini, Claude, Copilot, DALL-E, Midjourney, and similar tools. They are distinguished from conventional assistive tools (integrated spell checkers, reference managers such as Zotero or Mendeley), which do not require declaration.

2. Policy for Authors

2.1. Fundamental Principle: Authorship is Exclusively Human
In accordance with COPE’s position, AI tools cannot be credited as authors or co-authors of any manuscript submitted to Revista Inclusiones. AI tools cannot take responsibility for the submitted work, are not legal entities, and therefore cannot declare conflicts of interest, grant consent, or manage copyright and licensing agreements.
Furthermore, generative AI cannot be cited as a source in bibliographic references, as it does not produce verifiable original knowledge nor can it guarantee the accuracy or reliability of its content.

2.2. Permitted Uses Without Declaration
The following uses of automated tools are considered assistive and do not require declaration:
a) Basic spelling and grammar correction (built-in checkers in word processors).
b) Management and organization of bibliographic references using specialized software (Zotero, Mendeley, EndNote).
c) Minor adjustments to formatting and citation style.

2.3. Permitted Uses with Mandatory Declaration
Authors must declare the use of generative AI when it is employed for purposes that go beyond simple linguistic correction, editing, and formatting. These uses include, among others:
d) Substantial improvement of writing, restructuring of paragraphs, or reformulation of arguments.
e) Translation of texts or sections of the manuscript.
f) Generation or editing of images, graphics, or visual elements.
g) Assistance in writing or generating code for data analysis.
h) Support in reviewing or compiling bibliographic sources.
i) Any other use that generates new content or substantially modifies existing manuscript content.

2.4. Non-Permitted Uses
Generative AI tools must not be used to:
j) Interpret data or formulate scientific conclusions. The interpretation of results and the formulation of findings are the exclusive intellectual responsibilities of human authors.
k) Generate complete articles, or manuscripts substantially drafted by AI, without significant human oversight.
l) Fabricate data, results, or bibliographic references.
m) Produce content that introduces bias, false information, or erroneous interpretations into the research.

2.5. Author Responsibility
Authors are fully responsible for the entire content of their manuscript, including any sections produced or modified with the assistance of AI tools. This responsibility covers the verification of the accuracy, originality, and relevance of all content, and extends to any breach of ethical publishing standards. Authors must ensure that AI-assisted content does not contain:
- Hallucinated or non-existent references.
- Incorrect scientific claims.
- Biased interpretations.
- Plagiarized or unattributed material.

2.6. Declaration of AI Use
When submitting a manuscript, authors must include an AI Use Declaration located immediately before the References/Bibliography section. This declaration must specify:
- The name and version of the AI tool used.
- The specific purpose for which it was employed.
- The sections of the manuscript where it was used.
- Confirmation that the author reviewed and verified all AI-generated or assisted content.
If no generative AI was used, authors must include the following statement: "The authors declare that no generative artificial intelligence tools were used in the preparation of this manuscript."

Declaration example:
"The author(s) used ChatGPT (OpenAI, GPT-4 version, January 2025) to improve the clarity and grammar of the Introduction and Theoretical Framework sections. All content was reviewed and verified by the author(s), who assume full responsibility for the accuracy of the manuscript."

3. Policy for Reviewers (Peer Reviewers)
Peer review is a fundamental pillar of academic integrity and is based on confidentiality, disciplinary competence, and human judgment. Consequently, Revista Inclusiones establishes the following guidelines:

Prohibition of Generative Use
Reviewers must not use generative AI tools to draft their evaluation reports. AI-generated evaluations present significant risks, including: violation of manuscript confidentiality, superficial or generic feedback, introduction of bias, false information (including non-existent references), and vulnerability to hidden instructions embedded in manuscripts (prompt injection).

Protection of Confidentiality
It is strictly forbidden to input the content of manuscripts under evaluation into public AI tools (such as ChatGPT, Gemini, or other cloud-based services), as this may compromise the confidentiality of the review process and expose unpublished information to third-party platforms.

Limited Assistive Use
The use of AI tools is permitted exclusively for editing the reviewer's own evaluation text (e.g., correcting grammar or style, or light rewriting), provided that no confidential information from the manuscript is shared and such use is declared to the editor.

Detection of AI Use in Manuscripts
If a reviewer suspects the undeclared use of generative AI in a manuscript under review, they must report their concerns to the editor, who will proceed according to established procedures.

4. Policy for Editors and Editorial Team
Human Oversight
All editorial decisions (acceptance, rejection, request for revisions) are made exclusively by qualified human editors. AI tools do not participate in editorial decision-making regarding manuscripts.

Editorial Use of Automated Tools
The editorial team may use automated tools for support tasks that do not involve editorial decisions, such as similarity detection via Crossref's Similarity Check (powered by iThenticate), formatting verification, metadata processing, and identification of potential integrity issues. Results from these tools will always be verified by an editor or a member of the editorial team before any action is taken.

Prohibitions for Editors
Editors must not use generative AI tools to draft editorial decision letters, summaries of unpublished research, or manuscript evaluations.

5. Consequences of Non-Compliance
Failure to declare the use of AI or the misuse of generative AI tools may be considered a violation of ethical publishing standards. Revista Inclusiones reserves the right to:
n) Reject the manuscript at any stage of the editorial process.
o) Request the inclusion or correction of the AI use declaration before proceeding with the evaluation.
p) Proceed with the retraction of the article if non-compliance is detected after publication, following COPE guidelines.
q) Notify the authors' affiliated institutions in serious cases.
r) Refrain from inviting reviewers who inappropriately use generative AI to draft their review reports.

All cases will be investigated in accordance with COPE guidelines regarding misconduct in academic publishing.

6. Commitment to Transparency and Author Support
Revista Inclusiones is committed to the education and guidance of its authors. During the initial suitability assessment, our editorial team will provide clear guidance on the requirements of this policy and assist authors in the correct formulation of their AI use declaration when necessary.

We recognize that the use of AI is not, in itself, grounds for rejection. A manuscript will not be rejected solely because it declares the use of AI tools, provided that such use is transparent, responsible, and compliant with this policy. The final decision will always rest on the academic merit of the work, as assessed through peer review and by the Editorial Committee.

7. Review and Update of this Policy
This policy will be reviewed periodically and updated as necessary to reflect technological advances, new guidelines from COPE and other relevant organizations, as well as the evolution of best practices in academic publishing.

Regulatory References
Committee on Publication Ethics (COPE). "COPE Position Statement: Authorship and AI Tools" (2023). https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

Committee on Publication Ethics (COPE). "Guidance on AI in Peer Review" (2024). https://publicationethics.org/news/cope-publishes-guidance-on-ai-in-peer-review