Artificial Intelligence Policy
The Bhinneka Tunggal Ika Journal acknowledges the increasing role of artificial intelligence (AI) tools in research and scholarly communication. This policy establishes clear ethical guidelines for the responsible use of AI technologies in the preparation, review, and publication of manuscripts.
1. Definition & Scope
This policy applies to all AI-assisted technologies, including but not limited to:
- Large Language Models (e.g., ChatGPT, Gemini, Claude)
- AI-based writing assistants (e.g., Grammarly Pro, Wordtune)
- AI-powered data analysis, coding, or visualization tools
- AI image generators and multimedia content creators
2. Author Responsibilities & Permitted Uses
Allowed with Full Disclosure
Authors may use AI tools for:
- Improving readability, grammar, and language (especially for non-native speakers)
- Generating initial ideas or outlines for literature review sections
- Translating quoted material or supplementary text
- Data cleaning, coding assistance, or statistical analysis
- Formatting references or checking citation consistency
Prohibited Uses
Authors must not:
- List an AI tool as a co-author or contributor
- Use AI to generate entire sections of a manuscript without substantial human intellectual input, analysis, and validation
- Submit AI-generated content as original human thought, analysis, or interpretation
- Use AI to fabricate data, references, or sources
- Use AI to manipulate images or results deceptively
3. Disclosure & Documentation Requirements
If any AI tool was used in the manuscript preparation process, authors must:
- Include a statement in the "Acknowledgments" or "Methods" section (for technical use) detailing:
  - The name and version of the AI tool used
  - The specific purpose of its use (e.g., "ChatGPT-4 was used for initial language polishing and grammar checking")
  - The extent of its use and how the authors validated the output
- Take full responsibility for the accuracy, integrity, and originality of all content, including any AI-assisted output. The authors are ultimately accountable for the entire submitted work.
4. Peer Review & Editorial Process
- Reviewers must not use AI tools to analyze, summarize, or draft reviewer comments on confidential manuscripts, as doing so breaches confidentiality and data privacy obligations.
- Editors will not rely on AI tools to make substantive editorial decisions (e.g., accept/reject); all such decisions require human oversight and judgment.
- All manuscript evaluations must be based on human expert assessment.
5. AI-Generated Images & Multimedia
- AI-generated images, figures, or multimedia content are generally not permitted unless their creation is the explicit subject of the research (e.g., a paper about AI art).
- If used, they must be clearly labeled as AI-generated in the caption, and the generation process must be described in the Methods section.
6. Detection & Violations
- The editorial office reserves the right to use AI-detection tools as part of the screening process if AI misuse is suspected.
- Submissions found to have used AI in a prohibited manner or without proper disclosure will be subject to:
  - Immediate rejection, or retraction if already published.
  - Notification of the authors' institutions in cases of severe misconduct.
  - A ban on future submissions from the authors for a defined period.
7. Policy Rationale
This policy is guided by the core academic principles of:
- Transparency: Clear disclosure of all tools used in knowledge creation.
- Accountability: Authors must own and stand by their intellectual contribution.
- Originality: Scholarship must reflect genuine human intellectual effort and critical analysis.
- Fairness: Ensuring a level playing field for all authors and maintaining the integrity of the peer review process.