Artificial Intelligence Policy

The Journal of Religion, Local Politics, and Law acknowledges the evolving role of artificial intelligence (AI) tools in the research and writing process. This policy establishes clear ethical guidelines for the transparent and accountable use of such technologies by authors, reviewers, and editors.

1. Authorship and AI-Generated Content

  • AI as a Tool, Not an Author: AI tools (e.g., large language models, chatbots, image generators) cannot be listed as an author on a submitted manuscript. Authorship implies intellectual responsibility, accountability, and the ability to defend the work, requirements that AI systems cannot fulfill.

  • Full Transparency Is Mandatory: Authors must explicitly disclose any use of AI-assisted technologies in the preparation of their manuscript. This includes, but is not limited to:

    • Use for idea generation, literature searching, or structuring.

    • Use for writing assistance, paraphrasing, or language polishing.

    • Use for data analysis, coding, or visualization.

    • Generation of images, figures, or other graphical elements.

  • Declaration and Responsibility: A statement identifying the AI tool(s) used (e.g., ChatGPT, Claude, Copilot), the purpose of the use, and its extent must be included in the Acknowledgments or a dedicated "AI Assistance Declaration" section at the end of the manuscript (for example: "The author used ChatGPT to polish the language of the literature review; all output was reviewed, edited, and verified by the author."). Authors are solely and fully responsible for the entire content of their manuscript, including any portions developed with AI assistance. They must ensure the accuracy, integrity, and originality of all content and are accountable for any ethical breaches or plagiarism.

2. Use in the Peer Review Process

  • Confidentiality Obligation: Reviewers are strictly prohibited from using AI tools to analyze, summarize, or generate reports on confidential manuscripts. Uploading any part of a submitted manuscript to an AI platform violates our peer review confidentiality policy and constitutes a serious ethical breach.

  • Permissible Use: Reviewers may use AI tools for personal, non-confidential tasks such as checking their own report for language clarity or grammar, provided no manuscript content is disclosed.

3. Use in Editorial Work

  • Editors may use AI tools for administrative tasks that do not compromise confidentiality (e.g., initial screening for formatting, managing correspondence). Final editorial decisions must be made by human editors based on expert peer review and their own scholarly judgment.

4. Prohibited Uses and Violations

  • Submitting manuscripts in which the central argument, analysis, or scholarly insight is substantially generated by AI without critical human intellectual leadership.

  • Using AI to fabricate data, references, or literature.

  • Failing to disclose the use of AI assistance.

  • Violating the confidentiality of the review process via AI tools.

5. Policy Enforcement

Manuscripts found to violate this policy will be subject to immediate rejection or, if already published, retraction. Undisclosed or otherwise unethical use of AI may lead to further sanctions, including notification of the authors' institutions and a ban on future submissions.


This policy is subject to review as technology and scholarly norms evolve. Its core principle remains: Human intellectual creativity, oversight, and accountability are paramount in scholarly publishing.