The proliferation and increasing sophistication of AI-assisted language tools and large language models (LLMs), such as ChatGPT and DeepSeek, have opened new avenues for research and technical-writing assistance, but the ethics and best practices for their use continue to evolve. These tools can generate useful information and content but are also prone to errors and inconsistencies. Furthermore, while AI can be very useful for editing toward a clearer paper, particularly for non-native English speakers, it cannot replace the actual research and content creation.
To this end, AI-generated content can be used within SPE publications under the following conditions:
- AI language tools may not be listed as authors. An AI tool cannot sign publishing agreements or transfers of copyright.
- If AI language tools are used in a paper proposal submission, their use should be clearly noted at the end of the abstract, stating how the tool was used (e.g., English clarity or grammar, literature review assistance).
- Any AI-generated content used within a manuscript should be thoroughly vetted, fact-checked, and disclosed with published references by the authors. All authors listed on a manuscript are equally responsible for the quality and authenticity of this content.
- If AI language tools are used within a manuscript, their use should be clearly explained in the methodology or acknowledgments section of the paper, including any use of the tools in writing or proofreading the manuscript. Inclusion of AI-generated content without such an explanation may be grounds for rejection of the work at the discretion of SPE and may result in a code of conduct review.
- The authors of the manuscript or paper proposal will be held responsible for any errors, inconsistencies, incorrect references, plagiarism, or misleading content introduced by the AI tool.