Many educators worry about AI's impact on academic integrity, especially as it becomes more difficult to judge whether a person or a computer did the work. Unfortunately, no automated AI-detection tool yet avoids a high false-positive rate, mistakenly flagging human writing as AI-generated.
Instead, multiple sources recommend doing the following:
As early as possible, including in your syllabus, set clear expectations concerning the use of AI tools in your class.
Specify when AI tools are and are not permitted.
When AI is allowed, state what work students are still expected to do themselves.
When allowing AI tools, require students to be transparent about their use, just as if they were citing any other source.
Set your policy for citing AI. APA, MLA, and Chicago all have rules for citing AI-generated material.
Simple prompts make it easy for an AI to regurgitate information; prompts that demand nuance and creativity are much harder for an AI to answer well.
When designing coursework, run your assignments through an AI tool yourself. If it answers easily, revise the assignment and try again.
Some sources recommend scaffolding writing assignments: it is much harder for a student to submit an AI-crafted final assignment if you have earlier samples of their work. Most people have a distinctive style and tone, and it is hard for an AI to replicate that.
These recommendations are merely a starting point. As AI tools evolve, this framework can be a basis for future action.