Guidance for Maricopa Faculty Regarding Generative AI Tools and Academic Misconduct

Summary

AI detection tools like GPTZero and Turnitin’s AI detector are not, on their own, effective measures of academic misconduct; all such tools can produce false positive results. Because generative AI policies vary from class to class, faculty are responsible for setting clear expectations in their own classes for how and when students may use generative AI tools. If a faculty member suspects a student has used generative AI in violation of the policies set for their class, they should first collect artifacts supporting their suspicion, then present their concerns to the student directly, and finally focus on next steps for the student. The classroom strategies described at the end of this paper have been shown to encourage student academic integrity.

Generative AI Tools and AI Detectors

In November of 2022, OpenAI released ChatGPT to the public. In February of 2023, Microsoft launched the AI-powered Bing Copilot. The next month, Google released its own generative AI tool, Bard (since renamed Gemini). While generative AI was not a new technology at the time, these developments put generative AI tools in the hands of the general public, and students and faculty began to employ these tools almost immediately. With relatively simple prompting, users could create unique AI-generated content that imitated human cognition and communication. Educators reacted to these changes in a variety of ways. Some in higher education saw the vast potential for the use of these tools to support teaching and learning. Others were wary of the technology, fearing that academic integrity could be compromised. The AI Task Force recognizes faculty members’ legitimate interest in ensuring authentic and original critical thinking and communication. This paper provides faculty with guidance on navigating questions of academic integrity and misconduct within the context of generative AI.

Tools purporting to detect AI-generated content sprang up as suddenly as the AI tools themselves. GPTZero was released in January of 2023. It bills itself as bringing “transparency to humans navigating a world filled with AI content.” Turnitin.com, a plagiarism detection tool available to all Maricopa faculty, added an AI detector to its Canvas software in February of 2023. Other tools, like ZeroGPT, Originality.ai, and Winston AI, have since been released. Detection tools for AI-generated images, like AI or Not, have also been developed. While none of these vendors explains exactly how its proprietary software works, we know that the detectors for written text generally measure at least two different qualities: perplexity and burstiness. Perplexity is a measure of the predictability of word choices. Generative AI tools construct writing one word at a time, choosing the likeliest next word based on complex statistical algorithms. Text with very low perplexity uses the most expected next word in each sentence; text with very high perplexity uses very unexpected (and often nonsensical) word choices. Burstiness, then, is a measure of variation in sentence structure and length. Human writers tend to mix long and short sentences, while AI tends to produce sentences of relatively uniform length and structure. These are the qualities (among others) that make AI-generated text detectable.

Functionally, these detection tools are used either as standalone websites (like GPTZero), as browser extensions, or as tools embedded in Canvas (like Turnitin.com). For standalone websites, users simply copy and paste text into the tool, and the tool produces a report that indicates the likelihood that content was, in whole or in part, AI-generated. For the embedded tools, users have access to the Turnitin tool for individual assignments in Canvas, and student work is automatically analyzed for plagiarism and AI detection upon submission.
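To make these two measures concrete, the toy Python sketch below computes a simple perplexity from per-word probabilities and a simple burstiness from sentence lengths. This is an illustration of the general idea only; commercial detectors use proprietary models, and the function names and inputs here are invented for the example.

```python
import math
import statistics

def perplexity(word_probs):
    """Perplexity from per-word probabilities: the exponential of the
    average negative log-probability. Lower values mean the words were
    more predictable (more 'AI-like' under this simple view)."""
    avg_neg_logprob = -sum(math.log(p) for p in word_probs) / len(word_probs)
    return math.exp(avg_neg_logprob)

def burstiness(sentence_lengths):
    """Burstiness as the standard deviation of sentence lengths.
    Uniform, AI-like prose scores low; varied human prose scores higher."""
    return statistics.stdev(sentence_lengths)

# Predictable word choices (high probabilities) yield low perplexity;
# surprising word choices (low probabilities) yield high perplexity.
predictable = perplexity([0.9, 0.85, 0.9, 0.88])
surprising = perplexity([0.9, 0.05, 0.6, 0.01])
assert predictable < surprising

# Uniform sentence lengths yield low burstiness; varied lengths, high.
assert burstiness([15, 16, 15, 16]) < burstiness([4, 30, 9, 41])
```

Real detectors estimate word probabilities with large language models rather than taking them as given, but the intuition is the same: text that a model finds highly predictable, sentence after sentence, is flagged as likely AI-generated.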

The Effectiveness of AI Detectors

These detection tools are not, on their own, effective measures of academic misconduct. Turnitin’s website on Academic Integrity and AI Writing explains that “the AI writing indicator is best used to inform the educator’s judgment, not to be the sole measure of academic integrity.” GPTZero has a similar disclaimer in the FAQs on its website: “The nature of AI-generated content is changing constantly. As such, these results should not be used to punish students.” Unlike plagiarism detection, which can exactly match student writing to already-existing text, AI detection relies on the statistical likelihood that writing was AI-produced based on criteria like perplexity and burstiness. The internet is awash in strategies for duping AI detection tools, and a savvy student can easily manufacture an AI-generated paper that escapes detection. Perhaps more troublesome is the potential for false positives. While AI detection tools report relatively low false positive rates (Turnitin claims a rate of less than 1%, with a 4% rate at the sentence level), the chance still exists. Further complicating the false-positive scenario is the ubiquity of writing tools like Grammarly, Packback, and Microsoft and Google’s spelling and grammar checkers, which are, after all, AI tools (though most users don’t think of them that way). Furthermore, research has found that false positive rates are even higher for non-native English speakers, leading to potential discrimination. In one study, “over half of the non-native English writing samples were misclassified as AI-generated, while the accuracy for native samples remained near perfect.” Faculty who use these tools uncritically risk falsely (and perhaps discriminatorily) accusing students of academic misconduct, with potentially disastrous consequences for the student. For these reasons, some institutions, like Vanderbilt University, have disabled Turnitin’s AI detector.

The Importance of Setting Clear Expectations for the Use of AI

Some faculty members have encouraged their students to use AI tools throughout the class. Others have forbidden their use at any stage of content development (writing, art, code, etc.). Because of these disparate approaches, it is essential that each faculty member clearly communicate their expectations for the use of generative AI in their class. The MCCCD AI Task Force has already distributed guidance on model syllabus statements that faculty may adopt in whole or in part. Guidance should also be provided for individual assignments. Faculty should also clearly communicate their process for determining whether student work was AI-generated. Turnitin’s AI Ethical Checklist may be a helpful resource for students and faculty alike. Introducing students to generative AI tools in class may help them see how and when these tools can be used appropriately. Some institutions have provided students with free access to Grammarly and other AI tools, implying a blanket endorsement of the use of those tools. Without clearly set expectations, students may be uncertain about when and how these powerful tools may be used. Furthermore, faculty members cannot reasonably discipline a student for violating a policy that has not been clearly communicated, especially when policies vary so widely from one class to the next and from one academic discipline to the next.

Determining if a Student Has Used AI

If a faculty member suspects that a student has inappropriately used AI to create content for the class, the instructor should first gather artifacts documenting that suspicion. An AI detector’s analysis may be one element, but it should not be the sole source of information. Faculty members should also look for other indicators. Because of the process by which AI tools create content, they will sometimes generate misleading or inaccurate claims, known as “hallucinations.” These can include the use of nonexistent sources, gross errors of fact, or the reiteration of outdated information. Faculty members may note the presence of these hallucinations as further evidence of the use of generative AI. Faculty members may also compare the writing style of one assignment to that of previous assignments. Have the student’s voice, tone, or diction changed considerably? AI may be the cause of that shift. There have even been obvious cases in which a student copied and pasted AI-generated content without reviewing it, and the submission included some self-identification by the AI tool. For example, when asked for an opinion, AI tools might respond as follows: “As a large language model, I am not able to form my own opinions. However, I can provide you with information and perspectives from a variety of sources to help you form your own opinion” (“What is your opinion on the effectiveness of communism?” prompt. Google Gemini, 6 November 2023, gemini.google.com). Additionally, students who copy and paste text (from any source) often neglect to apply a consistent font and formatting. All of these clues together may help a faculty member determine whether or not an AI tool has been used to generate content for an assignment.

What to Do if You Suspect Academic Misconduct

Faculty members should begin by having a conversation with the student. Here is where the AI Ethical Checklist is particularly useful, since it helps both the faculty member and the student prepare for that conversation. The faculty member should present the reasons for their concerns along with the artifacts they have gathered. During the conversation, it is helpful to give the student the benefit of the doubt. Remember that AI detection tools are fallible; even Turnitin’s claimed false positive rate of less than 1% means that some flagged submissions are genuine student work. Allow the student time to speak: to explain their process and the reasons for their actions. Perhaps they were confused about the course policies or the faculty member’s expectations. Ask them about the sources they used and the tools they used to develop their work. It is also a good idea to be solutions-focused. Will the student be able to rework the assignment for full or partial credit? Could a creative additional assignment help the student to explore their academic choices? While the student may still receive a zero for the assignment (assuming the course policies clearly communicate expectations), help them see how this affects their larger course grade and their academic progress in their field of study. This can be an important teachable moment for the student in the larger scheme of their academic career. It is important to remember that students have a right to due process in such cases. These processes are explained in the following policy documents: Final Course Grade Appeal (Residential Faculty Agreement 20.7), Instructional Grievance Process (Administrative Regulations Appendix S-6), Administrative Regulation 2.3: Scholastic Standards for Students (Administrative Regulations 2.3.11), and the Student Code of Conduct.

For More Information

If you have questions about these resources, please email the District Artificial Intelligence Task Force.