Generative artificial intelligence has altered the assessment landscape in higher education. Large language models (LLMs) such as ChatGPT, Claude, and Gemini can generate essays, solve structured problems, summarize readings, and produce discussion posts that appear coherent and academically polished. AI detection tools, meanwhile, are often unreliable, biased, and insufficient as integrity safeguards. Rather than surveillance, the sustainable response is to redesign assessments themselves.
AI Vulnerability Rubric
The AI Vulnerability Rubric below provides a structured way to evaluate how susceptible a given assessment is to uncritical AI use (e.g., copying and pasting a prompt and submitting the response unchanged, presenting AI output as one's own analysis, or outsourcing the thinking entirely). Rather than attempting to create “AI-proof” assignments, the rubric helps you make intentional design decisions that increase authenticity, require visible thinking, and align tasks more tightly with course learning outcomes. It can be used when designing new assessments, revising existing ones, or conducting program-level reviews.
Rate your assessment against each criterion below. If most criteria fall under AI Resistant or Somewhat AI Resistant, your assessment likely reduces the payoff of uncritical AI substitution. If several fall under Somewhat AI Vulnerable or AI Vulnerable, consider redesigning the assessment to add context, require visible process, or strengthen alignment with course-specific materials.
Criterion 1: Task Specificity and Context
AI Resistant
The task is highly specific, context-rich, and constrained. It requires engagement with particular course materials, recent discussions, datasets, or case details that are not broadly available to LLMs. Generic responses would be visibly inadequate.
Somewhat AI Resistant
The task includes some constraints (e.g., required readings or case studies), but could still be answered generically with minimal adaptation.
Somewhat AI Vulnerable
The task is moderately open-ended and could be completed using widely available examples or standard explanations with minimal tailoring.
AI Vulnerable
The task is broad and generic (e.g., “Discuss the causes of…”). A high-quality response can easily be generated by AI without access to course-specific context.
Criterion 2: Personalization and Lived Experience
AI Resistant
Students must connect the task to personal experience, original data collection, fieldwork, applied practice, and/or individual perspective in ways that cannot be convincingly fabricated at scale.
Somewhat AI Resistant
Personalization is encouraged but not central to success.
Somewhat AI Vulnerable
Personal connection is optional and not clearly evaluated.
AI Vulnerable
The task is entirely abstract and detached from individual experience or context.
Criterion 3: Evidence of Process
AI Resistant
Students must submit staged drafts, outlines, annotations, revision notes, and/or reflection statements explaining their reasoning and decision-making. Evidence of intellectual process is integral to success.
Somewhat AI Resistant
Some process evidence is required (e.g., draft or brief reflection), but it is limited or lightly weighted.
Somewhat AI Vulnerable
A final product is required, with optional or informal process checkpoints.
AI Vulnerable
Only a polished final product is submitted. There is no requirement to demonstrate thinking, iteration, or development.
Criterion 4: Integration of Course-Specific Materials
AI Resistant
The task requires explicit integration of course lectures, class discussions, instructor-provided frameworks, and/or locally generated materials. Students must cite or reference specific course artifacts.
Somewhat AI Resistant
The task references course materials but does not require deep engagement or direct integration.
Somewhat AI Vulnerable
The task can be completed using general disciplinary knowledge without direct reference to course-specific content.
AI Vulnerable
The assignment does not meaningfully require engagement with course-specific materials.
Criterion 5: Ownership and Demonstrated Reasoning
AI Resistant
Students must demonstrate ownership of their work. The assessment structure requires students to explain how they arrived at their conclusions and why they made specific choices.
Somewhat AI Resistant
Students provide some explanation of their thinking, but ownership is only partially demonstrated or lightly weighted in grading.
Somewhat AI Vulnerable
Students submit a final product with minimal explanation of how it was developed. Demonstration of underlying reasoning is limited or optional.
AI Vulnerable
Students submit only the final product with no requirement to explain or demonstrate their thinking.
Criterion 6: Authentic, Real-World Application
AI Resistant
The task mirrors real disciplinary practice (e.g., policy memo, lab report with real data, design brief, case analysis with stakeholder constraints). It requires judgment, synthesis, justification, and application beyond summary.
Somewhat AI Resistant
The task approximates real-world application but retains elements of artificiality or formulaic response structures.
Somewhat AI Vulnerable
The task emphasizes knowledge reproduction or summary rather than applied problem-solving.
AI Vulnerable
The task primarily measures recall or surface-level explanation.
Criterion 7: Clarity of AI Use Expectations
AI Resistant
The assignment clearly specifies permissible and prohibited AI uses, requires disclosure and documentation of AI assistance, and integrates AI literacy into expectations.
Somewhat AI Resistant
AI expectations are mentioned but not operationalized or assessed.
Somewhat AI Vulnerable
AI use is not clearly addressed in assignment instructions.
AI Vulnerable
There is no guidance on AI use, leaving expectations ambiguous.