An AI Vulnerability Rubric for Your Assessments

Generative artificial intelligence has altered the assessment landscape in higher education. Large language models (LLMs) like ChatGPT, Claude, and Gemini can now generate essays, solve structured problems, summarize readings, and produce discussion posts that appear coherent and academically polished. Meanwhile, AI detection tools are often unreliable, biased, and insufficient as integrity safeguards. Rather than turning to surveillance, the sustainable response is to change how assessments are designed.

AI Vulnerability Rubric

The AI Vulnerability Rubric below provides a structured way to evaluate how susceptible a given assessment is to uncritical AI use (e.g., copying and pasting AI-generated responses wholesale or outsourcing thinking entirely). Rather than attempting to create “AI-proof” assignments, the rubric helps you make intentional design decisions that increase authenticity, require visible thinking, and align tasks more tightly with course learning outcomes. It can be used when designing new assessments, revising existing ones, or conducting program-level reviews.

If most of your criteria fall under AI Resistant or Somewhat AI Resistant, your assessment likely reduces the payoff of uncritical AI substitution. If several areas fall under Somewhat AI Vulnerable or AI Vulnerable, consider redesigning those elements of the assessment.

Criterion 1: Task Specificity and Context

AI Resistant
The task is highly specific, context-rich, and constrained. It requires engagement with particular course materials, recent discussions, datasets, or case details that are not broadly available to LLMs. Generic responses would be visibly inadequate.

Somewhat AI Resistant
The task includes some constraints (e.g., required readings or case studies), but could still be answered generically with minimal adaptation.

Somewhat AI Vulnerable
The task is moderately open-ended and could be completed using widely available examples or standard explanations with minimal tailoring.

AI Vulnerable
The task is broad and generic (e.g., “Discuss the causes of…”). A high-quality response can easily be generated by AI without access to course-specific context.

Criterion 2: Personal and Experiential Connection

AI Resistant
Students must connect the task to personal experience, original data collection, fieldwork, applied practice, and/or individual perspective in ways that cannot be convincingly fabricated at scale.

Somewhat AI Resistant
Personalization is encouraged but not central to success.

Somewhat AI Vulnerable
Personal connection is optional and not clearly evaluated.

AI Vulnerable
The task is entirely abstract and detached from individual experience or context.

Criterion 3: Evidence of Process

AI Resistant
Students must submit staged drafts, outlines, annotations, revision notes, and/or reflection statements explaining their reasoning and decision-making. Evidence of intellectual process is integral to success.

Somewhat AI Resistant
Some process evidence is required (e.g., draft or brief reflection), but it is limited or lightly weighted.

Somewhat AI Vulnerable
A final product is required, with optional or informal process checkpoints.

AI Vulnerable
Only a polished final product is submitted. There is no requirement to demonstrate thinking, iteration, or development.

Criterion 4: Integration of Course-Specific Materials

AI Resistant
The task requires explicit integration of course lectures, class discussions, instructor-provided frameworks, and/or locally generated materials. Students must cite or reference specific course artifacts.

Somewhat AI Resistant
The task references course materials but does not require deep engagement or direct integration.

Somewhat AI Vulnerable
The task can be completed using general disciplinary knowledge without direct reference to course-specific content.

AI Vulnerable
The assignment does not meaningfully require engagement with course-specific materials.

Criterion 5: Demonstrated Ownership of Reasoning

AI Resistant
Students must demonstrate ownership of their work. The assessment structure requires students to explain how they arrived at their conclusions and why they made specific choices.

Somewhat AI Resistant
Students provide some explanation of their thinking, but ownership is only partially demonstrated or lightly weighted in grading.

Somewhat AI Vulnerable
Students submit a final product with minimal explanation of how it was developed. Demonstration of underlying reasoning is limited or optional.

AI Vulnerable
Students submit only the final product with no requirement to explain or demonstrate their thinking.

Criterion 6: Authentic Disciplinary Application

AI Resistant
The task mirrors real disciplinary practice (e.g., policy memo, lab report with real data, design brief, case analysis with stakeholder constraints). It requires judgment, synthesis, justification, and application beyond summary.

Somewhat AI Resistant
The task approximates real-world application but retains elements of artificiality or formulaic response structures.

Somewhat AI Vulnerable
The task emphasizes knowledge reproduction or summary rather than applied problem-solving.

AI Vulnerable
The task primarily measures recall or surface-level explanation.

Criterion 7: AI Use Guidance and Transparency

AI Resistant
The assignment clearly specifies permissible and prohibited AI uses, requires disclosure and documentation of AI assistance, and integrates AI literacy into expectations.

Somewhat AI Resistant
AI expectations are mentioned but not operationalized in the instructions or assessed in grading.

Somewhat AI Vulnerable
AI use is not clearly addressed in assignment instructions.

AI Vulnerable
There is no guidance on AI use, leaving expectations ambiguous.
