Risepoint Faculty Center

An AI vulnerability rubric for your assessments


Generative artificial intelligence has altered the assessment landscape in higher education. Large language models (LLMs) such as ChatGPT, Claude, and Gemini can now generate essays, solve structured problems, summarize readings, and produce discussion posts that appear coherent and academically polished. Meanwhile, AI detection tools are often unreliable, biased, and insufficient as integrity safeguards. The sustainable response is not surveillance but a change in how assessments are designed.

AI vulnerability rubric

The AI vulnerability rubric below provides a structured way to evaluate how susceptible a given assessment is to uncritical AI use (e.g., copying and pasting AI-generated responses wholesale or outsourcing the thinking entirely). Rather than attempting to create “AI-proof” assignments, the rubric helps you make intentional design decisions that increase authenticity, require visible thinking, and align tasks more tightly with course learning outcomes. It can be used when designing new assessments, revising existing ones, or conducting program-level reviews.

| Criterion | AI resistant | Somewhat AI resistant | Somewhat AI vulnerable | AI vulnerable |
| --- | --- | --- | --- | --- |
| Specificity | The task is highly specific, context-rich, and constrained. It requires engagement with particular course materials, recent discussions, datasets, or case details that are not broadly available to LLMs. Generic responses would be visibly inadequate. | The task includes some constraints (e.g., required readings or case studies), but could still be answered generically with minimal adaptation. | The task is moderately open-ended and could be completed using widely available examples or standard explanations with minimal tailoring. | The task is broad and generic (e.g., “Discuss the causes of…”). A high-quality response can easily be generated by AI without access to course-specific context. |
| Personalization | Students must connect the task to personal experience, original data collection, fieldwork, applied practice, and/or individual perspective in ways that cannot be convincingly fabricated at scale. | Personalization is encouraged but not central to success. | Personal connection is optional and not clearly evaluated. | The task is entirely abstract and detached from individual experience or context. |
| Required process documentation | Students must submit staged drafts, outlines, annotations, revision notes, and/or reflection statements explaining their reasoning and decision-making. Evidence of intellectual process is integral to success. | Some process evidence is required (e.g., a draft or brief reflection), but it is limited or lightly weighted. | A final product is required, with optional or informal process checkpoints. | Only a polished final product is submitted. There is no requirement to demonstrate thinking, iteration, or development. |
| Course-specific anchoring | The task requires explicit integration of course lectures, class discussions, instructor-provided frameworks, and/or locally generated materials. Students must cite or reference specific course artifacts. | The task references course materials but does not require deep engagement or direct integration. | The task can be completed using general disciplinary knowledge without direct reference to course-specific content. | The assignment does not meaningfully require engagement with course-specific materials. |
| Ownership | Students must demonstrate ownership of their work. The assessment structure requires students to explain how they arrived at their conclusions and why they made specific choices. | Students provide some explanation of their thinking, but ownership is only partially demonstrated or lightly weighted in grading. | Students submit a final product with minimal explanation of how it was developed. Demonstration of underlying reasoning is limited or optional. | Students submit only the final product with no requirement to explain or demonstrate their thinking. |
| Authentic assessment design | The task mirrors real disciplinary practice (e.g., policy memo, lab report with real data, design brief, case analysis with stakeholder constraints). It requires judgment, synthesis, justification, and application beyond summary. | The task approximates real-world application but retains elements of artificiality or formulaic response structures. | The task emphasizes knowledge reproduction or summary rather than applied problem-solving. | The task primarily measures recall or surface-level explanation. |
| Transparency of AI policy | The assignment clearly specifies permissible and prohibited AI uses, requires disclosure and documentation of AI assistance, and integrates AI literacy into expectations. | AI expectations are mentioned but not operationalized or assessed. | AI use is not clearly addressed in assignment instructions. | There is no guidance on AI use, leaving expectations ambiguous. |

If most of your criteria fall under AI resistant or Somewhat AI resistant, your assessment likely reduces the payoff of uncritical AI substitution. If several criteria fall under Somewhat AI vulnerable or AI vulnerable, consider redesigning those elements of the assessment.
