Apply judgment to AI use

In this unit you apply the mental model to real educator tasks that involve AI. First, you study a worked example that models how an educator reasons through task fit and safeguards. Next, you complete a guided practice where you classify tasks and check your reasoning against feedback. Finally, you choose one task from your own context and write a short decision log entry. The focus is clarity and accountability, not speed.

Task fit decision log

Use this model to document professional judgment when deciding whether AI support is appropriate.

  • Summarizing student exit tickets: Appropriate with human oversight. What must stay human: interpreting student meaning and deciding next steps. Safeguard: sample original work to verify the summary and remove identifying details.
  • Drafting family communication: Appropriate for AI support. What must stay human: final review for tone, accuracy, and community fit. Safeguard: review and revise before sending.
  • Brainstorming support strategies: Appropriate with human oversight. What must stay human: professional judgment about student context and needs. Safeguard: use as brainstorming only and verify alignment with student data.
  • Assigning final grades: Not appropriate for AI. What must stay human: ethical judgment and accountability for evaluation. Safeguard: keep grading decisions fully human; use tools only for organization.

Decision callouts

  • Decision callout 1: If the task affects evaluation, placement, or grading, human responsibility must remain primary.
  • Decision callout 2: If student meaning matters, sample the original source before using the summary.
  • Decision callout 3: If a message goes to families, review for accuracy, tone, and local context.
  • Decision callout 4: If the tool suggests strategies, treat them as brainstorming and verify fit.

Common pitfall

  • Treating polished language as proof of accuracy: Fluency can hide missing evidence, bias, or misunderstanding.

Scenarios

In each scenario below, follow how the educator reasons through the task, names the category, and explains why the decision works.

Summarize student exit tickets

  • Scenario: A teacher considers using an AI system to summarize student exit tickets from a lesson on persuasive writing.
  • Modeled reasoning: The task is to identify patterns, not to evaluate individual students or assign grades. Exit tickets often include partial thinking that requires interpretation, so the educator verifies accuracy and removes identifying details.
  • Decision: Appropriate with human oversight.
  • Why this decision works: AI can help surface patterns, but the educator remains responsible for meaning and next steps.

Draft a general family communication

  • Scenario: A teacher considers using AI to draft a general message to families about upcoming class projects and important dates.
  • Modeled reasoning: The task is informational and doesn't involve evaluation or sensitive student data. The educator reviews for tone, clarity, and alignment with community expectations.
  • Decision: Appropriate for AI support.
  • Why this decision works: AI can assist with drafting routine communication while human review maintains trust.

Recommend instructional supports for a student

  • Scenario: A teacher considers using AI to suggest strategies for supporting a student who is struggling with reading comprehension.
  • Modeled reasoning: The task involves student-specific needs and professional judgment. AI can suggest general strategies, but educators decide what fits the student and setting.
  • Decision: Appropriate with human oversight.
  • Why this decision works: AI can support brainstorming, but responsibility for student outcomes remains fully human.

Assign final grades

  • Scenario: A teacher considers using AI to determine final grades based on student assignments and assessments.
  • Modeled reasoning: Final grading decisions carry consequences and require transparency and accountability. AI cannot understand growth, effort, or context that shape fair evaluation.
  • Decision: Not appropriate for AI.
  • Why this decision works: Grading requires human responsibility. Tools may support organization, but not the decision.