Teaching AI History with ELIZA: A Hands-On Middle School Lesson
Use ELIZA as a hands-on lab to teach chatbot history and AI literacy. Practical lesson plan, classroom-ready activities, and 2026 policy tips.
Turn fragmented AI lessons into a laser-focused lab: teach how chatbots work using ELIZA
Teachers tell me the same thing in 2026: students are curious about AI, but classroom materials are scattered, policy guidance keeps changing, and it’s hard to show how modern claims map to actual systems. This lesson uses ELIZA — the 1960s therapist-bot — as a hands-on lab to teach chatbot history, computational thinking, and AI literacy. It adapts to low-tech classrooms, aligns with standards, and is built to spark critical thinking about what AI can and can’t do.
The 2026 context: why ELIZA matters now
Late 2025 and early 2026 brought renewed focus on AI literacy in schools, stronger district policies about disclosure and classroom use, and more accessible tools that let students experiment with models safely. While contemporary large language models (LLMs) dominate headlines, ELIZA offers a clear teaching advantage: it’s deliberately simple and transparent. When students chat with it, the gaps between surface intelligence and underlying mechanics become visible.
ELIZA as a teaching tool exposes rule-based pattern matching, limited context tracking, and scripted transformations — ideas that are easy to demonstrate and contrast with probabilistic behaviors in modern LLMs. Recent classroom reports (January 2026) show middle schoolers quickly learned the difference between “sounding human” and actually understanding intent when they compared ELIZA with newer chatbots. That contrast is the heart of AI literacy.
When students chatted with ELIZA in January 2026, they uncovered how AI really works — and doesn’t — and built stronger critical and computational thinking skills.
Learning objectives (aligned to classroom goals)
- Explain the basic mechanism behind ELIZA’s responses: pattern matching and rule-based transformations.
- Compare and contrast rule-based chatbots and statistical LLMs in clear, observable terms.
- Practice computational thinking by designing simple rule patterns and testing edge cases.
- Develop critical thinking about claims made by AI tools: reliability, bias, and transparency.
- Create a simple ELIZA-style chatbot (role-play, block-coding, or short Python script) and document how it fails.
Materials, tech, and prep
Want flexibility? Here are three setups depending on your tech comfort and district policies.
- Low-tech — Role-play ELIZA on paper: printed scripts and student pairs.
- Mid-tech — Block coding in Scratch/Snap or a Google Colab with a minimal Python notebook that simulates ELIZA.
- High-tech — Use a hosted ELIZA emulator or sandboxed LLMs (district-approved) to compare outputs; ensure privacy compliance.
Prep time: 30–60 minutes. If you’ll run the coding version, test the notebook or Scratch project once before class and prepare a short teacher demo transcript.
Lesson plan: 60–90 minute class (or two 45-minute blocks)
Minute-by-minute breakdown
- 0–10 min: Hook & prediction — ask students what makes a chatbot "understand" a user.
- 10–20 min: Demo ELIZA — teacher chats live (project the transcript) and models annotation.
- 20–50 min: Student lab — chat in pairs with ELIZA, log transcripts, and mark patterns.
- 50–70 min: Compare with a modern chatbot (controlled example) and discuss differences.
- 70–90 min: Build & reflect — design a simple ELIZA rule, test it, and complete a reflection sheet.
Step-by-step classroom activity
1. Warm-up (prediction and attitude)
Ask two quick questions and have students turn-and-talk: “If a computer repeats what I say back cleverly, does it understand me?” and “What would convince you a bot really understands?” Collect 3–5 answers and write them visibly.
2. Demonstration: chat with ELIZA
Open an ELIZA emulator or run your script. Type short prompts and show how ELIZA often reflects phrases back, uses pronoun swaps, and asks probing questions.
Example transcript (teacher demo):
Student: I’m stressed about tests.
ELIZA: Why do you say you are stressed about tests?
Student: Because I never have enough time.
ELIZA: Tell me more about having enough time.
After the demo, model annotating the transcript. Highlight where ELIZA matched keywords (stressed, time) and where it used canned follow-ups rather than original reasoning. Point out one failure case where ELIZA misinterprets an idiom.
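If you want to show students the mechanism behind those reflections, here is a minimal sketch of the pronoun-swap step, assuming a naive word-by-word substitution (real ELIZA scripts handled punctuation and phrase capture with more care):

```python
# Minimal pronoun swap: reflect the user's words back at them.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are",
         "you": "i", "your": "my"}

def swap_pronouns(text):
    # Swap each word independently; punctuation handling is deliberately naive.
    return " ".join(SWAPS.get(word, word) for word in text.lower().split())

print(swap_pronouns("I never have enough time"))
```

Because each word is mapped exactly once, "I" becomes "you" without the new "you" being swapped back, which is exactly the kind of detail students can discover by testing.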
3. Student lab: chat, log, and annotate
Students work in pairs. Give each pair one of these options:
- Chat with the hosted ELIZA and log the transcript.
- Role-play: one student acts as ELIZA using a printed pattern sheet; the other plays the user.
- Run the Scratch/Colab ELIZA if available and test inputs.
Ask pairs to:
- Collect 6–8 turns of dialogue.
- Underline the words ELIZA used to decide its response.
- Flag one response that felt "wrong" and explain why.
4. Comparison: ELIZA vs modern chatbots
Project a short transcript from a modern LLM (pre-vetted for safety). Guide students to compare:
- Consistency and context tracking across turns.
- Evidence of understanding versus statistical reply patterns.
- Where each system fails and the consequences of those failures.
5. Build a mini-ELIZA (role-play or code)
Challenge students to design one ELIZA rule set that handles greetings and emotions. Provide this scaffolding:
- Keyword list (e.g., sad, happy, stress, angry)
- Response templates with pronoun swapping (I → you, my → your)
- Fallback question templates when no keyword matches
For coding groups, provide this starter Python (easy to mirror in Scratch):

import random

# Keyword-to-response rules; add more templates to each list.
patterns = {
    'sad': ["I'm sorry you feel sad. Why do you think you feel sad?"],
    'test': ['Why do you say you are worried about tests?'],
}
fallback_questions = ['Please go on.', 'How does that make you feel?']

def swap_pronouns(text):
    # Naive word-by-word swap; matters when templates echo the user's own words.
    swaps = {'i': 'you', 'my': 'your', 'me': 'you'}
    return ' '.join(swaps.get(word, word) for word in text.split())

text = input('You: ').lower()
for keyword in patterns:
    if keyword in text:
        response = swap_pronouns(random.choice(patterns[keyword]))
        print(response)
        break
else:
    # The for-else branch runs only if no keyword matched.
    print(random.choice(fallback_questions))
Encourage students to test edge cases (e.g., mixed keywords, negations, slang) and note failures.
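A quick way to surface those failures is a small probe loop over tricky inputs. This sketch reuses an illustrative rule set like the one above (names and templates are placeholders students replace with their own):

```python
import random

# Illustrative rule set; students substitute their own keywords and templates.
patterns = {
    "sad": ["Why do you think you feel sad?"],
    "test": ["Why do you say you are worried about tests?"],
}
FALLBACKS = ["Tell me more."]

def respond(user_text):
    text = user_text.lower()
    for keyword, templates in patterns.items():
        if keyword in text:
            return keyword, random.choice(templates)
    return None, random.choice(FALLBACKS)

# Edge cases: mixed keywords, negation, and accidental substring matches.
for probe in ["I'm sad about my test", "I'm not sad", "I protested yesterday"]:
    print(probe, "->", respond(probe))
```

Note the last probe: "protested" contains the substring "test", so the bot fires the wrong rule. Failures like this make great annotation targets for the reflection sheet.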
Assessment and evidence of learning
Use a formative rubric with three artifacts:
- Transcript with annotations (shows observation skills)
- Mini-ELIZA rule set or role-play video (shows computational thinking)
- Reflection paragraph answering: Did ELIZA understand you? Why/why not?
Sample rubric criteria:
- Identification of patterns and errors (0–4)
- Quality of rule design and testing (0–4)
- Depth of reflection and critical questions (0–4)
Differentiation and accessibility
For students needing support, use role-play and sentence frames. For advanced learners, add an extension: compare ELIZA’s rule engine with a tiny fine-tuned model’s outputs and measure reliability.
Classroom tech, privacy & policy considerations (2026 update)
By 2026 many districts require explicit disclosure when AI tools are used in classrooms and insist on approved vendor lists. Before using any hosted tool, confirm:
- Student data privacy and whether transcripts are stored.
- District policy on external AI services and necessary parental notifications.
- Model documentation (model cards) and whether the tool has an educator-friendly safety mode.
When online tools aren’t allowed, the role-play and offline notebook options provide the same learning outcomes without exposing student data. Recent 2025–26 trends show more vendors publishing model cards and sandboxed educational APIs — check for these as you plan.
Classroom examples and a short case study
In January 2026, a middle school in a mid‑sized district ran this ELIZA lab as part of a digital citizenship unit. Students first chatted with ELIZA, then wrote essays comparing its responses to a district-approved LLM. Teachers reported three measurable gains:
- Improved ability to identify specific failure modes (e.g., pronoun confusion, lack of long-term context).
- Stronger skeptical questioning — students asked for evidence rather than accepting fluent answers.
- Higher engagement in computational tasks when the activity used block-coding to implement rules.
Common pitfalls and how to avoid them
- Expecting ELIZA to be "smart": frame the exercise explicitly as a historical, rule-based system.
- Uncontrolled comparisons: always pre-screen modern chatbot transcripts for age-appropriateness.
- Insufficient scaffolding for coding: provide starter templates and pair novices with peers or teacher aides.
Actionable teacher checklist
- Decide your tech level (role-play, Scratch, Python, or hosted ELIZA) and prepare materials.
- Pre-test one demo transcript and mark three annotation points you’ll highlight.
- Prepare reflection prompts and rubric; share rubric with students before the activity.
- Confirm district policy and privacy rules if using hosted services; opt for local/offline variations when needed.
- Plan a follow-up: have students design public-facing tips for their peers about evaluating AI claims.
Extension activities (project ideas)
- Create a museum-style timeline poster comparing ELIZA, early chatbots, and modern LLMs with visual markers for context length, memory, and transparency.
- Host an "AI claims court": students bring headlines about AI capabilities and argue whether the claim is supported by evidence.
- Build a tiny hybrid system that pairs a rule-based ELIZA front-end with a constrained LLM for safe fallback — document when each component is used.
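The hybrid project can be sketched as a simple dispatcher. Here `safe_llm_reply` is a hypothetical stand-in for whatever constrained, district-approved model call you use; the rule names are illustrative:

```python
# Rule-based front-end with an LLM fallback; log which component answered.
RULES = {
    "hello": "Hi! What's on your mind?",
    "sad": "Why do you think you feel sad?",
}

def safe_llm_reply(text):
    # Hypothetical placeholder for a constrained, district-approved model call.
    return "Can you say more about that?"

def respond(user_text):
    for keyword, reply in RULES.items():
        if keyword in user_text.lower():
            return ("rules", reply)  # document when the rule engine fires
    return ("llm", safe_llm_reply(user_text))

print(respond("hello there"))
print(respond("what's the weather?"))
```

Returning the component name alongside the reply gives students a built-in log of when each part of the hybrid is used, which is the documentation step the project asks for.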
Key takeaways for educators
- ELIZA is a teaching amplifier: its simplicity makes technical tradeoffs obvious and teachable.
- Comparisons build literacy: juxtaposing ELIZA with a modern chatbot helps students pinpoint how modern systems still fail.
- Hands-on design fosters computational thinking: designing rule sets and testing edge cases is authentic problem solving.
- Policy-aware practice is essential: 2025–26 trends mean teachers must factor privacy and disclosure into any AI activity.
Reflection prompts to close the lesson
- How did ELIZA decide what to say? Give two examples from your transcript.
- Describe one situation where ELIZA’s reply could be misleading or harmful.
- How would you explain the difference between ELIZA and a modern chatbot to a friend?
Final thoughts and a quick challenge
ELIZA is more than a retro curiosity — it’s a classroom tool that surfaces foundational ideas about language, rules, and the limits of automation. In an era (2026) of rapid AI adoption and evolving school policy, teaching students to interrogate AI claims is as important as teaching any core subject.
Ready for a simple classroom experiment? Try this challenge: run the ELIZA activity this week and collect one anonymized transcript that surprised you. Use it in the next staff meeting to open a practical conversation on district AI policy and curriculum planning.
Call to action
If you want a ready-to-print lesson pack (student handouts, rubric, and a starter Scratch/Colab project), sign up on our teacher resources page or email our curriculum team. Try the lesson, share a student transcript, and we’ll send a free rubric template you can adapt to your standards.