
Q&A — tough questions, canned answers

After the elevator pitch — ready replies so nothing catches you flat-footed.

“Why BKT instead of DKT (Deep Knowledge Tracing)?”


DKT is an LSTM trained on answer sequences — it needs ~10^5 training examples; we’re cold-starting without that corpus. It’s also opaque: you can’t justify item picks to teachers.

BKT sacrifices ~5% accuracy on giant datasets but stays interpretable, runs day-one on literature defaults, and supports EM tuning as data arrives.

“Where do P(L0), P(T), P(S), P(G) come from?”


Literature defaults from Corbett & Anderson (1995): P(L0) = 0.2, P(T) = 0.1, P(S) = 0.1, P(G) = 0.2. They generalize across subjects. On live data we’ll fit via Expectation–Maximization (Baum–Welch for HMMs) — notebook NB-3 EM fitting demos parameter recovery on synthetic data.
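For concreteness, the per-response update those four parameters drive fits in a few lines. A minimal TypeScript sketch (the stack is Next.js); the function shape is illustrative, not our production code:

```typescript
// Standard BKT posterior update (Corbett & Anderson 1995), using the
// literature defaults quoted above. Illustrative sketch only.
function bktUpdate(pMastery: number, correct: boolean,
                   pSlip = 0.1, pGuess = 0.2, pTransit = 0.1): number {
  // Evidence: P(observation) under mastered vs. not-mastered states.
  const evidence = correct
    ? pMastery * (1 - pSlip) + (1 - pMastery) * pGuess
    : pMastery * pSlip + (1 - pMastery) * (1 - pGuess);
  // Bayes step: posterior probability the skill was already mastered.
  const posterior = (correct ? pMastery * (1 - pSlip) : pMastery * pSlip) / evidence;
  // Learning step: chance the student acquires the skill this opportunity.
  return posterior + (1 - posterior) * pTransit;
}

// Starting from P(L0) = 0.2, correct answers push mastery up quickly.
let p = 0.2;
p = bktUpdate(p, true); // ≈ 0.576
p = bktUpdate(p, true);
```

This is the whole model: four numbers and two Bayes steps, which is exactly why teachers can audit every recommendation.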

“Why not IRT (Item Response Theory)?”

IRT models ability and difficulty — great for testing (SAT/GRE), weak for tutoring, because it ignores how students learn over time. BKT tracks an explicit mastery state that updates with each response — the right choice for adaptive learning.

Comparison walk-through: NB-4 IRT vs BKT.
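The contrast is visible in one formula. A sketch of the 1PL (Rasch) IRT model, with a function name of our own choosing, for illustration:

```typescript
// Rasch / 1PL IRT: P(correct) from a static ability and item difficulty.
// Nothing here changes as the student practises; that is the gap BKT fills
// with a mastery state that moves after every response.
function irt1pl(ability: number, difficulty: number): number {
  return 1 / (1 + Math.exp(-(ability - difficulty)));
}

// Same student, same item, before and after a week of practice:
// IRT predicts the identical probability, because ability is a fixed trait.
const prediction = irt1pl(0.0, 0.5); // ≈ 0.378
```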

“Why not spaced repetition?”

SR optimizes review intervals to fight forgetting — great for memorization, weaker for conceptual math insight. We could combine the two later; it’s a different problem dimension.

“Why not just use ChatGPT?”

ChatGPT hallucinates arithmetic, loses track of student level, and can’t guarantee curated math content. We don’t trust LLMs to author tasks — teachers curate the bank; AI only selects problems.

Also, a TalTech course on student AI use cites a 10 / 80 / 10 split: ~10% genuinely learn and use AI as an accelerator, ~10% don’t study either way, and ~80% turn AI into a crutch and end up understanding less. Handing a class direct ChatGPT access pushes that 80% straight into crutch mode. MATx picks the task for the student from a curated bank — and never lets the AI solve it for them.

“Can it read handwritten work?”

The demo is a hybrid: students mark final answers/steps in structured fields, which reliable digit/checkmark OCR can read. Full handwritten-math CV is separate R&D — not MVP; we disclose that on the roadmap slides.

Core engine — Bayesian Knowledge Tracing on micro-skills: four-parameter Bayes updates, in use since 1995. Teachers see why recommendations happen. EM refit is planned post-MVP; defaults unblock shipping today.

MVP count = honest figure from Andri. We didn’t chase volume — micro-skill tagging is the expensive teacher-led bottleneck, not synthetic generation.

“How are you different from Khan Academy / Opiq?”


Khan/Opiq optimize student journeys. MATx centers teachers — saving ~10 hrs/week on differentiated worksheets + grading while surfacing whole-class mastery maps.

Micro-skills + Estonian descriptive feedback; tasks authored by teachers (Andri), AI selects sequencing only.

Estonian schools — math teachers in grades 5–9 juggling 20+ students and spending 10+ weekly hours on differentiation. ~25% of basic-school grads fail national math — an acute pain point.

Student records stay with teacher/school (SQLite prototype; on-prem roadmap). Micro-skills map to curriculum facts curated by practicing teachers — objective tagging reduces fairness drift versus opaque scoring models.

MVP = single class. SQLite + Next.js comfortably handles dozens of schools before Postgres/Redis becomes necessary. Selector math unchanged.

“eKool / Stuudium / Moodle integrations?”


Roadmap — hackathon scope excludes integrations to protect core narrative.

“Is there a companion tool for practising the computations themselves?”


Yes — Tom Kabel’s MATx. We teach modeling (text → equation), he trains solving (equation → number). Estonian school maths 7–9, same audience.

The map between our 9 microskills and his 9 competencies is on the Bridge to MATx page. During the hackathon we ship the bridge from our side; technical integration of the model (our bktUpdate into his submit-handler in one line) is Phase 3, post-hackathon, by agreement.
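What that one line could look like, sketched under assumptions: the handler shape, field names, and the default P(L0) = 0.2 are illustrative, not the agreed API.

```typescript
// Hypothetical Phase-3 hook: after the existing submit handler grades a
// response, one added line feeds the result into the BKT mastery estimate.
// All names here are illustrative, not Tom's actual code.
function bktUpdate(pMastery: number, correct: boolean,
                   pSlip = 0.1, pGuess = 0.2, pTransit = 0.1): number {
  const evidence = correct
    ? pMastery * (1 - pSlip) + (1 - pMastery) * pGuess
    : pMastery * pSlip + (1 - pMastery) * (1 - pGuess);
  const posterior = (correct ? pMastery * (1 - pSlip) : pMastery * pSlip) / evidence;
  return posterior + (1 - posterior) * pTransit;
}

interface StudentState { mastery: Record<string, number>; }

function handleSubmit(student: StudentState, skillId: string, correct: boolean): void {
  // ...existing grading logic on the MATx side...
  student.mastery[skillId] = bktUpdate(student.mastery[skillId] ?? 0.2, correct); // the one added line
}
```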

Difference: we’re not random drills, not AI-authored items, not UI chrome. Real adaptive model explainable line-by-line — different category.

Possible — mitigation: product built with teachers (Andri), booked schedule proves practitioner credibility + pilot channel.

Surface similarities fade — transparent probabilistic core + curated Estonian content + teacher-first explanations diverges from “ChatGPT for teens.”

Don’ts:

  • Don’t invent accuracy stats — MVP honesty wins.
  • Don’t promise GA in three months — respect Andri’s teaching load.
  • Avoid leaning on “AI / LLM / GPT” in the primary pitch — it blurs what you actually ship.
  • Don’t demo generic chatbots — off-brand.

Demo anchors:

  • Class heatmap — strongest visual; reviewers anchor on it immediately.
  • Ingest → state → next worksheet loop — 30 seconds.
  • Estonian explanation strings — differentiation proof.

See docs/03-demo-plan.md for a 3-minute rehearsal script.