Q&A — tough questions, canned answers
After the elevator pitch — ready replies so nothing catches you flat-footed.
Technical prompts
“Why BKT instead of DKT (Deep Knowledge Tracing)?”
DKT is an LSTM trained on answer sequences — it needs a large corpus of training examples, and we’re cold-starting without that corpus. It’s also opaque: you can’t justify item picks to teachers.
BKT sacrifices ~5% accuracy on giant datasets but stays interpretable, runs day-one on literature defaults, and supports EM tuning as data arrives.
“Where do P(L₀), P(T), P(S), P(G) come from?”
Literature defaults 0.2 / 0.1 / 0.1 / 0.2 from Corbett & Anderson (1995). They generalize across subjects. On live data we’ll fit via Expectation–Maximization (Baum–Welch for HMMs) — notebook NB-3 EM fitting demos recovery on synthetics.
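To make those four parameters concrete, here is a minimal BKT update sketched in TypeScript with the Corbett & Anderson defaults. This is illustrative only — the parameter names and this `bktUpdate` signature are assumptions, not the project’s actual code:

```typescript
// Minimal BKT update sketch (hypothetical, using literature-default parameters).
interface BktParams {
  pL0: number; // P(L0): prior probability of mastery
  pT: number;  // P(T): transit rate — chance of learning on each opportunity
  pS: number;  // P(S): slip — chance a master answers wrong
  pG: number;  // P(G): guess — chance a non-master answers right
}

const DEFAULTS: BktParams = { pL0: 0.2, pT: 0.1, pS: 0.1, pG: 0.2 };

function bktUpdate(pL: number, correct: boolean, p: BktParams = DEFAULTS): number {
  // Bayes step: posterior mastery given the observed response.
  const evidence = correct
    ? pL * (1 - p.pS) + (1 - pL) * p.pG
    : pL * p.pS + (1 - pL) * (1 - p.pG);
  const posterior = correct
    ? (pL * (1 - p.pS)) / evidence
    : (pL * p.pS) / evidence;
  // Learning step: the student may transition to mastered on this opportunity.
  return posterior + (1 - posterior) * p.pT;
}
```

Starting from the 0.2 prior, one correct answer lifts estimated mastery to ≈0.58 and one wrong answer drops it to ≈0.13 — each step inspectable, which is the interpretability argument above in code form.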
“What about IRT?”
IRT models ability + difficulty — great for testing (SAT/GRE), weak for tutoring because standard IRT treats ability as static and ignores how students learn over time. BKT tracks an explicit mastery state that updates on each response — the right choice for adaptive learning.
Comparison walk-through: NB-4 IRT vs BKT.
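For contrast, the static model can be sketched as a 2PL item-response function (a hypothetical helper, not NB-4’s code): ability θ and item difficulty are fixed inputs, so nothing updates as the student practises — exactly the gap BKT fills.

```typescript
// Hypothetical 2PL IRT item-response function: probability that a student
// of ability `theta` answers an item of given `difficulty` correctly.
// Both inputs are static — there is no per-response learning step.
function irt2pl(theta: number, difficulty: number, discrimination = 1): number {
  return 1 / (1 + Math.exp(-discrimination * (theta - difficulty)));
}
```

At theta equal to the item difficulty the model predicts exactly 0.5; harder items lower the probability, but answering never changes theta.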
“How about spaced repetition / Anki?”
SR optimizes review intervals to fight forgetting — great for memorization, weaker for conceptual math insight. Could combine later — different problem dimension.
“Couldn’t ChatGPT do this?”
ChatGPT hallucinates arithmetic, loses track of student level, and can’t guarantee curated math content. We don’t trust LLMs to author tasks — teachers curate the bank; AI selects problems only.
Also, a TalTech course on student AI use cites a 10 / 80 / 10 split: ~10% genuinely learn and use AI as an accelerator, ~10% don’t study either way, and ~80% turn AI into a crutch and end up understanding less. Handing a class direct ChatGPT access pushes that 80% straight into crutch mode. MATx picks the task for the student from a curated bank — and never lets the AI solve it for them.
“Does handwritten scanning work?”
The demo is hybrid: students mark final answers/steps in structured fields — reliable digit/checkmark OCR. Full handwritten-math CV is separate R&D — not MVP; we disclose that on the roadmap slides.
“Is it complicated?”
Core engine — Bayesian Knowledge Tracing on micro-skills: four-parameter Bayes updates, in use since 1995. Teachers see why recommendations happen. EM refit is planned post-MVP; literature defaults unblock shipping today.
“How large is the task bank?”
MVP count = honest figure from Andri. We didn’t chase volume — micro-skill tagging is the expensive, teacher-led bottleneck, not synthetic generation.
Product prompts
“How are you different from Khan Academy / Opiq?”
Khan/Opiq optimize student journeys. MATx centers teachers — saving ~10 hrs/week on differentiated worksheets and grading while surfacing whole-class mastery maps.
Micro-skills + Estonian descriptive feedback; tasks authored by teachers (Andri), AI selects sequencing only.
“Who’s the customer?”
Estonian schools — math teachers in grades 5–9 juggling 20+ students and 10+ weekly hours on differentiation. ~25% of basic-school grads failing national math — acute pain.
“Bias / fairness / privacy?”
Student records stay with teacher/school (SQLite prototype; on-prem roadmap). Micro-skills map to curriculum facts curated by practicing teachers — objective tagging reduces fairness drift versus opaque scoring models.
“Scale?”
MVP = single class. SQLite + Next.js comfortably handles dozens of schools before Postgres/Redis becomes necessary. Selector math unchanged.
“eKool / Stuudium / Moodle integrations?”
Roadmap — hackathon scope excludes integrations to protect the core narrative.
“Is there a companion tool for practising the computations themselves?”
Yes — Tom Kabel’s MATx. We teach modeling (text → equation), he trains solving (equation → number). Estonian school maths, grades 7–9, same audience.
The map between our 9 micro-skills and his 9 competencies is on the Bridge to MATx page. During the hackathon we ship the bridge from our side; technical integration of the model (dropping our `bktUpdate` into his submit handler in one line) is Phase 3, post-hackathon, by agreement.
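A hedged sketch of what that one-line integration could look like: the event shape, store, and handler names below are hypothetical, and the `bktUpdate` body is a self-contained stand-in using the literature-default parameters, not the real MATx code.

```typescript
// Hypothetical submit-handler integration: fold each response into a
// per-(student, skill) mastery estimate with a single bktUpdate call.
type SubmitEvent = { studentId: string; skillId: string; correct: boolean };

const masteryStore = new Map<string, number>(); // key: `${studentId}:${skillId}`

// Stand-in BKT update with literature-default parameters (slip 0.1, guess 0.2,
// transit 0.1); the real project would import its own implementation.
function bktUpdate(pL: number, correct: boolean): number {
  const pS = 0.1, pG = 0.2, pT = 0.1;
  const posterior = correct
    ? (pL * (1 - pS)) / (pL * (1 - pS) + (1 - pL) * pG)
    : (pL * pS) / (pL * pS + (1 - pL) * (1 - pG));
  return posterior + (1 - posterior) * pT;
}

function onSubmit(ev: SubmitEvent): void {
  const key = `${ev.studentId}:${ev.skillId}`;
  // The "one line": update stored mastery (0.2 prior if unseen) with this response.
  masteryStore.set(key, bktUpdate(masteryStore.get(key) ?? 0.2, ev.correct));
}
```

The handler stays framework-agnostic: whatever submit pipeline already exists only needs to call `onSubmit` with who answered what, correctly or not.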
Defensive responses
“Sounds like another math app.”
Difference: we’re not random drills, not AI-authored items, not UI chrome. A real adaptive model, explainable line by line — different category.
“Teachers won’t adopt.”
Possible — mitigation: the product is built with teachers (Andri); a booked pilot schedule proves practitioner credibility and gives us a pilot channel.
“Too similar to competitor X.”
Surface similarities fade — a transparent probabilistic core + curated Estonian content + teacher-first explanations diverges from “ChatGPT for teens.”
Don’t say
- Don’t invent accuracy stats — MVP honesty wins.
- Don’t promise GA in three months — respect Andri’s teaching load.
- Avoid leaning on “AI / LLM / GPT” in primary pitch — it blurs what you actually ship.
- Don’t demo generic chatbots — off-brand.
If there’s time live
- Class heatmap — strongest visual; reviewers anchor on it immediately.
- Ingest → state → next worksheet loop — 30 seconds.
- Estonian explanation strings — differentiation proof.
See docs/03-demo-plan.md for a 3-minute rehearsal script.