
Selector simulator

Left panel — P(L) for each of the 9 micro-skills. This is the probability that the student has mastered the corresponding micro-skill (P(mastered)).

Centre — the task the selector currently puts first. It is the top-1 of all 20 tasks, ranked by how close the computed P(solve) is to the target value 0.7 (the zone of proximal development).

Right panel — the top-3 candidates with their justification: each task’s P(solve) (the geometric mean over its micro-skills — see Multi-skill tasks) and its distance from the ZPD target 0.7.
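
The ranking described above can be sketched in a few lines. This is a minimal sketch with hypothetical names (`pCorrect`, `pSolve`, `zpdDistance` are not taken from the repo); the real implementation lives in web/lib/bkt.ts.

```typescript
// Shared demo parameters: P(S) = slip, P(G) = guess.
const P_SLIP = 0.1;
const P_GUESS = 0.2;

// Per-skill probability of a correct answer given the mastery estimate pL:
// the student either knows the skill and does not slip, or guesses.
const pCorrect = (pL: number): number =>
  pL * (1 - P_SLIP) + (1 - pL) * P_GUESS;

// Joint P(solve) for a task: geometric mean over its micro-skills.
function pSolve(skillMastery: number[]): number {
  const product = skillMastery.reduce((acc, pL) => acc * pCorrect(pL), 1);
  return Math.pow(product, 1 / skillMastery.length);
}

// Tasks are ranked by closeness of P(solve) to the ZPD target.
const ZPD_TARGET = 0.7;
const zpdDistance = (p: number): number => Math.abs(p - ZPD_TARGET);
```

With every P(L) at the starting value 0.20, each micro-skill gives pCorrect = 0.2·0.9 + 0.8·0.2 = 0.34, so the geometric mean is also 0.34 and the distance from 0.7 is 0.36 — exactly the numbers shown in the panels.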

The Solved / Wrong buttons apply the BKT update to the P(L) of all the chosen task’s micro-skills (formula from Bayes step by step) and re-compute the recommendations.
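
The update itself is the standard two-step BKT formula (Bayes posterior, then a learning step), as derived on the Bayes step by step page. A minimal sketch, with hypothetical names and the shared demo parameters:

```typescript
// Shared demo parameters: P(T) = transit, P(S) = slip, P(G) = guess.
const P_TRANSIT = 0.1;
const P_SLIP = 0.1;
const P_GUESS = 0.2;

function bktUpdate(pL: number, solved: boolean): number {
  // Step 1 — Bayes: posterior P(mastered) given the observed outcome.
  const posterior = solved
    ? (pL * (1 - P_SLIP)) / (pL * (1 - P_SLIP) + (1 - pL) * P_GUESS)
    : (pL * P_SLIP) / (pL * P_SLIP + (1 - pL) * (1 - P_GUESS));
  // Step 2 — learning: the student may transit to mastery while practising.
  return posterior + (1 - posterior) * P_TRANSIT;
}
```

Pressing Solved applies `bktUpdate(pL, true)` to every micro-skill of the chosen task; Wrong applies `bktUpdate(pL, false)`. From pL = 0.2, one correct answer raises the estimate to about 0.58, while one wrong answer lowers it to about 0.13.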

[Interactive widget: selector simulator. Its initial state:]

Skill mastery, P(L) — all nine micro-skills start at 0.20: additive (+/−), multiplicative (×/÷), and mixed at each level, where L2 adds context and L3 adds 3+ quantities.

Current task (top-1): MD-19 · L3 · P(solve) = 0.34.
“The 2nd number is 2 times greater than the 1st. The 3rd is 6 greater than the 1st. The 4th is 3 less than the 2nd.”
Canonical definition: let the 1st number be x; then II = 2x, III = x + 6, IV = 2x − 3.

Top-3 candidates (each with P(solve) = 0.34 and distance 0.36 from the ZPD target 0.7):
  • MD-19 · L3 — “The 2nd number is 2 times greater than the 1st.”
  • MD-20 · L3 — “Bianca is 5 years older than Alexandra.”
  • MD-01 · L1 — “One number is greater than another by 5.”
Scenarios to try:

  1. Newbie → 5 correct in a row. The selector gradually raises the difficulty — from L1 to L2 — as the base skills’ P(L) grows.
  2. Strong +/−, weak ×/÷. The geometric mean drags down the joint P(solve), so the selector avoids tasks whose core micro-skills include the multiplicative one. The top fills with L1.add / L2.add tasks, as expected.
  3. Confident L1 → wrong on T2.add. A single wrong answer steers the recommendations into the same ZPD pocket: the selector keeps working exactly where the student “leaked”.
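
Scenario 1 can be replayed numerically with the shared parameters. A self-contained sketch (the update formula is the one from Bayes step by step; variable names are illustrative):

```typescript
// Shared demo parameters: transit, slip, guess.
const P_T = 0.1, P_S = 0.1, P_G = 0.2;

// One BKT update after a correct answer: Bayes posterior, then learning step.
const afterCorrect = (pL: number): number => {
  const post = (pL * (1 - P_S)) / (pL * (1 - P_S) + (1 - pL) * P_G);
  return post + (1 - post) * P_T;
};

// Five correct answers in a row, starting from P(L0) = 0.2.
let p = 0.2;
const trajectory = [p];
for (let i = 0; i < 5; i++) trajectory.push((p = afterCorrect(p)));
// P(L) climbs roughly 0.20 → 0.58 → 0.87 → 0.97 → …, so L1 tasks fall
// out of the ZPD window and L2 tasks move to the top of the ranking.
```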

BktSimulator (on the Step by step page) teaches the math of a single micro-skill: one P(L), one chart, and “✓/✗” buttons that update only that one skill. It is an explanation of the algorithm’s internals.

MatrixSelectorSim is the whole system: 9 P(L) at once, 20 tasks, ranking by ZPD, penalty for recently-shown tasks, geometric mean across micro-skills. The same thing web/lib/bkt.ts does in the /api/recommend route — only in the browser, with no server.
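
The recently-shown penalty mentioned above could take many forms; here is one hypothetical sketch (the names, the weight, and the decay are assumptions, not the actual logic of web/lib/bkt.ts), where a lower score means a better candidate:

```typescript
const ZPD_TARGET = 0.7;
const RECENCY_PENALTY = 0.15; // assumed weight for a just-shown task

// stepsSinceShown: null if the task has never been shown.
function candidateScore(pSolve: number, stepsSinceShown: number | null): number {
  const distance = Math.abs(pSolve - ZPD_TARGET);
  // Penalise tasks shown recently; the penalty decays as other tasks intervene.
  const penalty =
    stepsSinceShown === null ? 0 : RECENCY_PENALTY / (1 + stepsSinceShown);
  return distance + penalty; // lower is better
}
```

The point of the penalty is purely practical: without it, a task sitting exactly at the ZPD target would be recommended again and again.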

  • Per-step ingest. If the worksheet were scanned step-by-step, an error on “subtract from both sides” would only update that micro-skill, not the entire “constellation” of the task. Here, only the aggregate outcome (solved or not) is observed.
  • BKT parameters are shared (P(L₀)=0.2, P(T)=0.1, P(S)=0.1, P(G)=0.2). The real engine may vary them per micro-skill.
  • Fatigue. The selector does not know that the current task is the fifth in a row on the same skill.
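
If the engine did vary the parameters per micro-skill, the shared constants would turn into a lookup table. A hypothetical sketch (the values for the overridden skill are illustrative, not from the repo):

```typescript
interface BktParams {
  pL0: number; // initial mastery
  pT: number;  // transit
  pS: number;  // slip
  pG: number;  // guess
}

// The shared defaults used by this demo.
const SHARED: BktParams = { pL0: 0.2, pT: 0.1, pS: 0.1, pG: 0.2 };

// Hypothetical per-skill overrides.
const paramsBySkill: Record<string, BktParams> = {
  "L1.add": SHARED,
  // A multiplicative skill might, say, be guessed correctly less often:
  "L1.mul": { ...SHARED, pG: 0.15 },
};
```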

All of that lives outside the hackathon scope, but everything is visible in the code (web/lib/bkt.ts) and discussed in the Inside the code section.

Where to go next — practising the solving


Companion: MATx. The simulator above answers one question: which modeling task to give the student next. Solving the resulting equation is a different domain (computation), and Tom Kabel has a separate tool for that. The Bridge to MATx page has the exact transition map: “after mastering our T2.add the student is ready for Tom’s vorrandid.lihtsad-vorrandid”.