# Selector in action
## Algorithm at a glance

```mermaid
flowchart TD
    S["StudentState · P(L) vector"] --> Loop["Each task in pool"]
    Loop --> P["Joint P(solve): geometric mean over skills"]
    P --> C["Closeness — Gaussian kernel around 0.7"]
    P --> R["Rarity: share of skills with P(L) below 0.4"]
    C --> Score["score = closeness + 0.15 * rarity"]
    R --> Score
    Score --> Sort[Sort by descending score]
    Sort --> Recent[Exclude tasks from last 5 attempts]
    Recent --> TopN[Top-N recommendations]
```

Those are all the moving parts. No neural nets, no gradients, no training loop — just a weighted ranking.
## Step by step

### 1. Student state

```ts
const state = {
  student_id: "S-12",
  mastery: {
    "linear_eq.expand_brackets": 0.166, // dropped after errors
    "arith.signs": 0.831, // strong
    "arith.distributive_law": 0.45,
    "linear_eq.move_to_one_side": 0.62,
    "linear_eq.divide_by_coefficient": 0.78,
    // ...
  },
  history: [/* recent answers */],
};
```

### 2. Score each task’s fit

Task T-147 with skills [expand_brackets, signs]:
- Closeness: joint P(solve) is the geometric mean of the two masteries, p = √(0.166 × 0.831) ≈ 0.37; the Gaussian kernel around the 0.7 sweet spot maps that to closeness (the kernel width is a selector parameter).
- Rarity (1 of 2 skills below 0.4): rarity = 1/2 = 0.5.
- Score: score = closeness + 0.15 × rarity = closeness + 0.075.
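The per-task computation above can be sketched in a few lines. This is a minimal sketch, not the project’s actual implementation: the kernel width (SIGMA = 0.2) and the function names are assumptions; the 0.7 target, 0.4 rarity threshold, and 0.15 bonus come from the text.

```ts
const TARGET = 0.7;          // difficulty sweet spot (from the text)
const SIGMA = 0.2;           // Gaussian kernel width — an assumed value
const RARE_THRESHOLD = 0.4;  // a skill below this counts as "rare"/weak
const RARE_BONUS = 0.15;     // weight of rarity in the final score

function geometricMean(ps: number[]): number {
  return ps.reduce((acc, p) => acc * p, 1) ** (1 / ps.length);
}

function scoreTask(masteries: number[]) {
  const p = geometricMean(masteries); // joint P(solve)
  const closeness = Math.exp(-((p - TARGET) ** 2) / (2 * SIGMA ** 2));
  const rarity =
    masteries.filter((m) => m < RARE_THRESHOLD).length / masteries.length;
  return { p, closeness, rarity, score: closeness + RARE_BONUS * rarity };
}

// T-147: [expand_brackets ≈ 0.166, signs ≈ 0.831]
const t147 = scoreTask([0.166, 0.831]);
// t147.p ≈ 0.371, t147.rarity === 0.5, score = closeness + 0.075
```

With the assumed SIGMA the exact closeness value differs from the real selector’s, but the structure of the calculation is the same.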
### 3. Top-N after sorting

The selector scans the pool, scores each task, sorts descending, drops tasks seen in the last five attempts (to avoid repeats), and returns the top 5.
## Real code

```ts
export function recommend(
  state: StudentState,
  pool: Task[],
  topN = 5,
  opts?: SelectorOptions,
  params?: BktParams
): ScoredTask[] {
  const recentIds = new Set(
    state.history.slice(-5).map((h) => h.task_id)
  );
  const scored = pool
    .filter((t) => !recentIds.has(t.id))
    .map((t) => scoreTaskForStudent(state, t, opts, params));
  scored.sort((a, b) => b.score - a.score);
  return scored.slice(0, topN);
}
```

## When every task is a bad fit
Suppose Ivan’s mastery of parentheses (expand_brackets) is 0.166 and the pool lacks “parentheses + gentle arithmetic” items — everything bundles harder skills. Then what?
The selector still returns the least-bad option — but the top pick shows low closeness. That’s a signal:
- to the teacher: “item bank doesn’t cover real class gaps”;
- to Andri (content lead): “need easier drills on this skill.”
The teacher UI can surface a “weak recommendation” flag when closeness < 0.3 — part of explainability.
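The flag itself is a one-liner over the scored results. A sketch under assumptions: the ScoredTask shape and the annotate helper are hypothetical; only the 0.3 threshold comes from the text.

```ts
// Hypothetical shape for what the selector returns to the UI.
interface ScoredTask {
  id: string;
  score: number;
  closeness: number;
}

const WEAK_CLOSENESS = 0.3; // threshold from the text

// Tag each recommendation so the teacher UI can render a
// "weak recommendation" badge when the fit is poor.
function annotate(recs: ScoredTask[]): (ScoredTask & { weak: boolean })[] {
  return recs.map((t) => ({ ...t, weak: t.closeness < WEAK_CLOSENESS }));
}

const out = annotate([{ id: "T-147", score: 0.33, closeness: 0.26 }]);
// out[0].weak === true — the pool has no well-fitting item for this student
```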
## Tie-breaks and rare-skill bonus

When closeness ties, rareSkillBonus favors tasks that train weak skills. Why it matters:
- without bonus the selector may stall in comfort zone;
- with bonus it explores weaknesses for remediation.
rareSkillBonus = 0.15 balances two failure modes:
- too small (0.05): never pushes into weak skills — stagnation;
- too large (0.5): always brutal difficulty — frustration.
0.15 roughly says: “when closeness ties, prefer tasks mixing one weak skill among two.”
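The tie-break is easy to see with two tasks of equal closeness; the numbers below are illustrative, only rareSkillBonus = 0.15 comes from the text.

```ts
const rareSkillBonus = 0.15; // value from the text

function finalScore(closeness: number, rarity: number): number {
  return closeness + rareSkillBonus * rarity;
}

// Two tasks with identical closeness:
const comfort = finalScore(0.8, 0);   // all skills above 0.4 → rarity 0
const stretch = finalScore(0.8, 0.5); // one of two skills weak → rarity 0.5

// stretch = 0.875 outranks comfort = 0.8, so the weak-skill task wins the tie
```

With the bonus at 0.5 instead, the same stretch task would beat even a comfort task with closeness 0.95 — which is exactly the “always brutal difficulty” failure mode above.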
## When the selector looks wrong

When smoke tests turn up weird top picks, the reasons usually boil down to:
- Mastery vector uninitialized — a missing skill defaults to params.pInit = 0.2. Fine for cold start.
- Stale history — after a two-week gap, mastery should decay. Out of MVP scope.
- Taxonomy imbalance — Andri tagged skill A on 80% of items, skill B on 5%; selector skews to A. Fix via tagging policy.
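The cold-start fallback from the first bullet can be sketched as a nullish-coalescing lookup; the helper name and BktParams shape here are assumed to mirror the selector’s signature, not taken from its source.

```ts
// Assumed to match the params object passed into recommend().
interface BktParams {
  pInit: number;
}

// A skill the student has never been tested on has no entry in the
// mastery map, so it falls back to the BKT prior pInit.
function masteryFor(
  mastery: Record<string, number>,
  skill: string,
  params: BktParams
): number {
  return mastery[skill] ?? params.pInit;
}

const mastery = { "arith.signs": 0.831 };
const p = masteryFor(mastery, "linear_eq.expand_brackets", { pInit: 0.2 });
// p === 0.2 — the unseen skill defaults to the prior
```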
See explainability — surfacing reasons simplifies debugging and earns teacher trust.