
Selector in action

flowchart TD
S["StudentState · P(L) vector"] --> Loop["Each task in pool"]
Loop --> P["Joint P(solve) geometric mean over skills"]
P --> C["Closeness — Gaussian kernel around 0.7"]
P --> R["Rarity share of skills with P(L) below 0.4"]
C --> Score["score = closeness + 0.15 * rarity"]
R --> Score
Score --> Sort[Sort by descending score]
Sort --> Recent[Exclude tasks from last 5 attempts]
Recent --> TopN[Top-N recommendations]

Those are the moving parts. No neural nets, no gradients, no training loop — just weighted ranking.

state = {
  student_id: "S-12",
  mastery: {
    "linear_eq.expand_brackets": 0.166,      // dropped after errors
    "arith.signs": 0.831,                    // strong
    "arith.distributive_law": 0.45,
    "linear_eq.move_to_one_side": 0.62,
    "linear_eq.divide_by_coefficient": 0.78,
    // ...
  },
  history: [/* recent answers */]
}
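For reference, the shape of that object as a TypeScript sketch — the `StudentState` name appears in the selector code below, but the `HistoryEntry` fields beyond `task_id` are assumptions, not confirmed by this excerpt:

```typescript
// Sketch of the student state consumed by the selector.
// Only `task_id` on history entries is confirmed by the text;
// `correct` is an assumed field.
interface HistoryEntry {
  task_id: string;
  correct?: boolean; // assumption
}

interface StudentState {
  student_id: string;
  mastery: Record<string, number>; // skill id -> P(L) in [0, 1]
  history: HistoryEntry[];
}
```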

Task T-147 with skills [expand_brackets, signs]:

  • P(solve₁) = 0.166 · 0.9 + 0.834 · 0.2 ≈ 0.317
  • P(solve₂) = 0.831 · 0.9 + 0.169 · 0.2 ≈ 0.782
  • P_joint = √(0.317 · 0.782) ≈ 0.498

Closeness: closeness = exp(−(0.498 − 0.7)² / 0.03) ≈ 0.260

Rarity (1 of 2 skills below 0.4): rarity = 0.5

Score: score = 0.260 + 0.15 · 0.5 = 0.335
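The arithmetic above can be reproduced with a small sketch. The slip = 0.1, guess = 0.2, and kernel width 0.03 are read off the worked example; the exact decimals come out slightly below the rounded figures above because here nothing is rounded mid-way:

```typescript
// Per-skill success probability under BKT: P(L)·(1−slip) + (1−P(L))·guess.
const pSolve = (pL: number, slip = 0.1, guess = 0.2): number =>
  pL * (1 - slip) + (1 - pL) * guess;

// Joint P(solve): geometric mean over the task's skills.
const pJoint = (ps: number[]): number =>
  Math.pow(ps.reduce((a, b) => a * b, 1), 1 / ps.length);

// Gaussian kernel around the 0.7 sweet spot (width 0.03, as in the example).
const closeness = (p: number, target = 0.7, width = 0.03): number =>
  Math.exp(-((p - target) ** 2) / width);

// Share of the task's skills whose mastery is below 0.4.
const rarity = (pLs: number[], threshold = 0.4): number =>
  pLs.filter((p) => p < threshold).length / pLs.length;

const pLs = [0.166, 0.831]; // expand_brackets, signs
const joint = pJoint(pLs.map((p) => pSolve(p))); // ≈ 0.497
const score = closeness(joint) + 0.15 * rarity(pLs); // ≈ 0.33
```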

The selector scans the pool, scores each task, sorts descending, drops tasks seen in the last five attempts (to avoid repeats), and returns the top 5.

export function recommend(
  state: StudentState,
  pool: Task[],
  topN = 5,
  opts?: SelectorOptions,
  params?: BktParams
): ScoredTask[] {
  const recentIds = new Set(
    state.history.slice(-5).map((h) => h.task_id)
  );
  const scored = pool
    .filter((t) => !recentIds.has(t.id))
    .map((t) => scoreTaskForStudent(state, t, opts, params));
  scored.sort((a, b) => b.score - a.score);
  return scored.slice(0, topN);
}

Suppose Ivan’s P(L) for parentheses is 0.166 and the pool lacks “parentheses + gentle arithmetic” items — everything bundles harder skills. Then what?

The selector still returns the least bad option — but the top pick shows low closeness. That’s a signal:

  • to the teacher: “item bank doesn’t cover real class gaps”;
  • to Andri (content lead): “need easier drills on this skill.”

Teacher UI can surface “weak recommendation” when closeness < 0.3 — part of explainability.
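A minimal sketch of that flag — the 0.3 threshold comes from the text, but the assumption that `ScoredTask` carries a `closeness` field through scoring is mine:

```typescript
// Assumed shape: scoring keeps the closeness component alongside the
// final score so the UI can explain the recommendation.
interface ScoredTask {
  id: string;
  score: number;
  closeness: number; // assumption: exposed by scoreTaskForStudent
}

// Flag a top pick whose closeness falls below the "weak recommendation"
// threshold, so the teacher UI can render a warning.
function isWeakRecommendation(top: ScoredTask, threshold = 0.3): boolean {
  return top.closeness < threshold;
}
```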

When closeness ties, rareSkillBonus favors tasks training weak skills. Why it matters:

  • without bonus the selector may stall in comfort zone;
  • with bonus it explores weaknesses for remediation.

rareSkillBonus = 0.15 balances:

  • too small (0.05): never pushes into weak skills — stagnation;
  • too large (0.5): always brutal difficulty — frustration.

0.15 roughly says: “when closeness ties, prefer tasks mixing one weak skill among two.”
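To see the tie-break in numbers (the closeness value 0.4 here is illustrative, not taken from the worked example):

```typescript
// Two tasks with identical closeness; one mixes a weak skill (rarity 0.5),
// the other trains only comfortable skills (rarity 0).
const rareSkillBonus = 0.15;
const scoreWithWeakSkill = 0.4 + rareSkillBonus * 0.5; // 0.475
const scoreComfortZone = 0.4 + rareSkillBonus * 0.0;   // 0.4
// The 0.075 gap flips the ranking on a closeness tie, yet is small enough
// that a clearly better closeness still dominates the score.
```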

When smoke tests turn up a weird top pick, the cause usually boils down to one of three things:

  1. Mastery vector uninitialized — missing skill defaults to params.pInit = 0.2. Fine for cold start.
  2. Stale history — two-week gap; mastery should decay. Out of MVP scope.
  3. Taxonomy imbalance — Andri tagged skill A on 80% of items, skill B on 5%; selector skews to A. Fix via tagging policy.
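Cause 1 — the missing-skill default — is typically a one-liner. A sketch, assuming only that `BktParams` carries the `pInit` mentioned above; the helper name is hypothetical:

```typescript
// Only pInit is confirmed by the text; other BKT parameters are omitted.
interface BktParams {
  pInit: number; // prior P(L) for a never-practiced skill, e.g. 0.2
}

// Hypothetical helper: read mastery with a cold-start fallback so a skill
// absent from the mastery vector scores as pInit instead of NaN.
function masteryOrDefault(
  mastery: Record<string, number>,
  skill: string,
  params: BktParams
): number {
  return mastery[skill] ?? params.pInit;
}
```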

See explainability — surfacing reasons simplifies debugging and earns teacher trust.