
# P(solve) and the zone of proximal development

Given P(L) per micro-skill, we can predict the probability that a student solves a specific problem right now — and pick one in the sweet spot.

## The P(solve) formula


For a task requiring one skill:

P(solve) = P(L) · (1 − P(S)) + (1 − P(L)) · P(G)

“Knows and doesn’t slip” plus “doesn’t know but guesses.”

Numerically (with P(G) = 0.2 and P(S) = 0.1):

| P(L) | P(solve) |
| ---- | -------- |
| 0.0  | 0.20     |
| 0.2  | 0.34     |
| 0.5  | 0.55     |
| 0.7  | 0.69     |
| 0.9  | 0.83     |
| 1.0  | 0.90     |

So P(solve) always sits between P(G) = 0.20 and 1 − P(S) = 0.90 — never 0 (guess chance) and never 1 (slip chance).
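A few lines reproduce the table (the `pSolve` name is mine; slip 0.1 and guess 0.2 are the chapter's values):

```ts
// "Knows and doesn't slip" plus "doesn't know but guesses".
const pSlip = 0.1;
const pGuess = 0.2;

const pSolve = (pL: number): number =>
  pL * (1 - pSlip) + (1 - pL) * pGuess;
```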

Lev Vygotsky, Soviet psychologist of the 1930s, crystallized the guiding idea:

Students grow fastest on tasks just beyond current ability — not what they already ace (no growth), not what’s far away (frustration, dropout).

That’s the Zone of Proximal Development (ZPD).

Quantitatively: “just beyond” in studies often maps to P(solve) ≈ 0.7 — tough enough to stretch, not hopeless.

Task selection algorithm:

  1. For each candidate problem, compute the student’s P(solve).
  2. Pick the task minimizing |P(solve) − 0.7|.
  3. Recommend it.
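A minimal sketch of those three steps, assuming each candidate already carries its predicted P(solve) (the `Task` shape and `recommend` name are mine):

```ts
interface Task {
  id: string;
  pSolve: number; // predicted P(solve) for this student, precomputed
}

// Pick the task whose predicted P(solve) is closest to the 0.7 target.
function recommend(tasks: Task[], target = 0.7): Task {
  return tasks.reduce((best, t) =>
    Math.abs(t.pSolve - target) < Math.abs(best.pSolve - target) ? t : best,
  );
}
```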

In code we use a Gaussian “distance to target”:

closeness(p) = exp(−(p − 0.7)² / 0.03)
```ts
// Closeness to target — Gaussian-ish, peaks at target = 0.7
const target = 0.7;
const closeness = Math.exp(-Math.pow(pSolveJoint - target, 2) / 0.03);
```

The closer P(solve) is to 0.7, the higher the closeness, peaking exactly at 0.7.

Why a Gaussian rather than the raw absolute difference? The absolute difference has a sharp kink at the target and ranks “slightly off” and “way off” on the same linear scale. The Gaussian is:

  • symmetric around 0.7;
  • punishes far-off tasks quickly;
  • smooth — no kinks, so the selector behaves stably.

Denominator 0.03 controls width:

  • 0.03 ⇒ tasks roughly in [0.55, 0.85] still get closeness > 0.5;
  • 0.01 — too tight (“exactly 0.7 ± 0.05”);
  • 0.10 — too loose (“anything between 0.4 and 1.0 feels fine”).

Tune if needed; 0.03 works well in practice.
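The width claims are easy to sanity-check (a quick sketch; `closeness` here takes the denominator as a parameter):

```ts
// Gaussian closeness with a configurable width (the denominator sigma²).
const closeness = (p: number, sigma2: number, target = 0.7): number =>
  Math.exp(-Math.pow(p - target, 2) / sigma2);

// sigma² = 0.03: p = 0.6 still scores well;
// sigma² = 0.01: the same p = 0.6 is punished hard;
// sigma² = 0.10: even p = 0.5 still looks fine.
```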

## What if the ZPD pushes toward a strong skill only


Teacher scenario: Ivan’s arithmetic sits at 0.85, parentheses at 0.40. Which task?

Naïvely chasing P(solve) ≈ 0.7 might favor arithmetic-heavy items (combined with other micro-skills) and never train parentheses.

So the selector adds a rare-skill bonus:

```ts
// Rarity bonus: how many of this task's skills are below 0.4?
const undertrained = task.microskills.filter(
  (s) => (state.mastery[s] ?? params.pInit) < 0.4,
).length;
const rarity = undertrained / task.microskills.length;
const score = closeness + rareBonus * rarity;
```

Translation: if many required skills have P(L) < 0.4, bump the task slightly in the ranking — exploration, so we don’t stall in the comfort zone.
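Putting closeness and the rarity bonus together, a full scoring pass might look like this (the `Task`/`mastery` shapes and the `rareBonus = 0.1` weight are illustrative assumptions, not the exact values used here):

```ts
interface Task {
  id: string;
  microskills: string[];
}

const target = 0.7;
const rareBonus = 0.1; // assumed weight — small, so rarity only nudges ranking

// Score one candidate task for one student.
function scoreTask(
  task: Task,
  pSolveJoint: number,             // predicted joint P(solve) for this student
  mastery: Record<string, number>, // current P(L) per micro-skill
  pInit = 0.1,                     // assumed prior for unseen skills
): number {
  const closeness = Math.exp(-Math.pow(pSolveJoint - target, 2) / 0.03);
  const undertrained = task.microskills.filter(
    (s) => (mastery[s] ?? pInit) < 0.4,
  ).length;
  const rarity = undertrained / task.microskills.length;
  return closeness + rareBonus * rarity;
}
```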

*Interactive demo: drag target and σ² to see closeness = exp(−(p − target)² / σ²) — highest at the peak, dropping fast to the sides. The smaller σ², the narrower the “ZPD window” and the pickier the selector around the target.*

Back to Ivan after six tasks (previous chapter):

  • P(L) = 0.166 for parentheses.
  • Single-skill P(solve) = 0.166 · 0.9 + 0.834 · 0.2 ≈ 0.316.

Too hard — closeness ≈ 0; selector won’t lead with that.
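Plugging that number into the closeness formula confirms it:

```ts
// Single-skill parentheses task for Ivan: P(solve) ≈ 0.317, far below the 0.7 target.
const closeness = Math.exp(-Math.pow(0.317 - 0.7, 2) / 0.03);
// ≈ 0.0075 — effectively zero, so this task ranks near the bottom
```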

Better:

  • mix two skills (parentheses + familiar arithmetic);
  • joint P(solve) lands ~0.55–0.65 — nearer the ZPD;
  • trains the weak spot without total frustration.
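One way to see the effect numerically — using the mean of the per-skill solve probabilities as a stand-in for the joint model (an assumption for illustration; the real combination is defined in the multi-skill chapter):

```ts
// Single-skill P(solve) with the chapter's P(S) = 0.1, P(G) = 0.2.
const pSolveSkill = (pL: number): number => pL * 0.9 + (1 - pL) * 0.2;

const parens = pSolveSkill(0.166);  // ≈ 0.316 — too hard alone
const arith = pSolveSkill(0.85);    // ≈ 0.795 — too easy alone
const joint = (parens + arith) / 2; // ≈ 0.56 — inside the ZPD band
```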

See multi-skill tasks.