
NB-2 — Parameter sensitivity

Defaults 0.2 / 0.1 / 0.1 / 0.2 for (P(L0), P(T), P(S), P(G)) aren’t magic or curve-fit mythology. This notebook breaks each knob and shows what snaps.

import numpy as np
import matplotlib.pyplot as plt

def bkt_update(pL, observed, p):
    """One BKT step: Bayesian evidence update, then the learning transition."""
    pS, pG, pT = p['pSlip'], p['pGuess'], p['pTransit']
    if observed:
        # Correct answer: mastery without a slip, or a lucky guess
        post = (pL * (1 - pS)) / (pL * (1 - pS) + (1 - pL) * pG)
    else:
        # Incorrect answer: a slip, or genuine not-knowing
        post = (pL * pS) / (pL * pS + (1 - pL) * (1 - pG))
    # Transition: an unlearned skill is acquired with probability pT
    return post + (1 - post) * pT

def trajectory(p, answers, pL0=None):
    """P(L) after each answer, starting from pL0 (default: p['pInit'])."""
    pL = pL0 if pL0 is not None else p['pInit']
    trace = [pL]
    for a in answers:
        pL = bkt_update(pL, a, p)
        trace.append(pL)
    return trace
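
A quick hand check of a single update (my addition; the dict mirrors the defaults defined in the next cell): a correct answer from pL = 0.2 gives the evidence posterior 0.2·0.9 / (0.2·0.9 + 0.8·0.2) = 0.18 / 0.34 ≈ 0.529, and the transition lifts it to 0.529 + 0.471·0.1 ≈ 0.576.

p = {"pInit": 0.2, "pTransit": 0.1, "pSlip": 0.1, "pGuess": 0.2}
# Evidence step ≈ 0.529, then transition step ≈ 0.576
print(f"{bkt_update(0.2, True, p):.3f}")  # 0.576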

Experiment 1 — “trusting guesses” (P(G) = 0.5)


Student answers correctly ten times in a row.

defaults = {"pInit": 0.2, "pTransit": 0.1, "pSlip": 0.1, "pGuess": 0.2}
broken_g = {**defaults, "pGuess": 0.5}
answers = [True] * 10
t_def = trajectory(defaults, answers)
t_brk = trajectory(broken_g, answers)
print(f"После 10 ✓ — defaults: P(L) = {t_def[-1]:.3f}")
print(f"После 10 ✓ — P(G)=0.5: P(L) = {t_brk[-1]:.3f}")
# defaults: 0.881 ← модель «поверила» в обучение
# P(G)=0.5: 0.612 ← модель «угадывание объясняет всё», ученик не учится

At P(G) = 0.5 the model distrusts correct answers: even after ten successes, P(L) fails to reach 0.7. Students loop through drills forever.
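
To see how much extra drilling the broken setting causes, a small counter can run consecutive correct answers until P(L) crosses a mastery bar (steps_to_mastery and the 0.95 threshold are my additions, not part of NB-1):

def steps_to_mastery(p, threshold=0.95, limit=500):
    """Consecutive correct answers until P(L) crosses the threshold."""
    pL, n = p['pInit'], 0
    while pL < threshold and n < limit:
        pL = bkt_update(pL, True, p)
        n += 1
    return n

for name, cfg in [('defaults', defaults), ('P(G)=0.5', broken_g)]:
    print(f'{name}: {steps_to_mastery(cfg)} correct answers to P(L) > 0.95')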

Experiment 2 — “trusting slips” (P(S) = 0.5)


Same student, ten incorrect answers.

broken_s = {**defaults, "pSlip": 0.5}
answers = [False] * 10
t_def = trajectory(defaults, answers)
t_brk = trajectory(broken_s, answers)
print(f"После 10 ✗ — defaults: P(L) = {t_def[-1]:.3f}")
print(f"После 10 ✗ — P(S)=0.5: P(L) = {t_brk[-1]:.3f}")
# defaults: 0.072 ← модель видит «он не знает»
# P(S)=0.5: 0.181 ← модель «слипа объясняет всё», даёт сложные

At P(S) = 0.5 the model distrusts mistakes: even ten failures leave mastery inflated. Weak learners keep receiving tasks that are too hard, which is demoralizing.
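
Why one parameter can mute a whole evidence channel: in bkt_update, a wrong answer multiplies the odds of mastery by pS / (1 − pG) before the learning transition. Comparing the two configurations (my arithmetic, same dicts as above):

# Odds multiplier applied to mastery by a single wrong answer: pS / (1 - pG)
for name, cfg in [('defaults', defaults), ('P(S)=0.5', broken_s)]:
    ratio = cfg['pSlip'] / (1 - cfg['pGuess'])
    print(f'{name}: x{ratio:.3f} per mistake')
# defaults: x0.125 ← one mistake cuts the odds of mastery eightfold
# P(S)=0.5: x0.625 ← mistakes barely dent the posterior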

Experiment 3 — “frozen learner” (P(T) = 0)

no_transit = {**defaults, "pTransit": 0.0}
answers = [True] * 50
t_def = trajectory(defaults, answers)
t_brk = trajectory(no_transit, answers)
print(f"50 ✓ — defaults: P(L) = {t_def[-1]:.3f}")
print(f"50 ✓ — P(T)=0: P(L) = {t_brk[-1]:.3f}")
# defaults: 0.999 ← the learner can “learn” and reaches mastery
# P(T)=0: 0.998 ← even without transit it gets there, through demonstrated evidence...
# but the speed differs: defaults get there faster

Subtlety: P(T) = 0 doesn’t forbid reaching a high P(L), because demonstrated correct answers still lift the probability, but the climb is slower and the early trajectory flattens.

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(t_def, label='defaults (P(T)=0.1)', color='#9333ea', linewidth=2)
ax.plot(t_brk, label='P(T)=0', color='#ef4444', linewidth=2, linestyle='--')
ax.set_xlabel('step'); ax.set_ylabel('P(L)')
ax.legend(); ax.grid(alpha=0.3)
plt.show()

Experiment 4 — “hyper learner” (P(T) = 0.5)

fast_transit = {**defaults, "pTransit": 0.5}
answers = [True] * 5
t_def = trajectory(defaults, answers)
t_brk = trajectory(fast_transit, answers)
print(f"5 ✓ — defaults: P(L) = {t_def[-1]:.3f}")
print(f"5 ✓ — P(T)=0.5: P(L) = {t_brk[-1]:.3f}")
# defaults: 0.628
# P(T)=0.5: 0.981 ← mastered within 3-4 correct answers

P(T) = 0.5 isn’t realistic for school math; it might be acceptable for trivial adult micro-skills.

patterns = {
    'all correct': [True] * 10,
    'all wrong': [False] * 10,
    'alternating': [True, False] * 5,
    'wrong first': [False] * 5 + [True] * 5,
    'correct first': [True] * 5 + [False] * 5,
}
configs = {
    'defaults': defaults,
    'P(G)=0.5': {**defaults, 'pGuess': 0.5},
    'P(S)=0.5': {**defaults, 'pSlip': 0.5},
    'P(T)=0': {**defaults, 'pTransit': 0.0},
    'P(T)=0.5': {**defaults, 'pTransit': 0.5},
}
# Final P(L) for every configuration × answer-pattern pair
mat = np.zeros((len(configs), len(patterns)))
for i, (cn, cp) in enumerate(configs.items()):
    for j, (pn, ans) in enumerate(patterns.items()):
        mat[i, j] = trajectory(cp, ans)[-1]
fig, ax = plt.subplots(figsize=(9, 4))
im = ax.imshow(mat, cmap='RdYlGn', vmin=0, vmax=1, aspect='auto')
ax.set_xticks(range(len(patterns))); ax.set_xticklabels(patterns.keys(), rotation=20)
ax.set_yticks(range(len(configs))); ax.set_yticklabels(configs.keys())
for i in range(mat.shape[0]):
    for j in range(mat.shape[1]):
        ax.text(j, i, f'{mat[i,j]:.2f}', ha='center', va='center', fontsize=9)
plt.colorbar(im, label='final P(L)')
plt.title('Final P(L) after 10 answers: parameters × pattern')
plt.tight_layout(); plt.show()

Same pSolve(pL) widget as chapter 4. Drag the sliders for an intuition check: P(S) + P(G) > 1 makes the line slope downward, so higher mastery would lower the solve probability. Nonsense.

A straight line: P(solve) = P(L)·(1 − P(S)) + (1 − P(L))·P(G), which equals P(G) at P(L) = 0 and 1 − P(S) at P(L) = 1, with slope 1 − P(S) − P(G).
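
A static stand-in for the widget (a sketch; the parameter pairs are my choice, one sane and one degenerate) draws the line and shows the sign flip:

pL = np.linspace(0, 1, 100)
fig, ax = plt.subplots(figsize=(6, 4))
for pS, pG in [(0.1, 0.2), (0.6, 0.5)]:  # sane vs degenerate (P(S)+P(G) > 1)
    ax.plot(pL, pL * (1 - pS) + (1 - pL) * pG,
            label=f'pS={pS}, pG={pG}, slope={1 - pS - pG:+.1f}')
ax.set_xlabel('P(L)'); ax.set_ylabel('P(solve)')
ax.legend(); ax.grid(alpha=0.3)
plt.show()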

Parameter   Too low                                                       Too high
P(L0)       newcomers look weak → tasks too easy                          first tasks too hard → frustration
P(T)        “no learning,” stagnation                                     unrealistically fast mastery
P(S)        every mistake taken at face value → one slip craters P(L)     never trusts mistakes → weak learners forced upward
P(G)        trusts every correct answer → confuses luck with mastery      never trusts correct answers → infinite drill; excuses blind guessing
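
The degenerate regions in this table are easy to reject mechanically. A minimal guard, assuming the same parameter-dict shape as defaults (the function name and thresholds are my choice, not the notebook’s):

def validate_bkt(p):
    """Reject structurally degenerate parameter settings (a sketch)."""
    assert 0.0 < p['pInit'] < 1.0, 'pInit out of range'
    assert 0.0 <= p['pTransit'] < 1.0, 'pTransit out of range'
    # The widget's degeneracy: pS + pG >= 1 makes mastery lower P(solve)
    assert p['pSlip'] + p['pGuess'] < 1.0, 'pSlip + pGuess must stay below 1'

validate_bkt(defaults)  # passes
# validate_bkt({**defaults, 'pSlip': 0.6, 'pGuess': 0.5})  # would raise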

Why the defaults work: Corbett & Anderson (1995) show that EM-fitted parameters cluster around 0.1–0.3 each across domains. The defaults 0.2 / 0.1 / 0.1 / 0.2 sit near the center of that range.

  • NB-3 — EM fitting (soon): recover parameters from real data without guessing.
  • NB-1 — BKT from scratch — baseline implementation powering every experiment.