NB-2 — Parameter sensitivity
The defaults 0.2 / 0.1 / 0.1 / 0.2 for P(L₀), P(T), P(S), P(G) aren’t magic or curve-fit mythology. This notebook breaks each knob and shows what snaps.
```python
import numpy as np
import matplotlib.pyplot as plt
from itertools import product

def bkt_update(pL, observed, p):
    """One BKT step: Bayesian evidence update, then the learning transition."""
    pS, pG, pT = p['pSlip'], p['pGuess'], p['pTransit']
    if observed:
        # Correct answer: knew it and didn't slip, or guessed.
        post = (pL * (1 - pS)) / (pL * (1 - pS) + (1 - pL) * pG)
    else:
        # Wrong answer: knew it but slipped, or didn't know and didn't guess.
        post = (pL * pS) / (pL * pS + (1 - pL) * (1 - pG))
    # Learning opportunity: transition to "known" with probability pT.
    return post + (1 - post) * pT

def trajectory(p, answers, pL0=None):
    """P(L) trace over a sequence of observed answers (True = correct)."""
    pL = pL0 if pL0 is not None else p['pInit']
    trace = [pL]
    for a in answers:
        pL = bkt_update(pL, a, p)
        trace.append(pL)
    return trace
```

Experiment 1 — “trusting guesses” (P(G) = 0.5)
Student answers correctly ten times in a row.
defaults = {"pInit": 0.2, "pTransit": 0.1, "pSlip": 0.1, "pGuess": 0.2}broken_g = {**defaults, "pGuess": 0.5}
answers = [True] * 10t_def = trajectory(defaults, answers)t_brk = trajectory(broken_g, answers)
print(f"После 10 ✓ — defaults: P(L) = {t_def[-1]:.3f}")print(f"После 10 ✓ — P(G)=0.5: P(L) = {t_brk[-1]:.3f}")# defaults: 0.881 ← модель «поверила» в обучение# P(G)=0.5: 0.612 ← модель «угадывание объясняет всё», ученик не учитсяAt the model distrusts correct answers — even after ten successes fails to reach 0.7. Students loop drills forever.
Experiment 2 — “trusting slips” (P(S) = 0.5)
Same student, ten incorrect answers.
```python
broken_s = {**defaults, "pSlip": 0.5}

answers = [False] * 10
t_def = trajectory(defaults, answers)
t_brk = trajectory(broken_s, answers)
```
print(f"После 10 ✗ — defaults: P(L) = {t_def[-1]:.3f}")print(f"После 10 ✗ — P(S)=0.5: P(L) = {t_brk[-1]:.3f}")# defaults: 0.072 ← модель видит «он не знает»# P(S)=0.5: 0.181 ← модель «слипа объясняет всё», даёт сложныеAt the model distrusts mistakes — even ten failures leave inflated mastery. Weak learners receive tasks that are too hard — demoralizing.
Experiment 3 — “frozen learner” (P(T) = 0)
```python
no_transit = {**defaults, "pTransit": 0.0}

answers = [True] * 50
t_def = trajectory(defaults, answers)
t_brk = trajectory(no_transit, answers)
```
print(f"50 ✓ — defaults: P(L) = {t_def[-1]:.3f}")print(f"50 ✓ — P(T)=0: P(L) = {t_brk[-1]:.3f}")# defaults: 0.999 ← ученик может «учиться», достигает мастерства# P(T)=0: 0.998 ← BTW даже без transit достигает, через демонстрацию...# но: разница в скорости — defaults быстрееSubtlety: doesn’t forbid reaching high — demonstration still lifts probability — but speed suffers and early trajectory flattens.
```python
fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(t_def, label='defaults (P(T)=0.1)', color='#9333ea', linewidth=2)
ax.plot(t_brk, label='P(T)=0', color='#ef4444', linewidth=2, linestyle='--')
ax.set_xlabel('step'); ax.set_ylabel('P(L)')
ax.legend(); ax.grid(alpha=0.3)
plt.show()
```

Experiment 4 — “hyper learner” (P(T) = 0.5)
```python
fast_transit = {**defaults, "pTransit": 0.5}

answers = [True] * 5
t_def = trajectory(defaults, answers)
t_brk = trajectory(fast_transit, answers)
```
print(f"5 ✓ — defaults: P(L) = {t_def[-1]:.3f}")print(f"5 ✓ — P(T)=0.5: P(L) = {t_brk[-1]:.3f}")# defaults: 0.628# P(T)=0.5: 0.981 ← осваивает за 3-4 правильных ответаisn’t realistic for school math — maybe OK for trivial adult micro-skills.
Heatmap — parameters × answer patterns
```python
patterns = {
    'all correct':   [True] * 10,
    'all wrong':     [False] * 10,
    'alternating':   [True, False] * 5,
    'wrong start':   [False] * 5 + [True] * 5,
    'correct start': [True] * 5 + [False] * 5,
}
configs = {
    'defaults': defaults,
    'P(G)=0.5': {**defaults, 'pGuess': 0.5},
    'P(S)=0.5': {**defaults, 'pSlip': 0.5},
    'P(T)=0':   {**defaults, 'pTransit': 0.0},
    'P(T)=0.5': {**defaults, 'pTransit': 0.5},
}
```
```python
# Final P(L) for every config × pattern combination.
mat = np.zeros((len(configs), len(patterns)))
for i, (cn, cp) in enumerate(configs.items()):
    for j, (pn, ans) in enumerate(patterns.items()):
        mat[i, j] = trajectory(cp, ans)[-1]
```
```python
fig, ax = plt.subplots(figsize=(9, 4))
im = ax.imshow(mat, cmap='RdYlGn', vmin=0, vmax=1, aspect='auto')
ax.set_xticks(range(len(patterns))); ax.set_xticklabels(patterns.keys(), rotation=20)
ax.set_yticks(range(len(configs))); ax.set_yticklabels(configs.keys())
for i in range(mat.shape[0]):
    for j in range(mat.shape[1]):
        ax.text(j, i, f'{mat[i,j]:.2f}', ha='center', va='center', fontsize=9)
plt.colorbar(im, label='final P(L)')
plt.title('Final P(L) after 10 answers: parameters × pattern')
plt.tight_layout(); plt.show()
```

Same pSolve(pL) widget as chapter 4. Drag the sliders for an intuition check: P(S) + P(G) > 1 makes the line slope downward, meaning higher mastery lowers the solve probability, which is nonsense.
A straight line: P(solve) = P(L)·(1 − P(S)) + (1 − P(L))·P(G). It equals P(G) at P(L) = 0 and 1 − P(S) at P(L) = 1, with slope 1 − P(S) − P(G).
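A minimal sketch of that line, handy for checking the endpoints numerically (the helper name `p_solve` is mine, not from the chapter):

```python
def p_solve(pL, p):
    """Probability of a correct answer: know it and don't slip, or guess."""
    return pL * (1 - p['pSlip']) + (1 - pL) * p['pGuess']

print(p_solve(0.0, defaults))  # 0.2 == pGuess
print(p_solve(1.0, defaults))  # 0.9 == 1 - pSlip
# slope = 1 - pSlip - pGuess = 0.7; it turns negative exactly when pSlip + pGuess > 1
```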
Takeaways
| Parameter | Too low | Too high |
|---|---|---|
| P(L₀) | newcomers look weak → tasks too easy | first tasks too hard → frustration |
| P(T) | “no learning,” stagnation | unrealistically fast mastery |
| P(S) | trusts every mistake → a single slip craters mastery | never trusts mistakes — weak learners forced upward |
| P(G) | trusts every correct answer — confuses mastery with luck | never trusts correct answers → infinite drill; excuses blind guessing |
Why the defaults work: Corbett & Anderson (1995) report that EM-fitted parameters cluster around 0.1–0.3 across domains. The defaults 0.2 / 0.1 / 0.1 / 0.2 sit near the center of that range.
- NB-3 — EM fitting (soon): recover parameters from real data without guessing.
- NB-1 — BKT from scratch: the baseline implementation powering every experiment.