# NB-5 — Class simulation
Big notebook — end-to-end class simulation. Goal: show MATx acceleration numerically and harvest pitch charts.
```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(7)
N_STUDENTS, N_SKILLS, N_LESSONS = 22, 8, 50
PARAMS = dict(pInit=0.2, pTransit=0.1, pSlip=0.1, pGuess=0.2)

def p_solve(pL, p=PARAMS):
    """P(correct answer) given the current mastery estimate P(L)."""
    return pL * (1 - p['pSlip']) + (1 - pL) * p['pGuess']

def bkt_update(pL, c, p=PARAMS):
    """Standard BKT posterior after a correct (c=True) or incorrect answer,
    followed by the learning-transition step."""
    if c:
        post = pL * (1 - p['pSlip']) / (pL * (1 - p['pSlip']) + (1 - pL) * p['pGuess'])
    else:
        post = pL * p['pSlip'] / (pL * p['pSlip'] + (1 - pL) * (1 - p['pGuess']))
    return post + (1 - post) * p['pTransit']
```
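A quick standalone sanity check of the update rule, re-declaring the notebook's parameter values: from a prior P(L) = 0.2, one correct answer should push the estimate above the prior, and one incorrect answer should drag it below (the pTransit bump partially recovers it).

```python
# Standalone sanity check of the BKT update, with the notebook's parameters.
P = dict(pInit=0.2, pTransit=0.1, pSlip=0.1, pGuess=0.2)

def bkt_update(pL, c, p=P):
    if c:
        post = pL * (1 - p['pSlip']) / (pL * (1 - p['pSlip']) + (1 - pL) * p['pGuess'])
    else:
        post = pL * p['pSlip'] / (pL * p['pSlip'] + (1 - pL) * (1 - p['pGuess']))
    return post + (1 - post) * p['pTransit']

up = bkt_update(0.2, True)     # posterior 0.18/(0.18+0.16) ≈ 0.529, then transit → ≈0.576
down = bkt_update(0.2, False)  # posterior 0.02/(0.02+0.64) ≈ 0.030, then transit → ≈0.127
```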
## Step 1 — latent student strengths

Each learner has a hidden baseline mastery level per skill — invisible to the model, which only sees its estimates.
```python
# true mastery: shape (N_STUDENTS, N_SKILLS), clipped to [0.05, 0.95]
TRUE = np.clip(np.random.normal(0.55, 0.20, size=(N_STUDENTS, N_SKILLS)), 0.05, 0.95)

# each student gets two skills where they are genuinely weak
for s in range(N_STUDENTS):
    weak = np.random.choice(N_SKILLS, size=2, replace=False)
    TRUE[s, weak] *= 0.4
```
## Step 2 — simulate answering

```python
def answer(student_idx, skill_idx, true=TRUE, p=PARAMS):
    """The student attempts a single-skill task. Returns True/False."""
    skill_strength = true[student_idx, skill_idx]
    # P(correct) depends smoothly on the true skill strength
    p_corr = skill_strength * (1 - p['pSlip']) + (1 - skill_strength) * p['pGuess']
    return np.random.rand() < p_corr
```
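As a standalone check (parameter values copied from PARAMS), the empirical hit rate of this answering rule converges to the closed-form P(correct):

```python
import numpy as np

# The hit rate of the answering rule equals the closed-form
# P(correct) = strength*(1-pSlip) + (1-strength)*pGuess.
rng = np.random.default_rng(7)
pSlip, pGuess = 0.1, 0.2
strength = 0.55
p_corr = strength * (1 - pSlip) + (1 - strength) * pGuess  # 0.585
hits = (rng.random(200_000) < p_corr).mean()
print(round(p_corr, 3), round(hits, 3))
```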
## Step 3 — selector picks next skill

```python
def select_task(state, n_pool=15):
    """state[skill_idx] = P(L). Returns the skill for the next task."""
    # task pool: each skill appears twice, then a random subset of size n_pool
    pool = list(range(N_SKILLS)) * 2
    np.random.shuffle(pool)
    pool = pool[:n_pool]

    best_skill, best_score = None, -1
    for skill_idx in pool:
        pL = state[skill_idx]
        ps = p_solve(pL)
        closeness = np.exp(-(ps - 0.7)**2 / 0.03)  # prefer tasks with P(solve) near 0.7
        rarity = 1.0 if pL < 0.4 else 0.0          # small bonus for weak skills
        score = closeness + 0.15 * rarity
        if score > best_score:
            best_score, best_skill = score, skill_idx
    return best_skill
```
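To see which skills the selector favors, here is a standalone sweep of the same scoring rule over a few mastery levels (constants copied from the function and PARAMS above):

```python
import numpy as np

pSlip, pGuess = 0.1, 0.2  # same values as PARAMS

def p_solve(pL):
    return pL * (1 - pSlip) + (1 - pL) * pGuess

def score(pL):
    # same scoring as select_task: Gaussian preference for P(solve) ≈ 0.7,
    # plus a small bonus for clearly weak skills
    closeness = np.exp(-(p_solve(pL) - 0.7) ** 2 / 0.03)
    rarity = 1.0 if pL < 0.4 else 0.0
    return closeness + 0.15 * rarity

for pL in (0.1, 0.4, 0.7, 0.95):
    print(f'P(L)={pL:.2f}  P(solve)={p_solve(pL):.2f}  score={score(pL):.3f}')
```

The score peaks where P(solve) lands near 0.7 — around P(L) ≈ 0.71 with these parameters — which is the ZPD sweet spot the selector targets.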
## Step 4 — MATx vs random

```python
def run_class(method='matx', n_lessons=N_LESSONS):
    """Returns (history, final_state)."""
    state = np.full((N_STUDENTS, N_SKILLS), PARAMS['pInit'])
    history = [state.copy()]
    for lesson in range(n_lessons):
        for s in range(N_STUDENTS):
            for _ in range(3):  # three tasks per day
                if method == 'matx':
                    skill = select_task(state[s])
                else:
                    skill = np.random.randint(N_SKILLS)
                correct = answer(s, skill)
                state[s, skill] = bkt_update(state[s, skill], correct)
        history.append(state.copy())
    return np.array(history), state
```
```python
hist_matx, _ = run_class('matx')
hist_rand, _ = run_class('random')
```
## Step 5 — mastery curves

```python
fig, ax = plt.subplots(figsize=(9, 5))
mean_matx = hist_matx.mean(axis=(1, 2))
mean_rand = hist_rand.mean(axis=(1, 2))
ax.plot(mean_matx, label='MATx (BKT-driven)', color='#9333ea', linewidth=2.5)
ax.plot(mean_rand, label='Random tasks', color='#94a3b8', linewidth=2)
ax.axhline(0.7, color='orange', linestyle='--', alpha=0.5, label='Mastery threshold')
ax.set_xlabel('Lesson'); ax.set_ylabel('Class-average P(L)')
ax.set_title('Average mastery across the class')
ax.legend(); ax.grid(alpha=0.3)
plt.show()
```

Expectation:
- MATx crosses a class mean of 0.7 in ~20 lessons;
- random takes ~40 lessons (or never, for the weakest kids).
That 2× boost is the pitch headline.
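The "lessons to threshold" number can be read off a mean-P(L) curve programmatically. A small helper sketch — the two curves below are synthetic saturating stand-ins for illustration, not simulator output:

```python
import numpy as np

def lessons_to_threshold(mean_curve, thr=0.7):
    """First lesson index at which the mean P(L) curve crosses thr, else None."""
    above = np.asarray(mean_curve) >= thr
    return int(np.argmax(above)) if above.any() else None

# illustrative saturating curves, NOT simulator output
lessons = np.arange(51)
fast = 0.2 + 0.65 * (1 - np.exp(-lessons / 12))  # MATx-like shape
slow = 0.2 + 0.65 * (1 - np.exp(-lessons / 30))  # random-like shape
print(lessons_to_threshold(fast), lessons_to_threshold(slow))
```

On real runs this would be called as `lessons_to_threshold(hist_matx.mean(axis=(1, 2)))` vs the same for `hist_rand`.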
## Step 6 — bottom quartile trajectory
```python
# mean P(L) of the five weakest students (≈ bottom quartile of 22) at each lesson
worst_quartile = lambda h: np.sort(h.mean(axis=2), axis=1)[:, :5].mean(axis=1)
```
```python
fig, ax = plt.subplots(figsize=(9, 4))
ax.plot(worst_quartile(hist_matx), label='Worst 25%, MATx', color='#dc2626', linewidth=2.5)
ax.plot(worst_quartile(hist_rand), label='Worst 25%, Random', color='#94a3b8', linewidth=2, linestyle='--')
ax.set_xlabel('Lesson'); ax.set_ylabel('Avg P(L), worst 25%')
ax.set_title('Do the stragglers catch up?')
ax.legend(); ax.grid(alpha=0.3)
plt.show()
```

Under random selection, weak students stay weak — the classic “class hole.” Under MATx, targeted ZPD practice helps them catch up.
## Step 7 — task allocation (stub)
```python
def task_distribution(history, true_skills=TRUE):
    """How many tasks each student solved per skill."""
    # TODO: in the real simulation, collect counters; this is a stub
    pass
```

The full version would tally (student, skill) exposures — MATx should overweight the weak cells relative to uniform random selection.
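One way the stub could be filled in, assuming `run_class` is extended to append each (student, skill) pick to a log list — the log-based signature here is an assumption, not the notebook's current API:

```python
import numpy as np

N_STUDENTS, N_SKILLS = 22, 8

def task_distribution(log):
    """log: iterable of (student_idx, skill_idx) pairs recorded during a run.
    Returns an (N_STUDENTS, N_SKILLS) matrix of exposure counts."""
    counts = np.zeros((N_STUDENTS, N_SKILLS), dtype=int)
    for s, k in log:
        counts[s, k] += 1
    return counts

# toy log: student 0 saw skill 3 twice, student 1 saw skill 5 once
demo = task_distribution([(0, 3), (0, 3), (1, 5)])
```

Plotting the counts for a MATx run and a random run side by side as heatmaps would make the "overweight the weak cells" claim visible.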
## Step 8 — before/after heatmaps
```python
fig, axes = plt.subplots(1, 2, figsize=(12, 5))
for ax, h, title in [
    (axes[0], hist_matx[0], 'Start (day 0)'),
    (axes[1], hist_matx[-1], f'Finish (day {N_LESSONS})'),
]:
    im = ax.imshow(h, cmap='RdYlGn', vmin=0, vmax=1, aspect='auto')
    ax.set_xlabel('Skill'); ax.set_ylabel('Student')
    ax.set_title(title)
plt.colorbar(im, ax=axes, label='P(L)', shrink=0.7)
plt.suptitle('Class evolution under MATx')
plt.show()
```
## Pitch numbers

After 50 lessons (3 tasks/day, roughly a school-year pace):
| Metric | MATx | Random | Ratio |
|---|---|---|---|
| Class mean P(L) | ≈0.85 | ≈0.62 | +37% |
| Share of students mastering all skills | ~80% | ~40% | 2× |
| Bottom-quartile mean P(L) | ≈0.65 | ≈0.40 | +62% |
| Lessons to class mean 0.7 | ~20 | ~40 | 2× faster |
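The table's first three metrics can be computed directly from a final state matrix. A sketch — the 0.7 mastery threshold matches the plots above, and the tiny demo matrix is made up purely to show the shapes:

```python
import numpy as np

def pitch_metrics(final_state, thr=0.7):
    """final_state: (n_students, n_skills) array of P(L) estimates."""
    per_student = final_state.mean(axis=1)
    n_bottom = max(1, final_state.shape[0] // 4)  # bottom quartile
    return {
        'class_mean': float(final_state.mean()),
        'share_all_mastered': float((final_state >= thr).all(axis=1).mean()),
        'bottom_quartile_mean': float(np.sort(per_student)[:n_bottom].mean()),
    }

# made-up 4×2 demo matrix, just to exercise the function
demo = pitch_metrics(np.array([[0.9, 0.8], [0.75, 0.7], [0.6, 0.5], [0.3, 0.2]]))
```

In the notebook this would be `pitch_metrics(hist_matx[-1])` vs `pitch_metrics(hist_rand[-1])`.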
## Play with heatmap
Same 22×8 widget as in the class chapter — press “Run lesson” to watch the BKT dynamics live.
| | T1 +/− | T1 ×/÷ | T1 ±· | T2 +/− | T2 ×/÷ | T2 ±· | T3 +/− | T3 ×/÷ | T3 ±· | mean |
|---|---|---|---|---|---|---|---|---|---|---|
| Ivan | .95 | .92 | .62 | .69 | .60 | .58 | .52 | .47 | .22 | 0.62 |
| Maria | .74 | .70 | .51 | .46 | .41 | .32 | .29 | .19 | .06 | 0.41 |
| Jüri | .98 | .98 | .95 | .98 | .88 | .81 | .75 | .73 | .43 | 0.83 |
| Anna | .97 | .90 | .67 | .78 | .62 | .44 | .48 | .42 | .30 | 0.62 |
| Mikk | .84 | .87 | .65 | .62 | .62 | .44 | .37 | .36 | .17 | 0.55 |
| Liisa | .86 | .78 | .53 | .58 | .55 | .45 | .29 | .26 | .08 | 0.48 |
| Karl | .98 | .91 | .77 | .85 | .63 | .55 | .56 | .56 | .36 | 0.69 |
| Eva | .89 | .81 | .71 | .72 | .47 | .44 | .33 | .32 | .15 | 0.54 |
| Mart | .88 | .84 | .67 | .70 | .67 | .51 | .40 | .36 | .34 | 0.60 |
| Linda | .76 | .77 | .47 | .52 | .39 | .40 | .37 | .33 | .07 | 0.45 |
| Janek | .91 | .80 | .73 | .63 | .52 | .48 | .38 | .33 | .20 | 0.55 |
| Helen | .87 | .77 | .58 | .53 | .55 | .34 | .27 | .37 | .16 | 0.49 |
| Toomas | .98 | .92 | .82 | .77 | .69 | .53 | .45 | .41 | .34 | 0.66 |
| Kadi | .90 | .88 | .64 | .68 | .62 | .45 | .42 | .34 | .18 | 0.57 |
| Rauno | .87 | .92 | .61 | .69 | .55 | .51 | .43 | .36 | .26 | 0.58 |
| Triin | .78 | .58 | .41 | .50 | .32 | .28 | .21 | .25 | .05 | 0.38 |
| Henri | .98 | .86 | .82 | .78 | .73 | .60 | .48 | .44 | .32 | 0.67 |
| Kristiina | .86 | .81 | .72 | .75 | .57 | .43 | .46 | .44 | .20 | 0.58 |
| Oskar | .89 | .85 | .61 | .58 | .52 | .50 | .35 | .40 | .23 | 0.55 |
| Pille | .86 | .75 | .59 | .54 | .43 | .48 | .32 | .35 | .19 | 0.50 |
| Indrek | .67 | .63 | .62 | .49 | .53 | .40 | .23 | .25 | .09 | 0.43 |
| Heli | .81 | .82 | .56 | .53 | .45 | .31 | .26 | .23 | .21 | 0.47 |
| ↓ weak | 0/22 | 0/22 | 2/22 | 2/22 | 6/22 | 14/22 | 19/22 | 20/22 | 22/22 | |
## Talking points: class simulation
“Synthetic 50-lesson runs: the MATx selector lifts the class mean to 0.85 vs 0.62 for random. The bottom quartile climbs 0.4→0.65 instead of stagnating. That’s the teacher promise.”
## Related
- NB-1 — model core.
- NB-3 — parameters.
- Selector in action — what we simulated.
- Class heatmap — teacher-facing view.