Update formulas — compact
This chapter is a quick reference. The derivation lives in chapter 5; the interactive drill is in chapter 7.
Posterior — Bayes’ rule
After each attempt, first compute the posterior, the updated belief given the observation (correct / incorrect):
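After a correct answer:

$$
p(L \mid \text{correct}) = \frac{p(L)\,(1 - p(S))}{p(L)\,(1 - p(S)) + (1 - p(L))\,p(G)}
$$

After a mistake:

$$
p(L \mid \text{incorrect}) = \frac{p(L)\,p(S)}{p(L)\,p(S) + (1 - p(L))\,(1 - p(G))}
$$

Here p(L) is the current mastery estimate, p(S) the slip probability, and p(G) the guess probability; these correspond to the pL, pSlip, and pGuess parameters in the implementation below.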
Learning step
Then add the chance of learning during this attempt:
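$$
p(L') = p(L \mid \text{obs}) + \bigl(1 - p(L \mid \text{obs})\bigr)\,p(T)
$$

where p(T) is the transit (learning) probability, pTransit in the code below.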
With p(T) > 0, the learner always gains a little, even on mistakes.
P(solve) — for prediction
When choosing the next task we need the probability that the student solves this problem right now:
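$$
P(\text{solve}) = p(L)\,(1 - p(S)) + (1 - p(L))\,p(G)
$$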
“Knows and doesn’t slip” plus “doesn’t know but guesses.”
Implementation
These three formulas are the adaptive engine’s heart. In code they fit in ~15 lines:
```ts
export function pSolve(pL: number, params: BktParams = DEFAULT_BKT): number {
  // P(knows and doesn't slip) + P(doesn't know but guesses)
  return pL * (1 - params.pSlip) + (1 - pL) * params.pGuess;
}

export function bktUpdate(
  pL: number,
  observedCorrect: boolean,
  params: BktParams = DEFAULT_BKT
): number {
  const { pSlip, pGuess, pTransit } = params;

  // Bayes' rule: posterior belief given the observed outcome.
  const posterior = observedCorrect
    ? (pL * (1 - pSlip)) / (pL * (1 - pSlip) + (1 - pL) * pGuess)
    : (pL * pSlip) / (pL * pSlip + (1 - pL) * (1 - pGuess));

  // Learning step: the skill may have been acquired during this attempt.
  return posterior + (1 - posterior) * pTransit;
}
```
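Both functions rely on a small parameter object whose definition isn’t shown in this chapter. Below is a minimal sketch of that shape; the field names come from the code above, but the default values are illustrative placeholders, not the engine’s actual DEFAULT_BKT.

```ts
// Minimal sketch of the parameter shape used by pSolve and bktUpdate above.
export interface BktParams {
  pSlip: number;    // P(S): probability of an error despite mastery
  pGuess: number;   // P(G): probability of a lucky correct answer without mastery
  pTransit: number; // P(T): probability of learning the skill during one attempt
}

// Illustrative defaults only; the real values are tuned elsewhere in the engine.
export const DEFAULT_BKT: BktParams = {
  pSlip: 0.1,
  pGuess: 0.2,
  pTransit: 0.15,
};
```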
Formula cheat sheet

| What | Formula | Role |
|---|---|---|
| Update (correct) | $p(L \mid \text{correct}) = \frac{p(L)(1 - p(S))}{p(L)(1 - p(S)) + (1 - p(L))\,p(G)}$ | “correct answer → more confidence” |
| Update (wrong) | $p(L \mid \text{wrong}) = \frac{p(L)\,p(S)}{p(L)\,p(S) + (1 - p(L))(1 - p(G))}$ | “mistake → less confidence (no panic)” |
| Learning | $p(L') = p(L \mid \text{obs}) + (1 - p(L \mid \text{obs}))\,p(T)$ | “can learn during attempt” |
| Predict | $P(\text{solve}) = p(L)(1 - p(S)) + (1 - p(L))\,p(G)$ | “probability of solving?” |
All four are deterministic. No sampling, no black boxes, no neural nets — eighth-grade arithmetic.
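For a concrete feel for that arithmetic, here is a minimal usage sketch, assuming the functions above are in scope; the starting estimate and parameter values are illustrative, not the engine’s defaults.

```ts
// One simulated attempt with illustrative numbers.
const params: BktParams = { pSlip: 0.1, pGuess: 0.2, pTransit: 0.15 };

let pL = 0.4;                      // current belief: 40% chance the skill is mastered
console.log(pSolve(pL, params));   // 0.4 * 0.9 + 0.6 * 0.2 = 0.48

pL = bktUpdate(pL, true, params);  // a correct answer is observed
// posterior = 0.36 / (0.36 + 0.12) = 0.75; learning step: 0.75 + 0.25 * 0.15 = 0.7875
console.log(pL);                   // 0.7875
```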