
Update formulas — compact

This chapter is a quick reference: the derivation lives in chapter 5, the interactive drill in chapter 7.

After each attempt, first compute the posterior: the updated belief given the observation (correct or incorrect):

$$P(L \mid \text{correct}) = \frac{P(L) \cdot (1 - P(S))}{P(L) \cdot (1 - P(S)) + (1 - P(L)) \cdot P(G)}$$

$$P(L \mid \text{wrong}) = \frac{P(L) \cdot P(S)}{P(L) \cdot P(S) + (1 - P(L)) \cdot (1 - P(G))}$$

Then add the chance of learning during this attempt:

$$P(L_{\text{new}}) = P_{\text{posterior}} + (1 - P_{\text{posterior}}) \cdot P(T)$$

With $P(T) = 0.1$ the learner always gains a little, even on mistakes.
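A hand-worked update makes the two steps concrete. The parameter values below are illustrative assumptions ($P(S)=0.1$, $P(G)=0.2$, $P(T)=0.1$), not fixed defaults; one correct answer moves $P(L)$ from 0.3 to roughly 0.69:

```ts
// Hand-worked BKT update for a single correct answer.
// Parameter values are assumptions for illustration.
const pL = 0.3;       // prior belief that the skill is learned
const pSlip = 0.1;    // P(S): knows it but slips
const pGuess = 0.2;   // P(G): doesn't know it but guesses
const pTransit = 0.1; // P(T): learns during the attempt

// Bayes step: posterior after observing a correct answer
const posterior =
  (pL * (1 - pSlip)) / (pL * (1 - pSlip) + (1 - pL) * pGuess);
// = 0.27 / (0.27 + 0.14) ≈ 0.6585

// Learning step: add the chance of learning during this attempt
const pNew = posterior + (1 - posterior) * pTransit; // ≈ 0.6927

console.log(posterior.toFixed(4), pNew.toFixed(4)); // "0.6585 0.6927"
```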

When choosing the next task we need the probability the student solves this problem now:

$$P(\text{solve}) = P(L) \cdot (1 - P(S)) + (1 - P(L)) \cdot P(G)$$

“Knows and doesn’t slip” plus “doesn’t know but guesses.”
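Plugging in the same assumed parameters ($P(S)=0.1$, $P(G)=0.2$): a student at $P(L)=0.3$ solves with probability $0.3 \cdot 0.9 + 0.7 \cdot 0.2 = 0.41$.

```ts
// Quick check of the predict formula; parameter values are assumptions.
const pKnown = 0.3;            // current belief the skill is learned
const slip = 0.1, guess = 0.2; // P(S), P(G)
const probSolve = pKnown * (1 - slip) + (1 - pKnown) * guess;
console.log(probSolve.toFixed(2)); // "0.41"
```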

These three formulas are the adaptive engine’s heart. In code they fit in ~15 lines:

```ts
export function pSolve(pL: number, params: BktParams = DEFAULT_BKT): number {
  // "knows and doesn't slip" + "doesn't know but guesses"
  return pL * (1 - params.pSlip) + (1 - pL) * params.pGuess;
}

export function bktUpdate(
  pL: number,
  observedCorrect: boolean,
  params: BktParams = DEFAULT_BKT
): number {
  const { pSlip, pGuess, pTransit } = params;
  // Bayes step: posterior belief given the observed answer
  const posterior = observedCorrect
    ? (pL * (1 - pSlip)) / (pL * (1 - pSlip) + (1 - pL) * pGuess)
    : (pL * pSlip) / (pL * pSlip + (1 - pL) * (1 - pGuess));
  // Learning step: chance of learning during this attempt
  return posterior + (1 - posterior) * pTransit;
}
```
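The `BktParams` shape and `DEFAULT_BKT` values are defined elsewhere; here is a self-contained usage sketch with assumed definitions (illustrative, not the book's actual defaults), simulating a short attempt history:

```ts
// Assumed shapes and values; the book's BktParams/DEFAULT_BKT may differ.
interface BktParams { pSlip: number; pGuess: number; pTransit: number; }
const DEFAULT_BKT: BktParams = { pSlip: 0.1, pGuess: 0.2, pTransit: 0.1 };

function bktUpdate(
  pL: number,
  observedCorrect: boolean,
  params: BktParams = DEFAULT_BKT
): number {
  const { pSlip, pGuess, pTransit } = params;
  const posterior = observedCorrect
    ? (pL * (1 - pSlip)) / (pL * (1 - pSlip) + (1 - pL) * pGuess)
    : (pL * pSlip) / (pL * pSlip + (1 - pL) * (1 - pGuess));
  return posterior + (1 - posterior) * pTransit;
}

// Two correct answers, one mistake, one more correct answer
let pL = 0.3;
for (const correct of [true, true, false, true]) {
  pL = bktUpdate(pL, correct);
  console.log(correct ? "correct" : "wrong", pL.toFixed(3));
}
// Belief rises on correct answers and dips on the mistake, but never
// collapses: the P(T) term guarantees some learning even after a miss.
```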
| What | Formula | Role |
|------|---------|------|
| Update (correct) | $\frac{P(L)(1-P(S))}{P(L)(1-P(S)) + (1-P(L))P(G)}$ | “correct answer → more confidence” |
| Update (wrong) | $\frac{P(L)P(S)}{P(L)P(S) + (1-P(L))(1-P(G))}$ | “mistake → less confidence (no panic)” |
| Learning | $P_{\text{post}} + (1-P_{\text{post}}) \cdot P(T)$ | “can learn during the attempt” |
| Predict | $P(L)(1-P(S)) + (1-P(L))P(G)$ | “probability of solving now” |

All four are deterministic. No sampling, no black boxes, no neural nets — eighth-grade arithmetic.