
EU AI Act — what it is and how it concerns us

Regulation (EU) 2024/1689 (the EU AI Act) is the EU's framework regulation for AI systems. Adaptive learning in the school context is high-risk under Annex III. Obligations enter into force in stages between August 2026 and August 2027. Our pipeline, deterministic BKT plus a template explanation engine, meets the traceability, explainability and human-oversight requirements by construction, with no separate compliance infrastructure.

The EU AI Act sorts AI systems into four risk tiers:

  1. Unacceptable — banned (social scoring, manipulative AI).
  2. High-risk — allowed, but with obligations (Art. 9–15).
  3. Limited risk — transparency obligations (chatbots).
  4. Minimal risk — no obligations.

The high-risk list in Annex III (item 3) includes:

AI systems intended to be used … for the purpose of determining access to or admission of natural persons to educational and vocational training institutions … or for the purpose of evaluating learning outcomes … or for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access …

That is: systems that decide what to study next, or evaluate learning outcomes, are high-risk. The selector in our architecture (and getNextRecommendedQuestion in MATx) qualifies.

| Date | What enters into force |
| --- | --- |
| 1 August 2024 | AI Act published, in force |
| 2 February 2025 | Bans on unacceptable-risk uses (Art. 5) |
| 2 August 2025 | Rules for general-purpose AI (GPT and similar) |
| 2 August 2026 | Bulk of high-risk obligations (for systems placed on the market after this date) |
| 2 August 2027 | Full application to systems already on the market before 2026 |

Source: Article 113 — Entry into force and application.

Annex III high-risk — where MATx + matx-hack land

| Annex III pt. 3 sub-point | Applies to us? |
| --- | --- |
| (a) Determining access / admission | No — we don’t decide who gets admitted |
| (b) Evaluating learning outcomes | Yes — BKT P(L) is an objective mastery estimate |
| (c) Assessing appropriate level of education | Partly — the selector picks the difficulty level |
| (d) Detecting prohibited behaviour (cheating detection) | No |

That means Art. 9–15 apply in full once we go to a production pilot.

| Article | Requirement (simplified) | Our status |
| --- | --- | --- |
| 9 — Risk management | Document risks across the lifecycle | Roadmap (after pilot) |
| 10 — Data governance | Training data representative, free of bias | N/A — we have no ML training. Tasks are curated by a teacher (data/matx-define/) |
| 11 — Technical documentation | Architecture, design rationale, known limitations | This guide is the documentation |
| 12 — Record-keeping (logs) | Automatic logging of inference events | Every bktUpdate is reproducible from (prior, isCorrect, params); an opt-in audit_trace JSONB adds full history |
| 13 — Transparency to users | The user knows they’re using an AI | The teacher sees the reason (per microskill); the student sees a deterministic system, not an LLM |
| 14 — Human oversight | Final decision rests with a human | The teacher sees the heatmap + reason and can override the recommendation or assign a task manually |
| 15 — Accuracy & robustness | Documented accuracy, cybersecurity | TBD — pilot data |
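The Art. 12 claim in the table — every logged posterior is recomputable from (prior, isCorrect, params) — can be demonstrated with a replay check: an auditor walks the stored trace and recomputes each posterior. A minimal TypeScript sketch; the trace field names mirror the opt-in audit_trace described above but are illustrative, not the actual schema:

```typescript
// Minimal replay check for Art. 12 record-keeping: every logged posterior
// must be recomputable from (prior, isCorrect, params).
// Field and parameter names are illustrative, not the real bkt-core API.
interface BktParams { pSlip: number; pGuess: number; pTransit: number }

interface TraceEntry {
  prior: number;
  isCorrect: boolean;
  posterior: number; // the value that was logged at update time
}

function bktUpdate(prior: number, isCorrect: boolean, p: BktParams): number {
  // Evidence step (Bayes' rule), then the learning transition
  const evidence = isCorrect
    ? (prior * (1 - p.pSlip)) / (prior * (1 - p.pSlip) + (1 - prior) * p.pGuess)
    : (prior * p.pSlip) / (prior * p.pSlip + (1 - prior) * (1 - p.pGuess));
  return evidence + (1 - evidence) * p.pTransit;
}

/** Returns true iff every logged posterior replays from its inputs. */
function replayTrace(trace: TraceEntry[], params: BktParams, eps = 1e-9): boolean {
  return trace.every(
    (e) => Math.abs(bktUpdate(e.prior, e.isCorrect, params) - e.posterior) < eps
  );
}
```

A tampered or corrupted trace fails the check, which is exactly the property an auditor needs from a record-keeping log.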

Why deterministic BKT is compliant by construction


The main risks of high-risk AI are opacity (“black box”), hallucination, and training-data bias. Our pipeline:

  • Closed-form math. P(L) after an update is computed from (prior, isCorrect, P_S, P_G, P_T) via Bayes’ rule. An auditor can independently replay the result from any point in history.
  • No LLM in the production path. No generative component — nothing to hallucinate. Explanations are assembled by a template engine (web/lib/explain.ts) with a fixed set of Estonian phrases.
  • Curated task base. No AI-generated tasks — therefore no training data, therefore no training-bias. Tasks are tagged by a practising teacher.
  • Microskill split is objective. The 9 microskills define.tN.{add,mul,mix} reflect subject-matter categories, not socio-demographic ones — Art. 10 fairness doesn’t apply.
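The closed-form update in the first bullet fits in a few lines of TypeScript. This is an illustrative reimplementation, not the actual bkt-core source; the parameter names (pSlip, pGuess, pTransit) are assumptions standing in for the (P_S, P_G, P_T) tuple above:

```typescript
// Standard BKT update: Bayes' rule on the observation, then the
// learning transition. Illustrative sketch, not the bkt-core source.
interface BktParams {
  pSlip: number;    // P_S: wrong answer despite mastery
  pGuess: number;   // P_G: correct answer without mastery
  pTransit: number; // P_T: probability of learning on this attempt
}

function bktUpdate(prior: number, isCorrect: boolean, p: BktParams): number {
  // Evidence step: P(L | observation) by Bayes' rule
  const evidence = isCorrect
    ? (prior * (1 - p.pSlip)) /
      (prior * (1 - p.pSlip) + (1 - prior) * p.pGuess)
    : (prior * p.pSlip) /
      (prior * p.pSlip + (1 - prior) * (1 - p.pGuess));
  // Learning step: a chance pTransit of acquiring the skill
  return evidence + (1 - evidence) * p.pTransit;
}

// Deterministic: the same inputs always yield the same posterior.
const params = { pSlip: 0.1, pGuess: 0.2, pTransit: 0.1 };
console.log(bktUpdate(0.5, true, params).toFixed(4)); // 0.8364
```

There is no sampling and no hidden state: the posterior is a pure function of its arguments, which is what makes the audit replay trivial.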

These properties cover the requirements directly:

| Requirement | Covered by |
| --- | --- |
| Art. 12 — record-keeping | Replayable from bktUpdate(prior, ...) |
| Art. 13 — transparency | A plain formula, not “AI magic” |
| Art. 14 — human oversight | Teacher override via the UI, always |
| Art. 15 — accuracy | Pilot evaluation + EM fitting of parameters |

If Tom integrates @matx-hack/bkt-core (PR 1), MATx inherits this compliance profile automatically:

  • bktUpdate — the same closed-form formula in his submit-handler;
  • student_skill_state (a new table) plus an optional audit_trace JSONB cover Art. 12 record-keeping;
  • his existing reason heuristic (Harjuta... / Kinnista... / Korda...) is already Art. 13 transparency;
  • the teacher in his teacher-analytics dashboard is Art. 14 oversight.
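The data shapes involved in that integration can be sketched as TypeScript types. The column names in student_skill_state, the audit_trace entry shape, and the reason thresholds below are all assumptions for illustration, not the real schema or heuristic:

```typescript
// Hypothetical row shapes for the integration above. Column names and
// thresholds are assumptions, not the actual MATx schema or heuristic.
interface StudentSkillState {
  studentId: string;
  microskill: string; // e.g. one of the define.tN.* microskills
  pMastery: number;   // current BKT P(L)
  updatedAt: string;  // ISO timestamp of the last bktUpdate
}

// One element of the opt-in audit_trace JSONB array (Art. 12)
interface AuditTraceEntry {
  timestamp: string;
  prior: number;
  isCorrect: boolean;
  posterior: number;
}

// The reason heuristic maps mastery to one of the fixed Estonian phrases
// (Art. 13 transparency). The cut-offs here are illustrative only.
function reason(pMastery: number): string {
  if (pMastery < 0.4) return "Harjuta...";  // "practise"
  if (pMastery < 0.8) return "Kinnista..."; // "consolidate"
  return "Korda...";                        // "review"
}
```

Because the reason string is picked from a fixed set by a pure function of P(L), the transparency obligation is met without any generative component.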

No separate compliance infrastructure under the AI Act is needed.

Now (hackathon demo):

  • ✅ Architecture + documentation in place (Art. 11)
  • ✅ The math is closed (Art. 12, 13)
  • ✅ Teacher in the loop (Art. 14)

Roadmap (after the pilot):

  • Enable the audit_trace JSONB column — accumulates (timestamp, prior, isCorrect, posterior) for every attempt
  • Pilot-evaluation accuracy metrics (Art. 15)
  • Risk-management document (Art. 9)
  • DPIA — Data Protection Impact Assessment (GDPR + AI Act)

eatf.eu — an additional attestation layer


eatf.eu (Anton’s parallel project) provides model attestation, response signing and a centralised audit trail through a single API call. Where needed, we wrap bkt-core as an attestable component:

```
attempt → bktUpdate → eatf.attest({ input, output, params, modelHash })
                    → audit_trace + signature
```

This isn’t required for compliance (Art. 12 is covered by a plain JSONB column), but it gives an external independent record for the auditor.
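A wrapper along those lines could look roughly as follows. The AttestClient interface is modelled on the flow diagram above and is entirely an assumption about the eatf.eu client, not its real API; only bktUpdate's inputs and output come from the rest of this guide:

```typescript
// Sketch of wrapping bktUpdate as an attestable component.
// AttestClient is an assumed shape, not eatf.eu's actual client API.
interface AttestClient {
  attest(record: {
    input: unknown;
    output: unknown;
    params: unknown;
    modelHash: string;
  }): Promise<{ signature: string }>;
}

interface BktParams { pSlip: number; pGuess: number; pTransit: number }

function bktUpdate(prior: number, isCorrect: boolean, p: BktParams): number {
  const evidence = isCorrect
    ? (prior * (1 - p.pSlip)) / (prior * (1 - p.pSlip) + (1 - prior) * p.pGuess)
    : (prior * p.pSlip) / (prior * p.pSlip + (1 - prior) * (1 - p.pGuess));
  return evidence + (1 - evidence) * p.pTransit;
}

async function attestedUpdate(
  client: AttestClient,
  prior: number,
  isCorrect: boolean,
  params: BktParams,
  modelHash: string
): Promise<{ posterior: number; signature: string }> {
  const posterior = bktUpdate(prior, isCorrect, params);
  // External, independently verifiable record alongside the local audit_trace
  const { signature } = await client.attest({
    input: { prior, isCorrect },
    output: { posterior },
    params,
    modelHash,
  });
  return { posterior, signature };
}
```

The attestation sits strictly after the deterministic update, so enabling or disabling it never changes what the student or teacher sees.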