EU AI Act — what it is and how it concerns us
Regulation (EU) 2024/1689 (the EU AI Act) is the framework regulation for AI systems in the EU. Adaptive learning in the school context is Annex III high-risk. The high-risk obligations phase in between August 2026 and August 2027. Our pipeline — deterministic BKT plus a template explanation engine — meets the traceability, explainability and human-oversight requirements by construction, with no separate compliance infrastructure.
Regulatory context
The EU AI Act splits AI systems into 4 risk tiers:
- Unacceptable — banned (social scoring, manipulative AI).
- High-risk — allowed, but with obligations (Art. 9–15).
- Limited risk — transparency obligations (chatbots).
- Minimal risk — no obligations.
The high-risk list in Annex III (item 3) includes:
> AI systems intended to be used … for the purpose of determining access to or admission of natural persons to educational and vocational training institutions … or for the purpose of evaluating learning outcomes … or for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access …
That is: systems that decide what to study next, or evaluate learning outcomes, are high-risk. The selector in our architecture (and `getNextRecommendedQuestion` in MATx) qualifies.
Key dates
| Date | What enters into force |
|---|---|
| 1 August 2024 | AI Act published, in force |
| 2 February 2025 | Bans on unacceptable-risk uses (Art. 5) |
| 2 August 2025 | Rules for general-purpose AI (GPT and similar) |
| 2 August 2026 | Bulk of high-risk obligations (for systems placed on the market after this date) |
| 2 August 2027 | Full application to systems already on the market before 2026 |
Source: Article 113 — Entry into force and application.
Annex III high-risk — where MATx + matx-hack land
| Annex III pt. 3 sub-point | Applies to us? |
|---|---|
| (a) Determining access / admission | No — we don’t decide who gets admitted |
| (b) Evaluating learning outcomes | Yes — BKT is an objective mastery estimate |
| (c) Assessing appropriate level of education | Partly — the selector picks the difficulty level |
| (d) Detecting prohibited behaviour (cheating-detection) | No |
That means Art. 9–15 apply in full once we go to a production pilot.
Art. 9–15 requirements and our status
| Article | Requirement (simplified) | Our status |
|---|---|---|
| 9 — Risk management | Document risks across lifecycle | Roadmap (after pilot) |
| 10 — Data governance | Training data representative, free of bias | N/A — we have no ML training. Tasks are curated by a teacher (data/matx-define/) |
| 11 — Technical documentation | Architecture, design rationale, known limitations | This guide is the documentation |
| 12 — Record-keeping (logs) | Auto-logging of inference events | Every bktUpdate is reproducible from (prior, isCorrect, params); an opt-in audit_trace JSONB adds full history |
| 13 — Transparency to users | The user knows they’re using an AI | The teacher sees the reason (per microskill); the student sees a deterministic system, not an LLM |
| 14 — Human oversight | Final decision rests with a human | The teacher sees the heatmap + reason and can override the recommendation or assign a task manually |
| 15 — Accuracy & robustness | Documented accuracy, cybersecurity | TBD — pilot data |
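The Art. 12 row rests on replayability: an auditor can recompute every posterior from its logged inputs. A minimal sketch of such a replay check, assuming the standard BKT update formulation and a hypothetical `AuditEntry` row shape (field names mirror the planned trace tuple but are assumptions, not the real schema):

```typescript
// Standard BKT update (assumed formulation; pS = slip, pG = guess, pT = learn).
interface BktParams { pS: number; pG: number; pT: number }

function bktUpdate(prior: number, isCorrect: boolean, { pS, pG, pT }: BktParams): number {
  const evidence = isCorrect
    ? (prior * (1 - pS)) / (prior * (1 - pS) + (1 - prior) * pG)
    : (prior * pS) / (prior * pS + (1 - prior) * (1 - pG));
  return evidence + (1 - evidence) * pT;
}

// Hypothetical audit-log row: what Art. 12 record-keeping would store per attempt.
interface AuditEntry { prior: number; isCorrect: boolean; posterior: number }

// Replay check: recompute every posterior from its logged inputs and verify
// both the arithmetic and the chaining (each prior equals the previous posterior).
function replayVerify(trace: AuditEntry[], params: BktParams, eps = 1e-9): boolean {
  return trace.every((entry, i) => {
    const chained = i === 0 || Math.abs(entry.prior - trace[i - 1].posterior) < eps;
    const recomputed =
      Math.abs(bktUpdate(entry.prior, entry.isCorrect, params) - entry.posterior) < eps;
    return chained && recomputed;
  });
}
```

Because the update is pure, this check needs nothing beyond the log itself and the published parameters.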
Why deterministic BKT is compliant by construction
The main risks of high-risk AI are opacity (“black box”), hallucination, and training-data bias. Our pipeline:
- Closed-form math. The mastery posterior after an update is computed from `(prior, isCorrect, P_S, P_G, P_T)` via Bayes’ rule. An auditor can independently replay the result from any point in history.
- No LLM in the production path. No generative component — nothing to hallucinate. Explanations are assembled by a template engine (`web/lib/explain.ts`) with a fixed set of Estonian phrases.
- Curated task base. No AI-generated tasks — therefore no training data, and therefore no training bias. Tasks are tagged by a practising teacher.
- Microskill split is objective. The 9 microskills `define.tN.{add,mul,mix}` reflect subject-matter categories, not socio-demographic ones — Art. 10 fairness doesn’t apply.
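For concreteness, the closed-form update can be sketched as follows — a minimal sketch using the standard BKT formulation; the actual `bktUpdate` signature in bkt-core may differ:

```typescript
// Standard Bayesian Knowledge Tracing update.
// pS = slip probability, pG = guess probability, pT = learn (transit) probability.
interface BktParams { pS: number; pG: number; pT: number }

// Posterior mastery after one observed attempt: Bayes' rule on the
// evidence, then the learning-transition step. Pure and deterministic,
// so any historical state is replayable from (prior, isCorrect, params).
function bktUpdate(prior: number, isCorrect: boolean, { pS, pG, pT }: BktParams): number {
  const evidence = isCorrect
    ? (prior * (1 - pS)) / (prior * (1 - pS) + (1 - prior) * pG)
    : (prior * pS) / (prior * pS + (1 - prior) * (1 - pG));
  // Account for the chance the student learned during this opportunity.
  return evidence + (1 - evidence) * pT;
}
```

With `prior = 0.5` and typical parameters `pS = 0.1`, `pG = 0.2`, `pT = 0.1`, a correct answer raises mastery to about 0.84, and an incorrect one lowers it to 0.20.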
These properties cover the requirements directly:
| Requirement | Covered by |
|---|---|
| Art. 12 — record-keeping | Replayable from bktUpdate(prior, ...) |
| Art. 13 — transparency | A plain formula, not “AI magic” |
| Art. 14 — human oversight | Teacher override via UI, always |
| Art. 15 — accuracy | Pilot evaluation + EM-fitting of parameters |
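The Art. 13 row (“a plain formula, not AI magic”) can be illustrated with a template engine in the spirit of `web/lib/explain.ts`. Everything below — function names, mastery thresholds, the exact phrase templates — is illustrative, not the real module; only the Estonian phrase stems (Harjuta/Kinnista/Korda) come from the integration notes:

```typescript
// Hypothetical sketch of template-based explanations: a fixed mapping
// from mastery bands to Estonian phrase stems. Thresholds are assumptions.
type Reason = "Harjuta" | "Kinnista" | "Korda";

function reasonFor(mastery: number): Reason {
  if (mastery < 0.4) return "Harjuta";  // practise: skill still weak
  if (mastery < 0.8) return "Kinnista"; // consolidate: partly mastered
  return "Korda";                       // review: keep mastery fresh
}

function explain(skill: string, mastery: number): string {
  // Template assembly only: no generative model, so the output space is
  // finite and every possible string can be audited in advance.
  return `${reasonFor(mastery)}: ${skill} (${Math.round(mastery * 100)}%)`;
}
```

Because the phrase set is closed, transparency review reduces to reading one short file.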
What it means for MATx
If Tom integrates `@matx-hack/bkt-core` (PR 1), MATx inherits this compliance profile automatically:

- `bktUpdate` — the same closed-form formula in his submit-handler;
- `student_skill_state` (a new table) plus an optional `audit_trace` JSONB column cover Art. 12 record-keeping;
- his existing reason heuristic (`Harjuta...`/`Kinnista...`/`Korda...`) is already Art. 13 transparency;
- the teacher in his teacher-analytics dashboard provides Art. 14 oversight.
No separate compliance infrastructure under the AI Act is needed.
What it means for us (matx-hack)
Now (hackathon demo):
- ✅ Architecture + documentation in place (Art. 11)
- ✅ The math is closed (Art. 12, 13)
- ✅ Teacher in the loop (Art. 14)
Roadmap (after the pilot):
- Enable the `audit_trace` JSONB column — accumulates `(timestamp, prior, isCorrect, posterior)` for every attempt
- Pilot-evaluation accuracy metrics (Art. 15)
- Risk-management document (Art. 9)
- DPIA — Data Protection Impact Assessment (GDPR + AI Act)
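The `audit_trace` roadmap item amounts to an append-only array of tuples. A sketch of the accumulation step, with the tuple fields named in the roadmap (the persistence layer — writing the array into the JSONB column — is out of scope here):

```typescript
// Sketch of audit_trace accumulation: each attempt appends one entry.
// Field names follow the roadmap tuple (timestamp, prior, isCorrect, posterior).
interface TraceEntry {
  timestamp: string; // ISO 8601
  prior: number;
  isCorrect: boolean;
  posterior: number;
}

// Returns a new array rather than mutating, so the trace stays append-only
// from the caller's point of view; `now` is injectable for testability.
function appendTrace(
  trace: TraceEntry[],
  prior: number,
  isCorrect: boolean,
  posterior: number,
  now: Date = new Date(),
): TraceEntry[] {
  return [...trace, { timestamp: now.toISOString(), prior, isCorrect, posterior }];
}
```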
eatf.eu — an additional attestation layer
eatf.eu (Anton’s parallel project) provides model attestation, response signing and audit trails centrally, with one API call. Where needed, we wrap bkt-core as an attestable component:

`attempt → bktUpdate → eatf.attest({input, output, params, modelHash}) → audit_trace + signature`

This isn’t required for compliance (Art. 12 is covered by a plain JSONB column), but it gives the auditor an external, independent record.
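A sketch of the attestation step in that pipeline. The eatf.eu client API is assumed from the arrow diagram above (method name, payload fields), not taken from real eatf.eu documentation:

```typescript
// Hypothetical eatf.eu client surface, inferred from the pipeline sketch.
interface AttestClient {
  attest(payload: {
    input: unknown;
    output: unknown;
    params: unknown;
    modelHash: string;
  }): Promise<{ signature: string }>;
}

// Wraps one already-computed BKT update in an attestation call, so the
// audit trail carries an externally signed record alongside the plain row.
async function attestUpdate(
  eatf: AttestClient,
  prior: number,
  isCorrect: boolean,
  posterior: number,
  params: unknown,
): Promise<{ posterior: number; signature: string }> {
  const { signature } = await eatf.attest({
    input: { prior, isCorrect },
    output: { posterior },
    params,
    modelHash: "bkt-core", // placeholder: a real deployment would pin a version hash
  });
  return { posterior, signature };
}
```

Keeping the attestation outside `bktUpdate` itself preserves the pure, replayable core; signing failures can then be handled without touching mastery state.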
Useful links
- Regulation (EU) 2024/1689 — official text (eur-lex)
- Annex III — the high-risk list
- Article 113 — entry-into-force dates
- artificialintelligenceact.eu — community summaries
- European Commission — Regulatory framework on AI
- eatf.eu — attestation framework