Architecture, not policy

Sovereignty as architectural principle

How the architecture makes violation impossible — not merely forbidden

The principle that individuals should control their own data has moved from advocacy to implementation. Data cooperatives, self-sovereign identity frameworks and the European Data Governance Act converge on the same structural insight: personal data must be governed by the person it describes, not by the organisation that processes it. This page describes how the GEA architecture applies that logic to education — where the data is arguably more intimate than a health record. A health record captures what happened to your body. A lifelong learning profile captures how your mind works.

The mechanism

Apple's Private Cloud Compute, launched in 2024, demonstrated the core mechanism at global scale: stateless computation, enforceable code-signing guarantees, no privileged access for employees, non-targetability of requests, and verifiable transparency of the running codebase.

Apple can be stateless: process and forget. A lifelong learning system cannot. The profile must persist over years and follow the learner between hubs, cities and countries. The architecture therefore separates two concerns that Apple could collapse into one: stateless processing during the session, and stateful encrypted storage under the learner's control between sessions.

During a session, the Mentor operates on decrypted profile data in volatile memory — the same ephemeral processing Apple demonstrated. When the session ends, the data is re-encrypted. The encryption key belongs to the learner, managed through Self-Sovereign Identity. Without the key, the data is mathematically unreadable. The hub is an access point, not a storage location. If the operating organisation is compromised, acquired or pressured by a government, what it holds is ciphertext.
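
A minimal sketch of that session lifecycle, assuming a symmetric profile key held in the learner's SSI wallet. All names are illustrative, and Fernet stands in for whatever authenticated encryption a real implementation would choose; the paper does not prescribe a cipher:

```python
from cryptography.fernet import Fernet

def run_session(encrypted_profile: bytes, learner_key: bytes) -> bytes:
    """One Mentor session: decrypt, work in volatile memory, re-encrypt."""
    cipher = Fernet(learner_key)
    profile = cipher.decrypt(encrypted_profile)  # plaintext exists only in RAM
    profile = mentor_step(profile)               # ephemeral in-session processing
    return cipher.encrypt(profile)               # ciphertext is all the hub keeps

def mentor_step(profile: bytes) -> bytes:
    """Placeholder for the Mentor's work on the decrypted profile."""
    return profile

# The hub only ever holds `stored` (ciphertext); the key stays with the learner.
learner_key = Fernet.generate_key()              # in practice: the SSI wallet
stored = Fernet(learner_key).encrypt(b'{"progress": []}')
stored = run_session(stored, learner_key)
```

The point of the separation is visible in the types: the hub's storage interface only ever handles the return value of `run_session`, never the intermediate plaintext.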

Custodial keys for children

Key management for a lifelong system that begins in early childhood is a serious architectural challenge, not a solved problem. A six-year-old cannot manage cryptographic keys.

The architecture addresses this through a custodial model with graduated transfer: parents hold the child's keys initially, managed through a threshold cryptography scheme where multiple guardians (parents, the hub, a recovery trustee) each hold key shares. No single party can access the data alone, but a defined quorum can recover access if a key is lost. As the learner matures, key shares transfer progressively: the adolescent gains independent shares that create private zones parents cannot access, while parents retain emergency recovery shares until full legal majority.

Key loss triggers a recovery protocol — not data destruction — using the threshold scheme. Social recovery (designating trusted contacts who together can reconstruct access) provides an additional safety net. These mechanisms draw on established building blocks (Shamir's Secret Sharing, social recovery wallets, custodial bridges), but their application to a lifelong educational profile with graduated autonomy is novel and requires careful implementation.
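
To make the quorum mechanics concrete, here is a minimal Shamir split-and-recover sketch. The 2-of-3 parameters, the field prime and the guardian roles are illustrative assumptions, not the paper's specification:

```python
import secrets

P = 2**127 - 1  # Mersenne prime; the shared secret lives in GF(P)

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares so that any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# 2-of-3 guardian quorum: parent, hub and recovery trustee hold one share each.
key = secrets.randbelow(P)
parent, hub, trustee = split(key, n=3, k=2)
assert recover([parent, trustee]) == key  # a quorum recovers a lost key
# A single share reveals nothing useful: no party can read the data alone.
```

A production system would use a hardened library and hardware-backed share storage rather than twenty lines of Python, but the quorum property is exactly this.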

What no single actor sees

No single actor sees the complete picture. The Mentor sees the learner's profile deeply but cannot issue credentials. The Audit-Agent verifies evidence but sees only the evidence stream and statistical signatures, not the full profile. The governance layer sees system health but not individual data. No human at any level of the operating organisation sees raw data or psychometric profiles.

The learner sees an interpreted cockpit: progress, strengths, trajectories. Parents see what the child shares — milestones, areas needing attention — never emotional patterns or psychometric details. Teachers receive discrete action impulses: "have a conversation," "this learner needs encouragement." Never diagnoses. Employers see only verified competency evidence for exactly the skills requested. Nothing about the process, the profile or the person beyond what was chosen to share.
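
Read as an access table, the separation above might look like the following sketch. Role and field names are assumptions made for illustration; the paper defines the views, not this encoding:

```python
# Each actor gets a fixed projection of the profile, never the raw record.
VIEWS = {
    "mentor":     {"full_profile"},                        # deep view, no credentialing
    "audit":      {"evidence_stream", "stat_signatures"},  # never the full profile
    "governance": {"system_health"},                       # no individual data
    "learner":    {"progress", "strengths", "trajectories"},
    "parent":     {"milestones", "attention_areas"},       # only what the child shares
    "teacher":    {"action_impulses"},                     # impulses, not diagnoses
    "employer":   {"requested_evidence"},                  # only what was chosen
}

def view_for(role: str, profile: dict) -> dict:
    """Project the profile down to the fields this role may see."""
    allowed = VIEWS.get(role, set())
    return {field: value for field, value in profile.items() if field in allowed}
```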

A state that demands access gets nothing — not because a contract forbids it, but because no interface exists. The separation between educational data and executive systems is architectural: no API endpoint connects them. Seizing a hub yields encrypted storage; the keys are with the learners.

Threat model

The sovereignty claims above require a concrete threat model. The following table maps specific threats to their architectural mitigations. The architecture does not claim invulnerability. It claims that the attack surface is reduced to the individual learner's key custody, a fundamentally different security posture from any system where a central operator holds the keys.

Ten threats and the architectural mitigations that answer them. Source: Pochmann 2026, Section 6.
01. Threat: Operating organisation compromised or acquired.
    Mitigation: The hub stores only ciphertext; keys are with learners. Acquiring the organisation yields no usable data.
02. Threat: State demands bulk access to learner data.
    Mitigation: No API endpoint connects educational data to external systems. No technical interface exists to comply, even under legal compulsion.
03. Threat: State bans the system entirely.
    Mitigation: Hubs are portable, operable by NGOs, and functional in island mode without central coordination. The architecture supplements existing state systems where possible and runs independently where necessary.
04. Threat: Insider at hub level attempts data exfiltration.
    Mitigation: Stateless processing; data exists decrypted only in volatile memory during sessions. No persistent plaintext on disk. Hardware attestation ensures only signed code runs.
05. Threat: Rogue Mentor instance profiles learners for external use.
    Mitigation: The Audit-Agent monitors all Mentor outputs in a separate execution context. The Mentor has no network egress path except through the encrypted storage and evidence pipelines.
06. Threat: Side-channel attack on volatile memory during a session.
    Mitigation: Trusted execution environments (TEEs) with memory encryption at the hardware level, as in Apple's Secure Enclave and AMD SEV. Residual risk exists and is bounded by session duration.
07. Threat: Accidental key loss by the learner.
    Mitigation: Threshold recovery via guardian quorum (see custodial keys above). Data is recoverable; no child loses their educational history because of a lost device.
08. Threat: Parent coercion, attempting to access an adolescent's private zones.
    Mitigation: The threshold scheme requires the adolescent's own key share for private zones. Parent shares alone are insufficient.
09. Threat: Re-identification from anonymised research data.
    Mitigation: The research AI operates in secure enclaves, with differential privacy under conservative epsilon bounds. No human sees the aggregated data. The publication threshold is enforced inside the enclave.
10. Threat: Coordinated attack by a state, an insider and physical seizure.
    Mitigation: Defence in depth. Even with physical hub access and a compromised insider, data remains encrypted with keys distributed across learner-held shares. The attack must also compromise the learner's personal key shares, which amounts to physical coercion of the individual; no technical system can fully prevent that.

The political threat

A state that bans the system rather than trying to break it deserves direct attention. The states where Lost Einsteins are most concentrated are often the states most likely to reject an education system they cannot inspect or control.

The architecture addresses this through deployment flexibility, not political confrontation. In cooperative states, the system operates as a supplement within existing education systems — the least threatening integration model. In hostile environments, the hub design enables operation by NGOs, religious organisations or community groups, with island mode ensuring functionality without central coordination. The hubs are portable, deniable in their specificity (a shipping container with solar panels and tablets), and operationally independent.

This does not solve the political problem. No architecture can force a state to permit education it opposes. But the technical architecture does not add political dependencies beyond those inherent in any educational intervention.

Sovereignty grows with the learner

Data sovereignty grows with the learner. For young children, parents co-determine sharing through the custodial key model described above. As the learner matures, control shifts — the adolescent defines private zones parents cannot see, enforced by the threshold scheme. In adulthood, sovereignty is complete.
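
As a sketch, graduated sovereignty can be written as a zone policy keyed to maturity stage. Stage names, zone names and the unlock rule are assumptions for illustration:

```python
# Which share-holders can unlock which profile zones at each stage of maturity.
POLICY = {
    "child":      {"shared_zone": {"guardian_quorum"}},
    "adolescent": {"shared_zone": {"guardian_quorum", "learner"},
                   "private_zone": {"learner"}},   # parent shares are insufficient
    "adult":      {"shared_zone": {"learner"},
                   "private_zone": {"learner"}},   # sovereignty is complete
}

def can_unlock(stage: str, zone: str, share_holders: set) -> bool:
    """True if any presented share-holder is authorised for this zone."""
    return bool(POLICY[stage].get(zone, set()) & share_holders)

assert not can_unlock("adolescent", "private_zone", {"guardian_quorum"})
assert can_unlock("adolescent", "private_zone", {"learner"})
```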

The kill-switch

The right to be forgotten is an accompanied process, not a button. Triggering it produces an immediate functional lock, then a mandatory human conversation, then a waiting period, and only then irreversible cryptographic destruction — if the learner still chooses it. The architecture refuses to make permanent decisions easy enough to make in a moment of crisis.
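
The accompanied process maps onto a small linear state machine. In the sketch below, the 30-day waiting period is an assumption; the text fixes no duration:

```python
import datetime as dt

WAITING_PERIOD = dt.timedelta(days=30)  # assumed; the text names no duration

class ErasureProcess:
    """Lock -> mandatory conversation -> waiting period -> destruction."""

    def __init__(self):
        self.stage = "active"
        self.waiting_since = None

    def trigger(self):
        self.stage = "locked"                 # immediate functional lock

    def record_conversation(self):
        assert self.stage == "locked"         # the human conversation is mandatory
        self.stage = "waiting"
        self.waiting_since = dt.datetime.now(dt.timezone.utc)

    def confirm_destruction(self):
        assert self.stage == "waiting"
        elapsed = dt.datetime.now(dt.timezone.utc) - self.waiting_since
        if elapsed < WAITING_PERIOD:
            raise RuntimeError("waiting period not yet elapsed")
        self.stage = "destroyed"              # irreversible: key material erased

    def cancel(self):
        if self.stage != "destroyed":         # reversible at every earlier stage
            self.stage = "active"
```

Every transition except the last is reversible, which is the design point: the one irreversible step is gated behind every other stage.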

Same architecture, two levels

The same architecture that protects individuals enables learning research at unprecedented scale. Millions of Mentors accompany millions of learners. Anonymised, aggregated data flows into a research layer: which explanations work for which cognitive patterns? Where do learners systematically fail? Which paths through the Knowledge Graph are efficient?

The research AI operates in secure enclaves under the same guarantees as the individual Mentor. No human sees the aggregated data. Insights flow back as abstract patterns — "learners with pattern X benefit from approach Y" — never as individual profiles. Same architecture, same protection, two levels: individual (full profile, only this learner's Mentor) and collective (anonymised, only the research AI).
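
A sketch of how a single aggregate count might be released under those guarantees, using the standard Laplace mechanism. The epsilon value and the publication threshold are assumed; the paper requires both but fixes neither:

```python
import random

EPSILON = 0.1     # conservative privacy budget (assumed value)
THRESHOLD = 100   # publication threshold, enforced inside the enclave

def release_count(true_count: int, sensitivity: int = 1):
    """Release a noisy aggregate count, or nothing below the threshold."""
    if true_count < THRESHOLD:
        return None  # too few learners behind the aggregate: suppress entirely
    rng = random.SystemRandom()
    # Laplace(scale = sensitivity/epsilon) as a difference of two exponentials
    rate = EPSILON / sensitivity
    noise = rng.expovariate(rate) - rng.expovariate(rate)
    return round(true_count + noise)
```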

The system does not just get better at teaching. It gets better at understanding what knowledge is and how humans acquire it. Sovereignty is not the enemy of collective learning. It is the precondition.

Architecture paper, Section 6, in full. DOI 10.5281/zenodo.18759134, CC BY 4.0.