CurricuLLM is independently audited and assessed for youth AI safety, and built to meet the UK Department for Education's Generative AI product safety standards.
CurricuLLM has been independently verified by leading youth AI safety bodies.
CurricuLLM is built to meet the Department for Education's standards for safe, generative AI in schools and colleges in England. Here is how we address each of the thirteen DfE standards.
CurricuLLM is clear about what the product is and is not for: a curriculum-aligned tutor and teacher assistant for English schools, designed for supervised classroom and homework use.
CurricuLLM supports the DfE's defined use cases — content creation and delivery, personalised learning and accessibility, assessment and analytics, digital assistant, research and writing aid, and learner engagement — with clear scope for each.
Layered content filtering reliably prevents pupils from generating or accessing harmful or age-inappropriate material, with safeguards that hold throughout a conversation and adjust to age and context.
Safeguarding signals are surfaced to designated leads in near real time. Schools see flagged conversations, policy breaches, and aggregate usage patterns through the safeguarding dashboard, and can export reports for incident records.
CurricuLLM is built for reliability, security and robustness, including hardened defences against prompt-injection and jailbreak attempts, encryption in transit and at rest, and role-based access controls.
CurricuLLM is operated in line with the UK GDPR and the Data Protection Act 2018, with transparent data handling, lawful bases for processing, and a clear no-training commitment: pupil and teacher work is never used to train third-party foundation models.
Schools own the lessons, assessments, and student work created in CurricuLLM. We are transparent about the third-party models used for inference and the licensing terms that apply to generated outputs.
CurricuLLM is rigorously red-teamed and evaluated for safety, accuracy and curriculum alignment before release. We use both automated and human-in-the-loop testing, with documented evaluations available to schools.
CurricuLLM operates with formal accountability: documented risk assessments for every use case, a clear complaints and incident-handling process, and named owners for AI safety, privacy, and safeguarding.
CurricuLLM is designed to scaffold thinking, not to do the thinking. We monitor and limit cognitive offloading — the tutor questions and prompts pupils to reason for themselves rather than generating finished answers on their behalf.
CurricuLLM does not anthropomorphise the tutor or simulate friendship. Interactions are designed to keep pupils oriented towards their teachers, peers and trusted adults, and to avoid emotional dependence.
CurricuLLM detects signs of distress, signposts pupils to trusted adults and recognised support services, uses safe and supportive response language, and surfaces concerns to the school's designated safeguarding lead.
CurricuLLM does not use manipulative or persuasive techniques to keep pupils engaged. There are no dark-pattern engagement loops, sycophancy, or behavioural nudges designed to maximise time on platform.