Age-appropriate AI, assessment integrity, and why the curriculum is the cleanest guardrail
8 March 2026


Dan Hart


CEO, Co-Founder, CurricuLLM

This week: why age appropriateness cannot be bolted on to AI in schools; takeaways from the Sydney Morning Herald Schools Summit; a new CurricuLLM release; an interview with the Future Government Institute; Alpha Schools and the claims worth watching closely; and why businesses remain liable for what their chatbots say.


Age appropriateness is a design problem, not a policy afterthought

The Economic Times reports that age appropriateness has to be a core principle when introducing AI in schools. The experts quoted push a few consistent themes, including only allowing vetted tools inside secure school ecosystems.

My take is that you cannot bolt "age appropriate" on as a policy later. The system itself has to tailor the experience by age and stage, including what it will and won't do, the language it uses, and the kinds of tasks it supports. The cleanest guardrail we already have is the curriculum. If you anchor AI interactions to curriculum outcomes and progression, you get age-appropriate scope by default, though you still need to be able to customise general content and features by age level.
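To make the idea concrete, here is a minimal sketch of what anchoring an assistant's scope to the curriculum could look like. Everything here is an assumption for illustration: the year levels, outcome descriptions, and blocked task types are invented, not CurricuLLM's actual implementation or real curriculum codes.

```python
# Hypothetical sketch: age-appropriate scope falls out of the curriculum
# mapping rather than a bolted-on content filter. All level names,
# outcomes, and task types below are invented for illustration.

CURRICULUM = {
    "year_3": {
        "register": "simple sentences and concrete vocabulary",
        "outcomes": ["recognise and order four-digit numbers",
                     "retell a short narrative in sequence"],
        "blocked_tasks": ["essay drafting", "open-ended web research"],
    },
    "year_9": {
        "register": "extended prose and abstract vocabulary",
        "outcomes": ["expand binomial products",
                     "analyse an author's use of persuasive devices"],
        "blocked_tasks": ["open-ended web research"],
    },
}

def build_system_prompt(year_level: str) -> str:
    """Constrain the assistant to one year level's outcomes and register."""
    level = CURRICULUM[year_level]
    return (
        f"Use language at this level: {level['register']}. "
        f"Stay within these curriculum outcomes: {'; '.join(level['outcomes'])}. "
        f"Decline these task types: {', '.join(level['blocked_tasks'])}."
    )
```

Because the constraints are keyed to curriculum progression, changing the student's year level changes what the system will do, how it speaks, and what it refuses, all from one source of truth.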

If you would like us to use our benchmark to assess how well your current tool aligns to the curriculum, please reach out and let us know.


CurricuLLM release: admin, governance, and the small things that matter

Today's release adds new admin options to support governance and integration with school systems: SIS integration, and feature restrictions by role.
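For readers curious what "feature restrictions by role" means in practice, here is a minimal sketch of that kind of control. The role names, feature names, and data structure are assumptions for illustration, not the actual CurricuLLM implementation.

```python
# Hypothetical sketch of per-role feature restrictions.
# Role and feature names are invented for illustration.
from enum import Enum

class Role(Enum):
    ADMIN = "admin"
    TEACHER = "teacher"
    STUDENT = "student"

# Which features each role may use; in a real system an admin
# would edit this table per school rather than hard-code it.
FEATURE_GRANTS = {
    Role.ADMIN: {"sis_sync", "lesson_planner", "usage_reports"},
    Role.TEACHER: {"lesson_planner", "usage_reports"},
    Role.STUDENT: {"lesson_planner"},
}

def can_use(role: Role, feature: str) -> bool:
    """Return True if the role is granted access to the feature."""
    return feature in FEATURE_GRANTS.get(role, set())
```

The design choice worth noting is that access is an explicit allow-list: a feature a role has not been granted is denied by default, which is the safer failure mode for school deployments.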

But my favourite part of this release is the tiny but noticeable improvements to UI and speed.

Free for all teachers in Australia and New Zealand at curricullm.com.


Sydney Morning Herald Schools Summit: the debate has moved on

A few AI takeaways from the Sydney Morning Herald Schools Summit.

There was a pretty clear consensus that the debate has moved on from "should schools use AI?" to "what problem are we trying to solve, and what must never be automated?" AI should operate in the service of learning, but it cannot substitute for learning, teacher judgement, or the relationships that underpin great teaching.

Multiple speakers reinforced that knowledge and fundamentals matter more, not less, in an AI world. If students and teachers do not have strong foundations, AI just accelerates confusion. The skill set schools need to build is the ability to question outputs, detect errors, and form original ideas, not just produce answers quickly.

Assessment integrity is the pressure point. Schools still need confidence that work demonstrates attainment, and that means clearer guidance, better task design, and better norms around when and how AI is used.

AI can lower the social cost of asking questions, especially for younger students, but it also introduces risks: unhealthy private reliance, less healthy academic risk-taking, and fewer opportunities for shared learning moments.


From EduChat to CurricuLLM: an interview with Future Government Institute

It was a delight to share some insights with the Future Government Institute about the incredible teams inside the Department and how that shaped the CurricuLLM build.


Alpha Schools: hype, scepticism, and numbers worth watching

There has been a lot of media lately about Alpha Schools, both the hype and the scepticism. When someone tells you they have reinvented education, you should treat the claim with caution. I listened to the full conversation on the Moonshots podcast with their co-founders, and some of what they are describing is hard to dismiss:

  • Their seniors are averaging 1535 on the SAT (would be good to see how that compares with a similar socio-economic cohort)
  • Students are completing a full academic program in two hours a day
  • 40 to 60 percent of their high school students asked to skip summer break
  • Starting salary for teachers is six figures USD
  • They had 80,000 applicants for those roles
  • Metrics are used encouragingly — "you are five hours away" rather than "you got 62 percent"
  • 90 percent of students say they love school
  • A closed feedback loop improves the system every eight weeks

Worth watching closely.


Businesses remain liable for what their chatbots say

As generative AI chatbots become more common in customer service, regulators and legal experts are making it clear that businesses remain responsible for what their systems tell customers. That seems thoroughly sensible, but it raises the stakes for rigorous evaluation, red teaming, and continuous governance.

Under Australian Consumer Law, companies cannot deflect liability by blaming the technology if a chatbot provides misleading information about prices, refunds, or products. Recent examples, from incorrect pricing to unsafe advice, highlight how even well-guarded systems can produce unpredictable responses.


The thread tying this together

This week's theme is that the quality of AI in education depends on how deliberately it is designed, measured, and governed. Age appropriateness is not a checkbox. Assessment integrity is not solved by policy alone. And even outside education, liability follows the outcome, not the tool. The systems that work will be the ones that anchor AI to real structure, whether that is a curriculum, a set of consumer obligations, or a feedback loop that actually closes.

AI for schools is not just about access anymore. It is about whether the system is built to serve the learner in front of it, at the right level, with the right safeguards, and with someone accountable for what it does.
