This week, Australia's new online safety codes bring AI chatbots into the regulatory frame, CNN and CCDH surface alarming gaps in chatbot safety for teens, Stargate shows early signs of infrastructure normalisation, and positive examples from the British Council, a Lagos early learning project, and Karnataka schools show what deliberate AI for schools can look like.
Australia's Age-Restricted Material Codes now apply to AI chatbots
Today Australia's Age-Restricted Material Codes come into full effect, introducing stronger protections for children online. Platforms including search engines, social media, gaming services, app stores and websites must now take meaningful steps to prevent minors from being exposed to pornography, high-impact violence, self-harm content and other age-inappropriate material.
Importantly, this also applies to AI companion chatbots. If a chatbot is capable of generating sexually explicit, violent, or self-harm related content, it must now implement age assurance before allowing access to that material. As AI becomes more conversational and personal, companies deploying AI systems are responsible for how those systems behave.
You must be testing for this behaviour even if this is not your product's purpose. Guardrails, testing, and governance are not optional features anymore. They are core design requirements.
Stargate: early signs of infrastructure normalisation
Reports this week suggested that OpenAI and Oracle dropped plans to expand the flagship Stargate AI data centre campus in Abilene, Texas by an additional 600MW after financing negotiations dragged on and OpenAI's infrastructure needs shifted. The core Stargate build itself has not been cancelled. Construction continues and the extra compute capacity is expected to be deployed at other sites instead. The original Stargate initiative was announced as a massive AI infrastructure program targeting up to $500 billion of investment over several years, with multiple campuses planned across the US.
I think this shows the first signs of normalisation. When infrastructure cycles are capital-intensive, plans inevitably get reshaped as financing, power availability and demand forecasts evolve. Perhaps investors are now scrutinising ROI rather than assuming extraordinarily aggressive growth. AI usage is still rising rapidly, but the industry may be entering a phase where projects need clearer economics.
CNN and CCDH: popular chatbots are failing teen safety tests
This is a timely reminder that not all AI is safe by default, especially for young people. CNN and CCDH found that across hundreds of tests, many of the most popular chatbots gave teen users information that could support violent planning, including weapon guidance.
We have tested these types of prompts against CurricuLLM and can confirm the safe responses we would expect in a school setting. I welcome studies like this because they strengthen the industry's test data and help surface new edge cases. In our own testing, we identified that we are still slightly too permissive in some location-related responses, and we are already working to tighten that further.
Even with that, our results were materially stronger than the chatbots highlighted in this report. If your school is introducing AI for students, choose one that has been built specifically for safe use in schools.
British Council: personalisation with safeguards
This British Council piece discusses how AI can be used in ways that genuinely improve learning while keeping teachers and learners at the centre. Personalisation and efficiency only matter if they sit alongside real safeguards on bias, privacy, access and human oversight.
Families, AI, and culturally relevant learning
This is a useful example of where AI can add value in education without trying to replace the human relationships that matter most. When families are involved in creating learning materials, AI can help make content more culturally relevant, more engaging, and more connected to children's real lives.
Education AI should not flatten identity or impose one generic version of learning. It should help teachers, families and communities shape resources that reflect who their learners are, while keeping human judgement, care and context at the centre.
Shiksha Copilot: a large-scale deployment in Karnataka
In a large real-world deployment across government schools in Karnataka, teachers used Shiksha Copilot to create and adapt lesson plans. The paper found it reduced lesson planning time, eased bureaucratic workload, lowered teaching-related stress, and supported more activity-based teaching.
English lesson plans were generally reliable, but Kannada translations needed substantial linguistic editing. Teachers wanted tools like this embedded into existing professional communities, not treated as a standalone AI product.
The thread tying this together
This week's theme is accountability. Australia's new codes make it clear that if your system can produce harmful content, you are responsible for preventing it. The CNN and CCDH findings show how far mainstream chatbots still fall short on basic safety for young users. And the positive examples, from the British Council, from Lagos, from Karnataka, show that AI in education works when it is designed around the people it serves, not deployed and hoped for the best.
AI for schools has to be safe by design, not safe by luck. That means testing, governance, and guardrails are not features you add later. They are the foundation.

