This week covers the economics behind AI video's retreat, NYC's 30-page AI playbook for 1,600 public schools, the Australian Government's formal expectations for data centre and AI infrastructure developers, and new research from Common Sense Media on how families are thinking about AI.
What failed with AI video is economics, not the idea
There will be plenty of people celebrating this as the death of AI slop, and plenty saying they knew AI video was a fad all along. This SMH piece captures the moment well.
I think that reads the moment too narrowly. What has failed here is not the idea of AI video. It is the economics of serving it at scale. Video generation is still expensive, hard to moderate, and difficult to productise in a way that holds up under real usage.
The demand has not disappeared. The capability has not disappeared. What is missing, for now, is a cost curve and infrastructure stack that make it viable at broad scale. When compute gets faster and cheaper, I expect AI video to return in force.
That is usually how these things go. Early excitement overshoots. The first wave struggles under technical and commercial reality. Then the underlying capability improves, costs fall, and the category comes back in a much more practical form.
NYC releases a 30-page school AI playbook
NYC just released a 30-page AI playbook for its 1,600 public schools. This is the first formal guidance from the largest school district in the country.
Teachers can use AI for brainstorming lesson plans, research, scheduling, translation and drafting communications. They cannot use it for grading, discipline or crafting specialised learning plans for students with disabilities. Students can use AI to explore topics and support research, but submitting AI-generated work as their own is still cheating.
The guide uses a traffic light system: green for approved use cases, yellow for proceed with caution, red for prohibited. Public feedback sessions are planned over the coming months, with an updated guide due in June.
This is a significant step for school AI policy at scale. The traffic light model is practical because it gives teachers and students a clear framework without trying to legislate every possible scenario. More districts will need something like this, and the ones that move first will shape the pattern for everyone else.
Australian Government sets expectations for AI infrastructure developers
The Australian Government published its formal expectations for data centre and AI infrastructure developers. The expectations cover five areas: national interest and data sovereignty, clean energy contribution, water sustainability, workforce investment, and research and innovation.
Hyperscalers and neoclouds will be expected to provide compute access on favourable terms to Australian startups, researchers and not-for-profits. They will also be expected to deploy engineers and researchers locally and invest in Australian supply chains.
Proposals that do not align with the expectations will not be prioritised through Commonwealth regulatory assessments. Non-genuine proposals that congest approval pathways are explicitly not welcome.
The government is connecting social licence to operate with tangible contributions to the local innovation ecosystem, and using regulatory prioritisation as the mechanism. For school AI and education technology more broadly, this matters because the availability of local compute, local talent, and favourable access terms directly affects whether Australian-built tools can compete and scale.
Common Sense Media: how families are thinking about AI
New research from Common Sense Media adds to the picture of how families are thinking about AI.
Parents and teenagers both see AI as something that will shape everyday life, learning, work, and the future more broadly. The report suggests many young people are already using AI more than adults realise, including for learning, while parents remain concerned about safety, privacy, and longer-term impacts on jobs and development.
Parents and teenagers are not uniformly optimistic or pessimistic about AI. The research shows a mix of interest, uncertainty, and concern, with clear demand for better safety standards, stronger privacy protections, and more responsible use of AI in contexts involving children.
This reinforces why school AI needs to be purpose-built for education rather than adapted from consumer tools. When families are asking for stronger safety standards and privacy protections, the answer is not to layer policies on top of general-purpose AI. It is to start with tools that were designed for the school context from the beginning, with curriculum alignment, age-appropriate behaviour, and transparent data practices built in.
The thread tying this together
This week's theme is that what scales depends on what is built underneath. AI video did not fail because the technology was bad. It failed because the infrastructure economics were not ready. NYC's playbook works because it gives schools a usable structure, not just a set of aspirations. Australia's expectations framework works because it ties regulatory access to real contributions. And the Common Sense Media research shows that families want AI that is safe and responsible, not just powerful.
School AI follows the same pattern. The tools that will last are the ones built on the right foundations: curriculum alignment, safety by design, transparent governance, and infrastructure that serves learners and teachers rather than extracting from them.