People are getting fast at using AI, but that is not the same as getting good at working with it. This week the conversation spans what fluency actually looks like when people use AI, how Australian independent schools are approaching adoption at a whole-school level, and why the broader policy debate needs to shift from whether AI is good or bad to who benefits and who carries the cost.
Speed is not fluency: what Anthropic's AI Fluency Index tells us
Anthropic's AI Fluency Index looked at 9,830 anonymised Claude.ai conversations, both single-turn and multi-turn, from a 7-day window in January 2026 and tracked 11 observable fluency behaviours.
The headline finding is encouraging: 86 percent of conversations showed iteration and refinement. Those conversations averaged roughly double the number of fluency behaviours compared with single-turn exchanges (2.7 vs 1.3). They were also far more likely to include evaluation behaviours like questioning the model's reasoning and calling out missing context.
But there is a catch. In conversations where the AI produced artifacts (code, documents, apps, and tools), users became more directive up front but less evaluative afterward. The report found drops in requests to identify missing context (-5 percentage points), checking facts (-4 percentage points), and questioning reasoning (-3 percentage points).
This matters for AI for schools. If the pattern holds in education, it suggests people trust output more when AI produces something tangible, even though that is exactly when they should be checking hardest. Teachers adopting AI tools for planning, resource creation, and assessment need to build the habit of evaluating output, not just directing input. Fluency is not just about getting an answer. It is about knowing when the answer is wrong.
Independent Schools Australia: what good whole-school AI adoption looks like
Independent Schools Australia has published a report looking at how schools are approaching AI adoption at a whole-school level, with a set of case studies showing what good practice can look like in different contexts.
A few of the case studies that stood out:
Nyangatjatjara College (NT): Students and teachers using AI to create learning resources on Country, including avatars, VR, and 360-degree imagery for digital field trips. The case study also highlights challenges like cultural relevance, translation for Pitjantjatjara language, and Indigenous data sovereignty.
Hills Christian Community School (SA): A whole-school approach that blends digital innovation with outdoor learning, including interdisciplinary projects using AI, VR/AR, and sustainability themes, plus trials of adaptive tools to support diverse learners.
Scotch College Adelaide (SA): A model that involves students in shaping how AI is used, including students co-leading staff professional learning and mentoring younger peers.
St Hilda's Anglican School for Girls (WA): An AI-enhanced student dashboard approach that brings multiple data sources together and uses AI-generated summaries to support teachers, with an emphasis on assisting professional judgement rather than replacing it.
Horizons College of Learning and Enrichment (QLD): A whole-school AI strategy and standalone AI policy aligned to the Australian Framework for Generative AI in Schools, alongside practical teacher workflows for creating differentiated scaffolds and reducing admin load.
What ties these together is that none of them treat AI as a standalone rollout. Each school has embedded AI into a broader conversation about teaching, learning, and school improvement. That is the pattern that scales. AI for schools works when it is part of the system, not bolted onto it.
The policy question is not whether AI is good or bad — it is who benefits
Bernie Sanders argues in The Guardian that the scale and speed of the AI shift are ahead of policy, and that if we do nothing, the default outcome is that the gains flow upward and the disruption flows down. The question is not whether AI is good or bad. It is who benefits, and who carries the costs.
Maybe the work will shift, rather than simply disappear. In my team we are already seeing the impact. We are doing more, not the same work with fewer people. The big change is faster prototyping. AI makes it cheap to test ideas, draft versions, and explore options quickly.
That means two things happen at once. First, you unlock a whole pile of lower-benefit solutions that were not worth doing before, because the cost-benefit ratio used to be too lopsided. Second, you can take bigger swings on riskier projects, because you can prototype and test ideas faster, learn sooner, and either double down or bail out without burning weeks.
The surprising part is where the time goes. We are spending more time with people. More time clarifying the real problem, getting context, sense-checking, and making decisions together. Less time stuck in the queue of repetitive admin work. I can imagine this happening in a lot of roles.
Picture a customer service centre where staff spend more time with the customer because they are not chained to the service queue all day, and AI is doing the triage, summaries, and routine follow-ups in the background.
If that is the direction, then the policy debate should not just be about stopping AI or speeding it up. It should be about designing for that outcome on purpose. More human time, not less human value.
For AI for schools, the same principle applies. The goal is not to automate teaching. It is to free teachers to spend more time on the parts of their work that are irreducibly human: building relationships, exercising professional judgement, and responding to the student in front of them.
The thread tying this together
This week's theme is the difference between speed and direction. People are getting faster at using AI, but fluency requires evaluation, not just iteration. Schools are adopting AI, but the ones doing it well are embedding it in whole-school strategy, not treating it as a tech rollout. And the broader policy conversation needs to move past good-versus-bad to focus on who benefits and how we design for the outcomes we actually want.
AI for schools is not a technology question anymore. It is a design question. The schools, systems, and policymakers that treat it that way will be the ones that get the most value from it, and distribute that value most fairly.

