Dashboards, divergent thinking, and the workflows AI is quietly reshaping in schools
7 February 2026

Dan Hart

CEO, Co-Founder, CurricuLLM

There is a widening gap between what teams want right now and what enterprise platforms can deliver without weeks of setup. That tension is reshaping software, workflows, and the way we think about AI for schools.

This week I have been pulling on several threads: what happens when AI makes software replicable, what learning loses when answers become frictionless, and why the workflows we have inherited might be the wrong shape for the tools we now have.


Custom dashboards and the gap enterprise software needs to close

On Thursday I shared a small moment from my week. My team built a custom dashboard for one meeting. It was quick, purpose-built, and it landed because it matched exactly what the room needed.

One of my team built it with very little coding experience. All they needed was imagination and a clear picture of the end result. It was not created in Power BI or Tableau. It was vibe-coded, and it came together faster than building the same thing in a traditional dashboard tool.

Now, doing this at scale across an organisation would be chaos. But as a one-off, locally stored, purpose-built artefact, it was a great and safe use of the tech.

It also highlighted something important. There is a widening gap between what teams want right now and what enterprise platforms can deliver without weeks of setup.

The question this raises for AI for schools is whether we get a new category of enterprise software: a safe "container" for vibed tools, with governance, access controls, audit trails, and secure connectors into enterprise data. Teams could build what they need without turning the organisation into the Wild West.


AI is eating software — and investors are paying attention

This Sydney Morning Herald piece feels like the market catching up to that same idea, just at a much bigger scale. The concern is not "AI will make software better." The concern is "AI might start doing the job the software used to do." If agent tools can complete tasks end to end, investors start to question how durable the classic SaaS model really is — and whether seat-based pricing holds up when productivity rises and fewer seats are needed.

That is why the story is suddenly being told as an existential one for software companies. Not because every product disappears, but because moats get tested in a different way. The interface shifts. The workflow shifts. Value moves closer to outcomes, and away from features.

This AFR piece captures the same mood: fast-improving AI is making more software feel replicable, and investors are rethinking which moats are real.

For AI for schools, this matters because the same dynamic is playing out in education technology. Tools that are slow to adapt risk being overtaken by teams that can build fit-for-purpose solutions in hours.


The seduction of skipping the struggle

This Irish Examiner piece nails a feeling a lot of educators have: AI makes the answer frictionless, but learning is not just about the answer.

There is a scene in The Matrix where Trinity goes from "Not yet" to "Let's go" in seconds. That is the seduction of AI in education. Upload the skill. Skip the struggle. Move on.

The authors say — and I agree:

> "The emotions of learning are often challenging: frustration, impatience, self-doubt — the lived experience of inhabiting the space of 'not yet', trying to force the head to boldly go where it has not gone before."

Maybe the job now is not to upload more knowledge. Maybe it is to protect the space where students can still say "Not yet." For anyone thinking about AI for schools, this is a design question as much as a pedagogical one: how do we build tools that preserve productive struggle rather than engineering it away?


The 2026 International AI Safety Report and what it means for education

The 2026 International AI Safety Report offers one of the most comprehensive global overviews of how nations, institutions, and industries are approaching AI safety. While much of the focus is on technical risk and governance, the education sector features too.

AI is reshaping education, but the shift is raising new challenges alongside new opportunities:

  • AI tools are already embedded in curriculum planning, assessment, and tutoring. But their widespread use by students is disrupting traditional homework and exam models. Schools are rethinking how to uphold integrity in an AI-assisted world.
  • Policymakers are questioning early exposure. It is still unclear how much AI tools influence critical thinking, habits, or independence over the long term.
  • AI literacy is emerging as a public safety issue. Teaching people how to recognise and reason about AI-generated content is essential, but it is not a full solution. Even well-informed users are vulnerable to convincing misinformation.
  • Organisations are being encouraged to integrate AI training into governance frameworks. Education is seen as a core part of managing AI risks responsibly.
  • Workforce development is a priority. As job roles evolve, upskilling and lifelong learning programs are critical to keeping people informed, employable, and AI-aware.

For AI for schools, this report reinforces a clear message: safety, literacy, and governance are not optional extras. They are foundational.


Can machines be creative? New research on divergent thinking

New research in Scientific Reports benchmarked "divergent creativity" in LLMs against 100,000 humans, using the Divergent Association Task (DAT) plus creative writing tasks scored with the same automated metrics.

The DAT asks you to write ten single nouns that are as different in meaning and usage from each other as possible, then scores the semantic distance between them. Top models can beat the average human on this specific measure, and prompting can reliably push scores higher. But humans still dominate the top end.
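The scoring behind the DAT is mechanically simple: represent each noun as a vector and average the pairwise cosine distances across the list. Here is a minimal sketch of that idea, using toy hand-supplied vectors rather than the large pretrained word embeddings the published task relies on:

```python
import math
from itertools import combinations

def cosine_distance(u, v):
    """1 minus cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return 1.0 - dot / (norm_u * norm_v)

def dat_score(words, embeddings):
    """Mean pairwise cosine distance between the word vectors, scaled
    to 0-100: higher means the nouns are more semantically spread out."""
    vectors = [embeddings[w] for w in words]
    distances = [cosine_distance(u, v) for u, v in combinations(vectors, 2)]
    return 100.0 * sum(distances) / len(distances)
```

With orthogonal toy vectors for unrelated nouns, the score approaches 100; near-synonyms with similar vectors pull it toward 0. The real task scores ten nouns against embeddings trained on large text corpora, but the "spread in meaning" intuition is the same.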

The best model results at the time of the study still fell short of the mean of the more creative human cohorts. I would really like to see this retested with the latest models to see whether the trend is moving up.

For those of us working on AI for schools, this is a useful reminder. AI can augment creative processes, but the most original thinking still comes from humans — particularly when they have been taught to think divergently. That is a skill worth protecting in the curriculum.


Redesigning workflows: from human limitations to AI capabilities

Use AI as a "force multiplier" for what you do, not a replacement for what makes you you. (Marketoonist captures this beautifully.)

Most of our processes were built to manage human limitations. It is time to consider what they should look like when built around AI capabilities.

I have been reflecting on why our workflows look the way they do. When you deconstruct them, you realise most standard operating procedures are really risk mitigation, designed to stop mistakes from having an impact:

  • We delay decisions because effort is expensive — we do not want to waste time doing the wrong thing.
  • We add verification layers because we lose attention and make errors.
  • We write heavy documentation at hand-offs to bridge the gap between different skill sets.

But when AI starts performing these steps, the constraints change — and so could the process. We are moving from an era of scarcity (human time) to an era of context and speed. However, we are also swapping human fatigue for AI hallucinations.

Some ideas on how to redefine the workflow:

  1. Shift to "Context First." AI thrives on context. Instead of documenting at the end for a handoff, we need to document knowledge first. The earlier you feed the model the "why" and the "how," the better the output.
  2. Shift to "Parallel Exploration." AI is fast. We do not need to delay construction to save effort. We can generate five different prototypes in the time it used to take to plan one, allowing us to release value earlier.
  3. Shift the Guardrails. We no longer need to check for human tiredness or typos. The new risks are hallucination and confident inaccuracy.

We should not just slot AI into human-speed workflows. We need to redesign the flow around the speed — and the specific risks — of AI.

At the AI Hub, we are dedicating this year to experimenting with these changes to see what actually works and what does not. The implications for AI for schools are significant: if schools redesign their workflows around AI capabilities rather than just automating existing ones, the gains could be transformational.


The real question: what should learning look like when AI is everywhere?

AI in education is stuck in a loop. One side says AI will save teachers. The other says it will wreck learning. This Hechinger Report article argues both narratives miss the real question: what should learning look like when AI is everywhere?

It pushes us to stop treating efficiency as the goal, because "saving time" can easily just lock schools deeper into the same old factory model.

The real risk is not AI replacing the human parts of school. The risk is failing to define and protect what is most human — belonging, purpose, creativity, critical thinking, and connection — and then designing learning intentionally around those outcomes.


What does this mean for AI for schools?

If you connect the threads across this week, the direction of travel is clear:

  1. The gap between "what teams need" and "what platforms deliver" is widening. Schools and organisations that can build purpose-fit tools safely will move faster.
  2. Learning is not the answer — it is the struggle. AI tools in schools must be designed to preserve productive difficulty, not eliminate it.
  3. Safety and governance are becoming non-negotiable. The 2026 International AI Safety Report makes the case that AI literacy and responsible use frameworks are foundational, not optional.
  4. Workflows need redesigning, not just automating. The processes we inherited were built for human constraints. AI introduces new capabilities and new risks that demand a different shape.
  5. The goal is human outcomes, not efficiency. If AI for schools only helps schools do the old model faster, we miss the moment. The question is what we choose to protect and prioritise.