Global perspectives on AI adoption and the gap between use and capability
14 February 2026


Dan Hart


CEO, Co-Founder, CurricuLLM

From Mexico's high AI uptake to Australia's governance frameworks, education systems globally are wrestling with the same question: how do we close the gap between AI adoption and capability in schools?

This week has been about looking beyond our immediate context and learning from what is happening internationally. The conversation about AI for schools is remarkably similar across countries, but the responses are starting to diverge in interesting ways.


Mexico: high adoption, uncertain capability

Mexico has emerged as one of the highest AI uptake markets globally. Google and Ipsos report that 66 percent of Mexico's population uses AI, exceeding the global average. Among students over 18, that figure climbs to 85 percent. Demand for AI-related skills reportedly grew 148 percent from 2023 to 2025.

But high adoption does not mean deep capability. The same research shows many students are worried about overreliance, and many are not sure how to use AI well. That gap between adoption and capability is where the real risk sits.

As education technology commentator Erick Ramírez notes, we have been here before: one-device-per-child programs meant big spending and weak impact. AI for schools will follow the same path if it is treated as a standalone rollout instead of a curriculum and capability redesign.

AI will not fix inequality by default. But with good system design, it can reduce friction, expand access to feedback, and free teachers to spend more time on the parts of learning that are human.


AI is not reducing work — it is intensifying it

A Harvard Business Review study examined 40 workers at a tech company and found something many educators are also experiencing: AI is not giving people time back. Instead, they worked faster, took on a broader scope of tasks, and let work stretch into more hours of the day, often without being asked, because it felt easy and rewarding to keep going.

The researchers found that AI accelerated certain tasks, which raised expectations for speed. Higher speed made workers more reliant on AI, which widened the scope of tasks attempted. This further expanded the quantity and density of work.

While workers initially felt more productive, the intensified workload led to cognitive fatigue, burnout, weakened decision-making, and lower quality work over time.

This has direct implications for AI for schools. If we introduce AI tools without redesigning workflows and expectations, we risk intensifying teacher workload rather than reducing it. The goal should be to protect time for what matters most, not to fill every minute with faster output.


CurricuLLM in the media

IT Brief Australia covered our new Studio Mode release, highlighting the shift from chatbot-style interactions to file-based resource generation. Teachers do not need another tool that just talks back. They need something that produces.

Studio Mode combines teacher-created materials, built-in curriculum content for Australian and New Zealand curricula, and real-time student progress data to generate classroom resources including differentiated quizzes, levelled reading materials, marking guides, infographics, and reteach resources.

It is FREE for all teachers in Australia and New Zealand. If you have not tried it yet, get involved.


AI and multilingual education: performance drops in other languages

New research presented at ACL 2025 tested leading AI models on education tasks across multiple languages including Hindi, Arabic, Farsi, Telugu, Ukrainian, Czech, Mandarin, and German. The tasks included identifying student misconceptions, providing targeted feedback, interactive tutoring, and grading.

The findings are clear: when you test models on education tasks in other languages, performance drops compared with English. The drop is noticeable even for widely spoken languages. For lower resource languages, it can be severe, with far more errors and less consistency.

This highlights an equity risk. The models most likely to be deployed in lower income contexts are often smaller and cheaper. Those models tend to degrade more across languages, which means the gap widens for the people who most need reliable support.

CurricuLLM uses the same top performing models referenced in this research, and we are carefully monitoring multilingual performance as we expand beyond English-speaking markets.


Participatory governance: bringing educators and communities into AI policy

The Australian Public Policy Institute released a policy insights paper on governing AI in education, with contributions from Professor Kalervo Gulson and research highlighting platforms like EduChat.

AI for schools is moving faster than policy. The paper argues that policy responses have often been reactive, and that this makes it harder to steer AI toward the outcomes we actually want.

It also calls out a governance blind spot. If we rely mainly on technical expertise to identify harms, we miss critical insights from educators, students, and communities. The people closest to the impact often have the least say.

The research recommends operationalising local engagement to identify AI risks and benefits, establishing a permanent Advisory Council for AI in Education, and strengthening equity and inclusion in EdTech procurement.

It is a strong reminder that participatory approaches bring students, teachers, parents, and communities into policy design earlier and more often, rather than treating governance as a one-off document.


LEGO Education: foundations over FOMO

LEGO Education announced a new K-8 Computer Science and AI curriculum designed in partnership with MIT, Tufts University, and the Computer Science Teachers Association.

What stood out to me was their approach: foundations over FOMO. They are choosing to start with a clear definition of AI literacy, and building from real classroom needs, not the newest model release.

It sounds obvious, but it is the hard part. What should a Year 4 student actually understand about AI? What misconceptions will they pick up? What do teachers need to feel confident? And how do we align it to standards and real learning goals, not hype?

The curriculum includes 30 inquiry-based, standards-aligned lessons with both screen-free and digital components. Students engage with pretrained machine learning models and can train their own models to interact with LEGO hardware, building understanding of how AI technology works.

For those of us working on AI for schools, this is a useful reminder. Start with the foundations. Build from real needs. And prioritise understanding over novelty.


The thread tying this together

The global picture is clear: AI adoption in education is high, but capability is lagging. Systems are moving fast, but policy is struggling to keep up. Tools are proliferating, but equity gaps are widening.

What matters now is not just how many people are using AI in schools, but how well they are using it, and whether the systems we are building are designed to close gaps or widen them.

The countries and organisations that will succeed are the ones that treat AI for schools as a system design challenge, not a technology rollout. That means curriculum redesign, teacher capability, participatory governance, multilingual equity, and workflow rethinking.

We have been here before with EdTech. We know what does not work. The question is whether we can learn from it this time.
