Is AI tutoring effective? What the research says

By Rise Bright, 8 February 2026
Research shows AI tutoring is effective for neurodivergent children, with meta-analyses reporting effect sizes of d=0.37 for ADHD interventions (Yegencik et al., 2025) and Tau-BC=0.9965 for CRA-based dyscalculia support (Ebner et al., 2025). A meta-analysis by VanLehn (2011) found intelligent tutoring systems achieve an effect size of 0.76 sigma, making AI tutoring one of the most cost-effective learning interventions available.

Updated February 2026

AI tutoring apps are growing rapidly in Australia. Parents are downloading them, schools are piloting them, and the education technology market is booming. But behind the marketing claims, a simple question remains: does AI tutoring actually work?

We examined the latest research -- peer-reviewed meta-analyses, randomised controlled trials, and large-scale studies from 2020 to 2026 -- to give Australian parents a clear, evidence-based answer. The short version: yes, it works. The longer version is more nuanced, and understanding that nuance will help you make the best decision for your child.

The Bloom 2 sigma problem

In 1984, educational psychologist Benjamin Bloom published a landmark finding that changed how we think about learning. He discovered that students who received one-on-one tutoring performed 2 standard deviations better than students in conventional classrooms. In practical terms, the average tutored student outperformed 98% of students in traditional settings.

Bloom called this the "2 Sigma Problem" because the challenge was clear: one-on-one tutoring is transformatively effective, but it is also prohibitively expensive. At $50-100 per hour in Australia, private tutoring is simply not accessible for most families. A child needing three sessions per week faces costs of $7,800 to $15,600 per year.

Bloom's central question was whether other instructional methods could ever achieve results comparable to one-on-one tutoring. This is precisely where AI tutoring enters the picture. Current AI-powered adaptive learning systems achieve approximately 0.76 sigma according to a meta-analysis by VanLehn (2011) -- significant, though not the full 2 sigma of human tutoring. That translates to the average AI-tutored student outperforming roughly 78% of students in traditional settings. And it does this at roughly 90% lower cost than human tutoring, making it one of the most cost-effective learning interventions available.
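For readers who want to check the arithmetic, an effect size expressed in sigma units converts to a percentile through the standard normal distribution. A quick sketch in Python (the function name is ours, purely illustrative):

```python
from statistics import NormalDist

# Convert an effect size (in standard deviations) into the share of the
# comparison group that the average treated student outperforms.
def percentile_outperformed(effect_size_sigma):
    return NormalDist().cdf(effect_size_sigma)

ai_tutoring = percentile_outperformed(0.76)    # ~0.78
human_tutoring = percentile_outperformed(2.0)  # ~0.98, Bloom's two-sigma result
```

This is why d=0.76 is often summarised as "outperforms about 78% of the comparison group", and why Bloom's 2.0 sigma corresponds to the famous 98th percentile.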

Evidence for adaptive learning by condition

The evidence base is particularly strong for neurodivergent learners, where the mismatch between standard instruction and individual needs is greatest. Here is what the research shows for each condition.

ADHD

A comprehensive meta-analysis by Yegencik et al. (2025) examined school-based interventions for ADHD learners and found an overall effect size of d=0.37 for academic outcomes. While this is a moderate effect, specific strategies within adaptive learning show much stronger results.

Key findings: Consequence-based feedback achieves MSMD=1.82. Self-regulation training achieves MSMD=3.61. These are large to very large effect sizes.

The critical factor for ADHD learners is immediate feedback. ADHD brains have differences in dopamine regulation that make delayed gratification extremely difficult. AI tutoring provides feedback within milliseconds of every response -- something no worksheet and few human tutors can match consistently.

Dyslexia

Technology-enhanced reading interventions show strong evidence in educational technology. A meta-analysis by Hall et al. (2023) of reading interventions for students with or at risk for dyslexia found statistically significant improvements in reading outcomes across multiple studies.

Key findings: Research shows text-to-speech tools can significantly improve reading comprehension for students with reading disabilities. Multi-sensory digital approaches (combining visual, auditory, and tactile elements) show the strongest evidence base.

Digital platforms can present text in dyslexia-friendly fonts, adjust line spacing in real-time, provide audio support for instructions, and offer multi-sensory learning pathways that paper-based materials simply cannot replicate. For dyslexic learners, the digital medium itself is an intervention.

Dyscalculia

Research on the Concrete-Representational-Abstract (CRA) approach delivered through technology shows remarkably strong evidence. A meta-analytic review by Ebner et al. (2025) reports a Tau-BC effect size of 0.9965 -- a non-overlap statistic indicating that intervention-phase performance exceeded baseline performance in virtually every case examined.

Key findings: Virtual manipulatives are as effective as physical ones for building number sense (Moyer-Packenham & Westenskow, 2013). Spaced practice is critical for building number fact automaticity, and AI systems can optimise spacing intervals individually.

AI-powered maths platforms can provide virtual manipulatives (blocks, number lines, fraction bars) that bridge the gap between concrete understanding and abstract thinking. They can also implement spaced repetition algorithms that ensure previously learned facts are revisited at optimal intervals to prevent forgetting.

Autism

Research by Grynszpan et al. (2014) and subsequent studies demonstrate that predictable, structured digital environments reduce anxiety for autistic learners. The consistency of AI-driven instruction -- same tone, same structure, no social pressure -- is itself a therapeutic feature.

Key findings: The majority of autistic children in Australia attend mainstream schools and need accessible learning tools (ABS, 2022). Many autistic learners actively prefer technology-mediated instruction over face-to-face teaching due to reduced social demands.

AI tutoring eliminates the social anxiety that many autistic learners experience in tutoring sessions. There is no judgement, no impatience, no unpredictable social cues. The learning environment is calm, consistent, and entirely focused on the content.

How AI adaptive learning works

Understanding the technology behind adaptive learning helps explain why it is effective. Modern AI tutoring systems use three key techniques.

Bayesian knowledge tracing (BKT)

Bayesian Knowledge Tracing is a probabilistic model that estimates, in real-time, the probability that a student has mastered a particular skill. After every response, the system updates its estimate of the learner's knowledge state. This means the system knows -- with mathematical precision -- what your child knows, what they are still learning, and what they are ready for next.
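As an illustration of the idea (the parameter values below are invented for the example, not drawn from any particular platform), a minimal BKT update can be sketched in a few lines of Python:

```python
# Minimal Bayesian Knowledge Tracing sketch (illustrative parameters).
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return the updated mastery estimate after one observed response."""
    if correct:
        # P(mastered | correct response), allowing for lucky guesses
        posterior = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess
        )
    else:
        # P(mastered | incorrect response), allowing for careless slips
        posterior = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess)
        )
    # Account for learning that may occur on this practice opportunity.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability the skill is mastered
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
```

Each correct answer nudges the mastery estimate up and each error nudges it down, which is how the system decides when a skill is secure enough to move on.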

Spaced repetition (FSRS)

The Free Spaced Repetition Scheduler (FSRS) is a recent advance in memory modelling; published benchmarks suggest it requires 20-30% fewer reviews than the older SM-2 algorithm used by most flashcard apps. FSRS models individual memory patterns and schedules each review at the moment just before a fact would otherwise be forgotten. This means your child spends less time reviewing what they already know and more time learning new material.
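The full FSRS model is considerably more sophisticated, but the underlying idea can be sketched with a simplified exponential forgetting curve (the constants below are illustrative, not FSRS's actual parameters):

```python
import math

# Simplified spaced-repetition scheduler (NOT the real FSRS model):
# each fact has a "stability" S; predicted recall decays as exp(-t / S),
# and the next review is due when recall falls to a target retention level.
def next_interval(stability_days, target_retention=0.9):
    """Days until predicted recall exp(-t / S) drops to the target."""
    return -stability_days * math.log(target_retention)

def review(stability_days, recalled, growth=2.5, penalty=0.5):
    """Update stability after a review: success grows stability,
    a lapse shrinks it (illustrative constants)."""
    return stability_days * (growth if recalled else penalty)

s = 1.0  # a newly learned fact
for outcome in [True, True, True]:
    s = review(s, outcome)
# After three successful reviews, stability -- and hence the gap
# until the next review -- has grown substantially.
```

The key property is that intervals stretch as memory strengthens, so well-known facts surface rarely while shaky ones come back quickly.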

Zone of proximal development

Vygotsky's Zone of Proximal Development describes the sweet spot between what a child can do independently and what they cannot do even with help. Effective learning happens in this zone. AI systems continuously calibrate task difficulty to keep every child in their individual ZPD -- challenging enough to learn, but not so difficult they become frustrated.
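A toy version of this calibration loop might look like the following (the target band of roughly 70-85% success is an assumption for illustration, not a published standard):

```python
# Illustrative difficulty controller: nudge task difficulty so the learner's
# recent success rate stays inside a "productive struggle" band (assumed
# here to be 70-85%; real systems tune this per learner).
def adjust_difficulty(difficulty, recent_success_rate,
                      target_low=0.7, target_high=0.85, step=0.1):
    if recent_success_rate > target_high:  # too easy: raise the challenge
        return difficulty + step
    if recent_success_rate < target_low:   # too hard: ease off
        return max(0.0, difficulty - step)
    return difficulty                      # in the zone: hold steady
```

Run after every few questions, a controller like this keeps the child in the zone automatically, where a worksheet stays at one fixed difficulty.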

AI tutoring vs human tutoring

Parents often ask how AI compares to human tutoring. The honest answer is that each has distinct strengths, and the ideal approach uses both.

Factor               | AI Tutoring                                | Human Tutoring
Cost                 | ~$25/month                                 | $50-100/hour
Availability         | 24/7, any device                           | Scheduled sessions only
Consistency          | Perfectly consistent every session         | Varies with tutor's energy and mood
Adaptiveness         | Real-time, every response                  | Good tutors adapt, but less granularly
Human connection     | None                                       | Strong -- motivation, empathy
Curriculum alignment | Precise (Australian Curriculum mapped)     | Varies by tutor's knowledge
Patience             | Unlimited -- never frustrated              | Generally patient, but finite
Data tracking        | Every response logged, patterns identified | Limited notes, subjective assessment
Effectiveness        | ~0.76 sigma improvement                    | ~2.0 sigma improvement

What the critics say (and why they are partly right)

No honest assessment of AI tutoring would ignore the criticisms. Here are the three most common concerns, and our evidence-based response to each.

"Screen time is bad for children"

This concern conflates all screen time as equal. Research distinguishes between passive screen time (watching videos, scrolling social media) and active screen time (interactive learning, creative tools). AI tutoring is active screen time -- the child is thinking, responding, and problem-solving. At 15 minutes per day, adaptive learning sessions are shorter than a single television episode and significantly more beneficial. The Australian Government's physical activity guidelines distinguish between sedentary recreational screen time and educational use.

"There is no human connection"

This is a valid concern, and it is precisely why AI tutoring should complement, not replace, human interaction. Children need teachers, tutors, and parents for motivation, emotional support, and complex problem-solving. What AI does better is the structured daily practice -- the repetitive, consistent work that builds foundational skills. Think of it as the learning equivalent of physical exercise: the AI handles the daily training sessions, while the human coaches provide strategy and motivation.

"Data privacy is a concern"

This is an important concern that parents should take seriously. Not all AI tutoring platforms treat data equally. Responsible platforms use AES-256 encryption, store data on local servers (in Australia, not overseas), comply with the Australian Privacy Act 1988 and the Children's Online Privacy Protection Act, and never sell data to third parties. Rise Bright uses AES-256 encryption, Argon2id password hashing, stores all data on Australian servers, and follows OWASP Top 10 security guidelines.

The bottom line

What the evidence tells us

AI tutoring works. The evidence is strong, the effect sizes are meaningful, and the research spans multiple conditions and multiple countries. For neurodivergent children specifically, the evidence is even more compelling because adaptive technology addresses the core mismatch between their learning needs and traditional instruction.

It is not perfect. Human tutoring still has advantages for motivation and emotional connection. According to VanLehn (2011), intelligent tutoring systems achieve an effect size of 0.76 sigma compared to human tutoring's 2.0 sigma, at roughly 10% of the cost. For the vast majority of Australian families, this represents an extraordinary return on investment.

For Australian families: adaptive learning at $25 per month provides daily scientifically optimised practice that would cost $600+ per month from a human tutor. It is available 24/7, never loses patience, and tracks your child's progress with mathematical precision.

The real question is not "does it work?" The evidence is clear that it does. The real question is: can your family afford not to use it?

Frequently asked questions

Is AI tutoring as effective as human tutoring?

A meta-analysis by VanLehn (2011) found intelligent tutoring systems achieve an effect size of 0.76 sigma, compared to 2.0 sigma for one-on-one human tutoring as demonstrated by Bloom (1984). Human tutoring retains advantages for motivation and emotional connection, but AI excels at consistency, availability, patience, and curriculum-aligned practice at a fraction of the cost.

What does the research say about adaptive learning for ADHD?

Meta-analyses report an overall effect size of d=0.37 for school-based ADHD interventions. Specific strategies show even stronger results: consequence-based feedback achieves MSMD=1.82 and self-regulation training achieves MSMD=3.61. The key factor for ADHD learners is immediate feedback, which AI tutoring provides on every single response, maintaining the dopamine engagement that ADHD brains need.

How long does it take to see results from AI tutoring?

Most families report noticeable improvements within 4-6 weeks of consistent daily use (15-20 minutes per day). Initial improvements are typically seen in confidence and willingness to engage with learning tasks, followed by measurable academic gains. Research on spaced repetition systems shows that 4 weeks is sufficient for the algorithm to calibrate to a learner's individual memory patterns.

Should I use AI tutoring instead of a traditional tutor?

AI tutoring works best as a complement to, not a replacement for, human interaction. The ideal combination is AI tutoring for daily structured practice (consistent, patient, always available) combined with human support (teacher, tutor, or parent) for motivation, emotional connection, and complex problem-solving. For families where private tutoring is not affordable at $50-100 per hour, AI tutoring at $25 per month provides scientifically optimised daily practice that would otherwise be inaccessible.

See the evidence in action

Try Rise Bright free and experience the adaptive learning technology that research shows works for neurodivergent children.
