Imagine this.
You find a stranger on the internet who seems nice. Other people say they’re “cool with kids”. One parent even gave them a doll through which they can talk to a four-year-old at home.
On that basis, you give them a class, shut the door, and hope for the best.
No safeguarding checks. No interview. No references. No DBS. No training. Just vibes.
It’s absurd when we describe it like that – but it’s uncomfortably close to how some schools are starting to “hire” AI.
We discover a shiny tool on social media, see a few clever outputs, and before long it’s helping write reports, design lessons, or even interact with students directly. Often with less scrutiny than we’d apply to a supply teacher covering a single lesson.
For me, AI in schools should always be viewed through a safeguarding lens. Not because AI is inherently evil – but because it is powerful, opaque, and trained on a mix of the best and worst of the internet. That combination demands the same disciplined thinking we already use when we bring any adult into contact with children.
Below is how I frame it when working with school leaders.
What AI actually is (in plain English)
We don’t need every teacher to become a machine learning engineer, but we do need enough shared understanding to make sensible decisions.
At its core, a modern AI model (like ChatGPT, Gemini, Claude, etc.) is:
- A giant stack of very simple “neurons” – small computational units that each take some inputs and produce an output.
- These neurons are connected in layers – a neural network.
- During training, the model is shown vast amounts of text (and now images, audio, code…) and repeatedly nudged to get better at predicting what comes next.
If you type:
“The cat sat on the …”
The model has learned that “mat” is a highly likely next word, but sometimes it chooses other reasonable options. That tiny bit of randomness is why the same prompt can give different answers each time – and why creativity and hallucination are two sides of the same coin.
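If it helps to see that in miniature, here is a toy sketch of “predict the next word, with a little randomness”. The words and probabilities are invented purely for illustration – a real model derives them from billions of learned parameters – but the sampling idea is the same.

```python
import random

# Toy next-word prediction. These probabilities are made up for the example;
# a real model computes them from its learned parameters.
next_word_probabilities = {
    "mat": 0.70,
    "sofa": 0.15,
    "floor": 0.10,
    "moon": 0.05,
}

def pick_next_word(probabilities):
    """Choose one word, weighted by how likely the model thinks it is."""
    words = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The cat sat on the"
for _ in range(5):
    print(prompt, pick_next_word(next_word_probabilities))

# Most runs end with "mat", but occasionally you get "sofa" or even "moon".
# The same mechanism that gives useful variety also produces hallucinations.
```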
Critically:
- No one “programmes in” understanding.
- No one can look at the middle of GPT-4 and say, “this neuron here is the ‘honesty’ neuron”.
- At scale (billions or trillions of parameters), we literally cannot inspect it all in a meaningful way.
The model doesn’t know things. It has learned patterns of text that sound like knowing.
That’s why hallucinations will never be completely eliminated. They can be reduced, mitigated, and redirected – but not removed.
If we accept that, the question for schools becomes:
Given that AI is powerful, opaque and fallible, how do we treat it responsibly around children and sensitive data?
My answer: the same way we treat adults – through safeguarding.
The safeguarding analogy: treat AI like a new member of staff
When a new teacher joins your school, you don’t simply hand them a timetable and a lanyard. You have a structured, layered approach to risk:
- Pre-employment checks
  - DBS / police checks
  - References
  - Employment history
  - Right to work
- Induction and training
  - Safeguarding and child protection
  - Data protection and acceptable use
  - Behaviour and assessment policies
- Ongoing oversight
  - Lesson observations and learning walks
  - Line management and coaching
  - Student and parent feedback
  - Performance reviews
- Boundaries and access
  - Clear expectations about communication with students
  - Restrictions on devices, platforms and channels
  - Escalation routes when concerns arise
We already know how to manage powerful, imperfect humans around children. The mistake is treating AI as a tool that either “works” or “doesn’t” and skipping all of this thinking.
Here’s how the analogy maps.
1. Vetting your AI: due diligence and data protection
What we do with staff:
- We confirm they are who they say they are.
- We check their history.
- We ensure they understand confidentiality.
What we should do with AI tools:
- Know exactly what you’re using
  “Free AI tool I found online” is not good enough. You need:
  - A named product
  - A clear privacy policy
  - A data processing agreement
  - A contactable company behind it
- Use education-appropriate licences
  For tools like ChatGPT or Gemini, that means:
  - Paying for plans where your data is not used to train public models
  - Having organisation-level controls, not dozens of individual logins
  - Ensuring staff know which accounts are “safe” and which are not
- Respect existing data policies
  A simple rule of thumb: if your policy says “this must not go on the internet”, it must also not go into a commercial AI. That includes:
  - Safeguarding records
  - Sensitive pastoral notes
  - Medical information
  - Anything that could identify a vulnerable child or family
- Consider local or “on-prem” models for sensitive use
  For many tasks (summarising policies, drafting letters, generating worksheets), a smaller, local model running on a school server or trusted platform is more than enough – and the data never leaves your environment.
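As a rough illustration of what “the data never leaves your environment” can look like in practice, here is a minimal sketch of sending a drafting task to a model hosted on the school’s own network. The server name is made up, and the endpoint assumes an Ollama-style local runtime – your IT team’s setup may well differ.

```python
import requests

# Hypothetical internal server running a small open-weights model.
# Nothing in this request travels beyond the school network.
LOCAL_AI_URL = "http://ai-server.school.internal:11434/api/generate"

def draft_letter(instructions: str) -> str:
    payload = {
        "model": "llama3",  # whichever local model your runtime has pulled
        "prompt": f"Draft a short, friendly letter home to parents. {instructions}",
        "stream": False,
    }
    response = requests.post(LOCAL_AI_URL, json=payload, timeout=120)
    response.raise_for_status()
    return response.json()["response"]

print(draft_letter("Remind families that school closes at 1pm on Friday."))
```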
This is your “DBS check” for AI. If you wouldn’t hire a teacher without it, don’t roll out an AI system without it either.
2. Classroom access: supervision, not unsupervised companionship
We wouldn’t hire a brand-new teacher and immediately give them:
- A closed room
- Your youngest pupils
- No induction
- No observations
- No idea what they’re actually saying
Yet that’s effectively what happens when we hand a primary-age pupil an AI “friend” on a tablet or buy an “AI companion” toy that talks to them over the internet.
We know from documented cases that AI systems can:
- Reinforce conspiracy thinking
- Encourage dangerous behaviour (including suicidal ideation or disordered eating)
- Echo harmful content they’ve absorbed during training
- Be manipulated via prompt injection to leak data or follow malicious instructions
So, the safeguarding principle must be:
No unsupervised, open-ended AI chat for children.
Instead:
- For younger pupils, AI should:
  - Be embedded inside tightly-scoped learning apps
  - Focus on narrow tasks (phonics, times tables, specific comprehension exercises)
  - Operate within clearly defined, curriculum-aligned boundaries
  - Be transparent to staff (monitorable logs, clear controls)
- For older pupils, when using general-purpose AI:
  - Do it in class, with a teacher present
  - Make critical evaluation part of the learning objective
  - Talk explicitly about hallucinations, bias and limitations
  - Set tasks where AI is a starting point, not the final product
Think of it as supervised placement. Would you let a trainee teacher run a class alone in week one? Probably not. So don’t let a model you barely understand “teach” your pupils unsupervised either.
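To picture what “tightly scoped, with monitorable logs” might look like under the hood, here is a minimal sketch. The chat_fn argument stands in for whichever approved model the school actually uses; the prompt wording and log format are illustrative only.

```python
import datetime
import json

# A fixed system prompt pins the model to one narrow, curriculum-aligned task,
# and every exchange is written to a log that staff can review.
SYSTEM_PROMPT = (
    "You are a times-tables practice helper for pupils aged 7 to 9. "
    "Only ask and check multiplication questions up to 12 x 12. "
    "If the pupil asks about anything else, reply: "
    "'Let's stick to times tables - ask your teacher about that one.'"
)

LOG_FILE = "ai_session_log.jsonl"

def scoped_reply(chat_fn, pupil_id: str, pupil_message: str) -> str:
    """chat_fn(system, user) is whatever approved model call the school uses."""
    reply = chat_fn(system=SYSTEM_PROMPT, user=pupil_message)
    with open(LOG_FILE, "a", encoding="utf-8") as log:
        log.write(json.dumps({
            "time": datetime.datetime.now().isoformat(),
            "pupil": pupil_id,
            "pupil_message": pupil_message,
            "ai_reply": reply,
        }) + "\n")
    return reply
```

The details will vary by platform; the point is that the scope lives in the system rather than in the pupil’s goodwill, and the logs give staff the same visibility over the chat that a learning walk gives over a lesson.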
3. Human in the loop: observation, moderation and professional judgement
When a new teacher joins, you:
- Drop into lessons
- Look at books
- Talk to students
- Offer coaching
You don’t assume that because they interviewed well, everything they say from that point onward is automatically safe and effective.
AI is no different.
Some practical ways to keep “a human in the loop”:
- Moderation workflows
  - Students can submit AI-generated work, but staff must review it before anything is published or shared externally.
  - AI-drafted emails or reports are always edited and signed off by a human.
- Gatekeeping for student queries
  A great example I’ve seen: students can send a question to an AI (via Gemini, for instance), but the response is first emailed to a teacher, who approves or rejects it before it goes back to the student. The AI never has direct, unmonitored access. A minimal sketch of this pattern follows this list.
- Clear roles for AI
  Decide explicitly:
  - What AI is allowed to do (first drafts, idea generation, re-phrasing, quiz creation).
  - What it is not allowed to do (set final grades, write safeguarding reports, give mental health advice, communicate 1:1 with pupils out of sight of staff).
- Culture of verification
  Build habits like:
  - “Trust, but verify” – especially for factual claims and citations.
  - Cross-checking statistics and laws with original sources.
  - Treating AI as “an eager but unreliable assistant”, not an oracle.
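Here is that gatekeeping pattern as a minimal sketch. The names are invented for illustration – ask_model and notify_teacher stand in for whichever AI service and notification route (email, dashboard, Teams) your school actually uses.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class PendingAnswer:
    student_id: str
    question: str
    ai_draft: str
    approved: bool = False

class GatedAIQueue:
    """Pupil questions go to the AI, but replies only reach pupils via a teacher."""

    def __init__(self, ask_model: Callable[[str], str],
                 notify_teacher: Callable[[PendingAnswer], None]):
        self.ask_model = ask_model
        self.notify_teacher = notify_teacher
        self.pending: List[PendingAnswer] = []

    def student_asks(self, student_id: str, question: str) -> None:
        # The AI drafts an answer, but it is routed to a teacher, not the pupil.
        draft = self.ask_model(question)
        item = PendingAnswer(student_id, question, draft)
        self.pending.append(item)
        self.notify_teacher(item)

    def teacher_reviews(self, item: PendingAnswer, approve: bool,
                        edited_text: Optional[str] = None) -> Optional[str]:
        # Only an approved (and optionally edited) answer is released to the pupil.
        item.approved = approve
        if not approve:
            return None
        return edited_text or item.ai_draft
```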
This is your equivalent of ongoing line management. AI doesn’t get a free pass just because it’s clever.
4. Training for staff, students and parents
We wouldn’t expect a new teacher to succeed without induction, and we shouldn’t expect AI to be used well without training the humans around it.
At minimum:
- Staff training should cover:
  - Basic understanding of how generative AI works
  - Strengths and limitations (including hallucinations and bias)
  - Data protection implications and approved tools/accounts
  - Practical classroom strategies and example workflows
- Student education should cover:
  - What AI is and isn’t (pattern prediction, not “thinking”)
  - How to write good prompts and evaluate responses critically
  - Why they must never share personal or sensitive information
  - How AI-generated work interacts with academic honesty policies
- Parents should hear:
  - What tools the school is using and why
  - How pupil data is being protected
  - How they can support healthy, critical use at home
  - Why AI “friends” and unsupervised chatbots are a safeguarding concern
If we don’t provide this, we push families towards random apps and TikTok advice – exactly where we have least visibility and control.
5. A simple rule for school leaders
If you’ve skimmed everything else, here is the short version I use when talking to SLT:
Treat AI like a powerful, slightly unpredictable new member of staff.
- Vet it.
- Train it (and the people using it).
- Supervise it.
- Limit what it has access to.
- Keep a human in the loop.
Do that, and you can get extraordinary value from AI while staying aligned with your safeguarding duties.
Skip those steps, and you’re essentially putting an unvetted stranger in front of your children because “other people online said they were cool”.
As a profession, we already know how to do this properly. Safeguarding isn’t a separate conversation from AI – it’s the framework that should shape the whole thing.
