Digital Natives Don’t Exist: Why Our Safeguarding Myths Are Failing a Generation

How schools must move from control to stewardship in an era where AI companions shape identity, relationships, and emotional development

We have spent years working against the odds to make the internet safe for children in school. Filtering their access, monitoring their clicks, building ever-higher walls around their digital experience. Yet while we constructed these barriers, something profound shifted beneath our feet: artificial intelligence stopped being a tool and became a companion.

The question facing education leaders today is not whether AI poses risks; that much is evident. The question is whether our inherited frameworks for digital wellbeing and safeguarding can capture, monitor, and protect against what is actually happening in young people’s lives. Spoiler alert: they cannot.

The Invisible Architecture of Digital Childhood

Consider this: over seventy per cent of ChatGPT usage has nothing to do with productivity or study. Instead, people (disproportionately young people) are turning to AI for personal counsel, for help navigating social dilemmas, and for support in processing relationship conflicts. Nearly a quarter of young adults now view AI partners as legitimate alternatives to human connection.

Half of UK children aged seven to seventeen have used Snapchat’s My AI, making it the most widely adopted generative AI tool for this demographic. Among teenagers, usage rises to seventy-two per cent, with teen girls leading adoption. These are not occasional interactions. These are formative digital experiences happening entirely outside adult supervision, educational scaffolding, or ethical guidance.

Far from being a quirky technological footnote, this is a fundamental shift in how identity, intimacy, and emotional development are mediated. The digital landscape our students inhabit is fundamentally different from anything previous generations experienced. It is algorithmically curated, behaviourally adaptive, and emotionally responsive in ways designed to maximise engagement rather than wellbeing. Where once we worried about stranger danger and inappropriate contact, we now face something more insidious: synthetic intimacy that mimics care, validates endlessly, and asks nothing in return except our most intimately human gift: our presence.

From Sovereignty to Compliance Fatigue

The erosion of digital sovereignty (the capacity to make meaningful choices about one’s data, privacy, and digital participation) has been so gradual that we barely register it any more. Click-wrap agreements, incomprehensible privacy policies, and deliberately exhausting consent mechanisms have trained an entire generation to simply click ‘Accept’ without reading, without understanding, without truly choosing.

This is not accidental. It is profitable.

Young people experience this sovereignty erosion acutely. They inherit a digital ecosystem built on surveillance capitalism, where attention is currency and behavioural data fuels billion-pound industries. The fatigue they feel around digital consent is not apathy; it is rational resignation in the face of deliberately overwhelming complexity. When we ask schools to address digital wellbeing through filtering and monitoring alone, we miss this fundamental dynamic. We cannot filter our way out of algorithmic manipulation. We cannot monitor our way past manufactured emotional dependency.

The Myth That Will Not Die

Perhaps the most persistent barrier to effective digital education is the myth of the digital native: the notion that young people possess innate technological fluency simply by virtue of their birth year. This concept, coined in 2001 to describe people born after 1980, has outlived any usefulness it might once have claimed. It is now actively harmful. It suggests that digital competence is inherited rather than taught, that wisdom about technology arrives automatically with youth, that schools need not cultivate digital discernment because young people already possess it.

They do not.

Growing up with smartphones does not confer understanding of algorithmic bias. Fluency with TikTok does not translate to ethical reasoning about synthetic relationships. The ability to navigate Instagram provides no foundation for recognising when AI companions deploy psychological techniques designed to foster emotional dependency.

Digital capability, like any capability, must be cultivated. It requires explicit teaching, structured reflection, and developmentally appropriate progression. Until we abandon the digital native mythology, we will continue to abdicate responsibility for this teaching at precisely the moment when young people need it most.

Beyond the Binary: Intelligence and Wisdom

The challenge before us ultimately goes beyond mitigating risk or maximising opportunity. It is about something more fundamental: the difference between intelligence and wisdom.

AI excels at intelligence (pattern recognition, information synthesis, rapid response). What it cannot offer, and what young people desperately need to develop, is wisdom: judgement exercised in service of human flourishing, discernment rooted in values, ethical reasoning that transcends optimisation.

When a fourteen-year-old seeks advice from Snapchat’s My AI about friendship conflicts, or when a sixteen-year-old develops emotional attachment to a Replika companion that offers unconditional validation, they are not using technology ineffectively. They are being shaped by systems designed to maximise engagement rather than cultivate character.

This shaping happens in micro-moments, through thousands of tiny algorithmic nudges that collectively construct a worldview. A recommendation here, a validation there, an endless stream of content perfectly calibrated to maintain attention. By the time traditional safeguarding flags are triggered (if they ever are), the formative work has already been done.

Stewardship Over Control

What might a different approach look like?

First, it requires acknowledging that filtering and monitoring, while necessary, are insufficient. The most significant risks young people face are not the ones that trigger content flags. They are the cumulative, invisible influences that shape identity, relationships, and emotional development over time.

Second, it demands curricular innovation. Digital ethics, algorithmic literacy, and emotional discernment cannot remain peripheral topics addressed in occasional assemblies or PSHE lessons. They must become woven throughout the educational experience, connected to character education, moral philosophy, and practical wisdom.

Some schools are already pioneering this work. Rugby School has created a dedicated course on AI and society for Key Stage 4 students, exploring not just how AI works but what it means for justice, democracy, relationships, and selfhood. This is education for participation in an algorithmically mediated world, and, importantly, it does not sit within the Computer Science curriculum.

Third, it requires moving from a posture of control to one of stewardship. Rather than attempting to manage every digital interaction, schools might instead focus on cultivating the judgement young people need to navigate digital environments wisely when adults are not present (which is, increasingly, most of the time).

This means teaching children not to lie about their age for app access, even when everyone else does. It means exploring synthetic intimacy not with panic but with curiosity about what needs it meets and what human capacities it might diminish. It means acknowledging that affluent young people with fast connections and powerful devices may face greater risks than their less-privileged peers, and adjusting safeguarding approaches accordingly.

The Road Ahead

We face a choice about what kind of digital wellbeing framework serves young people in an age of AI companions, algorithmic curation, and synthetic intimacy.

We can continue down the path of ever-more-sophisticated filtering, treating young people as passive recipients of digital experiences that must be carefully controlled. Or we can acknowledge that the most important safeguarding work happens not in network logs but in character formation: in cultivating the wisdom, discernment, and sovereignty young people need to make ethical choices in digital environments that will only grow more complex.

This is not an argument against filtering or monitoring. It is an argument for recognising their limits and investing at least equal energy in the harder, slower work of education for digital wisdom.

The question that lingers: Are we preparing the road for the child, or preparing the child for the road they are already travelling?

Join Laura for her presentation “Designing Digital Childhood: AI, Education, and Ethical Agency” at #OEB25.

Written for OEB 2025 by Laura Knight.
