Gaps in AI Literacy provision
A values-based list of barriers faced by 14–19 year olds in the UK
In this post, we're continuing to share outputs from a project we're working on with the Responsible Innovation Centre for Public Media Futures (RIC), hosted by the BBC. We’ve already published:
- What does AI Literacy look like for young people aged 14–19?
- What makes for a good AI Literacy framework?
- Core Values for AI Literacy
- AI Literacy or AI Literacies?

This project has involved both desk research and input from experts in the form of a survey, individual interviews, and a roundtable which we hosted a couple of weeks ago. One area we wanted to ensure we understood was gaps in existing provision around AI Literacies for young people.
The gaps we identified focus on the 14–19 age range in the UK and span a wide range of areas and themes. We have organised and summarised them around the core values identified in a previous post.
The gaps reflect a pattern seen across education, media, and wider society: provision is uneven. It is often shaped by short-term thinking and competing interests. Overall, it is limited by a lack of clear leadership or coordination.
Unfortunately, many interventions around AI Literacies are focused on technical skills or compliance. These do not connect with young people’s real interests or lived experiences, nor do they address the deeper ethical, social, and cultural questions raised by AI.
As a result, many learners, especially those already facing disadvantage, are left with fragmented support and few opportunities to develop genuine agency or critical judgement.

Human Agency and Informed Participation
- Lack of systemic, rights-based frameworks: There is little structured provision to help young people shape, question, or influence AI, with most education focused on adapting to technology rather than encouraging agency or clarifying institutional responsibilities.
- Dominance of industry narratives: Commercial interests and tech industry funding often drive the agenda, narrowing the conversation and limiting opportunities for young people to challenge prevailing narratives or understand the political dimensions of AI.
- Insufficient progression and curriculum integration: There is no standardised, dynamic curriculum or progression framework for AI Literacies, especially for post-16 learners, and limited integration across subjects beyond computing or digital studies.
- Teacher confidence and support gaps: Many teachers lack confidence, training, and adaptable resources to support the development of AI Literacies, resulting in inconsistent, sometimes contradictory, messaging and limited support for critical engagement.
- Disconnect between knowledge and action: Awareness of AI bias, manipulation, or power structures does not reliably translate into agency or behavioural change, with motivation and broader social context often overlooked.
Equity, Diversity, and Inclusion
- Persistent digital and social divides: Access to tools and resources to develop AI Literacies is highly unequal, shaped by school policies, family resources, and broader digital divides, with privileged students often able to bypass restrictions.
- Lack of cultural and global adaptation: Most resources are developed in the global north and do not reflect the needs or realities of diverse cultural, socioeconomic, or linguistic backgrounds, including those in the global south.
- Barriers for marginalised groups: AI tools and resources can disadvantage non-native English speakers, students with disabilities, and those with limited digital access, reinforcing existing inequalities.
- Neglect of visual and multimodal literacy: There is insufficient focus on images, deepfakes, and multimodal content, despite their growing importance for misinformation and manipulation.
- Resource design and authenticity: Overly polished, anthropomorphised, or inaccessible resources can alienate young people; there is a need to co-design authentic, relatable, and context-driven materials with young people from a range of backgrounds, reflecting their lived experiences.
Creativity, Participation, and Lifelong Learning
- Short-termism and lack of sustainability: Funding and interventions are often short-lived, with little focus on long-term, joined-up strategies or progression frameworks.
- Imbalance between creativity and consumption: Most young people are consumers, not creators, of AI content; there is insufficient emphasis on participatory, creative, and hands-on engagement with AI.
- Restrictive and risk-averse policies: Overly strict restrictions on access to AI tools in schools can limit meaningful learning opportunities and create anxiety or underground use.
- Missed opportunities for experiential and peer learning: There is underuse of hands-on, constructionist, and peer-led approaches, which are effective for this age group and for a rapidly evolving field like AI.
- Failure to address entrenched digital habits: Many interventions come too late to shift established digital habits; young people may have high digital skill but lack guidance on purposeful, critical, or participatory use.
Critical Thinking and Responsible Use
- Overemphasis on technical skills: Current provision is skewed towards prompt engineering and functional tool use, with insufficient attention to understanding different kinds of AI, ethical reasoning, systemic impacts, and critical engagement.
- Insufficient ethical, environmental, and societal focus: Real-world harms, environmental costs, and the broader impact of AI are rarely discussed, leaving gaps in understanding responsible use.
- Media, information, and algorithmic literacy gaps: Young people struggle to understand how data shapes AI outputs, how to assess real versus fake content (including deepfakes), and how to evaluate, challenge, or seek redress for algorithmic decisions or AI-generated content.
- Anthropomorphism and mental models: Many young people, particularly younger teens, misattribute human-like qualities to AI, affecting their critical judgement and ability to interrogate outputs.
- Lack of robust assessment and evidence: There is a shortage of baseline data on AI literacy levels and limited frameworks for evaluating the effectiveness and impact of interventions, especially in terms of behavioural change.
Upholding Human Rights and Wellbeing
- Disconnection from youth interests and lived experience: AI Literacy resources often fail to connect to young people’s real interests (creativity, sports, mental health), focusing instead on employability or compliance.
- Socio-emotional and privacy risks: Young people may use AI for companionship or advice, sharing sensitive information without understanding privacy or data risks; frameworks rarely address identity, trust, or changing markers of adulthood.
- Confusion and inconsistency in terminology: There is no consensus on what “AI literacy” means, and inconsistent definitions can intimidate learners or place excessive responsibility on individuals.
- Unclear responsibility and leadership: It remains unclear who should lead on the development of AI Literacies. Schools, parents, government, industry, and third sector bodies all have a role to play, but the current situation leads to fragmented provision and a lack of accountability.
- Neglect of digital relationships and boundaries: The role of AI as an “invisible third party” in relationships, and the shifting boundaries of privacy and identity, are rarely addressed in current resources.

Next up
We’re still finalising our framework for AI Literacies and will be sharing it soon. Meanwhile, you can follow our work on this topic so far at https://ailiteracy.fyi.
Please do get in touch if you have projects and programmes that can benefit from our experience and expertise in education and technology!
Acknowledgements
The following people have willingly given up their time to provide invaluable input to this project:
Jonathan Baggaley, Prof Maha Bali, Dr Helen Beetham, Dr Miles Berry, Prof. Oli Buckley, Prof. Geoff Cox, Dr Rob Farrow, Natalie Foos, Leon Furze, Ben Garside, Dr Daniel Gooch, Dr Brenna Clarke Gray, Dr Angela Gunder, Katie Heard, Prof. Wayne Holmes, Sarah Horrocks, Barry Joseph, Al Kingsley MBE, Dr Joe Lindley, Prof. Sonia Livingstone, Chris Loveday, Prof. Ewa Luger, Cliff Manning, Dr Konstantina Martzoukou, Prof. Julian McDougall, Prof. Gina Neff, Dr Nicola Pallitt, Rik Panganiban, Dr Gianfranco Polizzi, Dr Francine Ryan, Renate Samson, Anne-Marie Scott, Dr Cat Scutt MBE, Dr Sue Sentance, Vicki Shotbolt, Bill Thompson, Christian Turton, Dr Marc Watkins, Audrey Watters, Prof. Simeon Yates, Rebecca Yeager