What makes for a good AI Literacy framework?
Reviewing the landscape and sharing our approach

WAO is currently working on a project with the Responsible Innovation Centre for Public Media Futures (RIC), hosted by the BBC. Our focus is on AI Literacy for 14–19 year olds, and you can read more context in our project kick-off blog post.
One of the deliverables for this project is to review AI Literacy frameworks, with a view to either making a recommendation or coming up with a new one. It’s not just a case of choosing one with a pretty diagram!
Frameworks are a way for an individual or organisation to indicate what is worth paying attention to in a given situation. Just as the definition of ‘AI Literacy’ varies by context, the usefulness of a framework depends on the situation. In this post, we explain the judgements we made using criteria we developed, and share our process in case it is useful for your own work.
Narrowing down the list
While there can be some commonality and overlap between frameworks for different contexts, the diversity of possible situations is huge. There can never be a single ‘perfect’ framework suitable for every situation. For example, just imagine what ‘AI Literacy’ might look like for (adult) engineers and developers compared with children of primary school age. As with our work at Mozilla, you can define what a ‘map’ of new literacies might look like, but it can only ever be one of many that describe the overall ‘territory’.
With our work on this project, we had to bear in mind our audience (14–19 year olds) and the mission of the BBC. There is a long history of Critical Media Literacy which is particularly relevant to our research here, and which was one of the factors we considered when reviewing frameworks.
With a relatively short project timeline of three months, we needed a way to quickly classify the approximately forty frameworks and related documents we had collected. We shared relevant details with Perplexity AI (using the Claude 3.7 Sonnet model) over multiple conversations, which helped us reduce the initial list to a more manageable 25.
Coming up with criteria
Next, we came up with some criteria by which to judge them. These criteria were informed by our 15+ years of work in this area, along with interviews and surveys with over 35 experts in the field. While these criteria are meant as a heuristic for this project, they are also a useful starting point for asking questions about any project relating to new literacies.
- Definition of AI — ensures everyone has the same starting point
- Development process — adds transparency and credibility
- Target audience — helps match the framework to its users
- Real-world relevance — shows how ideas work in practice
- AI safety and ethics — addresses both risks and responsible use
- Skills and competencies listed — clarifies what learners should be able to do
- Reputable source — increases trust in the framework
We included both safety and ethics because both are needed for using AI in a responsible and trustworthy way.
Categorising the most relevant frameworks
We used a traffic light (red/yellow/green) categorisation system to score each framework on the above criteria. Only one of the frameworks we reviewed, the OECD Framework for Classifying AI Systems, meets all criteria with a ‘green’ rating.
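To make the categorisation concrete, here is a minimal sketch of how such a traffic-light tally could be recorded and grouped. The criteria names come from the list above; the helper function and groupings are purely illustrative, and the only rating taken from the post is that the OECD framework scored ‘green’ across the board.

```python
# Illustrative sketch only: one way to record traffic-light ratings per framework.
# Criteria are taken from the list above; the groupings mirror those in this post.

CRITERIA = [
    "Definition of AI",
    "Development process",
    "Target audience",
    "Real-world relevance",
    "AI safety and ethics",
    "Skills and competencies listed",
    "Reputable source",
]

def categorise(ratings: dict[str, str]) -> str:
    """Group a framework by how many criteria fall short of 'green'."""
    not_green = sum(1 for c in CRITERIA if ratings.get(c, "red") != "green")
    if not_green == 0:
        return "meets all criteria as 'green'"
    if not_green == 1:
        return "'green' except for one criterion"
    return "two or more criteria below 'green'"

# Example: the OECD framework met every criterion with a 'green' rating.
oecd = {criterion: "green" for criterion in CRITERIA}
print(categorise(oecd))  # -> meets all criteria as 'green'
```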
There are several other frameworks which we judged ‘green’ on every criterion except one, where they scored ‘yellow’. Listed alphabetically by organisation, these are:
- Artificial Intelligence in Education (Digital Promise)
- AI Literacy in Teaching and Learning: A Durable Framework for Higher Education (EDUCAUSE)
- Digital Competence Framework for Citizens (European Commission)
- Developing AI Literacy With People Who Have Low Or No Digital Skills (Good Things Foundation)
- AI competency framework for students (UNESCO)
We have also included other frameworks which scored ‘yellow’ on two or more criteria. For example, the Open University’s Critical AI Literacy Framework, Ng et al.’s article, and Prof. Maha Bali’s blog post linked from her framework all do a good job of defining Critical AI Literacies. We would also note that the Digital Education Council’s list of skills and competencies relating to AI Literacy pairs usefully with those from EDUCAUSE, UNESCO, and the European Commission.
Next steps
As mentioned earlier, our brief for this project involves either making an informed recommendation of an existing framework or coming up with our own. We’re currently leaning toward the latter, but either choice will be the subject of a future blog post.
If you have questions, concerns, comments, or a resource you think would be particularly useful for this project, please do get in touch. You can leave a comment here, or use the contact details on our main website.