How to Build an AI Literacy Programme That Actually Works

I recently read a case study about Radio-Canada (Canada’s French-language broadcaster), which launched its AI literacy initiative in 2023 aiming to do more than just upskill a few journalists. It set out to transform how the entire newsroom thought about AI: demystify the tools, encourage safe experimentation, and integrate AI into daily workflows.

The initiative began with a three-hour foundational workshop for all staff, from reporters and editors to videographers, covering the essentials of prompt engineering, bias risks, and ethical guardrails. From there, role-specific training followed, plus “office hours” where staff could get personalised help.

The results were tangible. Journalists began suggesting AI-enabled story ideas, experimenting with AI-assisted audio editing, and, crucially, asking sharper questions about AI’s limitations. By making AI literacy a shared competency, Radio-Canada created a culture where responsible experimentation was encouraged but grounded in awareness of risk.

That is the model many teams and organizations need: AI literacy that is both enabling and protective.


Why AI Literacy Training Matters

In organizations that skip literacy training, staff often treat AI tools as magic help buttons, pasting sensitive documents into ChatGPT or letting AI outputs circulate unchecked. HR and IT teams end up firefighting data risks and misinformation issues that could have been prevented.

The EU AI Act and other guidelines on responsible AI all signal the same expectation: staff need baseline AI knowledge, not just policy PDFs hidden in intranets that nobody reads or remembers. AI literacy training ensures employees know how to use AI within approved boundaries, question and fact-check outputs, and apply AI safely in their specific roles.


Step 1: Anchor the Training in Policy

An AI literacy programme only works if it’s rooted in a clear policy foundation. Before running training sessions, publish an AI Acceptable Use Policy (AUP) or department-specific guidelines that answer:

  • Which AI tools are approved?
  • What data must never be entered?
  • Who is accountable for outputs?

💡 Pilot insight: In a health-sector HR training pilot, staff were shown concrete “red lines” from their own AUP, such as “never paste patient records into ChatGPT.” This policy-first approach meant staff could immediately connect training content to their daily decisions.
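
One way to keep those red lines operational rather than buried in a PDF is to maintain a machine-readable version of the AUP that training materials and self-serve tools can query. Here is a minimal Python sketch of the idea; the tool names, data categories, and rules are invented placeholders, not a real policy:

```python
# Minimal sketch of a machine-readable Acceptable Use Policy (AUP).
# Tool names, data categories, and rules are illustrative placeholders.

AUP = {
    "approved_tools": {"Microsoft Copilot", "Mistral Le Chat"},
    "forbidden_data": {"patient_records", "customer_pii"},
    "accountability": "the author of the output remains accountable for it",
}

def check_use(tool: str, data_categories: set[str]) -> str:
    """Answer the three policy questions for a proposed AI use."""
    if tool not in AUP["approved_tools"]:
        return f"'{tool}' is not an approved tool; ask IT before using it."
    blocked = data_categories & AUP["forbidden_data"]
    if blocked:
        return "Red line: never enter " + ", ".join(sorted(blocked)) + "."
    return "Allowed. Note: " + AUP["accountability"] + "."

print(check_use("ChatGPT", {"patient_records"}))
# -> 'ChatGPT' is not an approved tool; ask IT before using it.
```

The point of the sketch is the single source of truth: when the policy changes, the training content, quizzes, and any chatbot can read from the same structure instead of drifting apart.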


Step 2: Build Role-Specific Modules

One of the quickest ways an AI training programme fails is by being too generic. Tailor content to different roles:

  • HR teams need to understand bias risks in AI recruitment tools and safe drafting of job ads.
  • IT teams need guidance on monitoring AI tools, managing integrations, and handling access requests.
  • Frontline staff need practical skills for everyday tasks like summarisation, drafting, and safe prompting.

💡 Case study: A financial services company avoided “AI fatigue” by splitting training into two tracks: “risk awareness for client-facing staff” and “tool configuration for IT.” Staff found the content far more relevant and actionable.


Step 3: Make It Interactive

Staff skim through static PowerPoints, but they learn through application. Effective AI literacy training uses:

  • Hands-on experimentation, letting users play with approved tools and see instant results.
  • Scenario exercises (e.g. “Should you paste this customer email into Mistral?”).

💡 Example: Introducing a chatbot for internal AI policies reduced IT helpdesk tickets by 20%, because staff could self-serve answers like “Is Google Gemini approved here?”
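
As a rough illustration of how such a self-serve policy bot can work, here is a toy keyword-matching sketch in Python. A production version would use proper retrieval over the full policy hub; the snippets below are invented examples:

```python
import re

# Toy policy bot: return the policy snippet sharing the most words
# with the staff question. Snippets are invented for illustration.

POLICY_SNIPPETS = [
    "Google Gemini is not on the approved tool list; use the approved assistant instead.",
    "Never paste customer emails or personal data into external AI tools.",
    "AI-assisted drafts must be fact-checked by the author before publication.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question: str) -> str:
    q = tokens(question)
    return max(POLICY_SNIPPETS, key=lambda s: len(q & tokens(s)))

print(answer("Is Google Gemini approved here?"))
# -> "Google Gemini is not on the approved tool list; ..."
```

Even this crude matching shows the shape of the win: routine “is X allowed?” questions get answered instantly, and only genuinely ambiguous cases reach the helpdesk.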


Step 4: Track, Refresh, and Iterate

AI literacy is not a one-off training event; it’s a continuous learning cycle.

  • Refresh training every 3-6 months to reflect new tools and risks.
  • Track quiz performance to spot where staff misunderstand policy (see the sketch after this list).
  • Collect feedback to adapt modules over time.
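
As a sketch of what that tracking might look like, the Python below aggregates quiz answers by policy topic and flags weak spots. The topics, sample results, and the 70% threshold are illustrative assumptions:

```python
# Sketch: aggregate quiz results per policy topic and flag weak spots.
# Topics, results, and the 70% threshold are invented sample values.
from collections import defaultdict

quiz_results = [
    {"topic": "data handling", "correct": False},
    {"topic": "data handling", "correct": False},
    {"topic": "approved tools", "correct": True},
    {"topic": "approved tools", "correct": False},
    {"topic": "output accountability", "correct": True},
]

totals = defaultdict(lambda: [0, 0])  # topic -> [correct, attempts]
for r in quiz_results:
    totals[r["topic"]][0] += r["correct"]
    totals[r["topic"]][1] += 1

for topic, (correct, attempts) in totals.items():
    rate = correct / attempts
    if rate < 0.7:
        print(f"Refresh needed on '{topic}': {rate:.0%} correct")
# -> Refresh needed on 'data handling': 0% correct
# -> Refresh needed on 'approved tools': 50% correct
```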

Organizations that keep AI literacy current find that employees adopt AI more confidently.


Checklist: A Programme That Works

✅ Clear AI policy foundation
✅ Role-specific training modules
✅ Interactive formats (quizzes, chatbot, scenarios)
✅ Regular refresh and monitoring


Final Word

The Radio-Canada case shows that AI literacy training can be more than a compliance exercise. Done well, it’s a cultural shift: employees gain confidence, organizations reduce risk and shadow AI, and AI use becomes both innovative and responsible.

To make internal AI policies and guidelines easier to manage and follow, platforms like Oregani provide the missing link, combining a policy hub, training, and a chatbot in one place.