The EU AI Act introduces a risk-based framework for AI, scaling obligations according to whether a system poses unacceptable, high, limited, or minimal risk. For organisations, this means compliance cannot be left to chance: internal AI policies are now essential.
An internal AI policy acts as the organisation’s AI playbook: it explains what AI systems are in use, how they are assessed, who is responsible for oversight, and what procedures are in place to meet the Act’s obligations. Without such a framework, companies risk regulatory penalties, reputational harm, and operational confusion.
Step 1: Understand the Scope of the EU AI Act
Before drafting a policy, organisations must understand the basics:
- What counts as an AI system? The Act defines it broadly (Article 3(1)): a machine-based system that operates with some degree of autonomy and infers from its inputs how to generate outputs (predictions, content, recommendations, or decisions) that can influence physical or virtual environments.
- Risk categories (a short classification sketch follows this list):
  - Unacceptable risk – prohibited systems such as social scoring.
  - High risk – AI in employment, education, law enforcement, healthcare, and critical infrastructure.
  - Limited risk – systems requiring transparency, e.g. chatbots.
  - Minimal risk – everyday applications like spam filters.
- Who is responsible? The Act distinguishes between providers, deployers, importers, and distributors, each with specific obligations.
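To make the four tiers concrete, here is a minimal Python sketch. The example mappings are illustrative assumptions only; classifying a real system is a legal assessment against the Act and its annexes, not a keyword lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency duties apply
    MINIMAL = "minimal"            # no specific obligations

# Illustrative examples only -- real classification requires legal review
# of the system against the Act, not a lookup table like this one.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}
```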
Step 2: Map Your AI Systems
A compliant policy begins with visibility. Organisations should:
- Create an inventory of all AI tools and systems currently in use (a sketch of one inventory entry follows this list).
- Identify whether they were developed internally or procured from third parties.
- Classify each according to the EU AI Act’s risk categories.
- Pay close attention to high-risk systems, which trigger the most obligations.
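A minimal sketch of what one inventory entry could look like, reusing the RiskTier enum from the Step 1 sketch. The record fields and the vendor name are assumptions for illustration, not prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the organisation's AI inventory."""
    name: str
    purpose: str
    origin: str           # "internal" or the supplying vendor
    risk_tier: RiskTier   # classification under the Act (Step 1 sketch)
    owner: str            # accountable person or team
    last_reviewed: date

inventory = [
    AISystemRecord(
        name="CV screening assistant",
        purpose="Shortlisting job applicants",
        origin="Acme HR Tech (third party)",  # hypothetical vendor
        risk_tier=RiskTier.HIGH,              # employment is a high-risk area
        owner="Head of HR",
        last_reviewed=date(2025, 1, 15),
    ),
]

# High-risk systems trigger the most obligations, so surface them first.
high_risk = [s for s in inventory if s.risk_tier is RiskTier.HIGH]
```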
Step 3: Assign Clear Roles and Responsibilities
Internal AI governance works only when responsibilities are explicit. Policies should define the following (a duty-to-role sketch follows the list):
- Leadership accountability: senior management should own compliance.
- AI governance committees: to oversee policies, approve tools, and monitor risks.
- Operational responsibilities: technical teams, HR, and legal staff should know their roles.
- Human oversight mechanisms: required for high-risk AI, ensuring staff can intervene when needed.
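One way to keep responsibilities explicit is to record them as data rather than prose, so gaps are detectable. The duties and role names below are hypothetical placeholders; the point is that every duty resolves to exactly one accountable owner.

```python
# Hypothetical duty-to-role map: every obligation names one accountable owner.
GOVERNANCE_MAP = {
    "policy approval and resourcing": "Senior management / Board",
    "tool approval and risk monitoring": "AI Governance Committee",
    "technical documentation and logging": "Engineering lead",
    "data protection and legal review": "Legal / DPO",
    "human oversight of high-risk systems": "Trained operational staff",
}

def accountable_for(duty: str) -> str:
    """Return the owner of a duty; fail loudly if none is assigned."""
    if duty not in GOVERNANCE_MAP:
        raise KeyError(f"No accountable role assigned for duty: {duty!r}")
    return GOVERNANCE_MAP[duty]
```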
Step 4: Define Policy Areas
Strong internal policies cover the full AI lifecycle. Key areas include:
- Risk management: procedures to identify, assess, and mitigate risks throughout development and deployment.
- Data governance: ensuring training and testing data are high quality, representative, and examined for possible biases.
- Documentation & transparency: technical documentation, logs, and clear instructions for users.
- Human oversight: protocols for monitoring and fallback if AI malfunctions.
- Vendor management: due diligence when procuring external AI tools, with contractual clauses ensuring compliance.
- AI literacy & training: regular staff training to meet the EU AI Act’s requirement for AI literacy.
- Auditing & monitoring: routine internal audits, incident reporting, and mechanisms for continuous improvement (an audit-logging sketch follows this list).
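As an illustration of the documentation, oversight, and auditing areas together, here is a minimal sketch of structured audit logging. The event names and fields are assumptions; a real deployment would write to tamper-evident, retention-managed storage rather than the default log handler.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_ai_event(system: str, event: str, detail: dict) -> None:
    """Append one structured audit record for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,
        "detail": detail,
    }
    audit_log.info(json.dumps(record))

# Example: an operator overrides a high-risk system's recommendation,
# exercising the human oversight mechanism defined in Step 3.
log_ai_event(
    system="CV screening assistant",
    event="human_override",
    detail={"operator": "recruiter_42", "reason": "candidate flagged in error"},
)
```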
Step 5: Build a Practical Compliance Checklist
When creating or updating policies, organisations should ask (a self-assessment sketch in code follows the list):
- Have we identified all AI systems and risk levels?
- Do we have a governance structure with clear accountability?
- Are high-risk systems supported by documentation, oversight, and risk management processes?
- Are employees trained on AI literacy and prohibited practices?
- Are vendor contracts reviewed for compliance obligations?
- Is there a monitoring and audit process in place?
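The checklist lends itself to a simple self-assessment script. A minimal sketch follows; the hard-coded answers are placeholders, assumed to come from the policy owner's actual review.

```python
# Placeholder answers -- in practice these come from the policy owner's
# review, not from hard-coded values.
CHECKLIST = {
    "All AI systems identified and risk-classified": True,
    "Governance structure with clear accountability": True,
    "High-risk systems documented, overseen, and risk-managed": False,
    "Employees trained on AI literacy and prohibited practices": True,
    "Vendor contracts reviewed for compliance obligations": False,
    "Monitoring and audit process in place": True,
}

gaps = [item for item, done in CHECKLIST.items() if not done]
if gaps:
    print("Compliance gaps to remediate:")
    for item in gaps:
        print(f"  - {item}")
else:
    print("All checklist items satisfied.")
```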