📚

Article 4 – AI Literacy Obligations

Ensuring staff understand the capabilities, limitations, and risks of AI tools they use

1.1
All staff who deploy or use AI systems have received documented AI literacy training.
Article 4 requires "sufficient AI literacy": training must be role-appropriate and documented with attendance records.
1.2
Training covers AI-specific risks: hallucinations, outdated patterns, insecure code generation, and over-reliance.
Generic "AI awareness" training does not satisfy Article 4. Risks specific to AI coding assistants must be addressed explicitly.
1.3
Role-differentiated training delivered: developers, security engineers, managers, and legal/compliance staff trained separately.
A developer's AI literacy needs differ materially from a CTO's or a Works Council representative's.
1.4
Competency assessment completed and results recorded for all AI tool users.
A documented assessment (quiz, practical exercise, or sign-off) demonstrates that training was effective, not just attended.
1.5
Refresher training cadence defined (minimum annual) and scheduled.
AI capabilities and risks evolve rapidly. A one-time training does not constitute ongoing "sufficient AI literacy."
1.6
AI literacy module included in new hire onboarding for all roles that will interact with AI systems.
Ensures continuous compliance as headcount grows, not just for current staff.
1.7
A designated AI Responsible Person (or team) is identified with accountability for AI Act compliance.
Similar to a DPO under GDPR. Does not need to be a dedicated role, but must be assigned and documented.
1.8
Internal AI usage policy published, versioned, and acknowledged by all relevant staff.
Policy must cover: acceptable use, prohibited uses, data handling, IP ownership, output validation responsibilities.
1.9
AI usage policy updated to reflect any new AI tools deployed in the last 12 months.
Organisations often add AI tools (GitHub Copilot, Cursor, Claude, Gemini) without updating governance documents.
1.10
Training materials are version-controlled and maintained alongside the AI usage policy.
Auditors will ask to see the materials used. "We did a session" without materials is not sufficient evidence.
🔍

High-Risk System Classification

Identifying which AI systems in your organisation trigger high-risk obligations under Annex III

2.1
A complete inventory of all AI systems in use across the engineering organisation has been compiled.
Include AI coding assistants (Copilot, Cursor, Codeium), LLM APIs, AI-powered testing tools, and AI in CI/CD pipelines.
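A machine-readable inventory makes the later checks (2.2 classification, 2.9 re-classification triggers) easy to automate. A minimal Python sketch; the schema and field names are illustrative assumptions, not anything mandated by the Act:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AISystemRecord:
    """One row in an internal AI system inventory (illustrative schema)."""
    name: str
    vendor: str
    category: str                 # e.g. "coding-assistant", "llm-api", "ci-cd"
    deployment: str               # e.g. "saas", "self-hosted"
    teams: list = field(default_factory=list)

# Example entries covering the tool classes named in 2.1
inventory = [
    AISystemRecord("GitHub Copilot", "GitHub", "coding-assistant", "saas",
                   ["backend", "frontend"]),
    AISystemRecord("Claude API", "Anthropic", "llm-api", "saas", ["platform"]),
]

def tools_in_category(inv, category):
    """Filter the inventory, e.g. to review all coding assistants at once."""
    return [r.name for r in inv if r.category == category]

print(tools_in_category(inventory, "coding-assistant"))  # prints ['GitHub Copilot']
```

Keeping the inventory as data rather than a wiki page means the 2.9 re-classification process can diff it release over release.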
2.2
Each AI system has been assessed against the Annex III high-risk categories to determine classification.
AI systems used in HR, employment, education, critical infrastructure, law enforcement, or as safety components trigger high-risk obligations.
2.3
AI tools used in HR processes (performance reviews, hiring, workforce analysis) are explicitly classified.
Using AI to analyse developer productivity metrics for HR decisions (promotions, PIPs) triggers Annex III high-risk classification.
2.4
General-Purpose AI (GPAI) models in use are identified, and provider documentation reviewed for systemic risk flags.
GPAI models with ≥10²⁵ FLOPs training compute face additional systemic risk obligations under Chapter V.
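Because the systemic-risk threshold is a concrete number, the screening step reduces to arithmetic. The 6 × parameters × tokens formula below is a common rough estimate of dense-transformer training compute, not a figure defined by the Act; treat the result as an order-of-magnitude check only:

```python
# Threshold at which the AI Act presumes systemic risk for a GPAI model
SYSTEMIC_RISK_THRESHOLD_FLOPS = 10**25

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token.
    An approximation for dense transformers, not an AI Act formula."""
    return 6.0 * parameters * tokens

def presumed_systemic_risk(parameters: float, tokens: float) -> bool:
    return estimated_training_flops(parameters, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A 70B-parameter model on 15T tokens lands at ~6.3e24 FLOPs, below 1e25
print(presumed_systemic_risk(70e9, 15e12))  # prints False
```

In practice you would take the provider's published compute figure where one exists; this estimate is only a fallback for flagging models that warrant a closer look.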
2.5
Prohibited AI practices reviewed (Article 5) and confirmed absent from your organisation.
Prohibited practices include subliminal manipulation, social scoring, real-time biometric identification in public spaces, and emotion inference in workplaces.
2.6
AI coding assistants assessed specifically for use in safety-critical system development (medical devices, automotive, aerospace).
AI-generated code used in safety-critical software may elevate the overall system's risk classification regardless of the tool's own classification.
2.7
Third-party AI components in your supply chain assessed for their own classification status.
If you deploy or integrate AI models built by third parties, you may inherit deployer obligations even if you are not the provider.
2.8
Risk classification for each AI system is documented, dated, and signed off by the designated AI Responsible Person.
Classification is a point-in-time assessment. Document who made the decision, when, and on what basis.
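The sign-off requirement in 2.8 amounts to a small, immutable record per system: who decided, when, and on what basis. A hedged sketch with illustrative field names; a frozen dataclass mirrors the point-in-time nature of the assessment:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ClassificationRecord:
    """Point-in-time risk classification with sign-off (illustrative schema)."""
    system: str
    classification: str   # e.g. "high-risk", "limited-risk", "minimal-risk"
    basis: str            # Annex III category considered, or "none applicable"
    assessed_by: str      # the designated AI Responsible Person
    assessed_on: date

record = ClassificationRecord(
    system="Developer productivity analytics",
    classification="high-risk",
    basis="Annex III: employment and workforce management",
    assessed_by="A. Responsible",
    assessed_on=date(2025, 1, 15),
)
print(record.classification)  # prints high-risk
```

A reassessment under 2.9 then produces a new record rather than mutating the old one, preserving the audit trail.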
2.9
A re-classification trigger process is defined: new AI tool adoption, significant model updates, and new use cases all prompt reassessment.
Classification can change: a coding assistant used for billing logic in a financial system may have a different risk profile than one generating utility scripts.
2.10
Classification decisions are accessible to the Works Council (where applicable) and legal/compliance teams.
In DACH jurisdictions, Works Councils have information rights regarding technology deployments that affect employees.
📋

Technical Documentation Requirements

Articles 11–15: Documentation, logging, transparency, human oversight, and robustness

3.1
Technical documentation prepared for each high-risk AI system per Article 11, covering purpose, architecture, training data, and performance.
Article 11 documentation must be maintained throughout the system lifecycle and be available to market surveillance authorities on request.
3.2
Automatic logging and audit trails enabled for all high-risk AI system outputs (Article 12).
Logs must capture inputs, outputs, and confidence scores where applicable. Retention period should be defined: minimum six months for high-risk systems.
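One way to satisfy the capture-and-retention requirement is a structured entry per interaction plus a retention check. A sketch under assumptions: the schema is illustrative, and 183 days stands in for the six-month minimum:

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # stands in for the six-month minimum

def make_log_entry(system: str, prompt: str, output: str, confidence=None) -> dict:
    """One audit-trail record per AI interaction (illustrative schema)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input": prompt,
        "output": output,
        "confidence": confidence,  # where the tool exposes one
    }

def past_retention(entry: dict, now: datetime) -> bool:
    """True once an entry is older than the retention window."""
    logged = datetime.fromisoformat(entry["timestamp"])
    return (now - logged) > RETENTION

entry = make_log_entry("coding-assistant", "refactor the parser", "<suggested diff>")
print(json.dumps(entry)[:40])
```

Writing entries as JSON lines keeps them greppable for an auditor and trivial to age out once the defined retention period has passed.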
3.3
Transparency obligations met: users are informed when they are interacting with an AI system (Article 13).
AI-generated code suggestions, AI chat interfaces, and automated code review tools all require disclosure to the human receiving the output.
3.4
Human oversight mechanisms documented and implemented for all high-risk AI systems (Article 14).
Oversight must be meaningful: a mandatory code review of AI-generated output by a qualified engineer satisfies this for AI coding tools.
3.5
Accuracy, robustness, and cybersecurity measures documented for each AI system in use (Article 15).
Includes SAST/DAST tooling calibrated for AI-generated code vulnerabilities, and adversarial input testing where applicable.
3.6
An AI system card (or equivalent model card) is maintained for each AI tool in production use.
Documents intended purpose, known limitations, training data sources, known biases, and version history. A lightweight internal document is sufficient for non-GPAI tools.
3.7
Post-market monitoring plan defined for high-risk AI systems, including performance drift detection.
LLM models degrade or shift behaviour with provider updates. Monitoring should include periodic output quality assessments against defined benchmarks.
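A drift check can be as simple as comparing the current benchmark pass rate against the baseline captured at deployment. The 5-point tolerance below is an illustrative choice, not a regulatory value:

```python
def drift_detected(baseline_pass_rate: float, current_pass_rate: float,
                   tolerance: float = 0.05) -> bool:
    """Flag a regression when the current benchmark pass rate falls more than
    `tolerance` below the baseline captured at deployment (illustrative value)."""
    return (baseline_pass_rate - current_pass_rate) > tolerance

# Baseline 92% at rollout; this month's benchmark run scored 80%
print(drift_detected(0.92, 0.80))  # prints True
```

Run against a fixed, versioned benchmark suite so that a flagged drop reflects the model, not a moving test set; a flagged drop then feeds the 3.8 incident process or a re-evaluation.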
3.8
Incident reporting process defined: serious malfunctions of high-risk AI systems reported to the relevant national market surveillance authority.
Reporting is required for serious incidents: death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, infringement of fundamental-rights obligations, or serious harm to property or the environment.
3.9
Data governance documentation in place: training data sources, data quality measures, and bias assessments recorded.
Particularly relevant if you fine-tune or self-host models. For third-party API tools, request the provider's data card and retain it as part of your documentation.
3.10
Quality management system (QMS) in place for the AI system development and deployment lifecycle.
A QMS does not need to be ISO 9001 certified. A documented process covering design, testing, change management, and decommissioning is sufficient.
🤝

Works Council Readiness (DACH – §87 BetrVG)

Co-determination obligations under German labour law for AI tool deployments affecting employees

4.1
The Works Council (Betriebsrat) has been formally informed about the deployment of all AI coding tools affecting employees.
Under §80 BetrVG, the Works Council has information rights. Failure to inform before deployment creates grounds for injunction.
4.2
Co-determination right under §87(1) No. 6 BetrVG formally addressed: technical monitoring capability of AI tools assessed.
§87(1) No. 6 requires Works Council consent for any technical device that could monitor employees' behaviour or performance, which most AI coding tools technically can.
4.3
A Betriebsvereinbarung (works agreement) has been drafted or is in negotiation covering AI coding tool deployment.
A Betriebsvereinbarung can pre-emptively define the terms of AI tool use, preventing future disputes. It should cover acceptable use, data access, and prohibited applications.
4.4
AI tools are confirmed in writing NOT to be used for individual developer performance monitoring.
Using AI-generated code metrics (acceptance rates, lines generated) as performance KPIs is prohibited without Works Council consent and risks breaching GDPR's purpose-limitation principle.
4.5
AI tools are confirmed NOT to be used for behavioural surveillance of employees (keystroke timing, session duration, idle detection).
Some AI coding tools offer telemetry features that could constitute surveillance under §87(1) No. 6. Ensure these features are disabled or that consent is documented.
4.6
A Data Protection Impact Assessment (DPIA) completed under GDPR Article 35 for any AI tool processing personal employee data.
If the AI tool processes employee code, prompts, or usage data that could be linked to an individual, a DPIA is likely required. Many AI tool DPIAs are inadequate; review carefully.
4.7
The works agreement or deployment framework includes audit rights for the Works Council.
Works Council representatives have the right to verify that the agreed terms are being followed. Document how this access is provided (e.g., quarterly report, read-only dashboard access).
4.8
An employee opt-out or override mechanism is documented for any AI system that influences working conditions.
While opt-out from a coding assistant is typically straightforward, the process should be formally documented and free of implied professional consequences.
4.9
Legal review of the AI deployment framework completed by a German labour law specialist (for German entities).
EU AI Act compliance and BetrVG compliance are distinct but intersecting. Both must be addressed; a technology lawyer alone is insufficient for DACH deployments.
4.10
Annual review scheduled for the works agreement and DPIA to reflect changes in AI tools, model versions, or use cases.
A Betriebsvereinbarung signed in 2024 may not cover GitHub Copilot Agent Mode or new agentic AI workflows introduced in 2025–2026. Annual review is essential.