Your organisation just deployed an AI system. Your ISO 27001 Annex A controls didn't change. Your risk register should have.
I keep seeing this. Organisations adopt AI tooling (chatbots, document processing, automated decisions) and nobody touches the ISMS. Risk assessment stays the same. SOA stays the same. Internal audit comes around and nobody asks whether the AI introduces risks the existing controls don't cover.
ISO/IEC 42001:2023 is a management system standard for AI. Same Annex SL structure as ISO 27001: context, leadership, planning, support, operation, performance evaluation, improvement. It has its own Annex A controls, and some overlap directly with 27001's. Most organisations won't certify against 42001 any time soon. But if you're using AI and you hold ISO 27001, the risks 42001 addresses are already in your ISMS scope. You're just not treating them.
Where 42001 overlaps with your 27001 controls
Risk assessment (Clause 6.1.2). If your organisation deployed a customer-facing AI system and your risk register doesn't mention model bias, hallucination, data poisoning, or over-reliance on automated outputs, that's a gap. Your auditor should find it.
Data handling (Annex A.8). Training data, prompt logs, model outputs. These all need classification, retention policies, access controls. If your data classification policy predates your LLM deployment, it almost certainly doesn't cover prompt data. Evidence gap.
Access control (A.5.15, A.8.3). Who can change the prompts? Fine-tune the model? Adjust the decision thresholds? If your access control evidence doesn't cover these, the picture is incomplete.
Supplier relationships (A.5.19–A.5.23). Most organisations use third-party AI services. OpenAI, Anthropic, Google, Microsoft. What data are you sending to the API? What are the processing terms? What happens when the provider changes their retention policy? Your supplier risk assessment needs to cover this.
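One practical control here is a redaction pass before any prompt leaves your boundary for a third-party API. A minimal sketch, assuming regex-based matching is acceptable for your data; the pattern names and coverage are illustrative only, and a real deployment would use a proper PII-detection library and far more categories:

```python
import re

# Illustrative patterns only: a genuine pre-submission pass needs many
# more categories (names, addresses, account numbers, ...) and tuning.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

def redact(text: str) -> str:
    """Replace matches with labelled placeholders before the text
    is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise the complaint from jo.bloggs@example.com (NI AB123456C)."
print(redact(prompt))
# → Summarise the complaint from [EMAIL] (NI [UK_NI_NUMBER]).
```

Whatever mechanism you use, the point for audit purposes is that it exists, is documented, and produces evidence that classified data is controlled before it reaches the provider.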
Incident management (A.5.24–A.5.28). What counts as an AI incident? Biased outputs? A hallucinated response sent to a customer? A prompt injection that leaks data? If your incident procedure doesn't define these, you can't evidence that you're managing them.
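Defining those categories can be as simple as an enumerated taxonomy with pre-agreed severities, so triage follows the procedure rather than being improvised mid-incident. A sketch with made-up categories and severities; your own procedure should define both, tied to the AI systems you actually run:

```python
from enum import Enum

class AIIncident(Enum):
    # Illustrative categories, not drawn from any standard.
    HARMFUL_OUTPUT = "biased or otherwise harmful output reached a user"
    HALLUCINATION = "fabricated content sent to a customer as fact"
    PROMPT_INJECTION = "crafted input caused unintended model behaviour"
    DATA_DISCLOSURE = "model output or logs exposed confidential data"

# Severity agreed in advance, per category, as the procedure would record it.
SEVERITY = {
    AIIncident.HARMFUL_OUTPUT: "high",
    AIIncident.HALLUCINATION: "medium",
    AIIncident.PROMPT_INJECTION: "high",
    AIIncident.DATA_DISCLOSURE: "critical",
}

print(SEVERITY[AIIncident.PROMPT_INJECTION])
# → high
```

With categories written down, an incident record can state which one applied, and that record is the evidence your auditor will ask for.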
Awareness and training (A.6.3). Your people are using AI tools. Do they know what they can and can't input? Do they know the classification rules for AI interactions? If your awareness programme doesn't cover AI use, that's an observation at minimum.
The audit question nobody is asking
If your organisation uses AI, and your ISO 27001 internal audit doesn't evaluate the controls around that use — is the audit complete?
Clause 9.2 requires the audit to determine whether the ISMS conforms to the organisation's own requirements and to the requirements of the standard. If AI introduces risks that aren't in your risk register, controls that aren't in your SOA, and evidence that doesn't exist, the audit should surface that. If it doesn't, there's a gap in the audit itself.
Certification bodies are starting to ask. Not because ISO 42001 certification is required, but because AI use touches information assets. It falls within your ISMS scope whether you've acknowledged it or not.
Combined audits under ISO 19011:2018
ISO 19011:2018 already covers combined audits, where multiple management system standards are audited together. Where two standards share common requirements, you evaluate the evidence once rather than twice.
Organisations holding both 27001 and 42001 will need this. The evidence overlaps. The risk assessments overlap. The controls overlap. Running them separately wastes time and creates inconsistency.
Even if you're not pursuing 42001 certification, looking at your 27001 internal audit through the lens of AI risk will strengthen it. It's the difference between auditing what your ISMS covered three years ago and auditing what it needs to cover now.
Before your next audit
If you're using AI, even simple tools, and your next ISO 27001 internal audit is approaching, here's what I'd check:
- Risk register. AI-specific risks in there? Not generic "technology risk". Specific risks tied to the AI systems you're actually using.
- SOA. Do your applicable controls cover AI data handling, access control, supplier management?
- Training records. Does your awareness programme address AI use policies?
- Supplier assessments. Have you assessed your AI service providers under your supplier management controls?
- Incident procedures. Do they define AI-related incidents?
- Data classification. Does your scheme cover prompts, outputs, training data?
If most of these are no, your internal audit should find that. If it doesn't, the CB probably will.
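The checks above reduce to a trivial self-assessment you can run before the auditor does. A sketch with placeholder item names and answers, not drawn from either standard; fill the answers in honestly from your own ISMS records:

```python
# Placeholder answers for illustration: replace with your real findings.
checks = {
    "risk register names AI-specific risks": False,
    "SoA covers AI data handling, access, suppliers": True,
    "awareness programme addresses AI use policies": False,
    "AI service providers assessed as suppliers": True,
    "incident procedure defines AI incidents": False,
    "classification scheme covers prompts and outputs": False,
}

gaps = [item for item, ok in checks.items() if not ok]
print(f"{len(gaps)} of {len(checks)} checks failing")
for item in gaps:
    print(f"  gap: {item}")
```

Each failing item is a finding your internal audit should raise first, with a documented corrective action behind it.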
I review evidence against every relevant clause, including the ones that changed when you started using AI. That's what Manylder delivers.