Cheat Sheet: Agentic-AI Security Architecture Example
A working document to broaden awareness of a holistic AI security pattern
Agent Identity and Access Management
We need to verify the agent's identity via attestation before issuing bound credentials and unique attributes. Access to data, applications, and other agents should then be granted dynamically, with everything wrapped in ephemeral and just-in-time (JIT) concepts.
Considerations:
Secret-zero issue - how to attest a process or requesting party before credentials are minted, issued, and bound
How to identify what permissions an agent needs?
How to verify a human has access to an agent?
Should the agent “claim” permissions from the requesting party?
How to handle agent-to-agent combined-permissions risk, where the effective access becomes the union of both agents' permissions?
How to handle MFA requirements? Is a possession factor enough?
Should credentials be auto-rotated, or rotated only during high risk?
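The attest-then-mint flow above can be sketched minimally. This is a hypothetical illustration, not a real issuance service: the trusted measurement, issuer key, and function names are all assumptions, and a production design would use a workload identity framework rather than a hand-rolled HMAC token. It shows the secret-zero ordering (attestation evidence is verified before any credential exists) and ephemerality (the credential expires rather than being revoked).

```python
import hashlib
import hmac
import json
import secrets
import time

# Hypothetical trust anchor: a known-good measurement of the agent binary,
# plus an issuer key held only by the credential service, never by the agent.
TRUSTED_MEASUREMENT = hashlib.sha256(b"agent-binary-v1").hexdigest()
ISSUER_KEY = secrets.token_bytes(32)

def attest(evidence: dict) -> bool:
    """Secret-zero step: verify workload identity from attestation
    evidence (here, a binary measurement) before minting anything."""
    return hmac.compare_digest(evidence.get("measurement", ""), TRUSTED_MEASUREMENT)

def mint_credential(agent_id: str, scopes: list, ttl_s: int = 300) -> dict:
    """Issue a short-lived, signed credential bound to the agent identity.
    Ephemeral by construction: it expires instead of needing revocation."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_s}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Check the signature and the expiry; any claim tampering breaks the MAC."""
    body = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cred["sig"], expected) and cred["claims"]["exp"] > time.time()
```

Note the ordering: nothing is minted unless attestation succeeds, and a credential whose scopes are altered after issuance fails verification.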
Data and Knowledge Protection
Policies are needed to prevent unauthorized use of critical data by AI agents, ensuring access permissions are based on the type of data or resource being accessed. This combines policy enforcement points (PEPs) with data confidentiality protection measures.
Considerations:
Can existing DLP and DSPM products support the necessary discovery, classification and risk tagging of data resources?
Can data objects be protected at rest, in transit and in use?
Are concepts like homomorphic encryption and secure multi-party computation understood and applicable?
Are integrity protection mechanisms necessary?
Are content authenticity measures in place? Are they scalable? Can they be verified offline and at scale?
How is data returned from an agent protected?
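One way to picture the PEP/data-type combination described above is a check that decides on the resource's classification tag rather than its name. This is a hedged sketch with invented resource paths, tags, and clearance sets; a real deployment would source tags from a DSPM/DLP catalogue. Note the fail-closed branch for unclassified data.

```python
# Hypothetical classification catalogue, e.g. populated by a DSPM scan.
RESOURCE_TAGS = {
    "s3://finance/payroll.csv": {"class": "restricted", "type": "pii"},
    "s3://docs/handbook.pdf": {"class": "internal", "type": "document"},
}

# Hypothetical per-agent clearance: which *types* of data each agent may touch.
AGENT_CLEARANCE = {
    "hr-agent": {"pii", "document"},
    "chat-agent": {"document"},
}

def pep_allow(agent: str, resource: str) -> bool:
    """Minimal PEP check: decide on the data *type*, not the resource name."""
    tag = RESOURCE_TAGS.get(resource)
    if tag is None:
        return False  # fail closed: unclassified data is never released
    return tag["type"] in AGENT_CLEARANCE.get(agent, set())
```

The same gate can run on the return path, answering the "data returned from an agent" question with the agent's output treated as just another classified resource.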
Agent Operational Integrity and Resilience
We need to ensure the agent's behavioural integrity is upheld by detecting and preventing tampering, drifting off-task, or malicious manipulation. This is tricky, as some behaviour change is expected as part of optimization and learning.
Considerations:
What does success look like for an agent?
Can that success be articulated during creation?
Can agents be compared to each other?
Can and should agent behaviour be compared and tied to a human-requestor’s behaviour?
Are historical behaviour patterns for agents useful?
Agents are likely to be targets - how can they be protected?
Can they be stolen, replicated, impersonated?
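The "historical behaviour patterns" question above can be made concrete with a drift score: compare an agent's recent action distribution against its baseline and alert when the divergence jumps. This is only an illustrative statistical sketch (a symmetrised KL-style divergence over action frequencies); the action names and thresholds are assumptions, and it deliberately tolerates small shifts, since some change is expected from optimization and learning.

```python
import math
from collections import Counter

def action_distribution(actions):
    """Turn a list of observed actions into a relative-frequency map."""
    counts = Counter(actions)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

def drift_score(baseline, recent, eps=1e-9):
    """Symmetrised KL-style divergence between the historical and recent
    action distributions; higher means more behavioural drift."""
    score = 0.0
    for k in set(baseline) | set(recent):
        p, q = baseline.get(k, eps), recent.get(k, eps)
        score += p * math.log(p / q) + q * math.log(q / p)
    return score

# Baseline: mostly reads, occasional writes.
baseline = action_distribution(["read"] * 90 + ["write"] * 10)
# Expected optimization drift vs. a suspicious shift to deletions.
normal = action_distribution(["read"] * 85 + ["write"] * 15)
suspect = action_distribution(["read"] * 10 + ["delete"] * 90)
```

An unseen action like `delete` dominates the score, so a tampered or hijacked agent stands out far above the noise of legitimate adaptation.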
Agentic Governance, Risk and Compliance (GRC)
We need to establish guidelines for agent behaviour, track changes in risk, and ensure that the agent's actions are auditable and traceable. Everything needs to stay in sync for regulatory adherence.
Considerations:
End-to-end traceability is important: it links agent tasks back to human tasks
Can agent actions be reversed?
At what point do agent events need approval?
What is the process for agent creation?
What level of explainability is available?
What level of agent action transparency is available?
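The traceability and auditability points above can be sketched as a hash-chained audit trail: every agent action records the originating human task, and each entry commits to the hash of the previous one, so later tampering is detectable. This is a minimal illustration with invented field names, not a full audit subsystem (it omits signing, persistence, and clock integrity).

```python
import hashlib
import json
import time

class AuditLog:
    """Hash-chained audit trail: each agent action is attributed to the
    originating human task, and editing any past entry breaks the chain."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def record(self, agent_id, human_task_id, action):
        entry = {"agent": agent_id, "human_task": human_task_id,
                 "action": action, "ts": time.time(), "prev": self._prev}
        body = json.dumps(entry, sort_keys=True).encode()
        self._prev = hashlib.sha256(body).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Re-walk the chain; any modified entry invalidates its successor."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            prev = hashlib.sha256(json.dumps(e, sort_keys=True).encode()).hexdigest()
        return True
```

Because each record names a `human_task` ID, the same structure also answers the attribution questions later in this document: every agent event resolves to a human request.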
Policy Engine with Runtime Enforcement
A system that provides granular, adaptive access control that changes dynamically based on real-time context and assessed risk levels. It enforces policies for access to corporate systems, inter-agent sharing, access periods (JIT), and memory retention. Can today’s human-centric PBAC systems do this?
Considerations:
How are policies created?
How are policies stored and made available to enforcing systems?
Are policies part of the governance model?
How are policies changed and analysed for effectiveness?
How should policy be changed post-incident?
Can enforcement points scale and be effective away from a central hub?
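A minimal policy decision point (PDP) for the behaviour described above might look like the sketch below. The policy rules, risk scores, and JIT-expiry field are all hypothetical; the point is that the decision is re-evaluated on every request against real-time context (current risk, whether a JIT window is still open), rather than being a static role assignment, and that unknown actions are denied by default.

```python
import time

# Hypothetical policies: each names an action, a maximum tolerated risk
# score, and whether a live just-in-time (JIT) grant is required.
POLICIES = [
    {"action": "export_data", "max_risk": 0.3, "requires_jit": True},
    {"action": "read_docs", "max_risk": 0.7, "requires_jit": False},
]

def decide(action: str, context: dict) -> str:
    """Runtime PDP sketch: evaluate against current context on every call."""
    for p in POLICIES:
        if p["action"] != action:
            continue
        if context.get("risk", 1.0) > p["max_risk"]:
            return "deny"  # risk has risen above what this action tolerates
        if p["requires_jit"] and context.get("jit_expiry", 0) < time.time():
            return "deny"  # the JIT access window has lapsed (or never opened)
        return "allow"
    return "deny"  # default-deny for actions no policy covers
```

Distributing `POLICIES` to enforcement points while keeping `decide` local is one answer to the "away from a central hub" question: decisions stay fast at the edge, with policy distribution handled asynchronously.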
Human Oversight, Accountability, and Attribution
We need to ensure that all actions by autonomous agents can be traced back to an originating human user or organizational policy, often utilizing "human-in-the-loop" mechanisms for critical or sensitive actions.
Considerations:
These are the extension points that anchor both policy design and enforcement, and that determine how the GRC components are executed
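The human-in-the-loop mechanism described above can be sketched as a simple gate: every action carries the originating human for attribution, and sensitive actions are held as pending until a (separate) human approval is recorded. The action names, field names, and approvals structure are assumptions for illustration only.

```python
# Hypothetical set of actions deemed critical enough to require sign-off.
SENSITIVE_ACTIONS = {"wire_transfer", "delete_records"}

def execute(action: str, agent_id: str, on_behalf_of: str, approvals: dict) -> dict:
    """Human-in-the-loop gate: every record names the originating human
    ('on_behalf_of'); sensitive actions are held until an approver is
    recorded for them in 'approvals' (action -> approver id)."""
    record = {"agent": agent_id, "human": on_behalf_of, "action": action}
    if action in SENSITIVE_ACTIONS and not approvals.get(action):
        record["status"] = "pending_approval"  # parked for human review
    else:
        record["status"] = "executed"
        record["approved_by"] = approvals.get(action)  # None for routine actions
    return record
```

Feeding these records into an audit trail closes the loop: attribution (who asked), oversight (who approved), and accountability (what ran) are captured in one place.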