Iris 365 Ltd — Information Security Advisory
The three tools and their current status
Claude Cowork: desktop automation agent (research preview)
Claude in Chrome: browser agent (beta)
Claude in Excel: spreadsheet agent (beta / preview)

Scope note: this advisory covers Claude specifically, but the risks apply to agentic AI generally. The security risks identified here — prompt injection, absence of data classification awareness, audit trail gaps, and the OneDrive sync cascade risk — are not unique to Anthropic's tools. They are properties of agentic AI operating in a Windows and Microsoft 365 environment, regardless of vendor. This advisory focuses on Claude Cowork, Claude in Chrome, and Claude in Excel because they are the tools under evaluation. The same risk framework should be applied to any agentic AI tool — including Microsoft Copilot Actions, OpenAI Operator, and Google Gemini agents — before deployment.
Why agentic AI is different — and why it matters here
A conversational AI tool (like Claude Chat) generates text for a human to review. An agentic AI tool acts — it reads files, submits forms, browses websites, and executes tasks autonomously with minimal human oversight of each step. Regulators assess what a system can do, not how it is marketed. These tools must be treated as operational risk systems, not AI assistants.
Key risks identified
No data classification awareness
None of the tools can read Purview sensitivity labels or enforce DLP policies. Any restriction to "non-regulated data" relies entirely on user behaviour — it is not a technical control.
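Until the tools are label-aware, one practical compensating control is to audit a folder for labelled content before it is granted to an agent. The Python sketch below flags Office files whose OOXML package carries MSIP_Label_* custom properties, the convention current Office clients use to stamp Purview sensitivity labels into documents. This convention is observed behaviour rather than a guaranteed API, and non-Office formats store labels differently, so treat the result as indicative only.

```python
"""Flag Office files that carry Purview sensitivity labels before a folder
is granted to an agent. A minimal sketch: it assumes labels are stored as
MSIP_Label_* custom properties in docProps/custom.xml, the convention used
by current Office clients; other file types store labels differently."""
import sys
import zipfile
from pathlib import Path

OFFICE_EXTS = {".docx", ".xlsx", ".pptx"}

def has_sensitivity_label(path: Path) -> bool:
    """Return True if the OOXML package declares an MSIP_Label_* property."""
    try:
        with zipfile.ZipFile(path) as pkg:
            if "docProps/custom.xml" not in pkg.namelist():
                return False
            xml = pkg.read("docProps/custom.xml").decode("utf-8", "replace")
            return "MSIP_Label_" in xml
    except (zipfile.BadZipFile, OSError):
        return False  # unreadable, or not an OOXML package at all

if __name__ == "__main__":
    root = Path(sys.argv[1])  # folder being considered for an agent grant
    labelled = [p for p in root.rglob("*")
                if p.suffix.lower() in OFFICE_EXTS and has_sensitivity_label(p)]
    for p in labelled:
        print(f"LABELLED: {p}")
    print(f"{len(labelled)} labelled file(s) found under {root}")
```

A non-empty result means the folder contains regulated or classified material and should not be granted to an agent under a "non-regulated data" restriction.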
OneDrive sync cascade
To these tools, SharePoint-synced folders are indistinguishable from ordinary local folders. A single broad folder grant can silently expose every synced client SharePoint library on the device.
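Before approving any folder grant, enumerate which SharePoint libraries are actually synced to the device so the blast radius is known. The Python sketch below assumes the registry layout used by current OneDrive clients, where value names under HKCU\Software\Microsoft\OneDrive\Accounts\<account>\Tenants\<tenant> are the local mount paths of synced libraries; this layout is undocumented and may change between client versions.

```python
"""Enumerate SharePoint libraries synced to this Windows device, so the
blast radius of a broad folder grant can be assessed. A sketch that
assumes the (undocumented) registry layout of current OneDrive clients."""
import winreg

ACCOUNTS_KEY = r"Software\Microsoft\OneDrive\Accounts"

def synced_mount_points() -> list[tuple[str, str]]:
    """Return (tenant, local_path) pairs for every synced library."""
    results = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ACCOUNTS_KEY) as accounts:
        for i in range(winreg.QueryInfoKey(accounts)[0]):
            account = winreg.EnumKey(accounts, i)
            tenants_path = rf"{ACCOUNTS_KEY}\{account}\Tenants"
            try:
                with winreg.OpenKey(winreg.HKEY_CURRENT_USER, tenants_path) as tenants:
                    for j in range(winreg.QueryInfoKey(tenants)[0]):
                        tenant = winreg.EnumKey(tenants, j)
                        with winreg.OpenKey(tenants, tenant) as t:
                            # Value *names* under the tenant key are the
                            # local folder paths of synced libraries.
                            for k in range(winreg.QueryInfoKey(t)[1]):
                                local_path, _, _ = winreg.EnumValue(t, k)
                                results.append((tenant, local_path))
            except FileNotFoundError:
                continue  # account (e.g. Personal) has no Tenants subkey
    return results

if __name__ == "__main__":
    for tenant, path in synced_mount_points():
        print(f"{tenant}: {path}")
```

Every path printed is inside the reach of a folder grant that covers its parent, which is why a grant at the user-profile or OneDrive root level must be treated as a grant over all synced client libraries.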
Prompt injection
Malicious instructions embedded in documents, web pages, or spreadsheet cells can redirect the agent to perform actions the user did not authorise — with no visible warning.
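The sketch below illustrates one deliberately limited screening approach: scanning workbook cells, including formulas a human would rarely read, for instruction-like phrases before a file is handed to an agent. It demonstrates the attack surface rather than providing a control, since a determined attacker evades pattern matching trivially; the phrase list is illustrative, and the third-party openpyxl package is assumed.

```python
"""Heuristic screen for instruction-like text hidden in workbook cells,
illustrating the prompt-injection surface an agent reads but a human
rarely does. A sketch only: pattern matching is trivially evaded and is
no substitute for treating all agent input as untrusted. Requires the
third-party openpyxl package; the phrase list is illustrative."""
import re
import sys
from openpyxl import load_workbook

# Illustrative phrases typical of injected instructions (an assumption,
# not a vendor-supplied signature list).
SUSPECT = re.compile(
    r"ignore (all|any|previous) instructions|disregard .{0,30}polic"
    r"|you are now|send .{0,30}(file|credential)", re.IGNORECASE)

def scan(path: str) -> None:
    # data_only=False keeps formula strings, which can also carry
    # injected text invisible to a casual reader.
    wb = load_workbook(path, data_only=False)
    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if isinstance(cell.value, str) and SUSPECT.search(cell.value):
                    print(f"{ws.title}!{cell.coordinate}: {cell.value[:80]!r}")

if __name__ == "__main__":
    scan(sys.argv[1])
```

The same class of payload can sit in white-on-white text, comments, hidden sheets, or a web page's HTML, so the underlying control remains limiting what the agent is permitted to do, not filtering what it reads.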
No audit trail
Activity logs are stored locally only. Agent-driven changes to SharePoint files appear as the user's own actions. There is no forensic capability following an incident.
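One compensating control is to ship local logs to a central, access-controlled location on a schedule, so some trail survives the device. In the Python sketch below both paths are hypothetical placeholders: Anthropic's actual local log locations are not documented in this advisory and must be confirmed per tool and version before relying on anything like this.

```python
"""Compensating control sketch: copy local agent activity logs to a
central, access-controlled share so some forensic trail survives the
device. The source directory below is HYPOTHETICAL -- confirm the real
log location for each tool and version before deploying this."""
import shutil
import socket
from datetime import datetime, timezone
from pathlib import Path

LOCAL_LOGS = Path.home() / "AppData/Local/Claude/logs"  # hypothetical path
CENTRAL = Path(r"\\fileserver\secops\agent-logs")       # hypothetical share

def ship_logs() -> int:
    """Copy every local log file to a per-host, per-run folder; return count."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = CENTRAL / socket.gethostname() / stamp
    dest.mkdir(parents=True, exist_ok=True)
    copied = 0
    for f in LOCAL_LOGS.glob("*.log"):
        shutil.copy2(f, dest / f.name)  # copy, never move: leave originals
        copied += 1
    return copied

if __name__ == "__main__":
    print(f"shipped {ship_logs()} log file(s)")
```

This preserves a copy of whatever the tool logs locally; it does not fix attribution, so agent-driven SharePoint changes will still appear as the user's own actions in Microsoft 365 audit records.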
No enterprise admin controls
No role-based scoping, no admin console, no MDM integration. Any user with local admin rights can install these tools with no central visibility.
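Pending MDM integration, a scheduled inventory script can at least surface unauthorised installs. The sketch below checks per-user install locations; the directory names are hypothetical examples and would need to be replaced with the real artefact paths observed for each tool and version.

```python
"""Detection sketch: flag agentic AI tools installed per-user, since there
is no admin console or MDM inventory to rely on. The install directory
names below are HYPOTHETICAL examples -- enumerate the real artefact
paths for each tool and version before deploying this as a control."""
import socket
from pathlib import Path

# Hypothetical per-user install locations for the tools under evaluation.
CANDIDATES = [
    Path.home() / "AppData/Local/AnthropicClaude",
    Path.home() / "AppData/Local/Programs/claude-cowork",
]

def detect() -> list[Path]:
    """Return the monitored install locations present on this device."""
    return [p for p in CANDIDATES if p.exists()]

if __name__ == "__main__":
    host = socket.gethostname()
    found = detect()
    if found:
        for p in found:
            print(f"{host}: FOUND {p}")
    else:
        print(f"{host}: no monitored agent installs detected")
```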
Preview / beta stage — no SLA
Anthropic has not designated these tools as production-ready. No SLA, no enterprise support tier. Security model and permissions may change without notice between versions.
Recommended use conditions
Recommended — with strict conditions
Not recommended — no exceptions
Recommended immediate actions
When will this position be reviewed?
This advisory will be updated when Anthropic delivers centralised audit logging and role-based administrative controls — the minimum threshold for reconsidering regulated deployment. Monitor Anthropic's product roadmap.

CIMA has not yet issued specific AI guidance; absence of guidance does not imply permission.

These risks apply equally to agentic AI tools from other vendors (Microsoft, OpenAI, Google) — the same framework should be applied before any such tool is deployed.