Iris 365 Ltd — Information Security Advisory

Claude Agentic AI Tools: Iris 365 Advisory Note

Ref: IRIS365-AI-ADV-002  ·  March 2026  ·  Audience: Senior Management, Risk & Compliance

Our position

Anthropic's three agentic AI tools — Cowork, Claude in Chrome, and Claude in Excel — are not suitable for use with regulated data, client files, or any workflow requiring an audit trail in their current form. Limited internal use on isolated, non-client workstations is permissible under strict technical conditions. The default position for all managed devices is: block installation until further notice.

The three tools and their current status

Claude Cowork

Desktop automation agent

Research preview

Claude in Chrome

Browser agent

Beta

Claude in Excel

Spreadsheet agent

Beta / preview

Scope note: this advisory covers Claude specifically, but the risks apply to agentic AI generally. The security risks identified here — prompt injection, absence of data classification awareness, audit trail gaps, and the OneDrive sync cascade risk — are not unique to Anthropic's tools. They are properties of agentic AI operating in a Windows and Microsoft 365 environment, regardless of vendor. This advisory focuses on Claude Cowork, Claude in Chrome, and Claude in Excel because they are the tools under evaluation. The same risk framework should be applied to any agentic AI tool — including Microsoft Copilot Actions, OpenAI Operator, and Google Gemini agents — before deployment.

Why agentic AI is different — and why it matters here

A conversational AI tool (like Claude Chat) generates text for a human to review. An agentic AI tool acts — it reads files, submits forms, browses websites, and executes tasks autonomously with minimal human oversight of each step. Regulators assess what a system can do, not how it is marketed. These tools must be treated as operational risk systems, not AI assistants.

Key risks identified

No data classification awareness

None of the tools can read Purview sensitivity labels or enforce DLP policies. Any restriction to "non-regulated data" relies entirely on user behaviour — it is not a technical control.

OneDrive sync cascade

SharePoint-synced folders are indistinguishable from local folders. A single broad folder grant can silently expose every synced client SharePoint library on the device.

Prompt injection

Malicious instructions embedded in documents, web pages, or spreadsheet cells can redirect the agent to perform actions the user did not authorise — with no visible warning.

No audit trail

Activity logs are stored locally only. Agent-driven changes to SharePoint files appear as the user's own actions. There is no forensic capability following an incident.

No enterprise admin controls

No role-based scoping, no admin console, no MDM integration. Any user with local admin rights can install these tools with no central visibility.

Preview / beta stage — no SLA

Anthropic has not designated these tools as production-ready. No SLA, no enterprise support tier. Security model and permissions may change without notice between versions.

Recommended use conditions

Recommended immediate actions

  1. 1Issue interim guidance confirming these tools are not approved for client data pending formal risk acceptance
  2. 2Audit current installations — check all managed devices including those with local admin access
  3. 3Block installation via WDAC / Intune app policy and browser extension management policy
  4. 4Disable or audit OneDrive sync on any workstation approved for limited use
  5. 5Review NDA and confidentiality agreements for AI processing scope with legal counsel
  6. 6Add all three tools to the AI Use Register and DPA sub-processor schedule

When will this position be reviewed? This advisory will be updated when Anthropic delivers centralised audit logging and role-based administrative controls — the minimum threshold for reconsidering regulated deployment. Monitor Anthropic's product roadmap. CIMA has not yet issued specific AI guidance; absence of guidance does not imply permission. These risks apply equally to agentic AI tools from other vendors (Microsoft, OpenAI, Google) — the same framework should be applied before any such tool is deployed.