Securing AI Agents on GKE: Where gVisor, Workload Identity, and VPC Service Controls Stop Working

Mar 31, 2026

Shauli Rozen
CEO & Co-founder

Key takeaways

  • Why does GKE create a false sense of security for AI agents? GKE offers the most complete native security stack for AI agents among the major cloud providers—Agent Sandbox CRD with managed gVisor, Workload Identity Federation, and VPC Service Controls all in one platform. That maturity is exactly the problem. Teams configure all three layers, see green checkmarks across their security dashboard, and assume their AI agents are protected. But none of these controls can detect an agent doing harmful things through the channels you explicitly permitted. This is the behavioral detection gap that separates configuration-state security from runtime security.
  • What does the Agent Sandbox CRD actually protect against? The Agent Sandbox CRD provides code execution isolation—running untrusted LLM-generated code inside gVisor sandboxes with ephemeral environments, warm pools, and Pod Snapshots. It stops container escapes and kernel exploits. It does not monitor what the agent does inside that sandbox: which APIs it calls, what data it reads, or whether its behavior has changed from yesterday. The CRD controls where the agent runs. It says nothing about what the agent does once running.
  • Where does Workload Identity Federation fail for AI agents? Workload Identity Federation replaces long-lived keys with short-lived tokens and enforces IAM scoping. But if your AI agent’s service account has roles/bigquery.dataViewer on a dataset, the token works identically for a 500-row query and a 10-million-row exfiltration. IAM checks the permission, not the intent. Only runtime behavioral detection can flag “this pod has never queried this dataset before” or “this agent usually reads hundreds of rows, not millions.”

You enable GKE Sandbox on a dedicated node pool, bind Workload Identity Federation to your AI agent pods, wrap your data services in a VPC Service Controls perimeter, and deploy your agents with the Agent Sandbox CRD using warm pools for sub-second startup. Your security posture dashboard shows every control configured and active. And then an attacker uses prompt injection to trick an agent into exfiltrating sensitive data through API calls that every single one of those layers explicitly allows.

This is not a theoretical gap. It is the structural blind spot in GKE’s security model for AI agents. Every native control—gVisor, Workload Identity, VPC-SC, the Agent Sandbox CRD—operates on the same assumption: that actions within configured permissions are legitimate. AI agents break that assumption because they generate code, call tools dynamically, and act on untrusted input in ways that look identical to normal operations. 

GKE has the most mature AI agent infrastructure among the major cloud providers. Google’s own blog on agentic AI and GKE describes the Agent Sandbox CRD as a “new Kubernetes primitive” specifically designed for agent workloads. That maturity is its advantage and its trap. Because GKE offers more native security layers than EKS or AKS, teams are more likely to believe they’re covered—and less likely to notice that none of those layers can distinguish between an agent performing its intended function and an agent being manipulated into performing an attacker’s function through allowed channels.

This guide is for GKE security engineers and platform teams who have already configured native controls and need to understand where they stop working for AI agent workloads specifically. It walks through four GKE-specific security layers, shows you the exact configuration decisions and blind spots at each layer, and demonstrates why runtime behavioral detection is the layer that closes the gap. For the underlying enforcement methodology—how to observe agent behavior, build baselines, and progressively enforce boundaries—see the complete progressive enforcement guide. For a vendor-evaluation framework covering these capabilities across clouds, see the AI workload security buyer’s guide.

The Agent Sandbox CRD: What It Gives You and Where It Stops

The Agent Sandbox CRD is GKE’s most distinctive AI agent feature—nothing equivalent exists on EKS or AKS. Built as an open-source Kubernetes primitive under SIG Apps, it provides a declarative API for managing isolated, stateful, single-container environments specifically designed for AI agent code execution. On GKE, it integrates with managed gVisor for kernel-level isolation, and on Autopilot clusters, gVisor sandboxing is enabled by default on all nodes.

What the Agent Sandbox CRD delivers

The CRD solves the code execution isolation problem well. You define a SandboxTemplate as a reusable blueprint, configure a SandboxWarmPool to maintain pre-initialized pods for sub-second startup, and use SandboxClaims to provision environments on demand. Each sandbox gets a stable identity, persistent storage that survives restarts, and lifecycle management including pausing, resuming, and scheduled deletion. Pod Snapshots let you checkpoint and restore sandbox state, bringing startup latency from minutes to seconds. Google’s codelab on deploying secure AI agents walks through the full setup.
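
If you drive these resources programmatically rather than through kubectl manifests, the pattern looks roughly like the sketch below, which uses the Kubernetes Python client to create a SandboxClaim. The API group, version, plural, and spec fields shown are assumptions for illustration only; check them against the Agent Sandbox CRD version you actually install.

```python
# Minimal sketch: claim a sandbox environment through the Kubernetes API.
# The group/version/plural and spec fields are illustrative assumptions,
# not the authoritative Agent Sandbox CRD schema.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

sandbox_claim = {
    "apiVersion": "agents.x-k8s.io/v1alpha1",   # assumed API group/version
    "kind": "SandboxClaim",
    "metadata": {"name": "analysis-agent-001", "namespace": "agents"},
    "spec": {
        "templateRef": {"name": "python-agent-template"},  # assumed field name
    },
}

api.create_namespaced_custom_object(
    group="agents.x-k8s.io",   # assumed
    version="v1alpha1",        # assumed
    namespace="agents",
    plural="sandboxclaims",    # assumed
    body=sandbox_claim,
)
```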

Under the hood, gVisor intercepts system calls through a user-space kernel, preventing container escapes and kernel exploits. An attacker who achieves code execution inside a sandboxed pod cannot escalate to the host node. For AI agents that generate and execute arbitrary code—which the OWASP Top 10 for LLM Applications identifies as a critical risk—this isolation is a necessary foundation.

Where the Agent Sandbox CRD stops

The CRD manages the execution environment. It does not monitor what happens inside it. This distinction—between isolation sandboxing and behavioral sandboxing—matters enormously for AI agents.

Consider a concrete example. Your AI data analysis agent runs inside an Agent Sandbox pod with gVisor isolation, a warm pool for fast provisioning, and Pod Snapshots for state management. The agent has Workload Identity credentials to access BigQuery for its job. An attacker injects a prompt that instructs the agent to query a different dataset—one the agent has never touched but its IAM role permits—and stage the results in a Cloud Storage bucket. Every action happens inside the gVisor sandbox. The CRD sees a running pod. gVisor sees valid syscalls. The agent has done something it was never intended to do, and the entire Agent Sandbox infrastructure is blind to it.

Search results are full of tutorials on how to set up the Agent Sandbox CRD—the Google codelab, the GKE documentation, deep dives from The New Stack and InfoQ. What none of them address is what happens after setup, when your agent is running inside that sandbox and gets manipulated into acting against your interests through perfectly valid operations. That’s the gap that requires runtime behavioral detection operating inside the sandbox environment—watching what the agent actually does, not just where it runs. For how this behavioral layer works across the full cloud stack, the cloud-native security overview explains the three-era evolution from container security to autonomous workload security.

Workload Identity Federation: The GKE-Specific IAM Decisions That Matter for AI Agents

Workload Identity Federation for GKE replaces static service account keys with short-lived tokens issued through the GKE metadata server. When a pod needs to access a Google Cloud API, the metadata server (running on each node at 169.254.169.254) exchanges the Kubernetes service account token for a short-lived OAuth2 token scoped to the bound Google Cloud identity. No keys to manage, no secrets to rotate, no credential leakage risk.
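
Client libraries perform this exchange automatically, but it is easy to observe from inside a pod. A minimal sketch, assuming the pod has a Workload Identity binding and the requests library available:

```python
# Minimal sketch: fetch the short-lived token the GKE metadata server issues
# for this pod's bound identity. Client libraries do this for you; the probe
# just makes the exchange visible.
import requests

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"}, timeout=5)
resp.raise_for_status()
token = resp.json()

# Short-lived OAuth2 token scoped to the bound Google Cloud identity.
print(token["token_type"], "token, expires in", token["expires_in"], "seconds")
```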

That’s the security improvement. Here are the GKE-specific configuration decisions where teams get AI agent scoping wrong.

The project-level role trap

Google’s own codelab for deploying secure AI agents on GKE grants the Kubernetes service account roles/aiplatform.user at the project level for Vertex AI access. This is the fast path, and it’s what most teams replicate. But roles/aiplatform.user at the project level lets the agent call inference on any endpoint in the project, list all models, and access all Vertex AI resources. For a single-purpose agent, this is wildly overprivileged.

What you should do instead: scope IAM roles to specific resources. For an agent that calls a single Vertex AI endpoint for inference, bind roles/aiplatform.user to that specific endpoint resource, not the project. For an agent that reads from one BigQuery dataset, grant roles/bigquery.dataViewer on that dataset, not the project. For Cloud Storage access, scope roles/storage.objectViewer to the specific bucket.

| Agent tool access | Common (overprivileged) grant | Correct (resource-level) grant |
| --- | --- | --- |
| Vertex AI inference | roles/aiplatform.user on project | roles/aiplatform.user on specific endpoint |
| BigQuery reads | roles/bigquery.dataViewer on project | roles/bigquery.dataViewer on specific dataset |
| Cloud Storage reads | roles/storage.objectViewer on project | roles/storage.objectViewer on specific bucket |
| Secret Manager access | roles/secretmanager.secretAccessor on project | roles/secretmanager.secretAccessor on specific secret |
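
Resource-level grants like the ones in the table can be applied programmatically as well as through gcloud. A minimal sketch for the Cloud Storage case, with the bucket name and principal as placeholders:

```python
# Minimal sketch: bind roles/storage.objectViewer on one bucket instead of
# the project. Bucket name and member are placeholders.
from google.cloud import storage

BUCKET = "agent-input-bucket"  # placeholder
MEMBER = "serviceAccount:analysis-agent@my-project.iam.gserviceaccount.com"  # placeholder

client = storage.Client()
bucket = client.bucket(BUCKET)

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({"role": "roles/storage.objectViewer", "members": {MEMBER}})
bucket.set_iam_policy(policy)
print(f"granted objectViewer on gs://{BUCKET} only")
```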

The node service account fallback

If Workload Identity Federation is not enabled on a node pool—or if a pod uses the default Kubernetes service account without a Workload Identity binding—the pod falls back to the node’s service account. On GKE Standard clusters, this is the Compute Engine default service account, which typically has Editor-level access to the project. One misconfigured node pool, and your AI agent pod has implicit project-wide write access to every Google Cloud service.

On GKE Autopilot, Workload Identity Federation is always enabled and this fallback cannot occur—one reason Autopilot is the safer default for AI agent workloads. On Standard clusters, verify that every node pool running agent pods has Workload Identity Federation enabled and that no agent deployment uses the default service account without an explicit binding. Posture tools can flag this misconfiguration, but posture management catches configuration state, not runtime abuse—a distinction that matters once the IAM binding is correct but the agent is still being manipulated.
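
A quick audit loop for Standard clusters might look like the following sketch, which assumes the google-cloud-container client library; project, location, and cluster names are placeholders:

```python
# Minimal sketch: flag node pools that would fall back to the node service
# account. Assumes the google-cloud-container client; names are placeholders.
from google.cloud import container_v1

PROJECT, LOCATION, CLUSTER = "my-project", "us-central1", "agent-cluster"  # placeholders

client = container_v1.ClusterManagerClient()
cluster = client.get_cluster(
    name=f"projects/{PROJECT}/locations/{LOCATION}/clusters/{CLUSTER}"
)

for pool in cluster.node_pools:
    mode = pool.config.workload_metadata_config.mode
    if mode != container_v1.WorkloadMetadataConfig.Mode.GKE_METADATA:
        print(f"node pool {pool.name}: GKE metadata server not enabled, "
              "pods without a binding use the node service account")
```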

The simplified IAM principal syntax—and its audit implications

Google recently simplified Workload Identity Federation so that IAM policies can directly reference Kubernetes service accounts as principals, eliminating the need for a separate Google Cloud service account and the iam.gke.io/gcp-service-account annotation. This is operationally cleaner, but it changes your audit surface. Instead of tracing a Google Cloud service account back to a Kubernetes binding, you’re now looking at KSA principals directly in IAM audit logs. Make sure your log analysis tooling recognizes and can filter by these principal formats.
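
One way to spot-check this is to query audit logs for the KSA principal directly. The sketch below assumes the documented principal identifier format and the principalSubject audit field; verify both against a real log entry from your project before building tooling on them.

```python
# Minimal sketch: list audit log entries attributed to a Kubernetes service
# account referenced directly as an IAM principal. The principal format and
# the principalSubject field are assumptions to verify against real entries.
import google.cloud.logging as gcp_logging

PROJECT_ID = "my-project"          # placeholder
PROJECT_NUMBER = "123456789012"    # placeholder
NAMESPACE, KSA = "agents", "analysis-agent"

principal = (
    f"principal://iam.googleapis.com/projects/{PROJECT_NUMBER}/locations/global/"
    f"workloadIdentityPools/{PROJECT_ID}.svc.id.goog/subject/ns/{NAMESPACE}/sa/{KSA}"
)

client = gcp_logging.Client(project=PROJECT_ID)
log_filter = f'protoPayload.authenticationInfo.principalSubject="{principal}"'

for i, entry in enumerate(client.list_entries(filter_=log_filter)):
    print(entry.timestamp, entry.log_name)
    if i >= 20:
        break
```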

What Workload Identity Federation cannot tell you

Even with perfect resource-level scoping, Workload Identity Federation checks the permission—not the pattern. If an AI agent’s bound identity has roles/bigquery.dataViewer on a dataset, the short-lived token works identically for a routine 500-row analytical query and a prompt-injection-driven full-table scan exfiltrating every row. The IAM layer sees a valid token and an allowed action. It has no concept of “this agent has never read this much data this fast from this table.”

That context requires runtime behavioral baselining—watching the agent’s actual query patterns over time and flagging deviations. ARMO’s Application Profile DNA builds exactly this baseline for each agent pod, learning which datasets, APIs, and endpoints the agent normally accesses so that anomalous access patterns trigger alerts even when the IAM policy permits the action. This is the observe-to-enforce workflow applied at the GKE identity layer: you observe what the agent actually does with its Workload Identity credentials, build a behavioral profile from that evidence, and then enforce boundaries that go beyond what IAM alone can express.
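
To make the distinction concrete, here is an illustrative toy baseline (not ARMO’s implementation) that tracks which datasets an agent queries and how much it reads, and flags exactly the deviations a valid IAM permission cannot express:

```python
# Illustrative toy, not ARMO's implementation: baseline which datasets an
# agent queries and how much it reads, then flag the deviations that a
# valid IAM permission cannot express.
from collections import defaultdict

class AgentBaseline:
    def __init__(self, volume_factor: float = 10.0):
        self.seen_datasets = set()
        self.max_rows = defaultdict(int)   # dataset -> largest row count seen
        self.volume_factor = volume_factor

    def learn(self, dataset: str, rows: int) -> None:
        """Record observations during the baseline period."""
        self.seen_datasets.add(dataset)
        self.max_rows[dataset] = max(self.max_rows[dataset], rows)

    def check(self, dataset: str, rows: int) -> list:
        """Return reasons to alert once enforcement begins."""
        alerts = []
        if dataset not in self.seen_datasets:
            alerts.append(f"agent has never queried dataset {dataset}")
        elif rows > self.max_rows[dataset] * self.volume_factor:
            alerts.append(f"{rows} rows far exceeds baseline of {self.max_rows[dataset]}")
        return alerts

baseline = AgentBaseline()
baseline.learn("analytics.daily_metrics", rows=500)
print(baseline.check("pii.customers", rows=10_000_000))
```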

VPC Service Controls: Perimeter Decisions for AI Agent Workloads

VPC Service Controls create a security perimeter around Google Cloud projects and services, restricting which contexts—networks, identities, devices—can access which APIs. For AI agent workloads on GKE, this prevents data access from untrusted locations and limits the blast radius of compromised credentials.

Which APIs to include in the perimeter

For AI agent workloads, the critical question is which services to include in your VPC-SC perimeter. At minimum, you should include the data services your agents access: BigQuery, Cloud Storage, and Secret Manager. You should also include Vertex AI if your agents call inference endpoints, and Artifact Registry if agents pull models or dependencies at runtime.

The decision most teams miss: Cloud Functions and Cloud Run. If your architecture uses these services as intermediaries—for example, an AI agent triggers a Cloud Function that processes results and writes them elsewhere—they need to be inside the perimeter. Otherwise, a compromised agent can use an allowed Cloud Function invocation as a perimeter escape path.

Dry-run mode reveals blocked traffic—not sanctioned abuse

Google recommends starting VPC-SC in dry-run mode, which simulates enforcement and writes logs to Cloud Logging without blocking traffic. This is good practice for tuning perimeter rules. But dry-run mode creates a specific false confidence for AI agent security: it shows you the traffic that would be blocked. It tells you nothing about malicious behavior that uses allowed APIs inside the perimeter.

Dry-run audit logs show API calls classified by whether they’d pass or fail the perimeter check. What you need for AI agent security is a different question entirely: among the calls that pass, which ones represent abnormal agent behavior? VPC-SC cannot answer that question. It was designed to enforce access boundaries, not to analyze behavioral patterns within those boundaries. That distinction—between declarative security controls and runtime behavioral detection—is the architectural divide that determines whether your AI agent stack has real protection or just configuration compliance.
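
If you do operationalize the dry-run logs, a starting point is pulling the VPC-SC audit entries out of Cloud Logging. The metadata type in this sketch is an assumption to confirm against your own entries:

```python
# Minimal sketch: pull VPC-SC audit entries (enforced and dry-run) from
# Cloud Logging. The metadata @type below is an assumption to confirm
# against your own log entries; inspect entry.payload for dryRun details.
import google.cloud.logging as gcp_logging

client = gcp_logging.Client(project="my-project")  # placeholder project

log_filter = (
    'protoPayload.metadata.@type='
    '"type.googleapis.com/google.cloud.audit.VpcServiceControlAuditMetadata"'
)

for i, entry in enumerate(client.list_entries(filter_=log_filter)):
    print(entry.timestamp, entry.log_name)
    if i >= 20:
        break
```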

The sanctioned exfiltration path

Here is a concrete GKE-specific exfiltration path that VPC-SC permits at every step. An AI agent running in a GKE Sandbox pod with Workload Identity credentials executes a BigQuery query against a sensitive dataset it’s allowed to read. It writes the results to a Cloud Storage bucket inside the perimeter using an allowed EXTRACT or EXPORT operation. A Cloud Workflow—also inside the perimeter—picks up new objects in that bucket and moves them to a partner-facing bucket with an egress rule exception. Every hop is policy-compliant. VPC-SC sees valid traffic between allowed services. This pattern mirrors the AI-mediated data exfiltration attack chains that appear across multiple threat frameworks, including MITRE ATLAS.

Runtime behavioral detection closes this gap by watching the agent’s actual data access patterns inside the perimeter. ARMO’s eBPF-based sensor monitors network connections, API call sequences, and data volumes from inside the pod—flagging when an agent that normally queries three tables starts scanning twelve, or when data staging patterns match known exfiltration sequences. The perimeter stays in place as a boundary control; behavioral detection provides the visibility within those boundaries that VPC-SC was never designed to offer.

Attack Walkthrough: Prompt Injection Through GKE’s Full Security Stack

Let’s walk through a realistic prompt injection attack against an AI agent running on GKE with all native controls properly configured. The point is not that the controls are misconfigured—they’re working exactly as designed. The point is that “working as designed” is not enough for AI agent workloads. 

Setup: A data analysis agent runs in an Agent Sandbox pod on a gVisor-enabled node pool. It uses Workload Identity Federation with a KSA bound to roles/bigquery.dataViewer on two datasets and roles/storage.objectCreator on one output bucket. VPC-SC perimeter includes BigQuery, Cloud Storage, Vertex AI, and Secret Manager. The agent calls Vertex AI for inference and writes analysis results to Cloud Storage.

Step 1 — Prompt injection via user input. The attacker submits a crafted prompt that instructs the agent to “summarize all customer records for the annual report.” The prompt is designed to redirect the agent’s data retrieval from its usual analytical dataset to the customer PII dataset—which lives in the same project and is covered by the same roles/bigquery.dataViewer grant.

Agent Sandbox CRD check: The pod is running in a gVisor sandbox with warm pool provisioning. The CRD manages the sandbox lifecycle. No alert—the CRD does not inspect agent instructions or API calls.

Step 2 — BigQuery query against an unintended dataset. The agent generates a BigQuery query against the customer PII dataset. The Workload Identity token is valid. The IAM policy permits the query.

Workload Identity check: Token exchange at 169.254.169.254 succeeds. The GKE metadata server issues a short-lived OAuth2 token. IAM validates the permission. No alert—the identity layer sees an authorized action.

Step 3 — Data staging to Cloud Storage. The agent writes query results (50,000 customer records) to the allowed output bucket using a BigQuery export job. The bucket is inside the VPC-SC perimeter.

VPC-SC check: Traffic stays within the perimeter. Source and destination are both allowed services in the same project. No alert—the perimeter sees valid intra-perimeter traffic.

Step 4 — The agent generates a Cloud Storage notification. The new object in the output bucket triggers a Pub/Sub notification (a standard Google Cloud data pipeline pattern) that initiates downstream processing—which happens to include an export to a partner-accessible bucket covered by a VPC-SC egress rule.

Result: 50,000 customer records exfiltrated. Every native GKE control passed. The Agent Sandbox CRD contained the execution environment. Workload Identity provided a valid, scoped token. VPC-SC enforced the perimeter. All controls functioned correctly. None of them were designed to ask: “Should this agent be querying this dataset?”

What runtime detection catches: ARMO’s eBPF sensor, running inside the GKE cluster, detects multiple anomalies in this sequence. The agent pod has never queried the customer PII dataset—its Application Profile DNA shows it normally accesses only two analytical datasets. The data volume written to Cloud Storage is an order of magnitude larger than the agent’s typical output. The BigQuery query pattern (full-table scan) differs from the agent’s normal analytical queries (aggregation with WHERE clauses). These signals correlate into a single attack story showing the full chain from anomalous data access to unusual staging behavior—giving the SOC the context to isolate the pod and tighten IAM roles before the downstream export completes. This is the same cross-layer correlation pattern that ARMO uses to detect AI-specific attack chains across application, container, Kubernetes, and cloud layers.

Where Security Command Center and Cloud Audit Logs Fall Short

GKE teams often assume that Security Command Center (SCC) and Cloud Audit Logs fill the behavioral detection gap. They don’t—at least not for AI agent workloads.

Security Command Center excels at posture management: misconfigured firewall rules, overprivileged service accounts, public buckets, vulnerability scanning. It can surface findings like “this service account has excessive permissions” or “this cluster has binary authorization disabled.” But SCC operates on configuration state, not runtime behavior. It cannot tell you that an AI agent is accessing a dataset it has never touched before, or that the agent’s process tree looks different than it did yesterday. The Latio Cloud Security Market Report formally identifies this as the architectural divide between posture tools and runtime-first platforms—and why the cloud security market is moving from CNAPP toward CADR (Cloud Application Detection and Response).

Cloud Audit Logs record API calls to Google Cloud services—who called what, when, and from where. These logs are essential for post-incident forensics. But they have two critical limitations for AI agent monitoring. First, they record individual API calls without behavioral context: a BigQuery query is a BigQuery query, whether it’s the agent’s routine analytical work or a prompt-injection-driven exfiltration. Second, the volume problem: AI agents generate dramatically more API activity than human users or traditional workloads. A single agent can produce more log events in an hour than a human operator generates in a week. Without behavioral baselining to distinguish normal from anomalous, audit logs become a haystack without a magnet.

This is where ARMO’s approach differs structurally. Instead of analyzing cloud-layer audit logs after the fact, ARMO’s eBPF sensor operates inside the cluster at the kernel level—watching processes, file access, network connections, and API call patterns from the pod itself. It correlates these signals with Kubernetes metadata (which pod, which namespace, which service account, which node) and builds per-agent behavioral baselines. When deviations occur, ARMO generates an attack story that connects application-layer agent context with system-level events—the same full-stack correlation approach validated in production when ARMO detected a multi-stage crypto-mining attack against a Kubernetes honeypot in real time. For AI agent workloads on GKE, the platform adds the agent-specific context—prompts, tool invocations, data access patterns—that turns fragmented signals into a single narrative explaining what happened and why it matters.

GKE Native Controls vs. Runtime Behavioral Detection: A GKE-Specific Comparison

| GKE service / control | What it protects | What it misses for AI agents | What runtime detection adds |
| --- | --- | --- | --- |
| Agent Sandbox CRD + gVisor | Kernel isolation; container escape prevention; ephemeral environment lifecycle | All application-layer behavior inside the sandbox: API calls, data access, tool invocations | Process lineage tracking, unexpected binary execution, anomalous syscall patterns inside sandboxed pods |
| Workload Identity Federation | Credential hygiene; short-lived tokens; IAM scoping | Anomalous access patterns within allowed permissions; unusual data volumes; atypical query targets | Behavioral baselining of which APIs and datasets each agent normally accesses; deviation alerting |
| VPC Service Controls | Perimeter enforcement; unauthorized context blocking | Data exfiltration through allowed APIs and sanctioned paths inside the perimeter | Network behavior analysis; data staging pattern detection; unusual connection sequences |
| Security Command Center | Posture management; misconfiguration detection; vulnerability scanning | Runtime behavior; agent-specific anomalies; behavioral drift | Continuous behavioral monitoring with AI workload context; per-agent anomaly detection |
| Cloud Audit Logs | API call recording; post-incident forensics | Behavioral context for individual calls; normal vs. anomalous distinction at agent level | Application Profile DNA that baselines normal behavior; attack story correlation across signals |

Implementation: What to Validate Beyond Google’s Documentation

Google’s codelab and documentation cover the setup steps for Agent Sandbox, Workload Identity, and VPC-SC. This section covers what to validate after setup—the tests that confirm your controls actually catch AI-agent-specific threats, and where to deploy runtime detection to close the remaining gaps.

Agent Sandbox CRD validation

After deploying the Agent Sandbox controller and creating your SandboxTemplate and SandboxWarmPool, don’t just verify that sandboxes provision correctly. Test the isolation boundary against agent-specific abuse patterns. Exec into a sandboxed pod and attempt to read the Kubernetes service account token at /var/run/secrets/kubernetes.io/serviceaccount/token. gVisor should allow this (it’s a valid file read), but the question is whether your monitoring detects it. If your security tooling doesn’t alert on sensitive file access inside a sandboxed pod, you have a visibility gap at Layer 1.
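
The probe itself is trivial; the value is in what your monitoring does with it. A minimal sketch:

```python
# Minimal validation probe: from inside a sandboxed agent pod, read the
# projected service account token. gVisor permits this file read; the test
# is whether your monitoring raises an alert on it.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

with open(TOKEN_PATH) as f:
    token = f.read()

print(f"read {len(token)} bytes from {TOKEN_PATH} - did anything alert?")
```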

Workload Identity validation for AI agents

Go beyond confirming that allowed calls succeed and disallowed calls fail. Test the scoping boundary: from your agent pod, attempt to access a BigQuery dataset that the agent’s IAM role can reach but that the agent should never touch in normal operation. If the call succeeds (which it will, because IAM permits it), that’s the exact gap runtime detection needs to cover. Document which datasets, buckets, and endpoints your agent legitimately accesses during its baseline period—this becomes the behavioral profile that ARMO’s Application Profile DNA learns automatically.
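
A minimal probe for this test, with the project, dataset, and table names as placeholders:

```python
# Minimal validation probe: query a dataset the IAM role can reach but the
# agent should never touch in normal operation. Names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
query = "SELECT COUNT(*) AS n FROM `my-project.pii_dataset.customers`"  # placeholder

rows = list(client.query(query).result())
print("IAM allowed the query; rows counted:", rows[0]["n"])
```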

VPC-SC validation for AI workflows

Test the sanctioned path exfiltration pattern described in the attack walkthrough. From your agent pod, run a BigQuery query that writes results to your allowed Cloud Storage bucket, and verify whether your monitoring infrastructure detects the data movement as unusual. If it doesn’t, you’ve confirmed the gap that runtime behavioral detection fills. Also verify that your VPC-SC audit logs in Cloud Logging are actually being analyzed—many teams enable the perimeter but never operationalize the logs.
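
A minimal sketch of that staging probe, reusing the BigQuery client and treating the table and bucket names as placeholders:

```python
# Minimal validation probe: reproduce the sanctioned staging path by
# exporting a readable table into the allowed output bucket, then check
# whether monitoring flags the data movement. Names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
source = "my-project.analytics.daily_metrics"                  # placeholder table
destination = "gs://agent-output-bucket/staging/export-*.csv"  # placeholder bucket

extract_job = client.extract_table(source, destination)
extract_job.result()  # waits for the export to complete
print("export finished:", extract_job.state)
```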

Deploy runtime behavioral detection

Deploy ARMO’s sensor via Helm chart into your GKE cluster. Label your AI agent pods for workload identification. Allow a baseline learning period (typically 7–14 days for AI agents, depending on behavioral variability). Then simulate anomalous behavior: have an agent query a dataset outside its normal pattern, spawn an unexpected subprocess, or make network connections to internal services it has never contacted. Verify that ARMO raises alerts with full behavioral context—process lineage, network analysis, and deviation from the established Application Profile DNA baseline.
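
A simple test harness for that simulation might look like the sketch below; the dataset name is a placeholder and the commands are deliberately benign:

```python
# Minimal test harness sketch: after the baseline period, generate two
# deviations from inside the agent pod and confirm alerts fire with full
# behavioral context. Dataset name is a placeholder.
import subprocess
from google.cloud import bigquery

# 1. Spawn a subprocess the agent never executed during baselining.
subprocess.run(["uname", "-a"], check=True)

# 2. Query a dataset outside the agent's learned access pattern.
client = bigquery.Client()
client.query(
    "SELECT COUNT(*) FROM `my-project.pii_dataset.customers`"  # placeholder
).result()

print("anomalies generated - verify alerts include process lineage and query context")
```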

The progressive enforcement workflow—observe, baseline, selectively enforce, then expand to full least privilege—is covered in detail in the complete progressive enforcement guide. On GKE specifically, the key advantage is that ARMO’s eBPF-based enforcement complements the native controls rather than replacing them. You keep Agent Sandbox CRD for code execution isolation, Workload Identity for credential hygiene, and VPC-SC for perimeter enforcement. ARMO adds the behavioral layer that watches what agents actually do within all of those boundaries—at 1–2.5% CPU and 1% memory overhead, which is within the performance budget most GKE platform teams accept for security tooling. The platform is built on Kubescape, the open-source cloud-native security project used by over 100,000 organizations.

Close the Gap Between GKE Configuration and GKE Detection

GKE’s security stack is the strongest foundation for AI agent workloads in any major cloud. Agent Sandbox CRD with managed gVisor gives you code execution isolation that EKS and AKS can’t match natively. Workload Identity Federation eliminates credential management risk. VPC Service Controls enforce data perimeters at the API level.

These are necessary controls. They are not sufficient controls—not for workloads that generate code, call tools dynamically, and act on untrusted input. The gap between “correctly configured” and “actually protected” is the behavioral detection layer: watching what agents do inside the boundaries you’ve set, and catching deviations before they become incidents. This is the architectural shift from posture-only security to runtime-first defense that defines the next era of cloud-native security.

Runtime behavioral detection turns GKE’s native security stack from a set of preventive controls into a complete defense-in-depth architecture. It shows you what’s actually running, what actually matters, and where an AI agent is being pushed beyond its normal behavior—even when every native control reports that everything is fine.

See how ARMO adds runtime behavioral detection to your GKE security stack.

FAQs

How does the Agent Sandbox CRD relate to runtime behavioral detection?

The Agent Sandbox CRD provides code execution isolation—ephemeral environments with gVisor sandboxing, warm pools, and lifecycle management. Runtime behavioral detection monitors what agents do inside those sandboxes. They’re complementary: the CRD controls where the agent runs, behavioral detection controls what it’s allowed to do once running.

Can Workload Identity Federation prevent AI agent data exfiltration?

Workload Identity Federation enforces IAM permissions with short-lived tokens, which eliminates credential theft risk. But it cannot stop an agent from exfiltrating data through APIs the agent is legitimately allowed to call. You need runtime behavioral detection to spot unusual access patterns within allowed permissions.

Should I use GKE Autopilot or Standard for AI agent workloads?

Autopilot enables Workload Identity Federation by default and runs gVisor on all nodes, eliminating two common misconfiguration vectors. Standard gives you more control over node pool configuration and runtime class selection. For AI agent security, Autopilot’s defaults are safer unless you need specific Standard-mode capabilities.

What does VPC-SC dry-run mode tell me about AI agent security?

Dry-run mode shows which API calls would be blocked by your perimeter. It does not reveal malicious behavior using allowed APIs inside the perimeter. For AI agent security, dry-run helps tune perimeter rules but provides no visibility into agent behavioral anomalies. The runtime vs. declarative comparison explains why this architectural distinction matters for AI workloads specifically.

How does ARMO integrate with GKE’s native security stack?

ARMO deploys as an eBPF-based sensor via Helm chart, running alongside your GKE native controls. It adds behavioral baselining and anomaly detection on top of Agent Sandbox CRD isolation, Workload Identity scoping, and VPC-SC perimeter enforcement—closing the detection gap that native controls leave open for AI agent workloads. For a full platform overview, see the AI workload security buyer’s guide or request a demo.
