Corey O'Connor | December 18, 2025

Like Ashtrays on Airplanes: Why Every Company Must Secure AI Agents—Even If They Ban Them

AI agents are rapidly moving into enterprise core environments, running on servers, virtual machines, and Kubernetes clusters where traditional access controls fall short. As these autonomous workloads expose new APIs and machine-to-machine pathways, organizations need a way to extend Zero Trust beyond user endpoints and into the infrastructure itself. Agentic AI Core Protection, a new capability within AppGate ZTNA, applies identity-centric controls at the point of execution so AI innovation can move forward without introducing unmanaged risk.

For decades, inflight smoking has been banned. Yet every commercial airplane still has ashtrays.

Not because airlines expect passengers to light up—but because safety engineers understand a fundamental truth: policy alone does not eliminate behavior. If someone breaks the rule, it’s safer to control the risk than pretend it doesn’t exist. The same principle now applies to AI agents inside the enterprise.

Organizations may set guidelines for responsible AI use, but the reality is far less orderly. Many teams now have explicit mandates to explore AI capabilities—and to move quickly—while security controls lag behind. The result is a wave of rapid, decentralized experimentation happening inside environments never originally designed for autonomous workloads. It’s a productive shift, but without guardrails, it introduces real and growing exposure. Ignoring that reality doesn’t reduce risk. It increases it.

AI Agents Are Already Inside the Enterprise Core

AI agents are no longer confined to SaaS tools or external APIs. Enterprises are deploying them directly inside servers, virtual machines, and Kubernetes clusters—often to meet security, performance, or compliance requirements that external services can’t satisfy. These agents may automate workflows, analyze data, perform inference, or interact with internal systems at machine speed.

But this shift introduces a critical problem.

AI agents frequently expose APIs, service ports, or web interfaces. They rely on credentials, tokens, and machine-to-machine communication. And unlike human users, they don’t log in through browsers or sit behind traditional endpoint protections. Most Zero Trust Network Access (ZTNA) solutions were never built to secure this kind of workload.

As a result, enterprises face a growing gap between where AI runs and where Zero Trust enforcement stops.

Why Banning AI Agents Doesn’t Solve the Problem

Many organizations respond by tightening policies: blocking tools, issuing mandates, or discouraging any experimentation that falls outside of strict guardrails. But just as with inflight smoking, prohibition doesn’t eliminate risk when behavior persists despite the rules.

Employees and teams will continue to experiment. Developers will deploy local models. Automation will creep into core systems. And without proper controls, that experimentation creates:

  • Invisible attack surfaces from exposed APIs and services
  • Unmanaged machine identities operating outside identity governance
  • Lateral movement paths between workloads
  • Compliance blind spots in regulated environments

The issue isn’t whether AI agents should exist. The issue is whether they’re secured by design or left to operate in the shadows.

Zero Trust Must Extend Beyond Human Users

Traditional ZTNA focuses on users, devices, and remote access. That model breaks down when the “user” is an autonomous workload running headless inside core infrastructure.

To secure AI agents effectively, Zero Trust must operate at the network and workload layer, enforcing identity-based access for both humans and machines. That means:

  • Verifying machine identity, not just user identity
  • Applying least-privilege entitlements to APIs and services
  • Preventing unauthorized lateral movement between workloads
  • Cloaking infrastructure so services remain invisible until authenticated
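As a simplified illustration of least-privilege enforcement between workloads, a standard Kubernetes NetworkPolicy can restrict which pods may reach an AI agent’s API at all. This is a generic Kubernetes sketch, not an AppGate-specific configuration; the namespace, labels, and port are hypothetical:

```yaml
# Hypothetical example: only pods labeled app=orchestrator may reach the
# AI agent's API port; all other ingress to the agent pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ai-agent-least-privilege
  namespace: ml-workloads
spec:
  podSelector:
    matchLabels:
      app: inference-agent
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: orchestrator
      ports:
        - protocol: TCP
          port: 8443
```

Network-layer policies like this constrain lateral movement, but they key off labels rather than verified identity—which is why identity-centric enforcement at the workload layer is the harder and more important part of the problem.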

This is where Agentic AI Core Protection, a new capability within AppGate ZTNA, comes into play.

Securing AI Agents at the Core with AppGate ZTNA

Agentic AI Core Protection extends Zero Trust directly into servers, VMs, and Kubernetes clusters—where AI agents actually run. Rather than relying on perimeter controls or cloud brokers, AppGate ZTNA enforces security at the point of execution.

Key capabilities include:

  • Linux Headless Client to enforce ZTNA on servers and VMs without a UI
  • Kubernetes-native enforcement using sidecar and node-level controls
  • Identity-centric, dynamic access policies tied to role, posture, and context
  • Infrastructure cloaking using Single Packet Authorization (SPA) to eliminate reconnaissance

Every interaction—human or machine—is authenticated, authorized, and continuously evaluated before access is granted. AI agents can operate autonomously, but only within the boundaries explicitly defined by policy.
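To make the cloaking idea concrete, here is a minimal sketch of the core mechanism behind Single Packet Authorization: a service stays dark and drops all traffic unless a single packet proves knowledge of a shared secret. This is an illustrative toy, not AppGate’s implementation—real SPA (as in production ZTNA gateways) adds encryption, replay protection, and source validation. The key and client ID below are hypothetical:

```python
import hashlib
import hmac
import struct
import time

# Hypothetical pre-shared key for illustration only.
SHARED_KEY = b"example-pre-shared-key"

def build_spa_packet(client_id: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Client side: timestamp + client id, authenticated with HMAC-SHA256."""
    payload = struct.pack(">Q", int(time.time())) + client_id
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_spa_packet(packet: bytes, key: bytes = SHARED_KEY,
                      max_age_s: int = 30) -> bool:
    """Gateway side: silently reject unless the HMAC and timestamp check out."""
    if len(packet) < 8 + 32:
        return False
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False
    (ts,) = struct.unpack(">Q", payload[:8])
    return abs(time.time() - ts) <= max_age_s
```

Until a packet verifies, the gateway sends nothing back—so port scans and reconnaissance see no service at all, which is what “infrastructure cloaking” means in practice.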

The Business Case for “Ashtrays” in AI Security

Securing AI agents isn’t about encouraging risky behavior. It’s about making robust security attainable so it supports and strengthens the work instead of becoming a burden that gets ignored.

By extending Zero Trust to agentic workloads, organizations can:

  • Reduce exposure from shadow AI and unsanctioned experimentation
  • Prevent data exfiltration and unauthorized internal access
  • Maintain compliance across hybrid and multi-cloud environments
  • Enable innovation without slowing development teams

Just like ashtrays on airplanes, these controls don’t signal approval. They signal responsibility.

Secure Innovation Starts with Architectural Honesty

AI agents are becoming part of the enterprise whether policies keep up or not. The organizations that succeed won’t be the ones that pretend otherwise—they’ll be the ones that design for it.

Agentic AI Core Protection gives security teams a way to extend Zero Trust where it’s needed most: inside the core, at machine speed, and without blind spots. If AI agents are inevitable, securing them should be too.

Explore how Agentic AI Core Protection secures AI agents inside the enterprise core.
