The NHI Security Illusion: Why Your Tools Detect Everything and Protect Nothing

Adi Chemoul VP Marketing


The Non-Human Identity Crisis


Over the past decade, cloud adoption and API-first architectures have exploded. Every microservice, CI/CD pipeline, third-party integration, and automation script requires credentials to function. Today’s organizations manage tens of thousands of API keys, service account tokens, certificates, and secrets, collectively known as Non-Human Identities (NHIs).

Unlike human identities, NHIs proliferate unchecked, rarely expire, and often hold excessive privileges. When GitGuardian's research uncovered more than 10 million secrets exposed on GitHub, and when CircleCI's breach compromised thousands of customer secrets, the industry woke up to a critical gap: we had no systematic way to manage, monitor, or secure non-human identities at scale.

This realization spawned a new category of NHI security tools. But three years in, organizations are still experiencing breaches. The problem? These tools aren’t solving the actual problem.

Why Current NHI Tools Are Failing

1. Limited Visibility: Log-Based Detection Misses the Full Picture

Most NHI tools rely on scanning logs, code repositories, and configuration files to detect exposed secrets. This approach has fundamental limitations:

  • Integration-dependent blind spots – They only see what they’re integrated with. Secrets in proprietary systems, legacy applications, or new SaaS tools remain invisible.
  • Point-in-time snapshots – Log scanning provides historical data, not real-time awareness of active secrets in production.
  • Incomplete coverage requires tool sprawl – Organizations deploy multiple complementary tools (repo scanners, log analyzers, CSPM tools) just to achieve partial visibility, creating fragmented insights and operational complexity.

The result: You’re blind to a significant portion of your NHI attack surface.

2. Static Risk Posture: Not Grounded in Runtime Reality

Current tools assess risk based on static configurations: whether a secret is overprivileged, improperly scoped, or detected in a repository. But they lack runtime context:

  • No visibility into actual usage – Is this API key actively being used? When was it last called? From where?
  • No understanding of exploitability – Which secrets, if compromised, could actually cause damage versus dormant credentials with no real-world impact?
  • No prioritization based on business context – Not all exposed secrets carry equal risk, but tools treat them uniformly, flooding teams with thousands of “critical” findings.

The result: Security teams drown in alerts they can’t prioritize, while truly critical risks hide in the noise.
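The missing runtime context can be made concrete with a short sketch. The field names and scoring weights below are illustrative assumptions, not any tool's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class Secret:
    name: str
    scope: str                     # e.g. "admin" or "read-only"
    exposed_in_repo: bool
    last_used: Optional[datetime]  # None = never observed in use

def runtime_risk(s: Secret, now: datetime) -> int:
    """Score a secret using live usage signals, not just static config."""
    score = 0
    if s.exposed_in_repo:
        score += 3   # static signal: it leaked somewhere
    if s.scope == "admin":
        score += 3   # static signal: over-privileged
    if s.last_used is not None and now - s.last_used < timedelta(days=7):
        score += 4   # runtime signal: actively used, hence actually exploitable
    return score

now = datetime.now(timezone.utc)
dormant = Secret("legacy-ci-key", "admin", True, None)
active = Secret("prod-db-key", "admin", True, now - timedelta(hours=2))

# A purely static scanner flags both as identical "criticals";
# runtime context separates the live risk from the noise.
assert runtime_risk(active, now) > runtime_risk(dormant, now)
```

The point of the sketch: two secrets that look identical on paper diverge sharply once usage data enters the score.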

3. No Effective Remediation: Jira Tickets Aren’t Security

Perhaps the most glaring failure: every existing tool treats opening a Jira ticket as “remediation.”

Here’s what actually happens:

  • Tool detects an exposed secret
  • Creates a ticket assigned to a developer
  • Developer is already buried in work
  • Ticket sits in backlog for weeks or months
  • Secret remains exposed and exploitable

This isn’t remediation; it’s offloading responsibility and hoping someone eventually gets to it. Meanwhile:

  • Developers lack context to prioritize the ticket
  • Manual secret rotation is complex and error-prone
  • Teams fear breaking production, so they delay
  • The “remediated” number in dashboards is a fiction

The result: Organizations have impressive detection metrics but unchanged security outcomes.

4. The Treadmill Problem: Creation Outpaces Remediation

Even when teams heroically remediate dozens of exposed secrets, hundreds more are being created simultaneously:

  • Developers spin up new services with hardcoded credentials
  • CI/CD pipelines generate new API keys
  • Third-party integrations add more service accounts
  • Shadow IT proliferates unmanaged secrets

You’re running on a treadmill that’s speeding up. No matter how fast you remediate, the backlog grows. This approach cannot win long-term; it’s mathematically unsustainable.

The result: The NHI problem compounds over time despite significant investment in security tools.

So What Do Teams End Up Doing?

Recognizing these gaps, organizations attempt to solve NHI security by combining multiple tools:

  • NHI scanners – These tools provide limited visibility, primarily detecting only what they are directly connected to. They rely on log ingestion and can become costly for customers at scale.
  • Vault or Secrets Manager – Almost every organization has one (or multiple), but these are storage solutions, not security tools. They’re safes where you store secrets; they don’t tell you which secrets are exposed, overprivileged, or actively exploited.
  • CSPM tools – Offer limited visibility mostly for major cloud providers (AWS, GCP, Azure) and a limited set of supported applications, with no remediation capabilities, just more alerts to manually triage.

But this Frankenstein approach creates new problems:

  • Fragmented visibility – Each tool shows a different piece of the puzzle; no unified view of your actual NHI risk
  • Alert fatigue – Multiple tools generating overlapping alerts with no centralized prioritization
  • No coordinated remediation – Each tool operates in isolation; rotating a secret in Vault doesn’t update the scanner’s findings
  • Operational overhead – Security teams spend more time managing tools than managing risk

Companies experiencing major NHI-related breaches were already using these tools. Despite deploying scanners, vaults, and CSPM platforms, their non-human identities remained exploitable. Detection without prevention leaves the door wide open.

A New Approach Is Required

The fundamental problem is that existing tools treat NHI security as a detection problem when it’s actually a governance and access control problem.

We don’t solve human identity security with scanning tools. We solve it with centralized identity governance platforms that enforce least-privilege access, monitor usage in real-time, and automate lifecycle management.

The same principles apply to non-human identities:

What’s Needed: A Unified NHI Governance Platform

  1. Centralized Control Across All NHI Types
  • API keys, service accounts, OAuth tokens, certificates, and database credentials, all managed in one platform
  • Works across cloud providers (AWS, GCP, Azure), SaaS applications, on-prem systems, and custom infrastructure
  2. Real-Time Runtime Visibility
  • See which secrets are actively used, when, and by what
  • Understand actual exploitability and business impact
  • Risk scoring based on live data, not static configurations
  3. Policy-Based Governance
  • Apply cloud IAM security principles (least privilege, ephemeral access, policy enforcement) to ALL non-human identities
  • Centralized policy engine that works uniformly across your entire tech stack
  4. Built-In Remediation
  • Don’t create tickets; fix the problem automatically
  • Instead of rotating compromised secrets, revoke excessive permissions and enforce identity-based access policies
  • Zero additional work for development teams
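As an illustration of "fix the problem automatically," here is a minimal sketch of a remediation loop that acts instead of ticketing. The `FakeIAM` client and the finding schema are invented for this example; a real control plane would expose equivalent revoke and narrow-scope operations:

```python
class FakeIAM:
    """Stand-in for a real control plane; records what it was asked to do."""
    def __init__(self):
        self.actions = []
    def revoke_credential(self, identity):
        self.actions.append(("revoke", identity))
    def narrow_scope(self, identity, scopes):
        self.actions.append(("narrow", identity, tuple(scopes)))

def remediate(finding, iam):
    """Act on a finding directly instead of queueing a ticket."""
    identity = finding["identity"]
    if finding["type"] == "unused_secret":
        iam.revoke_credential(identity)  # dormant credential: remove outright
        return "revoked"
    if finding["type"] == "excessive_scope":
        iam.narrow_scope(identity, finding["needed_scopes"])  # least privilege
        return "scoped-down"
    # Anything the engine can't fix safely still reaches a human,
    # but as the exception rather than the default workflow.
    return "escalated"

iam = FakeIAM()
assert remediate({"type": "unused_secret", "identity": "svc-report"}, iam) == "revoked"
assert iam.actions == [("revoke", "svc-report")]
```

The design choice worth noting: the ticket queue becomes the fallback path for genuinely ambiguous cases, not the default destination for every finding.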

The Bottom Line

Existing NHI tools are failing because they’re treating the symptom, not curing the disease. They’re detection engines in a world that needs governance platforms.

Organizations need a centralized NHI governance and control platform, one that applies proven identity security principles across all non-human access, finds and remediates risks automatically, and scales with the explosive growth of machine identities.

It’s time to stop detecting the problem and start preventing it.

Still Using Secrets?

Let's Fix That.

Get a Demo

AI Agents Are Getting Powerful. What’s Stopping Them from Exfiltrating Your Data?

Rita Katzir VP Product


AI agents are rapidly moving from experimental tools to first-class actors inside production environments. They call LLM APIs, orchestrate workflows through MCP servers, query databases like Postgres and Snowflake, and trigger actions in third‑party SaaS platforms such as Stripe.

But from a security perspective, there’s an uncomfortable truth most organizations haven’t fully confronted yet:

AI agents are operating with the same static credentials model we already know is broken, only now at machine speed and machine scale.

For CISOs, this isn’t just another secrets management problem. It’s a new, amplified data‑exfiltration risk surface.

The Hidden Risk: AI Agents Inherit the Worst of Static Credentials

Today, most AI agents authenticate exactly like legacy services:

  • Long‑lived API keys for LLM providers
  • Static database credentials for Postgres or Snowflake
  • Hard‑coded SaaS tokens for Stripe or internal APIs
  • Broad permissions granted “just in case” the agent needs them

From a risk standpoint, this creates a perfect storm:

  • No identity context – Credentials don’t know who is using them or why
  • No execution awareness – Access looks the same whether it’s a valid task or a malicious prompt
  • No blast‑radius control – A single leaked key can unlock massive datasets
  • No runtime enforcement – Once issued, the key works everywhere, all the time

When an AI agent is compromised, whether through prompt injection, model manipulation, supply‑chain risk, or simple misconfiguration, the organization has no meaningful way to contain the damage.

This is how silent data exfiltration happens.

Why Traditional Controls Fail Against Agentic Access

Security teams often try to compensate by layering controls around static credentials:

  • Network restrictions
  • Token rotation policies
  • Monitoring and anomaly detection

These help, but they don’t solve the core issue.

Static credentials are non‑contextual by design. They can’t express:

  • Which agent is calling the service
  • What task the agent is executing
  • What data should be accessible right now
  • Whether the request violates business or security intent

For human identities, we solved this years ago with identity‑aware access, conditional policies, and least privilege.

For AI agents, most organizations are still stuck in 2010.

The Shift: From Secrets to Policy‑Based Agent Identity

Preventing data exfiltration by AI agents requires a fundamental shift:

Stop authenticating agents with static secrets. Start authorizing them with policy‑based identity.

In a policy‑based model:

  • AI agents authenticate using a strong, verifiable runtime identity
  • Access is granted dynamically, not pre‑embedded in code
  • Every request is evaluated against real‑time policy
  • Permissions are scoped to task, resource, and time

Instead of asking:

“Does this API key look valid?”

The system asks:

“Should this agent, performing this action, access this resource right now?”

That question is the difference between control and blind trust.
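A toy policy engine makes the contrast concrete. The policy shape, the agent identifier, the resource string, and the business-hours rule below are all illustrative assumptions, not a real product's schema:

```python
from datetime import datetime, time, timezone

# Illustrative policy store: who may do what, to which resource, and when.
POLICIES = [
    {
        "agent": "spiffe://acme/agents/billing-reporter",
        "action": "read",
        "resource": "snowflake:finance.invoices",
        "allowed_hours": (time(6, 0), time(20, 0)),  # business hours only
    },
]

def authorize(agent: str, action: str, resource: str, now: datetime) -> bool:
    """'Should this agent, performing this action, access this resource right now?'"""
    for p in POLICIES:
        if (p["agent"], p["action"], p["resource"]) == (agent, action, resource):
            start, end = p["allowed_hours"]
            return start <= now.time() <= end
    return False  # default deny: no matching policy means no access

agent = "spiffe://acme/agents/billing-reporter"
noon = datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc)
assert authorize(agent, "read", "snowflake:finance.invoices", noon)
assert not authorize(agent, "write", "snowflake:finance.invoices", noon)  # no policy: deny
```

Note the default-deny stance: an API-key check answers "is this string valid?", while this check fails closed for any agent, action, resource, or time it was never explicitly told to allow.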

Applying Policy‑Based Access to LLMs, MCP, and SaaS

Policy‑based identity isn’t theoretical; it applies directly to the systems AI agents already touch.

LLM Services

  • Limit which models an agent can access
  • Restrict prompt and response scopes
  • Enforce usage boundaries per task or environment
  • Prevent cross‑tenant or cross‑context leakage

MCP and Internal Services

  • Bind agent identity to specific workflows
  • Prevent lateral movement between services
  • Enforce service‑to‑service least privilege dynamically

Databases and Data Platforms

  • Grant just‑in‑time access to Postgres or Snowflake
  • Restrict queries to approved schemas or datasets
  • Automatically revoke access when the task completes
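The just‑in‑time Postgres step above can be sketched concretely. Postgres itself supports a hard credential expiry via the `VALID UNTIL` clause of `CREATE ROLE`; everything else here (the broker function, the naming convention, read-only scoping) is a hypothetical sketch, not a product API:

```python
import secrets
from datetime import datetime, timedelta, timezone

def jit_postgres_grant(task_id: str, schema: str, ttl_minutes: int = 15):
    """Emit SQL for a short-lived, narrowly scoped Postgres role.

    Sketch only: in practice an access broker would execute this against
    the database and hand the credential to the agent for one task.
    """
    role = f"agent_{task_id}"
    password = secrets.token_urlsafe(24)
    expires = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    sql = [
        # Postgres enforces the expiry itself; no cleanup job required.
        f"CREATE ROLE {role} LOGIN PASSWORD '{password}' "
        f"VALID UNTIL '{expires.isoformat()}';",
        f"GRANT USAGE ON SCHEMA {schema} TO {role};",
        # Read-only: the agent's task needs SELECT, so that is all it gets.
        f"GRANT SELECT ON ALL TABLES IN SCHEMA {schema} TO {role};",
    ]
    return role, password, sql

role, password, sql = jit_postgres_grant("invoice-export", "finance")
```

Even if the password leaks mid-task, the credential is useless once the TTL elapses and can only read one schema while it lives.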

Third‑Party SaaS (e.g., Stripe)

  • Scope actions to specific operations (read vs. write vs. execute)
  • Prevent high‑impact actions outside approved flows
  • Eliminate long‑lived tokens embedded in agent logic

The result: even if an agent is manipulated, its ability to exfiltrate data is structurally constrained.

Why This Matters to CISOs Now

AI agents change the economics of risk.

They:

  • Operate continuously
  • Act autonomously
  • Chain multiple systems together
  • Access high‑value data at scale

A single compromised agent can do more damage, faster, than dozens of traditional workloads and services.

From a CISO perspective, this raises hard questions:

  • Can we prove which agent accessed sensitive data?
  • Can we enforce least privilege at runtime, not design time?
  • Can we contain an incident without taking systems offline?
  • Can we show auditors that agent access is governed and intentional?

Static credentials cannot answer these questions.

Policy‑based identity can.

The Solution: How Hush Enables Secure Agentic Access

Preventing data exfiltration by AI agents requires more than better secrets hygiene. It requires eliminating secrets as the primary control plane altogether.

Hush enables organizations to shift AI agents from secret-based access to policy-based identity, fundamentally changing how agentic access is granted, evaluated, and enforced at runtime.

Instead of embedding long-lived API keys, tokens, or database credentials into agent logic, Hush:

  • Establishes a strong runtime identity for each AI agent
  • Replaces static secrets with just-in-time, policy-evaluated access
  • Authorizes every request based on who the agent is, what it is doing, and what it should access

This shift is critical for stopping data exfiltration.

With policy-based access enforced by Hush:

  • AI agents never hold standing credentials that can be leaked, reused, or abused
  • Access is narrowly scoped to approved actions, data sets, and services
  • Permissions automatically expire when the task or context ends
  • Compromised or manipulated agents are structurally limited in what data they can reach

By removing static secrets from the equation and enforcing identity-driven, runtime policy, Hush turns AI agents from an uncontrolled data exfiltration risk into governed, auditable non-human identities.

This is not an incremental improvement.

It is the difference between hoping an AI agent behaves – and ensuring it cannot cause harm even if it doesn’t.


Build Your Security for Assume Breach, Not for Good Hygiene

Adi Chemoul VP Marketing


In the first half of 2024 alone, the cybersecurity landscape was rocked by high-profile incidents, including the Snowflake data breach and major compromises at Microsoft, that shared a common, devastating thread: stolen credentials and compromised secrets. These weren’t sophisticated “zero-day” exploits of technical flaws; they were attackers simply “logging in” using valid, but stolen, identities to compromise entire organizations.

For years, the industry has preached “cyber hygiene”, the digital equivalent of brushing your teeth: use strong passwords, patch your systems, and don’t click suspicious links. While essential, hygiene is no longer enough to serve as a strategy.

The problem with the “cyber hygiene” metaphor is that it suggests a simple pass or fail: either your credentials are clean and you’re safe, or they’re dirty and you’re exposed. In reality, keys and tokens can be handled “perfectly”: stored in a vault, scoped carefully, rotated on schedule, and still end up in the hands of an attacker. Recent incidents, including the Snowflake and Microsoft-related breaches, reinforced a hard truth: attackers don’t always need to exploit vulnerabilities if they can just log in with valid credentials.

The Speed of Development Has Outpaced Hygiene

Today’s “ship-it-yesterday” development culture doesn’t give security teams the luxury of relying solely on best practices and good hygiene. As organizations race to adopt new technologies, the basics can get buried under delivery pressure. In a world of microservices, CI/CD pipelines, and now agentic AI, the perimeter is no longer a fixed wall you can keep “clean.”

If your strategy is built only on hygiene and best practice, your organization can collapse the moment a developer hardcodes a secret, an employee falls for a sophisticated phishing attack, or an OAuth key in a third-party SaaS app is compromised. In today’s complex environments, security teams need a breach-ready approach: harden posture, tighten exposure, and assume compromise, then build controls that contain blast radius and keep you operating when it happens.

What “Building for Breach” Actually Means

If we accept that compromise is inevitable, that credentials will be stolen, insiders will exist, and trust boundaries will be crossed, the security model shifts entirely.

This shift matters even more now because automation and agentic AI are exploding the number of non-human identities, secrets, AI agents, and MCP connections across every environment. What used to be a manageable set of service accounts and API keys is turning into a massive, fast-changing web of machine access. That growth is quietly expanding the attack surface, yet this vector still doesn’t get the attention, visibility, or shared understanding it deserves, especially when it comes to how easily one compromised identity can cascade into an organization-wide breach.

Minimize/Reduce Risk Where and When Possible

In the identity security world, the leading attack vector is still secret-based access: API keys, tokens, shared credentials, and long-lived secrets that attackers can steal and reuse. The good news is this risk can be minimized to near-elimination by moving from secrets-based access to identity-based access. In practice, that means extending the machine identity model the major cloud providers already use internally to everything else in your environment: internal services, SaaS tools, pipelines, agents, and MCP servers. With a battle-tested framework like SPIFFE, workloads get strong, verifiable identities and short-lived credentials, so access is granted based on identity and policy instead of static secrets.
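The short-lived credential idea can be illustrated with standard-library primitives. This is a toy stand-in for what SPIFFE does properly with X.509 or JWT SVIDs issued by a Workload API; the signing key handling and token format here are deliberately simplified assumptions:

```python
import base64
import hashlib
import hmac
import json
import time
from typing import Optional

SIGNING_KEY = b"demo-key"  # in reality held by the identity issuer, never the workload

def issue_credential(spiffe_id: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived, signed credential bound to a workload identity."""
    claims = {"sub": spiffe_id, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify(token: str) -> Optional[str]:
    """Return the identity if the token is authentic and unexpired, else None."""
    body, _, sig = token.partition(".")
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered
    claims = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > claims["exp"]:
        return None  # expired: a stolen copy goes stale on its own
    return claims["sub"]

token = issue_credential("spiffe://acme.internal/payments-service")
assert verify(token) == "spiffe://acme.internal/payments-service"
```

The contrast with a static secret is the `exp` claim: a leaked copy of this credential self-destructs in minutes, so the attacker's window shrinks from "until someone notices" to the TTL.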

This shift strips a huge part of the security burden away from developers and DevOps, who shouldn’t be in the business of handling and protecting long-lived secrets. Instead, security teams regain control through centralized policy, consistent identity issuance, and enforcement that holds even when something is compromised.

Building for breach means assuming one of those identities will be compromised and designing so it doesn’t become a full-org incident: remove long-lived secrets, eliminate standing access, enforce right-sized and just-in-time permissions at runtime, and make actions fully attributable so you can detect, contain, and keep your business operating when compromise happens.

Cyber hygiene is the starting line, not the strategy. In a world where attackers don’t break in but simply log in, security must shift from the impossible goal of absolute prevention to the essential reality of breach-ready resilience: building a system that assumes compromise and is engineered to survive it.


Why Storm the Castle When You Already Hold the Keys to the Kingdom?

Yuval Lazar Head of Security Research


What the Salesloft and Gainsight Breaches Really Tell Us About NHI Risk

For years, enterprises have fortified their perimeter – hard MFA, hardened SaaS, locked-down identity layers. But in 2025, the weakest link isn’t the castle gate anymore. It’s the messenger walking through it with unquestioned trust.

In today’s ecosystem, that messenger is an integration with privileged access, and the recent Salesloft and Gainsight breaches exposed just how vulnerable that blind spot is. Attackers didn’t battle their way in – they entered as invited guests.

Security teams who understand this shift are already ahead of the rest of the industry.

The Pattern: Compromise the Integration, Skip the Hard Part

Salesloft to Salesforce

In August 2025, threat actor UNC6395 compromised OAuth and refresh tokens tied to the Drift integration.
Those tokens – trusted Non-Human Identities (NHIs) – opened direct, legitimate access to hundreds of Salesforce orgs.

Once inside, attackers didn’t stop at CRM data. They exfiltrated downstream secrets:

  • Snowflake tokens
  • Cloud access keys
  • Support-case content
  • Internal operational metadata

They bypassed MFA and stepped straight into authenticated privilege.

Gainsight to Salesforce

On November 21, just weeks later, Salesforce disclosed unusual activity tied to another integration – Gainsight.
Again: no Salesforce vulnerability, no platform exploit.

The door was opened by an integration holding elevated OAuth scopes – another NHI trusted by default. Salesforce’s statement was unambiguous: “No indication that this resulted from any vulnerability in the Salesforce platform.”

Salesforce revoked all active access and refresh tokens for the Gainsight apps and temporarily removed those apps from the AppExchange. New reporting suggests the Gainsight breach may have reused secrets taken from the Salesloft/Drift incident, indicating that attackers are chaining these breaches.

Google Threat Intelligence attributes the Gainsight breach to threat actors related to those behind the Salesloft campaign (clusters such as UNC6240 / ShinyHunters).

The Shared Pattern: NHIs as High-Privilege Attack Vectors

Both breaches followed the same blueprint – and it’s one every modern defender must internalize:

  1. Compromise an integration token (NHI with broad scopes)
  2. Enter customer environments with full legitimacy
  3. Move laterally across connected systems
  4. Harvest embedded credentials and secrets
  5. Pivot into cloud infrastructure

This is large-scale, low-friction supply-chain compromise powered by unmonitored NHIs. Defenders who don’t see this pattern are operating blind.

What You Can Do Now – If You Want to Stay Ahead

1. Inventory premium-scope integrations

Map every integration, service account, and bot touching critical systems.
If you can’t see it, you can’t defend it.

2. Govern third-party integrations like first-class identities

Every vendor app is now part of your attack surface.
Demand audit logs, token controls, and operational transparency.

3. Scope and rotate all tokens / service accounts

Least privilege is not optional.
Long-lived tokens are liabilities – shorten them.
Broad scopes are risks – tighten them.

4. Formalize NHI-focused incident triage

If you find a compromised token:

  • Revoke instantly
  • Rotate downstream secrets
  • Block or delete the previous versions
  • Trace every integration that touched that token
  • Assess which downstream identities could be abused
  • Model possible lateral movement paths
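The "trace every integration" and "model possible lateral movement paths" steps amount to a graph walk. The integration graph below is a hypothetical example loosely modeled on the Salesloft-to-Salesforce chain described above; real inventories would be far larger:

```python
from collections import deque

# Hypothetical edges: "credentials for the target are reachable from the
# source" (e.g., a downstream token stored inside a connected system).
GRAPH = {
    "drift-oauth-token": ["salesforce-org"],
    "salesforce-org": ["snowflake-token", "aws-access-key"],
    "snowflake-token": [],
    "aws-access-key": ["s3-backups"],
    "s3-backups": [],
}

def blast_radius(compromised: str) -> list:
    """Breadth-first walk of every downstream identity a stolen token can reach."""
    seen, queue, order = {compromised}, deque([compromised]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in GRAPH.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order[1:]  # everything downstream of the compromised token

# One OAuth token reaches the warehouse, the cloud account, and its backups.
downstream = blast_radius("drift-oauth-token")
```

Running this during triage turns "which downstream identities could be abused?" from guesswork into a concrete rotation checklist.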

Teams who do this well are the ones who stop incidents early – before attackers reach the crown jewels.

The Bottom Line

Sophisticated attackers no longer storm the castle.
They compromise the trusted identity already carrying the master key – the integration, the bot, the token, the NHI.

Security teams who want to lead – not react – must elevate NHI security to the same level as human identity.
Because in 2025, the kingdom doesn’t fall through the gate – it falls through the integration.

