Don’t Be the Next ShinyHunters Breach

Chen Nisnkorn CCO and Co-Founder


Stop rotating. Start solving – credentials that never exist as static artifacts can’t be stolen.

For the CISO: Forward this to your infra team. The solution is three YAML files.

For the infra team: This is how you eliminate the entire class of credential-leak breaches – for every service in your stack.

ShinyHunters breached Anodot. Anodot had tokens connecting it to Rockstar’s Snowflake. You know the rest. Many postmortems from the last 18 months end the same way: “Rotate all potentially exposed secrets.”

Snowflake, OpenAI, Postgres, Redis, Elasticsearch… they all hand out keys by default. Those keys will end up somewhere they shouldn’t. In a JFrog artifact. Cleartext in a git commit. An S3 config file. A Kubernetes ConfigMap. And eventually, they’ll be found.

Managing non-human identity (NHI) credentials is not a GTA playthrough: replaying the same mission after every leak is not solving it. The solution is removing the key entirely.

Who is ShinyHunters?

ShinyHunters is a prolific cybercrime group responsible for some of the largest data breaches of the last five years – AT&T, Ticketmaster, Santander, and dozens more. Their method is rarely sophisticated: find a credential left somewhere it shouldn’t be, use it.

Rockstar's statement from last week, in full:

The Real Problem: It’s Not Just One Thing

Problem #1: Every Service Speaks a Different Auth Language

This is why it’s so hard to solve. It’s not just that services hand out long-lived credentials – it’s that they all hand out different kinds:

| Category | Services | Auth Method |
| --- | --- | --- |
| AI / LLM providers | OpenAI, Anthropic, Grok, Vertex AI, Bedrock | API key, IAM role, service account |
| Databases | PostgreSQL, MySQL, MariaDB, MongoDB, Snowflake, Redis | Password, connection string, key pair, x.509 cert |
| Search & analytics | Elasticsearch, OpenSearch, Datadog | API key, username/password, service token |
| Messaging & brokers | Kafka, RabbitMQ | SASL username/password, SCRAM, mTLS, OAuth |
| Cloud & infra | AWS, GCP, Azure, Kubernetes | IAM role, managed identity, service account token |
| SaaS & business apps | GitHub, Slack, Jira, Confluence | API token, OAuth, PAT |

The fragmentation means you can’t enforce one consistent policy across your stack. Every service is its own island. You can’t adopt one rotation policy across API keys, connection strings, x.509 certs, and IAM roles, and you can’t realistically audit whether every vendor holding every credential type has rotated on schedule.

Problem #2: Someone Will Always Cut a Corner

No policy survives contact with a deadline. There will always be a DevOps engineer, a developer, or an architect moving fast, and they’ll store that credential somewhere it shouldn’t be (I’ve done it myself; we all have): in a JFrog artifact, cleartext in a git commit, an S3 config file, a Kubernetes ConfigMap. Not because they’re careless, but because the current model requires them to handle credentials in the first place.

You can write all the policies you want. You cannot stop a human from doing what humans do under pressure.

Therefore, the solution cannot be implemented within the authentication model of each individual service. Instead, it must operate as a unified governance layer, enforcing a single, consistent access control policy regardless of whether the underlying service relies on an API key, a password, a certificate, or a token. This architecture must also inherently eliminate the risk of credential exposure, as no user ever interacts with them directly.

What You Can Actually Do Today (And It’s Simpler Than You Think)

Imagine access to Snowflake, OpenAI, Redis, Elasticsearch, PostgreSQL, Datadog, and more that is granted just-in-time (JIT), scoped to the exact workload that needs it, backed by full cryptographic attestation, with no credential ever sitting anywhere to steal.

That’s what Hush Security does. It’s SPIFFE-native out of the box.

Every workload gets a SPIFFE identity, a cryptographically verified ID tied to its runtime environment (Kubernetes namespace, service account, node). When the workload needs access to Snowflake, it doesn’t look up a stored password. It presents its SPIFFE identity, Hush verifies it, and issues a short-lived scoped credential directly to the workload at runtime. The credential expires. Nothing is stored. Nothing can end up in a git commit, a JFrog artifact, a ConfigMap, or an S3 file, because it never existed as a static thing.

Developers never touch the credential. There’s nothing to misplace. No more “rotate your secrets.” There’s nothing to rotate.

The Setup: Three YAML Files

Define the connector (what to connect to: OpenAI, Anthropic, Grok, Vertex AI, Bedrock, PostgreSQL, MySQL, MariaDB, MongoDB, Snowflake, Redis, Elasticsearch, OpenSearch, Datadog, Kafka, RabbitMQ, AWS, GCP, Azure, Kubernetes), the privilege (what access it gets), and the policy (which workload identity receives it). That’s it.

1. connector.yaml - the connection (Snowflake as an example):

apiVersion: am.hush.security/v1alpha1
kind: AccessCredential
metadata:
  name: demo-snowflake
  namespace: hush-security
spec:
  type: snowflake
  config:
    account: <ORG-ID>-<ACCOUNT-ID>
    warehouse: COMPUTE_WH
    database: DB
    schema: PUBLIC
    username: user_analytics
    auth_method: key-pair
  secretRef:
    name: demo-snowflake-secret
  keyMappings:
    private_key: snowflake-private-key

2. access-privilege.yaml - minimum access only:

apiVersion: am.hush.security/v1alpha1
kind: AccessPrivilege
metadata:
  name: snowflake-readonly
  namespace: hush-security
spec:
  type: snowflake
  config:
    grants:
      - privileges: [SELECT]
        resource_type: table
      - privileges: [USAGE]
        resource_type: warehouse

3. access-policy.yaml - which workload gets it, verified by SPIFFE:

apiVersion: am.hush.security/v1alpha1
kind: AccessPolicy
metadata:
  name: analytics-snowflake-access
  namespace: hush-security
spec:
  enabled: true
  accessCredentialRef:
    name: demo-snowflake
  accessPrivilegeRefs:
    - name: snowflake-readonly
  attestationCriteria:
    - type: "k8s:ns" # SPIFFE attestation - only this workload
      value: analytics
  deliveryConfig:
    type: env # injected at runtime, never stored
    config:
      items:
        - { name: SNOWFLAKE_USERNAME, key: username, type: key }
        - { name: SNOWFLAKE_PRIVATE_KEY, key: private_key, type: key }
        - { name: SNOWFLAKE_ROLE, key: role, type: key }

The attestationCriteria is the key part. Hush verifies the workload’s SPIFFE identity before issuing anything. Only workloads in the analytics namespace get these credentials – not a developer’s laptop, not a CI pipeline, not a third-party vendor’s misconfigured environment. The credential arrives at runtime, lives for the duration of the job, and disappears.
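To make this concrete, here is a minimal sketch of what the consuming workload might look like, assuming a standard Kubernetes Deployment (the name, image, and labels are hypothetical). The point is what’s absent: no Secret reference, no envFrom, no credential anywhere in the manifest.

```yaml
# Hypothetical consumer workload. Nothing here references a Secret or a
# credential; it qualifies for access purely by running in the 'analytics'
# namespace, which Hush attests via the workload's SPIFFE identity
# (of the form spiffe://<trust-domain>/ns/analytics/sa/<service-account>).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-job
  namespace: analytics
spec:
  replicas: 1
  selector:
    matchLabels:
      app: analytics-job
  template:
    metadata:
      labels:
        app: analytics-job
    spec:
      containers:
        - name: job
          image: registry.example.com/analytics-job:latest
          # SNOWFLAKE_USERNAME, SNOWFLAKE_PRIVATE_KEY, and SNOWFLAKE_ROLE
          # arrive as runtime-injected env vars per the AccessPolicy's
          # deliveryConfig -- they never appear in this manifest.
```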

The same pattern works for every service in your stack: repeat it for OpenAI, Redis, Elasticsearch, Datadog, MySQL, MongoDB.
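For instance, a PostgreSQL connector might look like the following. This is a sketch modeled on the Snowflake example above; the exact `config` fields for each connector type are assumptions here and should be taken from Hush’s connector reference.

```yaml
# Hypothetical PostgreSQL connector, mirroring the Snowflake example.
# The field names under config are illustrative, not the verified schema.
apiVersion: am.hush.security/v1alpha1
kind: AccessCredential
metadata:
  name: demo-postgres
  namespace: hush-security
spec:
  type: postgres
  config:
    host: db.internal.example.com
    port: 5432
    database: analytics
    username: user_analytics
  secretRef:
    name: demo-postgres-secret
```

The matching AccessPrivilege and AccessPolicy files keep the same shape, swapping in the grants and delivery items for the target service.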

What This Means in Practice

| Before | After |
| --- | --- |
| Static key stored in vendor’s config | No key stored anywhere |
| Developer creates + manages credentials | Declare a policy; Hush handles the rest |
| “Rotate after breach” | Nothing to rotate – credential never persisted |
| Third-party breach = your data at risk | Third-party breach = attacker finds nothing |
| Someone always cuts a corner under pressure | No one can – no one ever holds a credential |
| Keys leak into git, JFrog artifacts, S3, ConfigMaps | Credential provisioned just-in-time, exclusively to the intended workload |

Anodot gets breached. The attacker searches for Snowflake credentials and finds nothing – because the credential was issued for that run, verified against a SPIFFE identity, scoped to SELECT only, and had already expired by the time of the breach.

Snowflake, OpenAI, Postgres, Datadog, Redis, Elasticsearch – unstealable.

Not because they’re stored better. Because they were never stored.

Sources: ShinyHunters / Anodot / Rockstar – HackRead · TechCrunch

Still Using Secrets?

Let's Fix That.

Get a Demo