Secrets everywhere!
In a single cloud, secrets management is hard enough. In a multicloud world, it can feel like every platform has its own preferred store, its own IAM model and its own “right” way to inject credentials into workloads. Meanwhile, attackers keep finding API keys, database passwords and access tokens in public repos, container images and CI logs.
This post looks at how secrets actually leak in modern cloud setups, how cloud-native secret managers compare with external vaults, and how to structure secrets per environment, service and tenant. We will also touch on rotation, break-glass procedures and walk through a concrete migration from hardcoded secrets to a proper store.
Threats around secrets in the cloud
Secrets in code and repositories
Secrets management problems often start in source control. Even when developers later delete a secret from the repository, it remains in Git history unless it is explicitly scrubbed, so the credential must be revoked, not just removed from the file.
Recent large-scale scans of public Git hosting platforms keep reinforcing the point. Studies have found tens of thousands of valid secrets in public repositories across major platforms, including API keys and access tokens for production systems.
To reduce this risk, security guidance recommends preventing secrets from entering version control in the first place and scanning repositories continuously for exposed credentials. GitHub and GitLab both provide built-in secret scanning that inspects commits and pipeline output for known credential patterns and raises alerts when something is found. Third-party tools such as TruffleHog, Gitleaks and commercial platforms like GitGuardian extend this scanning across multiple repos, CI systems and logs.
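The pattern-matching core of these scanners can be sketched in a few lines. This is a deliberately minimal illustration, not a replacement for Gitleaks or TruffleHog, which ship hundreds of rules plus entropy heuristics; the two token formats below (AWS access key IDs and GitHub personal access tokens) are well known, and the sample key is AWS's published documentation example.

```python
import re

# Minimal sketch of a secret scanner: a few well-known credential patterns.
# Real tools use far larger rule sets plus entropy checks.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for anything that looks like a secret."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# Example: a config snippet with AWS's documented example key baked in.
sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"\ndebug = true\n'
findings = scan_text(sample)
```

Running the same function over staged diffs in a pre-commit hook, and again over repository history in CI, gives you two chances to catch a credential before it reaches a shared branch.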
Secrets in container images
Containers introduce a different failure mode: credentials baked into image layers. Research on public container registries has shown that a significant fraction of images contain embedded secrets, often in layers where files were copied, environment variables set or temporary build artefacts left behind. Because container images are composed of immutable layers, secrets added during a build remain retrievable from history even if later layers delete the files.
Guidance from container security best practice is consistent: avoid hardcoding secrets in Dockerfiles or application code and do not rely on image squashing alone to remove sensitive data. Instead, use dedicated mechanisms such as Docker or Kubernetes secret objects, and inject secrets at runtime rather than at build time.
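The runtime-injection pattern looks like this from the application's side: the orchestrator mounts the secret as a file (or sets an environment variable) when the container starts, and the code reads it at startup rather than carrying it in the image. The mount path below mirrors a common Kubernetes convention but is an assumption that will vary per deployment; the temporary directory at the end only simulates the mount for demonstration.

```python
import os
import tempfile

def load_secret(name: str, mount_dir: str = "/run/secrets") -> str:
    """Prefer a runtime-mounted file; fall back to an environment variable."""
    path = os.path.join(mount_dir, name)
    if os.path.exists(path):
        with open(path) as fh:
            return fh.read().strip()
    value = os.environ.get(name.upper().replace("-", "_"))
    if value is None:
        raise RuntimeError(f"secret {name!r} not provided at runtime")
    return value

# Simulate a mounted secret for demonstration purposes.
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "db-password"), "w") as fh:
        fh.write("s3cr3t\n")
    password = load_secret("db-password", mount_dir=tmp)
```

Because nothing secret appears in the Dockerfile or the build context, there is nothing for an image layer to retain.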
Secrets in CI/CD and build logs
CI/CD pipelines are another rich source of leaked credentials. Analyses of CI platforms have documented cases where misconfigured workflows caused credentials or decrypted config files to be written directly into build logs, which are then accessible to anyone with pipeline access and sometimes even to external collaborators. Security articles on CI/CD repeatedly highlight how pipeline logs and shared runners frequently contain secrets, especially when Docker build arguments or plaintext environment variables are used without redaction.
Modern guidance recommends using the CI system’s dedicated secrets features but treating them as a delivery mechanism rather than the ultimate source of truth. Many teams now integrate secret scanning directly into their CI jobs and log processing pipelines using tools such as ggshield, which can inspect pipeline logs, artefacts and custom data sources and create incidents when it detects exposed credentials.
Secrets at runtime
Even if you avoid secrets in code and images, runtime can still betray you. Kubernetes documentation notes that secret values are base64-encoded and can be mounted as files or injected as environment variables, and that encryption at rest must be explicitly enabled to protect data in etcd. Runtime security articles point out that environment variables can be inherited by child processes and are sometimes surfaced in crash dumps, debug endpoints or diagnostic logs, which makes them a risky place for long-lived credentials.
Across all of these stages, widely used security guidance summarises good hygiene in similar terms: centralise storage, keep secrets short-lived, rotate them automatically, minimise who can see them and ensure they never appear in logs.
Cloud-native secret stores vs external vaults
Cloud-native secret managers
The major cloud providers all ship managed services for storing and retrieving secrets. These typically store small pieces of sensitive data such as passwords, API keys and certificates, integrate with the platform’s IAM system and offer audit logging and some form of versioning and rotation.
On AWS, Secrets Manager stores secrets encrypted with AWS KMS, supports automatic rotation for databases and other services via Lambda functions and can replicate secrets across multiple regions for resilience. It also supports cross-account access and, in recent feature updates, managed external secrets for some third-party SaaS credentials.
Azure Key Vault provides secure storage for secrets, keys and certificates and uses Microsoft Entra ID for authentication with Azure RBAC or dedicated access policies for authorisation. Vault contents are encrypted at rest using cryptographic modules and, where required, hardware security modules. Azure replicates data within a region and to a paired secondary region to provide high availability.
Google Cloud Secret Manager lets you store secrets as text or binary blobs, manage them as global resources, control access using Cloud IAM and rely on Cloud Audit Logs for a detailed access trail. It provides regional replication policies, first-class secret versioning and support for rotation schedules, which can be triggered via event notifications and serverless functions.
These cloud-native tools integrate tightly with their respective platforms and are often the easiest option for workloads that mostly live within a single provider.
AWS, Azure and GCP all publish guidance recommending these services over hardcoding secrets in code, images or configuration.
However, third-party analyses point out that these services are usually optimised for their home cloud. For example, AWS Secrets Manager is not a complete multicloud secrets platform and is best used as the primary store for infrastructure running on AWS. Other write-ups describe cross-platform challenges such as having disconnected secret stores across AWS, Azure, GCP and GitHub, and the resulting lack of unified visibility and policy.
External vaults
External vaults aim to abstract away individual clouds and give you a single place to define secrets, policies and access patterns across multiple environments. HashiCorp Vault is the most widely referenced example. Vendor and community documentation describe Vault as a central engine for storing secrets and encryption keys, providing identity-based access control, dynamic secret generation and policy-as-code across on-premises and cloud infrastructure.
Vault can generate short-lived credentials for databases, cloud providers and other systems on demand and revoke them automatically, reducing the window in which a stolen credential is useful. Features such as namespaces support multi-tenancy within a single Vault cluster, which is valuable when you want strong separation between teams or customers.
A growing ecosystem of SaaS secrets platforms provides similar centralised capabilities without requiring you to operate your own Vault cluster. Product comparison guides now routinely list options such as CyberArk Conjur, Akeyless, Keeper Secrets Manager, Doppler and 1Password alongside Vault and the cloud-native services, often highlighting multicloud support, policy-driven workflows and integrations into CI/CD and Kubernetes.
The trade-offs are mostly about complexity and operational ownership. Articles that compare Vault with SaaS alternatives note that dynamic credentials and advanced workflows can be harder to operate at scale than static secrets, and that small teams may prefer simpler managed services. At the same time, multicloud reference architectures often present Vault as a central authority that feeds secrets into AWS, Azure, GCP and Kubernetes clusters via operators such as External Secrets and Git-based tools such as SOPS.
Design patterns for multicloud secrets
Per-environment secrets
Most organisations find it useful to separate secrets by environment so that development, test and production values are not mixed. Security guidance recommends isolating secrets per environment and keeping non-production credentials distinct from production ones, both to limit blast radius and to reduce the risk of test systems accessing real data.
In practice, teams often implement this by having separate secret stores or namespaces per environment. For example, a common Azure Key Vault usage pattern is to create a dedicated vault per application per environment, such as “myservice-development” and “myservice-production”, with separate access policies. GCP and AWS documentation similarly show separate projects, accounts or resource paths for dev, staging and production secrets.
Per-service secrets
Per-service isolation builds on that foundation. Rather than sharing a single environment-wide database password between all services in a cluster, each service has its own identity and its own secrets. Cloud documentation and Kubernetes guidance stress mapping secrets to specific service accounts or pods, and limiting their use via IAM or RBAC.
For instance, a Google Cloud example defines a Cloud Run service that pulls configuration from Secret Manager using a dedicated service account, so only that workload can access those secrets. AWS and Azure tutorials show similar patterns using IAM roles or managed identities mapped to individual applications or microservices.
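From the workload's side, that pattern reduces to a single read against Secret Manager under the service's own identity. The project and secret names below are placeholders; the client call is kept inside a function because it needs real credentials, while the pure helper builds the versioned resource name the API expects.

```python
def secret_version_name(project: str, secret: str, version: str = "latest") -> str:
    """Build the resource name Google Cloud Secret Manager expects."""
    return f"projects/{project}/secrets/{secret}/versions/{version}"

def fetch_secret(project: str, secret: str) -> str:
    """Read a secret under the workload's own service-account identity."""
    from google.cloud import secretmanager  # pip install google-cloud-secret-manager
    client = secretmanager.SecretManagerServiceClient()
    resp = client.access_secret_version(name=secret_version_name(project, secret))
    return resp.payload.data.decode()

name = secret_version_name("my-project", "db-password")
```

Because the service account attached to the workload is the only principal granted the accessor role on that secret, a compromise of one service does not expose its neighbours' credentials.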
Per-tenant secrets
In multi-tenant systems, it is common to have separate secrets per tenant, or at least per group of tenants with similar isolation requirements. Non-human identity guidance emphasises the risk of secret leakage across tenants and the need for strong separation and fine-grained authorisation between different automated actors.
HashiCorp Vault and similar tools provide explicit multi-tenant features such as namespaces to support this model, letting you allocate isolated spaces with their own policies, mounts and audit logs per team or customer. Kubernetes-focused articles describe combining these with Kubernetes namespaces and dedicated service accounts to ensure that tenant-specific secrets are only mounted into the right pods.
Rotations and break-glass procedures
Rotation strategies
Well-regarded secrets management guidance is very clear: secrets should exist only for as long as necessary, be rotated regularly and ideally rotated automatically. Industry practice echoes this, encouraging teams to use the rotation features provided by their tools rather than relying on manual changes.
Cloud secret stores support several rotation patterns. Google Cloud Secret Manager uses versioning and recommends adding new versions and pointing consumers to specific versions or aliases, rather than always using a “latest” pointer in production. AWS Secrets Manager can automatically rotate database credentials and other secrets on a schedule via Lambda, and now also offers managed external secrets for third-party SaaS credentials with built-in rotation strategies. Vault’s dynamic secrets let you issue short-lived credentials with defined lifetimes and revoke them when their time-to-live expires.
Best-practice articles describe three broad rotation drivers: time-based (rotate every fixed number of days), usage-based (rotate after a defined number of uses) and event-driven (rotate when a policy is violated or a compromise is suspected). For privileged user credentials, modern standards bodies still expect rotation, although some have relaxed hard requirements for frequent password changes, since forced changes tend to push users towards weaker choices.
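The three drivers combine naturally into a single decision function. This is a pure sketch; the thresholds are illustrative, not recommendations.

```python
# Sketch of the three rotation drivers: event-driven, time-based, usage-based.
# Threshold defaults are illustrative only.
def should_rotate(age_days: int, use_count: int, compromise_suspected: bool,
                  max_age_days: int = 90, max_uses: int = 10_000) -> bool:
    if compromise_suspected:          # event-driven: rotate immediately
        return True
    if age_days >= max_age_days:      # time-based
        return True
    if use_count >= max_uses:         # usage-based
        return True
    return False

decision = should_rotate(age_days=30, use_count=500, compromise_suspected=False)
```

A scheduled job evaluating this per secret, fed from the store's metadata and your incident system, turns rotation policy into something auditable rather than tribal knowledge.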
Break-glass access
Even the best-designed secrets infrastructure can fail. Vault clusters can become unavailable, cloud IAM policies can lock you out and automated rotation can misfire. For these cases, security guidance recommends defining “break-glass” or emergency access procedures that allow carefully controlled override of normal access controls.
For identity systems, major cloud providers describe emergency access accounts reserved for break-glass scenarios where normal administrative accounts cannot be used, and recommend keeping at least two such accounts and excluding one from conditional access policies to avoid tenant lockout. Logging and monitoring guidance also calls out the use of break-glass accounts as events that should be logged and monitored.
Applied to secrets, recent best-practice articles suggest that break-glass plans should cover at least three things: how to access secrets if your primary vault is down, how to override automated policies safely and how to bring the environment back into a compliant state afterwards. This typically involves securely storing a minimal set of emergency credentials in a separate system such as a privileged access management vault or hardware token store, protecting them with strong multi-factor authentication and regularly testing the break-glass path.
Example: migrating hardcoded secrets to a proper store
Consider a common starting point: a multicloud team has several microservices, some on AWS, some on Azure and some on GCP. Database passwords and API keys are hardcoded in application configuration, passed around in CI variables and scattered across Dockerfiles. This is very close to the anti-patterns described in Kubernetes and container security guides, and in CI/CD secret handling articles.
A realistic migration might look like this.
Step 1: Find the secrets
Start by scanning your Git repositories, container images and CI logs to build an inventory of existing secrets. GitHub and GitLab secret scanning, TruffleHog, Gitleaks and similar tools can help here, and penetration-testing guidance recommends using multiple scanners to reduce blind spots. GitGuardian and other platforms can provide a single view across different code hosts and CI systems.
Step 2: Choose where secrets will live
For each cloud, decide whether you will use the native secret manager, a central Vault deployment or a mix of both. Native services such as AWS Secrets Manager, Azure Key Vault and Google Cloud Secret Manager are straightforward for workloads that already use that cloud’s IAM and networking.
If consistency across clouds is more important, an external vault such as HashiCorp Vault or a SaaS secrets platform can act as the primary authority while native services act as caches at the edges. Multicloud reference architectures show Vault storing secrets centrally and synchronising them into Kubernetes clusters and cloud platforms using operators and integrations.
Step 3: Model per-environment and per-service scope
Before adding secrets, decide on a structure. Following widely accepted best practices, create separate stores or namespaces for development, staging and production, and within each, separate secrets by service. In Vault, this might mean namespaces per environment and policies per service; in cloud-native managers, it might mean distinct key vaults or projects.
Step 4: Create secrets and wire up identities
For each secret, create the value in the chosen store and configure access using the platform’s identity system. Azure Key Vault uses Entra ID and RBAC or access policies; Google Cloud Secret Manager uses IAM roles; AWS Secrets Manager uses IAM policies and, if necessary, resource policies for cross-account access.
For containerised workloads, configure Kubernetes to pull secrets from the store using a mechanism such as external secrets operators or CSI drivers, and mount them as files or environment variables. Best-practice documents show how to do this while keeping access scoped to the right service accounts.
Step 5: Update applications to fetch secrets
Replace hardcoded values in code and configuration with lookups from the secrets store. Cloud and vendor examples show patterns such as application startup code retrieving secrets via SDKs, or serverless runtimes reading secrets and exposing them to functions via environment variables or mounted files.
At this stage, ensure that secrets are never logged, even at debug level. Guidance explicitly advises implementing masking or encryption for any fields that might contain secrets and treating attempts to use revoked secrets as events worth logging.
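Masking can be enforced at the logging layer itself, so a stray debug statement cannot leak a credential even if a developer forgets. The sketch below uses Python's standard logging filters; the single redaction pattern is illustrative, and a production filter needs the same rule sets as your secret scanners.

```python
import logging
import re

# Illustrative pattern covering two well-known token formats.
SECRET_RE = re.compile(r"\b(AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36})\b")

class RedactSecrets(logging.Filter):
    """Redact credential-shaped strings before records reach any handler."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SECRET_RE.sub("[REDACTED]", record.getMessage())
        record.args = None  # message is now fully formatted
        return True

logger = logging.getLogger("demo")
logger.addFilter(RedactSecrets())

masked = SECRET_RE.sub("[REDACTED]", "key=AKIAIOSFODNN7EXAMPLE")
```

Attaching the filter to the root logger covers third-party libraries as well as your own code, which matters because verbose error messages from SDKs are a common leak path.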
Step 6: Turn on rotation
Once code uses the new store, configure rotation. On AWS, that might mean enabling built-in rotation for database credentials; on GCP, defining rotation schedules that create new secret versions; on Vault, using dynamic secrets or periodic rotation of static credentials. Well-regarded guidance recommends combining time-based rotation with event-driven rotation when there is evidence of compromise.
Step 7: Revoke old secrets and clean up history
After traffic has moved over, revoke any credentials that were previously hardcoded to ensure that leaked values are no longer usable. Once a secret has been committed to source control, it should be treated as compromised until revoked, even if you later amend the file. If necessary, use tools to rewrite Git history and remove plaintext secrets, accepting that this is a defence-in-depth measure rather than a substitute for revocation.
Step 8: Add continuous detection and response
Finally, embed secret scanning into your ongoing workflows: pre-commit hooks on developer machines, CI jobs for each pipeline, regular scans of container registries and monitoring of logs and collaboration tools. Some platforms now even support push-to-vault workflows, where detected secrets can be automatically written into an approved secrets manager and the incident tracked until the credential is revoked.
Common mistakes and how to spot them
Hardcoded secrets in code and config
This remains the most common error. Kubernetes and DevSecOps references repeatedly warn against embedding secrets in application code, configuration files or Helm charts, since they inevitably end up in source control and are readable by anyone with repository access.
Detection: enable repository-level secret scanning, run tools such as TruffleHog, Gitleaks or platform-provided scanners on all branches and monitor for alerts from external researchers who may report exposed secrets.
Secrets embedded in images
Baking credentials into container images via Dockerfile instructions or build arguments is another recurring issue. Research has shown that secrets are often found in image layers, even when developers believe they have deleted them, because the underlying layers remain intact and can be inspected.
Detection: use image scanning tools that specifically look for secrets in layers, and include this scanning in your CI pipelines and registry policies. Container security guidance suggests combining this with policies that prevent images containing secrets from being deployed.
Secrets leaking into logs and telemetry
Logging credentials is widely recognised as a serious risk. Security guidance states that secrets should never appear in logs and that implementations should include masking or encryption for fields that may contain sensitive values. Case studies of CI/CD incidents show secrets being exposed through misconfigured log levels, verbose error messages or decrypted files written to logs.
Detection: scan logs with the same tools used for source control, and configure dedicated log-scanning jobs through platforms that support scanning build logs and other artefacts. Treat any detected secret in logs as a full incident that requires rotation and revocation, not just log redaction.
Shared secrets across services and tenants
Using a single database password or API key for multiple services or customers makes incident response extremely difficult and magnifies blast radius. Best-practice documents emphasise the importance of least privilege, per-service identities and per-tenant segmentation.
Detection: review secret access logs from your store to see how many different identities read a given secret, and use analysis tools to flag over-privileged credentials that are used from many places.
Long-lived static secrets
Finally, many organisations still rely on long-lived API keys and passwords that never change, even for high-value systems. Security guidance argues for reducing secret lifetime, using automated rotation and preferring dynamically issued credentials where feasible.
Detection: inventory secrets by age and usage, and flag those that have not been rotated in a long time or are used by many different identities. Secrets management tools and privileged access management platforms often include reports for this purpose.
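The age-based part of that detection is simple to automate. The inventory structure below is hypothetical; in practice the rotation timestamps would come from your secret store's metadata or audit APIs.

```python
import datetime

def stale_secrets(inventory: dict[str, datetime.date],
                  today: datetime.date, max_age_days: int = 90) -> list[str]:
    """Flag secrets not rotated within the policy window (default 90 days)."""
    return sorted(
        name for name, last_rotated in inventory.items()
        if (today - last_rotated).days > max_age_days
    )

# Hypothetical inventory keyed by secret path.
inventory = {
    "billing/db-password": datetime.date(2023, 1, 10),
    "billing/api-key": datetime.date(2024, 5, 1),
}
stale = stale_secrets(inventory, today=datetime.date(2024, 6, 1))
```

Feeding the result into a ticketing system, rather than a dashboard nobody reads, is what actually gets long-lived credentials retired.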
Conclusion
In a multicloud setup, secrets management is not just about choosing between AWS Secrets Manager, Azure Key Vault, GCP Secret Manager or Vault. It is about understanding where secrets can leak across code, images, CI pipelines and runtime, then designing a layered approach that uses the right tool in the right place, with clear patterns for environments, services and tenants.
By centralising secrets, wiring access through strong identities, enabling rotation, defining break-glass paths and continuously scanning for leaks, you can keep your credentials from becoming the easiest way into your estate, regardless of how many clouds you run on.

