CASE STUDY

Protecting Kubernetes Secrets: The Real Story

Background

Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys used by your applications. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image. There are three ways to pass secrets to your container:

  • Passing secrets into containerized code
  • Passing secrets as environment variables
  • Passing secrets in files
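To make the file option concrete, here is a minimal sketch of a Secret mounted as files, loosely following the Kubernetes documentation. The names (`db-credentials`, `demo-app`) and the image are hypothetical; note the values in `data` are only base64-encoded.

```yaml
# Illustrative only: a Secret whose values are base64-encoded, not encrypted
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials          # hypothetical name
type: Opaque
data:
  username: YWRtaW4=            # "admin"
  password: aHVudGVyMg==        # "hunter2"
---
# A Pod mounting the Secret as read-only files under /etc/secrets
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: example/app:latest # hypothetical image
      volumeMounts:
        - name: creds
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
```

The application then reads its credentials from `/etc/secrets/username` and `/etc/secrets/password` like ordinary files.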

While each technique has its pros and cons, according to Liz Rice and Michael Hausenblas in their book “Kubernetes Security”, the file option is the recommended approach.

However, even the file mapping method has disadvantages. To reduce the risks, they recommend removing commands like “cat” from the container image, leaving an attacker who compromises a container process with fewer ways to read the file’s contents. They also advise mounting the secret on a temporary in-memory file system so it never touches disk.
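The in-memory mount they describe can be sketched with a tmpfs-backed `emptyDir` volume; the pod and image names here are placeholders. (Secret volumes in Kubernetes are themselves backed by tmpfs on the node.)

```yaml
# Illustrative: an in-memory volume so files written there never touch disk
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo
spec:
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
      volumeMounts:
        - name: scratch
          mountPath: /run/secrets-scratch
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory            # tmpfs-backed emptyDir
```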

Will this keep your secret secure?

The Challenge:

Your containers need secrets in order to operate. For example, your application container needs database credentials to fetch and insert data.

How should the secret reach the container without opening a hatch that attackers can abuse? The question is obvious once you know the problem, but the flow gives no hint of it to a user who has not encountered it before.

We claim that the current methods of passing secrets to containers in Kubernetes are not secure! I am not going to repeat the explanation of why passing secrets in code or in environment variables is not really secure (base64 is an encoding, not encryption!). I will focus on the file option and the recommendation made in “Kubernetes Security”.
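To see why base64 offers no secrecy, a minimal sketch: anyone who can read a Secret manifest can reverse the encoding in one call. The encoded value below is illustrative.

```python
import base64

# A Kubernetes Secret stores its values base64-encoded in the `data` field.
encoded = "aHVudGVyMg=="  # what would appear in a Secret manifest

# Reversing the encoding requires no key at all:
plaintext = base64.b64decode(encoded).decode()
print(plaintext)  # -> hunter2
```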

Regarding the recommendation to blacklist commands like “cat” in order to raise the bar for the attacker: this is not a real solution. An attacker who gains access to the workload can install tools in the container, copy the file to another machine, or write a small program that opens the file. Placing the file in memory is not really a solution either, as the attacker can search the container’s memory space for it.

In order to prevent the secret from leaking, the following steps must be performed:

  1. Encrypt the secret file
  2. Ensure the workload accessing the secret file is not compromised
  3. Ensure that only authorized pods can read/decrypt the secret file
  4. Protect the encryption key of the secret file in a way that an attacker penetrating the process cannot obtain it
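Steps 1 and 4 can be sketched in a few lines. This is a toy stream cipher derived from SHA-256, used only to illustrate the flow (encrypt in CI/CD, decrypt only where the key is delivered); a real deployment should use a vetted authenticated cipher such as AES-GCM, and the key must never be reused across files or stored next to the ciphertext.

```python
import hashlib
import secrets


def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt by XOR with the keystream (the operation is symmetric)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


# 1. Encrypt the secret file (in CI/CD); the key never ships in the image.
key = secrets.token_bytes(32)
secret_file = b"db_password=hunter2\n"
encrypted = xor_crypt(key, secret_file)
assert encrypted != secret_file  # ciphertext is unreadable without the key

# 4. Only a workload that is handed the key can recover the plaintext.
assert xor_crypt(key, encrypted) == secret_file
```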

It sounds complex, but in fact it is simpler than you might think. 

The Solution:

In order to protect Kubernetes Secrets, we need to make sure of the following:

  • The application that gets the secret file is signed, meaning that it is cryptographically authenticated when it attempts to read the secret
  • The secret file is protected with strong encryption
  • No one can retrieve the encryption keys

This will ensure that the pod reading the secret files is not compromised; that only the application that needs the secrets can read them; and that even if an attacker somehow reads the pod’s memory, the secrets cannot be retrieved.

Armo provides any application with exactly that capability, without any changes to the application or its packaging. Using the Armo agent, you can protect your workload against malware and encrypt the secret file so that only the pod that requires the secrets can decrypt it. The secret encryption key is protected in a way that is virtually unbreakable.

This makes sure that your secrets are really protected!

Key Elements of the Armo solution

  1. You create the secret file according to the Kubernetes documentation and encrypt it in your CI/CD pipeline.
  2. You attach and sign the pod that should access the secret file and assign it an encryption policy. The policy uses the same key to decrypt the secret file.

That’s it! Simple and secure.

In the above drawing:

  • In the CI/CD, the secret file is encrypted
  • An encryption policy is defined allowing the pod to decrypt the file using the same key that was used to encrypt the file in CI/CD
  • The pod is deployed with the encrypted secret file
  • At runtime, the workload/pod is attached and signed with Armo’s agent
  • The key is kept in the DRT, protected by a patented secret-protection mechanism, where it is used to decrypt the secret file and hand it to the pod