Cloud Storage Encryption


Most cloud vendors include object storage services in their portfolio: Amazon offers S3, for example, and Microsoft Azure offers Blob Storage. These are highly scalable services with built-in security mechanisms to protect stored data.

There are two important security capabilities:

  1. The data can be stored in an encrypted format
  2. You can control which machines and audiences (public or private) can perform actions on the storage, such as reading or writing data
Figure 1: Amazon S3 Security Options
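These two built-in capabilities can be turned on with a few API calls. The sketch below uses boto3 against a hypothetical bucket name; it is one common way to enable default server-side encryption and block public access, not the only one (the same can be done in the console or with the AWS CLI).

```python
# Illustrative only: enabling S3's two built-in protections with boto3.
# "example-data-bucket" is a hypothetical name; credentials come from
# your AWS environment.
import boto3

s3 = boto3.client("s3")
bucket = "example-data-bucket"

# Capability 1: store objects encrypted by default (SSE with S3-managed keys).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Capability 2: restrict the audience - block all public access to the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```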

While these capabilities are better than nothing, they are not an optimal solution. The proof is in the latest published cloud security breaches.

Capital One, for example, had permissions in place (even if some will argue they were too broad) and its storage was encrypted. Still, the attacker was able to exploit a software vulnerability and take all the data in unencrypted form.

The same goes for storage misconfigured as public-facing: if the data is encrypted, why can anyone read it in the clear? And does making storage public-facing effectively eliminate the encryption option?

Critically, how can we prevent the next data breach?

The Challenge:

There are a few issues with data-at-rest encryption in the cloud vendors' offerings:

  1. Data-at-rest encryption is not a novel idea. We use it all the time on personal computers, servers and laptops: if someone physically steals our machine, they will be unable to access the files on the hard disk. This makes a lot of sense, but in the cloud, no one is worried about someone breaking into the vendors' data centers and stealing hard drives!
  2. The identity and permission model of cloud vendors is not optimal. As an example: when you grant an EC2 machine access to an S3 bucket, did you intend to allow every process and workload running on that machine to access the data? Is this granular enough? If an attacker breaks into the EC2 instance, can they access the data using the permissions you gave the entire machine? Was this your intention?
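To see why machine-level permissions are so coarse, consider how credentials actually reach the workload: on EC2, the instance role's temporary credentials are served by the instance metadata endpoint, which any local process can query. The sketch below (IMDSv2 flow, runnable only on an EC2 instance; the role name is hypothetical) shows that nothing ties those credentials to one specific piece of software.

```python
# Illustrative only: any process on an EC2 instance can fetch the
# instance role's credentials from the instance metadata service
# (IMDSv2 flow shown). "my-instance-role" is a hypothetical role name.
import urllib.request

IMDS = "http://169.254.169.254/latest"

# Step 1: obtain a short-lived session token (required by IMDSv2).
req = urllib.request.Request(
    f"{IMDS}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(req).read().decode()

# Step 2: read the temporary credentials attached to the instance role.
# Every process on the machine - including an attacker's shell - can do this.
req = urllib.request.Request(
    f"{IMDS}/meta-data/iam/security-credentials/my-instance-role",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req).read().decode())
```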

In other words: if we encrypt the data but allow access without being able to strongly identify the entity accessing it, are we really protecting the data?

The Solution:

To tie permissions and encryption together, we first need strong identification of the running software.

ARMO is the only solution that truly protects your data, by providing a strong identity to your workload. Based on this identity, you set the access permissions to the data, ensuring that only the workload you granted permissions to can read and/or write it. ARMO verifies this identity continuously to ensure the workload is not compromised; if it is compromised, ARMO will not allow it to access the data.

In addition, ARMO manages the encryption keys, making sure that no one other than the authorized workload can access them, even when they are loaded into the workload's memory.

Using ARMO, even in the case of a misconfiguration that accidentally allows public access to your object storage, the data will still be encrypted. Only the allowed workloads are able to access the data, securely; this is the only solution that provides true data protection. And the real benefit is that you do not need to change your original code or your cloud configuration; ARMO seamlessly adds strong data protection with zero friction. This applies to any workload, including Windows, Linux, Go, .NET and so on, wherever it runs, in the cloud or on-premises.

Key Elements of the ARMO Solution

  1. The organization attaches ARMO's micro-agent to its workloads. This ensures that the workload is malware-protected, and that the encryption keys in use are protected in memory using a mathematical, virtually unbreakable technique. Attaching ARMO does not affect the original workload.
  2. When the workload writes data to the object storage, ARMO encrypts it according to your encryption policy and stores the encrypted object in the store.
  3. When another workload needs some part of the data, it reads the object file. Based on the encryption policy, ARMO reads the object, decrypts it and passes it to the workload. This ensures that only authorized, uncompromised workloads can access the data.
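The encrypt-on-write / decrypt-on-read flow above can be sketched as client-side encryption wrapped around the object store. This is not ARMO's implementation: it is a minimal stdlib illustration using an HMAC-based keystream, where a real system would use a vetted AEAD cipher such as AES-GCM and a proper key-management service. The `object_store` dict stands in for S3 or Blob storage.

```python
# Minimal sketch of the encrypt-on-write / decrypt-on-read pattern.
# NOT production crypto: a real system would use an AEAD such as AES-GCM.
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # PRF in counter mode: HMAC-SHA256(key, nonce || counter), concatenated.
    blocks, counter = [], 0
    while 32 * len(blocks) < length:
        blocks.append(hmac.new(key, nonce + counter.to_bytes(8, "big"),
                               hashlib.sha256).digest())
        counter += 1
    return b"".join(blocks)[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)  # fresh nonce per object
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

object_store = {}         # stands in for S3 / Blob storage
key = os.urandom(32)      # held by the policy layer, never by the store

# Write path: the authorized workload's data is encrypted before upload.
object_store["customers/1.json"] = encrypt(key, b'{"name": "Alice"}')

# Read path: an authorized workload gets the decrypted object back.
assert decrypt(key, object_store["customers/1.json"]) == b'{"name": "Alice"}'

# Anyone reading the bucket directly (e.g. through a public
# misconfiguration) sees only ciphertext.
assert object_store["customers/1.json"] != b'{"name": "Alice"}'
```

Because the key lives with the identity-and-policy layer rather than with the storage service, a bucket leaked to the public exposes only ciphertext.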


  • ARMO is attached to the workloads and is set to protect their identities
  • The encryption policy allows the Customer and Payment workloads to write objects to the storage service
  • The Business Logic workload has permission to read the data written by the Customer and Payment workloads
  • The Front End workload cannot access the Customer and Payment data directly, but can present it by using the Business Logic APIs.