Saturday, September 30, 2017

Data Wipe On EBS Volumes - Part II

Securely erasing/data wiping EBS volumes:

When you delete a file using the operating system's default commands (for example, rm on Linux/BSD/macOS/UNIX, del in DOS, or emptying the Recycle Bin on Windows), the operating system does NOT erase the file's contents; they remain on your hard disk. So we need to explicitly delete or wipe the contents of the disk. Data wiping is the process of logically removing data from a read/write medium so that it can no longer be read.
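
As a quick illustration (a minimal sketch using a throwaway loopback image so no real disk is touched; the paths and marker string are arbitrary), you can see that rm only unlinks a file while its contents usually remain on the underlying device:

# Build a small disposable filesystem image
dd if=/dev/zero of=/tmp/wipe-demo.img bs=1M count=16
mkfs.ext4 -F /tmp/wipe-demo.img
sudo mkdir -p /mnt/wipe-demo
sudo mount -o loop /tmp/wipe-demo.img /mnt/wipe-demo

# Create a file with a known marker, then "delete" it with rm
echo "SECRET-MARKER-12345" | sudo tee /mnt/wipe-demo/secret.txt
sync
sudo rm /mnt/wipe-demo/secret.txt
sudo umount /mnt/wipe-demo

# On most setups the marker can still be found in the raw image - rm only unlinked the file
grep -a "SECRET-MARKER-12345" /tmp/wipe-demo.img && echo "contents survived the rm"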

Methods in Linux:

I will discuss some of the data wiping methods available on a Linux system.

1. shred



shred is a command-line utility that overwrites the data in a file or on a whole device with random bits, making it nearly impossible to recover.

# shred -n 1 -vz /dev/xvdf

Make sure you target the correct device; picking the wrong one will wipe it.
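
A quick way to double-check the target before wiping (the device name is just the example used here):

lsblk /dev/xvdf          # confirm the size and that no mount point is listed
sudo file -s /dev/xvdf   # shows the current filesystem signature, if any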


This overwrites the device once (-n 1), shows progress (-v), and adds a final pass of zeros to hide the shredding (-z).
(Use a higher pass count, for example -n 5 or more, for a more thorough wipe; recent versions of shred default to 3 passes.)
ubuntu@ip-xxxxxxxxx:~$ sudo shred -n 1 -vz /dev/xvdf
shred: /dev/xvdf: pass 1/2 (random)...
shred: /dev/xvdf: pass 1/2 (random)...454MiB/8.0GiB 5%
shred: /dev/xvdf: pass 1/2 (random)...2.5GiB/8.0GiB 31%
shred: /dev/xvdf: pass 1/2 (random)...4.9GiB/8.0GiB 61%
shred: /dev/xvdf: pass 1/2 (random)...7.2GiB/8.0GiB 91%
shred: /dev/xvdf: pass 1/2 (random)...8.0GiB/8.0GiB 100%
shred: /dev/xvdf: pass 2/2 (000000)...
shred: /dev/xvdf: pass 2/2 (000000)...375MiB/8.0GiB 4%
shred: /dev/xvdf: pass 2/2 (000000)...2.4GiB/8.0GiB 30%
shred: /dev/xvdf: pass 2/2 (000000)...4.8GiB/8.0GiB 60%
shred: /dev/xvdf: pass 2/2 (000000)...7.2GiB/8.0GiB 90%
shred: /dev/xvdf: pass 2/2 (000000)...8.0GiB/8.0GiB 100%

This will clear the filesystem.

ubuntu@ip-xxxxxx:~$ sudo file -s /dev/xvdf
/dev/xvdf: data

You need to create a file system on the device to make it available for use again.

For example, to format the device with ext4:
sudo mkfs -t ext4 /dev/xvdf

You can also use /dev/urandom as the source of random data:

ubuntu@ip-xxxxxx:~$ sudo shred -v --random-source=/dev/urandom -n1 /dev/DISK/TO/DELETE

2. Using the dd command


sudo dd if=/dev/zero of=/dev/DISK/TO/DELETE bs=1M
or
sudo dd if=/dev/urandom of=/dev/DISK/TO/DELETE bs=4096

The first command overwrites the whole disk with zeros, which is considerably faster than generating gigabytes of random data with the second command. Like the other tools, this won't take care of blocks that were mapped out for whatever reason (write errors, reserved areas, etc.), but it is highly unlikely any tool will recover anything from those blocks.
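
To spot-check that the zero-fill variant actually completed (a minimal sketch; /dev/xvdf is the same example device as above), you can compare the device against /dev/zero:

# cmp prints nothing and exits 0 if every byte of the device is zero
size=$(sudo blockdev --getsize64 /dev/xvdf)
sudo cmp -n "$size" /dev/xvdf /dev/zero && echo "device reads back as all zeros"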

As with shred, this clears the filesystem:

ubuntu@ip-xxxxxx:~$ sudo file -s /dev/xvdf
/dev/xvdf: data

Recreate a file system on the device (for example, sudo mkfs -t ext4 /dev/xvdf) to make it available for use again.

However, the tools discussed above are not DoD compliant. Government and defense organizations often require a Department of Defense (DoD) compliant disk wipe program to remove files securely.

3. DoD Wiping

What is DoD?

DoD 5220.22-M is a software-based data sanitization method used in various file shredder and data destruction programs to overwrite existing information on a hard drive or other storage device. Erasing a hard drive using the DoD 5220.22-M data sanitization method will prevent all software-based file recovery methods from lifting information from the drive and should also prevent most, if not all, hardware-based recovery methods.

DoD 5220.22-M Wipe Method

The DoD 5220.22-M data sanitization method is usually implemented in the following way:

Pass 1: Writes a zero and verifies the write
Pass 2: Writes a one and verifies the write
Pass 3: Writes a random character and verifies the write
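
Before turning to a dedicated tool, this pass sequence can be roughly approximated by hand with dd (a rough sketch only; it skips the verification step the standard requires, and the count is sized for the 8 GiB example volume, so it is not a substitute for a compliant tool like scrub, introduced next):

# Pass 1: zeros
sudo dd if=/dev/zero of=/dev/xvdf bs=1M
# Pass 2: ones (0xFF), generated by inverting /dev/zero with tr
dd if=/dev/zero bs=1M count=8192 | tr '\0' '\377' | sudo dd of=/dev/xvdf bs=1M
# Pass 3: random data
sudo dd if=/dev/urandom of=/dev/xvdf bs=1M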

Scrub:

The most widely used DoD wiping tool on Linux is scrub, which writes patterns over special files (i.e. raw disk devices) or regular files to make retrieving the data more difficult. Scrub implements user-selectable pattern algorithms that are compliant with DoD 5220.22-M or NNSA NAP-14.x.

The dod scrub sequence is compliant with the DoD 5220.22-M procedure for sanitizing removable and non-removable rigid disks, which requires overwriting all addressable locations with a character, its complement, then a random character, and verifying the result.

$ sudo apt-get install scrub

Once installed, wipe the data using the dod method as shown below.

$ sudo scrub -p dod /dev/xvdf
scrub: using DoD 5220.22-M patterns
scrub: please verify that device size below is correct!
scrub: scrubbing /dev/xvdf 8589934592 bytes (~8192MB)
scrub: random  |................................................|
scrub: 0x00    |................................................|
scrub: 0xff    |................................................|
scrub: verify  |................................................|
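
As with the earlier methods, the scrubbed device no longer contains a filesystem, so recreate one before reusing the volume, for example:

sudo mkfs -t ext4 /dev/xvdf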

Thursday, September 28, 2017

Data Wipe On EBS Volumes - Part I



Data destruction is an extremely important part of security; it protects sensitive data from falling into the wrong hands. Many customers and vendors look for a Certificate of Data Destruction when buying software. In this series, let's see how to securely wipe data from AWS EBS volumes.

The AWS security white paper states that:

"Amazon EBS volumes are presented to the customer as raw unformatted block devices, which have been wiped prior to being made available for use. Customers that have procedures requiring that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M (“National Industrial Security Program Operating Manual “) or NIST 800-88 (“Guidelines for Media Sanitization”), have the ability to do so on Amazon EBS. Customers should conduct a specialized wipe procedure prior to deleting the volume for compliance with their established requirements. Encryption of sensitive data is generally a good security practice, and AWS encourages users to encrypt their sensitive data via an algorithm consistent with their stated security policy."

Although AWS guarantees never to return a previous user's data via the hypervisor, as mentioned in the security white paper, we should still wipe data from an EBS volume before deleting it, as a good security practice and whenever we require a Certificate of Data Destruction.

Let us first test whether any data can be recovered from a brand-new EBS volume using data recovery software such as PhotoRec.

1. Create an AWS EC2 t2.micro instance with Ubuntu.

2. SSH to the instance and install PhotoRec (it ships as part of the testdisk package):

sudo apt-get update
sudo apt-get install testdisk

3. Create a new 8 GB gp2 EBS volume and attach it to the instance we created in step 1.

4. Check from the command line that the device is attached.

ubuntu@ip-XXXXXXXXXXX:~$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk

5. Now try to recover data from this new EBS volume using PhotoRec:

sudo photorec /dev/xvdf

---------------------------------------------------------------

PhotoRec 6.14, Data Recovery Utility, July 2013
Christophe GRENIER
http://www.cgsecurity.org

PhotoRec is free software, and
comes with ABSOLUTELY NO WARRANTY.

Select a media (use Arrow keys, then press Enter):
>Disk /dev/xvdf - 8589 MB / 8192 MiB (RO)

---------------------------------------------------------------

PhotoRec 6.14, Data Recovery Utility, July 2013
Christophe GRENIER
http://www.cgsecurity.org

Disk /dev/xvdf - 8589 MB / 8192 MiB (RO)
Partition Start End Size in sectors
P Unknown 0 0 1 1044 85 1 16777216

0 files saved in /home/ubuntu/recup_dir directory.
Recovery completed.

---------------------------------------------------------------

No files were recovered, which is perfectly fine.

6. Now let us format the drive with an ext4 file system and then try the recovery again.

ubuntu@ip-XXXXXXXXX:~$ sudo mkfs -t ext4 /dev/xvdf
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
524288 inodes, 2097152 blocks
104857 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2147483648
64 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

---------------------------------------------------------------

sudo photorec /dev/xvdf

PhotoRec 6.14, Data Recovery Utility, July 2013
Christophe GRENIER
http://www.cgsecurity.org

Disk /dev/xvdf - 8589 MB / 8192 MiB (RO)
Partition Start End Size in sectors
P ext4 0 0 1 1044 85 1 16777216

0 files saved in /home/ubuntu/recup_dir directory.
Recovery completed.

---------------------------------------------------------------

No files were recovered in this case either.
Similarly, test this with a Provisioned IOPS SSD volume as well; you will see the same results.

In Part II, we will see how we can wipe EBS volumes to the DoD 5220.22-M standard using scrub.

Sunday, September 24, 2017

Cleaning orphan snapshots in AWS EC2 to save $




When we deregister an Amazon EBS-backed AMI, it doesn't affect the snapshots that were created during the AMI creation process. We'll continue to incur storage costs for these snapshots. Therefore, if we are finished with the snapshots, we should delete them.

In fact, AWS won't make the "mistake" of cleaning up these snapshots for us; they are a source of revenue!

So we will have to take care of cleaning up the snapshots ourselves.
Here is how we can do it with the AWS Java SDK.

Download the AWS Java SDK from here.
Add it to your Java classpath/build path.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.DeleteSnapshotRequest;
import com.amazonaws.services.ec2.model.DescribeImagesRequest;
import com.amazonaws.services.ec2.model.DescribeImagesResult;
import com.amazonaws.services.ec2.model.DescribeSnapshotsRequest;
import com.amazonaws.services.ec2.model.DescribeSnapshotsResult;
import com.amazonaws.services.ec2.model.Image;
import com.amazonaws.services.ec2.model.Snapshot;

public class FindSnapshots {

 public static void main(String[] args) throws IOException {

  BasicAWSCredentials basicAWSCredentials = new BasicAWSCredentials("xx", "yyy");
  AmazonEC2Client amazonEC2Client = new AmazonEC2Client(basicAWSCredentials);
  Region region = Region.getRegion(Regions.fromName("us-west-2"));
  amazonEC2Client.setEndpoint(region.getServiceEndpoint("ec2"));

  DescribeImagesRequest withOwners = new DescribeImagesRequest().withOwners("self");
  DescribeImagesResult images = amazonEC2Client.describeImages(withOwners);
  ArrayList < String > imageIdList = new ArrayList < String > ();
  List < Image > amiList = images.getImages();
  for (Image image: amiList) {
   imageIdList.add(image.getImageId());
  }

  DescribeSnapshotsRequest withOwnerIds = new DescribeSnapshotsRequest().
  withOwnerIds("self");
  DescribeSnapshotsResult describeSnapshots = amazonEC2Client.
  describeSnapshots(withOwnerIds);
  List < Snapshot > snapshots = describeSnapshots.getSnapshots();

  // ensure snapshot size and ami size in your region.
  System.out.println(snapshots.size());
  System.out.println(amiList.size());

  int count = 0;
  int size = 0;

  // find orphans and delete.

  for (Snapshot snapshot: snapshots) {

   String description = snapshot.getDescription();

   // get AMI id of snapshot using regex from its description.
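   // AMI-created snapshot descriptions typically look like
   // "Created by CreateImage(i-...) for ami-... from vol-...",
   // so the text between "for" and "from" is the AMI id.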
   Pattern pattern = Pattern.compile("for(.*?)from");
   Matcher matcher = pattern.matcher(description);
   while (matcher.find()) {

    String amiId = matcher.group(1).trim();
     // AMI ids are currently 12 characters long ("ami-" plus 8 hex characters).
    if (!imageIdList.contains(amiId) && amiId.length() <= 12) {
     String snapshotId = snapshot.getSnapshotId();
     DeleteSnapshotRequest r =
      new DeleteSnapshotRequest(snapshotId);
     amazonEC2Client.deleteSnapshot(r);
     System.out.println(amiId);

     size += snapshot.getVolumeSize();
     count++;
    }
   }
  }
  System.out.println("Orphan Snapshots Deleted : " + count);
  System.out.println("Orphan Snapshots Size : " + size);

 }

}
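
To compile and run the class from the command line (the classpath entries below are placeholders; point them at the jars that ship with the SDK bundle you downloaded), replace the "xx"/"yyy" credential placeholders in the code with real credentials first:

# Placeholder paths - adjust to wherever you unpacked the AWS Java SDK bundle
javac -cp "aws-java-sdk/lib/*:aws-java-sdk/third-party/lib/*:." FindSnapshots.java
java  -cp "aws-java-sdk/lib/*:aws-java-sdk/third-party/lib/*:." FindSnapshots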