
Wednesday, August 15, 2018

Enabling HTTPS for your Angular-Spring Website




To enable HTTPS for your website you'll need to obtain and configure SSL/TLS certificates on your server. Start by choosing a trusted certificate provider. Several authorities issue certificates free of charge for 90 days. "Let's Encrypt" is the most popular open Certificate Authority, and "SSL For Free" issues certificates using Let's Encrypt.


1. Go to SSL FOR FREE

2. Enter the website you want to secure (IP addresses do not work; you have to enter a registered domain name)

3. Select the Manual Verification (DNS) option



4. Manually verify the domain



5. Go to the DNS management page for your domain and add the TXT records with the given key and value.



6. Now download the SSL certificate.



The downloaded archive contains a certificate, a CA bundle, and a private key. Extract the zip file and copy the certificate and key to your Ubuntu server.

Now you have to install this certificate on both the client and the server.

Client Installation.

For the Angular client, you need to add the following options to the ng serve command in package.json:

--port 443 --disableHostCheck true --ssl --ssl-cert /home/ubuntu/certificate.crt --ssl-key /home/ubuntu/private.key

Note that the port is 443; you have to open port 443 in your Security Group / firewall. With the options added, the scripts section of package.json looks like this:

"scripts": {
  "ng": "ng",
  "start": "ng serve --host 0.0.0.0 --port 443 --disableHostCheck true --ssl --ssl-cert /home/ubuntu/certificate.crt --ssl-key /home/ubuntu/private.key",
  "build": "ng build --prod",
  "test": "ng test",
  "lint": "ng lint",
  "e2e": "ng e2e"
}

Restart client.

Server Installation.

This tutorial assumes that your server is a Java Spring Boot application.
You need to generate a keystore for the server:
openssl pkcs12 -export -in certificate.crt -inkey private.key -out keystore.p12 -name server

The above command will ask for an export password. Enter one and remember it; it is used as the keystore password below.

Go to the application.properties file and add the following key-value pairs:

server.port: 8443
server.ssl.key-store: keystore.p12
server.ssl.key-store-password: <your_password>
server.ssl.keyStoreType: PKCS12
server.ssl.keyAlias: server

Now clean build and restart the server.

For a Python Flask server:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(host='0.0.0.0', ssl_context=('/home/ubuntu/certificate.crt', '/home/ubuntu/private.key'))

The port is 5000 by default.

Now you can access your website using https://your-domain.com

Also, import the certificate into the Java trust store.

Import to trust store :
keytool -import -alias server -keystore /usr/lib/jvm/java-8-oracle/jre/lib/security/cacerts -file /home/ubuntu/certificate.crt

To find your Java home on Ubuntu:

readlink -f /usr/bin/java | sed "s:bin/java::"

so the cacerts path used in the keytool command above can also be written as:

$(readlink -f /usr/bin/java | sed "s:bin/java::")lib/security/cacerts

Monday, February 12, 2018

Tectonic (Enterprise Kubernetes) on AWS with Terraform - PART 1

Create a CoreOS account here : https://account.coreos.com/login
You can use your Gmail to sign in and get a free license for 10 nodes.



Create a t2.small EC2 Ubuntu 64-bit machine and log in.

$sudo apt-get update

$sudo apt install gnupg2
$sudo apt install unzip
$sudo apt install awscli




$curl -O https://releases.tectonic.com/releases/tectonic_1.8.4-tectonic.3.zip
$curl -O https://releases.tectonic.com/releases/tectonic_1.8.4-tectonic.3.zip.sig
$gpg2 --keyserver pgp.mit.edu --recv-key 18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E
$gpg2 --verify tectonic_1.8.4-tectonic.3.zip.sig tectonic_1.8.4-tectonic.3.zip

$unzip tectonic_1.8.4-tectonic.3.zip
$cd tectonic_1.8.4-tectonic.3

$export PATH=$(pwd)/tectonic-installer/linux:$PATH
$terraform init platforms/aws

$export CLUSTER=my-cluster
$mkdir -p build/${CLUSTER}
$cp examples/terraform.tfvars.aws build/${CLUSTER}/terraform.tfvars



vi build/${CLUSTER}/terraform.tfvars

Make sure you set these properties

tectonic_aws_region = "ap-south-1"
tectonic_base_domain = "yourdomain.com" // your base domain from Route 53
tectonic_license_path = "/home/ubuntu/license.txt"
tectonic_pull_secret_path = "/home/ubuntu/pullsecret.json"
tectonic_cluster_name = "test" // your cluster name



Note: the pull secret and license files are available in your CoreOS account.

Save the changes (:wq).

$aws configure

AWS Access Key ID : Enter Access Key ID here
AWS Secret Access Key :Enter Secret Key here
Default region name: ap-south-1
Default output format: Leave Empty

$export TF_VAR_tectonic_admin_email="your google email used for CoreOS"
$export TF_VAR_tectonic_admin_password="your password"

$ terraform plan -var-file=build/${CLUSTER}/terraform.tfvars platforms/aws
$ terraform apply -var-file=build/${CLUSTER}/terraform.tfvars platforms/aws

After a few minutes (5 to 10), the cluster will be up and you can access it at:
https://test.yourdomain.com

The username and password are the same as for your CoreOS account.

Accessing the cluster with the kubectl command line:



Now download kubectl-config and kubectl files from your cluster.

$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/kubectl
$ mkdir -p ~/.kube/ # create the directory
$ cp path/to/file/kubectl-config-test $HOME/.kube/config # rename the file and copy it into the directory
$ export KUBECONFIG=$HOME/.kube/config

Try listing the nodes to check that you can reach the cluster:

$ kubectl get nodes

In the next entry, we will see how to deploy a simple application with the kubectl command line.


Thursday, September 28, 2017

Data Wipe On EBS Volumes - Part I



Data destruction is an extremely important part of security; it protects sensitive data from falling into the wrong hands. Many customers and vendors look for a Certificate of Data Destruction when buying software. In this series, let's see how to securely wipe data off AWS EBS volumes.

The AWS security white paper states that:

"Amazon EBS volumes are presented to the customer as raw unformatted block devices, which have been wiped prior to being made available for use. Customers that have procedures requiring that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M (“National Industrial Security Program Operating Manual “) or NIST 800-88 (“Guidelines for Media Sanitization”), have the ability to do so on Amazon EBS. Customers should conduct a specialized wipe procedure prior to deleting the volume for compliance with their established requirements. Encryption of sensitive data is generally a good security practice, and AWS encourages users to encrypt their sensitive data via an algorithm consistent with their stated security policy."

Although AWS guarantees never to return a previous user's data via the hypervisor, as mentioned in their security white paper, we should still wipe data from an EBS volume before deleting it, as a good security practice and whenever we require a Certificate of Data Destruction.

Let us first test whether any data can be recovered from a brand-new EBS volume using data recovery software such as PhotoRec.

1. Create an AWS EC2 t2.micro instance with Ubuntu.

2. SSH to the instance and install PhotoRec (it is part of the testdisk package):

sudo apt-get update
sudo apt-get install testdisk

3. Create a new gp2 EBS volume of size 8 GB and attach it to the instance we created in step 1.

4. Check on the command line that the device is attached:

ubuntu@ip-XXXXXXXXXXX:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk

5. Now try to recover data from this new EBS volume using photorec

sudo photorec /dev/xvdf

--------------------------------------------------------------- ---------------------------------------------------------------------

PhotoRec 6.14, Data Recovery Utility, July 2013
Christophe GRENIER
http://www.cgsecurity.org

PhotoRec is free software, and
comes with ABSOLUTELY NO WARRANTY.

Select a media (use Arrow keys, then press Enter):
>Disk /dev/xvdf - 8589 MB / 8192 MiB (RO)

--------------------------------------------------------------- ---------------------------------------------------------------

PhotoRec 6.14, Data Recovery Utility, July 2013
Christophe GRENIER
http://www.cgsecurity.org

Disk /dev/xvdf - 8589 MB / 8192 MiB (RO)
Partition Start End Size in sectors
P Unknown 0 0 1 1044 85 1 16777216

0 files saved in /home/ubuntu/recup_dir directory.
Recovery completed.

--------------------------------------------------------------- -------------------------------------------------------------

No files were recovered, which is perfectly fine.

6. Now let us format the drive with the ext4 file system and then try to recover again.

ubuntu@ip-XXXXXXXXX:~$ sudo mkfs -t ext4 /dev/xvdf
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
524288 inodes, 2097152 blocks
104857 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2147483648
64 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

--------------------------------------------------------------- ---------------------------------------------------------------------

sudo photorec /dev/xvdf

PhotoRec 6.14, Data Recovery Utility, July 2013
Christophe GRENIER
http://www.cgsecurity.org

Disk /dev/xvdf - 8589 MB / 8192 MiB (RO)
Partition Start End Size in sectors
P ext4 0 0 1 1044 85 1 16777216

0 files saved in /home/ubuntu/recup_dir directory.
Recovery completed.

--------------------------------------------------------------- ------------------------------------------------------------------------

No files were recovered in this case either.
Similarly, test this with a Provisioned IOPS SSD volume as well; you will see the same results.

In part 2, we will see how to wipe EBS volumes to the DoD 5220.22-M standard using scrub.

Sunday, September 24, 2017

Cleaning orphan snapshots in AWS EC2 to save $




When we deregister an Amazon EBS-backed AMI, it doesn't affect the snapshots that were created during the AMI creation process. We'll continue to incur storage costs for these snapshots. Therefore, once we are finished with the snapshots, we should delete them.

In fact, AWS won't make the mistake of cleaning up these snapshots for us; the storage is revenue for them!

So we will have to take care of cleaning up snapshots.
Here is how we can do it with the AWS Java SDK.

Download the AWS Java SDK and add it to your Java classpath/build path.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.DeleteSnapshotRequest;
import com.amazonaws.services.ec2.model.DescribeImagesRequest;
import com.amazonaws.services.ec2.model.DescribeImagesResult;
import com.amazonaws.services.ec2.model.DescribeSnapshotsRequest;
import com.amazonaws.services.ec2.model.DescribeSnapshotsResult;
import com.amazonaws.services.ec2.model.Image;
import com.amazonaws.services.ec2.model.Snapshot;

public class FindSnapshots {

 public static void main(String[] args) throws IOException {

  BasicAWSCredentials basicAWSCredentials = new BasicAWSCredentials("xx", "yyy");
  AmazonEC2Client amazonEC2Client = new AmazonEC2Client(basicAWSCredentials);
  Region region = Region.getRegion(Regions.fromName("us-west-2"));
  amazonEC2Client.setEndpoint(region.getServiceEndpoint("ec2"));

  DescribeImagesRequest withOwners = new DescribeImagesRequest().withOwners("self");
  DescribeImagesResult images = amazonEC2Client.describeImages(withOwners);
  ArrayList<String> imageIdList = new ArrayList<String>();
  List<Image> amiList = images.getImages();
  for (Image image: amiList) {
   imageIdList.add(image.getImageId());
  }

  DescribeSnapshotsRequest withOwnerIds = new DescribeSnapshotsRequest().withOwnerIds("self");
  DescribeSnapshotsResult describeSnapshots = amazonEC2Client.describeSnapshots(withOwnerIds);
  List<Snapshot> snapshots = describeSnapshots.getSnapshots();

  // ensure snapshot size and ami size in your region.
  System.out.println(snapshots.size());
  System.out.println(amiList.size());

  int count = 0;
  int size = 0;

  // find orphans and delete.

  for (Snapshot snapshot: snapshots) {

   String description = snapshot.getDescription();

   // get AMI id of snapshot using regex from its description.
   Pattern pattern = Pattern.compile("for(.*?)from");
   Matcher matcher = pattern.matcher(description);
   while (matcher.find()) {

    String amiId = matcher.group(1).trim();
    // older-format AMI IDs are 12 characters long ("ami-" plus 8 hex characters).
    if (!imageIdList.contains(amiId) && amiId.length() <= 12) {
     String snapshotId = snapshot.getSnapshotId();
     DeleteSnapshotRequest r =
      new DeleteSnapshotRequest(snapshotId);
     amazonEC2Client.deleteSnapshot(r);
     System.out.println(amiId);

     size += snapshot.getVolumeSize();
     count++;
    }
   }
  }
  System.out.println("Orphan Snapshots Deleted : " + count);
  System.out.println("Orphan Snapshots Size : " + size);

 }

}
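
If you prefer Python, roughly the same cleanup can be written with boto3. Treat this as a sketch: it assumes credentials and region come from aws configure (or the standard AWS config files) and it does not handle result pagination.

import re
import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')

# IDs of all AMIs owned by this account.
ami_ids = [image['ImageId'] for image in ec2.describe_images(Owners=['self'])['Images']]

count = 0
size = 0
for snap in ec2.describe_snapshots(OwnerIds=['self'])['Snapshots']:
    # The AMI id is embedded in the snapshot description, e.g.
    # "Created by CreateImage(i-...) for ami-... from vol-...".
    match = re.search(r'for\s+(ami-\S+)\s+from', snap.get('Description', ''))
    if match and match.group(1) not in ami_ids:
        ec2.delete_snapshot(SnapshotId=snap['SnapshotId'])
        count += 1
        size += snap['VolumeSize']

print('Orphan Snapshots Deleted : %d' % count)
print('Orphan Snapshots Size : %d' % size)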

Thursday, December 15, 2016

Create Dynamo DB Table using Cloud Formation Template


AWS CloudFormation simplifies provisioning and management on AWS. It gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. To provision and configure stack resources, we must understand AWS CloudFormation templates, which are formatted text files in JSON or YAML. These templates describe the resources that we want to provision in our AWS CloudFormation stacks. We can use the AWS CloudFormation Designer or any text editor to create and save templates.

Let us see how the Designer works in another blog entry. For now, you can play with the Designer if you wish:
https://console.aws.amazon.com/cloudformation/designer

Creating a DynamoDB Table Using an AWS CloudFormation Template.

1. We need to create a custom IAM policy "createstack". This policy helps us execute the aws cloudformation create-stack command.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1449904348000",
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStack"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}


2. Attach the policy created above to the user account that will be used to create the DynamoDB table.

3. Install awscli and configure it:
  a. sudo apt-get install awscli
  b. aws configure
        Now give the proper details, such as the access key and region. This will create a config file in the ~/.aws directory.
  c. Test that it is configured properly using the command 'aws s3 ls'. This should list all your S3 buckets, if you have any.

4. Create a CloudFormation template like the one below for creating the DynamoDB table.


{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "myDynamoDBTable" : {
      "Type" : "AWS::DynamoDB::Table",
      "Properties" : {
        "AttributeDefinitions" : [
          {
            "AttributeName" : "Name",
            "AttributeType" : "S"   
          },
          {
            "AttributeName" : "Age",
            "AttributeType" : "S"
          }
        ],
        "KeySchema" : [
          {
            "AttributeName" : "Name",
            "KeyType" : "HASH"
          },
          {
            "AttributeName" : "Age",
            "KeyType" : "RANGE"
          }
        ],
        "ProvisionedThroughput" : {
          "ReadCapacityUnits" : "5",
          "WriteCapacityUnits" : "5"
        },
        "TableName" : "Person"
      }
    }
  }
}


5. Save the above file in an S3 bucket and copy the URL of the file.

6. Now, on your command line, you can enter the following command to create the DynamoDB table:

aws cloudformation create-stack --stack-name <stack_name> --template-url <s3_bucket_template_url>
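
The same call can also be made from Python with boto3 if you prefer code over the CLI. A minimal sketch (the stack name and template URL below are placeholders for your own values):

import boto3

cloudformation = boto3.client('cloudformation', region_name='us-west-2')

# Launch the stack from the template stored in S3.
cloudformation.create_stack(
    StackName='person-table-stack',
    TemplateURL='https://s3.amazonaws.com/<your_bucket>/<template_file>.json'
)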

----------------------------------------------------------------------------------------------------------------------------

Now, this is quite a manual process. The table can also be created directly from code, without CloudFormation (in Python, Java, Node.js, etc.; let's see Python for now).

Here is how.

1. Create a config file like below. Save it as 'awsconfig'

[default]
aws_access_key_id = xxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxx
region = us-west-2

2. Create a shell script like below.

sudo apt-get install -y awscli python
curl -s 'https://bootstrap.pypa.io/get-pip.py' | sudo python2.7 && sudo pip install boto3 awscli
mkdir -p ~/.aws
cp awsconfig ~/.aws/config

The above script installs all the tools required for Python to create the DynamoDB table. Give the script execute permission and run it.

3. Create a Python file with the following code:


from __future__ import print_function # Python 2/3 compatibility
import boto3

dynamodb = boto3.resource('dynamodb', region_name='us-west-2')


table = dynamodb.create_table(
    TableName='Person',
    KeySchema=[
        {
            'AttributeName': 'name',
            'KeyType': 'HASH'  #Partition key
        },
        {
            'AttributeName': 'age',
            'KeyType': 'RANGE'  #Sort key
        }
    ],
    AttributeDefinitions=[
        {
            'AttributeName': 'name',
            'AttributeType': 'S'
        },
        {
            'AttributeName': 'age',
            'AttributeType': 'N'
        },

    ],
    ProvisionedThroughput={
        'ReadCapacityUnits': 5,
        'WriteCapacityUnits': 5
    }
)

print("Table status:", table.table_status)


4. Execute the Python code:

  python create_dynamo_table.py

5. Check the DynamoDB service; a table called Person should have been created.
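
You can also verify from code by waiting for the table to become active and writing and reading back a test item. A small sketch reusing the same region (the item values here are made up):

from __future__ import print_function
import boto3

dynamodb = boto3.resource('dynamodb', region_name='us-west-2')

# Wait until the Person table has finished creating.
dynamodb.meta.client.get_waiter('table_exists').wait(TableName='Person')

table = dynamodb.Table('Person')
table.put_item(Item={'name': 'Alice', 'age': 30})
print(table.get_item(Key={'name': 'Alice', 'age': 30}).get('Item'))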


Sunday, December 11, 2016

AWS RDS - Take Snapshot, Delete Instance and Restore Instance using Snapshot - Scheduled Automation using Lambda

Create a test RDS instance of class db.t2.micro (free tier), name it testdb, provide all the parameters, and create it.

1. Create an IAM role for Lambda with the following policy (IAM → Roles → Create New Role):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Action": [
                "rds:AddTagsToResource",
                "rds:CopyDBSnapshot",
                "rds:CopyDBClusterSnapshot",
                "rds:DeleteDBInstance",
                "rds:DeleteDBSnapshot",
                "rds:RestoreDBInstanceFromDBSnapshot",
                "rds:Describe*",
                "rds:ListTagsForResource"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

2. Create a Lambda function that deletes the RDS instance after taking a final snapshot.



3. Select Blank Function.



4. Configure Trigger Using CloudWatch Events – Schedule.



5. Enter a Rule Name, Rule Description and a Schedule Expression in UTC, e.g. cron(0 21 ? * MON-FRI *) - this triggers every Monday to Friday at 9 PM UTC.



6. Select Python 2.7 and write the Lambda function (change db_instance and region accordingly):

import boto3  
import datetime  
import time  
import sys

db_instance='testdb'  
region='us-west-2'

def lambda_handler(event, context):  
    try:
        date = time.strftime("-%d-%m-%Y")
        snapshot_name = db_instance + date
        source = boto3.client('rds', region_name=region)
        # Delete the instance, taking a final snapshot named <db_instance>-<dd>-<mm>-<yyyy>.
        source.delete_db_instance(DBInstanceIdentifier=db_instance,
                                  SkipFinalSnapshot=False,
                                  FinalDBSnapshotIdentifier=snapshot_name)
    except Exception as e:
        raise e
    print '[main] End'


7. Select the existing IAM role that we created in step 1.
8. Create the Lambda.
9. Test the function and wait until the snapshot is created and the instance is deleted.




Restore :

1. Create a Lambda trigger at 9 AM UTC.



2. Add lambda code.



3. Write the Lambda function:

import boto3  
import botocore  
import datetime  
import re  
import logging

region='us-west-2'  
db_instance_class='db.t2.micro'  
db_subnet='default'  
instances = ['testdb']

print('Loading function')

def byTimestamp(snap):  
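  # Sort key: use SnapshotCreateTime when present; a snapshot that is still
  # being created does not have it yet, so fall back to the current time.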
  if 'SnapshotCreateTime' in snap:
    return datetime.datetime.isoformat(snap['SnapshotCreateTime'])
  else:
    return datetime.datetime.isoformat(datetime.datetime.now())

def lambda_handler(event, context):  
    source = boto3.client('rds', region_name=region)
    for instance in instances:
        try:
            source_snaps = source.describe_db_snapshots(DBInstanceIdentifier = instance)['DBSnapshots']
            print "DB_Snapshots:", source_snaps
            source_snap = sorted(source_snaps, key=byTimestamp, reverse=True)[0]['DBSnapshotIdentifier']
            snap_id = (re.sub( '-\d\d-\d\d-\d\d\d\d ?', '', source_snap))
            print('Will restore %s to %s' % (source_snap, snap_id))
            response = source.restore_db_instance_from_db_snapshot(DBInstanceIdentifier=snap_id,DBSnapshotIdentifier=source_snap,DBInstanceClass=db_instance_class, DBSubnetGroupName=db_subnet,MultiAZ=False,PubliclyAccessible=True)
            print(response)

        except botocore.exceptions.ClientError as e:
            raise Exception("Could not restore: %s" % e)


4. Select the IAM role.
5. Create the function.
6. Test the function.