Wednesday, December 27, 2017

Redirecting Traffic from 80 to 8080





$ sudo iptables -A INPUT -i eth0 -p tcp --dport 80 -j ACCEPT
$ sudo iptables -A INPUT -i eth0 -p tcp --dport 8080 -j ACCEPT
$ sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
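To confirm the rules are in place, list them with line numbers:

$ sudo iptables -L INPUT -n --line-numbers
$ sudo iptables -t nat -L PREROUTING -n --line-numbers

Note that these rules do not survive a reboot on their own; on Ubuntu, a package such as iptables-persistent can save and restore them.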


Friday, October 27, 2017

Teamcity setup on Ubuntu using docker images


TeamCity is a Java-based build management and continuous integration server from JetBrains. In this tutorial we will see a very basic example of setting up a TeamCity server and agent using Docker images.



sudo apt install docker.io
sudo usermod -aG docker $USER
logout

//login again using ssh

//pull server
docker pull jetbrains/teamcity-server
//pull agent
docker pull jetbrains/teamcity-agent
cd
mkdir -p ~/tcdata/server/data
mkdir -p ~/tcdata/server/logs
mkdir -p ~/tcdata/agent/conf

//Start Containers in the background.

docker run -itd --name teamcity-server-instance -v /home/ubuntu/tcdata/server/data:/data/teamcity_server/datadir -v /home/ubuntu/tcdata/server/logs:/opt/teamcity/logs -p 8111:8111 jetbrains/teamcity-server

docker run -itd -e SERVER_URL="http://server-ip:8111" -v /home/ubuntu/tcdata/agent/conf:/data/teamcity_agent/conf jetbrains/teamcity-agent
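Before opening the UI, you can check that both containers came up (the agent container was not given a name, so it will show an auto-generated one):

docker ps
docker logs teamcity-server-instance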

Access server here:

http://server-ip:8111

Select the local HSQLDB database, create a user, and log in.
Go to the Agents tab and authorize the agent.


Sunday, October 15, 2017

Hygieia authentication using LDAP

Please refer "Installing Hygieia Dashboard on Ubuntu 16.04" blog entry to setup hygieia , before you setup LDAP aunthentication.

LDAP stands for “Lightweight Directory Access Protocol”. It is a simplification of the X.500 Directory Access Protocol (DAP) used to access directory information. A directory is essentially a special-purpose database optimized to handle identity-related information. The LDAP standard also defines a data model based on the X.500 data model. It is a hierarchical data model, with objects arranged in a hierarchical structure, and each object containing a collection of attributes. The overall structure of any particular directory is defined by its schema, much like a database schema defines the tables and columns.

LDAP is typically used for data that is read frequently but updated rarely. One of the main applications of LDAP is authentication, because user authentication data is updated rarely but read every time the user logs in. An authentication request can originate from a Linux/Windows client machine or from applications like Jenkins, and it authenticates against a remote LDAP server where the authentication data is stored.

LDAP defines a “Bind” operation that authenticates the LDAP connection and establishes a security context for subsequent operations on that connection. There are two authentication methods defined in RFC 4513, simple and SASL. The simple authentication method has the LDAP client send the username (as an LDAP distinguished name) and password (in clear text) to the LDAP server. The LDAP server looks up the object with that username in the directory, compares the password provided to the password(s) stored with the object, and authenticates the connection if they match. Because the password is provided in clear text, LDAP simple Binds should only be done over a secure TLS connection.
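As a quick illustration of a simple Bind, you can exercise it from the command line with ldapsearch (this assumes the ldap-utils package is installed; the server and credentials are those of the public test server used later in this tutorial):

$ ldapsearch -x -H ldap://ldap.forumsys.com -D "uid=euclid,dc=example,dc=com" -w password -b "dc=example,dc=com" "(uid=euclid)"

Here -x requests simple authentication, -D is the bind DN and -w the password; a successful bind returns the matching entry.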

LDAP with Hygieia

You can set up your own LDAP server, but that is time-consuming. For testing purposes, there is an online LDAP test server available, which we will use in this tutorial.

1. First install Apache Directory Studio and verify that this online LDAP server is working.

Create a new LDAP connection.

Host: ldap.forumsys.com
Port: 389
Bind DN: uid=euclid,dc=example,dc=com
Password: password








2. Once you are able to connect to the test LDAP server successfully, update dashboard.properties in the api folder and restart the API.

$ cd Hygieia/api
~/Hygieia/api$ vi dashboard.properties

----------------------------------------------------------------------------------------

# dashboard.properties
dbname=dashboarddb
dbusername=dashboarduser
dbpassword=dbpassword
auth.authenticationProviders=LDAP,STANDARD
auth.ldapServerUrl=ldap://ldap.forumsys.com:389/dc=example,dc=com
auth.ldapUserDnPattern=uid={0}

----------------------------------------------------------------------------------------

~/Hygieia/api$ java -jar target/api.jar --spring.config.location=dashboard.properties -Djasypt.encryptor.password=hygieiasecret

Here is how your Hygieia login screen looks now:



You can use the LDAP entry euclid/password to log in to Hygieia.

You can create a test dashboard in Hygieia with this LDAP user and check the Mongo entry for the dashboard. You can see that a flag has been added to identify the user as an LDAP user.
$ mongo
> use dashboarddb
> db.getCollection('dashboards').find({})
{
  "_id" : ObjectId("59e324cf178d2f23ccac05b0"),
  "_class" : "com.capitalone.dashboard.model.Dashboard",
  "template" : "splitview",
  "title" : "TEstApp",
  "widgets" : [ ],
  "owners" : [ { "username" : "euclid", "authType" : "LDAP" } ],
  "type" : "Team",
  "application" : { "name" : "TEstApp", "components" : [ DBRef("components", ObjectId("59e324cf178d2f23ccac05af")) ] },
  "validServiceName" : false,
  "validAppName" : false,
  "remoteCreated" : false
}

Monday, October 2, 2017

VPC Endpoint to Access S3

Create an S3 Access IAM Role.



IAM roles are a secure way to grant permissions to entities that you trust. For example, application code running on an EC2 instance that needs to act on AWS resources such as S3 can use an IAM role to do that.






1. Go to IAM -> Roles -> Create New Role



2. Select "EC2" and in "Permissions" select AmazonS3FullAccess.



3. Give a Role Name, Description and create a role.

This role lets us access S3 from an EC2 instance.

Now create a t2.micro Ubuntu EC2 instance in a private subnet, from an AMI that already has awscli (the AWS command line tools) installed, and attach the IAM role we created.

The private subnet should be completely private; that is, the subnet should not even have a route to the internet through a NAT instance.



Now connect to the machine using ssh and your key. Since the machine already has awscli installed, you can try accessing S3 like below.

$aws s3 ls

This will not work; it fails with a timeout.

Why does it fail even though we have an S3 access role assigned to that EC2 instance?
Because this instance is in a private subnet with no access to the internet, and S3 does not reside inside any VPC; its endpoints are public in nature.
Without a VPC endpoint, any request to S3 has to go over the internet.

But how do I access S3 from a completely private machine then?
For that purpose, AWS provides S3 VPC endpoints, which can be used to connect a VPC with S3.



Our request failed because the route table associated with our private subnet does not yet have a route to S3 through a VPC endpoint.

Let's add a VPC Endpoint.



Select your VPC and the S3 service, and continue.



Select the route table which is associated with your private subnet.



A rule with destination pl-id (com.amazonaws.us-west-2.s3) and a target of this endpoint's ID (e.g. vpce-12345678) will be added to the route tables you selected.
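The same gateway endpoint can also be created from awscli instead of the console (a sketch; the vpc-id and route-table-id values below are placeholders for your own):

$ aws ec2 create-vpc-endpoint --vpc-id vpc-xxxxxxxx --service-name com.amazonaws.us-west-2.s3 --route-table-ids rtb-xxxxxxxx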

Now that we have a VPC endpoint, try to access S3 from the private EC2 instance again.

$ aws s3 ls

This will also fail with a timeout, because awscli by default sends requests to the global S3 URL (s3.amazonaws.com).

Set an environment variable with your region.

$ export AWS_DEFAULT_REGION=us-west-2
$ aws s3 ls

This should list your buckets in the us-west-2 region (the VPC router will route your request to the regional endpoint, s3.us-west-2.amazonaws.com).
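Alternatively, you can pass the region on each command instead of exporting the variable:

$ aws s3 ls --region us-west-2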

You have now successfully accessed S3 without internet access, from an EC2 instance residing in a VPC's private subnet.

Saturday, September 30, 2017

Data Wipe On EBS Volumes - Part II

Securely erasing/data wiping EBS volumes:

When you delete a file using the default commands of the operating system (for example “rm” in Linux/BSD/macOS/UNIX, “del” in DOS, or emptying the Recycle Bin in Windows), the operating system does NOT erase the file's contents; they remain on your hard disk. So we need to explicitly wipe the contents of the disk. Data wiping is the process of logically removing data from a read/write medium so that it can no longer be read.

Methods in Linux :

I will discuss some of the data wiping methods available on a Linux system.

1. shred



shred is a command-line utility which overwrites data in a file or a whole device with random bits, making it nearly impossible to recover.

# shred -n 1 -vz /dev/xvdf

Make sure it is the correct device; picking the wrong device will wipe it.


This overwrites the device once (-n 1), showing progress (-v), and adds a final overwrite with zeros to hide the shredding (-z).
(The default is 3 overwrite passes; use a higher -n value for a more secure wipe.)
ubuntu@ip-xxxxxxxxx:~$ sudo shred -n 1 -vz /dev/xvdf
shred: /dev/xvdf: pass 1/2 (random)...
shred: /dev/xvdf: pass 1/2 (random)...454MiB/8.0GiB 5%
shred: /dev/xvdf: pass 1/2 (random)...2.2GiB/8.0GiB 27%
shred: /dev/xvdf: pass 1/2 (random)...4.0GiB/8.0GiB 50%
shred: /dev/xvdf: pass 1/2 (random)...6.1GiB/8.0GiB 76%
shred: /dev/xvdf: pass 1/2 (random)...8.0GiB/8.0GiB 100%
shred: /dev/xvdf: pass 2/2 (000000)...
shred: /dev/xvdf: pass 2/2 (000000)...375MiB/8.0GiB 4%
shred: /dev/xvdf: pass 2/2 (000000)...2.1GiB/8.0GiB 26%
shred: /dev/xvdf: pass 2/2 (000000)...4.2GiB/8.0GiB 53%
shred: /dev/xvdf: pass 2/2 (000000)...6.3GiB/8.0GiB 79%
shred: /dev/xvdf: pass 2/2 (000000)...8.0GiB/8.0GiB 100%

This will clear the filesystem.

ubuntu@ip-xxxxxx:~$ sudo file -s /dev/xvdf
/dev/xvdf: data

You need to create the file system on the device to make it available for use again.

For example, format the device with ext4:
sudo mkfs -t ext4 /dev/xvdf

you can also use /dev/urandom as the source of random data:

ubuntu@ip-xxxxxx:~$ sudo shred -v --random-source=/dev/urandom -n1 /dev/DISK/TO/DELETE

2. Using the dd command


sudo dd if=/dev/zero of=/dev/DISK/TO/DELETE bs=1M
or
sudo dd if=/dev/urandom of=/dev/DISK/TO/DELETE bs=4096

This will overwrite the whole disk with zeros (or random data); writing zeros is considerably faster than generating gigabytes of random data. Like all the other tools, this won't take care of blocks that were mapped out for whatever reason (write errors, reserved, etc.), but it's highly unlikely any tool will recover anything from those blocks.
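With a reasonably recent GNU coreutils you can also ask dd for a progress report and a final flush to disk (a small variation on the command above; status=progress and conv=fsync are standard GNU dd operands):

sudo dd if=/dev/zero of=/dev/DISK/TO/DELETE bs=1M status=progress conv=fsync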

As with shred, this clears the filesystem (sudo file -s /dev/xvdf will report plain data afterwards), so you need to create a file system on the device to make it available for use again, for example:

sudo mkfs -t ext4 /dev/xvdf

However, the tools discussed above are not DoD compliant. Government and defense organizations ask for a Department of Defense (DoD) compliant disk wipe program to remove files securely.

3. DOD Wiping

What is DoD?

DoD 5220.22-M is a software-based data sanitization method used in various file shredder and data destruction programs to overwrite existing information on a hard drive or other storage devices. Erasing a hard drive using the DoD 5220.22-M data sanitization method will prevent all software based file recovery methods from lifting information from the drive and should also prevent most if not all hardware based recovery methods.

DoD 5220.22-M Wipe Method

The DoD 5220.22-M data sanitization method is usually implemented in the following way:

Pass 1: Writes a zero and verifies the write
Pass 2: Writes a one and verifies the write
Pass 3: Writes a random character and verifies the write

Scrub :

The most widely used DoD wiping tool on Linux is scrub, which writes patterns on special files (i.e. raw disk devices) or regular files to make retrieving the data more difficult. Scrub implements user-selectable pattern algorithms that are compliant with DoD 5220.22-M or NNSA NAP-14.x.

The dod scrub sequence is compliant with the DoD 5220.22-M procedure for sanitizing removable and non-removable rigid disks, which requires overwriting all addressable locations with a character, its complement, then a random character, and verifying.

$ sudo apt-get install scrub

Once installed, wipe data using dod method like below.

$ sudo scrub -p dod /dev/xvdf
scrub: using DoD 5220.22-M patterns
scrub: please verify that device size below is correct!
scrub: scrubbing /dev/xvdf 8589934592 bytes (~8192MB)
scrub: random |................................................|
scrub: 0x00   |................................................|
scrub: 0xff   |................................................|
scrub: verify |................................................|

Thursday, September 28, 2017

Data Wipe On EBS Volumes - Part I



Data destruction is an extremely important part of security, protecting sensitive data from falling into the wrong hands. Many customers/vendors look for a Certificate of Data Destruction while buying software. In this series, let's see how to securely wipe data from AWS EBS volumes.

The AWS security white paper states that:

"Amazon EBS volumes are presented to the customer as raw unformatted block devices, which have been wiped prior to being made available for use. Customers that have procedures requiring that all data be wiped via a specific method, such as those detailed in DoD 5220.22-M (“National Industrial Security Program Operating Manual “) or NIST 800-88 (“Guidelines for Media Sanitization”), have the ability to do so on Amazon EBS. Customers should conduct a specialized wipe procedure prior to deleting the volume for compliance with their established requirements. Encryption of sensitive data is generally a good security practice, and AWS encourages users to encrypt their sensitive data via an algorithm consistent with their stated security policy."

Although AWS guarantees never to return a previous user's data via the hypervisor, as mentioned in their security white paper, we should still wipe data from an EBS volume before deleting it, as a good security practice, if we require a Certificate of Data Destruction.

Let us first test whether any data can be recovered from a new EBS volume using data recovery software such as PhotoRec.

1. Create an AWS EC2 t2.micro instance with Ubuntu.

2. ssh to the instance and install PhotoRec

sudo apt-get update
sudo apt-get install testdisk

3. Create a new gp2 EBS volume of size 8GB and attach it to the instance we created in step 1.

4. Check on the command line that the device is attached.

ubuntu@ip-XXXXXXXXXXX:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk

5. Now try to recover data from this new EBS volume using PhotoRec.

sudo photorec /dev/xvdf

--------------------------------------------------------------- ---------------------------------------------------------------------

PhotoRec 6.14, Data Recovery Utility, July 2013
Christophe GRENIER
http://www.cgsecurity.org

PhotoRec is free software, and
comes with ABSOLUTELY NO WARRANTY.

Select a media (use Arrow keys, then press Enter):
>Disk /dev/xvdf - 8589 MB / 8192 MiB (RO)

--------------------------------------------------------------- ---------------------------------------------------------------

PhotoRec 6.14, Data Recovery Utility, July 2013
Christophe GRENIER
http://www.cgsecurity.org

Disk /dev/xvdf - 8589 MB / 8192 MiB (RO)
Partition Start End Size in sectors
P Unknown 0 0 1 1044 85 1 16777216

0 files saved in /home/ubuntu/recup_dir directory.
Recovery completed.

--------------------------------------------------------------- -------------------------------------------------------------

No files were recovered, which is perfectly fine.

6. Now let us format the drive with the ext4 file system and then try to recover again.

ubuntu@ip-XXXXXXXXX:~$ sudo mkfs -t ext4 /dev/xvdf
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
524288 inodes, 2097152 blocks
104857 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2147483648
64 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

--------------------------------------------------------------- ---------------------------------------------------------------------

sudo photorec /dev/xvdf

PhotoRec 6.14, Data Recovery Utility, July 2013
Christophe GRENIER
http://www.cgsecurity.org

Disk /dev/xvdf - 8589 MB / 8192 MiB (RO)
Partition Start End Size in sectors
P ext4 0 0 1 1044 85 1 16777216

0 files saved in /home/ubuntu/recup_dir directory.
Recovery completed.

--------------------------------------------------------------- ------------------------------------------------------------------------

No files were recovered in this case either.
Similarly, test this with a Provisioned IOPS SSD as well; you will see the same results.

In Part 2, we will see how we can wipe EBS volumes with DoD 5220.22-M using scrub.

Sunday, September 24, 2017

Cleaning orphan snapshots in AWS EC2 to save $




When we deregister an Amazon EBS-backed AMI, it doesn't affect the snapshots that were created during the AMI creation process. We'll continue to incur storage costs for these snapshots. Therefore, if we are finished with the snapshots, we should delete them.

In fact, AWS won't make the mistake of cleaning up snapshots for us; it's revenue for them!

So we will have to take care of cleaning up snapshots.
Here is how we can do it with the AWS Java SDK.

Download the AWS Java SDK from here
Add it to your Java classpath/buildpath.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.DeleteSnapshotRequest;
import com.amazonaws.services.ec2.model.DescribeImagesRequest;
import com.amazonaws.services.ec2.model.DescribeImagesResult;
import com.amazonaws.services.ec2.model.DescribeSnapshotsRequest;
import com.amazonaws.services.ec2.model.DescribeSnapshotsResult;
import com.amazonaws.services.ec2.model.Image;
import com.amazonaws.services.ec2.model.Snapshot;

public class FindSnapshots {

 public static void main(String[] args) throws IOException {

  BasicAWSCredentials basicAWSCredentials = new BasicAWSCredentials("xx", "yyy");
  AmazonEC2Client amazonEC2Client = new AmazonEC2Client(basicAWSCredentials);
  Region region = Region.getRegion(Regions.fromName("us-west-2"));
  amazonEC2Client.setEndpoint(region.getServiceEndpoint("ec2"));

  DescribeImagesRequest withOwners = new DescribeImagesRequest().withOwners("self");
  DescribeImagesResult images = amazonEC2Client.describeImages(withOwners);
  ArrayList<String> imageIdList = new ArrayList<String>();
  List<Image> amiList = images.getImages();
  for (Image image: amiList) {
   imageIdList.add(image.getImageId());
  }

  DescribeSnapshotsRequest withOwnerIds = new DescribeSnapshotsRequest().withOwnerIds("self");
  DescribeSnapshotsResult describeSnapshots = amazonEC2Client.describeSnapshots(withOwnerIds);
  List<Snapshot> snapshots = describeSnapshots.getSnapshots();

  // ensure snapshot size and ami size in your region.
  System.out.println(snapshots.size());
  System.out.println(amiList.size());

  int count = 0;
  int size = 0;

  // find orphans and delete.

  for (Snapshot snapshot: snapshots) {

   String description = snapshot.getDescription();

   // get AMI id of snapshot using regex from its description.
   Pattern pattern = Pattern.compile("for(.*?)from");
   Matcher matcher = pattern.matcher(description);
   while (matcher.find()) {

    String amiId = matcher.group(1).trim();
    // AMI IDs are currently 12 characters long.
    if (!imageIdList.contains(amiId) && amiId.length() <= 12) {
     String snapshotId = snapshot.getSnapshotId();
     DeleteSnapshotRequest r =
      new DeleteSnapshotRequest(snapshotId);
     amazonEC2Client.deleteSnapshot(r);
     System.out.println(amiId);

     size += snapshot.getVolumeSize();
     count++;
    }
   }
  }
  System.out.println("Orphan Snapshots Deleted : " + count);
  System.out.println("Orphan Snapshots Size : " + size);

 }

}

Tuesday, July 18, 2017

AWS - How to encrypt instance launched from a community AMI ?



When we launch an instance from a public community AMI like Ubuntu, CentOS, etc., the volume will be unencrypted. This is because Amazon EBS encryption uses AWS Key Management Service (AWS KMS) customer master keys (CMKs) when creating encrypted volumes and any snapshots created from them. The first time we create an encrypted volume in a region, a default CMK is created for us automatically. This key is used for Amazon EBS encryption unless we select a CMK that we created separately using AWS KMS. This makes sense, since every AWS customer launches from the same public AMI and we can't all share the same key.

However, post launch we can encrypt the volume and then put our data on it.



1. Post launch, locate the volume to be encrypted. You will see that this volume is unencrypted.



2. Create a snapshot of this volume.



3. Once the snapshot is created and available, locate it and copy the snapshot; while copying, there is an option to encrypt it.



4. Copy it, locate the new encrypted snapshot, and create an AMI using this snapshot.
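The copy-with-encryption step can also be done from awscli (a sketch; the snapshot ID is a placeholder, and the default EBS CMK is used unless you pass --kms-key-id):

$ aws ec2 copy-snapshot --source-region us-west-2 --source-snapshot-id snap-xxxxxxxx --encrypted --description "encrypted copy"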




Monday, July 17, 2017

Service/program Start at startup Ubuntu 16.04




Create a file in /etc/systemd/system and add the following lines.

sudo vi /etc/systemd/system/myscript.service
---------------------------------------------
[Unit]
Description=cifs start script

[Service]
ExecStart=/usr/bin/docker-volume-netshare cifs
Restart=always

[Install]
WantedBy=multi-user.target

---------------------------------------------
Then execute following commands.

sudo systemctl daemon-reload
sudo systemctl enable myscript.service
sudo systemctl start myscript.service
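You can then confirm the unit is active:

sudo systemctl status myscript.service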

RAID Array of EBS Volumes - Ubuntu



RAID is an acronym for Redundant Array of Independent (or Inexpensive) Disks. RAID is a way of combining several independent and relatively small disks into a single storage volume of a large size. The disks included in the array are called array members. The disks can be combined into the array in different ways, known as RAID levels. Each RAID level has its own characteristics of:

Fault tolerance, which is the ability to survive the failure of one or several disks.
Performance, which shows the change in read and write speed of the entire array compared to a single disk.
Capacity, which is determined by the amount of user data that can be written to the array. The array capacity depends on the RAID level and does not always match the sum of the sizes of the member disks. To calculate the capacity of a particular RAID type and set of member disks you can use a free online RAID calculator.

How is RAID organized?
Two independent aspects are clearly distinguished in the RAID organization.

1. The organization of data in the array (RAID storage techniques: striping, mirroring, parity, or a combination of them).
2. The implementation of each particular RAID installation - hardware or software.

RAID storage techniques
The main methods of storing data in the array are:

Striping - splitting the flow of data into blocks of a certain size (called the "block size") and then writing these blocks across the RAID one by one. This way of storing data affects performance.
Mirroring - a storage technique in which identical copies of data are stored on the RAID members simultaneously. This type of data placement affects fault tolerance as well as performance.
Parity - a storage technique that utilizes striping and checksum methods. A parity function is calculated for the data blocks; if a drive fails, the missing blocks are recalculated from the checksum, providing the RAID fault tolerance.
All the existing RAID types are based on striping, mirroring, parity, or a combination of these storage techniques.

RAID levels
RAID 0 - based on striping. This RAID level doesn't provide fault tolerance but increases system performance (high read and write speed).
RAID 1 - utilizes mirroring, increases read speed in some cases, and tolerates the loss of no more than one member disk.
RAID 0+1 - based on the combination of striping and mirroring. This RAID level inherits RAID 0 performance and RAID 1 fault tolerance.
RAID 1E - uses both striping and mirroring, and can survive the failure of one member disk or any number of nonadjacent disks. There are three subtypes of RAID 1E layout: near, interleaved, and far. More information and diagrams on the RAID 1E page.
RAID 5 - utilizes both striping and parity. Provides roughly the same read speed improvement as RAID 0 and survives the loss of one member disk.
RAID 5E - a variation of the RAID 5 layout whose only difference is an integrated spare space, allowing a failed array to be rebuilt immediately after a disk failure. Read more on the RAID 5E page.
RAID 5 with delayed parity - pretty similar to the basic RAID 5 layout, but uses a nonstandard scheme of striping. More information about RAID 5 with delayed parity.
RAID 6 - similar to RAID 5 but uses two different parity functions. The read speed is the same as in RAID 5.

How to make RAID Array in Linux with AWS EBS volumes.

sudo mdadm --create --verbose /dev/md0 --level=0 --name=MY_RAID --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
sudo mkfs.ext4 -L MY_RAID /dev/md0
sudo mkdir /var/lib/docker
sudo mount LABEL=MY_RAID /var/lib/docker
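Before moving on, it is worth confirming that the array assembled cleanly:

cat /proc/mdstat
sudo mdadm --detail /dev/md0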

To mount this on system startup, we have to add an entry in fstab. Edit the file:

sudo vi /etc/fstab

Add the following line and save.

LABEL=MY_RAID /var/lib/docker ext4 defaults 0 0

This means that the device/partition labeled MY_RAID will be mounted to /var/lib/docker using the ext4 file system, with default mount options and no dumping and no error checking enabled.

After we've added the new entry to /etc/fstab, we need to check that our entry works. Run the sudo mount -a command to mount all file systems in /etc/fstab.

sudo mount -a

If the previous command does not produce an error, then your /etc/fstab file is OK and your file system will mount automatically at the next boot. If the command does produce any errors, examine the errors and try to correct your /etc/fstab.


Wednesday, July 12, 2017

Static Code Analysis with Sonarqube

Sonar is a web-based code quality analysis tool for Maven-based Java projects. It covers a wide area of code quality checkpoints, including: architecture & design, complexity, duplication, coding rules, potential bugs, unit tests, etc.



Make sure Java is installed.
If not, install it using the following command.

sudo apt-get install default-jre

1. Download and unzip the SonarQube distribution (let's say in "/etc/sonarqube")
2. Start the SonarQube server:
/etc/sonarqube/bin/[OS]/sonar.sh console

3. Download and unzip the SonarQube Scanner (let's say in "/etc/sonar-scanner")
4. Analyze a project:
Go to the project's root directory and run the following command (or use a sonar-project.properties file, shown after this list).
/etc/sonar-scanner/bin/sonar-scanner -Dsonar.projectKey=DSP -Dsonar.sources=. -Dsonar.scm.disabled=true

5. Browse the results at http://localhost:9000 (default System administrator credentials are admin/admin)
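Instead of passing -D flags on every run, the same settings can live in a sonar-project.properties file at the project root (a minimal sketch; the key and source path simply mirror the command in step 4):

# sonar-project.properties
sonar.projectKey=DSP
sonar.sources=.
sonar.scm.disabled=true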


Monday, July 10, 2017

SVN to GIT MIGRATION with Branches and Tags



git svn clone -r1:HEAD -s svnurl svnfolder

git branch
git branch -a


git remote add origin giturl.git

git checkout -b branchname origin/branchname
git checkout -f -b branchname origin/branchname

git push origin master branchname

git checkout origin/tags/tagname
git tag -a tagname -m "creating tag tagname"
git push origin tagname
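If all the tags have been created locally, they can also be pushed in one go instead of one at a time:

git push origin --tags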


You can use the following Java code if there are too many branches and tags; it prints out the checkout/tag commands and the final push command.
Keep all branch names in the file "branches.txt" and all tag names in the file "taglist.txt".

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class SVNGitMigrator {

 public static void main(String[] args) throws IOException {

  try {

   StringBuilder a = new StringBuilder("git push origin master ");
   File f = new File("branches.txt");
   BufferedReader b = new BufferedReader(new FileReader(f));
   String readLine = "";
   while ((readLine = b.readLine()) != null) {
    System.out.println("git checkout -f -b " + readLine + " origin/" + readLine);
    a.append(readLine+" ");
   }
   b.close();
   
   f = new File("taglist.txt");
   b = new BufferedReader(new FileReader(f));
   readLine = "";
   while ((readLine = b.readLine()) != null) {
    System.out.println("git checkout origin/tags/" + readLine);
    System.out.println("git tag -a " + readLine + " -m \"creating dsp tag " + readLine+"\"");
    a.append(readLine+" ");
   }
   System.out.println(a);
   b.close();
   
  } catch (IOException e) {
   e.printStackTrace();
  }

 }

}

Thursday, June 29, 2017

LUKS ENCRYPTION

sudo apt-get install cryptsetup
sudo fallocate -l 64G /root/fordocker
sudo cryptsetup -y luksFormat /root/fordocker
sudo file /root/fordocker
sudo cryptsetup luksOpen /root/fordocker volume1
sudo mkfs.ext4 -j /dev/mapper/volume1
sudo mkdir /var/lib/docker
sudo mount /dev/mapper/volume1 /var/lib/docker
df -h
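When you are finished with the volume, it can be unmounted and the mapping closed again:

sudo umount /var/lib/docker
sudo cryptsetup luksClose volume1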


LUKS is an on-disk format for encrypted volumes. It puts metadata in front of the actual encrypted data. The metadata stores the encryption algorithm, key length, block chaining mode, etc. Therefore one does not need to memorize those parameters, which makes LUKS suitable for use on e.g. USB memory sticks. Additionally, LUKS uses a master key that is encrypted using the passphrase hash. That way it's possible to change the passphrase, and one can use multiple passphrases. cryptsetup is able to handle LUKS volumes.

LUKS is an encryption layer on a block device: it operates on a particular block device and exposes a new block device which is the decrypted version. Access to this device triggers transparent encryption/decryption while it's in use.



LUKS stores a bunch of metadata at the start of the device.

It has slots for multiple passphrases. Each slot has a 256-bit salt that is stored in the clear along with an encrypted message. When you enter a passphrase, LUKS combines it with each of the salts in turn, hashes the result, and tries to use that as the key to decrypt the encrypted message in each slot. The message consists of some known text and a copy of the master key. If decryption works for any one of the slots (because the known text matches), the master key is now known and you can decrypt the entire container. The master key must remain unencrypted in RAM while the container is in use.

Knowing the master key gives you access to all the data in the container, but doesn't reveal the passphrases in the slots, so one user cannot see the passphrases of other users. The system is not designed for users to be able to see the master key while in operation, and this key can't be changed without re-encrypting. The use of passphrase slots, however, means that passphrases can be changed without re-encrypting the entire container, and allows multiple passphrases to be used.
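You can inspect the LUKS header at any time with luksDump, which prints the metadata (cipher, hash, and which key slots are in use) without decrypting anything:

sudo cryptsetup luksDump /root/fordocker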


Monday, June 19, 2017

Installing Hygieia Dashboard on Ubuntu 16.04


Install Java

sudo apt-add-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

Install mongo db

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list
sudo apt-get update
sudo apt-get install -y mongodb-org

Create the following file and add these contents.

sudo vi /etc/systemd/system/mongodb.service

----------------------------
[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target

[Service]
User=mongodb
ExecStart=/usr/bin/mongod --quiet --config /etc/mongod.conf

[Install]
WantedBy=multi-user.target

----------------------------------

start mongodb

sudo systemctl start mongodb
sudo systemctl status mongodb
sudo systemctl enable mongodb

create db and user in mongo.

use dashboarddb

db.createUser( { user: "dashboarduser", pwd: "dbpassword", roles: [ {role: "readWrite", db: "dashboarddb"} ] } )
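To quickly verify that the new user can authenticate, you can run a simple check from the shell (db.stats() here is just an arbitrary read):

mongo dashboarddb -u dashboarduser -p dbpassword --eval "db.stats()"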


Install other required software.

sudo apt-get install nodejs-legacy
sudo apt-get install ruby
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
sudo apt-get install npm
sudo npm install -g bower
sudo npm install -g gulp
sudo apt-get install gdebi
wget http://ppa.launchpad.net/natecarlson/maven3/ubuntu/pool/main/m/maven3/maven3_3.2.1-0~ppa1_all.deb
sudo gdebi maven3_3.2.1-0~ppa1_all.deb
sudo ln -s /usr/share/maven3/bin/mvn /usr/bin/mvn
sudo apt-get install git

checkout hygieia code.

mkdir Hygieia
cd Hygieia
git clone https://github.com/capitalone/Hygieia.git .

Build code.

mvn clean install

-------------------------------

cd UI
UI$ gulp serve

UI starts on port 3000

Start API:

Create dashboard.properties in Hygieia/api folder.

Hygieia/api$ vi dashboard.properties

Add following content.

-----------------------------

# dashboard.properties
dbname=dashboarddb
dbusername=dashboarduser
dbpassword=dbpassword

-----------------------------

Now start API.

Hygieia/api$ java -jar target/api.jar --spring.config.location=dashboard.properties -Djasypt.encryptor.password=hygieiasecret

And then you can start the collectors you want.