Wednesday, February 14, 2018

Container Orchestration using OpenShift.

Log in or sign up for OpenShift Online (free): https://www.openshift.com/



Download the command-line OpenShift client (oc) from the OpenShift Origin releases page on GitHub.

Extract the archive and go to the directory.
Add the current directory to your PATH:
export PATH=$(pwd):$PATH
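
A quick sanity check that the client is now picked up from your PATH (a sketch; the version output will vary with the release you downloaded):

oc version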

narayan@ubuntu:~/Downloads/openshift-origin-client-tools-v3.9.0-alpha.3-78ddc10-linux-64bit$ oc login
Authentication required for https://api.starter-us-west-1.openshift.com:443 (openshift)
Username: #your username#
Password: #your password#
Login successful.

You can create a new project using the GUI or the command line; for this tutorial, let us create one with the command-line tool.
Use a unique name for the project.





narayan@ubuntu:~/Downloads/openshift-origin-client-tools-v3.9.0-alpha.3-78ddc10-linux-64bit$ oc new-project test-templates

Now using project "test-templates" on server "https://api.starter-us-west-1.openshift.com:443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
to build a new example application in Ruby.

You can see in the GUI that the project has been created.



Go inside the project; you will see that there are no applications deployed yet.



Now let us deploy a simple hello-world-web Docker image on the cluster.

narayan@ubuntu:~/Downloads/openshift-origin-client-tools-v3.9.0-alpha.3-78ddc10-linux-64bit$ oc new-app narayan1ap/hello-world-web

--> Found Docker image 5c64ec0 (39 minutes old) from Docker Hub for "narayan1ap/hello-world-web"
* An image stream will be created as "hello-world-web:latest" that will track this image
* This image will be deployed in deployment config "hello-world-web"
* Port 5000/tcp will be load balanced by service "hello-world-web"
* Other containers can access this service through the hostname "hello-world-web"

--> Creating resources ...
imagestream "hello-world-web" created
deploymentconfig "hello-world-web" created
service "hello-world-web" created
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/hello-world-web'
Run 'oc status' to view your app.
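
You can watch the deployment roll out from the command line (a sketch; pod names will differ in your cluster):

oc status
oc get pods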

Note that the Docker image should not run as the root user; otherwise you will get a warning message like the one below.

[WARNING] Image "example" runs as the 'root' user which may not be permitted by your cluster administrator.

By default, containers launched in OpenShift run under the restricted security context constraint, which does not allow "RunAsAny"; in other words, they may not run as the root user inside the container. This blocks root actions such as chown or chmod and is a sensible security precaution: should a user manage a local exploit to break out of the container, they would not be running as root on the underlying container host.
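
To check ahead of time which user an image is configured to run as, you can inspect its metadata (a sketch using the image from this tutorial; an empty result means the image defaults to root):

docker inspect --format '{{.Config.User}}' narayan1ap/hello-world-web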

Refer to this article: How to enable a Docker image to run as a non-root user.

After a successful deploy, the GUI looks like the one below. You can see that there is no route associated with the deployment.



Let's associate a route.

narayan@ubuntu:~/Downloads/openshift-origin-client-tools-v3.9.0-alpha.3-78ddc10-linux-64bit$ oc expose svc/hello-world-web
route "hello-world-web" exposed



Now you can click on the route and access the app.


Monday, February 12, 2018

Tectonic on AWS with Terraform - PART II

Deploying a simple application on AWS Tectonic Cluster

We created a Tectonic cluster and accessed it with the kubectl command in Part I.
Now we will deploy a simple application on this cluster.

Deployments: run multiple copies of a container across multiple nodes.
Services: an endpoint that load-balances traffic to the containers run by a deployment.

Copy the following YAML into a file named simple-deployment.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: simple-deployment
  namespace: default
  labels:
    k8s-app: simple
spec:
  replicas: 3
  template:
    metadata:
      labels:
        k8s-app: simple
    spec:
      containers:
        - name: nginx
          image: quay.io/coreos/example-app:v1.0
          ports:
            - name: http
              containerPort: 80

The parameter replicas: 3 will create 3 running copies.
The image quay.io/coreos/example-app:v1.0 defines the container image to run, hosted on Quay.io.

Then, copy the following YAML into a file named simple-service.yaml.

kind: Service
apiVersion: v1
metadata:
  name: simple-service
  namespace: default
spec:
  selector:
    k8s-app: simple
  ports:
    - protocol: TCP
      port: 80
  type: LoadBalancer
$ kubectl create -f simple-deployment.yaml
$ kubectl get deployments
$ kubectl create -f simple-service.yaml
$ kubectl get services -o wide

Get the EXTERNAL-IP URL and access the application. The application will be up in a few minutes.
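
If you prefer the command line to the console, you can read the load balancer address straight from the service status (a sketch; on AWS the external address shows up as an ELB hostname rather than an IP):

kubectl get service simple-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
curl http://$(kubectl get service simple-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')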








Tectonic (Enterprise Kubernetes) on AWS with Terraform - PART I

Create a CoreOS account here: https://account.coreos.com/login
You can use your Gmail to sign in and get a free license for 10 nodes.



Create a t2.small EC2 Ubuntu 64-bit machine and log in.

$sudo apt-get update

$sudo apt install gnupg2
$sudo apt install unzip
$sudo apt install awscli




$curl -O https://releases.tectonic.com/releases/tectonic_1.8.4-tectonic.3.zip
$curl -O https://releases.tectonic.com/releases/tectonic_1.8.4-tectonic.3.zip.sig
$gpg2 --keyserver pgp.mit.edu --recv-key 18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E
$gpg2 --verify tectonic_1.8.4-tectonic.3.zip.sig tectonic_1.8.4-tectonic.3.zip

$unzip tectonic_1.8.4-tectonic.3.zip
$cd tectonic_1.8.4-tectonic.3

$export PATH=$(pwd)/tectonic-installer/linux:$PATH
$terraform init platforms/aws

$export CLUSTER=my-cluster
$mkdir -p build/${CLUSTER}
$cp examples/terraform.tfvars.aws build/${CLUSTER}/terraform.tfvars



$vi build/${CLUSTER}/terraform.tfvars

Make sure you set these properties:

tectonic_aws_region = "ap-south-1"
tectonic_base_domain = "yourdomain.com" // your base domain from Route 53
tectonic_license_path = "/home/ubuntu/license.txt"
tectonic_pull_secret_path = "/home/ubuntu/pullsecret.json"
tectonic_cluster_name = "test" // your cluster name



Note: The pull secret and license files are available in your CoreOS account.

Save your changes with :wq.

$aws configure

AWS Access Key ID: enter your access key ID here
AWS Secret Access Key: enter your secret access key here
Default region name: ap-south-1
Default output format: leave empty
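
With credentials in place, you can confirm that the base domain you set in terraform.tfvars already exists as a Route 53 hosted zone, which the installer assumes (a sketch):

$aws route53 list-hosted-zones --query 'HostedZones[].Name'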

$export TF_VAR_tectonic_admin_email="your google email used for CoreOS"
$export TF_VAR_tectonic_admin_password="your password"

$ terraform plan -var-file=build/${CLUSTER}/terraform.tfvars platforms/aws
$ terraform apply -var-file=build/${CLUSTER}/terraform.tfvars platforms/aws

After a few minutes (5 to 10), the cluster will be up and you can access it here:
https://test.yourdomain.com

The username and password are the same as for your CoreOS account.

Accessing the cluster with the kubectl command line:



Now download the kubectl-config and kubectl files from your cluster.

$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/kubectl
$ mkdir -p ~/.kube/ # create the directory
$ cp path/to/file/kubectl-config-test $HOME/.kube/config # rename the file and copy it into the directory
$ export KUBECONFIG=$HOME/.kube/config
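
A quick check that the kubeconfig was picked up (a sketch):

$ kubectl config current-context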

List the nodes and verify that they show up.

$ kubectl get nodes

In the next entry, we will see how to deploy a simple application with the kubectl command line.


Monday, February 5, 2018

Static Code Analysis with the SonarQube Docker image.

sudo apt-get update
sudo apt-get install default-jdk
sudo apt install docker.io
sudo usermod -aG docker $USER
logout

// login again

docker pull sonarqube
docker run -d --name sonarqube -p 9000:9000 -p 9092:9092 sonarqube
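
The server takes a minute or two to start; you can follow the container log until it reports that it is up (a sketch):

docker logs -f sonarqube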

The SonarQube server is now running on this machine. You can log in at http://ip_address:9000 with admin/admin.

Analyse a project.

sudo apt-get install maven
cd your_project_pom_directory
mvn sonar:sonar -Dsonar.host.url=http://ip_address:9000 -Dsonar.login=your_sonar_token
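
The token can be generated in the SonarQube UI under My Account > Security, or via the web API (a sketch, assuming the default admin/admin credentials; save the returned token, as it is shown only once):

curl -u admin:admin -X POST "http://ip_address:9000/api/user_tokens/generate?name=ci-token"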




Increasing the SSH timeout

sudo vi /etc/ssh/sshd_config

Make sure it has the following two properties at the end:

ClientAliveInterval 120
ClientAliveCountMax 720
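
Before restarting, it is worth validating the config syntax, since sshd will refuse to start on a broken file (a sketch):

sudo sshd -t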

Restart the SSH daemon.

sudo service ssh restart

The first one configures the server to send null packets to clients every 120 seconds and the second one configures the server to close the connection if the client has been inactive for 720 intervals that are 720*120 = 86400 seconds = 24 hours