Friday, October 27, 2017

TeamCity setup on Ubuntu using Docker images


TeamCity is a Java-based build management and continuous integration server from JetBrains. In this tutorial we will see a very basic example of setting up a TeamCity server and agent using Docker images.



sudo apt install docker.io
sudo usermod -aG docker $USER
logout

//login again using ssh

//pull server
docker pull jetbrains/teamcity-server
//pull agent
docker pull jetbrains/teamcity-agent
cd
mkdir -p ~/tcdata/server/data
mkdir -p ~/tcdata/server/logs
mkdir -p ~/tcdata/agent/conf

//Start Containers in the background.

docker run -itd --name teamcity-server-instance -v /home/ubuntu/tcdata/server/data:/data/teamcity_server/datadir -v /home/ubuntu/tcdata/server/logs:/opt/teamcity/logs -p 8111:8111 jetbrains/teamcity-server

//replace server-ip with your Docker host's IP address
docker run -itd --name teamcity-agent-instance -e SERVER_URL="http://server-ip:8111" -v /home/ubuntu/tcdata/agent/conf:/data/teamcity_agent/conf jetbrains/teamcity-agent
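The two docker run commands above can also be captured in a docker-compose file so both containers start (and restart) together. This is only a sketch, assuming docker-compose is installed and that server-ip is replaced with your host's address:

```yaml
# docker-compose.yml -- sketch equivalent of the two docker run commands above
version: "2"
services:
  teamcity-server:
    image: jetbrains/teamcity-server
    container_name: teamcity-server-instance
    ports:
      - "8111:8111"
    volumes:
      - /home/ubuntu/tcdata/server/data:/data/teamcity_server/datadir
      - /home/ubuntu/tcdata/server/logs:/opt/teamcity/logs
  teamcity-agent:
    image: jetbrains/teamcity-agent
    environment:
      - SERVER_URL=http://server-ip:8111
    volumes:
      - /home/ubuntu/tcdata/agent/conf:/data/teamcity_agent/conf
```

With this file in place, `docker-compose up -d` replaces both run commands.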

Access server here:

http://server-ip:8111

Select the local HSQLDB database, create a user, and log in.
Go to the Agents tab and authorize the agent.


Sunday, October 15, 2017

Hygieia authentication using LDAP

Please refer to the "Installing Hygieia Dashboard on Ubuntu 16.04" blog entry to set up Hygieia before you set up LDAP authentication.

LDAP stands for “Lightweight Directory Access Protocol”. It is a simplification of the X.500 Directory Access Protocol (DAP) used to access directory information. A directory is essentially a special-purpose database optimized to handle identity-related information. The LDAP standard also defines a data model based on the X.500 data model. It is a hierarchical data model, with objects arranged in a hierarchical structure, and each object containing a collection of attributes. The overall structure of any particular directory is defined by its schema, much like a database schema defines the tables and columns.

LDAP is optimized for data that is read frequently but updated rarely. One of the main applications of LDAP is authentication, because user authentication data is rarely updated but is read each time the user logs in. An authentication request could originate from a Linux/Windows client machine or from an application like Jenkins, and it authenticates against a remote LDAP server where the authentication data is stored.

LDAP defines a “Bind” operation that authenticates the LDAP connection and establishes a security context for subsequent operations on that connection. There are two authentication methods defined in RFC 4513, simple and SASL. With the simple authentication method, the LDAP client sends the username (as an LDAP distinguished name) and password (in clear text) to the LDAP server. The LDAP server looks up the object with that username in the directory, compares the password provided to the password(s) stored with the object, and authenticates the connection if they match. Because the password is provided in clear text, LDAP simple Binds should only be done over a secure TLS connection.

LDAP with Hygieia

You can set up your own LDAP server, but that is time-consuming. For testing purposes, there is an online LDAP test server available, which we will use in this tutorial.

1. First, install Apache Directory Studio and verify that the online LDAP server is reachable.

Create a new LDAP connection:

Host: ldap.forumsys.com
Port: 389
Bind DN: uid=euclid,dc=example,dc=com
Password: password
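If you prefer the command line over Apache Directory Studio, the same connection can be exercised with OpenLDAP's ldapsearch. This is a sketch assuming the ldap-utils package (apt install ldap-utils) is available:

```shell
# Bind to the public test server with the same credentials and
# look up the euclid entry under the example.com base DN.
host="ldap.forumsys.com"
bind_dn="uid=euclid,dc=example,dc=com"
base_dn="dc=example,dc=com"
ldapsearch -x -H "ldap://${host}:389" \
  -D "$bind_dn" -w password \
  -b "$base_dn" "(uid=euclid)"
```

A successful run prints the euclid entry's attributes; a failed bind returns "Invalid credentials (49)".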








2. Once you are able to successfully connect to the test LDAP server, update dashboard.properties in the api folder and restart the API.

$ cd Hygieia/api
~/Hygieia/api$ vi dashboard.properties

----------------------------------------------------------------------------------------

# dashboard.properties
dbname=dashboarddb
dbusername=dashboarduser
dbpassword=dbpassword
auth.authenticationProviders=LDAP,STANDARD
auth.ldapServerUrl=ldap://ldap.forumsys.com:389/dc=example,dc=com
auth.ldapUserDnPattern=uid={0}

----------------------------------------------------------------------------------------
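How these two properties combine: the login name is substituted into auth.ldapUserDnPattern, and the base DN from auth.ldapServerUrl is appended, forming the full DN used for the bind. A small shell sketch of that substitution (euclid is the test user from the LDAP server above):

```shell
# Substitute the login name into the DN pattern and append the base DN,
# mirroring auth.ldapUserDnPattern and auth.ldapServerUrl above.
pattern="uid={0}"
base_dn="dc=example,dc=com"
login="euclid"
bind_dn="$(printf '%s' "$pattern" | sed "s/{0}/$login/"),${base_dn}"
echo "$bind_dn"   # uid=euclid,dc=example,dc=com
```

The server then performs a simple bind with this DN and the supplied password, as described earlier.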

~/Hygieia/api$ java -Djasypt.encryptor.password=hygieiasecret -jar target/api.jar --spring.config.location=dashboard.properties

Here is how your Hygieia login screen looks now:



You can use the LDAP entry euclid/password to log into Hygieia.

You can create a test dashboard in Hygieia with this LDAP user and check the Mongo entry for the dashboard. You can see that a flag is added to identify the user as an LDAP user.
$ mongo
> use dashboarddb
> db.getCollection('dashboards').find({})
{
  "_id" : ObjectId("59e324cf178d2f23ccac05b0"),
  "_class" : "com.capitalone.dashboard.model.Dashboard",
  "template" : "splitview",
  "title" : "TEstApp",
  "widgets" : [ ],
  "owners" : [ { "username" : "euclid", "authType" : "LDAP" } ],
  "type" : "Team",
  "application" : {
    "name" : "TEstApp",
    "components" : [ DBRef("components", ObjectId("59e324cf178d2f23ccac05af")) ]
  },
  "validServiceName" : false,
  "validAppName" : false,
  "remoteCreated" : false
}

Monday, October 2, 2017

VPC Endpoint to Access S3

Create an S3 Access IAM Role.



IAM roles are a secure way to grant permissions to entities that you trust. For example, application code running on an EC2 instance that needs to perform actions on AWS resources like S3 might use an IAM role to do that.






1. Go to IAM -> Roles -> Create New Role



2. Select "EC2" and in "Permissions" select AmazonS3FullAccess.



3. Give a role name and description, and create the role.

This role allows us to access S3 from an EC2 instance.

Now create a t2.micro Ubuntu EC2 instance in a private subnet, using an AMI that already has awscli (the AWS command line tools) installed, and attach the IAM role we created.

The subnet should be completely private: it should not even have a route to the internet through a NAT instance.



Now connect to the machine using SSH and your key. Since the machine already has awscli installed, try accessing S3 like below.

$ aws s3 ls

This will not work; it fails with a timeout.

Why does it fail even though the EC2 instance has an S3 access role assigned?
Because the instance is in a private subnet with no access to the internet, while S3 does not reside inside any VPC and its endpoints are public in nature.
To access S3, the request has to go over the internet.

But how do I access S3 from a completely private machine then?
For that purpose, AWS provides S3 VPC endpoints, which connect a VPC to S3 without traversing the internet.



The request failed because the route table associated with our private subnet currently has no route to S3 through a VPC endpoint.

Let's add a VPC Endpoint.



Select your VPC and the S3 service, and continue.



Select the route table which is associated with your private subnet.



A rule with destination pl-id (com.amazonaws.us-west-2.s3) and this endpoint's ID (e.g. vpce-12345678) as the target will be added to the route tables you selected.
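The console steps above can also be done with awscli from a machine that has EC2 permissions. This is only a sketch; the vpc-12345678 and rtb-12345678 IDs are placeholders you must replace with your own VPC and route table IDs:

```shell
# Create a gateway VPC endpoint for S3 and attach it to the
# private subnet's route table. The IDs below are placeholders.
region="us-west-2"
service_name="com.amazonaws.${region}.s3"
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-12345678 \
  --service-name "$service_name" \
  --route-table-ids rtb-12345678
```

On success, the command returns the new endpoint's description, including the vpce- ID that appears in the route table rule.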

Now that we have a VPC endpoint, try to access S3 from the private EC2 instance again.

$ aws s3 ls

This will also fail with a timeout, because awscli by default sends requests to the global S3 URL (s3.amazonaws.com).

Set your region in an environment variable:

$ export AWS_DEFAULT_REGION=us-west-2
$ aws s3 ls

This should list your buckets in the us-west-2 region (the VPC router routes your request to s3.us-west-2.amazonaws.com).
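The export step above works because the CLI derives the regional endpoint from the configured region; without one, it falls back to the global endpoint, which the VPC endpoint route does not cover. A sketch of the equivalent ways to supply the region:

```shell
# The regional endpoint the CLI targets once a region is configured:
region="us-west-2"
endpoint="s3.${region}.amazonaws.com"

# Either export the region for the whole session:
export AWS_DEFAULT_REGION="$region"
aws s3 ls

# ...or pass it per command:
aws s3 ls --region "$region"
```

Either form keeps the request on the regional endpoint, so it matches the pl-id route added by the VPC endpoint.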

You have now successfully accessed S3, without internet access, from an EC2 instance residing in the VPC's private subnet.