Background

And while doing so you may need to set the values of some variables in your Spring application based on the environment, or abstract them out to a properties file. For example, the base URL of your application, a username/password, and other database connection details. Basically, properties that may vary across environments.

The @Value annotation is used for exactly this purpose: to set the value of a variable from a properties file or from environment variables. We will see the usage of this annotation next.

Setup

Before you start using the @Value annotation you need to set up the properties file from which your configured values can be read. To set up the properties file you can use the @PropertySource annotation in your configuration class. Example -
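A minimal sketch of such a setup is shown below. The file name application.properties and the property keys (app.baseUrl, app.username) are illustrative assumptions, not part of the original post:

```java
// Two files are sketched here, separated by comments.

// ---- AppConfig.java ----
// Registers the properties file with the Spring environment.
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

@Configuration
@PropertySource("classpath:application.properties")
public class AppConfig {
}

// ---- Properties.java ----
// Model class whose fields are injected from the properties file.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class Properties {

    // Assumed entries in application.properties:
    //   app.baseUrl=https://example.com
    //   app.username=admin
    @Value("${app.baseUrl}")
    private String baseUrl;

    @Value("${app.username}")
    private String username;

    public String getBaseUrl() { return baseUrl; }

    public String getUsername() { return username; }
}
```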

The above usage will automatically inject values from your properties file into your Java model class. Now you are free to inject the Properties class anywhere in your Spring project and access the variables.

Tuesday, 5 December 2017

Background

Atlassian has multiple products: Jira, Confluence, HipChat, Crucible, etc. Jira is one of the most widely used issue tracking systems. In this post we are going to see how to create a plugin for Jira cloud. This will include creating a small demo app locally and deploying it.

NOTE : I will be using words addon and plugin interchangeably. Both mean the same thing.

There are two different ways to create Atlassian addon -

Atlassian Connect

Plugins 2 framework

Plugins built with Atlassian Connect are meant to run on a Jira cloud instance, whereas plugins developed with the Plugins 2 framework are supposed to run on Jira Server.

Jira cloud is the cloud version of Jira, in which all you need to do is create an account and get started with Jira products, whereas Jira Server is the on-premise counterpart where you run your own Jira server with licenses. As you must have guessed, running an on-prem version gives you more flexibility in creating and developing add-ons, whereas there are a lot of constraints when developing a plugin for a cloud Jira instance, since the developer does not have control over the Jira system and everything happens remotely.

In this post we are going to see how to develop a simple app using Atlassian Connect and deploy it to a Jira cloud instance. All apps developed this way run remotely on your own hosted server; Jira cloud makes it possible to integrate your hosted app with Jira. To an end user it will look like the plugin is running in Jira itself. That's the power of the Atlassian Connect framework. We will see this in detail in a moment.

Next go to Jira. You should already be an administrator, so you can do things like create projects, add users, etc.

Now go to settings (cog icon at the top) > Add-ons > Manage add-ons

Next, on the Manage add-ons page, select Settings.

Here enable Development mode

Your cloud Jira instance is now all set up for plugin deployment. We will come back to this later. Let's go ahead and look at plugin development.

Step 2. Setting up your local development environment

Now we are going to set up the local environment needed to develop our Jira cloud add-on.

We will need 2 npm modules installed. This obviously requires Node.js and npm on your machine. If they are not installed, please do that first.

http-server

ngrok

As I mentioned before, Jira cloud apps based on Atlassian Connect are hosted remotely on your own servers, and Jira cloud just integrates them with the cloud instance. So we need http-server to host our Jira plugin on a server, and we need ngrok to make our local traffic accessible from the internet, where the actual Jira cloud instance is running (https://athakur.atlassian.net in this case). ngrok helps tunnel local ports to public URLs and inspect traffic. You can run the following commands to set up the above modules -

sudo npm install -g http-server
sudo npm install -g ngrok
ngrok help

This should suffice for our local setup for now. We will come back to this when we develop our app and need to deploy it.

Step 3. Building your app

The most basic file needed is named atlassian-connect.json. It is called the plugin descriptor file. It basically tells the Jira cloud instance what your plugin is, where it resides, etc. It needs to be supplied to the cloud instance while configuring your Jira add-on there, which is why this file must be available over the internet. Hence the http-server and ngrok.

For now, create a folder for your app. Let's call it helloworld-jira. Navigate to this folder and create a file called atlassian-connect.json with the following content -
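A minimal descriptor might look like the following sketch. The key, name, description, vendor, and location values here are illustrative assumptions; only baseUrl (left as a placeholder), the relative page path, and the overall structure are dictated by the steps described below:

```json
{
  "key": "helloworld-jira",
  "name": "Hello World Jira",
  "description": "A hello world add-on for Jira cloud",
  "baseUrl": "<placeholder - ngrok URL goes here>",
  "vendor": {
    "name": "Example Vendor",
    "url": "https://example.com"
  },
  "authentication": {
    "type": "none"
  },
  "modules": {
    "generalPages": [
      {
        "key": "hello-world-page",
        "location": "system.top.navigation.bar",
        "name": {
          "value": "Welcome"
        },
        "url": "/helloworld.html"
      }
    ]
  }
}
```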

baseUrl is the URL where your app is hosted. We will supply our ngrok URL here, so leave it as a placeholder for now.

The other settings are really just descriptive information about your plugin and your company.

Next we have the generalPages section, which defines which pages are part of your plugin. Here we are defining just one page. We also give its relative path (relative to the base URL), a location, and a unique key.
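The page itself (helloworld.html, created in the same folder) can be a simple static file along these lines. The exact markup is an illustrative sketch; only the AUI styling, the "Hello World from Jira!" content, and the all.js script are taken from the explanation below:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Include the AUI stylesheet here so the page looks like a
       standard Jira page; see https://docs.atlassian.com/aui/ -->
</head>
<body>
  <section id="content" class="ac-content">
    <h1>Hello World from Jira!</h1>
  </section>
  <!-- Atlassian Connect JavaScript API -->
  <script src="https://<yourhostname.atlassian.net>/atlassian-connect/all.js"></script>
</body>
</html>
```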

You need to understand a couple of things from the above HTML page before we proceed -

AUI is Atlassian user interface. It gives you css to make your plugin look like standard Jira page. For more details refer - https://docs.atlassian.com/aui/

Next is just the HTML content showing "Hello World from Jira!". We should be able to see this when we deploy our app in the Cloud Jira instance.

The next and last section just adds a script to the DOM. This script is the Atlassian Connect JavaScript API. It simplifies client interactions with the Atlassian application, e.g. making an XMLHttpRequest. This file can be found at the URL - https://<yourhostname.atlassian.net>/atlassian-connect/all.js

Once you have saved this file your app is ready. Let's see how we can deploy this.

Step 4. Deploy your app

The first step is to host your app on a server. So go to the helloworld-jira directory where our app resides and execute the following command -

http-server -p 8000

This should host your app on localhost on port 8000.

You can make sure your URLs are accessible -

http://localhost:8000/atlassian-connect.json

http://localhost:8000/helloworld.html

Next you need to make this accessible from the internet, and for this we will use the ngrok we already set up. Just run the following command -

ngrok http 8000

This will tunnel our local traffic to the internet. You should be able to see the public URLs in the ngrok output.

We are interested in the https version of this URL -

You can again test your URLs with this to check that your files are available. In my case they are -

https://8d543c3d.ngrok.io/atlassian-connect.json

https://8d543c3d.ngrok.io/helloworld.html

Once this is done you are pretty much all set up. Your app is built and is accessible from the internet. The last thing to do is update this URL in the baseUrl field of the descriptor file, which we left as a placeholder. So your baseUrl is as follows -

"baseUrl": "https://8d543c3d.ngrok.io/"

Now simply go to Manage add-ons in the Jira cloud instance we created in Step 1 and click Upload add-on. Provide the URL to the atlassian-connect.json. In my case it is -

https://8d543c3d.ngrok.io/atlassian-connect.json

and your addon should get installed.

Now you can easily test your add-on. Just reload the page and you should see Welcome in the header section. Click on it and you should see our content - "Hello World from Jira!"

Production Deployment

This was local deployment and testing. For production you need a proper web server to host your app. You can use a service like Heroku, or AWS services like S3, EC2, or Elastic Beanstalk.

This will basically allow the Account A IAM user to call AssumeRole on any role of any other account.

Cross account role setup

Before we start with the code, let's configure a cross account role in Account B.

Go to the IAM console of Account B and create a role as follows -

Select a cross account role -

Next provide the Account ID of Account A in the input. Also select the external ID requirement. The external ID provides added security. (In abstract terms, the external ID allows the user that is assuming the role to assert the circumstances in which they are operating. It also provides a way for the account owner to permit the role to be assumed only under specific circumstances. The primary function of the external ID is to address and prevent the "confused deputy" problem - more details)

Note the external ID we have used here. We are going to use it later. In this case we are using the string - SECRET

Do not select any policies for now. We will come to that later. Just review, name your role, and create it.

Now once you have finished creating this role, go to the role, select Add inline policy, and add the below policy -
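Based on the access described later in this post (GET/PUT on the aniket.help bucket only), the inline policy would look something like this sketch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::aniket.help/*"
    }
  ]
}
```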

Again run it as we did in the last post. The output should be -

Read File from S3 bucket. Content : This is from cross account!
validated Download : true
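For reference, the assume-role part of that code can be sketched as follows using the AWS SDK for Java v1. The role ARN, session name, and region are placeholders; the external ID matches the one configured above:

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest;
import com.amazonaws.services.securitytoken.model.AssumeRoleResult;

public class CrossAccountAccess {

    public static BasicSessionCredentials getCrossAccountCredentials(
            String awsAccessKeyId, String awsSecretKey) {
        // STS client built with the Account A IAM user's credentials
        AWSSecurityTokenService stsClient = AWSSecurityTokenServiceClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(awsAccessKeyId, awsSecretKey)))
                .withRegion("us-east-1") // placeholder region
                .build();

        // Assume the cross account role in Account B, passing the external ID
        AssumeRoleRequest request = new AssumeRoleRequest()
                .withRoleArn("arn:aws:iam::ACCOUNT_B_ID:role/cross-account-role") // placeholder
                .withRoleSessionName("cross-account-session")
                .withExternalId("SECRET");
        AssumeRoleResult result = stsClient.assumeRole(request);

        // Temporary credentials corresponding to the assumed role
        return new BasicSessionCredentials(
                result.getCredentials().getAccessKeyId(),
                result.getCredentials().getSecretAccessKey(),
                result.getCredentials().getSessionToken());
    }
}
```

These temporary credentials can then be passed to the S3 client in place of the IAM user's own credentials.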

Understanding the Workflow

Let's try to understand the workflow here

We have the credentials of the IAM user of Account A.

We use these credentials to make an AssumeRole call against the cross account role created in Account B, which gives Account A access.

We also use the external ID to validate that the Account A user is authorized to make this call.

When the AssumeRole call is made, the first thing checked is whether this user has permission to make the call. Since we added this to the inline policy of the IAM user of Account A, it goes through.

The next check is whether the AssumeRole itself is successful. This checks that the user calling AssumeRole belongs to the account configured in the cross account role of Account B, and that the same external ID is used.

Once these checks are cleared, the user from Account A gets temporary credentials corresponding to the role.

Using these we can make calls to S3 Upload/Download.

Now when these calls are made, it is checked whether the role has access to GET/PUT on S3. If not, access is denied. Since we explicitly added these policies for our cross account role, this step also succeeds.

And finally we have access to S3 GET/PUT.

But note that due to our role policy, anyone assuming this role will have access to GET/PUT of the aniket.help bucket only - no other AWS service and no other S3 bucket. This is why roles and policies are so important.

The same goes for the IAM user policy of the user in Account A. It can only make the STS AssumeRole call and access S3. Nothing else.

NOTE : The good thing about this approach is that Account B can give the role access to KMS as well, so you can have KMS based encryption too (which was not possible with the previous approach).

To summarize in a diagram, it can be shown as follows -

Again, this is just a simplistic overview. All the things that happen in the background are listed in the workflow section above.

Thursday, 30 November 2017

Background

AWS is the most widely used cloud platform today. It is easy to use, cost effective, and takes no time to set up. I can go on and on about its benefits over your own data center, but that's not the goal of this post. In this post I am going to show how you can access cross account services in AWS.

More specifically, I will demo accessing a cross account S3 bucket. I will show 2 approaches to do so. The 1st one is very specific to cross account bucket access, and the 2nd is generic and can be used to access any service.

IAM User Setup

Let's start by creating an IAM user in Account A (the account you own). Create a user with complete access to the S3 service. You can attach the S3 full access policy directly. The other way to do it is to attach an inline policy as follows -
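A sketch of such an inline policy is shown below. The resource is left as a wildcard, since the Account B bucket name may not be known beforehand:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```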

NOTE : I have purposefully not provided a bucket name here, since it is cross account bucket access and we may not know the bucket name of Account B beforehand.

Also enable programmatic access for this IAM user. We will need the access key ID and secret key to use in our API calls. Save these details somewhere, as you will not be able to retrieve them again from the Amazon console - you would have to regenerate them.

Also note down the arn of this IAM user. For me it is -

arn:aws:iam::499222264523:user/athakur

We will need these later in our setups.

Project Setup

You need to create a new Java project to test these changes. I am using a Maven project for dependency management; you can choose whatever you wish. You need a dependency on the AWS Java SDK.
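For a Maven project, the dependency looks something like this (the version shown is from around the time of writing and is only illustrative - check for the latest release):

```xml
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.238</version>
</dependency>
```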

NOTE : Language should not be a barrier here. You can use any language you want - Python, Node.js, etc. For this post I am going to use Java, but other languages have similar APIs.

Approach 1 (Using Bucket policies)

The 1st approach to cross account access for S3 buckets is to use S3 bucket policies. To begin with, you need an IAM user in your own account (let's call it Account A). And then there is Account B, to whose S3 bucket you need read/write access.

Now let's say the bucket name of the S3 bucket in the cross account is aniket.help. Go ahead and configure the bucket policy for this bucket as follows -
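A bucket policy along these lines matches the description below. The Sid is an illustrative assumption; the principal ARN is the Account A IAM user noted earlier:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CrossAccountAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::499222264523:user/athakur"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::aniket.help/*"
    }
  ]
}
```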

The above bucket policy basically provides cross account access to our IAM user from Account A (notice the ARN is the same as that of the IAM user we created in Account A). Also note we are only giving permission for S3 GET, PUT, and DELETE, and only on a very specific bucket named aniket.help.

NOTE : Bucket names are global, and so is the S3 service, even though your bucket may reside in a particular AWS region. So do not try to use the same bucket name as above - use any other name you want.

Now you can run the following Java code to upload a file to S3 bucket of Account B.
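The code is along these lines (a sketch using the AWS SDK for Java v1; the key name test.txt and the uploaded content are illustrative placeholders):

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class CrossAccountS3Demo {

    private static final String BUCKET_NAME = "aniket.help";   // placeholder
    private static final String BUCKET_REGION = "us-east-1";   // placeholder
    private static final String awsAccessKeyId = "YOUR_ACCESS_KEY_ID"; // Account A IAM user
    private static final String awsSecretKey = "YOUR_SECRET_KEY";      // Account A IAM user

    public static void main(String[] args) {
        // S3 client built with the Account A IAM user's credentials
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(awsAccessKeyId, awsSecretKey)))
                .withRegion(BUCKET_REGION)
                .build();

        // Upload a file to the cross account bucket
        s3Client.putObject(BUCKET_NAME, "test.txt", "This is from cross account!");

        // Read it back to validate the upload
        String content = s3Client.getObjectAsString(BUCKET_NAME, "test.txt");
        System.out.println("Read File from S3 bucket. Content : " + content);
        System.out.println("validated Download : "
                + "This is from cross account!".equals(content));
    }
}
```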

NOTE : Replace BUCKET_NAME and BUCKET_REGION with the actual bucket name and region you created in Account B. Also replace awsAccessKeyId and awsSecretKey with the actual IAM credentials we created in Account A.

and the output is as follows -

Read File from S3 bucket. Content : This is from cross account!
validated Download : true

Drawback : The drawback of using a bucket policy is that Account B cannot use KMS encryption on their bucket, since the IAM user of Account A does not have access to the KMS key of Account B. They can still use AES encryption. (These are encryption at rest - S3 takes care of encrypting files before saving them to disk and decrypting them before sending them back.) This can be resolved by taking approach 2 (assume role).

NOTE : Security is the most important aspect in the cloud, since potentially anyone can access it. It is the responsibility of the individual setting these up to ensure they are deployed securely. Never give out your IAM credentials or check them into any repository. Make your access roles and policies as granular as you can. In the above case, if you need just GET and PUT, provide exactly that in the IAM policy - do not use wildcards there.

Stay tuned for PART 2 of this post, in which we will see how to do an AssumeRole to access any service in Account B (securely, of course). We need not use a bucket policy in that case.

Sunday, 26 November 2017

Background

In this post we will see how to configure a Lambda function to connect to an RDS instance and run queries on it. RDS is AWS's Relational Database Service. It offers multiple databases -

mysql

aurora

postgres

oracle etc

For this particular post we are going to use the postgres DB. This post is about the Lambda function, so it assumes you have a postgres DB running in RDS and have its endpoint, username, and password handy.

https://www.pgadmin.org/ : If you want a GUI based client to test postgres locally, try pgAdmin.

Explanation

Here we are using the postgres library called pg. You can install this module using -

npm install pg

In the first part we create a pool of connections, giving the required parameters to connect to the postgres DB. Notice how we read these parameters from environment variables.

Next we call connect on it and pass a callback to get a connection when successful.

In the callback we can execute client.query() and pass a callback to get the rows of data we need from the employee table.

Finally we iterate over each record using async and print the employee name.

Release the client when you are done with that particular connection.

You can end the pool when all the DB operations are done.
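The steps above can be sketched as follows. This is only a sketch: the employee table comes from the description above, while the environment variable names and the name column are assumptions, and the async-module iteration is replaced here with a plain forEach:

```javascript
// Lambda handler sketch using the pg module (npm install pg)
const { Pool } = require('pg');

// Pool configured from Lambda environment variables (names are assumptions)
const pool = new Pool({
  host: process.env.DB_HOST,       // RDS endpoint
  port: process.env.DB_PORT,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD
});

exports.handler = (event, context, callback) => {
  // Get a client from the pool
  pool.connect((err, client, release) => {
    if (err) {
      return callback(err);
    }
    // Run the query on the employee table
    client.query('SELECT name FROM employee', (err, result) => {
      // Release the client back to the pool once we are done with it
      release();
      if (err) {
        return callback(err);
      }
      // Iterate over each record and print the employee name
      result.rows.forEach((row) => {
        console.log(row.name);
      });
      // End the pool when all DB operations are done
      pool.end(() => callback(null, 'done'));
    });
  });
};
```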

AWS specific notes

By default AWS Lambda has internet access, so it can access web resources.

By default Lambda does not have access to AWS services running in a private subnet.

If you want to access services in a private subnet, e.g. RDS running in a private subnet, then you need to configure the VPC, the private subnet to run Lambda in, and the security group in the Network section of the Lambda configuration.

However, once you do this you will no longer have internet access (since Lambda now runs in the private subnet).

Now if you still need internet access, you need to spin up a NAT gateway or a NAT instance in a public subnet and add a route from the private subnet to this NAT.

Note that if you are encrypting Lambda environment variables using KMS, you will require internet access (KMS needs that). So if your RDS is running in a private subnet you need to follow the above steps to make it work, else you are going to get a bunch of timeout exceptions.

Also note the maximum run time of a Lambda is 5 minutes, so make sure your Lambda execution completes within that time. You should probably limit the rows returned by the DB and process that much in one Lambda execution.

You can also run a Lambda as a batch job (using a cron expression) from CloudWatch.