Using Lambda is the most attractive option as it is very easy to set up and means you don’t need to maintain another account with a different service. The only drawback with using Lambda is that it’s only available in a few select regions. If you use AWS regions outside of the four that currently support Lambda, then you cannot send alerts to Slack directly via Lambda.

Hopefully Lambda will eventually be available in all AWS regions, but until then there is another way to leverage the power of Lambda to get CloudWatch alerts posting into Slack channels – using the AWS API Gateway.

Let’s get started by creating a new incoming webhook within Slack. Once that is done we can create our Lambda function to process the SNS alerts.

Choose one of the available regions for Lambda, skip the blueprint section, and choose a name for your function. Make sure Node.js is selected as the runtime. You can accept the defaults for the rest of the fields.

Paste the following code into the code box, replacing <your_unique_web_hook_url> in the options path with the webhook URL you have created in Slack, and save the Lambda function.

var http = require('https');
var querystring = require('querystring');

exports.handler = function (event, context) {
    console.log(event);
    var message = JSON.parse(event.Message);
    var color = 'warning';

    switch (message.NewStateValue) {
        case "OK":
            color = 'good';
            break;
        case "ALARM":
            color = 'danger';
            break;
    }

    var payloadStr = JSON.stringify({
        "username": "Cloudwatch",
        "attachments": [
            {
                "title": message.AlarmName,
                "fallback": message.NewStateReason,
                "text": message.NewStateReason,
                "fields": [
                    {
                        "title": "Region",
                        "value": message.Region,
                        "short": true
                    },
                    {
                        "title": "State",
                        "value": message.NewStateValue,
                        "short": true
                    }
                ],
                "color": color
            }
        ],
        "icon_emoji": ":cloudwatch:"
    });

    var postData = querystring.stringify({
        "payload": payloadStr
    });

    var options = {
        hostname: 'hooks.slack.com',
        port: 443,
        path: '/services/<your_unique_web_hook_url>',
        method: 'POST',
        headers: {
            'Content-Type': 'application/x-www-form-urlencoded',
            'Content-Length': postData.length
        }
    };

    var req = http.request(options, function (res) {
        res.on("data", function (chunk) {
            console.log(chunk);
            context.done(null, 'done!');
        });
    }).on('error', function (e) {
        context.done('error', e);
    });

    req.write(postData);
    req.end();
};
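To see what the handler does with an alert, here is a sketch of the state-to-color mapping it applies. The alarm field values below are made up for illustration; real notifications carry the same field names with your alarm's details:

```javascript
// Illustrative only: a trimmed-down CloudWatch alarm notification, with
// invented values, and the same color mapping used in the handler above.
var message = {
    AlarmName: "high-cpu",
    NewStateValue: "ALARM",
    NewStateReason: "Threshold Crossed: CPU > 80%",
    Region: "US - N. Virginia"
};

var color = 'warning'; // default for states like INSUFFICIENT_DATA
switch (message.NewStateValue) {
    case "OK":
        color = 'good';
        break;
    case "ALARM":
        color = 'danger';
        break;
}

console.log(color); // prints "danger"
```

So an ALARM notification renders as a red attachment in Slack, OK as green, and anything else as yellow.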

Now we can create our API with the API Gateway from within the AWS console.

Set up a POST method and choose the Lambda function we set up earlier, then click Save.

Now you are ready to deploy your API. Click Deploy API and create a stage; I have used the default suggestion of prod.

Copy the invoke URL and create a new SNS topic called “Slack”. Create a subscription, setting the protocol to HTTPS, and then paste in your API URL from above.
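The subscription step can also be done from the AWS CLI. As a sketch, here is the equivalent aws sns subscribe call built as a command string; the topic ARN and invoke URL below are placeholders for your own values:

```javascript
// Hypothetical topic ARN and API Gateway invoke URL - substitute your own.
var topicArn = 'arn:aws:sns:us-east-1:123456789012:Slack';
var invokeUrl = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod';

// Subscribe the API endpoint to the topic over HTTPS
var cmd = 'aws sns subscribe' +
    ' --topic-arn ' + topicArn +
    ' --protocol https' +
    ' --notification-endpoint ' + invokeUrl;

console.log(cmd);
```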

The final step is to request a confirmation for your new subscription, then check the logs for your Lambda function to find the subscription confirmation link. You need to visit this link to confirm the subscription.
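SNS delivers this as a SubscriptionConfirmation message whose body contains a SubscribeURL field, which is what you are looking for in the Lambda logs. A minimal sketch of pulling it out of a logged message body (the ARN and URL here are placeholders):

```javascript
// Trimmed-down SNS SubscriptionConfirmation body, as it would appear in the
// CloudWatch Logs for the Lambda function. Values are placeholders.
var body = JSON.stringify({
    Type: "SubscriptionConfirmation",
    TopicArn: "arn:aws:sns:us-east-1:123456789012:Slack",
    SubscribeURL: "https://sns.us-east-1.amazonaws.com/?Action=ConfirmSubscription"
});

var parsed = JSON.parse(body);
if (parsed.Type === "SubscriptionConfirmation") {
    // Visiting this URL (browser or curl) confirms the subscription
    console.log(parsed.SubscribeURL);
}
```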

Now you are done and you should have CloudWatch alerts flowing through to your Slack channel.

When you create the new IAM user you should generate new keys, then place the access key, secret key and your S3 bucket region into a file named deploy-keys.json in the same directory as your Gruntfile, in the following format. Make sure you add this file to your .gitignore; you should never commit API keys.

{
    "AWSAccessKeyId": "",
    "AWSSecretKey": "",
    "AWSRegion": ""
}

Attach the following IAM policy to your newly created user, where <your-bucket-name> is the name of the S3 bucket you have created:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<your-bucket-name>/*"
        }
    ]
}

You can now add the following to your grunt.initConfig to set up the deploy task:

aws: grunt.file.readJSON('deploy-keys.json'), // Load deploy variables

aws_s3: {
    options: {
        accessKeyId: '<%= aws.AWSAccessKeyId %>',
        secretAccessKey: '<%= aws.AWSSecretKey %>',
        region: '<%= aws.AWSRegion %>',
        uploadConcurrency: 5, // 5 simultaneous uploads
        downloadConcurrency: 5 // 5 simultaneous downloads
    },
    production: {
        options: {
            bucket: '<your-bucket-name>'
        },
        files: [
            {expand: true, cwd: '/', src: ['**'], dest: '/'}
        ]
    }
},

Refer to the grunt-aws-s3 documentation for further configuration options. You may also like to change the files cwd to a sub folder such as /dist or /www if you have Grunt running a build step into a sub directory.
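For example, if your build step outputs into dist/, the files mapping for the production target might look like this sketch (the paths are assumptions for illustration); cwd tells grunt-aws-s3 where to read files from locally, while dest sets the key prefix inside the bucket:

```javascript
// Hypothetical files mapping for a project that builds into dist/.
// expand enables dynamic src-dest mapping; cwd is stripped from the S3 keys.
var files = [
    {expand: true, cwd: 'dist/', src: ['**'], dest: '/'}
];

console.log(files[0].cwd); // prints "dist/"
```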

You can now deploy your application to S3 by running the following:

$ grunt aws_s3

Most likely you will want to run a few other Grunt tasks before deploying, such as linting or building, so you should register a task like so:

grunt.registerTask('deploy', [
    'jshint',
    'build',
    'aws_s3'
]);

Now you have a deploy task which will run jshint and build your project before deploying it to S3 (obviously you will need to have registered tasks for jshint and build for this to work):

$ grunt deploy

This task can now be run whenever you need to push a new version of your application live.

A final tip – if you are using jshint or any other linter with your project and have it set to enforce camel-case variable names, you may find that it does not like the “aws_s3” key used inside the grunt.initConfig block. This is easy to fix by adding the following to your Gruntfile:

grunt.task.renameTask('aws_s3', 's3Deploy');

You can now go through the rest of your Gruntfile and replace all references to “aws_s3” with “s3Deploy”, and you will receive no more linting errors.

Creating key pairs with AWS is rather easy, but for convenience and security reasons, generating your own SSH keys and importing them into AWS can be a good option.

From a security standpoint, generating your own key pair means that you can know 100% that the private key has never seen the light of day… or any computer other than the one you generated it on.

If you are using multiple regions in AWS then generating your own key pair and importing it gives you another benefit – you can use the same key globally rather than having to create one per region.

On Ubuntu the process of generating a key pair is as simple as running the following command.

$ ssh-keygen

This will prompt you to enter a name for the key and then a pass-phrase, which can be left blank if you wish… I usually leave it blank because I don’t want to enter a password every time I use the key.

Once you have entered the required details you will have two files which have been generated for you: <keyname> and <keyname>.pub where <keyname> is the name you chose.

You can now import the .pub file into the Key Pairs section of the EC2 console.

You can import this same public key into as many different regions as you wish which enables you to connect to all of your servers with the same private key – much simpler than keeping track of a key for each region.
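The import can also be scripted rather than clicked through per region. As a sketch using aws CLI v2 syntax (the key name, public key path and region list below are placeholder assumptions), building one aws ec2 import-key-pair command per region:

```javascript
// Hypothetical key name, public key path and region list - substitute your own.
var keyName = 'my-global-key';
var pubKeyPath = '~/.ssh/my-global-key.pub';
var regions = ['us-east-1', 'eu-west-1', 'ap-southeast-2'];

// One import command per region; the same public key works everywhere.
var commands = regions.map(function (region) {
    return 'aws ec2 import-key-pair' +
        ' --key-name ' + keyName +
        ' --public-key-material fileb://' + pubKeyPath +
        ' --region ' + region;
});

commands.forEach(function (cmd) {
    console.log(cmd);
});
```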

Now you are good to go, you will be able to launch new instances with your created key pair safe in the knowledge that your private key is as secure as can possibly be.

If you use Auto Scaling with AWS, the following script may come in handy.

Sometimes you just want to connect to a random auto scaled server or servers. Using this script you can simply run it once to get a random server or run it repeatedly to connect to all the servers in your auto scaling group.

I place the script at ~/bin/appserver and then run chmod +x ~/bin/appserver to make it executable.

It requires PHP and the AWS CLI to be installed – you will also need to have permission to run the
aws ec2 describe-instances command.

Setup is simple: just set the path to your private key and your SSH username, and change the auto scaling group or groups you wish to connect to.
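The script itself is PHP, but its core logic is easy to sketch. Working from the JSON shape that aws ec2 describe-instances returns (Reservations containing Instances, each tagged with aws:autoscaling:groupName), picking a random instance from a group looks roughly like this; the group name, IDs and addresses below are invented, and the key path and username are assumptions:

```javascript
// Trimmed-down sample of `aws ec2 describe-instances` output; real output
// has many more fields. Addresses and tag values here are invented.
var output = {
    Reservations: [
        { Instances: [{ PublicIpAddress: '203.0.113.10',
                        Tags: [{ Key: 'aws:autoscaling:groupName', Value: 'app-asg' }] }] },
        { Instances: [{ PublicIpAddress: '203.0.113.11',
                        Tags: [{ Key: 'aws:autoscaling:groupName', Value: 'app-asg' }] }] },
        { Instances: [{ PublicIpAddress: '203.0.113.12',
                        Tags: [{ Key: 'aws:autoscaling:groupName', Value: 'other-asg' }] }] }
    ]
};

// Collect the public IPs of instances tagged with the target group
var group = 'app-asg';
var ips = [];
output.Reservations.forEach(function (r) {
    r.Instances.forEach(function (i) {
        var inGroup = (i.Tags || []).some(function (t) {
            return t.Key === 'aws:autoscaling:groupName' && t.Value === group;
        });
        if (inGroup) ips.push(i.PublicIpAddress);
    });
});

// Pick one at random and build the ssh command
var ip = ips[Math.floor(Math.random() * ips.length)];
console.log('ssh -i ~/.ssh/my-key ubuntu@' + ip);
```

Running it once connects you to a random server in the group; running it repeatedly cycles you through them, which is exactly the behaviour described above.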