I recently posted about how to use regex in Node routes, and based on a couple of questions I got, I thought I would throw a couple more nuggets out there on the subject of using params and Express error handling.

#1 – router.param is only available in Express 3 and 4. Express 3 has quirks.

If you are on Express 2.5, there is a great and easy-to-use Express package called express-params. It’s easily installed through npm, and makes these functions more or less drop-in.

Going from Express 3 to Express 4, you are going to experience some quirks, as some of the syntax has been deprecated over time while the functionality has gone from an outside package to a built-in feature. The docs make it pretty clear what they have killed in the process, but if you are making a conversion it is worth taking a look.

But frankly, if you are on these older versions, I would urge you to be working towards a 4.x conversion. 3.x is on a sunset schedule and 2.x is considered deprecated. You are also missing out on a lot of convenience methods that have come as the platform has matured over time.

#2 – Express Error Handling

Specific to my example, I opted to target the ‘catch-all’ position for my router.

app.use(function(err, req, res, next) {
  // Choose an appropriate status code and message for your application
  res.status(err.status || 500).send(err.message || 'Something went wrong.');
});

In a production situation, or really even just a larger environment, you will probably want to have a more elaborate/sophisticated error handling process, but I wanted to keep things simple for the example as the focus was not on error catching.
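As a sketch of what a slightly more structured handler might look like (assuming Express 4; the status logic and JSON response shape here are illustrative assumptions, not part of the original example):

```javascript
var express = require('express');
var app = express();

// A sketch of a more structured error handler (Express 4).
// The status logic and response shape are illustrative choices.
app.use(function(err, req, res, next) {
  // Log full details server-side only
  console.error(err.stack);

  var status = err.status || 500;
  res.status(status).json({
    // Avoid leaking internals on unexpected errors
    error: status === 500 ? 'Internal server error' : err.message
  });
});
```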

Hope that clears some things up! Feel free to reach out on Twitter or leave a comment if there’s anything else.

I was recently noodling around on a problem involving more secure express.Router() endpoints on Node.js servers. As you use more ‘x-as-a-service’ APIs, you begin to notice that regular expressions are a commonly available feature – Firebase, for example, employs matching strings as a part of its security method. This got me thinking – why can’t I do something similar for my own points of entry?

In the process, I found some hard ways to do it, but I also found a pretty easy way to make this happen.

#0 – A quick review on express.Router

While the Express project describes itself as a “minimal and flexible web application framework”, the Router() docs call a router a “mini-application.” What is meant by this is that in Express a router object is limited to routing functions and middleware operations. You can still accomplish a good deal from ‘just’ inside of middleware, however.

For instance, to execute something every time the router is invoked, you simply place it at the top level of your router.

router.use(function(req, res, next) {
  // perform some sort of logic test, such as checking body contents in req
  // or whether authentication has occurred
  next(); // moving on...
});

Similarly, you can use the same sort of design pattern on a deeper location of your route, allowing for more localized and less global logic.

router.use('/user', function(req, res, next) {
  // perform additional logic
  next();
});

From there, you can respond to routes using .get and .post functions, two methods which should feel familiar as typical HTTP verbs to anyone with a background in server technologies.

router.get('/user/:parameter', function(req, res) {
  // do something
});

router.post('/profile', function(req, res) {
  // do something else
});

With that oversimplified explanation out of the way, let’s jump into something more interesting.

#1 – Using Parameters on .get methods

As far as experimenting and creating examples for this sort of functionality, I find it far easier to start by working off of
.get methods. But first, let’s create a little server code.

// Let's get it started
var express = require('express');
var app = express();
var router = express.Router();
var port = process.env.PORT || 8080;

app.use('/api', router);

app.listen(port);
console.log('Port ' + port + ' is a go.');

Router() methods, at their most basic, will simply resolve as long as you have them set up.

router.get('/user', function(req, res) {
  res.send('I am route!');
});

You can also pass parameters into your routes by using the pattern below. They are accessed through the req.params object inside your route function.

router.get('/user/:username/:thing', function(req, res) {
  res.send('Hey, ' + req.params.username + '. I am ' + req.params.thing + '!');
});

This is cool and easy, but it also means that anyone can just walk right on in. That is less cool. What would be nice is if it were possible to prevent random or snooping traffic from making its way into our system.

#2 – Use Regex in line with your Route

As mentioned before,
Router() also allows you to include logic onto your route. In this instance, that is handy so that you can check against the parameter that is being passed against your route.

router.route('/user/:username([A-Z]+)/:thing')
  .get(function(req, res) {
    res.send('Hey, ' + req.params.username + '. I am ' + req.params.thing + '!');
  });

As a very simple example, this updated route will now only accept usernames that are composed of uppercase letters.

Should it fail due to an unmatched URL, it will return a 404 as well as an error message thanks to some Express defaults.

In case you are interested,
route actually works by turning the route’s strings into regular expressions, which Express then matches against the incoming requested routes. What we end up doing here, then, is placing additional constraints on those internal regular expressions.
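Conceptually, the constrained route above behaves like matching incoming paths against a regular expression along these lines. This is a simplified sketch for illustration, not Express’s exact internal pattern:

```javascript
// Simplified sketch of how a route string with a parameter constraint
// might compile down to a regular expression. Express's real patterns
// (built via path-to-regexp) are more involved; this only shows the idea.
var routePattern = /^\/user\/([A-Z]+)\/([^\/]+)\/?$/;

// An all-uppercase username matches...
console.log(routePattern.test('/user/ALICE/widget')); // true
// ...while a lowercase one does not, so Express falls through to a 404.
console.log(routePattern.test('/user/alice/widget')); // false
```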

#3 – Create Params Functions as Middleware Functions

That pattern works if a) you have a fairly simple string you are trying to match and b) you don’t need to do anything other than pass or fail. Should you want to get more complex or creative, however, you are going to want to dig further into the documentation.

Router.param() lets you add increased complexity to the execution series for your router method, and in this case it is a great help in keeping more complex checking requirements manageable. param logic is fired before your route code, giving you the same sort of managed process as before while also pushing you past simple ‘yes’ or ‘no’ restrictions.

To start, you simply need to remove the inline check and make a simpler route statement.

router.route('/user/:username/:thing')
  .get(function(req, res) {
    res.send('Hey, ' + req.params.username + '. I am ' + req.params.thing + '!');
  });

You then need to create a param function. The function is registered under the name of the parameter you are trying to match, and when set up correctly Express will pass the parameter’s value into the param function so you can work with it however you choose. In this case, I will opt to check against a regular expression to confirm that the username is made up of numbers. If it is, we will proceed as normal. If it is not, I will toss it to Express’s error methods.
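A minimal sketch of that param function, assuming Express 4 (the numeric-username rule and the error message here are just illustrative choices):

```javascript
var express = require('express');
var router = express.Router();

// Runs before any route on this router that uses :username
router.param('username', function(req, res, next, username) {
  if (/^[0-9]+$/.test(username)) {
    // Valid: continue on to the route handler
    next();
  } else {
    // Invalid: hand an error to Express's error-handling chain
    var err = new Error('Invalid username format.');
    err.status = 400;
    next(err);
  }
});

router.route('/user/:username/:thing')
  .get(function(req, res) {
    res.send('Hey, ' + req.params.username + '. I am ' + req.params.thing + '!');
  });
```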

Another nice thing about this design pattern is that
param functions are local to their
router objects, so you don’t have to worry as much about naming conflicts – authenticated pages such as
profile/:username and public pages such as
homepage/:username can require different constraints while keeping your code readable.

While these patterns are only a start, they should provide a reliable foundation for creating specifically constrained systems that reject unexpected inputs. It is, of course, not a bulletproof method by itself, but it can be a nice addition to any routing recipe.

I’d be very interested to hear any thoughts, questions, comments, or suggestions below!

In a previous post, I discussed how to use Mandrill in an Angular application. I concluded, however, that I wasn’t sure that I would really use that design pattern in a production environment. In my opinion, it ultimately exposes too much to the client. I mentioned, however, that it would probably be a better route to use the Node API with an Angular client sending it signals. So I thought I’d put together an example that demonstrates just that.

#1 – Setup Factory in Angular

The first step is to set up (or replace) a factory in Angular that can post information to your Node server. In a minute, we will set up an API route inside of Node that will expose the Mandrill endpoint. For now, just point the $http.post() to your actual location.

mandrillFactory.js

.factory('Mandrill', ['$http',
  function($http) {
    return {
      goodLogin: function(resp, message) {
        return $http.post('http://localhost:8080/simpleapi/mandrill', {
          'data': resp,
          'message': message
        })
        .success(function(data, status, headers, config) {
          return data;
        });
      }
    };
  }]);

On success, the factory will return the data that is returned to it by the server to the controller that called it. You will want to setup error handling in addition to what I have here, but there are a lot of project-specific choices to be made in that regard, so I’ll leave it out for the example code.

#2 – Call it from your Controller

Like in my previous post, you will want to call the factory from your controller.

someController.js

Mandrill.goodLogin(resp, message).then(function(data) {
  if (data.data[0].status === 'sent') {
    $scope.thisErrorMessage += ' The email was sent.';
  } else {
    $scope.thisErrorMessage += ' This email was not sent to the dev team for an unknown reason. Oops.';
  }
});

Using the Promise method, the data returned to the factory by the server is then given back to the controller, where something can be done with it. What I provide here is, admittedly, a weak example, as it simply updates a variable in your View to return a message to the user, but I think the idea should be clear enough. I also covered promises a bit more deeply in that former post.

#3 – Use Mandrill In Node

Finally, we get to Node. To recap, you should have done some basic setups for your server. Keep in mind, that for my example I will be using
body-parser.

server.js

var express = require('express');
var app = express();
var router = express.Router();
var bodyParser = require('body-parser');

app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());

app.use(function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Content-Type");
  next();
});

app.use('/simpleapi', router);

var apiPort = process.env.PORT || 8888;
app.listen(apiPort);

Then you installed and configured Mandrill as part of your server.

server.js

var express = require('express');
var app = express();

...

var mandrill = require('mandrill-api/mandrill');
var mandrill_client = new mandrill.Mandrill('YOUR_API_KEY');

...

app.listen(apiPort);

Now comes the time to create the Mandrill endpoint. For the sake of consistency, I have kept the same format of the message as the Angular-only post.

Using body-parser, the contents of the POST are available at locations such as req.body.message, which I have used in this basic template. An object variable, message, is constructed with Mandrill API constraints and passed to the mandrill_client.messages.send function. On success, the resulting JSON is passed back to the factory (and then on to the controller). On error, for now, a message is printed to your console.
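A sketch of what that endpoint might look like, assuming the router and mandrill_client from the snippets above. The subject line, addresses, and HTML wrapper are illustrative assumptions; see the Mandrill API docs for the full message options:

```javascript
router.route('/mandrill')
  .post(function(req, res) {
    // body-parser has already placed the POST contents on req.body
    var message = {
      'html': '<p>' + req.body.message + '</p>',
      'subject': 'A message from your app',  // illustrative
      'from_email': 'noreply@example.com',   // illustrative
      'from_name': 'Your App',
      'to': [{
        'email': 'devteam@example.com',      // illustrative
        'type': 'to'
      }]
    };

    mandrill_client.messages.send({ 'message': message }, function(result) {
      // Pass Mandrill's JSON result back to the Angular factory
      res.json(result);
    }, function(e) {
      // On error, for now, just report to the server console
      console.log('A Mandrill error occurred: ' + e.name + ' - ' + e.message);
    });
  });
```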

In reality, it is more likely that the factory would be sending more information to the server – email addresses would be dynamic rather than hardcoded, etc. Additionally, Mandrill’s extensive API options mean there is far more to consider. This example set, however, demonstrates the back and forth.

Using this design pattern, I believe that you can enjoy the benefit of doing more of your work quietly in the background without exposing nearly as much to the client. In my tests, this design pattern performed just as well as a client-only method. It also has the additional benefit of sending from a single IP address, which allows you to implement one more part of Mandrill’s security offerings.

In the process, however, I came across a third, and probably final, design approach that would take this paradigm one step further – using a secure but light-weight service, such as Firebase, to function as a bus system for sending signals between the client and backend, completely segregating your client and server work altogether. Stay tuned for one more post in this series.

UPDATE 4/30/2016: This is, hands down, one of my most popular posts. I had to take a break from blogging for several months, and I have come back to a lot of comments about a couple of my API methods being unavailable. I don’t have an answer yet, but I will be looking into this and I promise I will be getting back to you guys.

I’ve recently been working on a project, a piece of which I’ve also open sourced, using AngularJS for the client and Firebase for the backend. This stack still left us in need of an object storage service, so for now we have turned to AWS S3. Doing a quick Google search will bring up all kinds of background on what S3 is and does, but suffice to say it’s a pretty powerful and flexible set of APIs for BLOB storage.

One quirk in the AWS ecosystem, however, is that much of its security assumes AWS-service-to-AWS-service connections – that your EC2 is reading/writing your S3. It is not particularly difficult to gain access to your AWS services from the outside, but it has not always been clean or secure. Many times you would have to embed your secret key into your application. Sure, you could set up all kinds of IAM rules, and AWS provides methods to avoid hard coding in line with your application code, but you are still shipping with permanent keys. While this model would give pause even in a compiled environment, such as a mobile app, it was even more concerning as we were operating in client-side JavaScript.


It turned out, though, we were coming into the project at a great time. The AWS team has introduced a new set of tools to a service called Cognito, literally explaining parts of it as we were pushing through our own code. At its core, what CognitoIdentityCredentials does is allow OpenID tokens to cleanly access AWS without exposing the secret keys to the client. When you really look into it, there is some cool stuff there.

But what if, like us, you were working on an app that did not require Facebook or Google logins, thus could not guarantee that there would always be an OpenID token? (For the record, Firebase does not support OpenID. I definitely checked.) That is where Cognito really becomes an interesting solution. Fairly recently, the Cognito team has added a means to use the service as an OpenID token generator, which can then be used to make a connection to AWS more broadly.

It takes a little doing – a little Angular, a little Node, and a few AWS settings – but with the right setup you can start offering a safer and smarter way to connect your outside services to AWS. So here we go.

#1 – Setup a Cognito ‘Identity Pool’

The first thing you are going to want to do is set up an Identity Pool on Cognito. The setup is really straightforward for a simple sample application. Choose Cognito from the services menu inside of your AWS account, name the Pool, set up a Developer Authenticated Identity, and hit ‘Create Pool.’ For now, I would let the setup wizard create a new IAM role for you. And then you are technically all set up. You should see a dashboard that shows zero logins, and if you go to your IAM dashboard you will see a new role has been created.

Note: If you have a brand new AWS account, you will be able to continue the rest of the setup at this point, but keep in mind that root access is open. If you have removed root access and created users and group policies, you will need to update your policy to allow those users to access Cognito:
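A policy statement along these lines would cover it. This is a sketch; in production you would scope the actions and resource more tightly than this:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["cognito-identity:*"],
    "Resource": "*"
  }]
}
```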

While digging into the API docs and becoming familiar with the authentication features, it took me a while to really wrap my head around where Cognito was meant to sit in my design pattern. The AWS team had put together a few blog posts, but I was still finding myself a bit stuck on where exactly all the different pieces were supposed to fall. This confusion was stemming from a few things, some of them my false expectations for the service and some of them mixed communication regarding the JavaScript SDK. So before I go too far into real code, let me explain a few things that I spun my wheels on in hopes of saving others some time. If you want to skip ahead, the last three are important.

– You will still need to use your secret keys at some point, but you can still keep those keys relatively secret

If you use good design patterns and server security, and familiarize yourself with AWS configuration recommendations, you are off to a good start. I will say that the Cognito team stresses over and over again in the forums the danger of exposing your secret keys in client-side JavaScript. That seems to be a big reason they are offering this solution – it’s a good fix.

– Cognito Identity is only maybe .01% of the access battle

A lot of the security and privileges are going to come from the AWS services themselves, setting access and security rules like you would for any other method. I didn’t go in thinking about it as a magic bullet, but ran into a few people with high expectations of what Cognito would do. It is handy, but not God-like.

Those credentials are also temporary, time-limited, and, preferably, role-reduced on the IAM end, so client-side exposure or interception is less catastrophic.

– You actually end up establishing two separate credentialed AWS entities

One uses your private keys, one uses your WebIdentityCredentials; both make calls to your AWS services at different times, but they will not conflict or overwrite each other, since they operate on different servers, or at least different ports, because…

– Cognito’s roots are in mobile devices, not web technologies, so it expects the client to be completely abstracted from the backend

This aspect was actually the hardest for me to ‘get’. Because of its background as a mobile application solution, sometimes the JavaScript side of things gets explained a little oddly to the community – in some cases, the blog posts or forum writers would leave out an exception or rule for the JS SDK. Things get worked out eventually, but, still, the JavaScript end is not always as clear as I would have liked.

Cognito expects a design pattern where the client and the server are entirely separated. Said another way, you need some sort of small API your JavaScript app can call. I actually like this approach. It keeps your more sensitive pieces out of the exposed JavaScript, and it also has the added benefit of writing a single private API that can be accessed by any client – be it web, Android, iOS, or whatever else may come.

So for our purposes, the flow will go something like:

[App Auth in Angular] ->
[Success Auth] ->
[Post Request to Private API in Express/Node] ->
[getOpenIdTokenForDeveloperIdentity is called in post request] ->
[returns WebToken to Angular] ->
[WebIdentityCredentials is called in Angular using WebToken]

I will assume you can handle creating authentication of your application and focus on the steps that come after that.

It is probably a smart idea to create a factory to handle the AWS work outside of your larger authentication method. For now, make a small factory that makes a simple post request out to an API that you will need to build.
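As a sketch, such a factory might look like the following. The factory name, endpoint URL, and payload field are all illustrative assumptions:

```javascript
// Hypothetical factory: posts the authenticated user's ID to the
// private API that will hand back a Cognito token. The name,
// URL, and field name are all illustrative.
.factory('AWSTokens', ['$http',
  function($http) {
    return {
      getToken: function(userId) {
        return $http.post('http://localhost:8888/simpleapi/aws', {
          'userid': userId
        });
      }
    };
  }]);
```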

// Only use the below settings to make it easy on your local, development machines.
// You will want to better prep your settings for distribution.
var express = require('express');
var app = express();
var router = express.Router();
var AWS = require('aws-sdk');
var bodyParser = require('body-parser');

// body-parser is included so the POST handler can read req.body
app.use(bodyParser.json());

AWS.config.loadFromPath('./config.json');
AWS.config.region = 'us-east-1';

app.use(function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Content-Type");
  next();
});

router.route('/aws')
  .post(function(req, res) {

  });

app.use('/simpleapi', router);

var apiPort = process.env.PORT || 8888;
app.listen(apiPort);
console.log('Port ' + apiPort + ' is a go.');

From here, you need to start looking at the CognitoIdentity docs, and things can get a little unclear. There are several ways you can use the service, but in this instance we are only wanting to wire up a connection that employs an authenticated identity using the developer provided name.

One of the reasons I was so particular about using authenticated identities with Cognito is that the service creates a unique IdentityId for each user inside of that Identity Pool. This IdentityId can be merged with other developer identities later on, say if the user adds a Facebook or Google account. This structure makes it extremely helpful for keeping your data organized and referable down the road. (IdentityId and Identity Pool are Cognito terminology.)

Additionally, in various explainers from the AWS team, you will see reference to CognitoCredentials and AssumeRoleWithWebIdentity. Neither of those apply when connecting in this way with the JavaScript SDK. (Back to that in a minute)

In the JavaScript docs, you may also come across CognitoIdentityCredentials which can abstract the authentication process, but it does not work with ‘non-public’ Developer Identities (anyone that is not Facebook, Google, or Amazon) at this time. Those three properties cost me a lot of lost time in working up this configuration, so save yourself some time and stay focused here.

That being said, you will want to build out your POST function to actually do something. Because I wanted to use authenticated identities with Cognito, as part of the API design I needed to check for some sort of ID information in the post request before reaching out to AWS.

You will then reach out to your Cognito service using your private credentials and request an OpenID token through the DeveloperIdentity provision. If successful, that will return both the IdentityID and the token, both of which you can pass back to your client application for further use.
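As a sketch, the built-out handler might look like the following, assuming the server setup above. The identity pool ID and developer provider name are placeholders you would replace with your own values:

```javascript
router.route('/aws')
  .post(function(req, res) {
    // Only reach out to Cognito if the request carries some ID information
    if (!req.body.userid) {
      return res.status(400).json({ error: 'No user ID provided.' });
    }

    var cognitoidentity = new AWS.CognitoIdentity();
    var params = {
      IdentityPoolId: 'us-east-1:YOUR_IDENTITY_POOL_ID', // placeholder
      Logins: {
        // Key is the developer provider name you set on the pool
        'login.yourapp.example': req.body.userid
      },
      TokenDuration: 3600 // seconds; optional
    };

    cognitoidentity.getOpenIdTokenForDeveloperIdentity(params, function(err, data) {
      if (err) {
        console.log(err, err.stack);
        return res.status(500).json({ error: 'Cognito request failed.' });
      }
      // data contains both the IdentityId and the Token
      console.log('Cognito identity returned: ' + data.IdentityId);
      res.json(data);
    });
  });
```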

You can tell that the connection has been successful by checking your Cognito dashboard to see whether the number of identities has increased, and by logging the response to your server console.

#5 – Handling API Return in Angular

Once your Node server has made its connection and created its token, that token needs to get passed back to your client-side application so you can configure AWS with these temporary, access-restricted credentials. At this point, you want to be sure that you have loaded in the SDK.

Much of the content on using Developer Identities says that the next step should be invoking AssumeRoleWithWebIdentity. After a few dead ends, I was able to confirm with the Cognito team that you are actually going to want to call WebIdentityCredentials. It is wrapped around AssumeRoleWithWebIdentity, and part of their Web Identity Federation process. I imagine the confusion is partly due to Cognito’s background as a mobile-focused service and the docs will get updated eventually.

At this stage, the POST request needs to get updated to actually connect the client on success. WebIdentityCredentials is fairly straightforward to configure.
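On the client, the success handler can then feed the returned token into WebIdentityCredentials. A minimal sketch, where the role ARN is a placeholder for the IAM role Cognito created for your pool and data is the object returned by the private API:

```javascript
// `data` is the response body from the private API, containing the
// Token (and IdentityId) from getOpenIdTokenForDeveloperIdentity.
function configureAWS(data) {
  AWS.config.region = 'us-east-1'; // illustrative region
  AWS.config.credentials = new AWS.WebIdentityCredentials({
    // Placeholder ARN: use the IAM role Cognito created for your pool
    RoleArn: 'arn:aws:iam::ACCOUNT_ID:role/Cognito_YourPoolAuth_Role',
    WebIdentityToken: data.Token
  });
  // Subsequent AWS clients (e.g. new AWS.S3()) will now use these
  // temporary, role-restricted credentials.
}
```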

#6 – Giving Cognito Roles Access to Other AWS Services

The last piece to this puzzle is setting up an Authentication Policy for your Cognito Identity Pool. Go into your IAM Roles list and choose the role that was created by Cognito. You will need to manage the policy depending on what services you want to access and how you want to access them. For example, if you want basic S3 read/write/delete authority, click on ‘Manage Policy’ and add:

"Statement": [{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
  "Resource": [
    "arn:aws:s3:::your-S3-name/*"
  ]
}]

You will, of course, need to set up any normal S3 requirements as well, but at that point you will be credentialed for your AWS services to the point of your user (or root access, should you still be using that).

In Conclusion

Adding Cognito does require a few more steps than simply dropping in your credentials, but considering how poor that option is, this feature is a fantastic alternative coming from the AWS team. It also, honestly, does not result in that much additional code, and once you have it running it is ready to service non-web apps as well.

I am glad to see AWS looking outside of its own ecosystem and at how it can provide services for modern web development practices. It was a great fit for a team like ours that saw one part of the AWS offering as a solution, but wasn’t prepared to go all in. The use of unique IdentityIds for users across multiple developers’ OpenIDs (mine, Facebook’s, Google’s) is also a nice benefit for some backend work across platforms.

Feel free to leave any questions or comments – and any suggested improvements! I’m interested to see how this feature and its use evolve.