Update: In my original post, nearly a year ago, I got a little hot under the collar over the lack of promise support in the AWS SDK. Well, no more! As of the end of March 2016, our cries have been heard. I still have some digging into the new option to do for myself, but it's great to see this design pattern finally native to the SDK.
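For anyone curious what the promise flavor looks like: in SDK v2 you can call .promise() on a request and chain .then()/.catch() instead of passing a callback. A quick sketch – the little stub below stands in for AWS.S3 just so the example runs without credentials; with the real SDK you'd use new AWS.S3() exactly as in the examples further down.

```javascript
// Sketch of the promise support added to the AWS SDK v2 in 2016.
// The stub mimics the SDK's shape: getObject(params) returns a request
// object whose .promise() resolves with the response.
var s3 = {
  getObject: function (params) {
    return {
      promise: function () {
        return Promise.resolve({ Body: 'contents of ' + params.Key });
      }
    };
  }
};

var params = { Bucket: 'my-test-bucket', Key: 'file.ext' };

s3.getObject(params).promise()
  .then(function (file) {
    console.log(file.Body); // → contents of file.ext
  })
  .catch(function (err) {
    console.log('something went wrong: ' + err);
  });
```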

One of the first things any tutorial on Angular will explain is its two-way data binding feature. As explained by the docs, this enables the “automatic synchronization of data between the model and view components.” It’s extremely handy and powerful – not to mention a time saver from a development perspective.

But I recently ran into an unexpected issue while developing an Angular application that also uses the AWS JavaScript SDK. While I could successfully retrieve my AWS object and place it in my model, my view would not update in response to the change in the model. After fighting with and digging into the AWS documentation, I finally realized my issue was not with AWS, but with Angular – or rather, with my Angular code. I admittedly have not looked deeply into the root cause, but this solution has resolved it in all my use cases.

For a touch more practical Angular context, you could set something up like the below in your controller:

var theObjectLocation = locationString + '/file.ext';
var s3 = new AWS.S3();
var params = { Bucket: 'my-test-bucket', Key: theObjectLocation };

s3.getObject(params, function (err, file) {
  if (err) console.log('something went wrong: ' + err);
  else {
    $scope.file = file;
    /* do something with the file */
  }
});

#1 – Using AWS getSignedUrl with getObject

So getObject is great and all, but I find a lot of the time when I am ‘getting’, I am really just ‘needing’ – as in “I need to display an image.” In comes getSignedUrl – an extra little S3 helper that makes your resource available via a presigned URL with an expiration date. I find it’s a nice balance between making your resources perpetually available and making them available only when directly downloaded by your own software. So now, instead of downloading a file, doing some sort of work on it, and then pushing it to my app’s View, I can just point directly to the URL. The user can then download it themselves, or, for my specific example, I can now set an image URL instead of trying something less compatible with older browsers (hello, canvas) or more work intensive on the client (atob/btoa).

I can take my fairly straightforward example above, and with a minor change, simply throw an S3 generated URL to my Angular app’s View.

var theObjectLocation = locationString + '/file.ext';
var s3 = new AWS.S3();
var params = { Bucket: 'my-test-bucket', Key: theObjectLocation };

s3.getSignedUrl('getObject', params, function (err, url) {
  if (err) console.log('something went wrong: ' + err);
  else {
    $scope.url = url;
    /* do something with the URL */
  }
});
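Worth noting while we’re here: the presigned URL’s lifetime is controlled by the Expires parameter, in seconds (the SDK defaults to 15 minutes), and if you omit the callback, getSignedUrl returns the URL synchronously. A sketch – the stub below stands in for AWS.S3, and the URL shape it produces is made up, purely so the example runs anywhere:

```javascript
// Expires is in seconds; with no callback, getSignedUrl returns the URL
// directly. The stub mimics that synchronous form of the real SDK call.
var s3 = {
  getSignedUrl: function (operation, params) {
    var expires = params.Expires || 900; // the real SDK defaults to 15 minutes
    return 'https://my-test-bucket.s3.amazonaws.com/' + params.Key +
      '?X-Amz-Expires=' + expires + '&X-Amz-Signature=stub';
  }
};

var url = s3.getSignedUrl('getObject', {
  Bucket: 'my-test-bucket',
  Key: 'file.ext',
  Expires: 60 // the link dies after one minute
});
console.log(url);
```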

#2 – And… magic… wait… nothing happened?

If you are following along at home (or found this post because you are experiencing this right now), you will have noted that your View seemed unimpressed by all our fancy coding and interaction with cloud services. Despite all the big talk of two way data binding… we get absolutely nothing happening in the UI. Check that we bound our variables correctly, check that our template is compliant, check the console for errors…

Still nothing. Bummer.

#3 – Angular Life Cycle and AWS and you…

If you want a deep dive into the Angular Life Cycle, I’m going to just refer you here: Scope Life Cycle

The quick and dirty on this issue, though, is that the AWS SDK does not support promises, and its callbacks fire outside of Angular’s world. And while pretty fast, S3 is just not a fast enough negotiator to return you a URL before Angular assumes even your async function is done and moves on with its life (cycle). You essentially end up in a weird race condition between your client and your server – or, in this case, your web service.
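You can actually see the timing problem without Angular or S3 at all – a callback queued by an async API only runs on a later turn of the event loop, so anything checking the model synchronously (think: the current digest) still sees the old value. A minimal simulation, with setTimeout standing in for the S3 round trip and a made-up fetchSignedUrl helper:

```javascript
// Minimal simulation of the race: setTimeout stands in for the network.
var model = { url: null };

function fetchSignedUrl(callback) {
  // Completes on a later turn of the event loop, like a real S3 response.
  setTimeout(function () {
    callback(null, 'https://example.com/signed');
  }, 0);
}

fetchSignedUrl(function (err, url) {
  model.url = url; // the model does update – just after everyone stopped looking
});

// Synchronous code still sees the old value on this turn of the loop.
console.log(model.url); // → null
```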

That also means that going the ‘oh, I’ll just force the issue by wrapping it all in my own promise’ route won’t work either – unless it’s Angular’s own $q, your promise still resolves outside the digest, and there just isn’t anything in the S3 function to put the brakes on from Angular’s perspective.

Life Cycle has 5 main stages:

1. Creation
2. Watcher registration
3. Model mutation
4. Mutation observation
5. Scope destruction

Our issue is somewhere between stages 3 and 4, while $apply is doing what it does. Basically, even though two-way data binding is great, if Angular is unaware that something is happening in the background (say, a slightly delayed S3 response finally getting around to showing up), it might as well have never happened.

This was actually a pretty tricky issue – the traditional method of debugging (setting breakpoints and stepping through) actually further clouded things, because even fast stepping gave S3 the time it needed to finish. So when not using breakpoints, it broke; when stepping through breakpoints, it behaved as expected.

#4 – …and a solution

With that knowledge, the fix becomes pretty easy. We just need to force the response event to bubble back up to Angular’s $digest. A simple $timeout wrapper should do it.

var theObjectLocation = locationString + '/Image.png';
var s3 = new AWS.S3();
var params = { Bucket: 'my-test-bucket', Key: theObjectLocation };

s3.getSignedUrl('getObject', params, function (err, url) {
  if (err) return console.log('something went wrong: ' + err);
  $timeout(function () {
    $scope.imageurl = url;
  });
});
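For completeness, $scope.$apply is the other common way to force a digest from a non-Angular callback; $timeout is often preferred because $apply throws if a digest is already in progress, while $timeout schedules the work safely. A sketch – the $scope and getSignedUrlStub below are stand-ins so it runs outside Angular:

```javascript
// $scope.$apply runs the function, then triggers a digest. Stubs stand in
// for Angular's injected $scope and the SDK so this sketch runs anywhere.
var $scope = {
  $apply: function (fn) {
    fn();                  // mutate the model...
    this.digestRan = true; // ...then (in real Angular) a $digest kicks off
  }
};

function getSignedUrlStub(operation, params, callback) {
  callback(null, 'https://example.com/' + params.Key + '?signature=stub');
}

getSignedUrlStub('getObject', { Bucket: 'my-test-bucket', Key: 'Image.png' }, function (err, url) {
  $scope.$apply(function () {
    $scope.imageurl = url; // the model change now lands inside a digest
  });
});
```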

Considering the prevalence of promises and of the JavaScript frameworks that make use of them, as well as the general maturity of the AWS SDK, I am frankly pretty surprised this is even an issue. It’s an easy fix, but it would definitely be nice to see promises become a native feature of the SDK.

I have tried to address the issue from a generic perspective, but if I’ve gone too general and there’s anything I can clarify, feel free to reach out or leave a comment below. The feedback I get helps make these posts even better!
