As I've blogged about before, you can use Plupload to upload files directly from the client browser to Amazon S3. But, in order to do so, you have to generate an Amazon S3 upload policy and pass it along with the file upload. Hard-coding this policy at page-load can be problematic: it has to be flexible enough to handle whichever file(s) a user selects, and it has to remain valid for the entire lifetime of the page. As such, I wanted to see if we could use Plupload to generate per-file Amazon S3 upload policies that were short-lived and specific to the file about to be uploaded.

When you use Plupload, you have the opportunity to hook into a large number of events that have to do with the queue, the uploader state, and the file lifecycle. But, the one event that has caught my eye is the "BeforeUpload" event. In the past, I've looked at using the "BeforeUpload" event as a means to set per-file configuration settings. So, it seems possible that we could use this event to generate per-file Amazon S3 upload policies.

By default, the BeforeUpload event and the file upload are synchronous. Meaning, the BeforeUpload event fires and then the upload of the given file begins. As such, Plupload won't wait around for us to perform an AJAX (Asynchronous JavaScript and XML) request to our application server for a per-file upload policy; by the time our AJAX request returns to the client, the file upload will have already begun.

Luckily, Plupload allows us to override this behavior by returning "false" from the BeforeUpload event handler. When we do this, Plupload essentially pauses the upload processing until we explicitly tell it to start again (by manually triggering the "UploadFile" event). This gives us a window, in the queue processing, during which we can communicate with our application server, generate a per-file Amazon S3 upload policy, and update the Plupload configuration, all before the file upload begins.
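To make that concrete, here's a minimal sketch of the pause-and-resume pattern. The getPolicy() helper (standing in for the AJAX call to our application server) and the shape of its response are assumptions for this illustration; only returning "false", the settings.url / settings.multipart_params configuration, and the "UploadFile" trigger are actual Plupload mechanics:

```javascript
// A sketch of the pause-and-resume pattern. The getPolicy() helper is a
// hypothetical stand-in for the AJAX request to the application server;
// its response shape ( formUrl, formData ) is assumed for illustration.
function makeBeforeUploadHandler( getPolicy ) {

	return function handleBeforeUpload( uploader, file ) {

		getPolicy(
			file,
			function( settings ) {

				// Point the upload at the policy-specific form post.
				uploader.settings.url = settings.formUrl;
				uploader.settings.multipart_params = settings.formData;

				// Resume the upload of THIS file.
				uploader.trigger( "UploadFile", file );

			}
		);

		// Returning false tells Plupload NOT to start the upload yet.
		return( false );

	};

}
```

In the demo, getPolicy() would be the request that saves the image record and returns the per-file upload policy.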

Once we take Plupload out of its normal workflow, however, we have to take special precautions around error handling. Generally speaking, when Plupload is processing the queue, it handles errors gracefully. But, it doesn't know about our AJAX request for the per-file upload policy. As such, we have to catch those errors, explicitly clean up the file, and restart the upload process. It's not hard to do (at least in the way I've done it); but, it has to be done.
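The cleanup itself is small. Here's a hedged sketch of what it can look like; the recoverFromPolicyError() name is mine, but removeFile() and start() are real Plupload uploader methods:

```javascript
// A sketch of the cleanup path for when the policy request fails. The
// function name is hypothetical; removeFile() and start() are actual
// Plupload uploader methods.
function recoverFromPolicyError( uploader, file ) {

	// Drop the failed file so it doesn't block the rest of the queue.
	uploader.removeFile( file );

	// If other files are still queued, kick the (stopped) uploader again.
	if ( uploader.files.length ) {

		uploader.start();

	}

}
```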

Ok, let's look at some code. There's a full demo application in my GitHub project; but, I'm only going to show you the Plupload code and the Amazon S3 upload policy generation. First, we'll look at the Plupload code. The part you want to pay attention to is the handleBeforeUpload() function - this is our BeforeUpload event hook where we make a request to the server for a file-specific upload policy:

				// Tell AngularJS that something has changed (the public queue will have
				// been updated at this point).
				$scope.$apply();

			}


			// I handle the file-uploaded event. At this point, the image has been
			// uploaded and thumbnailed - we can now load that image in our uploads list.
			function handleFileUploaded( uploader, file, response ) {

				$scope.$apply(
					function() {

						// Broadcast the response from the server that we received during
						// our previous request to saveImage(). Remember, the FileUploaded
						// event is only for the successful push of the image up to
						// Amazon S3 - the actual image object was already saved during
						// the BeforeUpload event. At that point, the image response was
						// associated with the file, which is what we're broadcasting.
						$rootScope.$broadcast( "imageUploaded", file.imageResponse );

						// Remove the file from the internal queue.
						uploader.removeFile( file );

					}
				);

			}


			// I handle the init event. At this point, we will know which runtime has
			// loaded, and whether or not drag-drop functionality is supported.
			function handleInit( uploader, params ) {

				console.log( "Initialization complete." );
				console.log( "Drag-drop supported:", !! uploader.features.dragdrop );

			}


			// I handle the queue-changed event. When the queue changes, it gives us an
			// opportunity to programmatically start the upload process. This will be
			// triggered when files are added to, or programmatically removed from,
			// the list.
			function handleQueueChanged( uploader ) {

				if ( uploader.files.length && isNotUploading() ) {

					uploader.start();

				}

				$scope.queue.rebuild( uploader.files );

			}


			// I handle the change in state of the uploader.
			function handleStateChanged( uploader ) {

				if ( isUploading() ) {

					element.addClass( "uploading" );

				} else {

					element.removeClass( "uploading" );

				}

			}


			// I get called when upload progress is made on the given file.
			// --
			// CAUTION: This may get called one more time after the file has actually
			// been fully uploaded AND the uploaded event has already been called.
			function handleUploadProgress( uploader, file ) {

				$scope.$apply(
					function() {

						$scope.queue.updateFile( file );

					}
				);

			}


			// I handle the resizing of the browser window, which causes a resizing of
			// the input-shim used by the uploader.
			function handleWindowResize( event ) {

				uploader.refresh();

			}


			// I determine if the uploader is currently inactive.
			function isNotUploading() {

				return( uploader.state === plupload.STOPPED );

			}


			// I determine if the uploader is currently uploading a file.
			function isUploading() {

				return( uploader.state === plupload.STARTED );

			}

		}


		// I model the queue of files exposed by the uploader to the child DOM.
		function PublicQueue() {

			// I contain the actual data structure that is exposed to the user.
			var queue = [];

			// I index the currently queued files by ID for easy reference.
			var fileIndex = {};

			// I add the given file to the public queue.
			queue.addFile = function( file ) {

				var item = {
					id: file.id,
					name: file.name,
					size: file.size,
					loaded: file.loaded,
					percent: file.percent.toFixed( 0 ),
					status: file.status,
					isUploading: ( file.status === plupload.UPLOADING )
				};

				this.push( fileIndex[ item.id ] = item );

			};

			// I rebuild the queue.
			// --
			// NOTE: Currently, the implementation of this doesn't take into account any
			// optimizations for rendering. If you use "track by" in your ng-repeat,
			// though, you should be ok.
			queue.rebuild = function( files ) {

				// Empty the queue.
				this.splice( 0, this.length );

				// Clear the internal index.
				fileIndex = {};

				// Add each file to the queue.
				for ( var i = 0, length = files.length ; i < length ; i++ ) {

					this.addFile( files[ i ] );

				}

			};

			// I update the percent loaded and state for the given file.
			queue.updateFile = function( file ) {

				// If we can't find this file, then ignore -- this can happen if the
				// progress event is fired AFTER the uploaded event (which it does
				// sometimes).
				if ( ! fileIndex.hasOwnProperty( file.id ) ) {

					return;

				}

				var item = fileIndex[ file.id ];

				item.loaded = file.loaded;
				item.percent = file.percent.toFixed( 0 );
				item.status = file.status;
				item.isUploading = ( file.status === plupload.UPLOADING );

			};

			return( queue );

		}


		// Return the directive configuration.
		return({
			link: link,
			restrict: "A",
			scope: true
		});

	}
);

This file is an AngularJS directive that wraps the Plupload implementation; but, the bulk of the code is just Plupload event bindings. If you look at our BeforeUpload hook, you should notice three things: we are returning "false" to cancel the BeforeUpload event; we're saving the image data to the server and getting our per-file Amazon S3 upload policy in return; and, we're storing our image object (persisted data structure) as a property in the Plupload File object. This last action allows the image data to be easily accessed in the subsequent "FileUploaded" event handler.

In this case, I've chosen to create the "image record" before actually performing the file upload. I suppose you could do this in reverse order; but, since I had to communicate with the server to get the Amazon S3 upload policy, I figured I might as well create the data record as well. Plus, I have plans for a future blog post in which it will be helpful to have the persisted record before the file upload begins.

Now, let's take a look at the server-side API endpoint that actually generates the Amazon S3 upload policy. This is the API that is called in the body of the BeforeUpload event handler:

	// Since we know the name of the file that is being uploaded to Amazon S3, we can
	// create a pre-signed URL for the S3 object (that will be valid once the image is
	// actually uploaded) and an upload policy that can be used to upload the object.
	// In this case, we can use a completely unique URL since everything is in our
	// control on the server.
	// --
	// NOTE: I am using getTickCount() to help ensure I won't overwrite files with the
	// same name. I am just trying to make the demo a bit more interesting.
	s3Directory = "pluploads/before-upload/upload-#getTickCount()#/";

	// Now that we have the target directory for the upload, we can define our full
	// Amazon S3 object key.
	s3Key = ( s3Directory & form.name );

	// Create a pre-signed URL for the S3 object. This is NOT used for the actual form
	// post - this is used to reference the image after it has been uploaded. Since this
	// will become the URL for our image, we can let it be available for a long time.
	imageUrl = application.s3.getPreSignedUrl(
		s3Key,
		dateConvert( "local2utc", dateAdd( "yyyy", 1, now() ) )
	);

	// Create the policy for the upload. This policy is completely locked down to the
	// current S3 object key. This means that it doesn't expose a security threat for
	// our S3 bucket. Furthermore, since this policy is going to be used right away, we
	// set it to expire very shortly (1 minute in this demo).
	// --
	// NOTE: We are providing a success_action_status INSTEAD of a success_action_redirect
	// since we don't want the browser to try and redirect once the image is uploaded.
	settings = application.s3.getFormPostSettings(
		dateConvert( "local2utc", dateAdd( "n", 1, now() ) ),
		[
			{
				"acl" = "private"
			},
			{
				"success_action_status" = 201
			},
			[ "starts-with", "$key", s3Key ],
			[ "starts-with", "$Content-Type", "image/" ],
			[ "content-length-range", 0, 10485760 ], // 10mb
			// The following keys are ones that Plupload will inject into the form-post
			// across the various environments. As such, we have to account for them in
			// the policy conditions.
			[ "starts-with", "$Filename", s3Key ],
			[ "starts-with", "$name", "" ]
		]
	);

	// Now that we have generated our pre-signed image URL and our policy, we can
	// actually add the image to our internal image collection. Of course, we have to
	// accept that the image does NOT yet exist on Amazon S3.
	imageID = application.images.addImage( form.name, imageUrl );

	// Get the full image record.
	image = application.images.getImage( imageID );

	// Prepare the API response. This needs to contain information about the image record
	// we just created, the location to which to post the form data, and all of the form
	// data that we need to match our policy.
	response.data = {
		"image" = {
			"id" = image.id,
			"clientFile" = image.clientFile,
			"imageUrl" = image.imageUrl
		},
		"formUrl" = settings.url,
		"formData" = {
			"acl" = "private",
			"success_action_status" = 201,
			"key" = s3Key,
			"Filename" = s3Key,
			"Content-Type" = "image/#listLast( form.name, '.' )#",
			"AWSAccessKeyId" = application.aws.accessID,
			"policy" = settings.policy,
			"signature" = settings.signature
		}
	};

</cfscript>

As you can see, I am taking the name of the file that we're about to upload and using it to generate a file-specific Amazon S3 upload policy. I'm also giving the policy a very short expiration window - one minute. Together, these constraints lock the policy down: it's fully functional for the one upload it was created for, but not useful for much else.

Once the policy is generated, I return both the data record and the policy in the response. The policy data is then injected into the Plupload configuration settings and the upload processing is resumed. At that point, Plupload posts the file to Amazon S3 and triggers the "FileUploaded" event.

I love the idea of offloading Amazon S3 file uploads to the client. This can free up a lot of server-side resources (bandwidth and processing). And, it's great to know that there's a way to do this with Plupload that doesn't require an open-ended upload policy. From a technical standpoint and security standpoint, this feels like a huge win! I love Plupload.

Reader Comments

Interesting approach - I'm also looking at per-file upload URLs with Plupload. I had thought I'd make a synchronous AJAX request inside the BeforeUpload function, but I'm not sure that blocking the Plupload processing is a great idea. It's good to see this approach remaining asynchronous.
