In my scenario I am creating an ephemeral Kinesis stream in my State Machine, which I then stream a large number of records into while executing one Lambda function. I then process those records more slowly in a series of subsequent Lambda functions. Once completed, I delete the ephemeral Kinesis stream.

The problem with this approach is that an unexpected error in any one of my steps can cause the whole Step Function to fail and orphan the Kinesis stream. Therefore I needed a way to reduce the likelihood of this problem with a try/finally pattern.

In the InitializeIterator step we are creating our ephemeral Kinesis stream. In the XmlStream step we are streaming items from a large XML document as JSON objects, which are then written to the stream. Next, in the SendItemsToApi step we are reading items out of the Kinesis stream, doing some formatting and validation on those items, and then sending each item to a REST endpoint for storage and/or other actions. Finally, in the IteratorDone step we are destroying the Kinesis stream.
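To make the XmlStream step a little more concrete, here is a small illustrative helper that shapes parsed items into the record format Kinesis expects. The function name, partition strategy, and shard count are my own assumptions for the sketch, not details from the original implementation.

```javascript
// Illustrative sketch: batch parsed XML items into Kinesis record entries.
// The partition strategy (round-robin over 4 keys) is an assumption.
function toKinesisRecords(items) {
  return items.map((item, i) => ({
    Data: JSON.stringify(item),   // Kinesis payloads are opaque bytes
    PartitionKey: String(i % 4),  // spread records across shards
  }));
}

// With the AWS SDK you would then send a batch, e.g.:
// kinesis.putRecords({ StreamName: streamName, Records: toKinesisRecords(batch) })

module.exports = { toKinesisRecords };
```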

You could imagine a variety of other scenarios where one would need to clean up resources allocated in a previous step. In this particular scenario we need to ensure that the IteratorDone step is called regardless of any errors that may happen between it and the InitializeIterator step.

To do this we first wrap the XmlStream and SendItemsToApi steps in a Parallel block with a single branch. The reason we want to do this is so that these steps can be treated as a single block where an error in any state can be caught and handled in a single Catch clause.
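In Amazon States Language, the wrapper might look roughly like this (a sketch: the ARNs, the `$.parallelResult` field name, and most other fields are placeholders; the state names follow the article):

```json
"TryBlock": {
  "Type": "Parallel",
  "Branches": [
    {
      "StartAt": "XmlStream",
      "States": {
        "XmlStream": { "Type": "Task", "Resource": "arn:...", "Next": "SendItemsToApi" },
        "SendItemsToApi": { "Type": "Task", "Resource": "arn:...", "End": true }
      }
    }
  ],
  "ResultPath": "$.parallelResult",
  "Catch": [
    { "ErrorEquals": ["States.ALL"], "ResultPath": "$.error", "Next": "IteratorDone" }
  ],
  "Next": "IteratorDone"
}
```

Both the success path and the Catch clause continue to IteratorDone, which is what gives us the finally semantics.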

It’s important to note here that the result of the block is an array of results, where each index in the array is the result object from the last step of each branch. So in this case we will have an array with a single object in it: [ { iterator: ... } ]. If you don’t specify a ResultPath it will replace the entire context object $, which is undesirable in this case since we still need to access the iterator object in a later step.

It’s also important to note that we are storing the caught exception into the $.error field, which we will rethrow later, after cleanup.

So now if an error occurs while processing our XML file or sending items to the API, it will retry a couple of times and then ultimately capture the error and move to the Cleanup phase. We’ve added a new Finally step, which will throw an exception if there is a value stored in $.error. This allows the Step Function to complete in an Error state rather than a Success state so we can further trigger alarms through CloudWatch.

Filed under: Humor, Programming

Iterating with AWS Step Functions

https://justinmchase.com/2017/03/08/iterating-with-aws-step-functions/ (March 8, 2017)

One interesting challenge I immediately encountered when attempting to work with AWS Lambda and Step Functions was the need to process large files. Lambda functions have a couple of limitations, namely memory and a five-minute timeout. If you have some operation you need to perform on a very large dataset, it may not be possible to complete it in a single execution of a Lambda function. There are several ways to solve this problem; in this article I would like to demonstrate how to create an iterator pattern in an AWS Step Function as a way to loop over a large set of data and process it in smaller parts.

In order to iterate we have created an Iterator Task, which is a custom Lambda function. It accepts three values as inputs in order to operate: index, step, and count.

ConfigureCount

In this step we configure the number of times we want to iterate. In this case I have set the number of iterations to 10 and put it into a variable called $.count. In a more complete example this may be the number of files you want to iterate over. For example, in my real-world scenario I receive a substantial CSV file which is then broken into many smaller CSV files, all stored in S3, and the number of smaller files is set into the count variable here. The large CSV file can be read entirely in a single Lambda execution by streaming sections into smaller files, never loading the entire file into memory at once; but it cannot be processed entirely in a single function. Thus we split it and then iterate over the smaller parts.
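A simple way to express this is a Pass state that injects the constant (a sketch; the count of 10 follows the article, the rest is illustrative):

```json
"ConfigureCount": {
  "Type": "Pass",
  "Result": 10,
  "ResultPath": "$.count",
  "Next": "ConfigureIterator"
}
```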

ConfigureIterator

Here we set the index and step variables into the $.iterator field, which the iterator lambda uses to determine whether or not it should continue iterating.
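This can again be a Pass state (a sketch; the starting values are illustrative, and starting the index at -1 assumes the Iterator increments before the first unit of work so that the first work item sees index 0):

```json
"ConfigureIterator": {
  "Type": "Pass",
  "Result": {
    "index": -1,
    "step": 1
  },
  "ResultPath": "$.iterator",
  "Next": "Iterator"
}
```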

Iterator

This is the iterator itself, a small lambda function that simply increments the current index by the step size and calculates the continue field based on the current index and count.

The reason we want to support a step size is that we may have multiple workers operating on the data in parallel. In this example we have a single worker, but in other cases we may need more in order to complete the overall work in a timely fashion.

IterateRecords

From there we need to immediately move into a Choice state. This state simply looks at the $.iterator.continue field and if it is not true then our iteration is over and we exit the loop. If iteration is not over then we move to the worker tasks which may use the $.iterator.index field to determine which unit of work it should operate on.
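In Amazon States Language the Choice state might look like this (a sketch; the "Done" terminal state name is a placeholder):

```json
"IterateRecords": {
  "Type": "Choice",
  "Choices": [
    {
      "Variable": "$.iterator.continue",
      "BooleanEquals": true,
      "Next": "ExampleWork"
    }
  ],
  "Default": "Done"
}
```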

ExampleWork

In this example this is just a Pass state, but in a real example this may represent a series of Tasks or Activities which process the data for this iteration. When completed, the last step in the series should point back to the Iterator state.

It’s also important to note that all states in this chain must use the ResultPath field to bucket their results in order to preserve the state of the iterator field throughout these states. Do not override the $.iterator or $.count fields while doing work or you may end up in an infinite loop or error condition.
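For example, a worker Task can bucket its output under its own field and leave $.iterator and $.count untouched (a sketch; the ARN and field name are placeholders):

```json
"ExampleWork": {
  "Type": "Task",
  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:DoWork",
  "ResultPath": "$.workResult",
  "Next": "Iterator"
}
```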

Getting your npm package version from bash

https://justinmchase.com/2016/08/29/getting-your-npm-package-version-from-bash/ (August 29, 2016)

This question came up for me, and since I spent more than 5 minutes looking it up and didn’t find this answer anywhere else, I wanted to document it.

The way to do it through npm is to add a script to your package.json file like so:
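For example (a minimal sketch; the package name, version, and script name are arbitrary):

```json
{
  "name": "my-package",
  "version": "1.2.3",
  "scripts": {
    "print-version": "echo $npm_package_version"
  }
}
```

Then from bash: `version=$(npm run print-version --silent)`.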

If you want to do automation as npm scripts, you can just access the $npm_package_version variable directly in your scripts.

The longer explanation here is that when you run an npm script, npm automatically pulls all of the values out of package.json and puts them into environment variables following the pattern npm_package_*. This means you can expose those values through scripts if you want to use them externally.

The alternate version that I saw elsewhere was to just grab it using node like so:
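Something along these lines (assuming package.json is in the current directory):

```shell
# Read the version straight out of package.json with node's -p (print) flag
node -p "require('./package.json').version"
```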

The problem was that, on Windows, loading a native module with a file system path longer than 256 characters hit the classic MAX_PATH Windows issue. Node actually has a way to successfully load files with long paths, and that fix was applied to all path handling except in this one particular spot. It just so happened that I was the only person in the world trying to load native modules from a long path on Windows, it seems!