NoSQL has become a buzz word over the last decade, and has gained large popularity over that time. At the time of this writing, MongoDB, the most popular document database, has nearly equal usage to PostgreSQL.

NoSQL databases can be great for getting to market quickly if your timeline is particularly restrictive. You are able to create fewer tables/collections, create a thin data layer, and even skip the traditional REST routes. Granted, the latter two aren't long-term solutions, but they get the ship sailing. Additionally, if on a rather slim budget, using the full MEAN or MERN stack allows you to only have to worry about hiring developers that know JavaScript. No specialists required! From a technical perspective, documents can be retrieved with some auxiliary information already attached with no additional work necessary (i.e. getting a user with an address object already attached, as opposed to having to join two tables to get the same result). As Martin Fowler points out in his 2012 talk, this can have benefits when running on clusters, as you will have fewer trips between members of a cluster to communicate with different tables.

A trend among new technologies is the notion that they must be adopted in their "purest" form. However, I argue that NoSQL databases, particularly document databases, should not cause implementers to cast aside the concept of relational models.

Within the category of NoSQL databases lie two sub-categories: Key-Value databases and Document databases. While Key-Value databases are part of this overarching categorization, we will be focusing on Document databases for the purposes of this discussion, as they are the more common of the two.

Let's look at a situation I faced three years ago when I built a finance management platform using MongoDB as the database of choice. I needed to represent the concept of a user, which had one or more monthly bills, as well as one or more accounts, each with one or more transactions.

In a document database such as MongoDB, the simplest way to represent this would be to have a single user collection, with objects like this:
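Roughly, such a nested user document might look like this (field names and values are illustrative, not the project's actual schema):

```javascript
// Illustrative shape only: everything hangs off the single user document.
{
  _id: ObjectId("..."),
  email: "jane@example.com",
  bills: [
    { name: "Rent", amount: 1200, dueDay: 1 },
    { name: "Internet", amount: 60, dueDay: 15 }
  ],
  accounts: [
    {
      _id: ObjectId("..."),
      name: "Checking",
      transactions: [
        { description: "Groceries", amount: -54.23, date: ISODate("2019-01-02T00:00:00Z") }
      ]
    }
  ]
}
```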

Initially, this looks great and simple, and has a few obvious benefits:

We only have to make one top-level query in our UI to get everything that we need, and can just pass it to child components.

We only have to maintain a single schema using something like mongoose.

We only have to write, test, and maintain a single endpoint.

Unfortunately, looks can be deceiving, and this comes with a laundry list of drawbacks:

In general, MongoDB hosting providers like MLab cap the amount of data that can be transferred per query at each payment level.

Transferring data over HTTP only allows up to a certain size for data sent in a single request. Users with a large number of transactions will exceed this. In simple terms: it's not scalable.

Large amounts of memory will be required for simple transactions. Even if you only want a user's accounts, the entire user object must be retrieved.

Testing becomes more complicated. You will generally need to fake the entire user object, bills, accounts, and transactions included, in order to test subsystems.

Migrations become more risky. To simply change the shape of transactions, entire user objects will have to be overwritten.

Two words: race condition. Any change to a bill, account, transaction, or user information will require writing to the same document. To avoid race conditions, endpoints will have to use $push to add transactions, and will have to be more targeted in general so they don't overwrite the entire user object on every write.
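To make that last point concrete, a targeted update might use $push with the positional operator so only the matched account's array grows (a sketch; collection and field names are assumptions):

```javascript
// Append one transaction to one embedded account instead of rewriting the whole user.
db.users.updateOne(
  { _id: userId, "accounts._id": accountId },
  { $push: { "accounts.$.transactions": newTransaction } }
);
```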

Now, let's fast forward to when I had to deal with these situations in reality. I eventually used this application to the point where a single account might have 1000+ transactions. Each transaction had a minimum of three properties: description, amount, and date. Assuming one character per property, that still results in 3,000 characters per account, and that's an extremely conservative estimate. The user objects were simply getting too large, and I ultimately faced 413 Payload Too Large errors.

This led to transactions eventually getting their own collection. In the end, the database looked similar to a relational one.

Transactions collection
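A document in that separate collection might look something like this (a sketch; field names are assumptions), holding a reference back to its account instead of being nested inside it:

```javascript
// One document per transaction, linked to its account by id.
{
  _id: ObjectId("..."),
  accountId: ObjectId("..."),
  description: "Groceries",
  amount: -54.23,
  date: ISODate("2019-01-02T00:00:00Z")
}
```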

Conclusion

NoSQL does not mean non-relational. Relational techniques are about organization and that principle is not null and void when using a NoSQL database like MongoDB. It is entirely okay to think relationally when working with these databases. In the end, queries will be leaner, the API will be under less stress, and with the advent of API methodologies like GraphQL and features like MongoDB's aggregations, the same powerful querying will still be available. Note that accounts and bills are still stored on the user. We still get plenty of data for free when simply requesting a user by _id or email. If I had to boil down the best practice that I've discovered, I would say this:

When dealing with an array of objects with a small, finite number of elements, use nested documents. This is exemplified in the bills array in the example above. A bill will not be shared by multiple users, and a user will not have a massive number of unique bills (100+), so there's no problem having them nested.

When dealing with a one-to-very-many relationship, get relational! In the above scenario, transactions are a great example. A user has an unlimited number of transactions per account, so it makes the most sense to give them their own collection. In this case, no matter how many transactions there are, we will never be forced to query for every single one, giving us full control over how intense the database access needs to be.

When the size of any given element in an array is potentially large, give it its own collection too. On one project I've worked on, a particular document would have an array of objects where one property was a Base64 image string. Only a few of those are needed before the parent document gets particularly bloated. In this case, you will benefit from having the objects containing the image strings in their own collection. (Or better yet, host the images somewhere and only store a link.)

Launching a new project is exciting, but comes with the natural caveat of being extremely fast paced. In an environment focused on agile software delivery, the main priority is delivering working code and getting to market. In the heat of the rush, it's easy to let some housekeeping fall behind. Managing git can feel like a second-class citizen, but lifting it up to a core priority can have major benefits in the long run. It's important to lay the groundwork early on to pave a road to team success. Let's take a look at some key principles that will keep merge conflicts to a minimum, make end-to-end delivery quicker, and keep the team moving at all times.

1. Get the team involved in the review and merge process.

I have found that developers tend to be people who like to have control. It's in our nature. One project I worked on had a centralized review and merge setup. Developers would work on a ticket and create a pull request. At some unknown time in the future, one of the two organization leaders would get around to reviewing the ticket for code quality and requirement coverage, and then merge the PR. While this can have benefits, it has major drawbacks. First, the process is slow - two people who also have other role responsibilities can only do so much in a given time frame. In addition, the code base gets a very one-sided, opinionated treatment. By allowing your entire team to review one another's work, pull requests get from A to B much faster, and the views expressed can spark conversations that result in better overall quality. Speed can be crucial in the early stages of a project, and benefits everyone at all stages, but keeping the open pull request list short has its own benefits, as explained shortly.

Action item - Empower the whole team to review one another's work. Trust them when a PR is approved. Create a hub for developers to notify others that they have a PR needing review. A simple step here would be to create a "PR Review" channel in Slack or your team's messaging application.

2. Keep the list of pull requests short.

It is important to treat all pull requests with the same sense of urgency. Picture this scenario: A developer creates a pull request with a major refactor that lays the groundwork for a planned feature. At the same time, a few pull requests are created with critical priority that add features that clients are demanding, fix bugs, or refactor in other helpful ways. Over time, more of these so-called "critical" pull requests are created, reviewed, and merged. All the while more "non-critical" pull requests are piling up in the queue. Now we land at a time where there are over a dozen pull requests pending. The most obvious drawback is that we have a ton of great work done that we are not benefitting from building on top of. However, a secondary repercussion of this large queue is that we probably have a slew of pull requests that will generate major conflicts. I've joined projects with pull requests over three months old! What are the chances that those code changes even make sense anymore? How many hours will have to go into merging or rebasing those branches to make them compatible with the current development branch? This is simply valuable development time wasted. Referring back to the first point, getting everyone in on the review and merge process can help keep this list short and everyone happy.

Action item - As mentioned in point #1, creating a group messaging channel or email list to post new PRs can be beneficial. Encourage developers to seek reviews from one another. Ideally, set a maximum number of open PRs. Once the number has been reached, everyone is in "review mode." They should drop what they're doing, pick a PR, and review it. This will motivate developers to stay on top of it in the first place!

3. Smaller pull requests promote a better workflow.

This might seem counter to point #2, but with the aid of point #1, it will be a non-issue. Large pull requests mean more files and lines changed, and if everyone is making the best use of refactoring principles, this is even more true. Pull requests of large size require more time to complete, and this means that the developer spends even more time rebasing and fixing conflicts. Once merged, the number of other affected pull requests is larger, causing even more pull requests to be put on hold while developers fix merge conflicts to get them updated. When possible, break down a feature into sub-tasks that can be committed independently without breaking anything. If a feature requires a refactor, perform the refactor and put up a pull request with that change alone. If everyone is helping each other with code reviews, this should make its way through the pipeline in no time at all. While waiting, the original developer can simply branch off of the refactor branch and perform a rebase --onto once the refactor branch is merged. This allows everyone to rebase more frequently with less headache, ensuring that everyone is working on the latest code at all times. An added benefit is that when a developer is relying on another developer's refactor, it will arrive in their current working branch more quickly, allowing them to progress sooner.
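As a sketch of that flow (branch names are made up):

```bash
# Start the feature on top of the still-in-review refactor branch.
git checkout -b my-feature refactor-branch

# Once refactor-branch has been merged into develop, replay only the
# feature commits onto the updated develop.
git fetch origin
git rebase --onto origin/develop refactor-branch my-feature
```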

Action item - Review the current tasks assigned to developers and make sure they're divided into the most logical subtasks possible. If you have colocated developers, you can even use this as an opportunity to have them work on the same main task simultaneously, since they can check with one another to make sure they know what to expect when the other finishes their part.

4. Keep sprints short; rebase early and often.

Okay, I'm cheating here. Two points in one. However, I argue that they are one and the same. In a perfect world, we would create a develop branch, or a sprint2.1 branch, and at the end of the planned time period, we would merge that branch into master, and it would become the live code. Unfortunately, the world is not so perfect. Users find bugs and file reports, and we are forced to merge bug fixes straight into the master branch, or whatever you may call your production branch. This means that the code your developers are working on top of may not actually be the latest. By keeping sprints short, we minimize how much the root branch changes during a sprint, thus minimizing how much work we need to do when it's time to reconcile the sprint branch with the root branch. I once spent two years on a project with six-month sprints. The developer in charge of keeping sprint branches synced with master only rebased the sprint branch once or twice per sprint. This resulted in a two-and-a-half-hour ordeal for him, and generally a full day's worth of bug fixes for everyone. That doesn't even factor in the hours of work required to rebase the existing pull request branches on top of the new sprint branch afterward. In the end, pick at least one (if not both) of the key takeaways from this point: keep sprints short, and rebase the working sprint branch often.

Action item - Determine the shortest possible sprint schedule that is realistic for your project. When the deadline approaches, roll over remaining work into the next sprint, and adjust plans accordingly. Ideally, plan such that the current sprint branch can be deployed live at a moment's notice. Then, bug fixes can even go into the sprint branch. Feature flags may be necessary to make sure none of the unfinished sprint work gets seen by users.

5. Don't forget about the business.

Here's a point that sometimes gets lost, and it's related more to code reviews than to the purely technical aspect. In the end, if you're getting paid to write or manage code, you're working for a business. Any business has a client or user base to please. It's how the business continues to exist. When reviewing code, it is important to evaluate the code in the context of the time frame in which it is needed. If a pull request to fix a critical bug is not perfect, but it works, don't be afraid to hit the checkmark and approve it. This is even more crucial for those straight-to-production fixes. In fact, it's better to modify as little as possible when a code change will go straight to master. Avoid refactoring, as much as it hurts to do so.

Conclusion

Git itself can be a monster to tame, but by following a solid foundation of guidelines, and getting the entire team onboard, the development process can flow more smoothly. Ultimately, when your team moves as swift as a coursing river, with the force of a great typhoon, and with the strength of a raging fire, software goes from a plan to a profit.

Voice is obviously the primary method of interaction with Alexa, but sometimes a picture really is worth a thousand words. Amazon has recently announced a new way to enhance Alexa skills with interactive visuals on devices with a screen such as the Echo Show. It's called Alexa Presentation Language (APL) and it's a major step forward for the platform. Before APL, GUI options for Alexa skills were limited to 9 predefined display templates. This setup had the advantage of keeping the look and feel consistent across all of the skills, but ultimately it was too limiting. Many of our customers have been asking us to design and implement Alexa skills with custom graphic interfaces and thanks to APL we are finally able to deliver on such requests.

The documentation for APL is pretty good and has been getting better over time. Some excellent code samples can be found under alexa-labs on GitHub and in the APL authoring tool. Having worked with APL even before it became public, I figured I would contribute to this list of useful resources with a series of blog posts describing the process of implementing a simple Alexa skill that takes full advantage of APL. Let's build a skill that plays animal sounds, similar to those sturdy sound books with buttons that little kids love to play with. Feel free to follow along, and if you get stuck, the code revisions described below map pretty closely to the commit history for the animal-sounds-apl repo on my GitHub.

There are many good ways to write a backend for an Alexa skill, but the stack with the most Amazon blessing (and therefore best tools and support) right now seems to be JavaScript code using the Alexa Skill Kit (ASK) SDK v2 for Node.js deployed to AWS Lambda, so let's stick to that. To that point, running the ask new command from the ASK CLI toolkit is a great way to get up and running with a new Alexa project in no time. Given no extra options, it produces a deployable Hello World skill using the tech outlined above that we can use as a skeleton for our project. If you need to set up ASK CLI on your machine, follow this quick start guide.
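For reference, the bootstrap amounts to roughly the following (a sketch; exact prompts and options vary by CLI version):

```bash
# Install the ASK CLI, link it to your developer account, then generate
# and deploy the Hello World skeleton.
npm install -g ask-cli
ask init
ask new
ask deploy
```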

Only a few changes to the Hello World project produced by ask new are needed before we can start adding APL. In the interaction model, change the invocation name to animal sounds a. p. l.. Also, turn the HelloWorldIntent into AnimalSoundIntent with a slot called animal and add an animals type filled with some sample values. After these edits, the contents of the en-US.json file should look something like this:
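A trimmed sketch of what the interaction model could look like after those edits (sample utterances and slot values are made up):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "animal sounds a. p. l.",
      "intents": [
        { "name": "AMAZON.CancelIntent", "samples": [] },
        { "name": "AMAZON.HelpIntent", "samples": [] },
        { "name": "AMAZON.StopIntent", "samples": [] },
        {
          "name": "AnimalSoundIntent",
          "slots": [{ "name": "animal", "type": "animals" }],
          "samples": ["what does the {animal} say", "play the {animal}"]
        }
      ],
      "types": [
        {
          "name": "animals",
          "values": [
            { "name": { "value": "cow" } },
            { "name": { "value": "dog" } },
            { "name": { "value": "cat" } }
          ]
        }
      ]
    }
  }
}
```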

Next, let's make some adjustments to the skill manifest in the skill.json file. The publishing information doesn't really matter for skills in development, so the changes to fields like examplePhrases and description can be kept to a minimum for now. The most important thing to do here is to add ALEXA_PRESENTATION_APL as a type of interface used by the skill under apis. This communicates to the Alexa device that rendering APL will be involved and is required if we want to see our designs appear on screens. Here is what skill.json should look like after these changes:
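A sketch of the relevant part of the manifest (publishing fields trimmed, values illustrative):

```json
{
  "manifest": {
    "manifestVersion": "1.0",
    "publishingInformation": {
      "locales": {
        "en-US": {
          "name": "Animal Sounds APL",
          "summary": "Plays animal sounds with visuals",
          "description": "Sample skill demonstrating the Alexa Presentation Language",
          "examplePhrases": ["Alexa, open animal sounds a. p. l."]
        }
      }
    },
    "apis": {
      "custom": {
        "endpoint": {
          "sourceDir": "lambda/custom"
        },
        "interfaces": [
          { "type": "ALEXA_PRESENTATION_APL" }
        ]
      }
    }
  }
}
```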

Moving on to the code (contained in lambda/custom/index.js), there are a few speechText values and some parameters to the handlerInput.responseBuilder.withSimpleCard function calls that should be adjusted to something that makes more sense for our skill's intent handlers. More importantly, we need an AnimalSoundIntentHandler in place of the HelloWorldIntentHandler. For the purposes of our sample skill something simple like this would suffice:
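A minimal sketch of such a handler with the ASK SDK v2 (the animalData lookup table and its contents are assumptions):

```javascript
// Local lookup table: a sound string and a publicly accessible image per animal.
const animalData = {
  cow: { sound: 'moo', image: 'https://example.com/images/cow.png' },
  dog: { sound: 'woof', image: 'https://example.com/images/dog.png' },
};

const AnimalSoundIntentHandler = {
  canHandle(handlerInput) {
    const { request } = handlerInput.requestEnvelope;
    return request.type === 'IntentRequest' && request.intent.name === 'AnimalSoundIntent';
  },
  handle(handlerInput) {
    const animal = handlerInput.requestEnvelope.request.intent.slots.animal.value;
    const data = animalData[animal];
    const speechText = data ? `The ${animal} says ${data.sound}!` : "I don't know that animal.";

    return handlerInput.responseBuilder
      .speak(speechText)
      .getResponse();
  },
};
```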

For the sake of example, a local object with strings representing sounds and URLs to publicly accessible images for the animals from the model is good enough. Only the sound strings are used for now (to build the speech outputs), but we'll start using the images soon enough. Remember to add AnimalSoundIntentHandler to the parameters for the skillBuilder.addRequestHandlers function, but otherwise we are ready to start writing some APL to define what we want displayed whenever the AnimalSoundIntent gets invoked. APL GUIs are implemented in APL documents, which are JSON files made up of APL components that get instantiated using the following syntax pattern:
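The general shape of a component instantiation is roughly:

```json
{
  "type": "ComponentName",
  "someProperty": "someValue",
  "items": [
    { "type": "ChildComponent" }
  ]
}
```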

The best place to write APL is the Start from scratch section of the APL authoring tool. It lets you see the visual output of your work simulated in the browser for a number of different screen sizes and even push a preview to a real device, which are great frontend development features giving you a quick feedback loop. There is also a Data JSON tab that lets you populate your template with some test data and a toggle to switch between raw APL code and a more abstracted editor. The boilerplate code the authoring tool will set you up with is likely going to look something like this:
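Roughly like this (the exact boilerplate may differ between authoring tool versions):

```json
{
  "type": "APL",
  "version": "1.0",
  "theme": "dark",
  "import": [],
  "resources": [],
  "styles": {},
  "layouts": {},
  "mainTemplate": {
    "parameters": ["payload"],
    "items": []
  }
}
```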

Most of these fields are used for code optimizations, which I will get to in another post. The only part you need to worry about for now is mainTemplate. For simple layouts, you can nest all of your APL components right there. Here is what you could do for AnimalSoundIntent:
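A sketch of that layout (colors, sizes, and the data-source field names are illustrative):

```json
{
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Frame",
        "width": "100vw",
        "height": "100vh",
        "backgroundColor": "rgb(22, 147, 165)",
        "items": [
          {
            "type": "Container",
            "width": "100vw",
            "height": "100vh",
            "alignItems": "center",
            "justifyContent": "spaceAround",
            "items": [
              {
                "type": "Text",
                "text": "${payload.animal.text}",
                "fontSize": "48dp",
                "color": "white"
              },
              {
                "type": "Image",
                "source": "${payload.animal.image}",
                "width": "40vw",
                "height": "40vh",
                "scale": "best-fit"
              }
            ]
          }
        ]
      }
    ]
  }
}
```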

Let's break this down. "parameters": [ "payload" ] gives us a reference to the data object our skill's backend code will be sending along with the APL document. The other parameter of mainTemplate ("items") is used for nesting components inside other components. Frame makes up the first layer of our layout. It can be used to create and style rectangular and oval shapes, but all we are using it for this time is to set the background color, defined through its RGB value. The vw and vh units are used to make the Frame fill all of the available viewport space. The next layer of our layout is a Container component. Containers do not produce any visible output, but they are very useful for positioning and constraining components nested inside of them. "alignItems": "center" will seem familiar to anyone who has come across CSS Flexbox. It's a very straightforward way to center items along the cross axis, which is horizontal for Containers by default. "justifyContent": "spaceAround" is another Flexbox-inspired parameter that adds equal amounts of distance between and around all of the items along the main axis, which is vertical for Containers by default. The final layer of our layout consists of Text and Image components sourced from the payload object we defined earlier. The syntax for getting values out of this data source is very similar to the one used in JavaScript's template strings, with placeholders indicated by a dollar sign and curly braces. Text's "fontSize" and "color" parameters should be pretty self-explanatory. The same goes for Image's "width", "height" and "scale".

With the APL document described above and given the following Data JSON:
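A matching data object could be as simple as this (values are placeholders that line up with the binding paths above):

```json
{
  "animal": {
    "text": "The cow says moo!",
    "image": "https://example.com/images/cow.png"
  }
}
```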

your APL authoring tool should look more or less like this at this point:

The only step remaining to enable rendering of this layout for each AnimalSoundIntent request is to send the APL document along with the data inputs in a directive with each response. First, add the APL document code to the project directory structure. lambda/custom/aplDocuments/animalSound.json is a good place for it. Then, in AnimalSoundIntentHandler, replace the call to withSimpleCard() in handlerInput.responseBuilder with addDirective() like this:
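A sketch of that change (the token name and data shape mirror the earlier examples and are assumptions):

```javascript
// Inside AnimalSoundIntentHandler.handle(), after building speechText:
return handlerInput.responseBuilder
  .speak(speechText)
  .addDirective({
    type: 'Alexa.Presentation.APL.RenderDocument',
    token: 'animalSoundToken',
    document: require('./aplDocuments/animalSound.json'),
    datasources: {
      animal: {
        text: speechText,
        image: data.image,
      },
    },
  })
  .getResponse();
```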

With that (and once the final version of the skill gets deployed), you should be able to see this interaction in the test section of the Alexa developer console:

Next time, I'll describe how to add a launch screen with a list of selectable animal sounds using the Sequence and TouchWrapper components. In the meantime, feel free to open an issue on GitHub if you have any questions!

Since the introduction of ARKit with iOS 11, we've seen an increasing demand for applications incorporating some form of Augmented Reality (AR) experience. AR is a relatively new technology that enhances the user's real world with virtual audio/visual objects, and a mobile phone or tablet is a perfect platform for this tech. In this post, I'll dive into some of the technical challenges of AR as well as our preferred solution, focusing on integrating Unity into an iOS project.

The first technical requirement for a compelling AR experience is an understanding of where the user is in space. Apple's ARKit and Google's ARCore offer positional tracking and spatial mapping to construct a 3D representation of the user's environment, using just the camera and sensors built into the phone. These technologies are fairly similar in that they provide a context for how to render augmented content in a way that looks convincing; for example, a 3D rendered coffee mug at the correct position and perspective so that it appears to rest on a physical table.

Apple offers SceneKit as a rendering solution that closely integrates with ARKit, but I've found it very cumbersome to use, and of course, it's iOS only. Unity is a simpler and more powerful cross-platform alternative, and is one of the most prominent 3rd party rendering engines for mobile applications. One of the primary reasons for its popularity is the extensive developer community, and therefore accessibility of learning content. It's incredibly easy to get started building a convincing 3D scene, and then deploy it to both iOS and Android. Unfortunately, it's difficult to integrate that scene into an existing Xcode project due to the Unity build process, which generates an Xcode project that expects to control the entire app.

There are several advantages, however, to embedding Unity in an existing project. Primarily, almost all new projects these days use Swift, but the generated Unity project is written in Objective-C++. Additionally, 3D or AR experiences are often just a single feature in a much larger app, meaning it would be convenient to treat Unity as any other screen to be presented modally or in a navigation controller. There's no official support for including Unity in another app in this manner, but that doesn't mean it's impossible!

We've put together a template for a Swift project that embeds a Unity project as a single view controller. You can download the template, which includes instructions for starting or configuring your own projects here. This repository provides Xcode build configs and Unity pre- and post-build scripts that keep the two projects in sync automatically. Simply building the Unity project will build for iOS device and simulator, and link those build products in the Swift project. From there, the native Swift code can present or dismiss the Unity scene just like any other view controller.

The included sample project also shows how to integrate with Vuforia, a powerful, cross-platform AR platform with advanced features such as object recognition and some of the most accurate model placement available. It uses ARKit or ARCore under the hood, depending on the platform, and is also built into Unity, allowing developers to enable it with a single checkbox.

The use cases of Unity integration into a larger Swift project certainly aren't limited to AR. Unity’s cross-platform support (which is especially helpful for Android… but more on that in a later post!) and unique features such as the ability to seamlessly integrate UI into both 2D and 3D space (a challenging task in native iOS because UIKit can't render into SceneKit contexts) make it an extremely powerful tool. In addition, anything requiring 3D rendering, physics simulation, or even complex audio or video features is often easier to implement in Unity than in the corresponding native frameworks.

Serverless architectures are changing the landscape of computing and business by fundamentally de-risking value creation and delivery. Advanced serverless cloud computing is helping solve fundamental problems concerning utility, performance, and security. For over a decade, Amazon Web Services has been offering managed infrastructure solutions like EC2, S3, and now Lambda, with hundreds of other services to choose from. If you are operating a business, how do you decide what to invest in to get the edge in 2019? At this stage of the game, if you are considering moving to AWS or using it more effectively, you should focus on architecture and, more specifically, on going serverless where possible.

AWS Lambda, and cloud functions in general, have redefined utility computing at scale

For 2019, how do you power down that last server in production and go NoOps? More importantly, why do you need to? For the majority of business use cases we can, so let's do it! Let's cover five ways to crush it on AWS with serverless technologies.

The Case for Serverless

What does serverless even mean? I thought the cloud already solved the utility computing problem. Serverless means that you can run your code (your business) without provisioning and managing your own compute resources. If you have used AWS for a couple of years now, think about the services that ultimately manifest themselves as server instances running in EC2, AWS' flagship compute service that everything is built on top of. Servers that you can see running in your AWS EC2 console are constantly costing you money whether they are being used or not.

We know that we can use application autoscaling to cut down on cost in EC2, but we can't cut it down to zero. With services like Lambda, AWS will start, scale, maintain, and "stop" the required compute resources. The core underpinnings of services like Lambda are being baked into current and future AWS offerings as well, offering utility computing on demand at a reduced price. Ultimately, fewer data centers have to be built as computing becomes more efficient from both a provider and consumer perspective.

#1 Content Delivery Networks (CDNs)

So you already have a working product; how can you immediately start to benefit from AWS using demand-based, serverless computing? One easy way to get started and add scale is to take advantage of a content delivery network (CDN). AWS CloudFront can be used to effectively serve up both static and dynamic content globally, at the edge of the network and closest to your customers.

The most important component of any solution is the interface with the customer. Modern interfaces are content heavy, minimize the user interface, and are most often served over mobile networks, so it is important that the content reaches the customer quickly over complicated network topologies. We are familiar with static content delivery networks, but dynamic content can also be processed in CloudFront via Lambda@Edge. This allows requests and responses from origin servers to be customized based on customer details. All of these benefits come without provisioning a single server, and the pricing model is based on usage along with a generous free tier. Ultimately this means fast response times for users and fewer servers at the origin, regardless of implementation.
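As a rough illustration (not taken from an actual deployment), a Lambda@Edge function attached to a CloudFront viewer-request event can rewrite a request before it ever reaches the origin:

```javascript
// Sketch of a viewer-request handler; the cookie name and URI prefix are placeholders.
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const cookies = request.headers.cookie || [];

  // Example: route visitors carrying an experiment cookie to an alternate page.
  const inExperiment = cookies.some((header) => header.value.includes('experiment=b'));
  if (inExperiment) {
    request.uri = '/experiment-b' + request.uri;
  }

  return request;
};
```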

#2 Continuous Integration and Continuous Delivery (CI/CD)

For larger software development shops sometimes the challenge is simply getting the code out the door. You've implemented agile best practices and are pushing features out the door continuously. Unfortunately your Jenkins server isn't keeping up and it is becoming a bear to manage in its own right. The needs for that critical infrastructure only grow as you add more projects to the mix and you've had a couple of outages. Your ops team is already working on AWS migrations so you get the following proposal from an operations team member:

AWS Reference for Jenkins (don't do it!!!)

It is enough to make your head spin, and the needs of your build server only increase as you adopt container architectures. What do you do as a dev manager who wants to increase overall productivity for your team? Try another way! The good news on this front is that AWS went through many of the same challenges as they increased throughput for their own organization a few years ago. With that came AWS CodePipeline and the suite of tools that come with it, letting you manage CI/CD at enterprise scale, with no servers!!! The family of developer tools is as follows:

AWS CodeCommit - managed Git repos (don't use these to store your source repos! They are a lightweight way for you to publish your code to a pipeline)

AWS CodeBuild - you can define your entire build process in a single YAML file (a minimal example follows this list). If you need more than that, you are probably not deploying a microservice and need to break things apart

AWS CodeDeploy - now you can normalize your release process for any application on almost any platform in AWS and on prem. You'll know what was deployed and when, with full reporting every step of the way.
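To illustrate the CodeBuild item above, here is a minimal sketch of a buildspec file; the commands are placeholders for whatever your project actually runs:

```yaml
# buildspec.yml - the whole build process in one file
version: 0.2
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      - npm test
      - npm run build
artifacts:
  files:
    - 'build/**/*'
```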

Once again, all this power and not a single piece of infrastructure to manage. In simple setups, your team will need nothing more than an AWS account to get started. In more complicated environments and larger teams, you can use a combination of AWS Organizations and AWS IAM (Identity and Access Management) to provision access not only to AWS developer tools like CodePipeline, but to anything running in AWS.

#3 On-Premises Network Attached Storage (NAS)

On-prem? I thought we were talking about the cloud, AWS, and serverless? What if I told you that AWS S3 was also the solution to storage that is bursting at the seams inside of your on-prem colo or datacenter? A viable alternative to upgrading hardware is proactively shifting certain use cases to S3 for long-term, affordable, effectively infinite storage using an AWS File Gateway.

One approach to on-premises environments, where storage resources are reaching capacity, is to migrate colder data to the file gateway to extend the life span of existing storage systems and reduce the need for capital expenditures on additional hardware. When you add the file gateway to an existing storage environment, on-premises applications can take advantage of Amazon S3 storage durability, consumption-based pricing, and virtually infinite scale, while ensuring low-latency access to recently accessed data over NFS. The AWS File Gateway can be provisioned as hardware or as a VM in VMware. This is perfect for home directories and legacy applications that have archival data. The story gets even better when you take into consideration S3's new "Intelligent-Tiering" capability.

The S3 Intelligent-Tiering storage class optimizes costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. S3 monitors access patterns of the objects in Intelligent-Tiering and moves the ones that have not been accessed for 30 days to the infrequent access tier. If an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier. So you can significantly reduce storage costs at scale on-prem by eliminating storage servers, NAS, and SANs for many use cases. The only thing worse than servers needing to be managed is servers with hard drives that need to be managed!!!

#4 Database as a Service (DBaaS)

If you are still managing databases, it means you definitely are not in the cloud, or you need to take a hard look in the mirror. The three pillars of cloud computing are compute, network, and storage. Any database of significance is going to grow very large and embody all of the pillars at the same time. Most database solutions are going to make compromises around total storage size, query response time, or query complexity. This is in addition to transactional properties, commonly referred to as ACID (Atomicity, Consistency, Isolation, Durability). Traditional database administrators and engineers manage complicated configuration and core infrastructure by hand to operate databases at large and growing scales. The world has been made small by connected devices, which have exploded the amount of data and content we produce every second; 300 million photos are posted to Facebook per day. Each photo has a database entry... Facebook can afford to manage its own costly data centers and even invent its own databases, but most don't have that luxury. Fortunately, AWS has an ever-growing army of database services for you to choose from, including solutions that are taken for granted. It should be no surprise that some of the best solutions involve minimal to no server configuration!

To this day DynamoDB is still an 8th wonder of the world, and it now does more than ever to be your first and last chance at full scalability when it comes to your business. DynamoDB provides zero compromise when it comes to scale and speed. Recent DynamoDB advances now raise the bar in terms of durability as well by providing regional replication and daily snapshot backups at any scale! If you begin a new project on AWS, you should consider Lambda and DynamoDB together. They work like peanut butter and jelly and are hands down the easiest path for delivering modern, large-scale, microservice-backed capabilities to startups and enterprises alike. In addition, at the time of this writing, DynamoDB is now the only non-relational database that supports transactions across multiple partitions and tables. Amazon is chipping away at the reasons to use a standard relational database for greenfield projects.

The Aurora managed database service is the fastest growing service in AWS history. That is a testament to how difficult it is to manage traditional relational databases at any scale. Real businesses require sophisticated transaction processing and analytics that only SQL can provide. One of the best solutions in the overall Aurora suite hands down has to be Aurora PostgreSQL. You get the power of one of the most advanced databases in the world with the scale and performance that AWS provides. PostgreSQL is becoming the de facto standard interface for large-scale SQL workloads, both on the open source front and commercially. In recent years, Google has made its Spanner database commercially available, which can scale horizontally in a geo-replicated fashion. There are open source variants such as CockroachDB that require management. This is speculation, but don't be surprised if AWS does further work to integrate the storage backend of PostgreSQL with DynamoDB for unlimited scalability. I believe the first signs of that can be seen in Amazon Quantum Ledger Database (QLDB). QLDB is an amazing append-only system of record that AWS is now exposing to any business, at any scale! You still have to think about your specific use case, but you can hold off on expensive database and system administrators to keep the heart of your business up and running.

#5 Firecracker!

This brings us full circle in our discussion of serverless computing and presents an exciting wildcard! Firecracker is the open source microVM technology that powers management-free container services like AWS Lambda and Fargate! Firecracker can launch a microVM in as little as 125 ms and is built with multiple layers of security isolation in place. Lambda is over four years old and provides virtually infinite scalability and elasticity in modern compute architectures.

Lambda functions truly are the generalized universal event trigger. The fact that HTTPS is the dominant Internet protocol means Lambda dominates as an implementation for API frontends. With Lambda you can run compute with zero administration and true subsecond utility metering to manage costs.

Amazon Fargate uses the same microVM architecture as Lambda to deliver a reimagined, scalable, always-on backend that mirrors the network topologies we are familiar with in EC2! With full networking and OS support, you can run any Docker container or network of containers in a scalable, maintenance-free fashion. Security and access control work using VPCs, subnets, security groups, load balancers... all of the datacenter building blocks we have mastered in the traditional EC2 environment. Fargate puts comprehensive general computing on rails.

In Summary

We covered quite a few solutions offered by AWS under the mantles of serverless and fully managed. These solutions don't work in isolation, of course; they are meant to be composed into working systems that solve real-world problems. If you are interested in going serverless or just getting started with AWS, feel free to reach out to us. We are happy to sit down over coffee, or virtually, to chat about your needs.

Next Up

In our next DevOps post, we will get going on a working solution for a particular itch we have at Rocket Insights. We'll pick from the basket of technologies we have touched on to show what this looks like from start to finish. So stay tuned!

For the past few years we have been lucky enough to work alongside some of the best and most innovative companies to help them expand their business online. When working on projects, we aim for complete transparency and clarity for our partners. We offer incredibly talented developers who devote their time and effort to creating the app or website your business is envisioning.

Our clients aren’t the only ones who have noticed our work. Recently our team was recognized by Clutch, an agency based in Washington D.C. that provides ratings for B2B service providers. Clutch provides objective reviews on these companies that are based on the feedback of the firm's previous clients.

Clutch has announced that we have been included on their inaugural Clutch 1000 list! This is the very first year that Clutch has created this list, which comprises the Top 1000 B2B Global Service Providers on Clutch. These firms were selected based on the quality of their work and their ability to meet the goals of their clients.

In addition to being ranked so highly among the leading app development companies in our industry, our ratings have also landed us on Clutch’s sister-website, The Manifest, as being one of the top 100 app developers in the world! As a service provider, we understand that our reputation depends upon the quality of work we provide our clients.

Here are a few examples of testimonials from our former clients on Clutch:

● “Unlike other vendors, (Rocket) is receptive to our ideas and not intimidated by new technologies.” – Manager, Television Network

● “(Rocket) is good about communicating, and we’ve never had to question their availability or transparency.” – VP, Software Company

● "(Rocket) is very attentive, and seemed to be dedicated to producing something of high quality." – Executive Director, Video Streaming Network

Being ranked among the best firms in our industry as a young company is incredibly encouraging, and gives us great insight into how we compare to our competitors. As we continue to take on new projects and create new apps for our partners, who knows how far our company will go in the years to come!

In the off chance you haven't heard of Milk Bar, the sweetest child of the Momofuku restaurant group, it's a bakery that's had a cult following since even before it was featured on Chef's Table. Founded in NYC, it's since spread to locations all over the country thanks to Christina Tosi's genius take on desserts.

Milk Bar's desserts relive every childhood favorite of ours - funfetti birthday cake and cereal milk ice cream? What more do you need? If it's not already obvious, we really like (read: are obsessed with) Milk Bar. So much so that one of our team members in NYC is having their birthday cake as her wedding cake.

So... why are a bunch of product people talking about desserts? Our NYC design team had an unexpected week off between projects and, in a crack pie induced frenzy, decided to redesign the Milk Bar ordering experience... without asking them. We were careful to stay true to the existing functionality of the experience, but wanted to amplify specific parts of it.

The Problem

We found a few core issues in the current ordering flow that we wanted to tackle:

The mobile and web experience is a little boring and, dare we say, messy. We think the ordering experience should feel like you're walking into a Milk Bar store: accommodating, engaging, full of life, and delicious.

Milk Bar is best known for their epic cakes, oftentimes used for a special occasion. While they do allow customers to customize their cakes with messages, there's no way to see what it would look like before ordering.

They currently rely on third-party ordering software like Postmates and Goldbelly. While convenient, these services don't offer a true touchpoint to their brand. Even more importantly, they lose control over the entire ordering experience. We'd like to see orders driven through the Milk Bar store instead.

Here's the Postmates experience. It's fine and does the job, but completely loses the Milk Bar brand.

Here's the current Milk Bar web experience. While it's significantly more branded than the Postmates experience, it's pretty messy.

Our Proposal and #Goals

Design a kickass ordering and product customization web app for Milk Bar, utilizing their existing customization options. The flow must easily integrate with the existing website.

Getting to Work

Since we only had a couple of days to work on this, we singled out one flow to work on: the ordering and customization experience. After identifying what we could improve about Milk Bar's experience, we ate a couple more bites (okay fine, pieces) of crack pie and got on our merry way.

Mapping out the Experience

Before we could get to the fun stuff, we needed to nail down the flow. We felt it was important to keep the ordering and customization experience to as few screens as possible to keep it simple. Easy as pie (pun intended).

Here's the user flow we ended up with:

Designing for Variants

Here's the tricky part. Milk Bar has a lot of items, some of which are super customizable and some of which are not. We decided to tackle cakes first as the category has the most variants: there are nine flavors, some of which can be gluten free, and two sizes per cake flavor.

To stay in line with our theme of keeping things simple, we listed all of the cakes with other categories in a scroll bar at the top. Once a user clicks on an item, they can then view more details, choose variants (size, gluten free, flavor) and customize the cake.

From there, the user can either view their customized cake in a nifty little 3D rendering or go straight to checkout.

Bringing the Brand In

After we finished designing the app, we had a couple of hours left in the day, so we had a bit of fun bringing the app to life. We played around with a few different page transitions.

First we played with the Milk Bar logo by adding a liquid fill effect. Next, we took inspiration from the birthday cake and designed a hand-made illustration with teeny-tiny rainbow sprinkles. We also put our crack pie to use and documented our delicious post-lunch gluttony for the order success message.

Finally, we played around in Blender to create three different Milk Bar flavors for our 3D cakes: birthday cake, peppermint bark and chocolate malt.

Our Stack

For the folks interested, here are the tools we used. Since we didn't have a ton of time, we went pretty low-fidelity and scrappy.

When we (as humans) were doing our formative learning, we learned from the very baseline up. “Here are letters, these are numbers.” It was only later that we started to combine them to form more complex ideas, like language and math. And that’s all well and good for toddlers, but that’s not a great way to learn complex stuff fast.

From the time we start school, we’re all forced to start with chapter one and never skip ahead. You have to know addition before you can multiply, multiplication before exponents, etc.

I’ve found that’s a terrible way to learn almost any hobby or subject (except math, I guess). We’ve been taught to think that learning in this way will net us a solid understanding of the foundations.

We shouldn’t be learning that baseline first. If I want to learn how to change the spark plugs in my car, I don’t need an understanding of internal combustion first. If I want to try my hand at sewing a pair of cotton pajamas, I don’t first need to learn about the many types of fabric. And yet that’s how so many courses, books, and instructional videos start the process. It’s as if they’re teaching us an entire career instead of showing us how to sample one. Put another way, if I want to learn how to build a house I need to start with something smaller. But I don’t need to learn about all the different types of wood and screws to get moving.

For the sake of simple analogies, let’s assume software and house development are similar in complexity. You can start with the simplest shape (the sandbox), and work up to a medium sized project (a shed). One day you’ll have earned the confidence to build a house-sized project and beyond.

This roadmap view makes it seem like you have to dive deeper into each subject to properly learn it. Even worse, it might leave you with an impression that you need to know most of this to be able to function as a front-end engineer. In reality, you should learn what you need to get by. Let’s use that same example again, and let’s say you wanted to learn React. Your journey would look more like this:

Learning React from the middle, out

The downside to this method is that the learning curve is more complex. Because you’ll be learning as you go, each time you dive into some new piece, your progress will slow down. That can frustrate some. But learning in this way allows you to focus on only learning what you need right now. It also (typically) means that your sandbox/shed is going to look a little funky in some areas. It’s the nature of “the first project” because at that point, you don’t know what you don’t know. When you learn by doing, you get a real-world experience that teaching can’t match. It’s those insights that create better, future projects: more stability, lower cost, better use of time, etc. This is also a good way to become very quickly familiar with the tools and materials you’ll need to use to get it done.

So dive into the middle of the next project you need to learn. Don’t build something just to learn. Build something you actually need/want and don’t expect perfection (this is key). By the time you finish your first project, you’ll have insights that take much longer learning the traditional way. You won’t know it all, but you’ll know “enough to be dangerous”, as they say.

AndroidX was announced in May of this year and has had regular releases and updates since then. We're starting to see libraries migrate to AndroidX which means we'd also need to migrate to keep using the latest versions of those. As a result, we decided it was time to test out the waters.

With AndroidX's Jetifier, we also keep compatibility with any libraries that haven't made the switch. This writeup is a breakdown of how it worked out for one project.

First, using Android Studio's "Refactor > Migrate to AndroidX" menu option was quick. It handled updating gradle support library dependencies and references to those dependencies throughout the app.

Then came the first hangup. There were many places where, instead of just changing the import statement for a class (Fragment, for example), the refactor changed the import and also replaced references to the class with its fully qualified name. Our fragments ended up looking like this:
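Roughly like this (the class name is made up): the import was updated, but the superclass reference was also expanded to its fully qualified name.

```kotlin
import androidx.fragment.app.Fragment

// The migration rewrote the import *and* spelled out the package in the class body.
class ProfileFragment : androidx.fragment.app.Fragment() {
    // ...
}
```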

While it was simple enough to replace this throughout the app, it wasn't the only class that it happened to. An even easier way to clean this up would be to check out the differences in git and only include the import changes.

The affected classes included:

Fragment

RecyclerView

ViewPager

Snackbar

CoordinatorLayout

DialogFragment

SwipeRefreshLayout

Additionally, classes related to these were also directly referenced. This included classes like FragmentManager and FragmentTransaction for Fragment, Adapter and ViewHolder for RecyclerView, and OnPageChangeListener for ViewPager, etc. Not the end of the world, but something to be aware of before checking in the changes.

I was unable to determine why that happened, and it might've been a corner case for us. Or someone else could run into the same issue with more or other classes. The solution took just a little bit of cleanup, and it wasn't too troublesome.

We were using Koin in this app, so it needed to be updated to AndroidX-compatible versions. Again, that was a quick change. We were able to take advantage of the libraries mentioned in the first paragraph that had new AndroidX versions as well.

The final change to ensure our tests still worked was adding a reference to androidx.test:rules:1.1.0. Our ActivityTestRule using android.support.test.rule.ActivityTestRule worked before and was updated to androidx.test.rule.ActivityTestRule in the import statement. However, it wasn't included in the migration of the libraries.
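In Gradle terms, that amounts to a single extra line (the configuration name here assumes an instrumented-test setup):

```groovy
// Needed so ActivityTestRule still resolves after the migration.
androidTestImplementation 'androidx.test:rules:1.1.0'
```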

The final issue didn't take long to find and fix, but someone else might run into it later.

Overall, the experience wasn't as painful as someone might expect. Major overhauls like this, with lots of interdependencies, can be scary, but that wasn't the reality in this case. We're going to do a full round of testing before we decide to push it to production, but the early results have been positive with no visible changes to the user.

Dependency injection is a way to increase unit test coverage in many applications, including Android apps. By injecting mocked versions of classes or interfaces that another class uses, it's much easier to test each potential code path.

How does this work in practice when developing for Android? In the past this usually meant implementing Dagger. While widely used and documented, it has a bit of a learning curve and can be a stumbling block at times.

For anyone who has used Architecture Components for a while, there are situations where a Fragment's ViewModel should be tied to the Fragment's lifecycle, while at other times it should use a ViewModel associated with the Activity's lifecycle and shared. By just changing private val someViewModel: SomeViewModel by viewModel() to private val someViewModel: SomeViewModel by sharedViewModel(), the Activity lifecycle is used instead.
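As a rough sketch of how this looks in practice (class and module names are illustrative, imports omitted):

```kotlin
// Declare the ViewModel in a Koin module...
val appModule = module {
    viewModel { SomeViewModel(get()) }
}

// ...then inject it, scoped to the Fragment's lifecycle.
class SomeFragment : Fragment() {
    private val someViewModel: SomeViewModel by viewModel()
    // Swap in `by sharedViewModel()` to scope it to the Activity instead.
}
```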

As a result, Fragments and Activities no longer depend on using ViewModelProviders. Any lateinit or nullable ViewModels are also handled with Koin's lazy loading.

These are just a few simple examples on how Koin can help with Android development, particularly making Architecture Component ViewModels more concise. Setup is quick and easy, and adding it to an existing application can be done relatively painlessly. In addition, it's simple to read and understand, meaning developers new to Android or the project can get up to speed quickly.

I'm simply a happy RX user with some (~6 years) experience with it. I've seen people struggling with RX. They say it has a steep learning curve. In that light, I was super excited when I first heard about Google's Android Architecture Components (AAC) and LiveData. A simplified version of RX with automatic lifecycle management! I immediately tried to use it in a pet project and it seemed simple indeed. The next real project I started, I chose LiveData without any hesitation: it is part of AAC, so future maintainers of the code should already be familiar with it. Fast forward 3-4 months and my excitement has been somewhat diminished. The familiarity point still sticks, but the simplicity part ... not so much. Here are some things that might be useful for someone else who has experience with RX and starts using LiveData for the first time.

LiveData has no error channel

LiveData is designed for the happy path: a stream of successful results. But things tend to fail, and especially so on mobile devices (connectivity issues, limited power and hardware resources, etc.). So how do you handle failures? One simple solution would be to split it into two separate LiveData streams:

results: LiveData<Foo>
errorMessages: LiveData<String>

Another option would be to introduce a helper class Result and wrap the results in it:
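(The original snippet isn't preserved in this export; a minimal version of such a wrapper might look like this.)

// A tiny Result wrapper so a single LiveData stream can carry both outcomes.
sealed class Result<out T> {
    data class Success<out T>(val data: T) : Result<T>()
    data class Error(val message: String) : Result<Nothing>()
}

// The two streams above then collapse into one:
// results: LiveData<Result<Foo>>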

LiveData is sticky

So, naturally enough, people coming from the RX world think of it as a simplified Observable. Actually, it is closer to BehaviorSubject: it holds the last value and delivers it to new observers first. This makes error handling very cumbersome. Errors should be one-time events; it makes no sense to cache them. Yes, it's nice to re-populate your RecyclerView with cached data on screen orientation changes, but showing the last REST API call error in the same place? Should I show the error, or have I shown it already? And since there's no API contract for error handling, it's unclear whether an error is fatal: will there be more successful results after it?
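A small illustration of the stickiness (names are made up):

import androidx.lifecycle.LifecycleOwner
import androidx.lifecycle.MutableLiveData

val errorMessages = MutableLiveData<String>()

// Somewhere an error is published...
fun onRequestFailed() {
    errorMessages.value = "Request failed"
}

// ...and an observer registered later (say, after a rotation) still receives it,
// because LiveData replays its last value to every new observer.
fun observeErrors(owner: LifecycleOwner) {
    errorMessages.observe(owner) { message ->
        println("Show error: $message")   // stand-in for a snackbar or dialog
    }
}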

LiveData has no Future<>-like APIs

RX provides a nice API for the special cases of "streams of events" where there are either no results (Completable), at most one result (Maybe), or exactly one result (Single). I think this makes the API very clear to understand. With LiveData, the caller would somehow have to know how many results to expect and stop observing at the right time, because the "stream" never ends.

LiveData has (almost) no operators

LiveData has only two operators (called "transformations"): map() and switchMap() (flatMap() in RX). I keep missing RX operators like zip(), combineLatest(), distinctUntilChanged(), etc. Sure, I have written my own versions of these for LiveData, but I'd always prefer the quality of the RX operators over my home-grown ones.
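For instance, a home-grown combineLatest built on MediatorLiveData might look something like the sketch below -- exactly the kind of code where RX has already handled the corner cases (threading, completion, unsubscription) that this version quietly ignores:

import androidx.lifecycle.LiveData
import androidx.lifecycle.MediatorLiveData

fun <A, B, R> combineLatest(a: LiveData<A>, b: LiveData<B>, combine: (A, B) -> R): LiveData<R> {
    val result = MediatorLiveData<R>()
    var lastA: A? = null
    var lastB: B? = null

    // Re-emit whenever either source emits, once both have produced at least one value.
    fun emitIfReady() {
        val currentA = lastA
        val currentB = lastB
        if (currentA != null && currentB != null) {
            result.value = combine(currentA, currentB)
        }
    }

    result.addSource(a) { lastA = it; emitIfReady() }
    result.addSource(b) { lastB = it; emitIfReady() }
    return result
}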

LiveData has no operator chaining

I've heard complaints about RX being hard to read. But compare these two code samples:
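(The original samples aren't preserved in this export; the contrast is roughly the following, with an illustrative Api/Repository pair.)

import androidx.lifecycle.LiveData
import androidx.lifecycle.Transformations
import io.reactivex.Observable

data class User(val id: String, val name: String)

interface Api {
    fun loadUser(id: String): Observable<User>
    fun loadGreeting(name: String): Observable<String>
}

interface Repository {
    fun loadUser(id: String): LiveData<User>
    fun loadGreeting(name: String): LiveData<String>
}

// RX: each step chains onto the previous one and reads top to bottom.
fun greeting(api: Api, userIds: Observable<String>): Observable<String> =
    userIds
        .switchMap { id -> api.loadUser(id) }
        .map { user -> user.name }
        .switchMap { name -> api.loadGreeting(name) }

// LiveData: with only map() and switchMap(), the same pipeline nests inside-out.
fun greeting(repository: Repository, userIds: LiveData<String>): LiveData<String> =
    Transformations.switchMap(
        Transformations.map(
            Transformations.switchMap(userIds) { id -> repository.loadUser(id) }
        ) { user -> user.name }
    ) { name -> repository.loadGreeting(name) }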

CONCLUSIONS

LiveData tries to be a simplified version of RX but fails, because you need to handle the hard parts yourself. Yes, it has a small API. Yes, writing a zip() operator is not very hard. But it sounds like the reasoning of a novice developer - "I don't understand the hard parts of this code, so I'll rewrite it, it'll be so simple!". Error handling is complicated. Corner cases are complicated. Code that looks complicated is (hopefully) complicated because it needs to handle these conditions (https://twitter.com/havocp/status/1032632650165616645). It's much simpler to ignore the RX operators you don't know about than to write the ones you do need yourself.

Note that I've completely ignored automatic lifecycle handling. It's great, no argument against that. I've just never understood the complaints (and projects like RxLifeCycle or its "successor" AutoDispose). People say it requires manual work that is easy to forget. In my opinion the manual work is trivial, makes the intention very clear, and the editor already reminds you when you don't keep track of your Disposables. I feel it's the perfect case of Simple Made Easy (Side note: every programmer should be required to watch that talk at least once in their life).

But familiarity is also important. If your team has experience with LiveData and no experience with RX, the choice is already made for you.

https://blog.rocketinsights.com/whenserial/ (Mon, 23 Jul 2018 13:41:51 GMT)

Typically, when we have multiple unrelated asynchronous tasks, we want to execute them concurrently and possibly combine the results when all tasks are completed. Occasionally, though, concurrent execution is not possible, and we need to execute each task only after the previous is completed. One example of this is calling an API that accepts only one connection (we ran into this when trying to upload multiple attachments via the Zendesk SDK). The API that we want then is exactly the same as with concurrent execution:

or maybe even when() with an additional argument to tell whether the execution needs to be serial or concurrent. Unfortunately, this doesn't work, because each promise starts executing as soon as it's created; no matter what we did in the body, the promises would still execute in parallel. So instead of an array of promises, we need to start with an array of values that we can transform into promises (for example, an array of images, where each image is transformed into an upload promise):
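The shape we're after, then, is something like this (whenSerial is our own name, not an existing PromiseKit API):

func whenSerial<T, U>(_ values: [T], transform: @escaping (T) -> Promise<U>) -> Promise<[U]>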

The implementation would iteratively transform each value into a promise, execute it, and append the result to the array of results. Since each task is asynchronous, we can't use a regular loop for this. One way around that is to use recursion to start the next task when the previous one completes. It looks very similar to tail recursion, except that the "tail call" happens when the promise completes. Here is the full implementation:
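(The code itself didn't survive this export; the sketch below, assuming PromiseKit 6 and keeping whenSerial as an illustrative name, reconstructs the approach just described.)

import PromiseKit

/// Executes `transform` for each value strictly one after another and
/// fulfills with all results once the last promise resolves.
func whenSerial<T, U>(_ values: [T], transform: @escaping (T) -> Promise<U>) -> Promise<[U]> {
    // Recursive helper: the "tail call" happens when the current promise resolves.
    func step(_ remaining: ArraySlice<T>, _ results: [U]) -> Promise<[U]> {
        guard let next = remaining.first else {
            return .value(results)   // nothing left to do; fulfill with everything collected
        }
        return transform(next).then { result in
            step(remaining.dropFirst(), results + [result])
        }
    }
    return step(values[...], [])
}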

https://blog.rocketinsights.com/an-adventure-in-react-native/ (Mon, 04 Jun 2018 14:10:17 GMT)

At Rocket Insights we have no shortage of developer talent across iOS, Android, and Web platforms. It was only recently, however, that we were presented with the opportunity to explore a fusion of these traditionally separate teams under the umbrella of our first production-ready React Native app. The appeal of React Native is obvious and, perhaps, a bit tired. Share source code between platforms while expanding your resource pool and drastically decreasing your time to delivery? Yes, please. The theoretical potential of this Holy Grail has been torturing developers for years and, continuing with the same analogy, many have died in the pursuit of its fortification. It should be no surprise, then, that we engaged with eyes wide open and a limited definition of success despite advocacy from some (very public) heavy hitters.

Dipping A Toe In The Water

One of the more ubiquitous adages in engineering is to “implement now…perfect later”, and we found there is no better path of adherence to this wisdom than using Expo. Expo provides a highly accessible all-in-one solution for building and deploying React Native apps while requiring essentially zero knowledge of the actual native platforms. We knew up front that this would not be our final destination as many of our clients require deep integration with proprietary SDKs. However, the appeal of building the shell of our React Native app in (literally) minutes while passing rough deployments between devices via QR Code was too enticing to deny. Ultimately we ended up building well over 90% of our presentational components within Expo before ejecting and proceeding with a vanilla React Native architecture.

Navigating Navigation

One of the benefits of starting with Expo is that it provides a handful of modest opinions in areas that otherwise suffer from fragmentation within the React Native community at large. There is no better example of this than navigation. A few blogs circulating from late 2017 captured the pain points of trying to understand a) why there were so many navigation options; and b) which option best suits our immediate needs. I have casually coined this as “The React Problem” in my personal developer circles, meaning that there is a tendency to conflate a modular approach to the ecosystem with a rigid agnosticism regarding what peripheral libraries belong in the stack. Cue the fragmentation and noob misery.

Expo makes no bones about advocating for React Navigation which we found to be fairly easy to use and capable of accomplishing the task at hand. That being said, because the implementation is in Javascript (read: not truly native) it seems plausible that we would eventually migrate to a native alternative or perhaps even consider moving this task outside the scope of React Native entirely.

Native Developers and Custom SDKs

As much fun as it was to translate the design into presentational components, the true test of the platform was working with our native development team to integrate our client's custom SDK and make the app actually do something. The ease of this process was arguably the biggest surprise we encountered. Within a few short days we were working rapidly in tandem, implementing needed adjustments in our respective domains, not unlike the client-server relationship familiar to web developers. Suffice it to say that we found the provided tools for native integration (Async Storage, Native Modules) were not only adequate but also surprisingly pleasant to use.

Conclusion

Our experience with React Native was a resounding success, though this statement does come with an important caveat. As mentioned above, we have a deep talent pool at Rocket Insights in both web and native development. In hindsight, it was very unlikely for us to encounter a challenge that required expertise beyond our internal team. Because of this, I think it would be accurate to say that we are very much the target audience for the React Native platform. There are excellent tools (such as Expo) that can bridge the native gap for teams comprised of predominantly web talent. But any serious commercial app will likely need ready and dedicated access to both talent pools in order to succeed.

https://blog.rocketinsights.com/untitled-3/ (Mon, 30 Apr 2018 13:05:00 GMT)

Burndown charts are a popular aspect of the Sprint methodology, and why wouldn't they be: it's a chart. Business people love charts! Ironically, the goal of this simple chart is to get all the stories done and reach 0 in time, and that flies in the face of everything it means to be agile and productive.

A Quick Recap

This is a burndown chart. The gray line is the “ideal” trending line for stories to be “burnt down” (i.e. make their way to complete/done). The red line is reality. It’s well-known that the ideal line is never really achieved. In this example, the team started with 40 points of work and ended with about 8 points remaining.

The Argument

The basic problem with a burndown chart is that its goal is to get to 0, which implies that everyone working on stories will finish their work at some point. But it doesn’t account for the next work item that needs to get done. Typically this means pulling in a story from the backlog. While that sounds simple, not all teams work that way since that kind of thing can make for one ugly burndown chart.

When stories get pulled into a sprint from the backlog, it causes spikes in the trend line. It also risks that this new, pulled-in story won’t be completed by the end of the sprint and thus the trend line will never reach zero. In some businesses, never reaching zero in a burndown chart is a bad sign. Even the chart itself implies this is a bad thing. This creates a strange atmosphere of “the perfect sprint” where everyone completes their work in a timely manner and all work is done just in time for the sprint to end. But the reality is that people will finish their work at different intervals, and just because a story is still open, doesn’t mean anyone can just work on it. Theoretically, if we lived strictly by the chart’s rules, we’d all stop working when we had nothing left to pick up, just waiting patiently for the sprint to end. No extra stories pulled in that can’t be finished in time: none of that funny business here!

In this common scenario, the person writing the stories will most likely not be the same person working on the story. Sometimes certain stories are best suited for certain engineers (say, Dade, Andrea, and Seth), which means the average team velocity is kind of hard to gauge. Dade's velocity would be above average, Andrea's velocity is probably close to average, and Seth's average is close to 0. Dade's going to have a busy sprint. There's also a good chance that stories are light on details, since a lot of the pieces in play are in the head of the person that wrote the story (dammit Dade!). Working on stories without pulling them into a sprint will also negatively affect this average velocity calculation. You should care a lot more about your velocity being accurate than about an impossible-to-achieve line in a burndown chart.

Sprint Purgatory

Right around the time the sprint ends, when many of the last stories are making their way to completion, there's a gap of working time for many engineers on the team. Once a story leaves the engineer's workflow and goes to QA/staging/prod, their involvement with the story typically drops dramatically. If this happens at the end of the sprint, and the team has been encouraged not to pull in a story unless it can be completed in time, it's uhh…. engineer party time?

There should always be allowance to pull stories into the sprint, because work being done should be visible and tracked. If the team is focusing on only pulling in stories that can be completed in the sprint, that means the backlog’s order is irrelevant. You’re then picking stories based on the convenience of their size, and then considering their priority. This also means that you’ll have an unexpected influx of work landing at the beginning of the next sprint that no one knew about.

Let's Wrap This Up

Process is critical in just about every organization, but blind acceptance of its rules can be dangerous. Kanban is flexible by nature and focuses on a constant workflow, so it might be a better process fit. I will admit that the root of many sprint-process issues is how detailed the stories are. They need to be written with the mindset of "I'm giving this to a new team member", but I digress: that's a different blog post entirely.


Intro

The essence of functional programming is to express computation in small, self-contained units (functions) that we can combine to get the result. If each unit that we start with doesn't affect other units, we can combine them in all kinds of ways and still easily reason about the outcome. For example, if we have

func f(x: T) -> V {...}
func g(x: V) -> U {...}

We can do g(f(x)) to compute a U from a value of type T. With methods on a class, this inside-out notation can be expressed more naturally as x.f().g(), which reads as a chain.

In addition to chaining, we can also combine functions into more complex building blocks. For example,

func h(x: T) -> U { return g(f(x)) }

And then use h just like f and g.

PromiseKit and composition

Usually, chaining is the first thing that comes to mind when we think about Promises. We take a few simple asynchronous tasks and chain them together to get the result that we need. But promises and the way they are combined are also examples of these concepts from functional programming, and it should be possible to combine them into compound promises that act exactly like the simpler promises they are made of, assuming that those simpler promises don't have side effects. Having these compound promises can be just as useful as having compound functions -- they allow us to express a more complex computation that we can reuse.

A function doesn't have side effects if we can call it multiple times without any change to the context in which it's called. Those functions are the easiest to combine. A weaker requirement is that the state changes after calling it once and then doesn't change any further. For example, if a function authenticates a user and changes the app state to "logged in", calling it a second time is a no-op. Or if a function creates and presents a view controller from a specific view controller, calling it more than once doesn't try to present more and more copies. (Here we assume that presentation takes time due to animation, so it has a result, the returned view controller, and is an asynchronous task that changes the state.)

A specific example

So let's say we have a task that creates a view controller, presents it, collects some input from the user, and returns that input. A good way to describe this would be a function that returns Promise<Result>. The implementation would follow all those steps and fulfill the promise when the user enters all the information or cancels the form. In all cases, we want to dismiss the presented view controller to leave the app in the same state it was in before running the promise. This would make it a good Promise citizen, because we can then call it without any changes to the app state and can make it part of a compound promise (for example, show the form in different contexts). Naturally, we would want to put the code that dismisses the view controller into the ensure block so that it executes no matter what branch the code takes. And that's where our code would break the "no side effects" rule, because by the time the promise is done executing, the presented view controller will still be running the dismiss animation! So if we try to do something UI-related in the next promise of the containing chain, we may either miss the animation (which will be jarring to the user) or break the app state.
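To make that concrete, here is a sketch of the shape described above; FormViewController, FormInput, and the pending-promise wiring are illustrative, not a real API:

import PromiseKit
import UIKit

struct FormInput {
    let text: String
}

final class FormViewController: UIViewController {
    private let pendingInput = Promise<FormInput>.pending()
    // Fulfilled by the form when the user submits or cancels (wiring omitted).
    var inputPromise: Promise<FormInput> { return pendingInput.promise }
}

// The promise-returning task described above: present, collect input, clean up.
func collectInput(from presenter: UIViewController) -> Promise<FormInput> {
    let form = FormViewController()
    presenter.present(form, animated: true)
    return form.inputPromise
        .ensure {
            // ensure's closure is synchronous, so the containing chain moves on
            // while this dismiss animation is still in flight.
            form.dismiss(animated: true)
        }
}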

The problem

Even though the ensure operator returns a promise, it assumes that its closure is synchronous: it calls the closure and then immediately returns the promise it's being applied to. The assumption is that ensure runs at the end of the chain, so it doesn't need to be asynchronous. This works if ensure is only used that way, but breaks if we want to turn the chain into a reusable (compound) promise.

A solution

Luckily, PromiseKit provides enough primitives to make an alternate version of ensure which waits to end the promise until its closure argument (a promise) is done. Here is one way to implement it:

extension Promise {
    /// Boolean state that is used for sharing data between
    /// promises. It needs to be a class because a struct would
    /// just be copied.
    private class BoolBox {
        /// The stored value
        var value: Bool
        init(_ value: Bool) {
            self.value = value
        }
    }

    /**
     The provided closure executes when this promise resolves.
     This variant of `ensure` executes just as any other `then`
     clause, which allows the provided closure to contain
     asynchronous code. Unlike `then`, the closure executes on
     either fulfill or reject (same as `ensure`). If the promise
     returned by the closure is rejected, this rejects the
     containing chain as well.
     - Parameter on: The queue to which the provided closure dispatches.
     - Parameter execute: The closure that executes when this promise resolves.
     - Returns: A new promise that resolves when all promises
       returned from the provided closure resolve.
     */
    public func ensureAsync(on q: DispatchQueue? = conf.Q.return, execute body: @escaping () -> Promise<Void>) -> Promise {
        // The state for keeping track of whether the body executed
        // in `then` or should be executed in `recover`, to avoid
        // executing it in both places.
        let executedBody = BoolBox(false)
        return self.then(on: q) { value -> Promise in
            // Record that the body is executed in `then`.
            executedBody.value = true
            return body().then(on: q) { () -> Promise in
                // If body is rejected, this rejects the containing
                // chain as well; otherwise pass through the resolved value.
                return .value(value)
            }
        }
        // We have to use `recover` instead of `catch` because `catch`
        // cascades -- no `then` block is executed unless `recover` is called --
        // but we pass through the rejection after the body resolves.
        .recover(on: q) { (error) -> Promise in
            // Execute body only if it wasn't executed before.
            // If there was an error while executing body, this is
            // the error we get here, but executedBody is already set
            // to true (since that happens before actually executing it),
            // so the body is still not executed twice.
            if !executedBody.value {
                // Execute the body, and then pass through the rejected error.
                return body().then(on: q) { Promise(error: error) }
            }
            else {
                // Just pass through the rejected error.
                return Promise(error: error)
            }
        }
    }
}
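With ensureAsync in place, the earlier sketch can keep the dismissal inside the chain and still resolve only after the animation has finished (again illustrative, reusing the hypothetical FormViewController from above):

func collectInput(from presenter: UIViewController) -> Promise<FormInput> {
    let form = FormViewController()
    presenter.present(form, animated: true)
    return form.inputPromise
        .ensureAsync {
            // Wrap the dismiss animation in a promise so the chain resolves
            // only once the animation has actually completed.
            Promise { seal in
                form.dismiss(animated: true) { seal.fulfill(()) }
            }
        }
}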