Posts by Hany Elemary

The video introduces isomorphic applications and their unique benefits and challenges, then jumps straight into the architecture of React/Redux applications from a testability perspective. Hany follows test-driven development (TDD) principles while building a real-world example, a search application, to demonstrate effective end-to-end testing. Popular testing tools for React/Redux applications, including Enzyme, SinonJS, Expect, Nock, and Mountebank, are highlighted.

The need for building highly performant and maintainable user interfaces is now greater than ever. React, a library designed and developed by Facebook, can solve many of the problems users face today, including poor browser performance when handling dynamic interactions with large amounts of data. React also solves a lot of challenges for front-end developers. Due to its popularity, other supporting frameworks, such as Redux, came into the picture to make applications more predictable, testable, and easier to debug. However, they come with a different approach to front-end development and testing.

Testability is key to maintaining the quality of any application while building confidence in the code as developers refactor their work. Writing proper tests can be a lengthy process, especially for newer frameworks such as React/Redux. Test-Driven Development for React/Redux in an Isomorphic Application LiveLessons quickly gets you up-to-speed on when to build isomorphic applications, how to effectively test your React and Redux code, and how to confidently refactor code while ensuring that business functionality is maintained.

In Building Microservices with SenecaJS: Part 1, we finished writing a simple, RESTful microservice for products with hard-coded data. Now it’s time to hook into a real data store and showcase Seneca’s data abstraction layer. In this post, we will be looking at the JSON file store. In the next post, we will swap this out for a MongoDB store.

There are many advantages to using a file store or in-memory store. One of them is ease of testability: since you don't have to make an actual connection to the data store, you can run your tests on your local machine (in isolation) without worrying about the data getting out of sync, as it might with a shared database.

There are a couple of changes needed to use Seneca's data entity plugin. First, we need to include dependencies on "seneca-entity" and the type of store we're interested in using (jsonfile-store). Here is a snippet:
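A sketch of that setup, assuming the standard seneca-entity and seneca-jsonfile-store npm packages (the data folder path is an assumption):

```javascript
// Wire up Seneca's entity layer with the JSON file store.
var seneca = require('seneca')();

seneca.use('entity'); // resolves to the seneca-entity plugin (save$, load$, ...)
seneca.use('jsonfile-store', { folder: 'data' }); // entities persisted as JSON files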

Now we’re ready to start using the file store. But we don’t have data in our file store yet, so it’s time to introduce another end-point to our API through which product information can be added (POSTed).
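The new action might look like the following sketch (the entity calls make$ and save$ are Seneca's standard data API; the message field for the request body is an assumption, as it varies across seneca-web versions):

```javascript
var products = seneca.make('products'); // the 'products' collection

seneca.add('role: products, cmd: add', function (msg, respond) {
  // capture the new product information from the request body (assumed JSON)
  products.make$(msg.args.body).save$(function (err, product) {
    respond(err, product);
  });
});
```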

In the previous snippet, we accessed the ‘products’ collection on the first line. If it doesn’t exist, it will be created with the first save. Then, in our add action/cmd, we saved the record using the save$ method from the entity plugin, capturing the new product information from the request body (assuming it is JSON). We now need to register the end-point by mapping it to our new action. This is done via the web plugin:
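The mapping might look like this sketch (the option shape follows older seneca-web releases; treat it as illustrative):

```javascript
seneca.act('role: web', {
  use: {
    prefix: '/products',
    pin: 'role: products, cmd: *',
    map: {
      add: { POST: true } // POST /products -> role: products, cmd: add
    }
  }
});
```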

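For the read side, loading a product by id through the entity plugin looks roughly like this (the message field names are assumptions):

```javascript
var products = seneca.make('products'); // access the 'products' collection

seneca.add('role: products, cmd: getProductById', function (msg, respond) {
  // load the record using the product id from the request parameters
  products.load$(msg.args.params.id, function (err, product) {
    respond(err, product);
  });
});
```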
In the snippet above, we accessed the ‘products’ collection on the first line. Then, in our getProductById action, we loaded the record using the product id from the request parameters. And that’s it: we have a fully functioning service communicating with a real data store. The full project code is listed on GitHub.

On my current project, our client chose to use Seneca to re-platform their entire e-commerce system. Seneca is a microservices framework built on top of the rich Node.js ecosystem. The project required building a bunch of microservices that could serve multiple clients/channels (web, mobile, in-store kiosk, etc). I hadn’t heard of Seneca prior to the engagement. I was hesitant to use it as the client is mainly a .NET shop. However, after a little bit of digging, I’ve come to enjoy working with the framework as it makes building and organizing small bits of business features really fun.

Seneca wasn’t initially built to serve microservices. It was mainly built to truly decouple components and provide a plug-and-play architecture where developers can focus on writing business logic, extend that logic easily and, more importantly, replace it quickly. However, its clean design enables it to be a strong player in the microservices space. Here are some aspects of Seneca that I really like:

It makes no assumptions about your architecture/design patterns. Hence, you’re not locked into an MVC-style pattern, though you could organize your code in an MVC way if you wanted to.

You can use any of the standard test frameworks/strategies available in JavaScript or Node.js.

Perhaps the most important aspect of Seneca is its plugin architecture, which allows the development of simple plugins for concerns like data abstraction.

That said, there are some things to watch out for. Since Seneca makes no assumptions about your architecture, you’re left to come up with design patterns and code organization on your own. Also, due to the richness of Seneca (and Node.js), there is a bit of a learning curve. However, once you get past that, you start moving quite fast. If you like code generators to get started, there is a Seneca Yeoman generator, though it doesn’t appear to be actively maintained (at the time of this writing), so we didn’t use it.

Enough talk, let’s build a simple, RESTful microservice with Seneca. In keeping with the e-commerce theme, our service will deal with product information for camera widgets/parts. In this post we will use hard-coded data, but we will swap it out for a real data store (MongoDB or another data store) in future posts.

There are two interesting bits that warrant mention. Let’s take a look at the first bit:

this.add('role: products, cmd: getProductById', function() {...});

This is Seneca’s way of registering an action (getProductById) on a business feature/pattern (role: products). Whenever Seneca encounters this pattern from a client, it will execute the callback function associated with the pattern.
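Fleshed out with this post's hard-coded data, the registration might look like this sketch (the product fields and the message shape are made up for illustration):

```javascript
// Hard-coded product data, as used throughout this post (fields are made up).
var products = {
  '123': { id: '123', name: 'Camera Lens Cap', price: 9.99 }
};

this.add('role: products, cmd: getProductById', function (msg, respond) {
  // Seneca matched the pattern; hand back the product for the requested id.
  respond(null, products[msg.id]);
});
```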

In contrast to registering an action, this.act('role: web') executes the action using Seneca’s web plugin. As I mentioned before, Seneca is transport-layer agnostic. The web plugin provides HTTP features (API routing) for the framework. It is responsible for translating url patterns into Seneca action patterns. It is packaged with Seneca by default, so there is no need to include it or require it separately. By default, the alias that we have identified above is a GET request to get a product by the passed-in ID. As you can see, that url pattern is mapped to our action function getProductById.
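The url-to-action mapping behind that alias might look like this (again, the option shape follows older seneca-web releases and is illustrative):

```javascript
seneca.act('role: web', {
  use: {
    prefix: '/products',
    pin: 'role: products, cmd: *',
    map: {
      getProductById: { GET: true, alias: '/:id' } // GET /products/123
    }
  }
});
```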

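A minimal host for the service might look like this sketch (the Express and body-parser wiring is typical of seneca-web examples of this era; the plugin path is an assumption):

```javascript
var seneca = require('seneca')();
var express = require('express');
var bodyParser = require('body-parser');

seneca.use(require('./products')); // our products plugin (path is an assumption)

var app = express();
app.use(bodyParser.json());     // exchange data in JSON
app.use(seneca.export('web'));  // mount the routes registered via 'role: web'
app.listen(3000);               // start the server on port 3000
```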
The code above starts up the server on port 3000, using the JSON body parser to exchange data in JSON. Express is another dependency here, used by the web plugin.

Now you’re able to fire up your browser or any other REST client (e.g., Postman) and hit the following end-point:

http://localhost:3000/products/123

You should be able to see the product information listed in JSON format. The full project is listed on GitHub. A sample HTTP POST request for searching is also included in the code. In future posts, we will be looking at using Seneca’s data abstraction plugin and a file store so we’re not hard-coding the data.

I was recently part of a retrospective at a client site with a mix of ThoughtWorkers and client devs/QAs. The retro was facilitated by a fellow ThoughtWorker, and we started with a safety check. A safety check is a quick test that gauges the comfort and openness of attendees by letting them anonymously write a number on a sticky note. It is typically measured on a scale of 1-5: 1 means closed off, not comfortable bringing up issues, and wanting to leave; 5 means open and comfortable talking about anything and everything.

The anonymous numbers are then announced to bring awareness to the room. For instance, if there are 2s or 1s, other attendees should bear in mind that some aren’t comfortable with issues that may be brought up. Too many 1s is a sign the retro might not be productive and that it is perhaps best to reschedule until the comfort issues are ironed out.

The retro was successful in terms of openness and comfort, in spite of a single 2 in the room. However, certain topics raised questions that went unanswered. There were also some additional cross-team topics that were a bit sensitive. This created an environment that could make individuals either open up or shut down in future retros.

A few days later, while catching up with some of my colleagues, I wondered whether another safety check at the end of the retro would have been worthwhile: a safety check for how the conversation turned. Feedback on the retro, if you will, though just with safety check numbers. This could gauge whether the retro made individuals more comfortable opening up to their colleagues and managers, or shut them down, risking the loss of good ideas and suggestions for improvement.

I don’t know whether it would have been valuable, since we didn’t do it. However, this seemingly random thought had to come out in a blog post to gauge whether others have felt the same way, or whether others see value in a post-retro safety check in select scenarios.

I’m beyond excited to announce that I have joined ThoughtWorks. Since my acceptance of the position, I have been asked by many colleagues, friends, and family: Why ThoughtWorks? I was even asked the same question by ThoughtWorkers during the interview. A week before I started, I was asked the same question by ThoughtWorks’ People Support department (to highlight my answer in the new-hire newsletter).

The truth is, there is no shortage of reasons or answers to this question, especially if software is your craft. In fact, I always felt tempted to answer: “Duh, it’s ThoughtWorks.” However, after all this time, I realized that I didn’t accept the offer because “Duh, it’s ThoughtWorks.” I applied and interviewed with ThoughtWorks because of it. I accepted the offer for different reasons; reasons that didn’t cross my mind until 3 or 4 weeks after my offer acceptance.

Ok, let’s back up a bit. I thought I knew ThoughtWorks, the company, fairly well. My ideas of the company were essentially about Martin Fowler, one of the most insightful people in the software industry. My ideas revolved around building great software with great developers while learning from them along the way. However, I was lucky enough to have worked with great developers in past jobs and to have learned from them too. In fact, there are tons of companies around the world that hire best-in-class developers. So, why ThoughtWorks?

It really boils down to this … First, ThoughtWorkers brought out the best in me during the interview process. I felt like I was set up to succeed, even though it was the most radically different interview that I have ever been to. At the end of the full day interview, I felt like they learned everything they could about me. And I certainly learned much more about them than I ever did before.

They asked thought-provoking questions that were purposeful and meaningful. I felt respected as an engineer, but most of all, I felt respected as a human being. Which brings me to the second point … it was obvious that everyone I met at ThoughtWorks had a strong desire to make a big social impact. It was obvious they cared about how technology fits into advancing social and economic justice.

In fact, the first hour of my interview was all about ideas on advancing social and economic justice. What’s incredible about this is that ThoughtWorks doesn’t just say it cares; ThoughtWorkers actually engage in lots of pro-bono projects that serve humanity. To my surprise, during my first week with ThoughtWorks, I was involved in an ongoing social impact program, teaching low-income adults how to code in a real agile setting and helping them develop a real-world mobile app. Read Roy’s Social Experiment for more information on how ThoughtWorks got there.

During week 1, ThoughtWorkers actively set me up for success: sending me kind regards and good-luck messages, pointing me to useful resources, involving me in pairing on code reviews for potential hires, and teaching me about the process from the inside. I’m beyond impressed by the level of transparency, thoughtfulness, and thoroughness visible in everything ThoughtWorkers do. I even got to review the notes from my interview with ThoughtWorks (strengths and weaknesses). How awesome is that?

By the end of week 1, I’m inspired and excited to come to work. I’m excited to see where this journey takes me, personally and professionally.

I was recently introduced to the term “promotion to incompetence” by my good friend Julio. I was fascinated by the term and wanted to explore it more. So I started evaluating my work environment in addition to other roles around me.

Promotion to incompetence, or the Peter Principle (after Laurence J. Peter), is a concept in management theory in which the selection of a candidate for a position is based on the candidate’s performance in their current role, rather than on abilities relevant to the intended role.

For instance, if you’re a strong software engineer, you get promoted to become a tech lead, then perhaps, a manager. However, being a contributor is different from being a leader, which in turn, is different from being a manager. Each role requires different skill sets. In other words, just because you’re a strong software engineer does not mean you will be an effective tech lead or manager.

Promotion to incompetence tackles the classic case of promotion: moving up. Simply put, a candidate moves up the hierarchy until they’re no longer effective at their job. However, there is a different take on promotions that may be hard to recognize. It is fair to assume that added (similar) roles and responsibilities may also be considered a promotion. However, this kind of promotion is more of a lateral expansion than a vertical move. This, too, could lead to incompetence. Julio calls this scenario Empowerment to Incompetence, as opposed to a promotion.

For example, in most software organizations, tech leads are also contributors (that’s two roles). In an empowerment to incompetence scenario, a tech lead is asked to lead more than one team (3 roles), and perhaps join other teams as a contributor (that’s 4 or more roles). At some point, with the added workload and responsibilities, the tech lead is unable to gain traction on any single project/team.

The empowerment to incompetence problem can appear to stem from one area: resourcing, or the lack thereof. However, when examined carefully, it can (almost) always be mapped to a lack of prioritization of the workload.

An attempt to fix the problem could involve time management techniques. For instance, employees may be allocated 20% to Project A, 30% to Team B, 10% to Project C and 40% to Team D. While different people have different multitasking capabilities, humans are not good at multitasking in general.

Though the percentages listed above beg the question; if an organization is willing to spend as little as 10% or 20% of an individual’s time on a project/team, is the project worth taking on at the moment? The answer may very well be yes, for valid reasons. However, the question needs to be asked and the priorities need to be examined.

Resourcing is a hard problem that successful businesses face on a daily basis, and it will always remain a problem. It surfaces aggressively when it becomes a symptom of a much more deeply rooted problem: poor priority calls or, in the worst case, having competing number-one priorities.

Major thanks to my friend Julio Mateo for the inspiration behind this article in addition to proofreading it.

“You cannot be anything you want to be — but you can be a lot more of who you already are.” — Tom Rath, StrengthsFinder 2.0.

A while back, I was sipping a macchiato with my friend Julio at a local coffee shop when we started having one of our typical thought-provoking conversations. I told him I really liked the quote (above) from StrengthsFinder, and that I found it eye-opening and rather refreshing.

We tend to focus on improving our areas of weakness, though we often forget that we might be naturally limited in certain areas and thus might not improve as much as we would like, regardless of how much work we put in. And that’s ok … at least it should be. StrengthsFinder focuses on finding the talents that make individuals unique, then nurturing those talents.

“Well, this can also apply to software,” said Julio. He then explained how software companies often focus their efforts on their software’s weaknesses while forgetting the features or architecture that set their software apart. After hearing this mapping to software, I immediately started thinking of companies that do this well. Here is the list I came up with.

Google:

Even though they’re now a major player in many areas, Google will always be a “search” company first. What they’ve done to improve search is quite remarkable. Google, however, was never about fancy UI design, even though they have very capable engineers. Instead, they realized their users’ goal was to get to the results/content as fast as possible without anything getting in the way. That alone was, and still is, one of the main differentiators between Google and its early and current competitors. Here are some interesting features Google implemented to improve its search engine.

Instant Search: Tying users’ search query to a dynamic result set.

Mobile-friendliness: Indicating whether a site is mobile-friendly or not when users are searching on Google using their mobile device.

Apple:

As opposed to Google, Apple has been a software company that excels at aesthetics and design. Captivating, simple designs and interactions are evident in every Apple product. Apple has never been about early innovation or first-to-market. They always considered existing products and made them dead-simple to use. That’s their true innovation and they have stayed true to their software strength and culture.

iPod: It wasn’t the first music/mp3 player, but it changed the music industry.

iPhone: It wasn’t the first smartphone, but it revolutionized the market.

iPad: It wasn’t the first tablet, but it was the one that created a new market.

Facebook:

What makes Facebook different from other social networking apps is its dedication to the social graph and to connecting people in many ways.

Graph search: A natural way of searching. “Restaurants in London my friends have been to.”

Timeline: A new way of looking at your social history on Facebook.

Microsoft:

Microsoft didn’t do as good a job as the other companies until recently. They led the pack in operating systems (Windows) and utility tools (Office). For a while, things seemed to have gotten a little stale at Microsoft, until their recent cloud-based solutions (Office 365). But the point remains: they stayed true to their software’s strength.

Conclusion:

As we all get a little introspective around the new year, perhaps it’s time to think about the features we introduce in our products. Perhaps we should constantly put them under the microscope. We should stay true to what makes our software strong, resisting the temptation to add features that take us away from the “ONE” path that our software does better than anyone else’s.

Designing APIs is no walk in the park. In many instances, when your API becomes popular, it can be difficult or even impossible to change. Whether it’s a programming language or framework, a good API makes a world of difference for developers who use it.

Needless to say, an API needs to be simple. But what’s equally (or more) important is for an API to be simple to get started with. The point is, you need a low barrier to entry for new users. jQuery, as opposed to other JavaScript frameworks, does a fantastic job of allowing new developers not only to become immediately productive with the framework, but also to get excited about JavaScript development in general.

However, when not accompanied by good design principles, simplicity may lead to anti-patterns. For instance, the main strength of jQuery is perhaps one of its major weaknesses. The “kitchen sink” design of jQuery’s overloaded main method violates the single-responsibility principle of API design. It also violates another characteristic of good APIs, self-documentation, though I’ll write a separate post on that at a later time. Consider the following uses of jQuery’s main method below:
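These are the standard forms jQuery documents for its entry point; one function, four very different jobs:

```javascript
$('div.results');        // CSS selector: find matching elements
$('<p>Hello</p>');       // HTML string: create new elements
$(document.body);        // DOM node: wrap it in a jQuery object
$(function () {          // function: run it when the DOM is ready
  console.log('DOM ready');
});
```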

Obviously this function has multiple, vastly different responsibilities at this point, so it’s best to separate those responsibilities into different, self-documenting functions. And while jQuery appears to be overloading the function, it actually isn’t, as there is no inherent overloading in JavaScript. After taking a deeper dive into the jQuery source code, what’s actually happening is that jQuery inspects the arguments of the $(argument) call to determine which execution path to take. Here is what’s happening in (simplified) pseudo-code:

If argument is a “string”, then

check if argument is a selector and execute path

check if argument is html (regex) and execute path

If argument is an HTML Node, then execute path to wrap in jQuery object.

If argument is a function, then the “callback” path is taken for DOMReady.
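That branching can be sketched in plain JavaScript. This classifier is a simplification for illustration, not jQuery's actual code:

```javascript
// Classify a $() argument the way the pseudo-code above describes.
function classifyArgument(arg) {
  if (typeof arg === 'string') {
    // A leading '<' suggests an HTML string; jQuery's real regex is more involved.
    return /^\s*</.test(arg) ? 'html' : 'selector';
  }
  if (arg && arg.nodeType) {       // DOM nodes carry a numeric nodeType
    return 'node';
  }
  if (typeof arg === 'function') { // treated as a DOM-ready callback
    return 'ready-callback';
  }
  return 'unknown';
}
```

Each branch here is one of the execution paths above, which is exactly why a single entry point accumulates so much cyclomatic complexity.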

This obviously increases the complexity of the code due to the presence of different execution paths, so the maintainability of the code becomes a problem. It also impacts testing negatively, as there are 7 different scenarios that could go wrong within the same function, as opposed to one and only one. It is no surprise that jQuery has one of the highest cyclomatic complexity measures among popular JavaScript frameworks.

The hardest part about writing testable JavaScript is re-imagining your front-end as isolated modules when, in reality, the pieces of the UI are intertwined. Can a button or hyperlink be a module? What about a content section on the page? It’s not always clear what counts as “self-contained” on a webpage of related content, though it might be easier in some cases than others.

For instance, a tab panel widget is easy to unit test. For one, it’s an established pattern, but more importantly, it doesn’t interact with other content on the page. It’s truly self-contained. But how often do you need to write your own implementation of tab panels when there is no shortage of UI frameworks? What developers often focus on is business logic. That’s what I intend to cover in this article by looking at a real example from an application I’m working on at OCLC.

Problem

To give a little context: I’m working on a search engine application for library catalogs. The main feature covered here is that, when users search for a book, say “The Hunger Games,” we would like to know whether a “full” or “partial” preview of the book is available on Google and, if so, display it. This is an example of a book preview for The Hunger Games on Google.

Identifying Functionality

As a first step, we need to identify functionality at a very high level:

Determine book preview availability in Google’s index by passing ISBNs as the key(s) to Google. Google Books’ Dynamic Links API is needed.

Given that a book preview exists, render the internationalized/localized Google preview image button based on users’ current locale (Google dictates that you use their image button for branding purposes).

The button can then link to Google for the preview or embed the preview within the current page.

Design and Solution

At this point, we’re at a good place to start designing. So, what should be modularized here? The button? Maybe. But that’s not as important as the call to determine book preview availability. That’s the part that can be truly modular and “self-contained.” That’s the part that can be portable from one application to the next. So, this module is going to be responsible only for the data: the request and the response. Google suggests injecting a script tag dynamically into the DOM to avoid cross-origin restrictions, and that is exactly what JSONP does under the hood, so we can use JSONP.

Let’s call our module: GooglePreviewAvailability

We will need the following properties on our module:

_locale: the locale to internationalize the image button provided by Google

_url: the JSONP url with the callback parameter

We will also need the following public function:

load([string] isbns, [function] callbackFn): passes a comma-separated list of ISBN numbers to the call, along with a callback function to be executed upon a successful Google preview availability response. The callback parameter gives the caller/client great flexibility in what to do with the response. In other words, the caller can attach the response to a UI element or embed the book preview (if it’s available) within the same page.

Identifying Testability

At this point, we haven’t written any code; we just have a boilerplate/skeleton of our module, and we need to identify what we’ll test. As a developer, you can test the heck out of your code all day long, but without knowing how to structure your test code, your tests aren’t really helping you. Martin Fowler addresses this in his article on async testing in JavaScript.

I usually structure my tests in similar fashion. I look for the following things to test:

Object creation & default values:
Verify the object is created successfully and the proper defaults are set.

Request, response, callbacks:
If requests (JSONP in our case) are made or callbacks are provided, verify that the request succeeds with the right parameters, that the response is valid and, last but not least, that the callbacks fire successfully.

Code and Tests

Here is a snippet of our module, which only includes the constructor. Note that it relies on John Resig’s implementation of simple inheritance in JavaScript, in addition to some other home-grown JS modules:
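A sketch of that constructor as a plain function (the original uses Resig's Class.extend, omitted here for brevity; the endpoint URL and the default locale are assumptions):

```javascript
function GooglePreviewAvailability(options) {
  options = options || {};

  // _locale: drives the internationalized preview image button Google serves
  this._locale = options.locale || 'en';

  // _url: the JSONP url with the callback parameter (shape is illustrative)
  this._url = 'https://books.google.com/books?jscmd=viewapi&callback=';
}
```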

Now that we’ve validated our parameters, we need to test the request, response, and callback function. Note that there is also a little bit of validation for when to make the call: if you look back at the module, you’ll see that the JSONP call isn’t made if the ISBN parameter is an empty string.

google-preview-availability-tests.js – verify the request isn’t made for empty isbn
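The original spec is written in Jasmine with a spy on the request; the same guard can be shown self-contained with a stand-in flag in place of the spy:

```javascript
// Minimal stand-in for the module, enough to exercise the guard:
// load() must not fire the JSONP request when the isbn string is empty.
function GooglePreviewAvailability() {
  this.requested = false; // plays the role of a Jasmine spy on the JSONP call
}

GooglePreviewAvailability.prototype.load = function (isbns, callbackFn) {
  if (!isbns) { return; }  // empty isbn: make no request at all
  this.requested = true;   // the real module injects the JSONP script here
};

var gp = new GooglePreviewAvailability();
gp.load('', function () {});
// gp.requested stays false: no request was made for the empty isbn
```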

You will also notice that we only care about two cases in the response: (1) a “full” preview and (2) a “partial” preview. Google sends back “no view” in the “preview” response field when a book isn’t available for preview, in which case we do nothing. Here’s how we validate that in Jasmine.
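Stripped of the Jasmine plumbing, the behavior under test looks like this (the preview and preview_url field names follow the response fields described above; the function name is illustrative):

```javascript
// Only 'full' and 'partial' previews reach the caller's callback;
// 'no view' (book not previewable) results in doing nothing.
function handlePreviewResponse(response, callbackFn) {
  if (response.preview === 'full' || response.preview === 'partial') {
    callbackFn(response);
    return true;  // callback fired
  }
  return false;   // 'no view': nothing to render
}
```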

NOTE: The test code contains a bit of duplication for the sake of clarity in this blog post.

Wrap up

The point I (hopefully) illustrated is that we didn’t have to tie any specific UI component to a particular module. Instead, we’ve made a very specific case of making requests and receiving responses generic, reusable, portable and, most importantly, testable. Now clients of this feature can call this module and pass in their own code for what to do with the button. In our case, we attach the preview_url to the Google image button as an href. Here is an example of a caller of this module:
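A caller along those lines might look like this sketch (the element is passed in so the example stays DOM-agnostic; all names here are illustrative, not from the original code):

```javascript
// Wire a button-like element to Google's preview, if one is available.
function wirePreviewButton(previewChecker, isbns, buttonEl) {
  previewChecker.load(isbns, function (response) {
    // the callback only fires for 'full' or 'partial' previews
    buttonEl.href = response.preview_url; // link out to Google's preview
    buttonEl.hidden = false;              // reveal the Google image button
  });
}
```

Because the module only hands back data, the same caller could just as easily embed the preview in the page instead of linking out.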

This code could have utilized the same GooglePreviewAvailability module in order to embed a preview panel within the webpage as opposed to linking directly to Google. That’s the flexibility we get when we think about testability. Enjoy!

Using native HTML elements has positive effects on web content accessibility. Elements can be read properly by screen readers according to their function or role. For instance, a button written using the markup below will be announced as: “Refresh Button” by screen readers.

<button id="refresh">Refresh</button>

Nice and easy. However, web development isn’t always this straightforward. We often need to build more complex widgets using a combination of native elements, and in some cases we use frameworks to tap into well-defined/designed patterns to fill this need. That’s largely due to the deficient nature of the HTML spec. But can you imagine, for a second, a richer HTML standard? A standard encompassing all the well-established widgets we’ve been inventing, re-inventing, and using for years?

Let’s consider a paging toolbar widget … I imagine a markup (in its simplest form) to be something like this:

<pagingtoolbar numPages="10" recsPerPage="10" totalRecords="100"/>

Obviously this is a simple case; there are certainly more complex use cases for paging toolbars. But the point is, we have been using this pattern for years, yet HTML hasn’t caught up to provide an API for it. Since the example above doesn’t exist in the HTML standard, we use frameworks such as Bootstrap or Foundation, or we roll our own. A common markup for pagination is the one Bootstrap provides:
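That markup is roughly the following (per Bootstrap's pagination docs; the page count here is arbitrary):

```html
<ul class="pagination">
  <li><a href="#">&laquo;</a></li>
  <li><a href="#">1</a></li>
  <li><a href="#">2</a></li>
  <li><a href="#">3</a></li>
  <li><a href="#">4</a></li>
  <li><a href="#">5</a></li>
  <li><a href="#">&raquo;</a></li>
</ul>
```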

There is nothing wrong with this markup. In fact, it’s perfectly valid, and well-structured. Now, let’s consider how it will be announced by screen readers (VoiceOver on my Mac):

Link. Left pointing double arrow.
List 7 items
Link 1
Link 2
Link 3
…

Screen reader users will gather the context by the time the screen reader announces the 5th or 6th line (Link 2 or Link 3). The context is gathered from the way things are ordered and where objects are located in the DOM. But wouldn’t it be better if we could announce the context as soon as the user lands on the pagination widget itself?

Here is an example of how one might improve on Bootstrap’s pagination widget:
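One possible improvement wraps the list in a labeled navigation landmark and labels the arrow links (the label text is illustrative):

```html
<nav role="navigation" aria-label="Search results pages">
  <ul class="pagination">
    <li><a href="#" aria-label="Previous page">&laquo;</a></li>
    <li><a href="#" aria-label="Page 1">1</a></li>
    <li><a href="#" aria-label="Page 2">2</a></li>
    <li><a href="#" aria-label="Page 3">3</a></li>
    <li><a href="#" aria-label="Next page">&raquo;</a></li>
  </ul>
</nav>
```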

Now, the widget name/type is announced immediately, making the context much more understandable for screen reader users. This immediate announcement happens because we’re using ARIA (Accessible Rich Internet Application) roles and labels. The ARIA standard helps developers provide additional context to composite/complex widgets and can describe the various states (disabled, checked, busy, etc.) on these widgets.

We can improve this widget even further with a few tweaks. The full, improved markup for this widget can be found here.

Following the ARIA spec is instrumental in making web applications more accessible. For more information on ARIA, visit the W3C website.