ThoughtWorks Anthology 2 - by various authors

Posted on December 26th, 2014

The [second ThoughtWorks Anthology](http://www.amazon.com/ThoughtWorks-Anthology-Software-Technology-Innovation/dp/1937785009) came about 4 years after the [first one](http://blog.hugocorbucci.com/thoughtworks-antology-various-authors). Like the previous one, this one balances between very narrow, concrete subjects and more abstract ones, which makes the book age very differently depending on the chapter.

The following notes detail each chapter:

Chapter 1: Introduction - Neal Ford

Presents why an anthology is good and how it is not meant to stay current forever. Also presents some of the chapters to come.

PART I: Languages

Chapter 2: The Most Interesting Languages - Ola Bini

A newer version of Rebecca Parsons' essay in the first book, but less concerned with the enterprise vision and more with the diversity of concepts. Ola presents more code samples and is a bit more hands-on. He presents Clojure, CoffeeScript, Erlang, Factor, Fantom, Haskell and Io (note that those are in alphabetical order here as well as in the book, so there is no preference among the languages themselves).

For each language, the approach touches on the programming paradigms it emphasizes and then provides resources to learn more about it.
- Clojure touches on its functional aspect (and Lisp origins), immutability, Java integration and Software Transactional Memory (STM) to allow for parallelization. This is the first of several JVM languages presented. The Lisp syntax is also presented positively, with appropriate differentiation from Scheme.
- CoffeeScript is presented as a layer on top of JavaScript, with numerous syntax improvements that make the code much more readable. Most of the focus is around basic type creation, list comprehensions and building a class model.
- Erlang represents the older languages that have interesting ideas. Its functional aspect as well as immutability are briefly mentioned, along with some notes about pattern matching and some applications, but the most interesting part is dedicated to the actor system built into the language.
- Factor is presented as a break from the main concepts around it. Its stack-based approach interrupts the usual OO/functional duality with something that looks like functional programming over one major piece of state: the stack. There are serious performance gains that can be achieved with such a model, but the thinking model has to change a lot.
- Fantom shows an approach to bridge the gap between the JVM (Java Virtual Machine) and the CLR (Common Language Runtime, the .NET platform). As another layer on top of existing languages (similar to CoffeeScript), Fantom is presented with the abstraction benefits that allow the same code to be valid and operate in both environments. Some focus is dedicated to its solution regarding generics and to the idea that null objects are special instead of normal. It also covers the dynamic side that Fantom allows.
- Haskell showcases the purely functional idea. There is no mention of monads, although Ola explains that IO is handled in a "different" way that lets the language keep treating it functionally. Emphasis is placed on static typing, the pattern-matching solutions that Haskell presents, and the powerful type-inference system available. He closes with laziness and the power it can have for your software.
- Io is shown as a purely object-oriented language. It also presents prototype-based object orientation (although I missed seeing some comment about JavaScript and, therefore, CoffeeScript also being prototype based). Ola mentions the dynamism the model allows as well as the lazy evaluation that futures allow. There is also an example of the metaprogramming available, in a form that looks very much like Scheme.

The essay is very good: great descriptions of concepts with practical examples. Obviously nothing is a deep dive, but there is no promise of one either. Ola concludes that we should all be studying different languages to learn about different ideas and thought paths.

Chapter 3: Object-Oriented Programming: Objects over Classes - Aman King

Aman works on making the case for preferring delegation over inheritance and duck typing over strong typing, and on clearing up other misconceptions about object-oriented modeling.

He starts by explaining what classes and objects are, and the difference between trying to model systems based on concepts instead of roles. The examples, although good, are not obvious in showing the downside of the first approach; one has to have already faced the problem to see real benefits in the roles approach.
There are also some examples about the problems of utility classes, but the samples don't really show the harm that can come from those utility classes, nor the cost of those "microtypes".

He then briefly touches on the testability aspect of an object-based approach and the use of mocks when designing for objects. Some arguable "wins" are presented.

He finally moves into showing some examples of object-based programming in different languages. Ruby, JavaScript and Groovy are shown for their preference for dynamic objects and the mutability that comes from that thinking model.
Aman concludes that there are important things that should stay in a class-thinking environment for design-time and compile-time safety, but that our systems really deal with running objects at runtime and there are benefits to designing for those.

Chapter 4: Functional Programming Techniques in Object-Oriented Languages - Mark Needham

Mark touches on specific elements of functional programming that can be applied in most object-oriented languages, mostly map-filter-reduce and immutability. The advice is simple but very effective. There is also a fair amount of warning about the OO-modeling pitfalls that diving into a functional mindset can bring to an object-oriented system.
Note that the filter image is not correct. It shows a map along with a filter.
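The map-filter-reduce advice translates directly to most languages. A minimal Python sketch of the pipeline (the data and names are mine, not from the essay):

```python
from functools import reduce

orders = [120, 45, 300, 80]

# map: transform, filter: select, reduce: combine - all without mutating `orders`.
with_tax = map(lambda amount: amount * 1.1, orders)
large = filter(lambda amount: amount > 100, with_tax)
total = reduce(lambda acc, amount: acc + amount, large, 0)

print(round(total, 2))  # 462.0
```

The point the essay makes holds here too: each step is a small pure transformation, and the original collection is never modified.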

The end focuses on using functions as first-class citizens of a design. A success and a failure function to remove error-handling problems, and the idea of continuations (although the concept is not explained), close the essay. Mark gives good advice but stays at a superficial level. The questions around when to go functional and stop being OO, and vice versa, are not addressed.
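The success/failure-function idea can be sketched like this (the function and parameter names are mine; the essay's own example may differ):

```python
def parse_age(text, on_success, on_failure):
    # Instead of raising exceptions or returning error codes, hand control
    # to caller-supplied continuation functions.
    try:
        age = int(text)
    except ValueError:
        return on_failure(f"not a number: {text!r}")
    if age < 0:
        return on_failure("age cannot be negative")
    return on_success(age)

result = parse_age("42",
                   on_success=lambda a: f"age is {a}",
                   on_failure=lambda msg: f"error: {msg}")
print(result)  # age is 42
```

The caller decides what both outcomes mean, so the parsing function never needs to know about error reporting.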

The article is a good start for people who have never had functional programming experience.

PART II: Testing

Chapter 5: Extreme Performance Testing - Alistair Jones & Patrick Kua

Alistair and Pat present the idea that performance testing should be integrated as a regular agile team responsibility. They show how many of the concepts used for other testing activities can be applied to performance testing, along with the downsides of the traditional approach and the benefits of the XP approach.

They touch on a lot of simple applications of basic principles, but don't dive into how to deal with the problems that arise from trying to teach all the concepts and ideas, nor into the value and costs of performance testing done by the whole team.

Chapter 6: Take Your JavaScript for a Test-Drive - Brian Blignaut & Luca Grulla

Brian and Luca show some applications of known patterns in JavaScript with a simple but effective example. They make the point that the JavaScript in an application usually consists of two or three systems that get integrated:
- JavaScript logic. That's the bit that contains the business logic about what needs to happen from a business perspective: things such as "show an error message if user and password don't match".
- DOM elements/the browser. That's the part that comes from HTML. It shouldn't hold any logic and is likely to change as new layouts come out.
- Server side. This is mostly a layer to help you communicate with the back end. It usually consists solely of AJAX calls.

If one follows this pattern, testing the first group becomes a simple exercise of regular unit-test writing. Those tests are now protected from changes to the AJAX endpoints, and even from how a given result gets triggered. They should also be protected from changes to the DOM.
Testing the second layer is also a lot simpler, as there should be no logic there other than hooking events and elements together or performing DOM/CSS changes. Mistakes there are typically hard to miss when developing, since they are the obvious errors on a page.

Finally, the last part is rarely covered, since exercising it meaningfully requires essentially a full application stack running. In this case, the tests are common integration tests, but their failures tend to be easy to track down since this layer only holds the logic to put the AJAX call together.
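The testability win for the first layer comes from keeping the logic free of DOM and AJAX concerns. The same separation, sketched in Python for brevity (the chapter's own examples are in JavaScript, and these names are mine):

```python
# Layer 1: pure business logic - no DOM, no AJAX - trivially unit-testable.
def login_error(user, password, known_accounts):
    """Return the error message to display, or None if login is valid."""
    if user not in known_accounts:
        return "unknown user"
    if known_accounts[user] != password:
        return "user and password don't match"
    return None

accounts = {"ada": "s3cret"}
print(login_error("ada", "wrong", accounts))
```

Because nothing here touches the page or the server, the tests need no browser, no fixtures and no running back end.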

The article is well laid out and the advice valuable and nicely presented. Recommended for everyone who has not been writing JavaScript tests or has been having problems with their JS suites.

Chapter 7: Building better acceptance tests - James Bull

The essay presents acceptance tests and some ideas to get the most benefit possible from them. The ideas shown are fairly basic and rely heavily on general good practices for automated tests. James starts by defining acceptance tests as automated tests that are driven through the user interface, run on the full software stack and on real integration points, and are part of the continuous integration build. He goes on to argue that acceptance tests should be fast, resilient and maintainable, and proposes some techniques to achieve each of those goals.

To keep them fast, James talks about specific problems with Selenium, and about finding DOM elements with retry techniques and backoff retry policies. He also touches on parallelising the tests, which forces an unmentioned property, test independence, and a mentioned one, being able to run subsets of the test suite easily. Along the Selenium road, he also talks about hiding the browser driver to allow for runs in multiple browsers.
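The retry-with-backoff technique for flaky element lookups can be sketched generically (Selenium-independent; the helper and its names are mine, not from the essay):

```python
import time

def retry_with_backoff(find, attempts=4, initial_delay=0.1):
    """Call `find` until it returns a truthy value, doubling the wait
    between attempts; raise if it never succeeds."""
    delay = initial_delay
    for attempt in range(attempts):
        result = find()
        if result:
            return result
        if attempt < attempts - 1:
            time.sleep(delay)
            delay *= 2
    raise TimeoutError("element never appeared")

# A fake element lookup that only succeeds on the third call:
calls = {"n": 0}
def flaky_find():
    calls["n"] += 1
    return "element" if calls["n"] >= 3 else None

found = retry_with_backoff(flaky_find, initial_delay=0.01)
print(found)  # element
```

Growing the delay keeps slow pages from failing the suite while keeping the common fast case fast.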
Regarding resilience, the suggestions range from not depending on the HTML structure to find elements (using ids instead), to reusing the regular data population from other tests, and to having dedicated, separate integration tests for external services.

Finally, regarding maintainability, the main specific element is the page object pattern. James also mentions treating test code as production code, having coherent suites and creating a layer to reduce direct dependencies on the test framework.
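The page object pattern centralizes how tests locate and drive a page, so a markup change touches one class instead of every test. A driver-agnostic Python sketch (the fake driver stands in for a real Selenium driver, and all names here are mine):

```python
class FakeDriver:
    """Stand-in for a browser driver; elements are keyed by id."""
    def __init__(self, fields):
        self.fields = fields
    def type_into(self, element_id, text):
        self.fields[element_id] = text
    def read(self, element_id):
        return self.fields[element_id]

class LoginPage:
    # Tests talk to this class; only it knows the element ids,
    # so a layout change means editing one place.
    def __init__(self, driver):
        self.driver = driver
    def login(self, user, password):
        self.driver.type_into("username", user)
        self.driver.type_into("password", password)
    def username(self):
        return self.driver.read("username")

page = LoginPage(FakeDriver({}))
page.login("ada", "s3cret")
print(page.username())  # ada
```

Wrapping the driver this way is also the "layer over the test framework" the essay recommends: swapping Selenium for another driver only touches the page objects.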

I think my main dissatisfaction with the essay is that it conveys very broad and general messages about automated-testing good practices but spends many lines doing so. As an extra, I'm also unsure that I agree with some of the arguments presented in favor of growing acceptance tests into a large, user-facing automated suite.

The essay ends by suggesting good practices to adopt the ideas shown in the article, among them pair programming, test maintenance, story kick-offs and demos, all of which are general agile practices.

PART III: Issues in Software Development

Chapter 8: Modern Java web applications - Sam Newman

Sam's essay addresses developers who live in the Java enterprise web development world.
He starts off with a chronological summary of this environment's development.
Because the initial approach came from desktop development, the first few servers were stateful, using server-side session objects that the client would reference.
To scale such an architecture, application containers (such as Tomcat, JBoss, WebSphere, etc.) had to rely on clustering and replication.

Such features were hard to develop and maintain and had an associated cost, so stateless servers started growing popular in other languages. To share a session, a stateless server uses browser cookies. Security comes into play since hijacking unencrypted cookies is very easy, so secure cookies can be used in addition to regular cookies to store more sensitive data (such as the authentication session for money-related operations).
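The cookie-hijacking concern is usually addressed by signing cookie values so the server can detect tampering (and by encrypting truly sensitive data). A minimal signing sketch with Python's standard library (the key and format are illustrative, not production crypto advice):

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical key, kept on the server only

def sign(value: str) -> str:
    """Append an HMAC so the server can later verify the cookie is untouched."""
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify(cookie: str):
    """Return the value if the signature checks out, else None."""
    value, _, mac = cookie.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

cookie = sign("user=ada")
print(verify(cookie))             # user=ada
print(verify(cookie + "tamper"))  # None
```

A signed cookie can still be read and replayed if sent over plain HTTP, which is why the essay pairs this with secure (HTTPS-only) cookies for sensitive operations.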

We move on then to the idea of containerless web applications, for testing and even for regular usage. Now that we have single processes with less replication and clustering involved, handling a certain request load is no longer as simple as adding more hardware. To avoid getting into that problem at all, Sam proposes using existing cache solutions with proper HTTP headers and response codes to enable caching in the browser as well as on CDNs (such as Akamai) or reverse proxies (such as Nginx). He continues with the segmentation-by-freshness pattern to allow for page-fragment caching, or progressive enhancement as another way to allow partial caching.
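The header-driven caching boils down to emitting validators and honoring conditional requests. A rough, framework-free sketch of the server-side decision (the function shape and names are mine):

```python
import hashlib

def respond(body: bytes, if_none_match=None):
    """Return (status, headers, body), honoring ETag revalidation."""
    etag = '"' + hashlib.sha256(body).hexdigest()[:16] + '"'
    headers = {"ETag": etag, "Cache-Control": "public, max-age=300"}
    if if_none_match == etag:
        return 304, headers, b""  # client's cached copy is still fresh
    return 200, headers, body

status, headers, body = respond(b"<html>hello</html>")
status2, _, body2 = respond(b"<html>hello</html>", if_none_match=headers["ETag"])
print(status, status2)  # 200 304
```

The 304 path is what lets browsers, CDNs and reverse proxies skip re-downloading the body, which is the load-shedding effect the essay is after.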

He finishes the essay on the post/redirect/get pattern, which avoids double-posting problems and broken bookmarks on form submissions.

In general the article shoots in a lot of directions, but in a way that seems to encompass a broad amount of knowledge that should be interesting to most developers who are not HTTP/web/Java experts.

Chapter 9: Taming the integration problem - Julio Maia

Julio's essay addresses the hard problem of integration testing.
He starts off mentioning various problems that come up when trying to set up a proper integrated environment. He goes on to talk about the stubbing solution and the various pitfalls that one can encounter when trying to implement it.

He moves on to build pipelines and how they can help identify whether integration test results can be trusted and how much needs to be reworked if one of them fails. He goes on to mention how monitors can provide a live view of how healthy the integration is.

Moving to another topic, the essay approaches the matter of integration contracts and their monitoring and applications towards more dynamic and reliable testing.

Julio finishes with the necessity of providing a visual representation of the metrics collected about integration and their impact on the project's lifecycle.

In general, a good article that might need some more pictures and examples to be meaningful to more people. It also doesn't address the fact that many integrated environments do not provide reliable preprod environments, nor how to handle continuous changes on both sides and avoid production surprises.

Chapter 10: Feature toggles in practice - Cosmin Stejerean

Cosmin's essay is essentially about feature branches vs. feature toggles. He starts off by talking about feature branches and their pitfalls. He quickly moves into simple feature toggles with conditionals in the code, then their better version with object-oriented code. From there he takes some deviation towards very Java/C#-specific approaches using dependency-injection frameworks and annotations to achieve the same goal of turning a feature on or off on the server side.
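That progression, from a bare conditional to an object-oriented toggle, can be sketched like so (the toggle and class names are mine, not the essay's):

```python
# Step 1: the naive conditional toggle, which ends up scattered through the code.
TOGGLES = {"new_checkout": False}

def checkout_total_conditional(items):
    if TOGGLES["new_checkout"]:
        return sum(items) * 0.9  # hypothetical new discounted flow
    return sum(items)

# Step 2: the object-oriented version - pick an implementation once,
# at wiring time, instead of branching at every call site.
class OldCheckout:
    def total(self, items):
        return sum(items)

class NewCheckout:
    def total(self, items):
        return sum(items) * 0.9

def make_checkout(toggles):
    return NewCheckout() if toggles["new_checkout"] else OldCheckout()

print(make_checkout({"new_checkout": True}).total([10, 20]))  # 27.0
```

The object-oriented form is also what makes the dependency-injection variants the essay mentions possible: the framework, rather than a factory function, chooses which implementation to wire in.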

He then addresses the problems one might have with static assets (such as CSS, JS, images, etc.) having to be served differently depending on the state of the toggle.

Some more digression into the concerns about unexpected releases and secrecy revelations points out that this danger is often a higher-management concern.

He moves on to talk about the difference between runtime and build-time toggles and their advantages and disadvantages, one being the problem of incompatible dependencies, which is very hard to solve in static languages after build time.

Finally, Cosmin briefly mentions the concern about the testability of feature toggles, essentially saying that there is no need to QA all combinations but only the ones that are going to be released. The assumption here that I don't buy is that, with runtime toggles, this argument still applies. The essay ends reminding the reader that feature toggles are not meant to stay in the code forever but rather to live a short life and be cleaned up as soon as possible.

Chapter 11: Driving innovation into delivery - Marc McNeill

Marc's essay walks the reader through a process of evolving an idea to the point of delivery. He addresses concerns about how to handle prioritizing, evolving, shaping and correcting ideas, with various activities from different methodologies.

He starts off talking about collaboration and ensuring common understanding of the idea and its goals. As the idea evolves, Marc moves into how to determine a feature set that is suitable to win over your future customers without making them feel upset or frustrated with your product. He notes that frustration rarely comes from a missing feature but frequently from a poor-quality one.

He suggests learning from others as well as learning from a trusted group of early users that are more willing to work closely with the developing team.

He also talks about putting yourself (and any member of the team) in the position of trying to achieve a goal in an area outside your expertise, to develop empathy with the users of the system-to-be. He talks about personas, insights from everywhere and gathering ideas from all levels, and finally moves into estimating and prioritizing the work to be done.

He finishes up recommending that you test your ideas and ensure that you can always quickly respond to a new feature requirement or customer change.

The whole essay brings several techniques from the lean and UX worlds, along with various agile practices, to move an idea through all the development process needed to transform it into a successful product with a large, loyal and healthy customer base. Recommended to everyone who is starting down any of the various paths that lead towards building a product.

PART IV: Data Visualization

Chapter 12: A Thousand Words - Farooq Ali

Farooq brings a much less common subject to the book, one that opens a whole new set of ideas to explore. His essay is an introduction to the subject of information visualization, or infovis.
He starts with the problem of presenting the amount of data we currently face. Spreadsheets, tables and most of the tools common in the enterprise world are not suited to deal with the amount of data that most of our applications and customers generate. The ability to visualize that plethora of data in a matter of seconds is priceless to react appropriately, stay tuned to your customers and keep them happy and excited about your product.

Farooq presents a process very similar to agile development processes to evolve infovis elements. We start by defining a domain task, which consists of understanding which information is desired from the data available. Once we have defined what to present, we move into understanding which tasks we would like the viewer of the graph to perform when looking at it. Examples are filtering, identifying extremes, correlating and others.
Once we understand what we would like to present and which tasks we wish viewers to perform, the infovis designer works on identifying the data abstraction that suits the purposes defined. The data can be quantitative (it has an order and a magnitude of variation, such as 1 unit vs 5 units vs 100 units), ordinal (it has an order, but the variation is unknown or irrelevant, such as happy vs unhappy) or nominal (it has no ordering and just discriminates between different data points).
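As a memory aid, the three data abstractions pair naturally with different visual channels. A toy Python mapping (the pairings are a common infovis rule of thumb, not Farooq's exact list):

```python
# Toy mapping from data abstraction to visual channels that encode it well.
CHANNELS = {
    "quantitative": ["position", "length"],           # order + magnitude
    "ordinal":      ["position", "color intensity"],  # order only
    "nominal":      ["color hue", "shape"],           # identity only
}

def suggest_channels(data_kind):
    """Return channels commonly considered effective for a data abstraction."""
    return CHANNELS[data_kind]

print(suggest_channels("ordinal"))
```

Encoding nominal data with a magnitude channel like length, or quantitative data with hue alone, is exactly the kind of mismatch this classification helps avoid.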

With the domain task, the task abstraction and the data abstraction in hand, the infovis designer can choose an appropriate visual encoding to portray the desired information. A three-stage process helps discover a good visual encoding: feature extraction, pattern matching and goal-directed processing. Feature extraction builds on our intuitive and automatic ability to extract information from an image. Attributes such as proximity, size, length, color intensity, form and enclosure (among others) are processed by our brains so fast that we are completely unconscious of it. Leveraging this capacity for rapid feature extraction allows the infovis designer to capture the audience's attention.

Pattern matching is another one of those amazing abilities our brain has, which allows us to identify common traits among various elements and, intuitively, create an association between them. Many of the attributes that allow us to extract features also help us match patterns: painting elements the same color, giving them the same form or linking them with a line are very powerful ways to induce a feeling of association between those elements.

Once those two "automatic" processes are performed, we need to switch back to the conscious mind to process more complex elements such as words or numbers.
Farooq moves on to quickly touch on how to handle change over time and how it plays with our intuitive abilities. He finishes the essay mentioning a few tools that help generate infovis graphic elements, as well as how to decide which ones are more effective for the desired goals.

The article presents the ideas and provides various sources to explore the subject a lot more. If you are already familiar with the techniques and ideas behind infovis, it is probably very basic on the matter. However, if you have merely seen some infovis and never actually tried producing one, or have failed to do so, this article is a great start down the right path.

Overall

In general, the book is, as described by Neal in the first chapter, a wide variety of experience reports, introductory essays and methodology descriptions on very different topics. It is unlikely that the whole book will be news to any single reader, but it is even more unlikely that there is nothing in it that a reader will learn by going through it. Use these shorter descriptions to identify the chapters that are more interesting to you, and dive into the subjects you are passionate about.