The simple way that I think of insight, or those "ah-ha" moments, is by remembering a question Ward Cunningham uses a lot:

âWhat did you learn that you didnât expect?â or âWhat surprised you?â

Ward uses these questions to reveal insights, rather than have somebody tell him a bunch of obvious or uneventful things he already knows. For example, if you ask somebody what they learned at their presentation training, they'll tell you that they learned how to present more effectively, speak more confidently, and communicate their ideas better.

No kidding.

But if you instead ask them, "What did you learn that you didn't expect?" they might actually reveal some insight and say something more like this:

âEven though we say donât shoot the messenger all the time, you ARE the message.â

Or

âIf you win the heart, the mind follows.â

It's the non-obvious stuff that surprises you (at least at first). Or sometimes insight strikes us as something that should have been obvious all along, and it becomes the new obvious, or the new normal.

Ward used this insight-gathering technique to share software patterns more effectively. He wanted stories and insights from people, rather than descriptions of the obvious.

I've used it myself over the years and it really helps get to deeper truths. If you are a truth seeker or a lover of insights, you'll enjoy how you can tease out more insights just by changing your questions. For example, if you have kids, don't ask, "How was your day?" Ask them, "What was your favorite part of the day?" or "What did you learn that surprised you?"

Wow, I know this is a short post, but I almost left without defining insight.

According to the dictionary, insight is "the capacity to gain an accurate and deep intuitive understanding of a person or thing." Or you may see insight explained as inner sight, mental vision, or wisdom.

I like Edward de Bono's simple description of insight as "Eureka moments."

Some people count steps in their day. I count my "ah-ha" moments. After all, the most important ingredient of effective ideation and innovation is... yep, you guessed it: insight!

For a deeper dive into the power of insight, read my "Insight Explained" page on Sources Of Insight.com.

Most problems are quite straightforward to solve: when something is slow, you can either optimize it or parallelize it. When you hit a throughput barrier, you partition the workload across more workers. But when you face problems that involve garbage collection pauses, or you simply hit the limits of the virtual machine you're working with, they get much harder to fix.

When you're working on top of a VM, you may face things that are simply out of your control, namely time drifts and latency. Thankfully, there are enough battle-tested solutions; they just require a bit of understanding of how the JVM works.

If you can serve 10K requests per second within certain performance parameters (memory and CPU), that doesn't automatically mean you'll be able to scale linearly to 20K. If you allocate too many objects on the heap, or waste CPU cycles on work that can be avoided, you'll eventually hit a wall.

The simplest (yet underrated) way of saving on memory allocations is object pooling. Even though the concept sounds similar to pooling connections and socket descriptors, there's a slight difference.

When we're talking about socket descriptors, we have a limited, rather small (tens, hundreds, or at most thousands) number of descriptors to go through. These resources are pooled because of their high initialization cost (establishing a connection, performing a handshake over the network, memory-mapping a file, and so on). In this article we'll talk about pooling larger numbers of short-lived objects which are not so expensive to initialize, in order to save allocation and deallocation costs and avoid memory fragmentation.
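To make that concrete, here is a minimal object pool sketch (illustrative only and deliberately single-threaded; a production pool would also need thread safety, a size cap, and a hook for resetting returned objects):

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Minimal object pool sketch: callers borrow previously allocated instances
// instead of allocating new ones, which reduces the allocation and
// deallocation pressure described above.
public final class ObjectPool<T> {

    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public ObjectPool(Supplier<T> factory, int initialSize) {
        this.factory = factory;
        for (int i = 0; i < initialSize; i++) {
            free.push(factory.get());
        }
    }

    // Borrow an instance, allocating a new one only if the pool is empty.
    public T acquire() {
        T instance = free.poll();
        return instance != null ? instance : factory.get();
    }

    // Return an instance for reuse; the caller is responsible for resetting its state.
    public void release(T instance) {
        free.push(instance);
    }
}
```

Such a pool would be created once, for example with a buffer or StringBuilder factory, and each request would call acquire() at the start and release() at the end instead of allocating fresh objects.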

This is a practical book about the work of creating software and providing estimates when needed. Her estimation troubleshooting guide highlights many of the hidden issues with estimating, such as multitasking, student syndrome, using the wrong units to estimate, and trying to estimate things that are too big. — Ryan Ripley

While there is agreement that you should use DoD at scale, how to apply it is less clear.

The Definition of Done (DoD) is an important technique for increasing the operational effectiveness of team-level Agile. The DoD provides a team with a set of criteria that they can use to plan and bound their work. As Agile is scaled up to deliver larger, more integrated solutions, the question often asked is whether the concept of the DoD can still be applied. And if it is applied, does that require another layer of done (more complexity)?

The answer to the first question is simple and straightforward. If the question is whether the Definition of Done technique can be used as Agile projects are scaled, then the answer is an unequivocal "yes". In preparation for this essay I surveyed a few dozen practitioners and coaches on the topic to ensure that my use of the technique at scale wasn't extraordinary. To a person, they all used the technique in some form. Mario Lucero, an Agile coach in Chile (interviewed on SPaMCAST 334), said it succinctly: "No, the use of Definition of Done doesn't depend on how large is the project."

While everyone agreed that the DoD makes sense in a scaled Agile environment, there is far less consensus on how to apply the technique. The divergence of opinion and practice centered on whether the teams working together continually integrate their code as part of their build management process. There are two camps. The first camp typically finds itself in organizations that integrate functions as a final step in a sprint, perform integration as a separate function outside of development, or use a separate hardening sprint. This camp generally feels that applying the Definition of Done requires a separate DoD specifically for integration. This DoD would include requirements for integrating functions, testing the integration, and architectural requirements that span teams. The second camp of respondents finds itself in environments where continuous integration is performed. In this scenario each respondent either added integration criteria to the team DoD or did nothing at all. The primary difference boiled down to whether the team members were responsible for making sure their code integrated with the overall system or whether someone else (real or perceived) was responsible.

In practice, the way the DoD is applied includes a bit of the infamous "it depends" magic. During our discussion of the topic, Luc Bourgault from Wolters Kluwer stated, "in a perfect world the definition should be the same, but I think we should accept differences when it makes sense." Pradeep Chennavajhula, Senior Global VP at QAI, made three points:

Principles and characteristics of the Definition of Done do not change with the size of the project.

However, the considerations and level of detail will certainly be impacted.

This may, however, create a perception that the Definition of Done varies by the size of the project.

The Definition of Done is useful for all Agile work, whether for a single team or a large, scaled effort. However, how you have organized your Agile effort will have more of an impact on your approach than size alone.

Fangjin Yang, creator of the Druid real-time analytical database, talks with Robert Blumen. They discuss the OLAP (online analytical processing) domain, OLAP concepts (hypercube, dimension, metric, and pivot), types of OLAP queries (roll-up, drill-down, and slicing and dicing), use cases for OLAP by organizations, the OLAP store's position in the enterprise workflow, what "real time" […]

I had this running inside a Python script which incremented ‘skip’ by 10,000 on each iteration as long as ‘crimesProcessed’ came back with a value > 0.

To start with, the 'CATEGORY' relationships were being created very quickly, but things slowed down quite noticeably about 1 million nodes in.

I profiled the queries but the query plans didn't show anything obviously wrong. My suspicion was that I had a super-node problem, where the Cypher runtime was iterating through all of the sub-category's relationships to check whether one of them pointed to the crime on the other side of the 'MERGE' statement.

I cancelled the import job and wrote a query to check how many relationships each sub category had. It varied from 1,000 to 93,000, which somewhat confirmed my suspicion.

Michael suggested tweaking the query to use the shortestpath function to check for the existence of the relationship and then use the ‘CREATE’ clause to create it if it didn’t exist.

The neat thing about the shortestpath function is that it will start from the side with the lowest cardinality and as soon as it finds a relationship it will stop searching. Let’s have a look at that version of the query:

This worked much better – 10,000 nodes processed in ~2.5 seconds – and the time remained constant as more relationships were added. This allowed me to create all the category nodes, but we can actually do even better if we use CREATE UNIQUE instead of MERGE.

It is good practice to first write large user stories (commonly known as epics) and then to split them into smaller pieces, a process known as product backlog refinement or grooming. When product backlog items are split, they are often re-estimated.

I’m often asked if the sum of the estimates on the smaller stories must equal the estimate on the original, larger story.

No.

Part of the reason for splitting the stories is to understand them better. Team members discuss the story with the product owner. As a product owner clarifies a user story, the team will know more about the work they are to do.

That improved knowledge should be reflected in any estimates they provide. If those estimates don’t sum to the same value as the original story, so be it.

But What About the Burndown?

But, I hear you asking, what about the release burndown chart? A boss, client or customer was told that a story was equal to 20 points. Now that the team split it apart, it’s become bigger.

When we told them the story would be 20 points, that meant perhaps 20, perhaps 15, perhaps 25. Perhaps even 10 or 40 if things went particularly well or poorly.

OK, you’ve probably delivered that message, and it may have gone in one ear and out the other of your boss, client or customer. So here’s something else you should be doing that can protect you against a story becoming larger when split and its parts are re-estimated.

I’ve always written and trained that the numbers in Planning Poker are best thought of as buckets of water.

You have, for example, an 8 and a 13, but not a 10 card. If you have a story that you think is a 10, you need to estimate it as a 13. This slight rounding up (which only occurs on medium to large numbers) will mitigate the effect of stories becoming larger when split.

Consider the example of a story a team thinks is a 15. If they play Planning Poker the way I recommend, they will call that large story a 20.

Later, they split it into multiple smaller stories. Let’s say they split it into stories they estimate as 8, 8 and 5. That’s 21. That’s significantly larger than the 15 they really thought it was, but not much larger at all than the 20 they put on the story.
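Expressed as a tiny sketch (the card values below are one common Planning Poker sequence; adjust them to whatever deck you use), the rounding rule is simply "take the smallest card that is at least as large as the raw estimate":

```java
// Illustrative only: round a raw estimate up to the next Planning Poker card.
static int nextPlanningPokerCard(double rawEstimate) {
    int[] cards = {1, 2, 3, 5, 8, 13, 20, 40, 100};
    for (int card : cards) {
        if (card >= rawEstimate) {
            return card;
        }
    }
    return cards[cards.length - 1]; // anything beyond 100 is capped at the largest card
}

// nextPlanningPokerCard(10) returns 13; nextPlanningPokerCard(15) returns 20.
```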

In practice, I've found this slight pessimistic bias to work well to counter the natural tendency I believe many developers have to underestimate, and to provide a balance against those who will be overly shocked when an actual result overruns its estimate.

I hear all the time that estimating is the same as guessing. This is not true mathematically, nor is it true in business process terms. Guessing is an approach used by many who don't understand that making decisions in the presence of uncertainty requires us to understand the impact of those decisions. When the future is uncertain, we need to know that impact in probabilistic terms. And with this comes the confidence, precision, and accuracy of the estimate.

What's the difference between estimate and guess? The distinction between the two words is the degree of care taken in arriving at a conclusion.

The word estimate is derived from the Latin word aestimare, meaning to value. It shares its origin with estimable, which means capable of being estimated or worthy of esteem, and of course esteem, which means regard, as in high regard.

To estimate means to judge the extent, nature, or value of something (connected to regard, as in "he is held in high regard"), with the implication that the result is based on expertise or familiarity. An estimate is the resulting calculation or judgment. A related term is approximation, meaning close or near.

In between a guess and an estimate is an educated guess, a more casual estimate. An idiomatic term for this type of middle-ground conclusion is ballpark figure. The origin of this American English idiom, which alludes to a baseball stadium, is not certain, but one conclusion is that it is related to in the ballpark, meaning close in the sense that someone at such a location may not be at a precise spot but is at least in the stadium.

To guess is to believe or suppose, to form an opinion based on little or no evidence, or to be correct by chance or conjecture. A guess is a thought or idea arrived at by one of these methods. Synonyms for guess include conjecture and surmise, which like guess can be employed both as verbs and as nouns.

We could have a hunch or an intuition, or we can engage in guesswork or speculation. Dead reckoning is the same thing as guesswork, although dead reckoning originally referred to a navigation process based on reliable information. Near synonyms describing thoughts or ideas developed with more rigor include hypothesis and supposition, as well as theory and thesis.

A guess is a casual, perhaps spontaneous conclusion. An estimate is based on intentional thought processes supported by data.

What Does This Mean For Projects?

If we're guessing, we're drawing uninformed conclusions, usually in the absence of data, experience, or any credible evidence. If we're estimating, we are drawing informed conclusions based on data, past performance, and models, including Monte Carlo models and parametric models.
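As a toy illustration of what model-based estimating can look like (the tasks, ranges and triangular distribution below are invented for the example, not taken from any real project), a Monte Carlo run samples each task's duration many times and reads confidence levels off the resulting distribution:

```java
import java.util.Arrays;
import java.util.Random;

// Toy Monte Carlo estimate: sample each task's duration from a
// triangular(best, likely, worst) distribution, sum the samples, and report
// percentiles of the total. The task figures are made up for the example.
public final class MonteCarloEstimate {

    // {best case, most likely, worst case} in days - hypothetical inputs.
    private static final double[][] TASKS = {
        {2, 3, 6},
        {1, 2, 4},
        {5, 8, 15},
    };

    public static void main(String[] args) {
        int trials = 100_000;
        Random random = new Random(42);
        double[] totals = new double[trials];

        for (int t = 0; t < trials; t++) {
            double total = 0;
            for (double[] task : TASKS) {
                total += sampleTriangular(random, task[0], task[1], task[2]);
            }
            totals[t] = total;
        }

        Arrays.sort(totals);
        System.out.printf("P50: %.1f days, P80: %.1f days%n",
                totals[trials / 2], totals[(int) (trials * 0.8)]);
    }

    // Inverse-transform sampling from a triangular distribution.
    private static double sampleTriangular(Random r, double low, double mode, double high) {
        double u = r.nextDouble();
        double cut = (mode - low) / (high - low);
        if (u < cut) {
            return low + Math.sqrt(u * (high - low) * (mode - low));
        }
        return high - Math.sqrt((1 - u) * (high - low) * (high - mode));
    }
}
```

The output is a probabilistic statement of the kind described above: roughly a 50% chance of finishing within one total, and an 80% chance within a larger one.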

When we hear that decisions can be made without estimates, or that all estimating is guessing, we now know that, mathematically and as a business process, neither of these is true.

The most frequent questions we answer for developers and DevOps engineers are about our architecture and how we achieve such high availability. Some of them are very skeptical about high availability with bare metal servers, while others are skeptical about how we distribute data worldwide. However, the question I prefer is "How is it possible for a startup to build an infrastructure like this?" It is true that our current architecture is impressive for a young company:

Just as Rome wasn't built in a day, neither was our infrastructure. This series of posts will explore the 15 instrumental steps we took when building it. I will even discuss our outages and bugs so that you can understand how we used them to improve our architecture.

The first blog post of this series focused on our early days in beta, and the second post on the first 18 months of the service, including our first outages. In this last post, I will describe how we transformed our "startup" architecture into something new that was able to meet the expectations of big public companies.

When you are implementing a microservices architecture, you want to keep services small. This should also apply to the frontend. If you don't, you will only reap the benefits of microservices for the backend services. An easy solution is to split your application up into separate frontends. When you have a big monolithic frontend that can't be split up easily, you have to think about making it smaller. You can decompose the frontend into separate components independently developed by different teams.

Imagine you are working at a company that is switching from a monolithic architecture to a microservices architecture. The application you are working on is a big client-facing web application. You have recently identified a couple of self-contained features and created microservices to provide each functionality. Your former monolith has been carved down to the bare essentials for providing the user interface: your public-facing web frontend. This microservice has only one responsibility, providing the user interface, and it can be scaled and deployed separately from the other backend services.

You are happy with the transition: individual services can fit in your head, multiple teams can work on different applications, and you are speaking at conferences about your experiences with the transition. However, you're not quite there yet: the frontend is still a monolith that spans the different backends. This means that on the frontend you still have some of the same problems you had before switching to microservices. The image below shows a simplification of the current architecture.

With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Backend teams can't deliver business value without the frontend being updated, since an API without a user interface doesn't do much. More backend teams mean more new features, and therefore more pressure on the frontend team(s) to integrate them. To compensate, it is possible to make the frontend team bigger or to have multiple teams working on the same project. But because the frontend still has to be deployed in one go, teams cannot work independently: changes have to be integrated into the same project, and the whole project needs to be tested, since a change can break other features.
Another option is to have the backend teams integrate their new features with the frontend and submit a pull request. This helps divide the work, but to do it effectively a lot of knowledge has to be shared across the teams to keep the code consistent and at the same quality level. In practice this means the teams are not working independently either. With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Besides not being able to scale, there is also the classic overhead of separate backend and frontend teams. Each time there is a breaking change in the API of one of the services, the frontend has to be updated. Especially when a feature is added to a service, the frontend has to be updated to ensure your customers can even use the feature. If your frontend is small enough, it can be maintained by a team that is also responsible for one or more of the services coupled to the frontend, which means there is no overhead in cross-team communication. But because the frontend and the backend cannot then be worked on independently, you are not really doing microservices. For an application which is small enough to be maintained by a single team, it is probably a good idea not to do microservices at all.

If you do have multiple teams working on your platform, having multiple smaller frontend applications would avoid these problems. Each frontend would act as the interface to one or more services, and each of these services would have its own persistence layer. This is known as vertical decomposition. See the image below.

When splitting up your application you have to make sure you make the right split, just as with the backend services. First you have to recognize the bounded contexts into which your domain can be split. A bounded context is a partition of the domain model with a clear boundary: within a bounded context there is high coupling, and between different bounded contexts there is low coupling. These bounded contexts are then mapped to microservices within your application. This way the communication between services is also limited; in other words, you limit your API surface. This in turn limits the need to make changes to the API and ensures truly separately operating teams.

Often you are unable to separate your web application into multiple entirely separate applications. A consistent look and feel has to be maintained and the application should behave as a single application. However, the application and the development team are big enough to justify a microservices architecture. Examples of such big client-facing applications can be found in online retail, news, social networks and other online platforms.

Although a total split of your application might not be possible, it might be possible to have multiple teams working on separate parts of the frontend as if they were entirely separate applications. Instead of splitting your web app entirely, you split it up into components which can be maintained separately. This way you are doing a form of vertical decomposition while you still have a single, consistent web application. To achieve this you have a couple of options.

Share code

You can share code to make sure that the look and feel of the different frontends is consistent. However, you then risk coupling services via the common code. This could even result in not being able to deploy and release separately. It will also require some coordination regarding the shared code.

Therefore, when you are going to share code it is generally a good idea to think about the API it is going to provide. Calling your shared library "common", for example, is generally a bad idea. The name suggests that developers should put any code that could be shared by some other service into the library. "Common" is not a functional term but a technical one, which means the library doesn't focus on providing a specific functionality. This results in an API without a specific goal, which will be subject to frequent change. That is especially bad for microservices, because multiple teams then have to migrate to a new version whenever the API is broken.

Although sharing code between microservices has disadvantages, generally all microservices will share code by using open source libraries. Because this code is used by a lot of projects, special care is taken not to break compatibility. When you're going to share code, it is a good idea to hold your shared code to the same standards. When your library is not specific to your business, you might as well release it publicly, which encourages you to think twice about breaking the API or putting business-specific logic in the library.

Composite frontend

It is possible to compose your frontend out of different components. Each of these components could be maintained by a separate team and deployed independently of the others. Again it is important to split along bounded contexts to limit the API surface between the components. The image below shows an example of such a composite frontend.

Admittedly, this is an idea we already saw in portlets during the SOA age. However, in a microservices architecture you want the frontend components to be deployable fully independently, and you want a clean separation that ensures no, or only limited, two-way communication is needed between the components.

It is possible to integrate at development time, at deployment time, or at runtime. Each of these integration stages involves a different tradeoff between flexibility and consistency. If you want separate deployment pipelines for your components, you want a more flexible approach like runtime integration. If it is likely that different versions of components might break functionality, you need more consistency, which you get with development-time integration. Integration at deployment time could give you the same flexibility as runtime integration if you are able to integrate different versions of components in different environments of your build pipeline; however, this would mean creating a different deployment artifact for each environment.

Software architecture should never be a goal, but a means to an end

Combining multiple components via shared libraries into a single frontend is an example of development-time integration. However, it doesn't give you much flexibility with regard to separate deployment; it is still a classical integration technique. But since software architecture should never be a goal but a means to an end, it can still be the best solution for the problem you are trying to solve.

More flexibility can be found in runtime integration. An example of this is using AJAX to load the HTML and other dependencies of a component. Then the main application only needs to know where to retrieve the component from, which is a good example of a small API surface. Of course, doing a request after page load means that users might see components loading. It also means that clients that don't execute JavaScript will not see the content at all, for example bots and spiders that don't execute JavaScript, real users who block JavaScript, and users of screen readers that don't execute JavaScript.

When runtime integration via JavaScript is not an option, it is also possible to integrate components using a middleware layer. This layer fetches the HTML of the different components and composes it into a full page before returning the page to the client, which means clients always retrieve all of the HTML at once. An example of such middleware is Varnish's Edge Side Includes. For more flexibility it is also possible to implement such a server yourself; an open source example is Compoxure.
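As a very rough sketch of that idea (the fragment URLs and the page template are made up, and real middleware would add caching, timeouts and fallbacks), a composing layer fetches each component's HTML and stitches the pieces into one response:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Toy page compositor: fetch the HTML fragments of two frontend components
// and stitch them into a single page before it is returned to the client.
public final class PageCompositor {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // Hypothetical component endpoints, one per team / bounded context.
        String header = fetchFragment("http://header-service.internal/fragment");
        String catalog = fetchFragment("http://catalog-service.internal/fragment");

        String page = "<html><body>" + header + catalog + "</body></html>";
        System.out.println(page);
    }

    private static String fetchFragment(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```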

Once you have your composite frontend up and running, you can start to think about the next step: optimization. Having separate components from different sources means that many resources have to be retrieved by the client. Since retrieving multiple resources takes longer than retrieving a single resource, you want to combine resources. Again, this can be done at development time or at runtime, depending on the integration techniques you chose when decomposing your frontend.

Conclusion

When transitioning an application to a microservices architecture, you will run into issues if you keep the frontend a monolith. The goal is to achieve good vertical decomposition. What goes for the backend services goes for the frontend as well: split into bounded contexts to limit the API surface between components, and use integration techniques that avoid coupling. When you are working on a single big frontend it might be difficult to make this decomposition, but when you want to deliver faster by using multiple teams working on a microservices architecture, you cannot exclude the frontend from decomposition.

For a long time, one of the major things that held me back in life was thinking I needed to ask permission to do something or be someone. I lived with a mentality that allowed others to limit and define my potential. I allowed other people to tell me who I was, what I was […]

Update 27th July 2015:
The Design Support Library is now available, simplifying the implementation of elements like the Floating Action Button; check out the post for details.

Original Post:
Material design is a new system for visual, interaction and motion design. We originally launched the Topeka web app as an Open Source example of material design on the web.
Today, we're publishing a new material design example: the Android version of Topeka. It demonstrates that the same branding and material design principles can be used to create a consistent experience across platforms.
Grab the code today on GitHub.
The juicy bits
While the project demonstrates a lot of different aspects of material design, let's take a quick look at some of the most interesting bits.
Transitions
Topeka for Android features several possibilities for transition implementation. For starters, the Transitions API within ActivityOptions provides an easy yet effective way to make great transitions between Activities.
To achieve this, we register the shared string in a resources file like this:
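The original resources snippet isn't reproduced here; as an illustrative sketch of how such a shared transition name is then used from the Java side (the resource, view and Activity names are hypothetical, not Topeka's actual identifiers):

```java
// Sketch only: apply a shared-element transition name that is declared once
// as a string resource, then start the next Activity with a scene transition.
View avatar = findViewById(R.id.avatar);
String transitionName = getString(R.string.transition_avatar);
avatar.setTransitionName(transitionName);

ActivityOptions options =
        ActivityOptions.makeSceneTransitionAnimation(this, avatar, transitionName);
startActivity(new Intent(this, QuizActivity.class), options.toBundle());
```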

For multiple transition participants with ActivityOptions you can take a look at the CategorySelectionFragment.
Animations
When it comes to more complex animations, you can orchestrate your own, as we did for scoring.
To get this right it is important to make sure all elements are carefully choreographed.
The AbsQuizView class performs a handful of carefully crafted animations when a question has been answered:
The animation starts with a color change for the floating action button, depending on the provided answer. After this has finished, the button shrinks out of view with a scale animation. The view holding the question itself also moves offscreen. We scale this view to a small green square before sliding it up behind the app bar. During the scaling the foreground of the view changes color to match the color of the fab that just disappeared. This establishes continuity across the various quiz question states.
All this takes place in less than a second's time. We introduced a number of minor pauses (start delays) to keep the animation from being too overwhelming, while ensuring it's still fast.
The code responsible for this exists within AbsQuizView's performScoreAnimation method.
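As a rough sketch of that kind of choreography (hypothetical views, colors and timings, not the actual performScoreAnimation code), the steps can be chained with ViewPropertyAnimator and small start delays:

```java
// Sketch only: tint the FAB with the answer color, shrink it out of view,
// then scale the question view down and slide it up, with short start delays
// so the steps read as one choreographed sequence.
fab.setBackgroundTintList(ColorStateList.valueOf(answerColor));

fab.animate()
        .scaleX(0f).scaleY(0f)
        .setStartDelay(200)
        .withEndAction(new Runnable() {
            @Override
            public void run() {
                questionView.animate()
                        .scaleX(0.1f).scaleY(0.1f)
                        .translationY(-questionView.getHeight())
                        .setStartDelay(100)
                        .start();
            }
        })
        .start();
```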
FAB placement
The recently announced Floating Action Buttons are great for executing promoted actions. In the case of Topeka, we use one to submit an answer. The FAB also straddles two surfaces with variable heights, like this:
To achieve this we query the height of the top view (R.id.question_view) and then set padding on the FloatingActionButton once the view hierarchy has been laid out:
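The project's actual code isn't shown here, but a minimal sketch of the approach (the FAB id and the exact offset math are illustrative) is to wait for the first layout pass, read the question view's height, and pad the FAB accordingly:

```java
// Sketch only: once the hierarchy has been laid out, read the height of the
// top view and pad the FAB so it straddles both surfaces.
final View questionView = findViewById(R.id.question_view);
final View fab = findViewById(R.id.submit_answer);

questionView.getViewTreeObserver().addOnGlobalLayoutListener(
        new ViewTreeObserver.OnGlobalLayoutListener() {
            @Override
            public void onGlobalLayout() {
                questionView.getViewTreeObserver().removeOnGlobalLayoutListener(this);
                int offset = questionView.getHeight() - fab.getHeight() / 2;
                fab.setPadding(fab.getPaddingLeft(), offset,
                        fab.getPaddingRight(), fab.getPaddingBottom());
            }
        });
```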

We also call setClipToOutline(true) on the target view in order to get the right shadow shape.
Check out more details in the outlineprovider package within Topeka for Android.
Vector Drawables
We use vector drawables to display icons in several places throughout the app. You might be aware of our collection of Material Design Icons on GitHub, which contains about 750 icons for you to use. The best thing for Android developers: as of Lollipop you can use these VectorDrawables within your apps, so they will look crisp no matter the density of the device's screen. For example, the back arrow ic_arrow_back from the icons repository has been adapted to Android's vector drawable format.

The vector drawable only has to be stored once within the res/drawable folder. This means less disk space is being used for drawable assets.
Property Animations
Did you know that you can easily animate any property of a View beyond the standard transformations offered by the ViewPropertyAnimator class (and its handy View#animate syntax)? For example, in AbsQuizView we define a property for animating the view's foreground color.
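The project's code isn't reproduced here; a minimal sketch of such a property (illustrative, with made-up colors and a hypothetical quizView target) looks roughly like this:

```java
// Sketch only: expose a FrameLayout's foreground color as an animatable Property.
Property<FrameLayout, Integer> foregroundColor =
        new Property<FrameLayout, Integer>(Integer.class, "foregroundColor") {
            @Override
            public Integer get(FrameLayout layout) {
                Drawable foreground = layout.getForeground();
                return foreground instanceof ColorDrawable
                        ? ((ColorDrawable) foreground).getColor()
                        : Color.TRANSPARENT;
            }

            @Override
            public void set(FrameLayout layout, Integer value) {
                layout.setForeground(new ColorDrawable(value));
            }
        };

// Drive it with an ObjectAnimator, interpolating through color space.
ObjectAnimator animator =
        ObjectAnimator.ofInt(quizView, foregroundColor, Color.WHITE, Color.GREEN);
animator.setEvaluator(new ArgbEvaluator());
animator.start();
```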

This is not particularly new, as it was added with API 12, but it can still come in quite handy when you want to animate color changes in an easy fashion.
Tests
In addition to exemplifying material design components, Topeka for Android also features a set of unit and instrumentation tests that utilize the new testing APIs, namely "Gradle Unit Test Support" and the "Android Testing Support Library." The implemented tests make the app resilient against changes to the data model. This catches breakages early, gives you more confidence in your code and allows for easy refactoring. Take a look at the androidTest and test folders for more details on how these tests are implemented within Topeka. For a deeper dive into Testing on Android, start reading about the Testing Tools.
What's next?
With Topeka for Android, you can see how material design lets you create a more consistent experience across Android and the web. The project also highlights some of the best material design features of the Android 5.0 SDK and the new Android Design Library.
While the project currently only supports API 21+, there's already a feature request open to support earlier versions, using tools like AppCompat and the new Android Design Support Library.
Have a look at the project and let us know in the project issue tracker if you'd like to contribute, or on Google+ or Twitter if you have questions.

Our current AngularJS project has been under development for about 2.5 years, so the number of unit tests has increased enormously. We tend to have a coverage percentage near 100%, which has led to 4000+ unit tests. These include service specs and view specs. You may know that AngularJS, when abused a bit, is not suited for super-large applications, but since we tamed the beast and have an application with more than 16,000 lines of high-performing AngularJS code, we want to stay in control of the entire development process without any performance losses.

We are using Karma Runner with Jasmine, which is fine for a small number of specs and for debugging, but running the full test suite takes up to 3 minutes on a 2.8 GHz MacBook Pro.

We are testing our code continuously, so we came up with a solution to split all the unit tests into several shards. This parallel execution of the unit tests decreased the execution time a lot. We will write about the details of this Karma parallelization later on this blog. Sharding helped us a lot when we want to run the full unit test suite, i.e. when using it in the pre-push hook, but during development you want quick feedback cycles about coverage and failing specs (red-green testing).

With such a long unit test cycle, even when running in parallel, many of our developers are fdescribe-ing the specs they are working on, so that the feedback is instant. However, this is quite labor-intensive, and sometimes an fdescribe is pushed accidentally.

And then... we discovered WallabyJS. It is just an ordinary test runner like Karma. Even the configuration file is almost a copy of our karma.conf.js.
The difference is in the details. Out of the box it runs the unit test suite in 50 seconds, thanks to the extensive use of Web Workers. Then the fun starts.

Screenshot of Wallaby in action (IntelliJ). Shamelessly grabbed from wallaby.com

I use Wallaby as an IntelliJ IDEA plugin, which adds colored annotations to the left margin of my code. Green squares indicate covered lines/statements, orange indicates partly covered code, and grey means "please write a test for this functionality or I will introduce hard-to-find bugs". Colorblind people see just kale-green squares on every line, since the default colors are not chosen very well, but they are adjustable via the Preferences menu.

Clicking on a square pops up a box with a list of the tests that induce the coverage. When a test fails, it also tells me why.

A dialog box showing contextual information (wallaby.com)

Since the implementation and the tests are now instrumented, finding bugs and increasing your coverage go a lot faster. Besides that, you don't need to hassle with fdescribes and fits to run individual tests during development. Thanks to the instrumentation, Wallaby runs your tests continuously and re-runs only the relevant tests for the parts you are working on. In real time.

It is just like in mathematics class: when I had to write a proof for Thales' theorem, I wrote "Can't you see that B has a right angle?! Q.E.D.", but the teacher still gave me an F.

You want to make things work, right? So you start programming until your feature is implemented. When it is implemented, it works, so you do not need any tests. You want to proceed and make more cool features.

Suddenly feature 1 breaks, because you did something weird in some service that is reused all over your application. OK, let's fix it: keep refreshing the page until everything is stable again. This is the point in time where you regret that you (or even better, your teammate) did not write tests.

In this article I give you 5 reasons why you should write them.

1. Regression testing

The scenario described in the introduction is a typical example of a regression bug. Something works, but it breaks while you are looking the other way.
If you had had tests with 100% code coverage, a red error would have appeared in the console or, even better, a siren would have gone off in the room where you are working.

Although there are some misconceptions about coverage, it at least tells others that there is a fully functional test suite. And it may give you a high grade when an audit company like SIG inspects your software.

100% Coverage feels so good

100% code coverage does not mean that you have tested everything.
It means that the test suite is implemented in such a way that it calls every line of the tested code, but it says nothing about the assertions made during the test run. If you want to measure whether your specs make a fair number of assertions, you have to do mutation testing.

This works as follows.

An automated task runs the test suite once. Then some parts of your code are modified: mainly conditions flipped, for-loops made shorter or longer, etc. The test suite is run a second time. If tests fail after the modifications have been made, there is an assertion covering that case, which is good.
However, 100% coverage does feel really good if you are an OCD person.

The better your test coverage and assertion density are, the higher the probability of catching regression bugs. Especially as an application grows, you may encounter a lot of regression bugs during development, which is exactly where you want to catch them.

Suppose that a form shows a funny easter egg when the filled-in birthdate is 06-06-2006, and the line of code responsible for this behaviour is hidden in a complex method. A fellow developer may change this line, not because he is not funny, but because he just does not know. A failing test notifies him immediately that he is removing your easter egg, while without a test you would only find out about the removal two years later.

Still, every application contains bugs you are unaware of. When an end user tells you about a broken page, you may find out that the link he clicked on was generated with some missing information, i.e. users//edit instead of users/24/edit.

When you find a bug, first write a (failing) test that reproduces the bug, then fix the bug. That bug will never happen again. You win.

2. Improve the implementation via new insights

âPremature optimalization is the root of all evilâ is something you hear a lot. This does not mean that you have to implement you solution pragmatically without code reuse.

Good software craftsmanship is not only about solving a problem effectively; it is also about maintainability, durability, performance and architecture. Tests can help you with this. It forces you to slow down and think.

If you start writing your tests and you have trouble with it, this may be an indication that your implementation can be improved. Furthermore, your tests let you think about input and output, corner cases and dependencies. So do you think that you understand all aspects of the super method you wrote that can handle everything? Write tests for this method, and better code is guaranteed.

Test-Driven Development even helps you optimize your code before you write it, but that is another discussion.

3. It saves time, really

The number one excuse not to write tests is that you do not have time for it, or your client does not want to pay for it. Writing tests can indeed cost you some time, even if you are using boilerplate-elimination frameworks like Mox.

However, if I ask you whether you would make other design choices if you had the chance (and time) to start over, you would probably say yes. A total codebase refactoring is a "no go" because you cannot foresee which parts of your application will break. If you still accept the refactoring challenge, it will at least give you a lot of headaches and cost you a lot of time, which you could have used for writing the tests. But you had no time for writing tests, right? So your crappy implementation stays.

A bug can always be introduced, even in well-refactored code. How many times have you said to yourself, after a day of hard work, that you spent 90% of your time finding and fixing a nasty bug? You want to write cool applications, not fix bugs.
When you have tested your code very well, 90% of the bugs introduced are caught by your tests. Phew, that saved the day. You can focus on writing cool stuff. And tests.

In the beginning, writing tests can take up more than half of your time, but when you get the hang of it, writing tests becomes second nature. It is important that you are writing code for the long term. As an application grows, it really pays off to have tests. It saves you time, and developing becomes more fun because you are not being blocked by hard-to-find bugs.

4. Self-updating documentation

Writing clean, self-documenting code is one of the main things we adhere to. Not only for yourself, especially when you have not seen the code for a while, but also for your fellow developers. We only write comments if a piece of code is particularly hard to understand. Whatever style you prefer, it has to be clear in some way what the code does.

// Beware! Dragons beyond this point!

Some people like to read the comments, some read the implementation itself, and some read the tests. What I like about the tests, for example when you are using a framework like Jasmine, is that they provide a structured overview of each method's features. A separate documentation file can be as structured as you want, but the main issue with documentation is that it is never up to date. Developers do not like to write documentation, forget to update it when a method signature changes, and eventually stop writing docs.

Developers also do not like to write tests, but they at least serve more purposes than docs. If you are using the test suite as documentation, your documentation is always up to date with no extra effort!

5. It is fun

Nowadays there are no separate testers and developers; the developers are the testers. People who write good tests are also the best programmers. Actually, your test is also a program, so if you like programming, you should like writing tests.
The reason writing tests may feel unproductive is that it gives you the idea that you are not producing something new.

Is the build red? Fix it immediately!

However, with a modern software development approach, your tests should be an integrated part of your application. The tests can be executed automatically using build tools like Grunt and Gulp, and they may run in a continuous integration pipeline via Jenkins, for example. If you are really cool, a new deploy to production happens automatically when the tests pass and everything else is OK. With tests you have more confidence that your code is production-ready.

A lot of measurements can be generated as well, like coverage and mutation testing, giving the OCD-oriented developers a big smile when everything is green and the score is 100%.

If the test suite fails, fixing it is the first priority, to keep the codebase in good shape. It takes some discipline, but once you get used to it, you have more fun developing new features and making cool stuff.

Software Process and Measurement Cast 352 features our interview with Gil Broza. We discussed Gil's new book The Agile Mind-Set. Do you know what the Agile Mind-Set is or how to get one? Gil's new book explains the concept of the Agile Mind-Set and how you can find it in order to deliver more value!

Gil Broza helps organizations, teams and individuals implement high-performance Agile principles and practices that work for them. His coaching and training clients (over 1,300 professionals in 40 companies) have delighted their customers, shipped working software on time, increased their productivity and decimated their software defects. Beyond teaching, Gil helps people overcome limiting habits, fears of change, blind spots and outdated beliefs, and reach higher levels of performance, confidence and accomplishment.

Gil has an M.Sc. in Computational Linguistics and a B.Sc. in Computer Science and Mathematics from the Hebrew University of Jerusalem, Israel. He is a certified NLP Master Practitioner and has studied organizational behavior and development extensively. He has written several practical papers for the Cutter IT Journal, other trade magazines, and conferences, winning the Best Practical Paper award at XP/Agile Universe 2004. Gil co-produced the Agile Coaching stage for the "Agile 2010" and "Agile 2009" conferences.

I have a challenge for the Software Process and Measurement Cast listeners for the next few weeks. I would like you to find one person that you think would like the podcast and introduce them to the cast. This might mean sending them the URL or teaching them how to download podcasts. If you like the podcast and think it is valuable they will be thankful to you for introducing them to the Software Process and Measurement Cast. Thank you in advance!

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing. This week we tackle the essay titled "Aristocracy, Democracy and System Design"!

Remember: We just completed the Re-Read Saturday of Eliyahu M. Goldratt and Jeff Cox's The Goal: A Process of Ongoing Improvement, which began on February 21st. What did you think? Did the re-read cause you to read The Goal for a refresher? Visit the Software Process and Measurement Blog and review the whole re-read.

Note: If you don't have a copy of the book, buy one. If you use the link below it will support the Software Process and Measurement blog and podcast.

I will be speaking on the impact of cognitive biases on teams! Let me know if you are attending! If you are still deciding on attending let me know because I have a discount code!


More on other great conferences soon!


Next SPaMCAST

The next Software Process and Measurement Cast features three columns. The first is our essay on learning styles. Learning styles are an interesting set of constructs that are useful to consider when you are trying to change the world, or just an organization.


Today we re-read the fourth essay in The Mythical Man-Month, titled "Aristocracy, Democracy and System Design". In this essay, Brooks deals with the role of conceptual integrity in building systems and the organizational impact of getting to an appropriate level of conceptual integrity. According to UC San Diego, conceptual integrity is the principle that anywhere you look in your system, you can tell that the design is part of the same overall design. Brooks suggests that conceptual integrity is the most important consideration in system design. He begins the essay by building the case that conceptual integrity leads to ease of use, and therefore higher value. Systems with conceptual integrity reflect one set of design ideas, and ANY idea or concept that violates conceptual integrity must be excluded. Conceptual integrity is important because systems based on many different architectural concepts are hard both to work with and to maintain. For example, many of the houses in my neighborhood began life as lake cottages. Many of these cottages have been added to over the years in a fairly haphazard manner. Over the last few years many have been knocked down due to the high cost of upkeep that their complexity generates. Software systems are no different; complexity leads to increased support costs, bugs and systems that are hard to use. In Aristocracy, Democracy and System Design, Brooks addresses three questions (he posits four, but postpones the discussion of the fourth until the next essay):

How is conceptual integrity achieved?
Brooks addresses this question from the point of view of a programming system. The goal of a programming system is to make using a computer easy. You could easily substitute any other type of system or application for the term "programming system". Brooks argues that the goal of ease of use dictates unity of design, and therefore conceptual integrity. Ease of use can't exist if the design consists of disparate ideas that make the system both less simple and less straightforward.

Doesn't unity of design imply an aristocracy of architects?
Brooks suggests that conceptual integrity in design comes from one mind, or from the collaboration of a small set of coordinated minds. Unfortunately, constraints like schedule pressure require organizations to use many hands to design, develop and build a system. There are two fixes to the schedule constraint. The first is the separation of design and build resources: architects develop the design in advance of the developers. The architect acts as the user's agent, developing a description of what is to happen, and the developers, in an implementation mode, define how the whats will be constructed. The second solution is to use a surgical team. The surgeon defines how the operation will be done and then directs the operation; the surgeon is the single person who controls the flow of work and is responsible for the outcome. In any project the number of architects will always be smaller than the number of developers or testers, and therefore the architects will be viewed as aristocrats.

What do the implementers do while waiting for the design?
When organizations separate design and implementation, there are complaints because of the perception that the process yields:

A scenario in which an overly rich architecture can't be implemented within the constraints most projects operate under. Brooks admits that this complaint is true and indicates that he will address it in the essay "The Second System Effect". In my experience, many organizations use standards, peer reviews and time boxing to combat this problem.

A scenario in which developers have no outlet for their creativity.
Brooks says this is a false argument. In my experience, even in organizations with the strictest architectural and design constraints, developers have wide latitude to be creative. That creativity is applied in determining how to implement the design within the boundaries and constraints they are given.

A scenario in which developers will have to sit around waiting for the design to be developed.
While Brooks says that this is a false argument, in my experience it is partially true. In scenarios where designers and developers are separated, there will be some timing considerations. The Scaled Agile Framework Enterprise (SAFe) addresses this issue by developing an architectural runway for the developers: just enough design and architecture is developed ahead of the development teams to provide the guidance they need just before they need it.

Conceptual integrity is an important concept that affects how any system will be developed. Brooks links higher levels of conceptual integrity to improved ease of use, productivity and maintainability. However, the old bugaboo of schedule pressure, along with the perception that developers will be sitting on their hands or stripped of their creativity, makes the pursuit of conceptual integrity difficult, but not impossible.

Previous installments of Re-Read Saturday for The Mythical Man-Month