Building the right thing, building it right, fast

Who I am

My name is Jakub Holy and I’m a software craftsmanship enthusiast, a (mainly JVM-based) developer since ~2005, a consultant, and occasionally a project manager, currently working with Iterate AS in Norway. More under About and #opinion.

Your test namespaces do not require cemerick.cljs.test (and thus it is missing from the compiled .js; requiring macros is not enough)

cljsbuild has not included any of your test files (due to a wrong setup etc.; this is essentially another form of the first point)

You are trying to test with the Node runner but have built with :optimizations :none or :whitespace (for Node you need everything concatenated into a single file, which only happens if you use the :simple or :advanced optimizations)

20 Kick-ass programming quotes – including Bill Gates’ “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”, B.W. Kernighan’s “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”, Martin Golding’s “Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.” (my favorite)

Expression Language Injection – a security defect in applications using JSP EL that can sometimes lead to double evaluation of expressions and thus makes it possible to execute data supplied by the user in request parameters etc. as expressions; affects e.g. unpatched Spring 2.x and 3.

Languages etc.

HN discussion about Scala 2.10 – compilation speed and whether it matters, comparison of the speed and type system with Haskell and OCaml, problems with incremental compilation (dependency cycles, fragile base class), some speed up tips such as factoring out subprojects, the pros and cons of implicits etc.

Neal Ford: Functional thinking: Why functional programming is on the rise – Why you should care about functional programming, even if you don’t plan to change languages any time soon – N. Ford explains the advantages of FP and why FP concepts are spreading into other languages (higher abstractions enabling focus on the results over the steps and ceding control to the language, more reusability on a finer level (higher-order functions etc.), few generic data structures with many operations -> better composability, “new” and different tools such as lazy collections, shaping the language towards the problem instead of vice versa, aligning with trends such as immutability)

How to Completely Fail at BDD – a story of an enthusiastic developer who tried to make everyone’s life better by introducing automated BDD tests and failed due to differences in culture (and an inability to change thinking from the traditional testing) and a surprising lack of interest in the tool and in learning how to write good tests: “Culturally, my current team just isn’t ready or interested in something like this.” The moral: it is hard to change people; good ideas are not enough.

M. Feathers: Refactoring is Sloppy – refactoring is often prioritized out of regular development, and refactoring sprints/stories aren’t popular due to past failures etc. A counter-intuitive way to get refactoring in is to imagine, during planning, what the code would need to be like to make it easy to implement a story. Then create a task for making it so before the story itself and assign it to somebody other than the person doing the story (to force a degree of scrutiny and communication). “Like anything else in process, this is medicine. It’s not meant to be ‘the way that people do things for all time’ [..]” – i.e. intended for use when you can’t fit refactoring in otherwise. It may also make the cost of the current bad code more visible. Read also the comments (e.g. the Mikado Method case).

Cyber-dojo: A great way to practice TDD together. Compare your red-green cycle and development over time with other teams. Purposefully minimalistic editor, a number of prepared TDD tasks.

On the Dark Side of “Craftsmanship” – an interesting and provoking article. Some developers, the software labourers, want to get their work done and go home; they don’t have the motivation and energy to continually spend time improving themselves. There is nothing wrong with that and we shouldn’t disparage them for it. We shouldn’t divide people into craftsmen and the bad ones. A summary of and response to the varied reactions follows in More on “Craftsmanship”. The author is right that we can’t expect everybody to spend nights improving her/his programming skills. Still, they should not produce code of poor quality (with few exceptions), since maintaining such code costs a lot. There should be time for enough quality in a 9-5 day, and people should be provided with enough guidance and education to be able to write decent code. (Though I’m not sure how feasible that is, i.e. how much effort it takes to become an acceptable developer.) Does the increased cost of writing (and learning to write) good code outweigh the cost of working with bad code? That is an eternal discussion.

Cloud, web, big data etc.

Whom the Gods Would Destroy, They First Give Real-time Analytics (via Leon) – a very reasonable argument against real-time analytics: yes, we want real-time operational metrics, but “analytics” only makes sense on a sensible amount of data (for the sake of statistical significance etc.); RT analytics could easily provide misguided results.

CAP Twelve Years Later: How the “Rules” Have Changed (tl;dr, via @_dagi) – an in-depth discussion of the CAP theorem and the simplification (2 out of 3) that it makes; there are many more nuances. By Eric Brewer, a professor of computer science at the University of California, Berkeley, and vice president of infrastructure at Google.

Tools

Vaurien, the Chaos TCP Proxy (via @bsvingen) – an extensible proxy that you can control from your tests to simulate network failure or problems such as delays on 20% of the requests; great for testing how an application behaves when facing failures or difficulties with its dependencies. It supports the protocols tcp, http, redis, memcache.

Wvanbergen’s request-log-analyzer for Apache, MySQL, PostgreSQL, Rails and more (via Zarko) – generates a performance report from a supported access log to point out requests that might need optimizing

Recommended Readings

V. Duarte: Story Points Considered Harmful – Or why the future of estimation is really in our past… (also as a 1h video) – a thoughtful and data-backed claim that there is a much cheaper way of estimating work throughput than estimating each story in story points (SP), namely simply counting the stories. Even though their sizes differ, over (not that much) longer periods, where it really matters, these differences even out. The author argues that estimating in the number of stories provides the same reliability and benefits as SP and is much easier. (Keep in mind that estimation is just an attempt at predicting the future, and humans are proven to be terrible at that; why pretend that we can do it?) I’d recommend this to anybody doing Scrum or similar.

M. Fowler: Test Coverage – it’s obvious that increasing test coverage for the sake of test coverage is nonsense, but some people still need to be reminded of it :-). Fowler explains what the real benefit of test coverage measurements is and how to use them for good instead of for evil.

Brian Marick: How to Misuse Code Coverage (pdf) – cited a lot by Fowler in his article, this is really a good paper. Marick has participated in the development of several code coverage tools and understands their limitations well. One of the key points is that code coverage tools can discover only one class of test weakness (not testing some paths through your code) but cannot discover that you are missing some code you should have (e.g. when you check only for two of three possible return values). Thus the code coverage metric tells you “this code isn’t well tested, are you sure you don’t want to look into it more?” It’s crucial not to write tests just to increase the code coverage; instead, look at the code and improve the tests without any regard for coverage – you may thus decrease the likelihood of both classes of problems.

A Year with MongoDB – Kiip found out that Mongo isn’t the best choice for them (having 240GB, 500+ operations/s, 85M docs and their specific usage of the store) and migrated to the combination of Riak (a key-value store) and PostgreSQL. Some of the issues they hit were: slow counts and limit/offset queries due to using non-counting B-trees for indexing; memory management that could be more intelligent and tuned to the usage to make sure the data needed is indeed in RAM; no built-in support for compressing key names (their size adds up as they’re repeated in each document; you have to compress them [user -> u etc.] in the client if you want to); limited concurrency due to the process-wide write lock (which becomes a problem if the writes aren’t short enough w.r.t. the number of ops/s, e.g. because data isn’t in RAM and/or the query is complicated); safe settings (waiting for a write to finish, …) off by default; offline-only table compaction (without it the disk usage grows unbounded). The lesson learnt for me: know your storage, its weaknesses and intended way of usage, and make sure it matches your needs.

Rudolf Winestock: The Lisp Curse – Lisp’s expressive power is actually a cause of its lack of momentum because it’s so easy to implement anything that people have no need to join forces and thus there are many half-baked (“works-for-me”) solutions for anything – but no complete, generally accepted one. An interesting essay. “Lisp is so powerful that problems which are technical issues in other programming languages are social issues in Lisp.”

Understanding JDBC Internals & Timeout Configuration – the article itself could have been written better, but it conveys an important message: configuring timeouts for JDBC isn’t trivial because they need to be set correctly at several different levels, and without a socket timeout set in a driver-specific way the connection can hang forever if the DB cannot be reached due to a network/system failure.
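
To make this concrete, here is a minimal, hedged sketch of the different timeout levels in plain JDBC; the URL, credentials, and the connectTimeout/socketTimeout property names are illustrative (those two are MySQL Connector/J names, other drivers use different ones):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class JdbcTimeoutSketch {
    public static void main(String[] args) throws Exception {
        // JDBC-level login timeout, applies while establishing the connection
        DriverManager.setLoginTimeout(10); // seconds

        // Driver-specific socket timeouts - without them a dead network peer can
        // block a read forever; the property names vary from driver to driver
        Properties props = new Properties();
        props.setProperty("user", "app");
        props.setProperty("password", "secret");
        props.setProperty("connectTimeout", "10000"); // ms, MySQL Connector/J name
        props.setProperty("socketTimeout", "60000");  // ms, MySQL Connector/J name

        try (Connection conn = DriverManager.getConnection("jdbc:mysql://db:3306/app", props);
             Statement stmt = conn.createStatement()) {
            // Statement-level timeout: the driver cancels the query after 30 s, but
            // this still relies on the socket timeout when the DB host is unreachable
            stmt.setQueryTimeout(30);
            try (ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
            }
        }
    }
}
```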

Tools

CRaSH: Extensible shell for the JVM (docs) – a shell that you can embed into a web server as a WAR, run standalone or attach to a running JVM, connect to via SSH or Telnet, and use to execute commands against the JVM. Some commands: configure loggers, control threads, monitor the system (memory, threads, ..), connect/issue queries via JDBC. More commands can be written in Groovy. There is a whole set of commands for working with JCR. Pluggable authentication.

Jez Humble: Four Principles of Low-Risk Software Releases – how to make your releases safer by making them incremental (versioned artifacts instead of overwriting, expand & contract DB scripts, versioned APIs, releasing to a subset of customers first), separating software deployment from releasing it so that end-users can use it (=> you can do smoke tests, canary releasing, dark launching [feature in place but not visible to users, already doing something]; includes feature toggles [toggle on only for somebody, switch off a new buggy feature, …]), delivering features in smaller batches (=> more frequently, smaller risk of any individual release thanks to less stuff and easier roll-back/forward), and optimizing for resilience (=> the ability to provision a running production system to a known good state in predictable time – crucial when stuff fails).
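
As an illustration of the feature-toggle idea (my own sketch, not from the article; the class and all names are made up): the new code path ships dark and is only switched on per feature, optionally for a pilot group of users first.

```java
import java.util.Map;
import java.util.Set;

// Minimal feature-toggle sketch (illustration only). Deployment and release are
// separated: the code is deployed, but a feature is only "released" once its
// toggle is on - globally or just for a pilot group of users.
public class FeatureToggles {
    private final Set<String> globallyOn;               // features on for everyone
    private final Map<String, Set<String>> pilotUsers;  // feature -> users it is on for

    public FeatureToggles(Set<String> globallyOn, Map<String, Set<String>> pilotUsers) {
        this.globallyOn = globallyOn;
        this.pilotUsers = pilotUsers;
    }

    public boolean isOn(String feature, String userId) {
        return globallyOn.contains(feature)
                || pilotUsers.getOrDefault(feature, Set.of()).contains(userId);
    }
    // Call site: if (toggles.isOn("new-checkout", user)) renderNewCheckout(); else renderOldCheckout();
}
```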

The Game of Distributed Systems Programming. Which Level Are You? (via Kent Beck) – we start with a naive approach to distributed systems, treating them as just slightly different local systems; then we (painfully) come to understand the fallacies of distributed programming and start to program explicitly for the distributed environment, leveraging asynchronous messaging and (often functional) languages with good support for concurrency and distribution. We suffer from random, subtle, non-deterministic defects and try to separate and restrict non-determinism by becoming purely functional… Much recommended to anybody dealing with distributed systems (i.e. everybody, nowadays). The discussion is worth reading as well.

Shapes Don’t Draw – thought-provoking criticism of inappropriate use of OOP, which leads to bad and inflexible code. Simplification is OK as long as the domain is equally simple – but in the real world shapes do not draw themselves. (And Trades don’t decide their price and certainly shouldn’t reference services and a database.)

The framework sorts the issues facing leaders into five contexts defined by the nature of the relationship between cause and effect. Four of these—simple, complicated, complex, and chaotic—require leaders to diagnose situations and to act in contextually appropriate ways. The fifth—disorder—applies when it is unclear which of the other four contexts is predominant.

Et spørsmål om kompleksitet (“A question of complexity”, Norwegian). Key ideas mixed with my own: Command & control management in the traditional Ford way works very well – but only in stable domains with clear cause-and-effect relationships (i.e. the Simple context of Cynefin). But many tasks today involve a lot of uncertainty and complexity and deal with creating new, never before seen things. We try to lead projects as if they were automobile factories while often they are more like research – and researchers cannot plan when they will make a breakthrough. Most new development of IT systems falls into the Complex context of Cynefin – there is a lot of uncertainty, there are no clear answers, we cannot foresee problems, and we have to base our progress on empirical experience and leverage emergence (emergent design, ..).

The Economics of Developer Testing – a very interesting reflection on the cost and value of testing and on how many tests are enough. Tests cost to develop and maintain (and different tests cost differently; the more complex, the more expensive). Not having tests costs too – usually quite a lot. To find the right balance between tests and code, and between different types of tests, we must be aware of their costs and benefits, both short and long term. Worth reading, good links. (Note: We often tend to underestimate the cost of not having good tests. It is much higher than you might think.)

Quotes

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don’t typically make a kind of mistake (like setting the wrong variables in a constructor), I don’t test for it. I do tend to make sense of test errors, so I’m extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.

Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we’ll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.

Two phase release planning – the best way to plan something somewhat reliably is to just start doing it, i.e. just start the project with the objective of answering “Can this team produce a respectable implementation of that system by that date?” in as short a time as possible (i.e. a few weeks). Then: “Phase 2: At this point, there’s a commitment: a respectable product will be released on a particular date. Now those paying for the product have to accept a brute fact: they will not know, until close to that date, just what that product will look like (its feature list). What they do know is that it will be the best product this development team can produce by that date.” Final words: “My success selling this approach has been mixed. People really like the feeling of certainty, even if it’s based on nothing more than a grand collective pretending.”

Jay Fields’ Thoughts: Compatible Opinions on Software – about teams and opinion conflicts – there are some areas where no opinion is really right (e.g. powerful language vs. powerful IDE), yet people may have very strong feelings about them. Be aware of what your opinions are and how strong they are – and compose teams so that they more or less include people with compatible (not the same!) opinions – because if you team up people with strong opposing opinions, they’ll lose a lot of productivity. Quotes: “I also believe that you can have two technically excellent people who have vastly different opinions on the most effective way to deliver software.” “I suggest that you do your best to avoid working with someone who has both an opposing view and is as inflexible as you are on the subject. The more central the subject is to the project, the more likely it is that productivity will be lost.”

JBoss Byteman 2.0.0: Bytecode Manipulation, Testing, Fault Injection, Logging – a Java agent which helps with testing, tracing, and monitoring code; code is injected based on simple scripts (rules) in the event-condition-action form (the conditions may use counters, timers etc.). Contrary to AOP, there is no need to create classes or compile code. “Byteman is also simpler to use and easier to change, especially for testing and ad hoc logging purposes.” “Byteman was invented primarily to support automation of tests for multi-threaded and multi-JVM Java applications using a technique called fault injection.” It was used e.g. to orchestrate the timing of activities performed by independent threads, for monitoring and statistics gathering, and for application testing via fault injection. Contains a JUnit4 runner for easily instrumenting the code under test; it can automatically load a rule before a test and unload it afterwards:
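
(The original snippet isn’t reproduced here; the following is only a hedged sketch of what a BMUnit-based test can look like – the OrderService class and the rule text are invented for illustration.)

```java
import org.jboss.byteman.contrib.bmunit.BMRule;
import org.jboss.byteman.contrib.bmunit.BMUnitRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertFalse;

// Hypothetical class under test: placeOrder is supposed to fail gracefully
// when charging the card throws.
class OrderService {
    boolean placeOrder(String orderId) {
        try {
            chargeCard(orderId);
            return true;
        } catch (Exception e) {
            return false;
        }
    }
    void chargeCard(String orderId) { /* normally talks to a payment gateway */ }
}

// BMUnitRunner loads the rule before the test method and unloads it afterwards,
// injecting the fault without touching the production code.
@RunWith(BMUnitRunner.class)
public class OrderServiceFaultInjectionTest {

    @Test
    @BMRule(name = "fail when the payment gateway is called",
            targetClass = "OrderService",
            targetMethod = "chargeCard",
            action = "throw new RuntimeException(\"injected fault\")")
    public void orderIsRejectedWhenPaymentFails() {
        assertFalse(new OrderService().placeOrder("order-17"));
    }
}
```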

How should code search work? – a thought-provoking article about how much better code completion could be if it profited more from patterns of usage in existing source codes – and how to achieve that. Intermediate results available in the Code Recommenders Eclipse plugin.

How to GET a Cup of Coffee, 10/2008 – a good introduction to creating applications based on REST, explained on the example of building a REST workflow for the ordering process in Starbucks – a “self-describing state machine”. The advantage of this article is that it presents the whole REST workflow with GET, OPTIONS, POST, PUT and “advanced” features such as the use of If-Unmodified-Since/If-Match, Precondition Failed, Conflict. The workflow steps are connected via the Location header and a custom <next> link tag with rel and uri. Other keywords: ETag, microformats, HATEOAS (-> derive the next resource to access from the links in the previous one), Atom and AtomPub, caching (the web trades latency for scalability; if 1+s latency isn’t acceptable then the web isn’t the right platform), URI templates (-> more coupling than links in responses), evolution (-> links from responses, new transitions), idempotency. “The Web is a robust framework for integrating systems at local, enterprise, and Internet scale.”
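
For illustration (my own sketch, not taken from the article – the URI and payload are made up), a conditional update guarded by an ETag and If-Match looks roughly like this:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ConditionalPutSketch {
    public static void main(String[] args) throws Exception {
        URL order = new URL("http://coffee.example.com/order/1234"); // made-up URI

        // 1. GET the order and remember its ETag
        HttpURLConnection get = (HttpURLConnection) order.openConnection();
        String etag = get.getHeaderField("ETag");
        get.disconnect();

        // 2. PUT an updated representation, but only if nobody changed it in the meantime
        HttpURLConnection put = (HttpURLConnection) order.openConnection();
        put.setRequestMethod("PUT");
        put.setDoOutput(true);
        put.setRequestProperty("Content-Type", "application/xml");
        put.setRequestProperty("If-Match", etag);
        try (OutputStream out = put.getOutputStream()) {
            out.write("<order><drink>latte</drink><size>large</size></order>"
                    .getBytes(StandardCharsets.UTF_8));
        }

        int status = put.getResponseCode();
        if (status == 412) {
            // Precondition Failed: the barista already started the order - re-GET and re-decide
        } else if (status == 409) {
            // Conflict: this change is no longer allowed at all
        }
    }
}
```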

Links to Keep

Tools, Libraries etc.

ClusterSSH – whatever commands you execute in the master SSH session are also executed in the slave sessions – useful if you often need to execute the same thing on multiple machines (requires Perl); to install on Mac: “brew install csshx”

Twitter Finagle – a “library to implement asynchronous Remote Procedure Call (RPC) clients and servers. Finagle is flexible enough to support a variety of RPC styles, including request-response, streaming, and pipelining; for example, HTTP pipelining and Redis pipelining. It also makes it easy to work with stateful RPC styles; for example, RPCs that require authentication and those that support transactions.” Also supports failover/retry, service discovery, and multiple protocols (e.g. HTTP, Thrift). Built on Netty and Java NIO. See the overview and architecture.

Eclipse Code Recommenders – an interesting plugin in incubation that tries to bring more intelligent completion, based on context and the wisdom of the crowds (i.e. patterns of usage in existing source code), to Eclipse

Clojure Corner

Clojure/huh? – Clojure’s Governance and How It Got That Way – an interesting description of how the development of Clojure and the inclusion of new libraries is managed. “Rich is extremely conservative about adding features to the language, and he has impressed this view on Clojure/core for the purpose of screening tickets.” E.g. it took two years to get support for named arguments – but the result is a much better and cleaner way of doing it.

Quotes

A language that doesn’t affect the way you think about programming, is not worth knowing.

– Alan Perlis

Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot.

Goal: Execute the slow integration tests separately from unit tests and show as much information about them as possible in Sonar.

The first part – executing IT and UT separately – is achieved by using the maven-failsafe-plugin and by naming the integration tests *IT (so that the unit-test-running maven-surefire-plugin will ignore them while failsafe will execute them in the integration-test phase and collect results in the verify phase).

The second part – showing information about integration tests in Sonar – is a little more tricky. Metrics of integration tests will not be included in the Test coverage + Unit tests success widget. You can add the Integration test coverage (IT coverage) widget if you enable JaCoCo, but there is no alternative for the test success metrics. But don’t despair, read on!

Important notice: The integration of Sonar, JaCoCo and Failsafe evolves quite quickly, so this information may easily get outdated with the next releases of Sonar.

Recommended Readings

Jeff Sutherland: Powerful Strategy for Defect Prevention: Improve the Quality of Your Product – “A classic paper from IBM shows how they systematically reduced defects by analyzing root cause. The cost of implementing this practice is less than the cost of fixing defects that you will have if you do not implement it so it should always be implemented.” – categorize defects by type, severity, component, and when introduced; 80% of them will originate in 20% of the code; apply prioritized automated testing (always solve the largest problem first). “In three months, one of our venture companies cut a 4-6 week deployment cycle to 2 weeks with only 120 tests.”

Ebook draft: Beheading the Software Beast – Relentless restructurings with The Mikado Method (foreword by T. Poppendieck) – the book introduces the Mikado Method for organized, always-staying-green (large-scale) refactorings, especially useful for legacy systems, shows it on a real-world example (30 pages!), discusses various application restructuring techniques, provides practical guidelines for dealing with different sizes of refactorings and teams, discusses in depth technical debt and more. To sum it up in three words: Check it out!

Daily Routine of a 4 Hour Programmer (well, it’s actually about 4h of focused programming + some hours of the rest) – a very interesting read with some inspiring ideas. We should all find some time to keep up with the field, to reflect on our day and to learn from it (kaizen).

The Agile Testing Quadrants – understanding the different types of tests, their purpose and relations, by slicing them along the axes “business facing x technology facing” and “supporting the team x critiquing the product” => unit tests x functional tests x exploratory testing x performance testing (and others). It helps to understand what should be automated and what needs to be manual, and helps not to forget any of the dimensions of testing.

Adam Bien: Can stateful Java EE apps scale? – What does “stateless” really mean? “Stateless only means, that the entire state is stored in the database and has to be synchronized on every request.” “I start the development of non-trivial (>CRUD) applications with Gateway / PDOs [JH: stateful EJBs exposing JPA entities] and measure the performance and memory consumption continuously.” Some general tips: Don’t split your web server and servlet container, don’t use session replication.

Brian Tarbox: Just-In-Time Logging – How to remove 90% of worthless logs while still getting detailed logs for the cases that matter – the solution is to (1) accumulate the log messages for a particular “transaction” with the system in a runtime structure and (2) flush them to the log only if the transaction fails or something else significant happens with it. The blog also proposes a possible implementation in detail.
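
A minimal sketch of the idea (my own illustration, not the implementation from the post; all names are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

// Just-in-time logging sketch: detailed messages are buffered per transaction
// and only written out if the transaction fails; successful transactions
// therefore produce no log noise.
public class TransactionLog {
    private static final Logger LOG = Logger.getLogger(TransactionLog.class.getName());

    private final String transactionId;
    private final List<String> buffer = new ArrayList<>();

    public TransactionLog(String transactionId) {
        this.transactionId = transactionId;
    }

    public void add(String message) {
        buffer.add(message); // cheap: no I/O unless the transaction fails
    }

    public void success() {
        buffer.clear(); // nothing interesting happened, drop the detail
    }

    public void failure(Throwable cause) {
        LOG.severe("Transaction " + transactionId + " failed: " + cause);
        buffer.forEach(LOG::info); // flush the buffered detail for this one transaction
    }
}
```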

Clojure 2011 Year in Review – a list of important events in the Clojure sphere with links to details – Clojure 1.3.0, ClojureScript, logic programming with core.logic, the clojure-contrib restructuring, the birth of 4Clojure and Avout.

Clojure Atlas – an interesting project (alpha version) presenting Clojure documentation in the form of an interactive graph of related concepts and functions; it’s far from perfect but I like the concept and consider paying those ~ $25 for the 1.3.0 version when it’s out (however, the demo is free and it might become open-sourced in 2012)

This post summarizes what I’ve learned from various sources about making acceptance or black-box tests maintainable. This topic is of great interest to me because I believe in the benefits that acceptance tests can bring (such as living documentation) but I’m also very much aware that it is all too easy to create an unmaintainable monster whose weight eventually crushes you. So the question is how to navigate the minefield to get to the golden apple?

The key elements that contribute to the maintainability of acceptance tests are:

Aligned business, software, and test models => small change in business requires only a similarly small change in the software and a small change in tests (Gojko Adzic explains that very well in his JavaZone 2012 talk Long-term value of acceptance tests)

The key to gaining the alignment is to use business language in all the three models from the very start, building them around business concepts and relationships

Testing under the surface level, if possible

Prefer to test your application via the service layer or, at worst, the servlet layer; only test at the UI level if you really have to, and only as little as possible, because the UI is much more brittle (and also more difficult to test)

The more you want to test, the more you have to pay for it in terms of maintenance effort. Usually you decide to cover the part(s) of the application where the most risk is – the best thing is to do a cost-benefit evaluation.

Layer 2: Instrumentation – right below the acceptance test is an instrumentation layer, which extracts input/output data from the test and defines how to perform the test via a high-level API, provided by the next level (we could say a test DSL) such as “logInUser(X); openAccountPage();”

Layer 3: High-level test DSL: This layer contains all the implementation details and exposes to the higher layer high-level primitives that it can use to compose the tests without depending on implementation details (ex.: logInUser may use HtmlUnit to load a page, fill in a form, and post it). See the PageObject example below.
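
(The original PageObject example isn’t reproduced here; the following is only a rough sketch of the layering, with all names invented.)

```java
// Layer 3: high-level test DSL - the only layer that knows how things are done.
class BankingDsl {
    AccountPage logInUser(String userName) {
        // e.g. drive HtmlUnit/WebDriver via a page object; the test never sees this
        return new LoginPage().open().logIn(userName, "secret");
    }
}

// Page objects: the only place that knows URLs, element ids, form fields, ...
class LoginPage {
    LoginPage open() { /* load the login page */ return this; }
    AccountPage logIn(String user, String password) { /* fill + submit the form */ return new AccountPage(); }
}

class AccountPage {
    String balance() { /* read the balance element */ return "100.00"; }
}

// Layers 1+2: the acceptance test and its instrumentation only compose
// business-level primitives, so UI changes do not ripple into the tests.
class TransferAcceptanceTest {
    void userSeesHerBalanceAfterLoggingIn() {
        AccountPage account = new BankingDsl().logInUser("alice");
        assert "100.00".equals(account.balance());
    }
}
```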

Recommended Readings

The Netflix Chaos Monkey – how to test your preparedness for dealing with a system failure so that you won’t experience a nasty wakeup when something really fails on Sunday at 3 am? Release a wild, armed monkey into your datacenter. Watch carefully what happens as it randomly kills your instances. This is exactly what Netflix does with their cloud infrastructure – also a great inspiration for my recent project. Do you need to be always available? Then consider employing the chaos monkey – or a whole army of monkeys!
(PS: There is also a post with a picture of the scary monkey.)

Links to Keep

BDD: Write specifications, not scripts (from the Concordion site) – a relatively brief yet very enriching practical description of how to do behavior-driven development a.k.a. Specification by Example right, the key point being “Write specifications, not scripts.” It explains why not scripts (scripts overspecify -> are brittle; specs should be stable) and how to do it (decouple the stable spec and the volatile system via fixture code, expose minimal stuff to the spec, perhaps evolve a DSL between the fixture and the system). It also lists common “smells” of BDD done wrong. If it still isn’t clear to you, read the Script to Specification Makeover example (or perhaps read it anyway). BTW, Concordion is a new tool for doing BDD based on JUnit and HTML, which was created as a response to the weaknesses of Fit[Nesse], i.e. exactly the tendency to do scripting instead of specifications. It looks very promising to me!

SW Utilities

(Linux/Mac) Autojump – superfast navigation between favorite directories in the command line (via Jake McCrary) – it keeps track of how much time you spend in each directory and, when you execute j <substring of directory path/name>, it jumps into the most frequently used one matching the substring. It’s awesome! (You can also run jumpstat to see the statistics.)

(Linux) Tcpkill – service/network outage testing (via Jake McCrary) – kill connections to or from a particular host, network, port, or combination of them – useful e.g. when you want to test that your software is resilient to the outage of a particular service or server – less brutal than actually killing the database etc. instances. We need to test that our application recovers properly when one of our MongoDB nodes dies, so this may be quite useful.

Manik Hot Deploy Plugin for Maven Projects (v1.0.2 in 5/2011; older version in the Marketplace) – plugin that can do hot and incremental deployment to any app server (simply by copying to its hotdeploy directory or the directory of an installed webapp) whenever you run mvn install or automatically whenever sources change, multi-module support

Clojure Corner

Jake McCrary: Quickly Starting a Powerful Clojure REPL – a Clojure REPL only two steps away: 1) Run Emacs, 2) Execute M-x clojure-swank (no more need to open an existing Leiningen project) – the trick is to install the Leiningen plugin swank-clojure and use Jake’s elisp function clojure-swank, which automatically starts the swank-clojure server. (I had to hack the function because the clojure-swank output contained “null” instead of “localhost”, likely due to an incorrect DNS setup.)

Having [automated] unit/integration/functional/… tests is great, but it is too easy for them to become a hindrance, making any change to the system painful and slow – up to the point where you throw them away. How to avoid this curse of rigid tests – too brittle, too intertwined, too coupled to the implementation details? Surely following the principles of clean code not only in production code but also in tests will help, but is it enough? No, it is not. Based on a discussion during our recent course with Kent Beck, I think that the following three principles are important for having decoupled, easy-to-evolve tests: