Feed: http://blog.baudson.de
The blog feed for Baseblog (Kirby).

Stop and remove all Docker containers and images
http://blog.baudson.de/blog/stop-and-remove-all-docker-containers-and-images
Fri, 20 May 2016

Sometimes it's useful to start with a clean slate and remove all Docker containers and even images. Here are some handy shortcuts.

List all containers (only IDs)

docker ps -aq

Stop all running containers

docker stop $(docker ps -aq)

Remove all containers

docker rm $(docker ps -aq)

Remove all images

docker rmi $(docker images -q)
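The steps above can be combined into one sketch. `xargs -r` (GNU findutils) skips the command entirely when the ID list is empty, and the `DRY_RUN` variable (my addition, not part of Docker) defaults to `echo` so the snippet only prints what it would do:

```shell
# Dry run by default: set DRY_RUN="" to actually stop and remove things.
DRY_RUN="${DRY_RUN:-echo}"

# xargs -r runs nothing on empty input, so docker is never
# called with missing arguments.
docker ps -aq    2>/dev/null | xargs -r $DRY_RUN docker stop
docker ps -aq    2>/dev/null | xargs -r $DRY_RUN docker rm
docker images -q 2>/dev/null | xargs -r $DRY_RUN docker rmi
```

On recent Docker versions, `docker system prune -a` offers similar cleanup out of the box.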

Running a local Sonarqube with Docker
http://blog.baudson.de/blog/running-a-local-sonarqube-with-docker
Fri, 04 Mar 2016

In order to get the Maven configuration of Sonar right, I wanted to have a local Sonarqube to test with. Using Docker, this is totally trivial.

Run the Docker container

You should already have Docker running on your local machine. Download the Sonarqube container from Docker Hub like this
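A minimal sketch, assuming the official `sonarqube` image on Docker Hub and its default port 9000, guarded so it's a no-op on machines without Docker:

```shell
# Pull (if needed) and start the official sonarqube image in the
# background, publishing its web UI on port 9000.
if command -v docker >/dev/null 2>&1; then
  docker run -d --name sonarqube -p 9000:9000 sonarqube
else
  echo "Docker is not installed"
fi
```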

Run Maven goal

I assume that your project is already configured with the Maven Sonar plugin. Now simply run the goal against the local Sonarqube instance

mvn sonar:sonar -Dsonar.host.url=http://localhost:9000

You should see the generated metrics at

http://localhost:9000

Dependency convergence and the Maven Enforcer plugin
http://blog.baudson.de/blog/maven-enforcer-plugin-dependency-convergence
Fri, 26 Feb 2016

Another great plugin for security and application stability is the Maven Enforcer plugin. You don't want to end up in JAR hell :)

You can use the Enforcer plugin for the following tasks.

Dependency convergence

Requires that dependency version numbers converge. If a project has two dependencies, A and B, both depending on the same artifact, C, this rule will fail the build if A depends on a different version of C than the version of C depended on by B.

Read more about dependency convergence in Tim Steffen's blog post. For me, adding specific versions to the pom.xml's dependencyManagement section works best; I favor active management over exclusions.

Ban circular dependencies

Checks the dependencies and fails if the groupId:artifactId combination exists in the list of direct or transitive dependencies.

I haven't really come across any occurrence of this, but it's nice to have.

Ban duplicate classes

Checks the dependencies and fails if any class is present in more than one dependency.

For example, two classes could be identical after the package of one class has been renamed. You should not blindly ignore duplicate classes. Instead, try to

exclude classes by excluding dependencies

update a library (if possible)

look for alternative split dependencies

If you add something, make sure the ignored classes are binary identical!

Enforce bytecode version

Checks the dependencies transitively and fails if any class of any dependency has a bytecode version higher than the one specified.

Example

Here's a draft for your pom.xml. You might want to add this to a dedicated build profile so it doesn't slow down your regular build.

<?xml version="1.0" encoding="UTF-8"?>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <configuration>
    <rules>
      <!--
        Requires that dependency version numbers converge.
        If a project has two dependencies, A and B, both depending on the same
        artifact, C, this rule will fail the build if A depends on a different
        version of C than the version of C depended on by B.
      -->
      <dependencyConvergence>
        <uniqueVersions>false</uniqueVersions>
      </dependencyConvergence>
    </rules>
  </configuration>
  <executions>
    <execution>
      <id>enforce</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <phase>validate</phase>
    </execution>
    <!--
      Checks the dependencies and fails if the groupId:artifactId combination
      exists in the list of direct or transitive dependencies.
    -->
    <execution>
      <id>enforce-ban-circular-dependencies</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <banCircularDependencies />
        </rules>
        <fail>true</fail>
      </configuration>
    </execution>
    <!--
      Checks the dependencies and fails if any class is present in more than
      one dependency.
    -->
    <execution>
      <id>enforce-ban-duplicate-classes</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <banDuplicateClasses>
            <ignoreClasses>
              <!--
                Don't just add classes here! Add them only as a last resort.
                Before doing so, try to:
                * exclude classes by excluding dependencies
                * update a library (if possible)
                * look for alternative split dependencies
                If you add something, make sure the ignored classes are
                binary identical!
              -->
              <ignoreClass>org.apache.juli.*</ignoreClass>
              <ignoreClass>org.apache.commons.*</ignoreClass>
              <ignoreClass>org.aspectj.*</ignoreClass>
            </ignoreClasses>
            <findAllDuplicates>true</findAllDuplicates>
          </banDuplicateClasses>
        </rules>
        <fail>true</fail>
      </configuration>
    </execution>
    <!--
      Checks the dependencies transitively and fails if any class of any
      dependency has a bytecode version higher than the one specified.
    -->
    <execution>
      <id>enforce-bytecode-version</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <enforceBytecodeVersion>
            <ignoredScopes>
              <scope>test</scope>
            </ignoredScopes>
            <maxJdkVersion>${java.version}</maxJdkVersion>
          </enforceBytecodeVersion>
        </rules>
        <fail>true</fail>
      </configuration>
    </execution>
  </executions>
  <dependencies>
    <dependency>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>extra-enforcer-rules</artifactId>
      <version>1.0-beta-3</version>
    </dependency>
  </dependencies>
</plugin>

Maven security plugins
http://blog.baudson.de/blog/maven-security-plugins-owasp-findbugssec
Wed, 24 Feb 2016

There are two great plugins that help make your Maven-built applications more secure. I recently added them to some projects at work, and they seem to work quite well.

FindbugsSec

You may have heard of FindBugs: it looks for bugs in Java programs and is based on the concept of bug patterns. A bug pattern is a code idiom that is often an error.

FindbugsSec (also known as Find Security Bugs) is a security plugin for FindBugs that can detect 80 different vulnerability types with over 200 unique signatures.

OWASP dependency check

OWASP is the Open Web Application Security Project, an organization focused on improving the security of software. Dependency Check is one of their security tools; it identifies project dependencies and checks whether there are any known, publicly disclosed vulnerabilities.

The plugins are bound to the validate phase, so you might want to run your build like this

mvn clean package -Psecurity
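A hedged sketch of what such a `security` profile could look like. The plugin coordinates are the commonly used ones (findbugs-maven-plugin with the Find Security Bugs detector plugin, and OWASP's dependency-check-maven); the findsecbugs version is illustrative, so verify against current releases:

```xml
<profile>
  <id>security</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>findbugs-maven-plugin</artifactId>
        <configuration>
          <!-- Register the Find Security Bugs detectors with FindBugs. -->
          <plugins>
            <plugin>
              <groupId>com.h3xstream.findsecbugs</groupId>
              <artifactId>findsecbugs-plugin</artifactId>
              <version>1.4.6</version>
            </plugin>
          </plugins>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>check</goal>
            </goals>
            <phase>validate</phase>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.owasp</groupId>
        <artifactId>dependency-check-maven</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>check</goal>
            </goals>
            <phase>validate</phase>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```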

Configuration and fixing issues

FindbugsSec

You can set the threshold and effort options to modify the search results. A good starting point is to let the build break only for severe issues; later on, you can lower the threshold to find issues of lower severity as well.

Findbugs comes with a graphical user interface that can be started via Maven like this

mvn findbugs:gui

OWASP dependency check

If the plugin finds any security issues in your dependencies, the build will break and you will be given a list of CVE IDs (Common Vulnerabilities and Exposures), for example CVE-2015-4335.

There are some great resources with information on these issues, like the National Cyber Awareness System or MITRE. There you'll find which versions are affected by a vulnerability, so you can update your pom.xml accordingly.

Getting acceptance criteria right with Example Mapping
http://blog.baudson.de/blog/example-mapping
Fri, 19 Feb 2016

I already wrote about the concept of the Three Amigos; this time I want to share a method that facilitates the refinement of user stories and the creation of acceptance criteria: Example Mapping.

The problem with getting user stories and acceptance criteria right is that there often is not enough collaboration. Just remember the Agile Principle:

"Business people and developers must work together daily throughout the project."

Example Mapping is an effective and playful way of solving this problem.

Rules vs. Examples

There are different ways to describe acceptance criteria, such as rules or examples. While rules are generalizations, and therefore often broad and ambiguous, examples are specific and easy to understand. Often an underlying rule is not yet clear while a single example already is.

Example Mapping

The idea of Example Mapping is to create acceptance criteria for user stories by mapping examples to rules. This yields better rules and, more generally, uncovers issues that are not yet addressed in the story. You write these issues down as questions; they can trigger new rules, or simply serve as tokens for deferring the issue.

You need index cards in four colours. Each card corresponds to one of the artifacts mentioned above:

Yellow ↔ Story
Blue ↔ Rule
Green ↔ Example
Red ↔ Question

Scenario: Parking price calculation

Imagine we were to design a machine that calculates the price of parking tickets at the airport. There could be different parking sites like

Valet parking
Short term parking
Long-term parking

with each site having its own set of rules for how a ticket price is calculated.

When discussing the user story "as a user I want to use valet parking in order to save time" (yellow card), the product owner would for example explain that the price is 6€ for the first hour (a blue card), with a maximum of 18€ per day (another blue card).

The first obvious example would be parking for 5 hours (a green card), with a price of 6€, and one for parking a little longer than five hours (another green card), with a price of 18€.

Maybe the amigos (most likely the tester) would address more edge cases, like daylight savings time or overnight parking. The product owner would realize that the story needed refinement: A red card indicating an unresolved question would be created. Then the amigos could improve the existing (or add new) rule and example cards.

Take a look at the photographs (in German).

Benefit

Example mapping enhances the shared understanding of user stories by refining them in collaboration.

The product owner no longer has to make up the acceptance criteria on their own, the quality of the criteria increases, and developers and testers can estimate with more confidence.

The rules can be used as a guidance for an implementation, while the examples can be used as templates for test cases.

Writing on index cards and pinning them on a board is more fun than staring at a screen displaying an issue tracker; there is more interaction, and everyone is in an active state of mind.

Maybe try Example Mapping in one of your grooming sessions, and find out if it works for you!

Reference

Just read the blog post by Matt Wynne, the creator of Example Mapping, for a definitive introduction.

Grep with surrounding lines
http://blog.baudson.de/blog/grep-with-surrounding-lines
Thu, 18 Feb 2016

Yesterday I wanted to find a Maven dependency in my project that itself depended on another dependency which had a security issue and needed updating.

The command

mvn dependency:tree

displays the whole dependency tree, but in a large project it takes a while to find what you're looking for. So I grepped the result

mvn dependency:tree | grep "<name>"

which confirmed that the dependency was present, but not in which context. Fortunately, grep has the following switches

-A <number> : number of lines to be displayed after the match

-B <number> : number of lines to be displayed before the match

-C <number> : number of lines to be displayed before and after the match

So I was able to find the dependency in my project with

mvn dependency:tree | grep "<name>" -B 10
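A tiny self-contained demo of the -B switch (the sample file content is made up):

```shell
# Build a small sample "dependency tree" to grep against.
printf 'com.example:app\n+- com.example:other-lib\n+- com.example:vulnerable-lib\n' > /tmp/grep-demo.txt

# Show the match plus the one line before it.
grep -B 1 "vulnerable-lib" /tmp/grep-demo.txt
# prints:
# +- com.example:other-lib
# +- com.example:vulnerable-lib
```

Incidentally, the maven-dependency-plugin can also filter the tree directly with `mvn dependency:tree -Dincludes=<groupId>:<artifactId>`.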

If there is a better way to do this, let me know :)

Usages for the Linux "watch" command
http://blog.baudson.de/blog/usages-for-the-linux-watch-command
Mon, 15 Feb 2016

Today I learned about the watch command in Linux. It's a brilliant tool for command-line monitoring. Basically, it just executes a command repeatedly and displays its output in a readable format.

Examples

Watch your wi-fi network traffic

watch ifconfig wlan0

Watch free memory

watch free

Watch a directory

For example when downloading files

watch ls -lt ~/Downloads

Switches

Set the interval with -n <seconds>, or omit the header line with -t. Like this

watch -t -n 1 ifconfig

Test-Driven Development with Green Bar Patterns
http://blog.baudson.de/blog/test-driven-development-green-bar-patterns
Fri, 12 Feb 2016

Quite recently I attended a training for Agile Developer Skills. It was a great opportunity to revisit and update my understanding of Test-Driven Development (TDD).

Why Test-Driven Development?

The point of TDD is to write code that is modular and testable. Think of a test as the first user of your production code: If even you struggle to write tests for your code, how hard must it be for another developer to use it in the context of a real application?

The TDD Cycle

Test-Driven Development is a cycle of three stages:

writing test code

writing implementation code

refactoring

Just take a look at the diagram.

The importance of refactoring

My approach to TDD so far had been more test-first: write a failing test, implement the functionality to make it pass, repeat. I wasn't aware that refactoring is only meant to take place during the "green" phase, so that you can be sure nothing breaks while refactoring, and you stay on the "green path" most of the time.

I was never that strict about it, but it makes sense to be able to check whether your refactoring still yields a valid implementation. It also leads to working in significantly smaller steps, which is good for your motivation: you hardly ever feel stuck and seem to make constant progress.

Refactoring repeatedly also leads to more concise and clean code: you don't defer cleaning it up to "when you have the time" (which you never will).

Green bar patterns

Another thing I learned was that there are three well-defined workflows in TDD, which Kent Beck introduced in "Test-Driven Development by Example".

The workflows are named "Green Bar Patterns" (the green bar being the indicator that your tests are still passing). These patterns help you stay on the green path, or return to it as soon as possible.

Obvious implementation

This is the approach I used to follow almost exclusively: just solving the problem at hand, no matter how hard it is. But the obvious implementation often is not as obvious as it might seem.

I remember getting stuck a lot when trying to come up with a solution, even for a small problem, and I was subconsciously too proud to take small steps, and overestimated my abilities.

This approach also easily leads to problems when pairing: the "driver" hacks away on the red path, insisting that the solution is just around the corner, while the navigator is puzzled but doesn't want to interrupt the driver's flow.

Sometimes you even have to throw everything away and start over, or the solution is understood only by the driver, because the navigator is not emotionally invested in it.

The lesson learned is that you should only follow this path if the implementation is absolutely trivial - as the name suggests, obvious. If you find yourself coding an obvious implementation, but fail to get your tests to pass, it's time to switch to one of the following approaches.

Fake it (till you make it)

This approach forces you to work in very small increments until you find a pattern or algorithm that solves your problem. When you start with a failing test, it's fine to (for example) just return a static value at first.

The idea is to get the test to pass as soon as possible. Once it does, you can refine the fake in the refactoring phase. You can always check that you're on the right track: just run your tests, they should always pass.

This approach is great if you already have an idea about a possible implementation, but can't quite see it through. The small increments slowly lead you towards your goal.
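As a minimal sketch in shell (to stay close to this blog's snippets; the `add` function is made up), the fake just hard-codes the value the first test expects:

```shell
# Fake it: the test wants 'add 3 4' to print 7, so the first
# "implementation" simply returns a static value.
add() { echo 7; }

# The test passes; we are on the green path.
[ "$(add 3 4)" = "7" ] && echo "green"
```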

Triangulation

In contrast to the "Fake it" approach, triangulation suggests adding more test cases in order to arrive at a solution. This is helpful if you realise you are faking it but not getting close to making it, i.e. when you have no idea how to implement the solution.

Having another test gives you another perspective and also a safety net: your implementation fulfills a constantly growing set of criteria. If you're not sure where to go with your implementation, triangulation is worth a try.
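As a tiny shell sketch (the `add` function is made up): with a single test, an implementation could fake its result by echoing a constant, but a second example triangulates and forces a general implementation.

```shell
# With only 'add 3 4 = 7' to satisfy, add() could just echo 7.
# A second example (1 + 2 = 3) exposes the fake, so the function
# now has to actually compute the sum.
add() { echo $(( $1 + $2 )); }

[ "$(add 3 4)" = "7" ] && [ "$(add 1 2)" = "3" ] && echo "green"
```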

Once you feel more secure about your implementation idea, switch back to "Fake it" or "Obvious implementation" - but remember that your test code is just as valuable as your production code, and refactor your test code as rigorously as your production code.

Meaning

I don't really think that Test-Driven Development is inherently superior to other forms of programming. I think of it more as a mindset, or a way of life. To me TDD has a spiritual, Buddhist feel to it, as you are very much in the moment, taking one step after the other, without hurrying or worrying about what is around the corner. I'm very much reminded of a specific koan.

For me Test-Driven Development is about feeling at peace and staying sane while programming, and not so much about the result - but still, I think I deliver better code when employing TDD.

The Three Amigos
http://blog.baudson.de/blog/three-amigos
Fri, 12 Feb 2016

During an Agile Testing workshop I learned about the Three Amigos concept and Example Mapping, two interesting ideas for creating better acceptance criteria for user stories. I'll talk about the Three Amigos now and about Example Mapping later on.

Origin and definition

Borrowing the title from the 1986 Steve Martin comedy, in the agile context Three Amigos means the continuous collaboration of

Developers

Testers

Product Owners

in order to refine user stories and create acceptance criteria.

It's not strictly limited to these three parties; it's more about bringing different perspectives in as early as possible, resulting in higher-quality requirements.

You don't want a Product Owner to make up acceptance criteria for user stories all on his or her own!

Conversations and shared understanding

The goal of the Three Amigos is not to produce artifacts like BDD tests or enhanced JIRA issues, it's more about creating a shared understanding and identifying problems early on. The Three Amigos work with user stories, which are first of all tokens representing the conversations happening about them.

So the value of The Three Amigos cannot be measured by the artifacts they produce. Of course this doesn't mean that creating artifacts is forbidden - just value the shared understanding more than the artifacts.

There are teams that only add stories to their sprints which have been "amigo-ed", making it part of the team's Definition-Of-Ready.

Agile mindset

While the concept of a specification workshop exists (a meeting where the Three Amigos work on user stories and refine acceptance criteria), it should rather be seen as a mindset. Remember the corresponding agile principle

"Business people and developers must work together daily throughout the project."

Just like backlog grooming should be a continuous effort and not a regularly scheduled meeting, the Three Amigos should rather be seen as pairing than as a meeting - you actually do the work, not talk about it!

Transactions in distributed systems
http://blog.baudson.de/blog/transactions-in-distributed-systems-and-microservices
Tue, 09 Feb 2016

There was some discussion at work about whether it's a good idea to implement a transaction-based workflow in a RESTful microservice environment. It didn't feel right, so I did some research to turn this hunch into reliable arguments.

Transactions and microservices

Martin Fowler argues that transactions would couple services, while the idea of a microservice architecture is a "shared nothing architecture"

"Using transactions like this helps with consistency, but imposes significant temporal coupling, which is problematic across multiple services. Distributed transactions are notoriously difficult to implement and as a consequence microservice architectures emphasize transactionless coordination between services, with explicit recognition that consistency may only be eventual consistency and problems are dealt with by compensating operations.

Choosing to manage inconsistencies in this way is a new challenge for many development teams, but it is one that often matches business practice. Often businesses handle a degree of inconsistency in order to respond quickly to demand, while having some kind of reversal process to deal with mistakes. The trade-off is worth it as long as the cost of fixing mistakes is less than the cost of lost business under greater consistency."

Roy Fielding argues that transactions (in the form of a distributed transaction protocol) are not RESTful

"If you find yourself in need of a distributed transaction protocol, then how can you possibly say that your architecture is based on REST? I simply cannot see how you can get from one situation (of using RESTful application state on the client and hypermedia to determine all state transitions) to the next situation of needing distributed agreement of transaction semantics wherein the client has to tell the server how to manage its own resources."