Velocity Conference Takeaways

7digital software developer Mia Filisch attended the Velocity conference in Amsterdam on October 28th. She was kind enough to share her account of the key takeaways with us here, and found that the recurring theme of security was enough to inspire some internal knowledge-sharing sessions she has already started planning. The diversity of insights made for a productive and informative conference. See below for her notes.

Be aware it’s pretty long (at Velocity the session took three hours, and that was with the presenter skipping all the exercises), but it really does cover a lot.

Using Docker Safely (Adrian Mouat)

This talk discussed the different attack vectors of containers, as well as a good few practical steps and strategies for applying common security paradigms (defence-in-depth and least privilege) to Docker and containers generally.

As an industry, we don’t currently tend to manage secrets very well (even when bearing in mind that security is always about trade-offs)

Secret management should be considered tier 0 / core infrastructure (should be highly available, have monitoring, alerting and access control)

In light of this, Schoof proposed the following core principles of modern secret management:

The set of actors who can do something should be as small as possible

Secrets need to expire (set up efficient, easy ways to do secret rotation - this shouldn't require a deploy). This also implies that secrets shouldn't be in version control.

It should be easier to handle secrets in secure ways than insecure ways

Security of a system is only as strong as its weakest access link

Secrets must be highly available (as they will stop the basic functioning of apps if they aren't)

The talk went on to discuss the various aspects of building a secret management system, which I’ll leave you to follow via the slides - it was quite interesting.

Existing services discussed and recommended in the talk were Vault, Keywhiz and CredStash. All of these solutions are still pretty new, so with any of them there’ll probably be quite a bit of tweaking required to get a management system in place that works well.

Seeing the Invisible: Discovering Operations Expertise (John Allspaw)

John Allspaw reveals what he gets up to in his free time, i.e. pursuing an MA in “Human Factors and Systems Safety” at Lund University, Sweden (obviously).

His own research explores human factors in web engineering, both with respect to understanding catastrophic failures and with respect to the human factors involved in not having catastrophic failures in the face of things potentially going wrong literally all the time. Human Factors and Ergonomics (HFE) research has a long history in areas like aviation, surgery and mining, but our industry is still relatively under-researched.

TL;DR: The language we use and views we hold when talking about failure shape the outcome of that discussion, and what we learn for the future.

Both “Why” and “How” questions tend to limit the scope of our inquiry into incidents; instead, “What” questions are a much better device for building empathy, and they also help focus the analysis on foresight - rather than its less constructive counterpart, hindsight, which more easily falls prey to various cognitive biases and to blameful thinking.

Always assume local rationality: “people make what they consider to be the best decision given the information available to them at the time.” - there isn't really a just culture that doesn't revolve around this premise.

Alert Overload: Adopting A Microservices Architecture Without Being Overwhelmed With Noise (Sarah Wells)

No huge surprises, but a good summary of how to set up useful alerts - below are some key points discussed.

Focus on business functionality:

Look at architecture and decide which parts or relationships are crucial to your core functionalities

Decide what it is that you care about for each - speed? errors? throughput? ...

Focus on End-to-End - ideally you only want an alert where you actually need to take action

Make alerts useful, build with support in mind!

readability! (e.g. use spaces rather than camel casing, etc.)

add links to more information or useful lookups

provide helpful messages

If most people filter out most of the email alerts they are getting, you should probably fix your alert system.
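As a small illustration of the readability points above (the check name, message and runbook URL are made up for the example), here is a sketch that renders a camel-cased check name with spaces and attaches a link to more information:

```python
import re

def humanise(name):
    """Turn a camel-cased check name into a readable one,
    e.g. 'DiskSpaceLow' -> 'Disk space low'."""
    words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)
    return " ".join(words).capitalize()

def format_alert(check, message, runbook_url):
    """Render an alert with a readable title, a helpful message
    and a link to more information."""
    return f"{humanise(check)}: {message}\nRunbook: {runbook_url}"

print(format_alert("ApiErrorRateHigh",
                   "5xx rate above 1% for 5 minutes - action needed",
                   "https://wiki.example.com/runbooks/api-errors"))
```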

The Definition Of Normal: An Intro and guide to anomaly detection (Alois Reitbauer)

As anomaly detection has a nice role to play in spotting issues early (ideally before anything really bad happens), I was really excited about this talk. It quickly turned out, however, that if you’re not from a relatively strong maths/stochastics background (which I am not), you probably need to rely on other people for anomaly detection magic. So the following is a more high-level view.

Anomalies are defined as events or observations that don’t conform to an expected pattern.

events are checked against your hypotheses, applying a likelihood judgement

how the event performs against this judgement determines whether or not it is an anomaly

How do you set the baselines that define your normal model? One thing to bear in mind is that some approaches (such as the mean/average or median) don’t learn very well. The presenter recommended using exponential smoothing instead, since it is both easy to calculate and learns very well.
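As a rough sketch of the idea (the alpha value and the series below are illustrative, not from the talk): exponential smoothing updates the baseline with a weight alpha on each new observation, so it adapts to recent behaviour, and a large deviation from that baseline can then be flagged as an anomaly:

```python
def exponential_smoothing(series, alpha=0.3):
    """Single exponential smoothing:
    baseline = alpha * observation + (1 - alpha) * previous baseline."""
    baseline = series[0]
    baselines = [baseline]
    for observation in series[1:]:
        baseline = alpha * observation + (1 - alpha) * baseline
        baselines.append(baseline)
    return baselines

def is_anomaly(observation, baseline, tolerance=0.5):
    """Flag an observation that deviates from the baseline
    by more than the given fraction of the baseline."""
    return abs(observation - baseline) > tolerance * baseline

# A steady series with a sudden spike at the end.
series = [100, 102, 98, 101, 99, 100, 250]
baselines = exponential_smoothing(series)
# Compare the spike against the baseline learned from the values before it.
print(is_anomaly(series[-1], baselines[-2]))  # prints True
```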

Tech is only half the work - identify all stakeholders and their goals; involve Legal/Finance early (especially if you might have to battle early terminations of legacy infrastructure contracts), work on awareness and knowledge transfer across teams

WebPageTest.org now offers a few “Real Mobile Networks” test locations - only a handful for the time being, but if they extend this it could be pretty interesting for us when testing client web apps from different locations!

Somewhere in the 7digital.com web site infrastructure there are classes that override the default controller and view factories (it is an ASP.NET MVC project). Why did we do this? In our opinion, the default project layout is a hindrance to code readability.

The idea is explained by Uncle Bob in his concept of “screaming architecture”: if you glance at the program's folder structure, what is the most blatant thing about it - what is it “screaming” about?

If there's a folder full of controllers, and a folder full of views, and another for models, then it's screaming “I am an ASP.Net MVC project! I do ASP MVC things!”. If there's a folder called “Artists” and another called “Genres”, each containing controllers, views and other classes related to that feature, it's instead saying “I am a music catalogue on the web”.
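As a sketch (folder and file names are illustrative, not from the 7digital codebase), the two layouts might look like this:

```
Framework-first ("I am an ASP.Net MVC project!"):
    Controllers/ArtistController.cs
    Controllers/GenreController.cs
    Models/Artist.cs
    Views/Artist/Index.cshtml

Feature-first ("I am a music catalogue on the web"):
    Artists/ArtistController.cs
    Artists/Artist.cs
    Artists/Views/Index.cshtml
    Genres/GenreController.cs
```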

I personally feel that “screaming architecture” is a very poor name for a very good concept. The architecture isn't having a crisis; it's not running around with its hair on fire shouting “aaargh!”. Maybe Uncle Bob has more positive associations with the word “screaming”. In his sense of the word, every architecture is screaming about something - the question is whether it's screaming about the important thing.

Everything we do should be driven by clear business goals and objectives. Where they are lacking we should go and find them.

We expect business needs to be provided as problems that need solving with clear expectations and measurables without prejudice towards the implementation.

Release Early and Often; Fail Early and LOUDLY!

It’s essential we can respond quickly to changing business requirements. The best measure of our effectiveness in doing so is via frequent predictable releases through a steady rhythm of working. Things need to be easy to change (maintainable) and delivered at a sustainable pace.

It’s far preferable to get something into production as soon as possible and develop it iteratively based on feedback than to get bogged down in speculative analysis or a fear of not making all the right decisions up front (be that regarding technology choices or requirements).

Failures are expected, and welcome. When projects fail, we learn about other routes that might work. When software fails, it tells us about invalid assumptions we’ve made. The earlier and louder the failure, the more valuable that information is.

ServiceStack is a comprehensive web framework for .NET that lets you set up a REST web service with very little effort. We already use OpenRasta to achieve the same goal within our stack, so I thought it would be interesting to compare the two and see how quickly I could get something up and running. What most interested me initially about ServiceStack was its claimed out-of-the-box support for Memcached, which we already use extensively to cache DTOs, and Redis, the ubiquitous NoSQL key-value store.

Getting cracking

I set myself the task of creating a basic endpoint for accessing 7digital artist, release and track details, taking advantage of ServiceStack’s ability to create a listener from a console application so I didn’t have to waste time setting it up via IIS.

Over the last month we've started using ServiceStack for a couple of our API endpoints (go to the full ServiceStack story here). We're hosting these projects on a Debian Squeeze VM using nginx and Mono. We ran into various problems along the way, which we'll explain, but we also managed to achieve some interesting things; here's a summary. Hopefully you'll find this useful.

Nginx

We're using nginx and FastCGI to host the application. This is good from a systems perspective because our applications can run without root privileges. For communication between mono-fastcgi and nginx, we use a unix socket file instead of proxying through a local port. This makes configuration much easier, as you map applications to files rather than port numbers, so the convention rules are much more straightforward. (Besides, you may be hit by a memory leak if you don't use unix socket files.) Furthermore, using files instead of ports has made our life easier for automated deployments because: