Velocity Conference Takeaways

7digital software developer Mia Filisch attended the Velocity conference in Amsterdam on October 28th. She was kind enough to share her account of the core takeaways with us here. The recurring theme of security inspired some internal knowledge-sharing sessions she has already started scheming on, and the diversity of insights made for a productive and informative conference. See below for her notes.

Be aware it’s pretty long (at Velocity the session took 3 hours, and that was with the presenter actually skipping all the exercises), but it really does cover a lot.

Using Docker Safely (Adrian Mouat)

This talk discussed the different attack vectors of containers, as well as a good few practical steps and strategies for applying common security paradigms (defence-in-depth and least privilege) to Docker and containers generally.

Managing Secrets at Scale (Alex Schoof)

As an industry, we don’t currently tend to manage secrets very well (even bearing in mind that security is always about trade-offs).

Secret management should be considered tier 0 / core infrastructure (should be highly available, have monitoring, alerting and access control)

In light of this, Schoof proposed the following core principles of modern secret management:

The set of actors who can do something should be as small as possible

Secrets need to expire: set up efficient, easy ways to do secret rotation - this shouldn't require a deploy (it also implies that secrets shouldn't live in version control)

It should be easier to handle secrets in secure ways than insecure ways

Security of a system is only as strong as its weakest access link

Secrets must be highly available (as they will stop the basic functioning of apps if they aren't)
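The expiry principle is easy to illustrate in isolation. Below is a minimal Python sketch (all names, and the 30-day policy, are hypothetical rather than from the talk) of a secret record that refuses to serve stale material:

```python
import time

# Assumed rotation policy for illustration only - a real store would make
# this configurable per secret.
MAX_AGE_SECONDS = 30 * 24 * 3600

class Secret:
    def __init__(self, name, value, issued_at):
        self.name = name
        self.value = value
        self.issued_at = issued_at

    def is_expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.issued_at > MAX_AGE_SECONDS

def fetch(secret):
    # Refuse to hand out stale material. The rotation itself would be done
    # by the secret store, not the application, and needs no deploy.
    if secret.is_expired():
        raise RuntimeError(f"secret '{secret.name}' has expired - rotate it")
    return secret.value
```

The point of the sketch is that expiry is enforced at read time, so a forgotten secret fails loudly instead of living forever.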

The talk went on to discuss the various aspects of building a secret management system, which I’ll leave you to follow along via the slides - it was quite interesting.

Existing services discussed and recommended in the talk were Vault, Keywhiz and CredStash. All of these solutions are still pretty new, so with any of them there’ll probably be quite a bit of tweaking required to get a management system in place that works well.

Seeing the Invisible: Discovering Operations Expertise (John Allspaw)

John Allspaw revealed what he gets up to in his free time: pursuing an MA in “Human Factors and Systems Safety” at Lund University in Sweden (obviously).

His own research explores human factors in web engineering, both with respect to understanding catastrophic failures and with respect to understanding the human factors involved in not having catastrophic failures in the face of things potentially going wrong literally all the time. Human Factors and Ergonomics (HFE) research has a long history in areas like aviation, surgery and mining, but our industry is still relatively under-researched.

TL;DR: The language we use and views we hold when talking about failure shape the outcome of that discussion, and what we learn for the future.

Both “why” and “how” questions tend to limit the scope of our inquiry into incidents; “what” questions are a much better device for building empathy, and also help focus the analysis on foresight rather than its less constructive counterpart, hindsight, which more easily falls prey to various cognitive biases and to blameful thinking.

Always assume local rationality: “people make what they consider to be the best decision given the information available to them at the time.” There isn't really a just culture that doesn't revolve around this premise.

Alert Overload: Adopting A Microservices Architecture Without Being Overwhelmed With Noise (Sarah Wells)

No huge surprises but a good summary on how to set up useful alerts - below are some key points discussed.

Focus on business functionality:

Look at architecture and decide which parts or relationships are crucial to your core functionalities

Decide what it is that you care about for each - speed? errors? throughput? ...

Focus on end-to-end checks - ideally you only want an alert where you actually need to take action

Make alerts useful, build with support in mind!

Readability! (e.g. use spaces rather than camel casing, etc.)

Add links to more information or useful lookups

Provide helpful messages

If most people filter out most of the email alerts they are getting, you should probably fix your alert system.
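The “build with support in mind” points can be condensed into a tiny alert formatter. This is a hypothetical Python sketch (the metric names, thresholds and message shape are my own, not from the talk): readable wording, a concrete value, and a link to somewhere useful.

```python
import re

def humanise(metric_name):
    # Insert spaces at camel-case boundaries so the alert reads as prose,
    # e.g. "checkoutServiceErrorRate" -> "checkout service error rate".
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", metric_name).lower()

def build_alert(metric_name, value, threshold, runbook_url):
    # A support-friendly alert: what happened, how bad, where to look next.
    return (
        f"{humanise(metric_name)} is {value} (threshold {threshold}). "
        f"See runbook: {runbook_url}"
    )
```

The design choice is simply that whoever is on call at 3am should not have to decode the alert before acting on it.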

The Definition Of Normal: An Intro and guide to anomaly detection (Alois Reitbauer)

As anomaly detection has a nice role to play in spotting issues early (ideally before any really bad things happen), I was really excited about this talk, but it quickly turned out that if you’re not from a relatively strong maths / stochastics background (which I am not), you’ll probably need to rely on other people for anomaly detection magic. So the following is a more high-level view.

Anomalies are defined as events or observations that don’t conform to an expected pattern.

Events are checked against your hypotheses, applying a likeliness judgement

How the event performs against this likeliness judgement determines whether it is an anomaly or not

How do you approach setting the baselines which define your normal model? One thing to bear in mind is that some measures (such as the mean/average or median) don’t learn very well. The presenter recommended using exponential smoothing instead, since it is both easy to calculate and learns well.
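As a rough illustration of why exponential smoothing “learns” where a plain mean over all history barely moves, here is a hypothetical Python sketch (the alpha and tolerance values are arbitrary, not from the talk):

```python
def smooth(values, alpha=0.3):
    """Single exponential smoothing: each new observation pulls the
    baseline towards it by a factor of alpha."""
    baseline = values[0]
    baselines = [baseline]
    for v in values[1:]:
        baseline = alpha * v + (1 - alpha) * baseline
        baselines.append(baseline)
    return baselines

def is_anomaly(value, baseline, tolerance=0.5):
    # Crude likeliness judgement: flag values far from the learned baseline.
    return abs(value - baseline) > tolerance * baseline
```

The smoothing is a single multiply-add per observation, which is what makes it cheap enough to run over every metric you collect.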

Tech is only half the work - identify all stakeholders and their goals; involve Legal/Finance early (especially if you might have to battle early terminations of legacy infrastructure contracts); work on awareness and knowledge transfer across teams.

WebPageTest.org now offers a few “Real Mobile Networks” test locations - only a handful for the time being, but if they extend this it could be pretty interesting for us when testing client web apps from different locations!

We have recently been working on an incremental indexer for our Solr-based search implementation, which was previously updated only sporadically due to the time a complete re-index took: about 5 days to create the 13GB of XML, zip it, upload it to the server, unzip it and then re-index. We have created a Windows service which queries a denormalised data structure using NHibernate. We then use SolrNet to create our Solr documents and push them to the server in batches.
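The batching step itself is language-agnostic. Here is a hypothetical Python sketch of it (the real service does this in C# via SolrNet; the batch size is an assumption):

```python
def batches(docs, batch_size=1000):
    """Yield successive fixed-size batches from a document sequence, so the
    search server receives many small pushes instead of one huge re-index."""
    batch = []
    for doc in docs:
        batch.append(doc)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch
```

Pushing in batches keeps memory bounded on both ends and means a failure only loses one batch, not five days of work.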

After reading the O’Reilly book “REST in Practice”, I set myself the challenge of using OpenRasta to create a basic RESTful web service. For the first day I decided to concentrate on getting a basic CRUD app, as outlined in chapter 4, working. This involved the ability to create, read, update and delete physical XML file representations of Artists. The book describes this as a Level 2 application on Richardson’s maturity model, as it doesn’t make use of hypermedia yet. One reason OpenRasta is such a good framework for implementing a RESTful service is that it deals with “resources” and their representations: as outlined in “REST in Practice”, a resource is anything accessible via a URI, and OpenRasta handles this model perfectly, as it was built around it from the ground up.
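The Level 2 idea can be sketched independently of OpenRasta. This hypothetical Python sketch treats each Artist URI as a resource manipulated only through the four CRUD operations (the URIs and store are illustrative, not from the book or my service):

```python
# URI -> representation; stands in for the physical XML files on disk.
artists = {}

def create(uri, representation):
    artists[uri] = representation          # POST/PUT

def read(uri):
    return artists.get(uri)                # GET

def update(uri, representation):
    if uri in artists:
        artists[uri] = representation      # PUT

def delete(uri):
    artists.pop(uri, None)                 # DELETE
```

What makes this Level 2 rather than Level 3 is that the representations contain no hypermedia links telling the client what it can do next.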

When bootstrapping a StructureMap registry, you are able to set the "lifestyle" of a particular instance using StructureMap's fluent interface. For example, when using NHibernate, it is essential that you set up ISessionFactory as a Singleton and ISession on a per-HTTP-request basis (achievable with StructureMap's HybridHttpOrThreadLocalScoped directive). Example:

For<ISessionFactory>()
    .Singleton()
    .Use(SessionFactoryBuilder.BuildFor("MY.DSN.NAME", typeof(TokenMap).Assembly))
    .Named("MyInstanceName");

For<ISession>()
    .HybridHttpOrThreadLocalScoped()
    .Use(context => context.GetInstance<ISessionFactory>("MyInstanceName").OpenSession())
    .Named("MyInstanceName");

It's nice and easy to test that the Singleton was created with a unit test like so:
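The distinction between the two lifestyles is easy to show outside StructureMap. This is a hypothetical Python sketch (not StructureMap's API) of a once-built factory and one-session-per-request scoping:

```python
class SessionFactory:
    instances = 0
    def __init__(self):
        SessionFactory.instances += 1  # expensive; should happen once
    def open_session(self):
        return object()  # stand-in for an NHibernate ISession

_factory = None

def get_factory():
    # Singleton lifestyle: built once for the whole process.
    global _factory
    if _factory is None:
        _factory = SessionFactory()
    return _factory

def get_session(request_scope):
    # Per-request lifestyle: one session, cached on the request's own scope.
    if "session" not in request_scope:
        request_scope["session"] = get_factory().open_session()
    return request_scope["session"]
```

The factory is expensive and thread-safe, so one per process is right; sessions are cheap and not thread-safe, so one per request is right.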

We have been using Solr for search for a while. Solr is fantastic, but the way we get our data into Solr is not so good: the DB is checked for new/updated/removed content, which is then written into a jobs table, which in turn is polled for pending jobs. There are numerous issues with using a DB table as a queue; some for MySQL are listed at:

To stop using our DB as a queue, I decided to test setting up and using an AMQP-based message queue. AMQP is an open standard for passing messages via queues. The final goal would be to allow other teams to push high-priority updates or new content directly to the queue rather than having to go through the DB, which can add considerable latency to the system.

For this test RabbitMQ was used, as it has a .NET library, runs on virtually all OSes, has good language support, and has good documentation. This can all be found at the RabbitMQ site: http://www.rabbitmq.com/
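The shift away from polling a jobs table can be sketched in-process. This is a hypothetical Python example using a stdlib queue in place of a RabbitMQ broker (with RabbitMQ the queue lives in the broker and is reached over AMQP, but the shape is the same: producers push, the consumer blocks instead of polling):

```python
import queue
import threading

updates = queue.Queue()
indexed = []

def producer():
    # Other teams would push updates straight onto the queue,
    # skipping the DB round-trip entirely. IDs here are illustrative.
    for track_id in (1, 2, 3):
        updates.put({"track_id": track_id, "action": "update"})
    updates.put(None)  # sentinel: no more work

def consumer():
    while True:
        job = updates.get()  # blocks until a message arrives - no polling
        if job is None:
            break
        indexed.append(job["track_id"])

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
```

Because delivery is push-based, latency is bounded by the broker rather than by how often a polling loop wakes up to scan the jobs table.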