That's so stupid that it's not even wrong
A blog for jon@eaves.org (https://joneaves.wordpress.com)
Now, Turning to Reason, & Its Just Sweetness (the design)
Thu, 08 Mar 2018
https://joneaves.wordpress.com/2018/03/08/now-turning-to-reason-its-just-sweetness-the-design/

This is the first part in a 3-part blog series on how to write code so you can build testable systems with external dependencies. The link to part 2 is at the end of the post.

There has been some discussion over the last year or so about testing. Some of this discussion has been naive but well meaning. The intent is good, we all want to build reliable and robust systems, but it’s been naive because the chosen path leads to the opposite.

The general thrust of this discussion is about “mocking out external interfaces”. Which, on the surface seems like a sensible thing to do. After all, we want to test our software and we don’t want to have these slow external interfaces (like a database, a message queue, a file system) impact our development. So clearly, the approach is to mock/stub out the calls to AWS/MQ/database in our code and “pow” – tested, reliable software.

Well, no.

What we have now is coupled, fragile software and those tests you spent all that time writing, they’re basically testing “does String.equals() work?”

So, it’s clear that there’s a gulf in both education and experience in this space. Prompted by the recent discussions on Slack, I’ll present to you a narrative on the design and development of a multi-component software system.

First, we have some context so people can do some reading.

The assertion that we need “AWS stubs” to test systems that are deployed to AWS

Second, I’m going to talk a little bit about testing before launching into some solution design so that people can understand the why part.

Testing is hard. Much harder than people give it credit for, and what’s worse, most people think that testing is easier than it is, leading to lots and lots of terrible tests. Mocks and stubs are an indication of a design smell. Ken’s comments in the Slack conversation and his post provide a more concrete description of why.

Along with testing being hard, there are different concerns with testing, and this is fundamentally where the big issue occurs. Not all testing is symmetrical, and not all techniques are sensible or desirable.

Within a software system we have 3 main categories of tests.

Functionality tests – these are tests that ensure that the software WE are writing behaves as WE expect it to. The most notable type of test encountered in this space is the unit test.

Contract verification – these are fixtures that VALIDATE that the components and interfaces our unit-tested software depends on will continue to work. Think of these as a sort of pre-condition. They’re not so much tests as they are contract verification. It just so happens that a lot of the testing frameworks in the software ecosystem are very well suited to building contract verification suites.

Smoke tests – these are fixtures that VALIDATE that the components deployed into an environment are correctly configured, the interfaces are available as expected, and they all operate together correctly. This can be a single verification, a sub-set of the contract verification tests, or a synthetic end-to-end transaction through the system. So many choices, so many options.

For the purposes of this narrative, I’m only going to be interested in the first 2, as they are the general consideration for component design and development. This doesn’t, and shouldn’t, mean that for an operational system the 3rd category isn’t equally, or even more, important – it’s just that I’m going to deliberately put it out of scope for now.

Ok, context done. Let’s do some design. Step one is to have a look at the problem we want to solve, and fortunately for us we have a spare one.

The system receives an SQS event which indicates which files in S3 to load and process. The files contain some numbers on which we “do math”, and the result is written out to S3.

Many people would at this point launch into TDD, and while that might seem sensible, I’d always advocate that it’s worth spending some time thinking about the problem, with some analysis and preliminary high-level design.

30 minutes later, add a small amount of Ken Scambler for sanity, and we have the following initial thoughts about how the system design will proceed. Note, this isn’t set in stone, but when doing TDD, it’s not some random wandering in the dark about where your design will end up – you should be doing science here. Have a hypothesis, and let the code help you work that all out.

We can see the main components, and have identified the basic flows. Nothing too exciting, probably 30mins worth of chatting with Ken. For those interested, he did a “functional style” analysis and having us both work on the design we ended up with substantially the same system components and interaction design. Was a lot of fun. Recommended. Would pair with Ken again.

Now we want to think about one of the most interesting parts of the implementation: how will the use of the data store work with the event queue? Part of the requirements says that the events are only to be removed when the data store items have been successfully processed – so we need some form of signalling between the two. We could couple the two together with some horrible “if” code, and expose the innards of the event queue. Guaranteed this will be hard to test, so we’ll just dependency-inject a processor into the event queue – that seems like the best approach. Writing code will test it out, but if you don’t know the direction you’re heading in, you’ll just wander all over the map.

(Note: You’ll see that I’ve put some form of “attach()” method in the interface/contract. This gives me some way of doing “authentication” / “connection” to the external systems. Probably not going to implement in the initial phases, but just a reminder that it’s probably going to be important at some point)

The important part of this is the process(Processor p):boolean method. This enables us to “tell, don’t ask” when processing things on the event queue. For now, we’re only going to get one type of thing on that queue so this is probably the simplest implementation and all that is needed. If there was a bunch of different things on the event queue, I’d probably construct the event queue with some form of Factory that would allow each of the events to be processed, but no need for that now.
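To make the shape concrete, here is a minimal sketch of that queue/processor contract. Only attach() and process(Processor p):boolean come from the design above; everything else (the names, the in-memory queue, String payloads) is my own illustration, not the post’s actual code.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Only attach() and process(Processor) echo the design; the rest is guesswork.
interface Processor {
    boolean process(String payload);      // true when handling succeeded
}

interface EventQueue {
    void attach();                        // authenticate/connect to the external system
    boolean process(Processor p);         // "tell, don't ask": the queue drives the processor
}

public class EventQueueSketch {
    // A trivial in-memory implementation: an event is removed only when
    // the injected Processor reports success.
    static class InMemoryQueue implements EventQueue {
        private final Deque<String> events = new ArrayDeque<>();
        InMemoryQueue(List<String> initial) { events.addAll(initial); }
        public void attach() { /* nothing to connect to in-memory */ }
        public boolean process(Processor p) {
            String head = events.peek();
            if (head == null) return false;
            boolean ok = p.process(head);
            if (ok) events.poll();        // remove only on success
            return ok;
        }
        int size() { return events.size(); }
    }

    public static void main(String[] args) {
        InMemoryQueue q = new InMemoryQueue(List.of("event-1"));
        q.process(payload -> false);      // processing failed: event stays queued
        System.out.println(q.size());     // 1
        q.process(payload -> true);       // processing succeeded: event removed
        System.out.println(q.size());     // 0
    }
}
```

Because the injected test double is a real (if tiny) implementation of the same contract, tests can assert on behaviour (“is the event gone?”) rather than on mock interactions.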

The last little bits are pretty similar, and don’t really require any major thought – just simple data sources and sinks.

As stated above, names-not-final. There’s nothing about what I’ve scrawled here that is “forcing” me to do it in this way, and the code may well change my thoughts as I get into it. However, spending the (about) 60 minutes to draw these 4 pictures and talk with Ken gives me confidence I have a robust solution that’s going to fulfil the solution requirements as well as have the right sorts of extension points. The discussion and some thought experiments means that I’m pretty sure I can implement this solution using any underlying implementation technologies. Files, databases, queues, sockets etc. This is the most important thing when designing something – it’s not about “can I build this using technology <X>”, it’s “can I build this in ANY technology”.

Finally, if we look at this now slightly differently, we have the classic “boundaries” model where our business logic (the calculation) is all in the “middle” with our external interfaces providing the interfaces to the horrible messiness of the outside world. Functional core, imperative shell. This is another good indication that our design proposal has merit.

This also helps us understand where our testing styles should be going. We should have our unit/functional tests for the “functional core”, and contract/verification tests for our “imperative shell”. Our code is the core – this is really, really important and is the key point that needs to be made from this entire narrative. Our job is not (NOT!) to test the AWS libraries, the DB connection libraries, the SNS/SQS libraries – these can be verified at run-time using smoke tests, or at various points in the development cycle using contract/validation tests.

For people who worry about the protocols – that’s not a testing job, that’s a business logic task. If the payload in the event queue is “different”, then the system should just fail (gracefully or otherwise). The contract is broken, you no longer have to continue to behave rationally, and you can make sensible decisions about your own reactions. Under no circumstances should you attempt to “massage/hide” the broken contract. That leads to hard-to-detect errors and is a significant source of production failures. Just fail early – and in close proximity to the broken contract. This is a fundamental of good software implementations.
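As a hedged illustration of failing early, close to the broken contract: a payload parser that rejects a malformed event outright rather than massaging it into something that limps on. The “bucket/key” payload shape and all the names here are assumptions for the sketch, not the post’s actual protocol.

```java
// Hypothetical payload parser: a broken event contract fails immediately,
// right where the breakage is detected.
public class PayloadContract {
    // Assumed payload shape for this sketch: "bucket/key".
    static String[] parse(String payload) {
        if (payload == null || !payload.matches("[^/]+/.+")) {
            throw new IllegalArgumentException("broken event contract: " + payload);
        }
        return payload.split("/", 2);
    }

    public static void main(String[] args) {
        String[] parts = parse("my-bucket/input/testdata.txt");
        System.out.println(parts[0] + " : " + parts[1]);
        try {
            parse("no-separator");        // contract broken: fail here, not three layers later
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```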

Now, Turning to Reason, & Its Just Sweetness (the aftermath)
Thu, 08 Mar 2018
https://joneaves.wordpress.com/2018/03/08/now-turning-to-reason-its-just-sweetness-the-aftermath/

This is the last part in a 3-part blog series on how to write code so you can build testable systems with external dependencies. The first post can be found here.

Thanks to Ken Scambler, Milly Rowett and Alyssa Biasi who all made contributions to the posts.

Things that happened about the design.

The initial design is pretty compact, and the components are well factored in terms of their responsibility, but I noticed a bit of a smell in the implementation: some twisty-turny logic to deal with the receipt of the message on a queue, and then waiting around to see if the file existed.

The implementation problem the design creates (which I ended up ignoring for this example) is that if the file _never_ appears, you end up with all sorts of knots, and I don’t really want that sort of logic to be part of my “process the file” code.

Going through the design a second time with Alyssa Biasi, we came up with separating the logic for “got the message, wait for the file” and “process the file”. We can decouple these 2 problems, and giving each part its own responsibility makes it a lot cleaner. Then we can tune the “retry logic” in the “wait for the file” code, and the “process the file” logic never needs to know. A simple IPC mechanism (another queue message suffices) plus responsibility separation seems a lot cleaner. The nice thing is that it’s just another form of QueueProcessor, so we can re-use most of the code framework, with just some changed wiring. Winning.
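The split might be sketched like this, assuming a plain second queue message is the IPC: one step owns the “wait for the file” retry concern and, on success, hands over via the second queue; the “process the file” step never sees any of it. All class and method names here are mine, not the post’s.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Set;

public class TwoQueueSketch {
    // "Got the message, wait for the file": returns true once the awaited
    // file exists and has been handed over to the processing queue.
    static boolean awaitStep(Set<String> store, Deque<String> waitQ, Deque<String> procQ) {
        String awaited = waitQ.poll();
        if (awaited == null) return false;
        if (store.contains(awaited)) {
            procQ.add(awaited);          // the simple IPC: another queue message
            return true;
        }
        waitQ.add(awaited);              // requeue; real code would back off and cap retries
        return false;
    }

    public static void main(String[] args) {
        Set<String> filesInStore = Set.of("testdata.txt");   // stands in for S3
        Deque<String> waitQ = new ArrayDeque<>(List.of("testdata.txt"));
        Deque<String> procQ = new ArrayDeque<>();

        awaitStep(filesInStore, waitQ, procQ);
        // "Process the file" knows nothing about the waiting/retry logic.
        System.out.println("processing " + procQ.poll());
    }
}
```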

In the implementation of S3DataStore(), there’s the opportunity to decouple the Authentication method from the implementation. For the purpose of the example I didn’t bother adding this complexity in, but my original notes in the design highlighted how it might happen. The Java AWS libraries actually make the implementation of such a design very straightforward. (Currently the implementation assumes Anonymous “world read” access.)

Things that happened about the code.

It was fun writing code again. There are parts of the code that I’d refactor to make neater: a protocol implementation decision about how to pass information in the request queue, and some of my test code, are kinda bleh and I’d like to fix them in the future.

My choice of code structure (and lack of new Java 1.8 features) is based on history and a lack of writing much code. I didn’t particularly want to use this as a mechanism for learning new coding techniques, and to be frank, there’s nothing complicated enough in here that warrants it. There’s definitely areas that could be cleaned up, generally where I create local variables for return values. Most of those can be inlined (the compiler is doing this anyway), but it’s handier for me to examine the variables in a debugger when I’m tracing things.

I was also working with Milly Rowett for a significant part of the development. When mentoring, I prefer to keep things obvious, even if it means a little more typing is involved. Milly might be able to provide feedback on how valuable it was – it certainly was easier for me to explain as I typed.

The code structure completely changed when I decided to use maven (to get the AWS dependencies managed) which was pretty painless, but annoying. Not sure I would have done it differently, because I didn’t need maven until half way through. The final structure is fine, but created a change-set which was nothing but structural changes.

The code isn’t really meant to be a demonstration of “great art in writing code” or “faultless”. It was done as an implementation to show how to design code so you can test external dependencies (such as AWS) without relying on mocking libraries or having the code be solely dependent on that implementation.

Now, Turning to Reason, & Its Just Sweetness (the code)
Thu, 08 Mar 2018
https://joneaves.wordpress.com/2018/03/08/now-turning-to-reason-its-just-sweetness-the-code/

This is the second part in a 3-part blog series on how to write code so you can build testable systems with external dependencies. The link to part 3 is at the end of the post. The first post can be found here.

Author’s comment: This post was written over a period of a couple of weeks, and developed alongside the code. There is inconsistency between what I say at the start and what ends up in the code. This is to be expected, and probably worth examining to see how things changed – as much to see the thought process as to see that while the design tends to remain fairly static, the implementation changes as new information is gathered. The total time spent on the code is about 10 hours. Of that time, about 4-5 hours was code developed while mentoring a (very smart) graduate. Another consideration is my unfamiliarity these days with code authoring (sadly). I imagine if I was to start again, I’d probably be able to do it all in about 4 hours (if that). The code is pretty trivial (like most code).

—- Actual post follows —

First, simple structure, as would be expected from most projects. The “contracts” section contains the verification tests for our external interfaces – the sort of testing that is “pact-like”, in that if our code breaks for unexpected or unexplained reasons, we can run our contract tests and see if the remote systems that we have no control over have changed. These are our sanity check. They’re not there to “prove” anything – other than our confidence that, at the time the code was written, it worked against those contract tests. It’s a good idea, you should think about it.

Now, our code layout looks like our design, even after working through it TDD-style. That’s pretty handy. There are 2 useful observations and questions at this point. The first is “how do we calculate the math part?”. The answer to that is “we implement a Processor” for it. The second is “where is the application?”. The answer to that is “there isn’t one yet”, but it’s basically a simple wrapper around the EventProcessor. We can see how it will look if we examine one of the EventProcessor tests.

We can see our alternative implementations of the interfaces for DataStore, Processor, EventQueue and OutputDevice for our testing. I’m not completely happy with the EventProcessor and needing to set the delay and retry. It seems to be the right thing internally, it just looks ugly. Maybe something better will emerge further down the line and EventProcessor will become an interface, with some form of concrete implementation. For now it’s concrete, and it’s the only real co-ordination point within the code.

The interesting things within the code occur in the EventQueue. The queue has the defined responsibility for processing the events. This is by design, and we can see that the SuccessRemovalEventQueue in the example above has injected into it a List of Events (the queue) and the Processor, which in this case is the KeyExistsInStoreProcessor. These particular implementations were chosen because I want to start modelling and investigating parts of the real solution that’s going to be needed. The concrete implementations here for EventQueue and OutputDevice are used as a “dipstick” (Testing with mock objects – JUnit in Action – page 106).
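A “dipstick” in this sense is a real, hand-rolled implementation of a contract that records what flowed through it, so a test can inspect the results afterwards instead of scripting mock expectations. A minimal sketch: the OutputDevice name matches the design, but RecordingOutputDevice and its shape are my own invention.

```java
import java.util.ArrayList;
import java.util.List;

public class DipstickSketch {
    interface OutputDevice {
        void write(String result);
    }

    // A real (if tiny) OutputDevice that records everything written to it.
    // Tests read the dipstick afterwards instead of verifying mock calls.
    static class RecordingOutputDevice implements OutputDevice {
        final List<String> written = new ArrayList<>();
        public void write(String result) { written.add(result); }
    }

    public static void main(String[] args) {
        RecordingOutputDevice out = new RecordingOutputDevice();
        out.write("sum=42");              // the code under test would do this
        System.out.println(out.written);  // [sum=42]
    }
}
```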

This form of development makes it trivial to then compose up the application. Turns out – it’s just setup, then construction of the dependencies. I wanted to print out when things were processed in the first version (let’s call it the MVP), so to do that we just wrote ConsoleOutput to implement the output device. Total time to create the application – 2 minutes.

You’ll notice that it’s one of the test cases that’s been modified. That’s ok – it’s a great way to start – the actual application doesn’t need to know, and we can focus on implementing each of the features one at a time. We’ll start with the DataStore, because that’s probably the easiest part to implement first.

In the blink of an eye, the BatchFileProcessor has changed to the following. And with the great confidence that as the injected strategies have the same contracts as the tested ones, our imperative shell works as advertised.

Now, build out DoSomeMathProcessor() with TDD.

Essentially we test this by having a known set of values passed into our data store. This is made easier by having an “object mother” (ObjectMother) for the creation of the test data. Notice how we’re testing results – not trying to delve into the class to see if “math works”, but checking that we’re getting the right results.
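A minimal sketch of that arrangement, assuming the processor simply sums the numbers (the actual “math” isn’t specified here): the ObjectMother is the one well-known place the test data comes from, and the test asserts on the result rather than on the processor’s internals.

```java
import java.util.List;

public class MathProcessorSketch {
    // The "object mother": one well-known source of test data.
    static class ObjectMother {
        static List<Integer> knownNumbers() { return List.of(1, 2, 3, 4); } // known sum: 10
    }

    // Stand-in for DoSomeMathProcessor's calculation (assumed to be a sum).
    static int doSomeMath(List<Integer> numbers) {
        return numbers.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        // Assert on the result, not on how the class does the math.
        System.out.println(doSomeMath(ObjectMother.knownNumbers())); // 10
    }
}
```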

We’re at the point in our code where we now have great confidence that if we have a DataStore that returns us valid data, we can add it up correctly. So, do that next.

And the contract test for S3DataStore(). Note here that I’m not attempting to mock, or stub, or do anything to test the implementation of S3DataStore() with the BatchFileProcessor. This will be creating the contract for the use of the S3DataStore(). If I can use it against S3, then it works. That’s it!

There’s a little bit of context worth discussing here. For this particular bit of code to work, that means that the S3DataStore() class needs to be implemented and working. This was done in a few stages (over about 20-40 minutes as I looked up various bits of the S3 API). I started with the ds.exists() test, because that also allowed me to see if the Authentication parts were going to work.

For this test to ever work, we need to set up the world. That’s ok – we know that, we’re not trying to fake out the world here, this is a contract test to verify if our code works against the real world. This could also form part of a smoke test. I manually set up an S3 bucket in the Sydney region, and manually copied the “testdata.txt” file into it. I could use a shell script to do this, I could use a bit of Java code to do this and clean up after. That’s all completely valid, but doesn’t really help in understanding about “how to test AWS code” (really, it’s the implementations of the imperative shell we’re testing, AWS is just a particular implementation).
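One way to shape such a contract fixture is to write it against the DataStore interface and point it at whichever implementation you want to verify – S3DataStore() against the real bucket, or an in-memory store, as here, purely so the sketch runs without any AWS setup. The method names exists() and load(String path) echo the post; everything else is an assumption for illustration.

```java
import java.util.Map;

public class DataStoreContractSketch {
    interface DataStore {
        boolean exists(String path);
        String load(String path);
    }

    // The contract fixture: the same check runs against ANY implementation,
    // whether backed by S3, a file system, or a Map.
    static boolean verifyContract(DataStore ds, String knownPath, String expected) {
        return ds.exists(knownPath) && expected.equals(ds.load(knownPath));
    }

    // In-memory stand-in so this sketch runs anywhere.
    static DataStore inMemory(Map<String, String> files) {
        return new DataStore() {
            public boolean exists(String path) { return files.containsKey(path); }
            public String load(String path) { return files.get(path); }
        };
    }

    public static void main(String[] args) {
        DataStore ds = inMemory(Map.of("testdata.txt", "1 2 3"));
        System.out.println(verifyContract(ds, "testdata.txt", "1 2 3")); // true
    }
}
```

Against the real thing, only the construction changes: the fixture itself stays identical, which is what makes it a contract rather than a test of any one implementation.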

The implementation is pretty simple. Current implementation is naive and will not fulfil the “requirements” if there are errors, but the interesting part is it’s trivial to add all the business logic and test it. If we need to check for malformed files, we can do that – and have the load(String path) method do sensible things. We can trivially create test cases for this to make sure the processor acts correctly.

At this point – the code is now “working” – and we can run our implementation. We would then choose the next part of the project to implement. If I was going to continue, I’d probably do the output notification – mostly because that requires the least amount of setup.

“Sense Amid Madness, Wit Amidst Folly” (Surface Detail)
Tue, 07 Mar 2017
https://joneaves.wordpress.com/2017/03/07/sense-amid-madness-wit-amidst-folly-surface-detail/

After 6 years (and change) of working with REA Group I have decided to resign. It’s mixed sadness and joy, as the place I joined all those years ago is not the place I’m leaving. That’s good, and that’s also one of the reasons.

I’m moving to a medical genetic interpretation service called myDNA (http://www.mydna.life). In short, people don’t always respond in the same way to medicines, and some of this is because of our genes. In some cases we can look at our genetic structure and give additional advice to doctors about prescribing certain medicine.

I think this is pretty cool, and I’m actually going to be working on things that will make life better for people every day. I’ll also be spending heaps of time working with software development teams directly, pairing with developers and improving their skills. At this stage it looks like I’ll also get to learn C# – which I’m pretty excited about.

One of the things that was important to me was being able to have a balance, and I’ve negotiated that with my new employer. I’ll be working 4 days a week, giving me a day to pursue my own objectives.

I will be:

Focusing on providing mentoring/coaching for Women in Technology. From juniors who want to learn to code, to seniors who want assistance with CVs, with conference talk preparation, or just general chatting about the state of the industry. I want to help. I want more women in the industry and I want it to thrive. I have 3 amazing women that I’m mentoring now who are happy to provide feedback should anybody be interested.

Cycling. Yeah. A spare day a week to go cycling. Living the dream. I suspect some of this will be consumed in housework. I’ve made a deal with Mike Rowe that I’m going to ride the 3 peaks with him next year – so I’m going to need that extra time.

Consulting. If you’re a corporate, with a lot of money that wants somebody with 30 years of software development experience, team leading experience, AWS experience, privacy/security/cryptography and just general knowledge of the industry to come and give guidance, feel free to contact me and see if we can work out a deal.

From what I can tell, the office is small, is open plan (but team sizes, not cattle-yard), and I get my own desk – one that I can leave stuff on overnight, with my monitors and keyboard and chair all set up how I like. I can have photos of Jo and George stuck there, and gaze at them when I’m thinking.

I’m unreasonably happy about these little things, but it shows how much those sorts of things count as part of being a human.

The journey starts again.

On the nature of solitude
Fri, 20 Mar 2015
https://joneaves.wordpress.com/2015/03/20/on-the-nature-of-solitude/

I don’t like being by myself as much as I am these days. It’s something I struggle with quite a bit. I’ve had quite a while to reflect on this topic, and there are a couple of words that are often used to describe the situation, but they mean quite different things to me.

The first is “alone”. To me, being alone is to not have physical or mental proximity to other humans. This is relatively rare, and for me it is a choice that I make if I decide to isolate myself. I like to be alone at times, and to ride, and to run. They are my favourite things to do alone.

The second is “lonely”. This is a feeling of a lack of connectedness with other humans, and in my case I feel this more strongly without a partner. I’m certainly least lonely at this stage when I have my close buddies over, chilling and talking shit. I feel less lonely when I have George with me.

Those who know me IRL would probably consider me fairly extroverted, and that’s true to some extent. I do enjoy being in groups of humans, and it’s something I’ve become comfortable with. It’s not natural for me by any means – I was taught this by my parents, and it’s something that I’ve worked on in my career. What many people might not understand is that I do like to be alone at times, and to contemplate the vastness during that time. I’ve never really been able to describe it well, but I like to ride (and now run) to the limits of my physical capabilities – and use that so my thoughts become focussed on “the now”.

I’ve done this for years, and really didn’t notice what I was doing until I had a conversation with a Twitter friend about what she gets out of Yoga and the mindfulness aspects of it. I like this alone, it’s active or voluntary alone-ness.

Then I read this; http://www.brainpickings.org/2014/09/03/how-to-be-alone-school-of-life/

I was impressed by how accurately it seemed to describe my feelings on the matter. The distinct difference between alone-ness and loneliness was laid bare – both what was confusing to me (I like to be alone, I don’t like to be lonely) and how the world reacts to alone-ness. I must say I’ve not really felt any great societal pressure about my need to be alone. Probably because it’s hidden behind my physical activities that are considered normal to be performed solo.

Turning the gaze to loneliness, it’s a bit harder to reconcile my thoughts and feelings on the emotion. While I’d like to “not care”, I find it very hard – and it’s almost something that I find defining as a human. I’m unsure what other people in similar situations to myself do, or if they feel the same way. I suspect that at this point some form of substitution of life occurs, where distraction, numbing or soothing becomes commonplace.

Working long hours? Drinking? Drugs? Religion?

Who knows?

Excuse me while I go for a ride and think about it a bit more…

“Men will always be mad, and those who think they can cure them are the maddest of all.”

Is software an asset or an expense?
Wed, 18 Mar 2015
https://joneaves.wordpress.com/2015/03/18/is-software-an-asset-or-an-expense/

I had a brief conversation on Twitter with Camille Fournier (@skamille) about costing for projects and development, and this is something that I’d been thinking about (but not writing about) since I was at ANZ bank around 2007 or 2008. Most of it was trying to explain/understand expectations about software development and delivery and “how much effort to put into projects”.

Even though I actually have a commerce degree, I don’t intend this observation to be a strict treatise on the accounting terms “asset costing” and “expense costing” but more about the general expectations set by considering the constructed software as an asset, or an expense.

The basic premise is that in general teams of software developers, in the absence of specific direction or rules will assume that the software being delivered will have the properties of an asset. The software will last for an extended period of time, it will be modified and updated and will not generally have a short defined end of life date.

However, in many cases the teams of people asking for software to be built will not always have the same view; they may well be asking for systems that are developed in shorter time frames and have a short, defined end-of-life date. This is generally where the difference in expectation on project cost (and time) comes from. We see this a lot in start-ups, where iterating and responding to new ideas trumps all possible long-term benefits. With nothing right now, there is no future for the start-up.

Coupled with this particular problem, there is generally a mismatch between how much effort the demand team expects a solution to take and how much effort the supply team will actually need to construct it.

So, what should we do about it?

At REA, I’m starting to help our teams with this, and the first step is to get some greater definition about the expectation of not only what problem the system should help solve (“functional requirements”) but also the scope of what that system should participate in (“non-functional requirements”).

Generally these would be called something like SLAs (service level agreements). How many users will it support? What is the uptime? What are the response rates? What are the transaction rates? These are pretty standard non-functional requirements that you’d see in systems descriptions. However, one key part that I’m trying to encourage people to think about is “how long should this last?”.

I think the first part of the problem is trying to get an understanding from the customers about what their goals are. Do they want to create some “disposable software” to solve this problem? Do they think that the solution to this problem should last for a long time and be enhanced over that time? Do they even understand there is a difference to the engineering required to do this?

Now, if we add to this the general trend towards microservices (smaller units of functionality that can be more easily replaced) maybe we are looking at a general shift in the way we write code for systems and how we might wish to think about, and set expectations for system development. Can we really think of software components as “write them to throw away”?

Certainly if we’re looking at treating the components or code like an expense, I think there’s a better chance. I also think that using expense based thinking for “research and discovery” might lead to more opportunities for faster iterating through ideas, knowing the expected use and lifespan of the work performed.

I’d also like to see more input from people who have tried similar ideas. It’s a slice of a topic about “software life-cycle management” that is more deserving of thought than most teams give it.

2014 – Random reflections
Mon, 29 Dec 2014
https://joneaves.wordpress.com/2014/12/29/2014-random-reflections/

I get to the end of the year and wonder what happened, and then I feel like I didn’t actually do anything much over the year. This time I thought I’d put some effort in to reflect on what I inflicted on the universe for the year.

Family

Starting with the most important part of my life. Life with George was again fantastic. He’s growing up into a wonderful human being. He’s so polite, so kind, so generous and just delightful to be with. I completely miss him when he’s not around. The best thing that happened in 2014 was a re-negotiation of our custody arrangements which gives me 6/14 (6 days out of 14) during the school year, and 7/14 during holidays. A great step forward, maybe 50/50 in the near future.

We have a good relationship. We’re both honest with each other about how we feel, and how we want to be treated. This leads to a few tough conversations at times, but they get easier, and pretty much all disagreements end very fast, and normally with lots of cuddles and “sorry how I behaved” – on both sides of the fence. He’s not the only one that has bad days, and it’s important to me that he knows that’s just a part of life, and that dealing with it as a family is crucial.

We have a house that we both love, his school is nice and close and we spend many, many hours a week playing cricket when we’re together. If we’re not off finding some nets, we’re playing keeping in the back yard, or watching it on TV. George loves cricket, and that’s been reflected by his achievements in playing with his new club. He plays in the under-10s, manages an average of about 20, a top score of 40 (off 4 overs) and best bowling of 2/1 (off 2 overs). He’s only been dismissed once, and that was an overenthusiastic pull shot that ended up destroying the stumps. Oops. Pretty handy in the field and likes to keep as well. It’s great to see him doing well at something that he enjoys.

He’s going great at school, he works hard and enjoys turning up and doing different things with his friends. He’s a good little man.

Health

Nothing really to report here. Both of us have avoided most of the really terrible coughs and colds, and despite our best efforts neither of us has managed to end up hospitalised from our recreational (mis)adventures. I’m fit and healthy, and after spending 5+ years being completely obsessed with riding bikes I’ve started to broaden my horizons to other activities.

I was finding it harder and harder to get consistently onto the bike while also looking after George. I’d need to spend a good 3-4 hours in the hills to get a solid workout, and that just wasn’t possible much of the time. So after Amy’s Ride this year I decided to take a break from riding for a while (to and from work doesn’t count), and in late October/early November started to look at running. Now, I hadn’t done any serious running for about 20 years, since I used to run in 10km fun runs. The good news is that my cardio fitness base is solid; the bad news is that I’m missing a lot of muscle development for running.

At this stage, I’m pleased with my progress, getting to 4:30 min pace for 5km and 5:30 pace for 10km. Only time will tell how the body will handle it, as I’m already noticing a few niggles. Hopefully just related to lack of muscle development in those areas.

From a mental health perspective, I’m probably in as good a shape as I’m likely to ever get. Most of the anxiety has gone, and the pretty severe dent my self-worth took during the later parts of the marriage has been repaired. I’m still pretty nervous about what relationships might mean in the future – but I’ll cross that bridge when it happens. Soon, I hope.

Friends

This was a really big year for friends. Some moved within Melbourne, some moved to another country (I miss you Rup!) and some had some bad news. The best thing for me was meeting up with friends I’ve known for close to 10 years. I was able to travel to Blizzcon in November and got to hang out for a week with the most awesome group of people from all walks of life. It would be pretty safe to say that I really didn’t want that week to end, with geeking out about computers, gaming and drinking far too many beers.

Work

There were 2 great things about work this year. The first was that REA started their graduate recruitment program, and I got to play a significant part in forming it, and getting the graduates on board and working with them. The second was that we finally managed to fill all the open roles in the Group Architecture team, and I can spend more time working with a team rather than trying to create it.

It’s fascinating working in the role that I have at REA, and it’s always challenging – most of these challenges are people challenges, not technical ones – and I’m constantly left open-mouthed at how some people react to change. I’ve blogged about my work a few times this year, and I hope to do it a bit more next year.

Personal Achievements

It was a pretty good year on this front. I’ve been working on an open source project for a very long time; it’s almost part of the furniture in my life and I don’t give it much thought. It then pops up at unlikely times to make me re-evaluate the reach my software has had. The software in question is BouncyCastle, a Java cryptographic library.

It’s been shipped in over a billion devices as part of the Android operating system in 2014 alone (3bn total)

It’s being used by 12m people creating virtual environments in Minecraft

It seems that a large book selling and cloud computing company may also be using it for various things internally (unconfirmed)

So, at this stage there are few electronically connected people that haven’t been directly or indirectly using software that I’ve written. That’s kinda cool and makes me feel pretty good.

I also managed to get back and do some conference speaking. Something I enjoyed doing years ago (pre-George) and thanks to Evan, Beth and the YOW! crew it was a great experience to do it again.

So?

2014 was a good year. Probably one of the best I’ve had in recent memory. I’m feeling more balanced as a person and more comfortable in my role as a parent. I’d like to spend a bit more time on my personal projects as I feel my software skills are deteriorating below where I’d like.

Life is good. I’m very lucky.

]]>joneavesYou are not your ideas – a strategy to lessen the blow of rejectionhttps://joneaves.wordpress.com/2014/11/20/you-are-not-your-ideas-a-strategy-to-lessen-the-blow-of-rejection/
https://joneaves.wordpress.com/2014/11/20/you-are-not-your-ideas-a-strategy-to-lessen-the-blow-of-rejection/#commentsThu, 20 Nov 2014 03:27:11 +0000http://joneaves.wordpress.com/?p=291Inspired by @dys_morphia on Twitter, I’ve decided to document my strategy for dealing with rejection of ideas. This particular approach came from a discussion with James Ross and Simon Harris many years ago while working together on a consulting project.

James, Simon and I were discussing a bunch of ideas about design and implementation. We were thrashing through them thick and fast, and each of us proposed particular solutions which would then be unceremoniously torn apart by the others. To people outside our little gathering it really looked like we were intent on destruction. Nothing could be further from the truth – even though the other two are mostly wrong about everything and can’t see the genius of my ideas – as the respect for our work and our worth is paramount in these discussions. Few ideas survived the withering attacks, yet none of us felt harmed, hurt or lacking in respect from the participants.

After we’d been doing this for a while, we started to reflect on why this is such an “easy” task for the 3 of us to perform, yet it appears to be very stressful for others. We talked a lot about rejection and about how people feel very close affinity to their ideas and proposals, and that rejection (or criticism) of them is like a personal attack.

James made this very clear explanation about how he thinks about ideas, and why Simon and I probably feel the same way – yet others struggle.

He said(*), “Many people hold their ideas close to themselves, their ideas are hugged, like a teddy to the chest, so any attack on the idea is in very close proximity to themselves and attacks hit not only the idea, but the person behind the idea. The idea is precious, there’s not many of them, and each one is special and nurtured and getting new ideas is a hard thing to do”.

This was compared to what we do: “We feel our ideas are like balls. We generate them, we toss them into the ring for people to observe and comment on. They’re cheap and cheerful and colourful and we know there is a bucket of them we can just keep getting new ones from. Sure, some are special and different in their own way, but the ideas are tossed away from our selves, and criticism of the size and colour of the balls is clearly not directed at the person.”

I don’t want people to think that James, Simon and I are reckless, or foolhardy, or don’t care about our ideas. There’s often very heated debate about our thoughts, our dreams, our visions (and our fears) when we engage in these conversations. It’s just that we realise that our ideas have a life of their own, and it’s our job to bring them to life – we’re the parent of those ideas. We’re not part of the ideas.

If you’re an aspiring artist, a software designer, a poet, an author – or even just somebody trying to work out where to go for lunch – then consider setting your ideas free. Toss them away and give them a life of their own. You’ve already done the important work in the communication. You can’t be held responsible for how others react to your ideas, any more than you can be held responsible for other people liking your choice in bikes (even though there is a clear right answer here). More importantly, by giving life and freedom to your ideas, you’re making one very important fact clear: you are not your ideas.

(*) I can’t remember exactly what was said, so I’m going to make up the story to convey the intent.

]]>https://joneaves.wordpress.com/2014/11/20/you-are-not-your-ideas-a-strategy-to-lessen-the-blow-of-rejection/feed/1joneavesYou say you want a revolution..https://joneaves.wordpress.com/2014/09/17/you-say-you-want-a-revolution/
Wed, 17 Sep 2014 11:00:28 +0000http://joneaves.wordpress.com/?p=259“Well, you know. We all want to change the world”

My title on LinkedIn is “Reluctant Architect”. This should not be considered a reflection about how I feel about my job, the company I work at, or the work that I do. It’s more of a reflection about what the rest of the industry thinks of an Architect in the software sense.

Basically, the term has been completely hijacked and its use is completely wrong. For the most part, an architect in the computing industry is some old white guy who’s completely forgotten anything about software development (if they ever did any) and spends all day writing PowerPoint presentations on choosing WebSphere vs Weblogic. This is then inflicted on the organisation like stone tablets being handed down from the mountain.

I can’t find enough words to describe how much I disagree with the concept, the implementation and the horrors that are perpetrated by organisations that follow this model. It is the ultimate in disempowerment, it actively discourages teams from learning and puts power in the hands of people least capable of using it effectively.

So, while I’ve given a roasting to the way my industry has traditionally handled architecture, how, and what do I do differently?

My background is software development, and I’ve spent a lot of years working with, and mentoring, teams, but without a doubt the biggest influence on my recent career has been becoming a parent. I have the most wonderful 8-year-old boy, who has the same level of enthusiasm for life as his father, along with the youthful confidence that everything he knows is right. At this point, you have to transition from “telling” to “experiencing”. No amount of me telling George that “eating that many lollies will make you feel sick” would convince him. So, short of doing things that would actually (or likely) kill him, I encourage him to safely explore his boundaries. Quite often there is joy in the discovery, and quite often there are cuddles and comforting words after a “learning experience”.

So, being an architect, and a reluctant one at that.

(From dictionary.com) 1550s, from Middle French architecte, from Latin architectus, from Greek arkhitekton “master builder, director of works,” from arkhi- “chief” (see archon) + tekton “builder, carpenter”. An Old English word for it was heahcræftiga “high-crafter.”

This pretty much sums up what I feel about the role in general. I am an old white guy with many scars from building systems the wrong way, or seeing other teams build things the wrong way and I wasn’t quick enough to help them. I try very hard to build relationships with the technical staff across the organisation so I can influence their approaches and thinking without needing to actually tell them “do it this way”. This sounds all a bit unicorn and rainbows and holding hands summer-time walks on the beach, but I’d say there’s very few people at REA that have any doubt about my position on various topics, and what my likely response is if they test those boundaries.

Specifically, what does this look like in practice? Glad you asked! I’ll outline the process that I go through (and have done) at REA focussing on architectural change.

The history

When I joined REA nearly 4 years ago, there was a small number of large applications, there was strong coupling and releases were painful. We were tied strongly to data centres with applications running on racked tin. Applications made many assumptions about how close they were to each other (latency, for example). Control of applications was “tier” based (back-end vs front-end) and there was contention across the organisation for product releases.

The strategy

Working with Rich, the main goal was to structure the organisation to allow faster releases and to improve the quality of the systems (reduce coupling) to make this possible. There was heavy investment in cloud computing (using Amazon) as the means to reduce contention in development and testing, while still having a pathway to multiple production deployment environments controlled by business-aligned organisational structures (we call them “Lines of Business”).

The vision

A dashboard for each Line of Business that shows all cloud providers in the world, their latencies and suitability for applications, including cost. The teams are able to deploy services across the globe, according to data transfer requirements, latency rules and follow the “cheapest” computing and storage options.

Yeah, something like that, but less Neo and more graphs.

The direction

We need our monolithic, coupled applications split so that each Line of Business can deploy them independently. Our operational staff need visibility into the health of the applications without actually knowing where they are. The systems need to support increased speed of delivery of new functionality for each of the Lines of Business.

The final attribute is considered one of the driving reasons for these changes – so I’m going to focus on it in future sections. However, at this point most of the work that I do is making sure the technical leaders for the various Lines of Business understand the vision and the direction without interfering too much in the actual implementation.

There’s also a lot more to the ongoing strategy involved, but that’s probably another topic for another time.

The design

I strongly value autonomy and self-discovery by the teams. I think learning by doing is the most powerful approach and Open Source ecosystems have shown that the mutations from different development approaches (generally) improve the state of the art as the development teams learn from previous implementations.

In terms of the design of “the architectural direction and improvements” I’ll explain how I’m influencing the understanding and behaviour around application deployment, modularity and most importantly monitoring and fault tolerance.

I realise that “make application deployment, modularity etc, etc better” isn’t a desirable directive, because it’s not very useful and because in many cases people don’t have a clear idea what “better” is. For developers especially many of these concepts are quite foreign, so what I aim for is smaller fine grained directives that help to provide some gentle prodding for exploration in the right areas.

By doing this, what I’m trying to get teams to “work through” is the potential difficulties involved in implementing some of the architectural improvements in their specific contexts. If I actually knew the answers I’d probably work with the teams directly, but I rarely know “exactly how” teams should implement things. I’m blessed by being surrounded by an awesome group of developers and technical specialists that are very capable of implementing improvements in their own contexts, my role is to show them the path to take.

The implementation

Taking the example of “improve modularity and decoupling”. What is needed under these circumstances is independent services. However, a key part of the total system improvements, especially when relating to multiple independent systems is monitoring and fault tolerance (and investigation). REA Group use AWS for many of our system deployments, so some of this is ‘more important’ than dealing with racked tin, but the same principles should apply.

So, now we think a bit. What can I do at this point, and what principles can I impose on the teams to move in the right direction? One of the most expensive parts of software is the operation of running systems. Most of this is because the monitoring, logging and debugging tools are “left as afterthoughts”. I could say “make sure all systems have monitoring, logging and debugging according to the checklist defined in subsection 42, document 27b-6”. That sort of directive would sound familiar to many people, and is pretty much everything I despise about “enterprise architects”. Instead, I went with a single blunt principle: nobody logs into production instances – sshd doesn’t even get started.

To say the response was incendiary would possibly be an understatement. Nomex underwear is standard issue for my job, but it’s very interesting to see how often it’s needed. The other thing that interested me was what roles gave what responses.

For the most part, experienced ops people (hi Cos, love your work) saw through my facade and knew what was up. They’re also generally used to working in constrained environments, and as a result have a huge toolbox to still effectively do their work. The other good news is that these wonderful people also become great advocates for improvement, because most of the burden of running systems falls in their laps.

Developers are predictably “y u no luv me”, because their main focus is to develop and deploy rapidly, debug issues locally and repeat. There’s probably a good reason for relaxing some of these principles during development, but as I will keep repeating, unless the developers feel the pain themselves, it’s unlikely that changes are going to be made. All that means is that the operations team gets sad.

Why did I choose that particular course of action?

Well, it’s pretty controversial, so there’s lots of talk (and complaining), so people communicate with each other about how terribly unreasonable I am, and how I don’t understand “the reality” of software development. It’s visible and easy to do (don’t start sshd, easy) and should it turn out to be a retrograde step, it’s easy to change back.

The other benefit we see from this is that our systems start to become immutable – a property that I find particularly valuable in coding, and it transfers nicely to system components as well. This is a great thing in AWS land because we can just shoot the cattle and launch another one, and I know that nobody has been on the box “fixing” it.

By not being able to log into the box, we have to think hard about how we log and monitor our system components, especially important things like tracing transactions through the system. What sort of identifiers do we use? What do our components do under failure conditions – dropped network connections, timeouts and so on?
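To make the transaction-tracing idea concrete, here’s a minimal sketch (my illustration, not REA’s actual tooling – the header name, log format and class are all made up): a correlation id is minted at the edge of the system, honoured if a caller already supplied one, and stamped onto every log line so a transaction can be reconstructed from aggregated logs without ever logging into a box.

```java
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch: correlation ids for tracing transactions across
// services when nobody can ssh into the instances.
public class CorrelationId {
    public static final String HEADER = "X-Correlation-Id";

    // Honour an id supplied by the caller; mint one at the system edge
    // if this is the first hop.
    public static String fromHeaders(Map<String, String> headers) {
        String id = headers.get(HEADER);
        return (id == null || id.isEmpty()) ? UUID.randomUUID().toString() : id;
    }

    // Every log line carries the id, so searching the aggregated logs
    // for one id reconstructs the transaction's path through the system.
    public static String logLine(String correlationId, String component, String message) {
        return String.format("cid=%s component=%s msg=%s", correlationId, component, message);
    }
}
```

Each service then passes the id along in the same header on outbound calls, and the failure-condition questions above become answerable by searching the logs for a single id.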

The aftermath

There’s a school of thought that I should carefully explain the reasoning behind my decisions so it’s clear to everybody, and there is limited “heat and light”. There may be some merit to that, but my role is not that of a dictator; it’s that of an educator and a communication enabler. I don’t mind if the teams are all plotting to subvert me, or even getting together to bitch about how unreasonable I am – the point is, they’re talking about it, with each other. That’s a big win. I love watching our internal IRC when somebody proposes “how about I use a bastion box to get to the instances” and there tends to be a few comments like “don’t say that, you’ll make Jon angry”, or “shhhh, don’t say it out loud”. That’s fine. It means that people are paying attention, and even tongue-in-cheek comments like that make me feel like a difference is being made.

The second part is that I’m not always sure that I’m right. Sometimes I just go with “this seems like a good idea”. Like parenting with George, provided nobody is going to die (or projects fail spectacularly) then making these sorts of decisions and directions will gain valuable insight into our projects and systems, even if we start to go in the wrong direction.

The astute readers here (well, let’s face it, if you’re reading my blog, you’re probably already astute) will notice that I’ve only described a very thin slice of the implementation. Yes, that’s true. This is a big thing, it’s a long term view and to be honest, it’s sometimes disheartening to have to wait. It’s worth the wait; you just need to hold firm and be confident that what you’re doing is the right thing. So don’t be confused into thinking the descriptions above cover all that is needed, even from a pure “this is the direction” point of view. There are probably 20-30 separate parts of the implementation being influenced at any point in time.

I’m looking for long term change with the work I do. Not short term fixes. I want teams to participate in this journey and not just be told what it looks like. There’s also a significant cultural change that forms part of what I’m aiming for. People do get stuck “thinking the same way” and it’s my role to encourage them to think in ways that lead to better-constructed systems.

I hope this turned out to be of value to some people, I’m happy to discuss it further in public, or private emails. I’m very happy to help organisations understand how architecture can be better integrated with the development and operation of systems.

]]>joneavesmatrix_architectMicro services, what even are they?https://joneaves.wordpress.com/2014/08/29/micro-services-what-even-are-they/
Thu, 28 Aug 2014 22:28:56 +0000http://joneaves.wordpress.com/?p=251This blog post was inspired by Jonathan Ferguson (@jonoabroad on Twitter) where the exchange started.

All the Twitters

@jonoabroad: “Does anyone have an agreed term of what micro services is?”
@joneaves: “Does it need one?”
@jonoabroad: “yes. How is it any different to SOA?”

At this point, 140 characters was just going to make things harder so I suggested I’d respond with a blog post. So here it is.

Firstly, I’m going to start by saying I’ve probably got no right to be leading the charge for a definition for micro services, but I do have a lot of skin in this game, as it’s the direction that I’ve been pushing REA development for the past 2-3 years. Much of this is my personal perspective, but I do think it’s broadly applicable and does provide what I consider an alternate viewpoint on the vision for micro services that exist.

To answer Jonathan’s second question “How is it any different to SOA?”, my immediate response is “the intent is different”. With SOA, the intent is a layered architecture of co-operating services where SOA focuses on describing the organisation and co-ordination of the services. With micro services, the intent is to describe the nature of the services themselves and not quite so much the organisation and co-ordination of them.

While SOA is used as a comparison, SOA itself has no “one true definition” but merely a collection of patterns/principles and attributes regarding the organisation and co-ordination between services. I should point out that I see micro services and SOA working well together, with micro services describing attributes of the services themselves and SOA providing useful guidance on how to arrange them.

So, why do I think this way?

I’m a software developer, designer and architect. I like to think a lot about the human factors of software development and how I can put systems in place to encourage development teams to “do the right thing” when building software. There’s far too much shit software out there, and I like to have teams not contribute to that. With that in mind, why did I think micro services were a “good approach”? My definition is meant to be used to guide _development_. The benefits we get operationally are wonderful – but that’s not the primary reason. It’s to get developers to stop building Borgified software with unclear responsibilities and brittle coupling.

First it’s probably worth providing my definition of what a micro service is, so that there’s at least some context around the discussions that may, or may not ensue. After defining the attributes, I’ll expand on why I consider them important.

Desirable attributes of a micro service are:

The responsibility is narrow. The service does one thing, and one thing well.

The code base is small. The service can be rewritten and redeployed in 2 weeks.

There is no 3.

I tried to think of more, but most of them were derived from these. A valuable attribute is the ease of upgrade and redeployment. This is directly related to #1. Another valuable attribute is the ease of change. Both #1 and #2 provide support here. There is also the ability for services to be re-used effectively. This is related to #1. A person much smarter than I am once said “The unit of reuse is the unit of release”.

There’s possibly some rambly hipster crap about “REST services” and “HATEOAS” but really, that’s such flavour of the month and not really something that I think is that important. Certainly no more interesting than JSON vs XML vs ASN.1. All of these things can be done well, or badly – but they don’t provide a defining point on whether an implementation has desirable attributes.

The responsibility is narrow

This key point relates to design and the fundamental architectural principles. If the responsibility is narrow, then hopefully it follows that the codebase will be small. If the responsibility is narrow, then the understanding of where to make changes is clearer and design intent can be carried forward. If the responsibility is narrow, then understanding how the service fits in the broader network of services, or how the service can be reused is much clearer.

The second important part here is the ability to release the services often, cheaply and without needing to have a deep graph of dependencies. Having a narrow responsibility means that any systems that want to use the services are only coupled to that service for that responsibility. There’s no undesirable coupling.

Like object oriented software, services are best with high cohesion and low coupling. Creating services as micro services helps in this regard.
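As a toy illustration of what a narrow responsibility looks like at the boundary (the service name, method and data are invented for this sketch, not an REA system): the entire public surface is one question, so consumers couple to that single responsibility and nothing else, and the implementation behind it can be rewritten or redeployed freely.

```java
import java.util.Map;

// Invented example: a service whose whole job is answering one question.
public class ListingAddressService {
    private final Map<String, String> addressesByListingId;

    public ListingAddressService(Map<String, String> addressesByListingId) {
        // In-memory stand-in for whatever backing store a real service would use.
        this.addressesByListingId = addressesByListingId;
    }

    // The entire public surface: one narrow responsibility, done well.
    public String resolve(String listingId) {
        return addressesByListingId.getOrDefault(listingId, "UNKNOWN");
    }
}
```

Anything else – caching policy, search, billing – belongs in its own service rather than being bolted onto this one, which is exactly the high-cohesion, low-coupling property described above.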

The code base is small

When I first started proposing micro services I wanted to appeal to the developers, so I said that services could be written in any language they chose. The only caveats were that the component had to conform to our monitoring and logging interfaces (to aid with deployment and operations) and that it could be re-written in 2 weeks.

This created significant consternation – not from developers, but from management. They were concerned about an “explosion of software that nobody could understand”. I did laugh while explaining my reasoning. I laughed mostly because the basis of their concern was that “it would take too long”. Sadly this shows the lack of understanding about software that pervades our industry.

Most developers are perfectly capable of understanding new syntax, and generally can understand new syntax in a relatively short period of time. What takes much, much longer is understanding twisted and tortured domain logic, scattered across 6 packages and libraries all bundled together in one monolithic application.

My rationale is that if software is written according to the simple rules (narrow responsibility and small codebase) then the actual language choice is for the most part irrelevant in terms of defect fixing and extension. Sadly, I don’t have a lot of data points in this regard, as developers seem to want to choose the path of least resistance (which is normally to keep writing the same old shit in the same way), but I do have a great example written by my team.

We had the need to write a service and one of the team wrote it in Go. It was working well, performed as expected and when it came to adding some additional monitoring we hit a snag because the Go runtime wasn’t supported by NewRelic. The developer who wrote it had sadly departed the team (I still miss you Eric!) so another team member re-wrote the service and had it redeployed in 2 weeks. Written in Java, using Dropwizard. It was a perfect example of exactly what I was proposing.

There are some really useful patterns that we developed while creating the service, not really suitable for addition here, but if there is enough interest I can expand on it in another post. However, the way we thought about building the initial service and more importantly the automated testing around that service made re-development trivial, and incredibly safe.
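As a hedged sketch of the style of check that makes such a rewrite safe (the endpoints and names here are hypothetical, not the actual REA suite): the tests assert only on behaviour observable at the service boundary – status codes and body shape – never on internals, so the Go original and the Java rewrite must both pass the same suite unchanged.

```java
// Hypothetical black-box contract checks, expressed as predicates over
// an HTTP response. In a real suite these would be fed by live calls to
// whichever implementation happens to be deployed.
public class ServiceContract {
    // The shared monitoring interface: /status returns 200 with a body
    // starting "OK" (followed by a version token).
    public static boolean statusOk(int httpStatus, String body) {
        return httpStatus == 200 && body != null && body.startsWith("OK");
    }

    // The business endpoint's contract: a lookup either succeeds with a
    // non-empty payload or fails cleanly with 404 - nothing in between.
    public static boolean lookupContractHolds(int httpStatus, String body) {
        if (httpStatus == 200) {
            return body != null && !body.isEmpty();
        }
        return httpStatus == 404;
    }
}
```

Because nothing here mentions Go, Java or Dropwizard, the suite survives a wholesale reimplementation – which is what made the two-week rewrite trivial and incredibly safe.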