A Million Code Monkeys (https://amillioncodemonkeys.com)
Because coding isn’t yet evolved
State of the Nation
https://amillioncodemonkeys.com/2016/05/07/state-of-the-nation/
Fri, 06 May 2016

Well, it’s been more than a whole year since we last spoke – can you believe that? I guess it’s time for a round up.

So what have I been up to? Well, last time, I was telling you how I left my first job. Since then, I’ve also left my second. Yes, I know. It was a bit of a misstep on my part, but I learned an awful lot about how to interview a company, and as a result I am perfectly happy where I am now.

Moving back half a step – I want to explain why I left my other job. It seemed so full of promise – I was getting paid roughly one and a half times the salary for slightly less responsibility, doing what I thought was going to be more or less the same thing. I was wrong. So very, very wrong. It turns out there was a miscommunication. There was a small portion of ASP.NET (which is what I was primarily looking for), but it was WebForms, on .NET v3.0 (we didn’t even have LINQ!). I didn’t realise WebForms even fell under the ASP.NET umbrella until then. And the core product that I had been hired to work on was, again, .NET v3.0, and WinForms this time (see a pattern here?). Yes, out of necessity, I can now do a bunch more with WinForms than I could before. But it was so awful.

Theoretically, the code was structured as MVP, except it had one fatal flaw: there was a static dependency in the constructor of the base Presenter that we weren’t allowed to refactor, because it was part of the security system, and the whole product’s security would need to be retested because of it. Manually retested, I mean, because this particular design flaw meant that unit tests were near impossible. Let that sink in for a second. Unit tests were near impossible on a Presenter. Can anyone tell me one of the major reasons we go to such lengths to structure our code so that the logic isn’t in the View? That’s right: it’s because Views are notoriously difficult and fragile to test. A couple of us were quite unhappy with that situation and started extracting our logic into “helper” classes, or even dumping it directly into the View, because, somewhat paradoxically, both were easier to unit test. Of course, the boss was unhappy with this sort of pattern emerging and we were discouraged from doing it.
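To make the testability problem concrete, here is a hedged sketch in JavaScript (not the actual C#/WinForms code – all class and method names are hypothetical): a base Presenter that calls a static security object in its constructor can’t be constructed in a unit test, while one that takes the dependency as a constructor argument can simply be handed a fake.

```javascript
// Hypothetical illustration -- not the real product's code.
// A static dependency in the constructor means every test drags in
// the real security system, so the presenter can't be built in isolation.
class SecuritySystem {
  static check(user) {
    throw new Error('talks to the real security back end');
  }
}

class UntestablePresenter {
  constructor() {
    // Baked-in static call: a unit test cannot intercept this.
    this.allowed = SecuritySystem.check('currentUser');
  }
}

// The standard fix: inject the dependency, so a test can substitute a fake.
class TestablePresenter {
  constructor(securityService) {
    this.security = securityService;
  }
  canSave(user) {
    return this.security.check(user);
  }
}

// A hand-rolled fake stands in for the security system in a unit test.
const fakeSecurity = { check: (user) => user === 'alice' };
const presenter = new TestablePresenter(fakeSecurity);
console.log(presenter.canSave('alice')); // true
console.log(presenter.canSave('bob'));   // false
```

The refactoring itself is tiny; the cost described above was the mandated manual re-test of the whole security system, not the code change.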
It was that systemic style of thinking, plus the “everything stops for six weeks when we release, because the whole app needs to be manually tested” (and inevitably fixed) problem – which they couldn’t see would be greatly mitigated by allowing even the most basic unit testing – that made me want to get out of there as soon as possible.

What did I learn about interviewing at a new company? Trust your instincts. There were a couple of moments in the interview process that made me somewhat uncomfortable for the level I was being hired at. Notably, they gave me a written C# test. A. Written. Test. For an intermediate position (I shouldn’t have accepted anything less than senior, but ignore that). It wasn’t coding to see how I solved a problem; it was literally a checklist of C# feature knowledge. In comparison, the company I am at now gave me a problem on a piece of paper and asked me to have a think about it, produce a design for a solution, and come and talk about it. They are one of those vanishingly rare companies who hire comfortable in the knowledge that a half-way decent developer will pick up the languages and technologies as they go.

There was also a moment during the interviews for the “old” job where I asked to meet the potential team I would be working with – a totally reasonable request – and I was very skillfully brushed aside. So skillfully that I didn’t notice until I got home, but by then I had already been sold on the business. Never mind.

Ok, maybe I’ve blathered on about that particular part of my career long enough. It only lasted 6 months in the end – I made myself wait 3 months before I started looking elsewhere; it took 2 months to find a new job, and then there was 1 month of notice period. I’ve now been at my “new” job for more than one and a half times my entire tenure at the “old” one, and it still seems a much shorter time.

So, about my new job. It’s all a bit weird – I have become one of the things I swore I never would: a Web Developer. I live in ES2015+ (new JavaScript, for the layperson) much of the time, writing a heavy-weight front end that loads just shy of 2MB before it does anything useful. When I say heavy-weight, I mean relative to my own perception of how much code should be loaded over the network to run a site; it’s actually positively average. This being a software blog, I’ll share some of the gory details:

I’ve learned an awful lot about JavaScript, browsers, CSS and even some of the more intricate features of recent versions of SQLServer. I’ve had to learn so much about Aurelia (often by reading the source code) that I frequently hang out in the Gitter channel helping other people out. Often learning things myself just by being there.

Which brings me to the other thing I like about this job – it actively encourages Open Source use, to the point where I have produced a plugin for the Aurelia framework. And people actually use it!! The company itself has a couple of plugins which it has open-sourced, but I have no idea whether people are using them – no one is raising issues or pull requests. It doesn’t matter; we use them extensively.

That’s not to say this new place doesn’t have its own problems. I’m not wandering around with permanent rose-tinted glasses. Unfortunately I can’t go into those details, on account of most of them being commercially sensitive. Maybe once this current kerfuffle has blown over.

Also, personally this has been an exciting time for me – I’ve bought a house, adopted a kitten, and got engaged (she takes up the spare time I would have otherwise spent on blogging – if you want to complain that I don’t write much anymore, please direct all correspondence to her). But again, software blog, so I’ll leave it at that.

Michael’s Handy Guide to Leaving Your First Job
https://amillioncodemonkeys.com/2015/03/15/michaels-handy-guide-to-leaving-your-first-job/
Sat, 14 Mar 2015

Those of you who know me will be aware that, at the end of last year, I left the first job I got after I graduated. I had been in that job almost exactly 6½ years, and leaving it was more difficult than I imagined it would be. The people in the strongest position to provide a sounding board on your reasoning are often your colleagues, though they’re also often the last people with whom you want to broach the topic. I’ve written down some of my experiences in the hope that they might aid some people.

First things first: know why you want to leave. Write it down; make it more than a vague idea in your head. Your reasons might seem really strong until you write them down. They might even be fixable within your current company; so while I’m not necessarily attempting to talk you out of leaving, if you’re already happy except for one or two things, maybe leaving isn’t a great idea.

Secondly, and probably more importantly, know why you stayed as long as you did. Maybe you don’t know how to answer that question – I certainly didn’t. It was my first job since I graduated from university, and everything I knew about the working conditions of the industry was from that job, or from what Some Guy On The Internet said. I can give you my reasons now, though, because being on the other side has thrown them into stark relief for me.

I liked that what I was doing was being, for the most part, used for good. We made two-way radios, the network equipment for them and software to support their maintenance. We made good radios; good enough to be used by emergency services the world over. They, by and large, were not sold to military organisations and usually were used to make the world a little bit better. These radios were even used by the emergency organisations who helped deal with the major earthquake in our city in February 2011. Our products were helping products.

It was technically challenging, with a wider variety than one would normally expect from a single company. During my stay at the company I worked on hardware drivers, embedded software, a fake embedded environment targeting desktop Linux, distributed build automation software, and desktop applications in .NET, right through to the whole vertical of a modern Web Application (SQL, .NET, HTML + JavaScript). And in the couple of months leading up to my departure I was working at almost all of those layers simultaneously, trying to integrate them. There are not many companies in the whole world, I would wager, where that variety is possible.

Great people – my immediate team in particular. There were some exceptions, but I don’t want to go into them here, particularly as this is the thing least under your control: there will be people you don’t get on with most places you go.

So now you have it in your head that you do want to leave, and you know why; it’s time to start looking for new work. Knowing what you desire in a company will help you look. Maybe you’re lucky enough to have already heard about a job through the grapevine – that’s probably your best option in terms of interviews. You get to talk casually to the person who mentioned it without feeling like you’re going to scare them off offering you the job. I know this is technically true of more traditional interviews too, but the mindset is certainly different. The most interesting jobs are usually found that way, too.

Whatever you think you know, your interview skills probably suck. Or they are only good for a different time in your career, when maybe you cared more about a job in the right industry than about the right job for you in particular. Take any interview you can get, to begin with. I had applied for a job at a company I liked, even though the job itself didn’t look so interesting. I went along to that interview and gave the best interview of my life (including the one which eventually landed me the job I’m at now). I didn’t end up working for that company, but that confidence boost really helped.

Again, I want to stress: know what you want in a company. Don’t settle for “We do Agile” – ask how they do Agile. Do they do any of the XP practices? Exactly what did they mean by the different technologies they use? How long have they been doing it? Do the words they are using mean the same thing you think they do? Are they willing to give you a bit of a tour of the application or the working environment? Are you able to sit down with them to do some coding? What god-awful enterprise software will you have to use on a daily basis? Are they willing to shell out an extra $50 for a nice keyboard? You know how, during normal conversations, the phrase “what is this, an interrogation?” gets uttered as a defence… well, interviews are an interrogation.

So, presuming you’ve now landed the job of your dreams, be aware that the notice period at your old job will be more difficult than you realise. On the day I handed in my resignation, I totally ran out of emotional energy. Telling people that you’re leaving, and seeing that look in their eyes that says they feel a little bit abandoned, is kind of like telling off a puppy. It’s best for everyone if you do it, but that doesn’t make it any less difficult. So at the end of the day, when someone with whom I would normally be happy to have a laugh said, with her tongue firmly planted in her cheek, “what a dick”, I had no response. Thankfully she was able to read the situation, and possibly cottoned on to the fact that I’d spent the last wee while just trying not to cry.

And as was also said to me: “There is no good time to leave, so don’t feel guilty about it”.

My semi-last day was pretty bad. It wasn’t my last day per se, but I left at the end of the year, so some people finished up for the year before my last day. I hadn’t considered that as a possibility, so I wasn’t prepared to say goodbye to some of those people yet. I took a couple of quiet walks that day in an attempt to hold it together enough to keep getting the jobs I wanted done before I left. I certainly stretched the definition of holding it together.

So you’re starting the new job. All going well, you’ll get your hand held for the first week or so and you’ll start working and getting stuff done.

But for me, all did not go well. My first week was awful. I felt like I was having a week-long anxiety attack and nearly couldn’t bring myself to go to work on the Friday. Everything had changed – the amount of traffic I needed to wade through just to turn up in the mornings, what I could have for lunch, the people, the work, the process, the god-awful enterprise software, the operating system version and even the keyboard. I had picked up a fairly easy-looking task to do that week, but I had somehow expanded the scope of the story to make future work easier. In a normal situation, I would have said this was a good thing to do, but one’s first week on the job is not a normal situation. I had taken a job with a fairly large pay increase and spent a week doing what should have been a relatively small piece of work. Just get something reasonable done and get it checked in. Make it feel like you’re actually contributing, rather than like some fraud who can’t tie his own shoelaces. That extra work is paying off now, but it wasn’t a fair trade-off once my emotional well-being is counted as part of the equation.

So really, what I wanted to say is that leaving your first job is going to be much harder than you imagined, but it is worth it, in part for the learning experience alone. After throwing myself out of the situation where I was surrounded by the people who taught me the ABCs (or maybe the SOLIDs) of software development, I was suddenly able to see more clearly some of the intrinsic features of their code, and the features of the old environment which are either different or missing in the new one. Also: don’t change every other major feature in your life at the same time, or you won’t have the energy to produce even a simple blog post and will dither on it for roughly a month longer than it really deserves.

This is what a feminist looks like
https://amillioncodemonkeys.com/2014/08/05/287/
Tue, 05 Aug 2014
So much of what the word feminist conjures up for most people is totally wrong. Feminism is not about hating or even disparaging men. Feminism is not about treating women as superior beings; it is quite simply about equality. I’m not someone who believes that men are inherently evil and a plague that needs stamping out, but rather that women and men should be able to co-exist and be cherished for their differences, not excluded or diminished for them. One thing the more astute of you will have noticed is that I’m a man, and yet I call myself something which the general public predominantly sees as the exclusive reserve of those without a Y chromosome. This is not only wrong, but indeed part of the problem. The word feminist is purely descriptive. As Aziz Ansari recently said:

“So, I feel like if you do believe that, if you believe that men and women have equal rights, if someone asks if you’re feminist, you have to say yes because that is how words work,” he says, joking, “You can’t be like, ‘Oh yeah I’m a doctor that primarily does diseases of the skin.’ Oh, so you’re a dermatologist? ‘Oh no, that’s way too aggressive of a word! No no not at all not at all.’”

There are some who think that it’s only weak men who announce themselves as feminists. I can tell you that this is also not true. Many men with viewpoints similar to mine will try to avoid the awkwardness by watering down their position and only calling themselves ‘feminist allies’, furthering the myth that only women can be true feminists. It takes a strong person to stand up for what they believe in, regardless of who they are, so no, I don’t buy that only weak men are feminists.

Two of the women I have dated shared with me that they were raped. Neither of them said anything to any authority figure – particularly not their parents – with one of them saying she believed the revelation would make her father think he had failed as a parent, because of the actions of someone else entirely while she was walking home from school. So I can absolutely believe that the estimates of unreported sexual assaults are correct, if not conservative. The number of women I have dated is not much higher than that, which makes it a frightening percentage of the women I’ve known intimately who’ve experienced such horrific circumstances. I thought that maybe, because I generally try not to be an asshole towards women, I had attracted a disproportionate number of women who had experienced sexual assault. That was, until I read the posts under the #yesallwomen hashtag. That was when I learned that it is not just incredibly common, but that #yesallwomen have experienced something of the like. This. This is why I am a feminist.

I have often struggled during my life – I have to fight anxiety and depression nearly constantly (a fight that, I am proud to say, I’m winning) – but really, I have gone through life on easy mode. I am a straight, white, middle-class male with no religious affiliations. In short, no one persecutes me just for being me. So if I have struggled with life even though half the world automatically gives me an advantage, based on nothing but my genetics, how do women deal with such things?

The current trend of posting placards about why you don’t need feminism makes me sad for all kinds of reasons. But first: I am happy that the women posting have the ability and the nerve to voice their opinions publicly, even if I don’t agree with many of them, nor think many include much research or understanding. They seem to be based on the misconception that feminism and having healthy relationships with men are incompatible. Whereas to me, that’s entirely what feminism is: it’s about having healthy relationships with everyone. Treat everyone equally and it’ll all work out in the end, right?

Right now, given that this is a software blog, I bet you’re wondering where the software content is, right? Actually, you’ve probably already guessed. There is an overwhelming gender imbalance in software. It’s not because males are inherently better at it, but because we, as the wider software community, tend to discourage women from being part of our club. We do this in a bunch of different ways, some subconscious, but a large number not. I don’t want to list them, because it can all be summed up by Wil Wheaton’s philosophy of “Don’t be a dick”, and then things will work themselves out.

But more important were the events involving @SeriousPony, and how she and her children were targeted just because she was a prominent woman in an overly male industry. I wouldn’t be able to do it justice if I attempted to recount the events here, but they are truly shocking and terrible. She suggested she would remove the post soon, but I recommend reading it, if it’s still around. I’m not even outraged at her treatment; I’ve gone way beyond that and shifted squarely into total despair for all of humanity, if even a small portion of it is capable of such endeavours. So yes, I am a feminist, and I think you should be too. Not all women are equals… yet.

My Expertise
https://amillioncodemonkeys.com/2014/07/01/my-expertise/
Tue, 01 Jul 2014

When I was one I had just begun
When I was two I was nearly new

When I was three I was hardly me
When I was four I was not much more

When I was five I was just alive
But now I am six, I’m as clever as clever;
So I think I’ll be six now for ever and ever.[1]

I’m fast approaching six years as a professional software engineer. A quick back-of-the-envelope calculation suggests that I’ve worked some 11,820 hours, and I had been becoming a software engineer for four years before that – the start of 2004 was when I first started university. According to Malcolm Gladwell’s definition, I should be an expert by now.
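For the curious, the envelope maths works out if you assume roughly 40-hour weeks over a little more than 49 working weeks a year (both assumptions mine, not stated in the post):

```javascript
// Back-of-the-envelope check on the 11,820-hour figure.
// 49.25 working weeks/year and 40 hours/week are assumptions.
const years = 6;
const weeksPerYear = 49.25; // 52 weeks minus leave and public holidays
const hoursPerWeek = 40;
const totalHours = years * weeksPerYear * hoursPerWeek;
console.log(totalHours); // 11820
```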

But am I? I like to think of myself as better than average, but I’ve yet to even attain a ‘Senior’ decoration in my job title. Somehow that doesn’t gel – better than average, yet not a Senior with more than five years’ experience? Would having such a title make me somehow magically better at my job? Categorically not. But expert? That seems like such a lofty title.

Those of you who know me well will know that I’ve actually had an incredibly varied career in such a short space of time. It’s not that I have a short attention span or no ability to stick with a project (and it certainly isn’t because I’ve been shuffled around like the unwanted vegetables on a child’s plate), but I have a policy of grabbing every opportunity I can, even if it only half-interests me, because you never know where it will lead. And wow, has that policy led me places. Of course, I’m getting a little side-tracked, but I’ll come back and show you where I was going with that.

Firstly, we really should agree on what is meant by expert. I refuse to trot out a dictionary definition as though I were a high-school student using such a device for, well, for lack of a better word, effect. To me, an expert is someone who is as close to infallible as any human can be in a particular field. Someone you can go to, and if they don’t know the answer out of hand, they’re certainly able to go and find it. But that flies directly in the face of most of what we readily accept about Software Engineering. Software is a constant battle between the horrendous amount of complexity thrown at us and the need to deliver value to our customers in a timely manner.

So given the amount of energy we expend just trying to keep a lid on the complexity we face, how can any non-superhuman claim to even approach being an expert in such a field in the comparatively short period of just ten years?

Well, when I read a little further down than the first paragraph of the Wikipedia article on expertise, one of the things that really stood out to me was the notion of an expert having good intuition. I found it striking because I wrote a post a while back trying to tackle that subject[2]. I likened it to something we all have, because I was certain that the intuition I feel and attempt to foster is something all Software Professionals share. I am starting to worry that it’s not.

The stirring in my waters when a design is going off course has certainly become stronger of late, and I’ve seen first-hand when those messages don’t connect. They still misfire; I am still human, after all.

The thing is, I’ve never felt less like an expert in my entire journey. I frequently discover that some of the things I pick up would be over my head if I were going it alone. I know that, left to my own devices, I wouldn’t be able to complete the current application our Team is working on. I would certainly produce something, and given enough time I might even make it work, but I’m fairly sure many compromises would result. Well, many more compromises.

Which leaves me where? I’m still of the opinion that we’re all under-evolved monkeys when it comes to Software Engineering, since we haven’t begun to effectively manage the sheer complexity (and the growth of it) of the problems we create for ourselves. The best we can hope for is to recognise our own shortcomings, to surround ourselves with those who can help, and to have the wisdom to know when it’s appropriate to forge ahead and learn, or the good grace to ask for help. And the serenity to accept those whom we can’t change…

[1] For those of you who are troubled by the copyright legalities of reproducing A. A. Milne’s work in full, in New Zealand, from which I am posting this, it is totally lawful. The law here for literary works is life + 50 years. A. A. Milne died in 1956.

[2] I fumbled it and I’m not totally happy with it, which is why I’m not linking to it directly.

Agile Infrastructure
https://amillioncodemonkeys.com/2014/05/25/agile-infrastructure/
Sun, 25 May 2014

By now, you’ve probably figured out that agile is this new-agey style of software development. But agile is far more than just group hugs and stand-ups. By that I mean, agile is more than a project management system. Agile is a way of managing change and being able to respond quickly to it, whilst delivering as much value to the customer as possible.

How do we achieve that? It certainly isn’t possible just because we’ve planned our sprints. Agile goes all the way down to the code base (or starts at the code base, depending on how you look at it).

Scrum is a project management system which deliberately ignores the technical practices required to engineer good software. It tells you which item will deliver the next chunk of value to the customer. eXtreme Programming, or XP, is a set of technical practices which mostly ignores project management; its good engineering principles tell you how to produce quality software. Combining the two is very common: you then know what to work on next, and how to go about delivering it in a quality manner.

I like to call Scrum without the supporting technical practices “Hollow Scrum”, because I imagine the usual picture of Scrum wrapped around XP with the insides missing. Without the technical insides, agile will actually hurt your productivity. You will be lying to yourselves about “doing agile”, and you will start blaming agile for not delivering the promised gains in efficiency, even though it was you who broke the contract.

Over the years, we have developed and re-appropriated some tools to enable agile development and help you use the technical practices to maximum effect. Warning: it is entirely possible to use one or all of these tools and still not be agile. But I don’t believe it is possible to claim to be agile without this basic infrastructure. Your code and project will suffer greatly without it.

Fair warning: I am going to use the word “Mainline” a lot, because it’s the language we use at Tait. It is the equivalent of trunk, default or master, depending on which version control system you use.

Version Control

When working in an agile context, version control takes on a new role. One of the goals of agile is to release early and often. One cannot deliver often to many different customers without some sort of tool tracking exactly what code was in each release. Maintenance is something you probably haven’t needed to deal with too much yet. Right now, if you find a bug, you’ll just fix it and include that in the next release and everything’s fine.

But what happens when you find a major security flaw? How do you tell your customers whether they’re affected or not? How do you release to an extremely conservative customer who absolutely needs a particular bug fixed, but doesn’t want to take on the risk of any other development that you’ve done in the meantime? This is one of the battles we face every day at Tait. Most of our customers are very conservative with their software choices, so a large part of what we’re trying to do is solve their individual needs whilst keeping the number of active releases of software to a minimum.

As part of releasing our software, we have a couple of different branching strategies, all geared towards slightly different requirements. On the Terminals team, the 30-odd developers all pour their changes into the mainline, and they cut a release branch to stabilise in preparation for unleashing it on the world. This is because they want the speed of 30 developers’ worth of change for the upcoming release, whilst avoiding having the teams go off in completely different directions.

In the Applications team, my current team, we treat the Mainline as the permanent release branch and cut separate branches to do each sprint on. Once the sprint is delivered, it gets merged back to the Mainline and we start a new one. This means that we can release at a moment’s notice, pulling in carefully selected changes from the sprint branch as necessary.

Version control is important in helping us to achieve this. It also comes with these other benefits:

Gives you a restore point if you decide your current changes aren’t going to work (which should happen in a functioning agile team!)

Enforces and enables automatic source-level integration in a team, so you all end up with the same code at the end of the day.

Helps to show you when two people are stepping on each other’s toes. When a file conflicts frequently, we take a moment to see what can be done about it. It might be the version control giving us a design hint, or a sign that we need to change how we work. Either way, too many conflicts is often the thing that prompts the discussion.

Has branches for isolation: a Release branch, as previously mentioned, or a Feature branch, which isolates the development of a feature from the rest of the team. In my team, the latter is used only as a last resort, when the feature is likely to touch a large number of files and/or keep the build broken for an extended or unquantifiable length of time.

A log of decision-making in the commit comments. Someone on my team wrote “fixed a bug” as their commit message earlier this year. We named and shamed such behaviour. In two weeks’ time, when my fast-ageing memory has failed me, I don’t care that you fixed a bug – that’s written in your job description. I want to know why you fixed it, and what behavioural effects it might have on the code, so that when I’ve found a new bug later and I’m scrolling through the commit log in the hope of easily spotting the cause, I can discount your obviously perfect code.

Allows for searching of history

finding where a bug was introduced will often give clues for how we might fix it.

Can easily discover and talk to the original implementer of a particular feature about design considerations when trying to extend their work.

Can discover which versions of software are affected by certain bugs (you know, like heartbleed)
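Finding where a bug was introduced is exactly the job `hg bisect` (or `git bisect`) automates, and at its core it is just a binary search over history. A minimal sketch, with a made-up linear history and a predicate standing in for “build this revision and run the failing test”:

```javascript
// Binary search for the first "bad" revision in a linear history,
// the way `hg bisect` does. Assumes history flips good -> bad exactly once.
function bisect(revisions, isBad) {
  let lo = 0;                    // oldest revision, assumed good
  let hi = revisions.length - 1; // newest revision, known bad
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (isBad(revisions[mid])) {
      hi = mid;     // bug already present: first bad revision is mid or earlier
    } else {
      lo = mid + 1; // still good: first bad revision is after mid
    }
  }
  return revisions[lo];
}

// Hypothetical history where the bug slipped in at "r6".
const history = ['r1', 'r2', 'r3', 'r4', 'r5', 'r6', 'r7', 'r8'];
const firstBad = bisect(history, (rev) => Number(rev.slice(1)) >= 6);
console.log(firstBad); // "r6"
```

Eight revisions take three test runs instead of eight; across the thousands of commits in a 15-year-old code base, that logarithm is what makes the search practical.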

When I first joined Tait, we were using CVS, and it probably caused about as many problems as it solved. We had to write a bunch of wrapper scripts and a server-side database for checking in just to make it vaguely usable, but we did make it work for us. We’ve since moved on to Mercurial in the Terminals and Applications teams. If Tait can move to a modern version control system with a 15-year-old code base and custom infrastructure, without losing any history, then anyone can.

One Step Build

The One Step Build is a fairly simple concept: with only the click of the metaphorical go button, developers should be able to build the software in exactly the same way as it is built for deployment to the production environment.

Manual steps are bad. Humans are inherently prone to human error. If one step takes long enough for a dev to move on to something different, then there is a strong chance that they will forget it has happened and do it again. Or try to. Or break it by doing so. Or forget where they’re up to and miss a step. Or consciously skip a step.

Even if it’s just for productivity reasons, having fewer steps from building to running code is a good thing. Basically, if you’re spending any time on anything other than ‘hitting the metaphorical go button’, you’re burning time on unnecessary, error-prone tasks.

But it turns out the biggest reason comes down to pressure. When a release needs to go out the door yesterday then shortcuts will be taken. If that involves altering how you build and there’s a non-zero chance that you won’t get bit-for-bit identical builds out the other end, then the first thing you need to do before you release is start doing all your testing again, because the thing you’ve just produced is an unknown quantity. Your shortcut that saved you an hour just cost you two weeks and a lot of embarrassment when it didn’t work in some trivial way. Sometimes you’ll get lucky and everything will be okay, but humans are notoriously human. We forget things, we make mistakes. Let the machines do what they’re best at – following instructions to the letter each and every time.

When you are nearing your deadlines for this project, getting the build wrong and submitting something that is broken in obvious ways, because under release pressure you built it slightly differently from how you did while testing and developing, will reduce your mark (I hope!). Having your build automated is one less thing to stress about when releasing your software.

That was one of the things that surprised me the most when I first joined Tait. The Terminals team did all of their release builds in a manual process. In my first week I was handed an instruction manual on four A4 sides of paper detailing how to do a build for release. It took me just four times doing the weekly build before I had a script in place to do the boring bits for me. Cutting the ceremony out of doing a build meant that the barrier to entry was lowered significantly and releasing no longer hurt like it used to. The next time the Apco Team were about to release, they were able to get release candidates daily.
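That script is long gone, but the shape of it was nothing grander than a driver that runs each documented step in order and stops dead at the first failure. Here is a hypothetical sketch in Python, with `echo` placeholders standing in for the real build commands:

```python
import subprocess
import sys

# Hypothetical steps; a real project would substitute its documented
# build commands (checkout, compile, test, package, ...) for the echoes.
STEPS = [
    ("clean",   ["echo", "wiping the build directory"]),
    ("compile", ["echo", "compiling the sources"]),
    ("test",    ["echo", "running the unit tests"]),
    ("package", ["echo", "collecting the release artifacts"]),
]

def build():
    """Run every step in order; abort loudly on the first failure."""
    completed = []
    for name, command in STEPS:
        print(f"==> {name}")
        if subprocess.run(command).returncode != 0:
            sys.exit(f"build failed at step: {name}")
        completed.append(name)
    print("build OK")
    return completed

if __name__ == "__main__":
    build()
```

Once every step lives in one place like this, the four sides of A4 shrink to “run the script”, and releasing stops being a ceremony.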

But the one step build is slightly bigger than just a Larry Wall laziness feature. If the continuous integration shows that there is a problem, then always being confident that there is nothing special about the way the code is being built makes reproducing those issues a whole lot easier. I’m not saying it eliminates them, because there can be other environmental factors as well. I have written, and had to fix, code that ran perfectly fine on my box until it started running on the 12-core, multi-processor, massively parallel build server, where my threads really were running simultaneously, meaning the tiniest windows of opportunity were reliably failing. And if the code is only failing on the build server, it can often be very hard to fix efficiently, and the temptation is there to “just check in something quickly” and see if it works. That’s not ideal and will likely hold everyone else up. It’s a good idea to have a bit of a think and, if you’re not confident you can fix it quickly, back your change out; otherwise you’re denying the 29 other people the opportunity to be shown the error of their ways.

Continuous Integration

On the first slide, I showed you a bunch of technical practices which all support agile. Continuous integration is more or less the parts of that picture that can be automated.

Checking the latest round of code out of source control. This catches the case where you forgot to add a file, so that everyone else working on the code isn’t left with something that won’t even compile because of some silly error.

Building it in one step. Build servers by their very nature need to be automatic, so if you can get the build working for them end to end, then you can get it going for your developers too!

Reporting any failures and making them visible. If you’re building on every single check-in, then it’s very straightforward to tell who broke it. The tests all passed on the previous build and now they don’t. J’accuse!

Isolating defects in time. If unit tests are about isolating defects in space, or which area of the code, then isolating defects in time gives you an extra axis with which to track them down.
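Both of those last two points reduce to diffing the current run’s results against the previous build’s. This is a hypothetical sketch; the result lists and test names are invented, not any real CI product’s format:

```python
def new_failures(previous_failed, current_failed):
    """Tests that passed on the previous build but fail on this one."""
    return sorted(set(current_failed) - set(previous_failed))

# Build 41 was green; build 42 corresponds to exactly one check-in.
previous = []
current = ["test_login_rejects_bad_password"]

suspects = new_failures(previous, current)
# Anything in `suspects` appeared with build 42, so the check-in that
# triggered build 42 is where to start looking -- j'accuse!
```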

Continuous integration is the glue that holds together the rest of the agile infrastructure and makes things like TDD worthwhile. There is no point in writing tests if they never get run. Plus, the only way to run them properly is on a box that hasn’t been used for development. If there’s some environmental trick to getting the code to run, you can guarantee that every developer will have done it before lunchtime.

Oh, and then there’s that word integration. It turns out that writing code by yourself is relatively easy. Largely because you understand what you meant. Most of the time. But when you’re working with other people, you’re lucky if you understand what they mean even some of the time. When these mismatches of understanding occur in code, things don’t work as well as they should.

In a commercial setting, integration is a bit more complicated than textual conflicts in code. It’s also a problem of scale. There are a bunch of people, potentially multiple teams, working on the same product. They’re probably working in relatively separate areas of code, but at some point these two, or more likely, six, subsystems will have to function together to form said product. Integration is making sure that each function call works as the caller thinks it should, and that any side-effects are considered. Automated tests should explore these assumptions as best as they can, such that running them is a fairly decent indication of a correctly functioning program. Integration is hard[citation needed] because it tests our ability, as humans, to communicate.

Extreme Programming teaches us that if we find any particular task difficult, then we should do it more often or “Take it to the Extreme”. That way, if we’re agreeing with Larry Wall’s famous three virtues of a programmer, then we will soon find the pain points and codify them away. Therefore, we should integrate as often as possible, or continuously.

Whether or not you’re aware of it, every time you push code to a shared repository, you’re integrating. Your new code has to work with the existing code, so we should run the previously written tests along with the new tests that you’re sure to be checking in. This way, if some of your assumptions don’t hold or someone has changed the rules of a sub-system on you, then the continuous integration will tell you very early in the piece. You are never in a better position to fix the code you’re writing than while you’re still writing it. So as a second-best option, we’ll settle for minutes after you’ve finished. It’s still at least an order of magnitude better than even as far away as the end of the sprint.
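One way to picture the reaction to each push, sketched here with hypothetical callables rather than any real CI product’s API: build in one step, run the whole accumulated suite (old tests and new together), and shout the moment anything disagrees:

```python
def on_push(run_build, run_tests, notify):
    """React to a push: build in one step, run ALL the tests, report.

    run_build returns True on success; run_tests returns a list of
    failing test names; notify delivers the result to the team.
    These are stand-ins, not a real CI system's interface.
    """
    if not run_build():
        notify("build broken")
        return False
    failures = run_tests()
    if failures:
        notify(f"failing tests: {', '.join(failures)}")
        return False
    notify("green")
    return True

# Stub example: the build works, but an old test disagrees with the new code.
messages = []
on_push(lambda: True, lambda: ["test_existing_assumption"], messages.append)
```

The point is the timing: the report arrives minutes after the push, while the assumptions that broke are still in the author’s head.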

The extreme other end is where every developer has a personal branch and you all pile your changes in at the end. If you are on a project where that is the case, I suggest you bring a sleeping bag along when everyone tries to merge, because you will be there forever trying to untangle that mess. Yes, I have wasted many hours of my life trying to integrate things from odd angles. Usually under time pressure.

It turns out that one of the secrets to a good continuous integration system is to keep the immediate build to less than 10 minutes. At this point in your coding lifetime, this probably seems trivial, but if you ever start working on a C program with thousands of files which have to all compile separately and link and then run the tests (which probably also needed to link separately), then it becomes a challenge.

10 minutes is a bit of a magic number, but mostly based on the fact that it will take you around 15 minutes to properly get immersed in a new task, so you won’t suffer needlessly from context switching if the build does fail. The code ideas should still be in your short-term memory and fixing it is probably still very cheap, relatively speaking.

The Next Step: Continuous Deployment

Once Continuous integration is really working for you, the next evolutionary step is continuous deployment. Continuous deployment only really makes sense in a web environment, where the application only lasts as long as a single request. This is quite a scary concept – having changes going live as soon as they’ve passed all the automated testing. You suddenly want to make very sure that your automated testing is up to scratch. My team are currently doing a very watered down version of that with the web app we’re developing.

We continuously deploy to our test environment. It means we’re getting some of the benefits forced upon us, in that assumptions about the cloud environment are tested (notably the SQL provider has stricter rules than that of our development machines) and our deployment is necessarily fully automated. If I can reference the last section again, it truly is a relief knowing that pressing the deploy button is going to work every time when we’re risking downtime for the customer, or more likely, the availability of our own weekend which might be spent fixing things such that the customer doesn’t notice when they arrive at work on Monday.
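The gate itself can be sketched very simply; the callables and environment name here are hypothetical stand-ins for our real (and rather longer) deployment scripts:

```python
def continuous_deploy(tests_green, deploy, environment):
    """Deploy only when the automated suite is green.

    tests_green reports whether every automated test passed;
    deploy performs the fully automated deployment. Both are
    placeholders for whatever the real pipeline does.
    """
    if not tests_green():
        return f"blocked: not deploying to {environment} with failing tests"
    deploy(environment)
    return f"deployed to {environment}"

deployed_to = []
result = continuous_deploy(lambda: True, deployed_to.append, "test")
```

Everything interesting lives behind `tests_green`: the stronger the automated suite, the more trust you can place in that one boolean.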

The problem of the broken mainline

So now you have your agile infrastructure in place, and someone checks in a change to the code which doesn’t compile or fails a test or “breaks the build”. The correct response is to fix that as soon as humanly possible. If you’ve been working on small chunks, with a solid set of tests around it, then that should be a relatively trivial affair. If you’re working on a large legacy code base written in C and your check in caused a test in a different part of the program to fail dramatically because you’ve exposed their reliance on a thread race or something equally difficult to sort out, then the problem changes somewhat.

All continuous integration systems that I have come into contact with have the built-in assumption that build success is a binary decision. Either everything worked perfectly or it is considered broken. This is a Good Thing™, as broken windows syndrome will creep in very quickly and more easily than you would ever imagine.

Left alone, it becomes the norm that the build is broken. Thus further breakages appear and go unnoticed (thanks to the binary all-or-nothing). So even if the original failure gets fixed, the build is still broken for other reasons. This sort of thing really hurts productivity, to the point that my team has a light that switches on when the build is broken. This means that all team members and anyone walking past can see when the build is broken (and we do get people asking what it’s all about).
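The light is driven by exactly that binary, all-or-nothing rule. As a sketch, with a made-up status representation, the decision is nothing more than:

```python
def light_is_on(builds):
    """The broken-build light is on if ANY tracked build is not green.

    `builds` maps a build name to its latest status string; any status
    other than "passed" counts as broken -- success is binary.
    """
    return any(status != "passed" for status in builds.values())

# Hypothetical snapshot: one red build is enough to switch the light on.
builds = {"compile": "passed", "unit-tests": "failed", "installer": "passed"}
```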

So now that the Team has decided that continuous integration helps their development, it becomes a question of Team Culture to keep the build light off, just as it was Team Culture allowing the broken windows. In order to keep the light off as much as possible, it becomes the top priority of the person who checked in the failing code (which the continuous integration system helpfully highlights for us) to get the light switched off as soon as possible. Other team members are encouraged to socially pressure the perpetrator and potentially help them to get it fixed, if need be. It shouldn’t be a pleasant experience, but we’re not out to make our developers cry.

The Applications team certainly isn’t the only team with such a mindset. Terminals have a notion of “the person responsible for coordinating the fix”. This rather large mouthful of a title exists because they have a slow build turn-around, relatively speaking, and they have the most developers working on the same code base, so they are more likely to suffer from multiple simultaneous breakages. This means that the first person to break it becomes responsible for getting it back to green, which usually amounts to checking the reasons for build failure quite closely and, if it’s not their fault, politely informing the person who did check in the break that they need to look into it.

The infrastructure teams also do this sort of thing. There are many posters around their scrum boards with a doctored photo of the team lead, reinforcing the social pressure in a light-hearted way: broken windows aren’t OK. (Light-hearted is important when coming from a team lead!)

One of the examples of broken windows being seen as OK was within the former DSP team. They would compile their code with warnings ignored, and the number of warnings grew and grew. This led to a feedback loop suggesting that some of the terrible things they were doing were safe and a reasonable trade-off in the name of perceived efficiency. The number of bugs that would have been caught just by treating compiler warnings as errors in this project is phenomenal. They more or less got away with it for so long because the DSP is a relatively small portion of the code base and is extremely parallel, with the majority of bugs showing as weirdness in the sound coming out of the radio, or sound just not coming out at all. But wow, that’s such an expensive way to fix things.

The Problem of the “Special Mainline”

For a while in the Terminals Team at Tait, we had this habit of having an Apco Team Branch and an “everyone else” branch. The Apco Team would merrily do their development for months, or even more than a year at a time, and then try to merge the whole lot back into the everyone-else Mainline. As you can probably guess, this didn’t work particularly well. Part of the reason was that we were using CVS at the time. CVS was the best tool of its day, but it’s not 1998 any more.

As a junior, I was semi-regularly tasked with tracking down when a particular bug was introduced, and the number of times I landed on a massive merge was disheartening. I often felt like I had failed, because even though I had tracked down when it happened, I wasn’t able to give any better information on how we might fix it. I no longer have such misgivings. There was a disproportionate amount of breakage in those merges, partly because we were using an inferior tool and partly because we thought it was a good idea to test inter-team communication months at a time, all at once.

When it was time to do one of those merges, we would often have a developer go dark for the better part of a week just trying to get it done. This is not a good way of doing software development – you will lose weeks at a time, without even being aware of it.

I liken it to two cars crashing into each other at high speed, and in fact we would refer to them as ‘Big Bang Merges’. Two or more teams of people, working on what started out as the same code, have now evolved it into something remarkably different. When those two objects, which are no longer the same shape, attempt to occupy the same space, the universe tries to correct this impossibility with explosive results. It’s much easier to grow the two code bases at the same time. It can often feel slower to develop software like this, but the alternative just has the slow-down in a different place, and has the nasty side-effect that bugs can be introduced and go unnoticed more easily.

An agile board

Just to go a wee bit sideways here: the rest of the talk is about tools to get your code in order, but the agile board is the one piece of infrastructure that really only exists in an agile environment. It needn’t be a fancy 60” touchscreen hooked up to the latest in agile software. It might be a spare bit of wall with some post-its attached.

It’s very easy to underestimate the power of this piece of equipment:

– Makes work visible

It shows you when work is taking longer than it should: “it’ll be done by next stand-up”, three days in a row.

It is a nice feeling being able to see something physical moving across the board. Software is never finished and it’s very easy to forget what you’ve accomplished, so having a thing represent completion for you is reason enough to have it. Having that accomplished feeling is a good feedback mechanism for developing a rhythm of delivering on time.

When it’s a web-based tool, it makes working remotely a bit easier. As someone who worked in the European timezone for around 6 months, I can tell you that it is a good start for communicating the bare minimum about who is working on what and what is available to pick up next. But if you’re all working in the same room, then the post-its are just as effective.

It provides a bit of marketing to the rest of the company. If the big-wigs walk past and can see you’re doing something totally awesome, it helps them to think good things about your team when making decisions. In my experience, it doesn’t go the other way. They see even mundane work as “necessary”.

It greases the wheels of a large company. Project Management can look at this board and see what’s happening right now without needing to interrupt the developers actually working on it. Or they can raise a task on the backlog, knowing that the developers will likely need to investigate it a bit more themselves before they are ready to have a proper conversation about it.

If you’re looking for free agile software to use on your projects, then I recommend Trello.

Conclusion

Version Control, Continuous Integration, and One Step Builds are parts of the agile infrastructure because they have some common themes. They allow us to respond more quickly to change, whether that’s change in the code, change in the requirements or a change in the thinking of a team member and they help us to oil the wheels of delivering value.

But the most important aspect of agile infrastructure is something that I’ve hinted at throughout the talk, and that is Team Culture. If your culture allows broken windows, or code manufactured to bypass continuous integration (it doesn’t have tests), or doesn’t allow the team to jump in and help someone who is struggling on a task that we all thought would take an hour and has ballooned to three days, then all the 60” touchscreens in the world won’t save your project from failing. Agile infrastructure is a set of tools – it’s up to you to use them effectively.

BDD, the A-Ha Moment
https://amillioncodemonkeys.com/2014/03/29/bdd-the-a-ha-moment/
Sat, 29 Mar 2014 22:13:13 +0000

I’ve been professionally writing software for almost six years now, aware that testing was something that ought to be done as part of one’s job, but it was only in the past couple of weeks that I had a moment that really brought home to me why Behaviour Driven Development works.

After a sharp awakening that I just wasn’t doing my job properly (I had checked in some code that only sort-of worked, and had no automated tests verifying to what extent it was broken), I was picking up a new task. This piece of work was something I had done before. Not something like it, this exact thing, in a quick hack demonstration model that we threw away, with this being one of the changes we decided to keep. I knew what to do – I had to delete a bunch of code and it would still work. I was removing a decision from the user which wasn’t necessary with the improved workflow of the new version.

Partly because I wanted to raise my work ethic to what constitutes basic professionalism in my team, but mainly because I was a little angry with myself and exasperated with the person who had shown me in stark reality how I was falling short of my job description, I decided to challenge him. He is the Tech Lead on the project, and I didn’t know how I could possibly test for a lack of a user interaction without some horribly farcical test. So I asked his advice on how one might go about TDD’ing a removal of code. To his credit, after a couple of minutes’ thought, he came back to me and said that the missing user interaction wasn’t something we should be testing, but the decision was still being made – in code. He and I both knew that our previous version was a race to get something out the door and we had fallen into some of the classic traps, dumping logic directly in with the UI code in the name of speed. UI code is notorious for wanting to do extra things, like starting an event loop, which is not conducive to automated testing. We therefore did not have such tests around this particular logic.

The Tech Lead said that the best thing to do was to wrap some tests around the logic that would remain; we were removing one of the four possible outcomes and the other three still needed to work. Then, while we were looking at the code to see how to get started, it quickly became obvious that without the decision dialog in the way, we were free to move this logic to a more appropriate layer. But where exactly?

I started writing some tests to see that this decision would be made, and found it much more difficult than it should have been. That made me revisit my decision about where I thought it should end up, and I moved my testing to a different layer. This too was a dead end. I think it was only three attempts at fitting it at the right level, but it might have been more. Still, these were two-minute forays at most, each test setup showing very quickly the problems at each layer, and they were all completely revertible because I hadn’t changed any real code yet! Then I found it. The layer at which it all fitted. Writing the tests became easy and I could do it piecemeal as I slowly removed unwanted code and then moved the method to its rightful position.

I already knew the code worked – we had manually tested it quite thoroughly, so actually asserting correctness wasn’t even at the top of my list of objectives for writing a test. Although I wasn’t conscious of it at the time, I was writing the test purely so it would guide my design. This is why the title of the post says BDD, but I referred to TDD earlier. BDD is the aim to remove the conscious thought of testing for correctness and moving towards asserting that the functionality exists. In short, it’s a good way of doing TDD properly.

In the end, I spent around an hour in a single session doing something quite straight-forward, which would have taken a similar amount of time had I not let the testing guide my design. Except that I left the code in a better state architecturally, and I never had to revisit it because I’d missed something with my human-based testing.

I mention this as my a-ha moment, because it really solidified for me what BDD was all about. It’s not about having those green ticks appear on your screen, it’s about really becoming the first client of the service you’re writing. The practice run before unleashing it on real code. And subsequently I have found the mental battle to test-drive my design (was that pun also intended by the creators of TDD?) has completely melted away.

How Agile Failed Us
https://amillioncodemonkeys.com/2014/03/17/how-agile-failed-us/

My team is trying something different. For a few years now, we/they have been doing Scrum. As in all successful agile implementations, it’s been tweaked to fit our workflow and calling it Scrum now is more a habit than a true description. And it is working for us …mostly. We’ve just embarked on something new and I thought I’d give a bit of a description of where we’ve come from and where we’re going. Hopefully it’ll resonate with anyone who isn’t on my team reading this – especially the bits that I now see as obviously brain-dead!

Back in the Stone Age when we started with Scrum, it kind of reminded me of having just discovered fire.

We asked questions like “What have I done since last stand-up? What will I do before next stand-up? Is there anything blocking me?”. We would vote on the number of Story Points based on an explanation and discussion in Planning 1. We broke our stories down into tasks in the Planning 2 meeting. We guessed at how long a task would take, carefully noting the number of hours, and calculated what would fit into a sprint, adjusting according to assumed efficiency (or lack thereof) based on a magic algorithm. But from where I’m standing today, all of that seems horrifically broken. About the only activity that we do the same is break our work up into sprints and measure our velocity in story points. When we started “doing agile” it was all about being able to tell the business when we were going to release. Of course, we got so good at it that we changed our version control branching strategy to admit the fact that the business demands when we release (more often than just every two weeks). So the fire started keeping us warm and we were able to taste cooked flesh, but for anything other than the very basics, we were still getting burned.

And, of course, we started working on an entirely new project. This involved technologies with which we weren’t completely comfortable and work that didn’t fall into the same patterns. Suddenly our estimates were way off. Our ability to do any kind of meaningful task breakdown without the code on hand was reduced to almost zero. The new technologies meant our time estimates were way off too. So during each retrospective, we would try to address these issues, one by one, until we morphed Scrum into what we practise today, except for the new bit that I’m not ready to expose yet. Of course, our inputs changed dramatically too. We didn’t really have a release date – just an amorphous blob of work that we would need to do before we could release.

We started doing what I refer to as ‘demo-driven development’. Our time-based sprints were forced into fitting the next deliverable and that was the only goal. Everything at the cost of the deliverable. It was actually fairly motivating and not as bad as my wording of that sentence would lead you to believe. But we never really got to perfect it. We missed the fact that a Demo is both a fully fledged story in and of itself, and that it’s a huge integration exercise, which will derail your whole sprint if you don’t jump on it asap.

Now we have a backlog and it even has some stories in it which vaguely predict an imaginary date of when we finish. We tell the project managers that this number will likely double and they pretend to believe us. But the first difference is how we go about getting stories into a sprint. Instead of sitting in ridiculously long meetings masquerading as “planning”, we groom the stories first. And by groom, I mean we try to figure out any nasty surprises that might be lurking there. We do this at our desks, on our own, giving a possible solution to the problem. This usually involves a lot of hard thinking and a bit of design – always alongside the actual code and design docs! We try to avoid designing the story completely, but get a really strong feel for it and lay it out as though other adults who are capable of abstract thought are going to pick this up and start work on it. I know, this is a big step up from being treated like a trained monkey at the various fast food outlets or supermarkets we’ve probably all worked in at some point in our lives.

So we put an educated guess on how many story points we believe it’s worth along with some acceptance criteria to know when we’re done and then (and here’s the critical bit) we pass it along to one other developer and the test analyst for their review. Only after they have said that the story makes sense to them is it moved into the approved state, waiting for voting in the planning meeting.

During planning, the person who was responsible for the initial grooming explains the story to the rest of the team and points out the scary bits / risks, with any team member able to chime in with their two cents. After everyone has exhausted their desire to speak (or the scrum master politely suggests we need to move on), we vote on the story. If there is a lot of variance, we will then get some of the outliers to defend their vote. This comes in the form of “It’s not a (n + m) because… and not a (n – p) because…”. We use standard scrum numbers for the points: 0, 1/2, 1, 2, 3, 5, 8, 13, etc. We have a definition of what each number includes, which we often refer to when defending them. 3 for example, requires some design, therefore the risk goes up from a 2. 13 means everything else is to be put on hold and even then, we might not finish it in the sprint. Your number scale will vary.

We even got pretty good at it. We managed to start achieving our sprint goals, even finding some room to play a bit more with the technical practices, like pair-programming, TDD, etc. And then we switched projects again. We toiled for about a year and released the software on time – but it wasn’t exactly a sure thing. The timing was incredibly tight and by the end of it, we were spinning our wheels just fixing bugs (the “stabilising period”). We wouldn’t have been able to conscientiously release without that time, but the agile literature implies that if we’re doing things correctly (as we believed we were), it shouldn’t be necessary. The software should be releasable at the end of each sprint, right? Most of the problem there was that we were sacrificing the technical practices in lieu of the stupidly ridiculous (even by software standards) time-frame. The classic mistake.

But then something a little weird happened; as the business wasn’t (yet) able to tell us when the next iteration of this software was needed (“soon” isn’t a real date, you see), we embarked on a new trajectory. We went back to the demo-driven development, but altered it slightly. We removed the time period from the equation completely. Our sprints would take as long as needed, but ultimately should be focussed on achieving the demo. If it wasn’t part of the demo, it didn’t make the cut. Except for when it did. We discovered something fairly critical: once time is removed from software delivery, all urgency is lost and “oh, but we’re definitely going to need it” becomes a convincing enough argument to do some Yak Shaving. And then the sprint goal just gets further and further away, meandering until the person whose single wringable neck is on the line for actually delivering this software gets extremely frustrated with how it isn’t being delivered and announces the timeless experiment a failure.

We got a bit lucky, as well. During the sojourn to nowhere, the good folks in sales sold the system! Hurray! Now the project in its current form wasn’t just optimistically scheduled – it was completely and totally, laughably impossible. We had guestimated about 9 calendar months of work, which now needed to fit into about 3. Suddenly the impossible was easy. If said customer didn’t need it, it didn’t make it in. Period. Our potential contractual obligations were the only driver. The schedule was still tight, but it was normal software-tight – not impossible.

Now, having discovered that demo-driven development without a time-box doesn’t work anywhere near as effectively as with one, we’ve done what good engineers should do when they’ve run out of a clear path forward. We’ve gone back to first principles – but we’re trying to carry our learnings to date with us. We are now having the formal retrospective, Planning 1, and Planning 2 meetings. Our stand-ups include what we’ve done, what we’re moving on to and what’s blocking us, though that information is cleverly disguised as our leadership inquiring as to what’s happening on an individual basis, which avoids the meeting devolving into a daily vertical nap. Our Planning 2 meeting involves a bunch more prior research and isn’t as fractured. We no longer try to account for time using hours or magic formulae. We have a strong focus on the technical practices suggested by XP – Continuous Integration, Pair Programming, and striving for the most difficult of all: TDD.

Of course, we’re not perfect. We’re still lagging behind schedule, and the code still comes out of each sprint with a defect rate too high for release. But I feel we are improving. Probably.

]]>https://amillioncodemonkeys.com/2014/03/17/how-agile-failed-us/feed/0amillioncodemonkeysThe Serial Killer Tooltiphttps://amillioncodemonkeys.com/2013/11/05/the-serial-killer-tooltip/
https://amillioncodemonkeys.com/2013/11/05/the-serial-killer-tooltip/#respondTue, 05 Nov 2013 07:29:13 +0000http://amillioncodemonkeys.com/?p=230Firstly, having seen in my statistics a search term used to find my last post, I feel I should point out how I came upon the name for my Serial Port Terminal Application. I was poking fun at it being a “Killer App” (because really, if there were any software that proves serial comms as a technology, it was written years ago). Combine that with “Serial” and you have a terrible pun, which seems to be one of the most important traditions in Free and Open Source Software. This is not a post about the other type of serial killer. Or indeed, a full-time one.

For such a small feature of my program, the tooltip that shows the timestamp of when data was sent or received has caused me more consternation than probably the rest of the program put together. I had to touch each of the three layers of my model-view-controller, with the smallest modifications going to the controller – which is odd, because in general the bulk of an app’s functionality should live in the controller. So here’s how I did it…

After thinking briefly about the UI, drawing on my existing GTK+ knowledge, I decided it was going to be best to start with the model. When I first started writing this app, I had imagined a feature like this and broke the YAGNI rule: I had already included a timestamp in the data I was collecting about the bytes to-ing and fro-ing. My previous code was exhibiting a massive performance problem with the linked list I was using, in that it ground to a halt whenever I clicked the switch-to-hex-mode button. I therefore decided that if I wanted to search this thing with decent performance, I would need to step my storage up a notch. For this, I chose an in-memory sqlite database. At work I’m already in a SQL head space, I’ve worked with sqlite before, and of course its license terms make it easy to include in my liberally-licensed program.

Create a table, drop in the data as it arrives and pull it out in the most straightforward manner imaginable. I was worried it would be a performance bottleneck, but I wanted to let the database prove otherwise. Turns out it was good enough. No reason to get fancy until the simplest thing isn’t possible.
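The app itself is C# with GTK#, but the shape of the storage is simple enough to sketch with Python’s built-in sqlite3 module. The table and column names here are illustrative, not the app’s actual schema:

```python
import sqlite3
import time

# An in-memory database: fast, and disposable with the session.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE traffic (
    id        INTEGER PRIMARY KEY,
    timestamp REAL    NOT NULL,   -- when the bytes were sent/received
    sent      INTEGER NOT NULL,   -- 1 = sent, 0 = received
    data      BLOB    NOT NULL)""")

def record(data: bytes, sent: bool) -> None:
    # Drop the data in as it arrives...
    db.execute("INSERT INTO traffic (timestamp, sent, data) VALUES (?, ?, ?)",
               (time.time(), int(sent), data))

record(b"AT\r\n", sent=True)
record(b"OK\r\n", sent=False)

# ...and pull it out in the most straightforward manner imaginable.
rows = db.execute("SELECT sent, data FROM traffic ORDER BY id").fetchall()
```

A nice side benefit of sqlite: the same database can be written to a file and dissected offline when something looks wrong.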

Now for the interesting bit. How do I pop up an arbitrary tooltip on the same widget, based on the text underneath the mouse? This took a bit of doing.

First things first, we need to know the position of the mouse. I was kinda hoping I would find an event with the mouse X and Y coordinates attached so I wouldn’t have to store them myself, but I’m not that lucky. So I attached to the MotionNotify event on the TextView; now, every time the mouse moves, I store the X,Y coordinates for later. As a proof of concept, I set the tooltip text during the MotionNotify event and voila – I had an arbitrary tooltip based on the mouse cursor position. It was starting to be shaped like the actual feature.

After several mis-fires and some failed attempts at googling, I eventually came upon the QueryTooltip event. I think I tried this one pretty early on, but it turns out you need to set “HasTooltip = True” on the widget you want it called for, and you need to set args.RetVal = True or nothing will show up. Tricksy Hobbitses.

The next problem is knowing exactly what piece of text is underneath the mouse cursor. We know the window co-ordinates, but that doesn’t help on its own. For a second there, I thought I was going to have to do some stupid math based on the font size and the buffer lines, which would almost certainly turn out to be wrong or insufficiently accurate in all but the simplest of cases. Luckily, it’s a common enough thing for GTK+ to have support for it built in: TextView.WindowToBufferCoords converts the window co-ordinates into buffer co-ordinates, and TextView.GetIterAtLocation then hands back the TextIter sitting at that point.

So that gives us the buffer index – that’s a solid start! But it only tells us which character is under the mouse, not the timestamp associated with it. So it’s time to revisit the database. Every time we add data, we now also need to know whereabouts in the buffer it sits. The simplest thing possible: keep a running total of the buffer length and record it, along with the length of the data, whenever we dump data in the database.
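A sketch of that bookkeeping, again in Python’s sqlite3 rather than the app’s actual C# (column names are illustrative): keep a running character total, store each chunk’s start offset alongside its timestamp, and the lookup becomes a single query for the newest chunk starting at or before the character under the mouse.

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE chunks (
    timestamp    REAL    NOT NULL,
    start_offset INTEGER NOT NULL,  -- where this chunk begins in the text buffer
    length       INTEGER NOT NULL)""")

total = 0  # running total of characters currently in the buffer

def record(text: str) -> None:
    global total
    db.execute("INSERT INTO chunks VALUES (?, ?, ?)",
               (time.time(), total, len(text)))
    total += len(text)

def timestamp_at(buffer_offset: int):
    # The newest chunk whose start is at or before the queried character.
    row = db.execute(
        "SELECT timestamp FROM chunks WHERE start_offset <= ? "
        "ORDER BY start_offset DESC LIMIT 1", (buffer_offset,)).fetchone()
    return row[0] if row else None

record("hello")   # occupies buffer offsets 0..4
record("world!")  # occupies buffer offsets 5..10
```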

It is now time to explain what I meant by “my previous code was exhibiting a massive performance problem”. When I started trying to pull the timestamps out, I was either getting wildly wrong results or exceptions with no clear cause. The database decision really came into its own at that point: I was able to just save the whole thing to a file and dissect it. As soon as I opened it up and did a SELECT *, it became clear that I was storing waaay too much data. I was storing the whole 256-byte buffer, regardless of how much data had actually arrived – I was recording the length of the buffer, rather than the number of bytes I had actually received. Whoops. It turned out not to be a performance problem or a hex-conversion problem or any of the other things I had considered; I was just doing something dumb. A quick fix and the performance problem disappeared, along with the conversion issues. I was quite excited about that – to the point where I re-instated the sent/received tag, and the performance was still acceptable. Great Success!!
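The bug is a classic fixed-buffer mistake and is easy to show in miniature. In Python terms, with a pretend 256-byte scratch buffer standing in for the real serial read (nothing here is the app’s actual code):

```python
SCRATCH = bytearray(256)  # fixed-size receive buffer, reused for every read

def on_data_received(bytes_read: int) -> bytes:
    # Wrong: bytes(SCRATCH) records the whole buffer, stale padding and all,
    # and using len(SCRATCH) as the length compounds the error.
    # Right: record only the bytes that actually arrived this time.
    return bytes(SCRATCH[:bytes_read])

SCRATCH[:2] = b"OK"        # pretend two bytes just came in
chunk = on_data_received(2)
```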

So, to round the feature out, all that remained was a completely obvious SQL query, and performance appears good enough.

In the end, it was actually really obvious, but it took a bunch of google mis-fires to get there. I wanted to add a scathing remark about the GTK documentation, but when I search there now, I find exactly the information I needed, including the caveats I mentioned above. I still want to blame them somehow, but I haven’t figured out how I can do that successfully.

]]>https://amillioncodemonkeys.com/2013/11/05/the-serial-killer-tooltip/feed/0amillioncodemonkeystooltip_smudgeThe Serial Killerhttps://amillioncodemonkeys.com/2013/10/30/the-serial-killer/
https://amillioncodemonkeys.com/2013/10/30/the-serial-killer/#respondWed, 30 Oct 2013 09:24:08 +0000http://amillioncodemonkeys.com/?p=221A little over a year ago, I decided to write my own serial terminal. Most people’s reaction to this is “why?” Serial comms is a technology well on its way out and – what they see as the critical hit – a bunch of terminal programs already exist, many of them even open source. So why would I bother?

To begin with, the current applications didn’t do all of what I needed, how I needed it. Because of the nature of the kind of people who use serial ports with some regularity, serial terminal programs tend to be written by people whose day-jobs are very low level. This is wonderful if you want to be confronted with a program that resembles the cockpit of your average jumbo jet:

Der Hammer Serial Terminal

Now, no offence is intended to the good folk who wrote this cross-platform program, and this particular one was recommended to me after I announced my own application. You can do everything with it, but just starting to do anything is somewhat intimidating. Now, fair’s fair, so here’s my app’s interface for you to poke at:

Serial Killer on Windows 7

The features I require are:

- “friendly-connect” (if I start typing in the terminal and it’s not connected, it should attempt to connect with the current settings)
- functioning in both ASCII/plain text and raw hex
- connecting at different baud rates
- converting existing data between the two modes
- knowing the time each piece of data arrived or was sent
- working on both Linux and Windows
- saving data from a session to a text file
- being usable without having to twiddle a thousand knobs first.
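Of the list above, “friendly-connect” is the only behaviour that needs any real logic, and it amounts to opening the port lazily on the first keystroke. A minimal Python sketch, with a fake port standing in for the real serial code (none of these names are the app’s actual API):

```python
class FriendlyTerminal:
    """Connects on first keystroke, using whatever the current settings are."""

    def __init__(self, open_port):
        self.open_port = open_port          # callable: settings -> port object
        self.settings = {"baudrate": 115200}  # illustrative default
        self.port = None

    def on_key(self, ch: str) -> None:
        if self.port is None:
            # friendly-connect: no explicit Connect click required
            self.port = self.open_port(self.settings)
        self.port.write(ch.encode())

# A fake port so the sketch is self-contained.
class FakePort:
    def __init__(self):
        self.written = b""
    def write(self, data: bytes) -> None:
        self.written += data

port = FakePort()
term = FriendlyTerminal(lambda settings: port)
term.on_key("A")
term.on_key("T")
```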

None of these is asking particularly much, I would have thought, and yet my searching came up short. I use serial terminals at my place of work, so I also got to piggy-back off the searching of my many colleagues – and yet the terminal program used most often there is Tera Term. Gtkterm was the one I liked the most, its main disadvantage being that it only runs on Linux. Linux is the development OS for the firmware of these devices, but the programming software only runs on Windows, so most of the interaction is done plugged into the Windows machine, and un-plugging and re-plugging is a hassle I don’t need.

The more astute of you will notice that Gtkterm is open source and uses GTK (obviously), so making it run on Windows shouldn’t be too big a deal, right? Well, actually, it is. I have pulled the source code and managed to get some patches accepted. These patches were mostly clean-ups, with the exception of one, a UX improvement to the logging functionality (which I’m pleased to say has made it into the 13.10 release of Ubuntu). I had three problems in my quest to make it cross-platform.

1. It turns out they use a Linux-only widget for the terminal itself.

2. Finding a decent open-source C serial-port library for Windows is more difficult than it should be. The closest I came across was the one bundled with ruby-serialport, which I could eventually have adapted to my needs.

3. Most importantly, the act of getting patches accepted was quite a hassle.

The maintainer has a full-time job and a life, and took over the project after it was more or less abandoned by its previous maintainer. I have a lot of respect for people who do that sort of work, especially as I think it’s a quality program. But the effort required to get simple patches accepted was, as mentioned, too much for me. It would take him around two weeks to reply to simple translation patches (the original source is in French), so imagine if I had sent in something with real functionality changes.

So one day, I decided I would have a bash at seeing what I could put together in an afternoon. Turns out, using C# and my previous GTK knowledge, I was able to make a program that sent and received characters across a serial port at a bunch of different speeds. On Windows. Not bad for a single afternoon. If it weren’t for the fact that Mono doesn’t implement event-based serial comms, it would have worked on Linux out of the box, too. So with a small amount more work, I had something usable on two platforms in less time than it would have taken to get a single clean-up patch into gtkterm.
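For context, the missing piece in Mono is, as I understand it, the event-based SerialPort.DataReceived notification; the usual workaround is a background thread that does blocking reads and raises your own notifications. A minimal sketch of that pattern in Python (the fake port is purely illustrative):

```python
import threading
import queue

def start_reader(port, on_data):
    """Poll the port on a background thread and surface data via a callback,
    emulating the event-based API that is missing."""
    def loop():
        while True:
            chunk = port.read(256)   # blocking read, up to 256 bytes
            if not chunk:            # empty read: treat as port closed
                break
            on_data(chunk)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t

# A fake port so the sketch is self-contained: yields two chunks, then EOF.
class FakePort:
    def __init__(self, chunks):
        self.chunks = list(chunks)
    def read(self, n):
        return self.chunks.pop(0) if self.chunks else b""

received = queue.Queue()
reader = start_reader(FakePort([b"he", b"llo"]), received.put)
reader.join(timeout=5)

chunks = []
while not received.empty():
    chunks.append(received.get())
```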

And the final reason I wrote my own seems to be something abhorred by the software community: I reinvented the wheel, because I wanted to learn how to make better wheels. It’s not a trivial task to weave usability and functionality together – something at which most other comparable programs have, in my opinion, mostly failed. So now I’ve got a program I like to use, and I have the ability to effect change at relatively short notice. Really, that’s all I wanted.

In my next post, I’ll dissect how I built it, focusing on some of the more interesting techniques, so that others can benefit from my wheel-making learnings. In the meantime, see here for the source code.

]]>https://amillioncodemonkeys.com/2013/10/30/the-serial-killer/feed/0amillioncodemonkeysDer Hammer Serial TerminalSerial Killer on Windows 7Software is like Cookinghttps://amillioncodemonkeys.com/2013/08/29/software-is-like-cooking/
https://amillioncodemonkeys.com/2013/08/29/software-is-like-cooking/#respondThu, 29 Aug 2013 08:48:37 +0000http://amillioncodemonkeys.com/?p=206In a previous post I mentioned that software development is a creative process. In this post, I’m going to expand upon that a little, add some metaphors and later, if I’m feeling wild, I might even add a simile or two! I’m sure F. Scott Fitzgerald is turning in his grave, given that I knowingly “laughed at my own joke” then.

Coding is like cooking. It is important that the flavours blend together to look and taste like they are part of a single entity. It’s not really ideal to find a whole lemon buried inside a cake. Or a stew. Sadly, I see this kind of thing all too often in coding.

The recipe calls for a hint of lemon, so Engineers being Engineers think, “if a hint of lemon is a good idea, think how awesome it would be if we had a whole fruit in there, from which we could squeeze the right amount of lemon juice whenever we wanted.” At which point there is a lemon bolted to the side of your mixer, and when you try to bake a different variety of cake, the machinery forces you to have lemon in there. Or, more likely, it is a multi-flavoured juice dispenser so heavy the mixer can barely stand up on its own any more. And it leaks.

Which doesn’t at all lead me to my next point, so I’ll just jump to it awkwardly. Something I find myself doing quite a bit is cooking blind – which is ironic, because sight is the main sense I actually employ when doing it. What I mean is that I don’t taste my food before serving. It’s a bit inexact and, I admit, somewhat hit and miss. And largely, when it’s a hit, it’s due more to the experience of knowing the right colours, textures and combinations of flavours that work.

So I guess what I’m saying with this post is dressed up prettily, but blindingly obvious. Experience mitigates some of the worst sins, but if watching altogether far too many cooking shows on telly has taught me anything at all, it’s that life is infinitely better if one tastes the food before it is served.