Tuesday, 16 December 2008

I'm very interested in the subject of gender stereotyping, which probably isn't surprising as I'm a girl in a predominantly male industry. And I like cars, and sports, and get irritated if people assume I'm not "allowed" to be interested in these things.

Far from being discriminated against, however, I find many people ask me why there aren't more women in the industry and what can be done to encourage girls into IT. If these questions were easy to answer, they wouldn't have to be asked.

But one of my personal theories is around how we raise our children. Yes, it's possible that girls are genetically, for some reason, averse to technical types of roles. Or that the working environments don't appeal to the feminine mindset. But if you tell kids from an early age that some things are for boys and some are for girls, there just aren't going to be enough girls studying "boys" subjects later in life to get a large proportion of them into "boys" jobs.

I have a lot more to say around all these possibilities but I only want to explore one small area today. I was pointed this morning to an article on Gender and Toys. I'm mainly recording it in here so I don't forget where it is. But I found it interesting because it raises many of the same questions I have, and answers almost none of them. At this stage, I think it's very difficult to say how much of a child's preferences are there because of parental or peer pressure, and how much is natural.

I don't understand how a society that discourages the use of words like chairman (preferring "chair" or "chairperson"), and that works hard to at least look like it promotes equal opportunities in the workplace, can encourage gender-specific advertising to children (have you watched the adverts between cartoons?) and the blatant gender stereotyping the article talks about in toy stores.

The main point to take out of it, I think, is that we should be aware of how we raise our children. Well, OK, any parent will always want the best for their child, so that's a slightly fatuous statement. What I mean is, I don't think we should inflict on children our own preconceptions of what they will like based purely upon their gender, or they might grow up thinking that really is what they like. Children should be allowed to develop their own preferences and exercise any skills that interest them until they find what really suits them personally.

PS The assumption in that article that Lego is for boys made me very angry. Of all toys, I thought Lego was pretty good at not overtly targeting an audience, although the new trend towards Lego-for-girls has worried me since I first saw it. Lego is for kids! Doesn't matter what age or gender they are!

Thursday, 11 December 2008

Read if you're a developer and wondering what's missing from your job.

Read if you're a manager and you're looking to recruit the right types of developers. In particular, be honest with yourself over whether your organisation is more aligned to "hygiene" or "motivation". At least one of the poor job decisions I have made was because the role was mis-sold as one and turned out to be the other.

These physical items could be extended to include your virtual environment:
- Eclipse / IntelliJ IDEA / Visual Studio (is it still called this?) / other dev environment - setup, preferences, window positions
- OS
- Desktop icons - do you care what / how many / positioning?
- Which software is always open when you're working, and does it matter what order you open them in (so they're in the correct place on your task bar)? (Additional: does not having enough RAM cripple you because you have to constantly shut and re-open software?)

How anal are you about setting these things up and getting them just right before you can start coding (or whatever it is you do)?

This post comes to you courtesy of my irritation with my chair. I want to code cross-legged today and my chair does not adjust in the dimension required to provide me with enough space to do this.

Wednesday, 8 October 2008

Today, I found my own blog useful. I was configuring Spring validation on my new project, and had to remind myself how to do it. This time it took less than an hour, which beats the two days it took me to work out how to do it the first time.

And I impressed one of my new work colleagues. Apparently I am now the Spring Guru. Oooops.

Monday, 1 September 2008

I'm reading Joel Spolsky's User Interface Design for Programmers. A thought that's struck me is about architecture. It's easy to get fooled into thinking building software is a bit like being the architect for a building. I'm not even going to go into the differences between engineering practices à la building design and good practice software design. I'm going to start from the easy point, the stuff you can see.

And you can look at buildings like the Woolworth Building, and think, "I want my software to be the equivalent of the Cathedral of Commerce".

But for all its Gothic detailing, its flying buttresses and gargoyles, you can still find the entrance. You can still find your way to the elevator and up to the floor you desire. The magnificent detail does not obscure the use of the building.

The same cannot be said of user interface design. You don't simply wander past / through an impressive facade, whatever you think the "skip" button on your flash intro is supposed to do. The decoration, the clutter, is right there in front of the user, the whole time they're trying to DO something.

Do architects design their entrances flush with the walls and the same colour?
Are entrances to basilicas hidden behind flying buttresses?

No (generally). Because the design of a building is supposed to enhance the "user"'s experience, not get in the way of it.

If you're playing with acres of land there's a lot of detail you can fit in that won't hinder the ordinary person. In fact, rather sadly, many of them won't even notice it. If, however, you're coding for 800x600, or even if you're coding for a wide-screen Mega Television of Doom, there's limited space available. You want to make sure the "nice-to-haves" don't get in the way of the user's "must-dos". You want to make sure your entrances (buttons, links etc) are well marked.

Wednesday, 30 July 2008

I've learnt a lot professionally and personally during my time here in New York, but the time has come to go back "home".

I'll be relocating to London in September. I guess it won't hurt to mention on here that I'll be looking for an exciting new job when I get back there. Check out my LinkedIn profile if you are in the market for a Java tech team lead.

Wednesday, 21 May 2008

I think the statement that struck me the most when I was on the Certified Scrum Master course was: the start of the project is when you know the least about what you're doing.

Which of course is absolutely true.

So why do we come up with extensive requirements, detailed design, and fixed plans at this point of time? We haven't put anything into place yet, we haven't played with the code, the customer hasn't seen anything of what we're promising to deliver.

If we think about it this way, suddenly the waterfall method makes even less sense (assuming people do still like to work this way).

How many times have you just played with a bit of code, done a prototype, a "hello world", knocked up a basic screen, before you can even give your manager some finger-in-the-air estimates? I don't know about you but I'm not comfortable unless I have played a bit to get the feel of something before even looking at someone who asks those questions!

The empirical approach makes a lot more sense to me. So why aren't we doing it more?

Because it's harder.

I think it's harder because it works, but I daren't make such a bold claim without having a number of such projects under my own belt, or at the very least digging through the web to find examples. Which frankly I'll leave to you to do, if it matters to you.

Friday, 9 May 2008

This week I acquired some more letters which I can add after my name when I'm feeling pretentious:
- SCWCD
- CSM

I'm feeling a bit over-saturated at the moment as you might imagine, especially since Wednesday's exam was a detailed technical one and Thursday and Friday were spent learning about development methodologies - well, one specifically I suppose.

What I need now is a real project to apply it all. It might be time to tackle that favour I was asked for back at Christmas.

Tuesday, 29 April 2008

After the acquisition of a company with offices in New York, I pestered my company outrageously until they got fed up and finally relented – they agreed to send me to the US.

To ease the transition, I chose to move onto a project which would allow me to start working in London and continue on the same team after I had moved to New York.

In the extreme over-excitement that followed my relocation, it took me a little while to realise that effectively I was an offshore resource, no different really from any of our Indian test team, and the team needed to manage this appropriately.

I learnt a number of lessons whilst playing this game. Some of these points are also valid for teams with remote resources (e.g. people working from home).

The Time Zone Difference is the First Problem to Overcome

Yes, the geographical separation and remote access are important to consider, but it's the time difference which is the killer. When your working day only (officially) overlaps for 4 hours, you have to make the most of that overlap time. Some of the steps we took to overcome this were:

Moved the daily team meeting from 9.15am GMT to 4pm GMT/11am EST. Therefore I got to participate in the meeting rather than just having my instructions passed on to me. This greatly improved communication of issues between all team members, and, more importantly to me, helped me to feel like I was still a part of a team instead of just a resource.

Updated the team plan so that instead of simply representing AM/PM activities for all team members, my time was staggered to better represent my working hours. E.g. Originally the plan looked like:

But the daily meetings would invariably go something like this:

Team Lead: Ms US Minion, it's Monday 4pm, you must be nearly finished with task 9, right?
Ms US Minion (Me): Dude, it's Monday morning, I've barely finished checking my e-mail yet; I've just about glanced at the spec, let alone started on the code.
TL: Oh yeah, I forgot. Well, it'll be done by the end of today, right?
USM: Sure, no problem.
TL: Right, so Bob can kick off the build before he goes home and it'll be sweet by tomorrow morning.
USM: Oh wait, you mean the end of YOUR day? Erm, no, that's not going to happen...

After a number of these types of conversations we got bored of forgetting this key point and changed the plan:

Subtle change really, but it was astonishingly useful at helping us to get our heads around when things would be delivered. If something HAD to be finished before close of business GMT, then it would be clear from the plan if that was achievable.

Plan to use the overlap time to best advantage. Otherwise something that would have had you waiting for help for an hour or two has you waiting for a day. I never really got good at this, mostly because I'm used to using my mornings for catching up on mail (which was particularly cumbersome when you have nearly a whole day's-worth of UK-based mail to get through), checking out industry news, meetings, phone-calls to the UK etc. I don't usually get into the coding zone until after lunch. Unfortunately, by 1pm EST, most of the team is wandering off home and I've forgotten to ask Bob for some pointers on task 10 which I know he's looked at before. Which means now I have to wait until tomorrow for that.

Some of the ways I tried to overcome this problem:

Save non-critical or US-based e-mail replies until the afternoon. Only deal with the time-critical ones in that early-morning e-mail frenzy.

In your daily TODO list, clearly mark the items which require help from the team and do those in the morning, EVEN IF they're not as "important" as the other items.

For items scheduled for the afternoon, take a look at them in advance, even if it's just the morning of the same day, ensuring there aren't dependencies on people in the UK. This is particularly vital for time-critical tasks like releases that need to go out that afternoon.

What Not to Do

Stop taking lunch. I fell into this trap trying to increase the overlap time between me and the UK. At the start of my time here I would not take lunch until the UK had gone home - using that hour, which falls at the end of the UK day, for "recreation" felt like a waste, since it meant only 3 hours' overlap with the team (if they all went home on time, and luckily for me they frequently did not).

But this is a dumb idea. For a start, the rest of the team frequently did not go home on time, leaving me pining round the office starving to death. I'm one of those people who a) likes to take lunch early and b) gets moody and irritable when hungry. So, for everyone's sanity, it's best if I take lunch.

The second reason this was a poor tactical move is that I was providing second-line support for the application. It was all very well making myself available for the development team in the UK, but if I was away from my desk after they had all gone home, the support guys, who needed the development team as second-line support, had no-one to turn to. So, all in all, not a wise course of action.

Do Not Underestimate How Important Face-to-Face Contact Is

It really is. Well, maybe it's just me - I'm only writing about my own personal experiences here, and on top of that I am A Girl, so maybe we are a different species after all. But do not neglect this facet.

I had daily conferences with the team, I was included on all mails, we had a team chat channel and I regularly spoke, in one form or another, to the client and to the support guys. But all of that cannot replace the inadvertent wince from someone when you talk about some aspect of the system, the tension you can read in someone's shoulders when you're talking to them, the cheeky grin or pleading look when someone asks you to do something they know isn't in your remit but could really use your help with.

I was fortunate, because I already personally knew all the people I had to interact with through having been on this project in the UK - it makes it a little easier to judge who they are and how they react to things. Even so, I found that getting communications without seeing the person put a strain on relationships – it’s so much harder to read a person’s intentions when you can’t see them: to excuse them for being offhand because they seem stressed; to phrase things carefully so as not to upset someone because it looks like it might be a sensitive subject. That sort of thing.

I also found conferencing into the team meetings a little harder than being there - it's more difficult to gauge when to add your piece to a discussion, since you can't see people's faces to see if they're going to say something. You can't catch someone's eye to see how they feel about something. You can end up in one of two opposite situations: a) you don't say much, because you don't know when it's appropriate to say something, and/or people forget to include you when you're not there, and/or the volume is up too low on the other end for it to be obvious when you want to speak or b) you talk too much – you can't see when people want to interrupt you or add something and/or you can just keep talking loudly and everyone else in the room has to stop and listen to you (unless they hang up on you!).

Lack of face-to-face contact with the client pretty much ruled out doing any work that required feedback from them. This will depend upon your client, of course. In the case of this client, they were very good at responding to well-planned e-mails which asked them to choose a solution from one or more options (provided the implications were well-described). However getting to the point where you have enough information to come up with these options and their implications was almost impossible if you didn't sit down in the same room as them and talk things through. Theoretically this could have been done over the phone, but it almost always needs diagrams and visuals, scribbling on paper and whiteboards etc., making the phone an inappropriate medium. As a consequence, as soon as I moved offshore, I was no longer involved in any but the most basic requirements gathering.

Solutions

Well, there aren't any really. You're not there, you can't see people. You can, however, be aware of this situation and work around it. For example:

Don’t expect an offshore resource to be able to gather complex requirements from a client.

Don’t expect an offshore resource to be able to explain complex issues / potential solutions to a client. The client can ignore e-mails they don’t understand and trying to explain over the phone is difficult, and also requires finding a window that fits both schedules and both time zones.

Team members in all locations need to cut each other a little slack - try to be precise in communications so that people can't get hold of the wrong end of the stick, and in return try not to see the worst in someone's hastily composed e-mail / stream-of-consciousness chat.

Ensure a regular meeting with a more human element, e.g. conferencing into team meetings. Interacting in a group like that, even if you can't see people, a) helps improve the sense of team and b) provides a bit more context and feedback than simple e-mails or chat. If you can get a video conference, even if rarely, that will help. I have very visual thought-processes, and something I did to help the team to think of me as a person and not just a voice or a spam-bot was to take photos of me in my working environment, and to take the team on a web-cam tour of the US office. In return, they shared photos of the new office they had moved to since my relocation to the US. It was fun, and helped us to connect on a more human level.

Summary

Communication is, unsurprisingly, the key to productive working when the team is geographically split. The processes we put into place to help enable this were:

Daily team meeting for the development team, at a time when all members can participate and providing facilities for all members to participate, remotely or locally (e.g. conference call). This is not just to enable communication amongst the team, but also to help offsite resources to feel a part of the team – a little “chat time” in this meeting, rather than being all work, is fundamental for remembering we’re all human, blowing off a little steam, and generally bonding.

Weekly team meeting for development team plus client plus support team, again providing a way for everyone to participate. This allows us all to swap ideas and issues regardless of where we are.

Work needs to be allocated at least 24 hours in advance. This works both ways - if I'm asked to do something as the UK team goes home, they cannot expect it complete (or even started!) by the time they get in the next day, as I might need support from the team, or from other people in that time zone. Similarly, I can't fling stuff back to the UK at the end of my working day and expect it to be worked on by the time I get in the next day, as they might have questions for me. And I personally get grumpy when woken up at 4am by a phone call.

A project plan needs to be kept up-to-date and visible to all team members. This plan is better if it clearly represents the time zone differences between team members.

Although ideally all team members should be treated equally, limitations of remote-working need to be considered when allocating work – any task which requires extensive support from the rest of the team or close relations with the client is probably not appropriate for someone who doesn’t work in the same location or the same hours as the team and client.

I was lucky:

I had a team I knew and a client I knew on a product I was familiar with (although I had to learn a LOT more in order to support it independently during the afternoons).

The UK team were workaholics and generally provided more of an overlap with my working day than I think is healthy for a bunch of 20-something males.

The UK Team Lead went above and beyond, being accessible by phone until about midnight GMT (7pm EST). I tried not to abuse this but it definitely helped resolve pressing issues instead of having to wait another day. In return, where possible I would check my mail and chat, however briefly, when I got up so I had a quick heads-up of the state of play at mid-morning GMT, well before I got into work.

In addition, I was a senior developer who had also had experience leading the UK team and gathering client requirements so I had a good view of the bigger picture of the project. So sometimes this meant I would irritatingly question every piece of work I was allocated and be nosy about the motivations behind something, but it also meant that I had the knowledge and ability to work pretty independently from the rest of the team. This may not apply to all offshore / remote resources.

Thursday, 17 April 2008

This is a great example of what happens when you try to incentivise intelligent people on very simple metrics.

They cheat.

This was well described in Freakonomics, and is something Mr On Software bangs on about regularly. It's clear that there isn't really a good answer to the problem - actually that's not true. The answer is to have everyone working in a job they are happy in and proud of, one where they are intrinsically motivated, and to give them enough information to allow them to make the correct calls when it comes to prioritising work. But I'm guessing that a large portion of the working world does not fall into this category.

Wednesday, 16 April 2008

I know there are arguments against certification, and I definitely think that using certification to determine whether to interview or recruit people is downright daft, because frankly learning a bunch of answers isn't all that difficult. But I personally find that completing a certification really helps to round out my knowledge in an area. I guess my thoughts are that a fairly recent certification, combined with the work experience to back it up, is something that would make your CV more interesting to recruiters.

As someone who has worked more on web apps than "core" Java applications, I found the 1.5 SCJP dead useful for drumming into me the facts about threading etc. that I don't usually think too much about. Plus, since I did it very shortly after 1.5 started being used in anger, it was a good way to get familiar with the new features. Although honestly it could've banged on a bit less about Generics - the stuff that was in the exam I have never used in real life. Well, maybe once, and even then I looked it up on the internet to remember how it worked.

I'm doing the SCWCD now; I figured I might as well "prove" I can do all the stuff I've been doing for close on a decade now (if you count the fumblings of my third year project as web development). It's full of what Rands calls Holy Shit moments. Now, don't get me wrong, I'm a damned good web developer (no, really, it's true!!) but reading the book, having it explain some of the things I always took for granted, or stuff I ALWAYS have to look up because I can never remember exactly how it works, is filling in a lot of gaps. It also explained to me WHY I never really worry too much about thread safety - I'm already unconsciously designing for it and more, by coding in as stateless a fashion as I can, something you really have to try to do in web development.

It feels like it will make me a better web developer, even if it is banging on about some of the Ancient (un)Holy Ways of JSP & Servlets, which have been almost entirely replaced by frameworks like Spring MVC or Struts (to their credit, although the authors have to teach The Old Ways, they do keep pointing out this is not the way to do things these days). In fact, in a few simple sentences, they explained to me why I was having a recurring nasty problem on the last web app I worked on, which I was blaming on Sitemesh. Poor Sitemesh, how I have maligned you - it was not your fault at all, it was my abuse of the <jsp:include> tag (although frankly the fact that I HAD to use a <jsp:include> tag in the first place should probably have told me that I wasn't using the correct tool for the job; Sitemesh really is not a lightweight Tiles and clearly should not be used as such).

It's at times like these that I realise what my real skill is - cramming my brain with as much pertinent information as possible for short-term retention, and recalling it in high pressure environments. It's why my GCSE results are so great. It's probably also what makes me good consultant material.

Sometimes the information even sticks in there. Just ask me how oxbow lakes are created and I'll prove it.

Monday, 14 April 2008

Today I would like to document my experiences implementing caching with Aspect Oriented Programming (AOP) and annotations.

Background Context

Caching may need to be implemented in your application for a number of reasons. OK, actually usually only one: performance. I would like to add my own tuppence-worth to this though - if you can get away without caching (specifically in applications that provide the ability to view and change data) then do so, unless you are using a cache implementation that will handle as much of the pain as possible for you. Implementing a home-grown cache from scratch is almost never the correct thing to do, in my experience: you spend lots of time debugging and tweaking the cache when you should be working on your day job, re-inventing something that someone, somewhere, has already done a perfectly good job of.

The example I'm about to show you is for a web application created to let users read and edit values from a database (not an unusual scenario!).

Application Architecture

The application architecture I have assumed for this example is: Java 1.5, JSP, Spring MVC (2.0.1), Spring JDBC (2.0.4), running on Tomcat 5.5, connecting to a database (RDBMS type not important for this example).

OSCache

A third party library, OSCache, provides the underlying cache for the application. This was chosen because it provides a simple solution which is easy to integrate into our Spring MVC layer and also provides JSP-level caching should we need it later.

The application uses OSCache in a very basic way. Caching could have been implemented with a HashMap and, the way I'm using it, it wouldn't have provided much less functionality. But by using OSCache we can use the "group" functionality (which allows us to cache against a key AND a group name, so we can flush and reload a whole group if necessary), and we can potentially add timeouts and other more complex functionality simply with configuration changes. See the OSCache documentation for full details.

CacheManager

Primarily to aid unit testing, but also to provide some separation between the application and the implementation of the caching mechanism, a CacheManager interface was implemented; the implementation simply wraps the OSCache GeneralCacheAdministrator.

Aspect Oriented Programming for Caching

If caching is implemented in a very simple way, it can be easy to forget to handle caching on all methods that require it. Also, the code to check whether something is in the cache - and to retrieve it from the database and store it in the cache if it is not - is the same for most functions. Therefore it seemed to make sense to implement caching using Aspect Oriented Programming, so it can cut across all functionality without having to be explicitly declared in every method that might need to utilise the cache.
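The original interface isn't reproduced in the post, but a minimal sketch of the idea might look like this. All names and signatures here are my own illustration: the toy implementation uses plain maps, where the real one would delegate to OSCache's GeneralCacheAdministrator.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical CacheManager interface: isolates the application (and its
// unit tests) from the underlying cache implementation.
interface CacheManager {
    void put(String key, String group, Object value);
    Object get(String key);
    void flushGroup(String group);
}

// Toy in-memory implementation mimicking OSCache's key-plus-group model;
// the production version would wrap GeneralCacheAdministrator instead.
class InMemoryCacheManager implements CacheManager {
    private final Map<String, Object> entries = new HashMap<String, Object>();
    private final Map<String, Set<String>> groups = new HashMap<String, Set<String>>();

    public void put(String key, String group, Object value) {
        entries.put(key, value);
        Set<String> keys = groups.get(group);
        if (keys == null) {
            keys = new HashSet<String>();
            groups.put(group, keys);
        }
        keys.add(key);
    }

    public Object get(String key) {
        return entries.get(key);   // null signals a cache miss
    }

    public void flushGroup(String group) {
        Set<String> keys = groups.remove(group);
        if (keys == null) return;
        for (String key : keys) {
            entries.remove(key);
        }
    }
}
```

The group parameter is what makes selective flushing possible: flushing "customers" removes only the entries registered under that group, leaving the rest of the cache intact.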

Spring has built-in support for AOP (and that documentation also provides a good introduction to what AOP is), so given our use of Spring MVC it shouldn't be too complicated to add Aspects to our code.

Implementation: Application Context File

You need to add a couple of things to the application context file for your app to set up the cache and enable the AOP.
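The original configuration listing hasn't survived, but for Spring 2.0 schema-based configuration the additions would look roughly like this sketch. The bean names and `com.example` packages are my own invention; only `GeneralCacheAdministrator` and `<aop:aspectj-autoproxy/>` are real OSCache/Spring artefacts.

```xml
<!-- Enable @AspectJ-style aspects (requires the aop schema namespace) -->
<aop:aspectj-autoproxy/>

<!-- The OSCache administrator backing the cache -->
<bean id="cacheAdministrator"
      class="com.opensymphony.oscache.general.GeneralCacheAdministrator"/>

<!-- Our wrapper, hiding OSCache behind the CacheManager interface -->
<bean id="cacheManager" class="com.example.cache.OSCacheManager">
    <constructor-arg ref="cacheAdministrator"/>
</bean>

<!-- The aspect containing the @Around caching advice -->
<bean id="cachingAspects" class="com.example.cache.CachingAspects">
    <property name="cacheManager" ref="cacheManager"/>
</bean>
```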

Note that, like the validation, this makes use of schema-based configuration.

Now these settings are in the configuration file, they should not need to be changed unless the cache provider is changed or caching is to be fundamentally altered.

Implementation: Defining Items to be Cached

Originally I had the application "magically" caching anything returned from a "get" method in the service layer and purging the cache on any "save" or "update" method.

However there are some types of objects that don't need to be cached - and that cause errors when they are - as sometimes the data needs to be "fresh" from the database. So the service layer declares what needs attention from the cache manager by the use of annotations:
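The annotated service listing is missing here, but a sketch of the shape described might look like this. The @Cache annotation, the DAO interface and all the names are my own reconstruction, not the post's actual code.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.List;

// Hypothetical @Cache annotation: its single argument lists the cache
// group(s) the returned object belongs to (or that the method affects).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Cache {
    String[] value();
}

// Stand-ins for the domain object and DAO layer.
class Customer { }

interface CustomerDao {
    List<Customer> findAllCustomers();
    void insertCustomer(Customer customer);
}

// The service mostly forwards to the DAO; the annotations mark which
// methods the caching aspect should intercept.
class CustomerServiceImpl {
    private final CustomerDao customerDao;

    CustomerServiceImpl(CustomerDao customerDao) {
        this.customerDao = customerDao;
    }

    @Cache({ "customers" })
    public List<Customer> getAllCustomers() {
        return customerDao.findAllCustomers();
    }

    @Cache({ "customers" })
    public void createCustomer(Customer customer) {
        customerDao.insertCustomer(customer);
    }
}
```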

You may notice that this service doesn’t really add much value – it forwards the request to the DAO and little else. The purpose of the service layer, however, is to provide a simple place for things like caching, and in future potentially additional security, logging, transactions, or to string together multiple calls to DAOs for a more complex transaction.

The methods that return objects that need to be cached or that affect items in the cache are tagged with the @Cache annotation. The single argument to this is a list of the groups in the cache that the Object should be or already is associated with. This group allows selective flushing of the cache – so when a new Customer is added, only the Customer group gets flushed (and consequently refreshed) rather than the whole cache.

Note that these annotations have to be on the implementation class of the service layer, not the interface - this is because it's the implementation that is wrapped by the AOP proxy. For more information see Understanding AOP Proxies.

Implementation: CachingAspects

This class is responsible for most of the work around the caching mechanism. It defines which methods in the service layer require attention from the caching mechanism, and it performs the work of retrieving from the cache and dealing with cache misses.

It uses the AspectJ AOP conventions in Spring 2.0, more information on which can be found in the Spring documentation. The main areas of interest are the annotations on each method which state when that method is to be called:
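The original code listing is missing; based on the description, the advice would look something like this pseudocode sketch. It is not runnable as-is - it needs Spring 2.0 and AspectJ on the classpath, and the package names are my own invention.

```java
// Sketch only: matches any @Cache-tagged service method named get* that
// returns a List (packages are illustrative).
@Around("execution(java.util.List com.example.service.*.get*(..))"
        + " && @annotation(com.example.cache.Cache)")
public Object cacheListMethods(ProceedingJoinPoint pjp) throws Throwable {
    // check the cache first; only call pjp.proceed() - the real service
    // method - on a cache miss, then store the result
    return pjp.proceed(); // caching logic elided
}
```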

This states that this method should be called when any method that starts with the word "get", returns a List, and is tagged with the @Cache annotation is called. The @Around states that this method will be responsible for calling the original method - so in this case the original service method that was called (e.g. CustomerService.getAllCustomers()) will only be called if the list is not found in the cache. Another example is:
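Again the original listing is lost; here is a pseudocode sketch consistent with the description below (package names invented, caching body elided, requires Spring 2.0 and AspectJ to compile).

```java
// Sketch only: matches @Cache-tagged get* methods that take a domain ID
// and return a domain object.
@Around("execution(com.example.domain.* com.example.service.*.get*(java.lang.Long))"
        + " && @annotation(com.example.cache.Cache)")
public Object cacheDomainObjectMethods(ProceedingJoinPoint pjp) throws Throwable {
    // look up the ID in the cache; pjp.proceed() and store on a miss
    return pjp.proceed(); // caching logic elided
}
```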

This method is called when a service method is called that is tagged with the @Cache annotation, starts with “get”, is passed a domain ID and returns a domain object. This is a classic example of something to be dealt with by the cache manager – again it needs to check if the item is in the cache, return it if it is or retrieve it from its original source and store it in the cache if it is not. You can have similar methods for determining which methods need to flush the cache (e.g. "update" or "create" methods).
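Spring generates this interception for you, but the around-advice idea can be illustrated with a plain JDK dynamic proxy. This is my own simplified illustration, not the post's implementation: it keys the cache on method name only and flushes everything on any non-"get" call.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

interface CustomerService {
    int getCustomerCount();
    void updateCustomer();
}

// Target with a call counter so cache hits are observable.
class CountingCustomerService implements CustomerService {
    private int calls = 0;
    public int getCustomerCount() { return ++calls; }
    public void updateCustomer() { }
}

// Around advice in miniature: "get" methods are answered from the cache
// when possible; any other method flushes the cache before proceeding.
class CachingHandler implements InvocationHandler {
    private final Object target;
    private final Map<String, Object> cache = new HashMap<String, Object>();

    CachingHandler(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        String name = method.getName();
        if (name.startsWith("get")) {
            if (cache.containsKey(name)) {
                return cache.get(name);                  // hit: skip the target
            }
            Object result = method.invoke(target, args); // miss: proceed
            cache.put(name, result);
            return result;
        }
        cache.clear();                                   // update/create: flush
        return method.invoke(target, args);
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new CachingHandler(target));
    }
}
```

Wrapping a CountingCustomerService this way, two consecutive getCustomerCount() calls return the same value (the second is a cache hit); after updateCustomer() flushes the cache, the next get reaches the target again.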

And hey presto! An almost magical cache which does not require your developers to re-write the same caching code for all the "get", "update" and "create" methods on your service layer. All they have to do is tag the appropriate methods with @Cache and the AOP will take care of the rest.

Disadvantages to AOP

As with many "magical" implementations, the main issue I found with this AOP cache is that it can be difficult to debug. Caching can cause weird issues anyway (for example, if your update methods don't correctly flush the cache you get stale data being displayed, or if your cache update method doesn't correctly retrieve the data). But when you throw Aspects into the mix, it can cause some interesting bugs that are hard to track down.

The number one key to helping to overcome this issue is a good set of unit/functional tests for your cache. The advantage of testing a centralised AOP cache is that you don’t have to thoroughly test every method that might have caching implemented. So writing a lot of good tests for the AOP cache probably pays off vs. implementing and testing caching for individual methods.

Still, strange things can creep in that can’t be detected by unit tests. For example, if a method has been tagged as a cache method through careless copy-paste coding, when it needs real-time data. Or, as I found, worse – if you don’t have a way to explicitly state which methods require caching but do it through the magic of naming conventions, you need all your developers to be fully aware of these conventions (and to not make mistakes in this area) in order to state which methods use the cache and which do not.

Although I probably spent more time tweaking and debugging the cache than almost any other individual area of the application when I used it in anger, I would still say it was worthwhile implementing it in this fashion. Removing the "difficult" bits from the service layer (so junior developers can work on it happily) and making caching as easy as adding an annotation to the appropriate methods improved productivity and allowed for much cleaner code (which also improves productivity) - enough that it was the right choice to make.

Thursday, 10 April 2008

Hmm. I have been so busy trying to think of "good" things to write here, and not having the time to actually write, that I see it's been 6 months since the last post.

If anyone is still out there though, I need help. I need a good Certified Scrum Master course in New York or London, preferably in April or May. Any suggestions? The one I wanted to go on was vetoed and now I find it's not running in NY again until Autumn.

PS Do you think it would be inappropriate to use the term "Scrum Mistress" on my CV?