This is article 3 in a series of posts about building an F# robot for Robocode. The last installment showed an improved version of the robot that didn’t bump into walls, but did quake at the sight of an enemy.

The twitching behavior I described in the last post is a result of receiving a number of events each turn. When the robot is hit, it tries to evade. However, when it sights an opponent, it turns to attack. Evade, attack, evade, attack… the robot gets stuck in a loop as long as the opponent is facing it and continues to fire. Poor BetaFish. Let’s see if we can fix that:

I’m doing more sophisticated pattern matching, like the nested match in Run‘s lastEvent matching for the Evade type.
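
The shape of that nested match is easy to sketch; here, plain booleans stand in for the robot’s real event types (this is an illustration, not the actual BetaFish code):

```fsharp
// A minimal sketch of the nested-match shape: the outer match dispatches on
// one condition, and one of its cases matches again on a second condition.
let react sighted hit =
    match sighted with
    | true ->
        // a nested match inside the first case
        match hit with
        | true  -> "evade, then attack"
        | false -> "attack"
    | false -> "search"
```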

I dropped the type specifications on the handlers for Robocode events, since F#’s type inference engine can infer them.

To solve the twitching issue, I created a shouldEvade method that decides whether to try to evade fire. This version will stand and take it if it’s facing an enemy. I found it odd that I had to specify that bearing was a float type, until I realized that Math.Abs has a number of overloads.
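
A minimal sketch of the idea, with a made-up 45-degree threshold (the real cutoff isn’t shown here); note the annotation on bearing, which is what lets inference pick a Math.Abs overload:

```fsharp
open System

// Sketch of the shouldEvade idea: stand and take it when the enemy is
// roughly dead ahead, dodge otherwise. The 45.0 threshold is illustrative.
// Without the float annotation, inference can't choose among the Math.Abs
// overloads.
let shouldEvade (bearing: float) =
    Math.Abs bearing > 45.0
```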

I changed the signature of randomTurn so I could use the pipelining operator (|>). This operator allowed me to write robot |> randomTurn 30.0, which reads more naturally to me than randomTurn robot 30.0. With pipelining, I’m able to say “with robot, do a random turn”. My code less like Yoda sounds, pipeline it makes.
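
To illustrate the signature change (with a stand-in Robot record rather than the real Robocode class), the pipeline-friendly version takes the robot as the last argument, because (|>) feeds its left-hand value in last:

```fsharp
// Robot here is a hypothetical stand-in, not the Robocode base class.
type Robot = { mutable Heading: float }

// The robot comes last, so it can be supplied by the pipeline.
let randomTurn (degrees: float) (robot: Robot) =
    let rng = System.Random()
    if rng.NextDouble() < 0.5
    then robot.Heading <- robot.Heading - degrees
    else robot.Heading <- robot.Heading + degrees

let robot = { Heading = 0.0 }
robot |> randomTurn 30.0   // reads as "with robot, do a random 30-degree turn"
```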

I wrapped the code in Run to solve another nagging issue. In previous versions, when the battle was over, BetaFish would often throw an exception or the battle simulator would have to terminate it. The try ... with _ -> lastEvent <- EndGame facility allowed me to catch the DeathException I often receive at the end of battle.
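
The mechanism can be sketched like this; DeathException and the string states below are stand-ins for the real types:

```fsharp
// Stand-in for the exception thrown when the robot dies at end of battle.
exception DeathException

let mutable lastEvent = "Search"

// Any exception escaping the battle loop ends the game cleanly instead of
// crashing the robot.
let run battleLoop =
    try
        battleLoop ()
    with _ -> lastEvent <- "EndGame"

run (fun () -> raise DeathException)
```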

This BetaFish version, the one I’m submitting to the code kata, does far better than its predecessors. It almost beats the Fire sample robot. Fire often wins because it turns its gun and its body separately. If we get another go-round, I will try this strategy. I took a quick stab at it, but the first attempt did very poorly.

It still has problems, don’t get me wrong. It runs like a scaredy cat if it gets fired upon a lot. It performs poorly because it moves the whole tank body instead of just the gun or the radar. And it is easily defeated by the C# sample robot, which runs fast along the border and fires toward the middle. BetaFish can’t get a lock on it fast enough.

Beyond the robot, there are certainly more F# features to explore. For example, I’m halfway through reading “Real World Functional Programming” by Petricek and Skeet, and a future chapter teases at a better way to handle event registration and waiting on events to fire. I can’t wait!

I hope this article series has been of interest. Drop me a line if you found it interesting and would like to see more.

You may recall me saying that the first version of this robot could be defeated by a wall. While implementing wall avoidance, it became clear that firepower didn’t represent enough state for the BetaFish robot to care about. There were a number of intermediate versions between last post’s code and the above, as I experimented with creating a class to encapsulate state.

While trying to use multiple classes, I was stymied for a time by Visual Studio. In an F# project, it wasn’t clear to me how to spread my solution over multiple source files. I couldn’t reference my other classes. I discovered that in a functional language, it’s necessary to order the source files to resolve dependencies, so that items do not depend on other items that are compiled afterward. It turned out that the classes I wanted to use came after the main class in alphabetical order. In Visual Studio, you can reorder source files in an F# project by right-clicking the source file in Solution Explorer and moving it up or down.

After a number of iterations, I discarded that secondary class in favor of a discriminated union, which I called ActionType. Conceptually, a discriminated union is kind of like a generic C# enumeration, but of course much more useful. For example, notice that the Evade and Attack elements take a float argument, whereas the others don’t. And pattern matching provides a more robust way to examine the contents of an ActionType than a C# enumeration ever would. To accomplish this in C#, you’d need a hierarchy of classes.

The Run method has also become more sophisticated, now matching against the ActionType instead of just a scalar value. Notice that I’m able to name the union type’s argument inline, such as in the case of Evade‘s bearing. This is a feature of pattern matching, not discriminated unions, as you can see with strength in the OnBulletHit event handler.
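
To illustrate both points, here is a sketch along the lines of ActionType (the case names beyond Evade and Attack are my assumptions), with the float argument named inline in each pattern:

```fsharp
// Sketch of the discriminated union: Evade and Attack carry a float
// (a bearing), while the other case carries no data.
type ActionType =
    | Evade of float
    | Attack of float
    | Search

let describeAction action =
    match action with
    | Evade bearing  -> sprintf "evading, bearing %.1f" bearing   // bearing is named right in the pattern
    | Attack bearing -> sprintf "attacking, bearing %.1f" bearing
    | Search         -> "searching"
```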

While this code looks more impressive, it actually stinks in battle. If you try this robot, you’ll find that once BetaFish engages the enemy, the robot freezes in battle as soon as it’s hit, twitching until it is annihilated. Can you see why? The next article in the series contains the answer.

The group here at work chose to compete via Robocode for our next code kata, and this post is the first in a series that describes how I wrote an F# robot for that challenge.

Robocode is a programming game in which a platform is provided, including an abstract Robot battle tank class, and competitors write concrete implementations. Those robots compete in real-time on-screen. The battle simulator has decent graphics, and the platform exposes a few dozen events that robots can subscribe to in their quest to be the last robot standing. For example, when the robot sees an opponent or hits a wall, an event is fired.

Robocode itself is written in Java, but there is a library called jni4net (a bridge between Java and .NET) that allows .NET robots to take part in the fun. Since I’m learning F#, it seemed like the language to choose.

First, I visited the site and downloaded the Java platform. I then followed the instructions for creating a .NET robot, which included installing jni4net and some helper libraries. The instructions were written for C#, and since I was working in F#, there were some subtle nuances that I missed, which cost me a couple of hours:

I followed the Visual Studio debug setup instructions included, but found that if I set a breakpoint prior to hitting F5, I got a fatal exception loading the jni4net jar file. A colleague wasn’t having this problem, and we figured out that he wasn’t setting breakpoints right away. Now I use this workflow:

1. Disable my breakpoints
2. Hit F5 and let the Robocode battle simulator load
3. Enable my breakpoints
4. Start the battle

I found a post on Zamboch’s blog showing a very basic F# robot. It was enough to get me over the hump. Here’s the first version of the robot:

As you can see, it bears a lot of similarity to Zamboch’s sample robot. The new piece for me was the randomTurn method. This was the first occasion I’d had to use the pattern matching (match…with) construct.

Pattern matching is a little like regular expressions for code. Instead of writing a switch statement or a series of if/then blocks, I can use pattern matching to conditionally execute code. In the case above, I’m getting a random result between 0 and 1, and using that to decide whether to turn left or right. The F# compiler won’t allow me to specify an incomplete pattern, so I use the “match anything” (_) symbol to complete the pattern. The underscore is analogous to a default case in a switch statement.
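
The construct can be sketched like this; the strings and the 0-or-1 random draw are illustrative rather than the robot’s actual code:

```fsharp
let rng = System.Random()

// A random 0 or 1 picks the direction; the wildcard (_) completes the
// pattern so the compiler doesn't warn about an incomplete match.
let randomDirection () =
    match rng.Next(2) with
    | 0 -> "turn left"
    | _ -> "turn right"
```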

You’ll notice that I have a mutable variable in this robot, firepower. Although F# supports mutables, I wanted to get rid of it. Typically, I’d do this by having a method that returns a new BetaFish robot. However, because the Robocode battle simulator expects a single instance of the Robot, this strategy won’t work. At my present knowledge level, I’ll need at least one mutable variable.
This robot isn’t very smart. In fact, it can be thwarted by running into a wall. In the next article, I’ll add some more basic functionality.

This is a story about how I applied my CSM experience and training to help a mid-sized project get off the ground.

Beginnings

When I came onto the project, the company was struggling with agile. They had been doing software engineering for a while, but in the context of controlling hardware. This project, an item state management system, was a departure for them. They already had a product in this space which they were essentially reselling from a consulting firm. The existing product was awkward to use, its feature set was limited, and the consulting firm did not wish to re-write the product to scale.

So, the company decided to write a replacement. They chose C# and .NET as the language. “Mason”, the principal engineer on the project, was a Java programmer by trade, so they needed some help. I was hired onto the team as a senior .NET engineer who could help mature Mason’s technical capabilities in .NET, and they had already hired a mid-level .NET engineer who started a few weeks before I did. It quickly became apparent that my CSM knowledge would be just as valuable.

The team had the desire but not the field experience. Mason had written a prototype, and after a quick examination of the code, it was clear that his architecture would not support the ambition of the product.

The project manager, “Jason”, had chosen Scrum as the delivery methodology, yet he had no experience with it. He was on the right track with sprint-level work management, but he was having trouble with release-level planning. He asked me if I could help get the project back on track. The product owner, “Kathy”, already had a comprehensive set of use cases outlined. While they were not fully baked, they contained enough meat to discuss in detail what the project was trying to accomplish.

Theme Sizing

I introduced the idea of themes. Kathy had already grouped the use cases into broad categories, so it seemed natural to think of these categories as themes. Jason and the product owner went off and grouped the use cases. They grouped them more finely than I’d anticipated, putting them into a couple dozen smaller categories that I’d call epics.

I then introduced the idea of T-shirt sizing for themes. The goal was to identify which parts of the application contained the greatest effort. Some use cases carried risk: we as a team weren’t sure how we’d tackle them. Others didn’t have enough definition. Still others simply represented a lot of work.

Over the course of a couple of afternoons, the team got them sized. The challenge remained: how do you use T-shirt sizes to create something that resembles the Gantt chart the project sponsors were comfortable with?

Theme Points

It hit me: these T-shirt sizes were a scale, just not a numeric scale. I needed to quantify T-shirt sizes.

Mike Cohn has an article about relating story points to hours in which he shows story points graphed as Bell curves. So I wrote a scale like this on the whiteboard:

Onesie | Small | Medium | Large | XL

I picked an arbitrary number to represent a Medium. I wanted it to be on a scale that couldn’t be mistaken for user story points, so I chose 500.

Onesie | Small | Medium | Large | XL
500

Of course, a Medium is not exactly 500 points. These represent a broad range, so I declared that a Medium was 500±50%, or 250-750.

Jason took these ranges, along with our commentary during sizing (that theme X seemed bigger than theme Y), and assigned values to each theme.

So When Will That Be Ready Again?

By this time, we had already completed a sprint or two, enough that we had an idea that our velocity was going to be somewhere near 30. We had enough information to try to correlate these themes against a calendar. Armed with our velocity and a projection of when we’d complete the current epic’s functionality, Jason threw the theme points into a spreadsheet and started trying out scenarios.

Our current epic was rated at 800 theme points. We thought we’d be done with it in 6-7 ideal days, which means 3 developers can get through about 750 theme points in an ideal week. This became 250 points/dev/week, a velocity of sorts.

Thanks to the spreadsheet, we were immediately able to see that this calculated out to about 12 months to write the application.
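
The spreadsheet math is simple enough to sketch; the project total below is hypothetical, since the real theme-point sum isn’t in this post:

```fsharp
// Illustrative numbers only: 250 points per developer per ideal week,
// times 3 developers, gives 750 points of throughput per week.
let pointsPerDevPerWeek = 250.0
let developers = 3.0
let totalThemePoints = 39000.0   // hypothetical project total

let idealWeeks = totalThemePoints / (pointsPerDevPerWeek * developers)
```

With these made-up numbers, idealWeeks works out to 52, which is in the neighborhood of the 12 months mentioned above.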

Better yet, we were able to forecast what general area of the application we’d be working on months from now, which allowed us to project some milestones. We need to have this epic done by Halloween to be on track, that set of themes by Thanksgiving, and so on.

Of course, these numbers had a high margin of error, so we communicated that it would be more like 6-18 months to write it. We reasoned that once the team had an epic or two under its belt, Jason and I would be able to refine the conversion rate and get a more realistic idea.

What If…

The real fruits of this exercise were that we now had a few variables that we could experiment with.

For example, that velocity of 30 was measured while we were storming as a team and 2/3 of us were new to the domain. Maybe we could expect the work would get easier as we learned the domain. This would be expressed as an increased velocity. If you increase the conversion rate from 250 to 300 after a month, you gain about a month over the life of the project.

A similar gain would be realized if a 4th developer is added after a couple of months. Other projections were made about the impact of adding a QA engineer early, which the project didn’t have until much later.

Course Correction

Of course, the project sponsors wanted it faster. To them, time to market was crucial. The company saw a dwindling market for their current offerings and were looking for ways to increase their reach. Their goals were compelling — it seemed like a good opportunity. This product would do things that others don’t.

Management needed a product that was feature-compatible with its competitors in the marketplace before the venture capital expired, so we were asked to come up with a plan that might possibly work.

We were able to use the spreadsheet to estimate how much of the application we’d have to de-scope to get it done in the timeframe we wanted, and Kathy was able to identify several themes as not mission-critical for beta testing. The idea was that development would continue while the product was in beta, which I had discarded as too risky but Jason felt was viable.

Presentation

We presented the plan. It was, in the parlance of a senior company executive, a “success-oriented plan”. No one was comfortable with it, so some difficult conversations followed. The project was in a bad position.

There were two senior executives who vied for control of the project. One felt that the company should continue to sell their poor solution until the new one was ready. The other felt that the poor solution could not be maintained, should not be sold, and the new one needed to be rushed to market.

The second executive won and demanded a crushing work pace, complete with extended work hours. I knew from experience that such a pace was not sustainable and was prone to error on a product whose core had to be solid the first time. The customers in that market would not forgive a poor first showing.

I could not in good conscience proceed with the project as outlined, so I left. I don’t know exactly how the story ends. The other experienced .NET engineer on the team left for similar reasons. I have spoken to Mason, and he told me the project is very behind the schedule Jason and I laid out.

Conclusion

I think the exercise was worthwhile. Although the schedule Jason and I concocted was wildly speculative, it provided a frame of reference so that valuable conversations could take place. It got prompt and appropriate management attention to the project. The technique allowed us to do the analysis in a timeframe where its recommendations could be acted on.

Many of you reading this article have probably already formed opinions. It’s clear the project was in trouble. For me, it was that no project plan could overcome the fact that the delivery team had neither the technical experience nor the business knowledge to rapidly develop such a complex product.

Without doing the analysis we did, I don’t think the errors in the original project plan would have surfaced until much later.

As you can see, I left myself a few to-do items. I boxed the effort at three hours, which is why I went ahead and borrowed the shuffle logic from the internet.

The first hour was learning to speak F# again, especially the method signatures. I played with F# when it was in beta, so I didn’t go into the kata stone cold. However, it took me a while to troubleshoot why my methods didn’t have the signature I expected.

I spent the most time writing the pairings function. Once I had the insight of using the match operator, the algorithm came together very quickly. I decided that I wasn’t happy with it being deterministic. I found that if I provided a file that was already in an appropriate order, pairings regurgitated the list. Dissatisfied, I went about trying to shuffle the array. Not being familiar with unit testing in F#, I used the F# Interactive console. I got frustrated with trying to implement the shuffle algorithm, so I decided to study a working copy.
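
The pairings code itself isn’t shown here, but a match-based version that pairs each person with the next in the list (wrapping the last back around to the first) might look like this sketch, and it is deterministic in exactly the way described:

```fsharp
// Hypothetical sketch of a match-based pairings function: each person gives
// to the next one in the list, and the last wraps around to the first.
let pairings people =
    match people with
    | [] | [_] -> []
    | first :: _ ->
        let rec pairUp list =
            match list with
            | a :: (b :: _ as rest) -> (a, b) :: pairUp rest
            | [last] -> [ (last, first) ]
            | [] -> []
        pairUp people
```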

The hard part came in integrating it with my code. I resorted to using a while loop, which I know from reading about functional programming means that I’m not thinking the problem through. I believe I couldn’t convert this into a recursive function because shuffle returns unit (that is, it does the work in place). If it returned an array, I bet I’d have more luck. I’ll have to experiment further.

My attempts to rewrite the loop badly broke the program. I used comments as a poor man’s source control, and I got lucky. I really should have checked the first working version into my local Subversion repository.

Some time passed. I managed to look at the do-while problem again, and after about 90 minutes, I solved both it and another issue that bugged me.

The do-while loop was solved by making the shuffling function return the shuffled contents instead of unit (void in C# parlance).

The second change was to get away from arrays. Arrays in F# are mutable, and I wanted to use immutable data structures as much as possible.

The shuffler was the main barrier. Once I hit upon the idea to associate each person with external data, in this case a random weighting, the rest fell into place. You may also notice some language usage improvements, such as using a.Head instead of a.[0].
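
The decorate-sort-undecorate idea can be sketched like this; it is an illustration of the technique, not the exact code:

```fsharp
// Tag each person with a random weighting, sort by the weighting, then strip
// the tags. Unlike the original in-place array version, this returns a new
// list instead of unit, which is what made the recursion workable.
let shuffle (rng: System.Random) people =
    people
    |> List.map (fun person -> (rng.Next(), person))
    |> List.sortBy fst
    |> List.map snd
```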

Here’s another solution to the Secret Santa code kata, this time in my “native” programming language of C#. This is the second solution I’ve written this week to this problem. Code katas are about experimenting with different approaches to simple but not trivial problems.

My first solution, written in PowerShell, relied on selecting random pairs of people and using a “hill-climbing” algorithm to avoid getting stuck. This time, I gave myself the constraint that the solution had to be deterministic — no randomness. I had been toying with the idea of using an Abelian ring. A couple of common examples are the hours on a clock or the modulus (remainder) operator in math. But I couldn’t decide how I’d handle iterating through the members of that ring without duplicates. I determined that I’d need to re-sort the list.

I wrote this solution using test-driven development (TDD), meaning I wrote my tests before implementation — I won’t show all of those tests today. As it turned out, I didn’t need to code a Ring class. I find when I do TDD, I often don’t end up writing half the code I thought I’d need!

Unlike my PowerShell solution which just used strings, I knew I wanted to create a data object to store the incoming rows of data in, which I named the Participant class:

These are the first two tests I wrote. I decided the simplest operation I needed to resort a list was the ability to swap two items in that list. Once I had that working, I picked the next piece, the ability to decide if two things can adjoin (“to be close to or in contact with”), and I proceeded from there.

Notice that the ListResorter<T> constructor takes a list to operate against, and a function that compares two instances of type T and determines if they can be adjoining in the list. In the case of my unit tests, I used (x,y) => x % 2 != y % 2, which is a fancy way of saying that two odds or two evens can’t be next to each other in the list. I wanted to use a different type (int) than I’d be using in my real use case to make sure I didn’t make any assumptions about the nature of type T. This comparison was the first one for two numbers that came to mind.

Each time I needed functionality out of the ListResorter, I wrote a test. I watched it fail, then I made all the tests pass. If I saw opportunities to refactor, I took them whenever all my tests were green (passing). By the time I was done, I had about a dozen tests and this class:

This class has two public methods, GetNextIndex() and Resort(). That Abelian ring idea still lives in the GetNextIndex() method, which says that the next member after the last in the list is the first, making the list circular. Resort() does what you would expect.
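
The circular-index trick is just modular arithmetic; sketched here in F# to match the earlier posts in this series (the real method is C#):

```fsharp
// The element after the last wraps back around to the first.
let getNextIndex (items: 'a list) index =
    (index + 1) % List.length items
```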

The other methods in the class are marked internal, so that my TDD tests can access them via the [Assembly:InternalsVisibleTo()] attribute in the Assembly.cs code file. After design is done, I would consider rewriting the test methods that talk to internal methods so that they are exercised through the public methods. I don’t want my unit tests to break should someone decide to change the internal implementation of these methods. You can see a bit of this thinking in the ThrowIfIndexesAreInvalid() method. I pulled this method out to avoid duplication of code during refactoring, where I already had unit tests in place and thus didn’t need to write new ones.

I met my goal: the output is the same every time given the same input file.

I time-boxed the effort at two hours and I came close to reaching my limit. I initially began by writing an iterator, but abandoned it when I realized that I only wanted to traverse the collection once.

There is more that could be done to this program, of course. For example, I’m not happy about DisplayParticipants() using GetNextIndex(). The only reason that DisplayParticipants() needs a ListResorter is to use its ring-like GetNextIndex() method. That functionality should have been extracted into its own class.