Lisp at Work: Evidence Based Scheduling

When I heard about this I was immediately excited: Evidence Based Scheduling is a technique that sheds light on what’s going on with missed target dates and helps identify the true cost of scope creep. I’d thought about Monte Carlo methods before, but most of the time I used them as a sort of mental substitute for doing real mathematics– at best, I thought, they could be used to help check your work. But this tool from Joel Spolsky was a good example where Monte Carlo was actually the right answer! Very neat. Not something you can explain to regular people in five minutes, but if you can get the to-do list broken down into small enough chunks (and get people to track their time) then you have the potential to do some interesting forecasting.

Implementing a quick and dirty version of this system would be something I could do in just a few hours back in my old programming language. How long would it take to thrash it out with Common Lisp? If you’d asked me before I started this one, I would have said it should take twice as long. I’d have been thinking it would take me three times as long, but I would have said, two. (Typical developer….) Let’s take a look at how I solved the problem with my current Lisp skill set and see if we discover anything that will help us in our future Lisp hacking efforts.

The first thing I had to set up was some basic functions for working with dates. In a modern IDE-driven Blub language, this is the sort of thing a developer would just pick up from the intellisense dropdown. With Common Lisp, this is going to take a moment reading through The Common Lisp Cookbook.
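The originals aren’t reproduced here, but they were thin wrappers over the standard universal-time functions. A minimal sketch – only `encode-universal-time` and `decode-universal-time` are standard; `mmddyy` matches the call in the transcript further down, while `day-of-week` and `add-days` are my own names:

```lisp
;; Sketch of the date helpers, built on Common Lisp's universal times
;; (seconds since 1900). Only ENCODE-UNIVERSAL-TIME and
;; DECODE-UNIVERSAL-TIME are standard; the wrapper names are guesses.
(defun mmddyy (month day year)
  "Turn a month/day/two-digit-year triple into a universal time."
  (encode-universal-time 0 0 0 day month (+ 2000 year)))

(defun day-of-week (universal-time)
  "0 = Monday ... 6 = Sunday, as returned by DECODE-UNIVERSAL-TIME."
  (nth-value 6 (decode-universal-time universal-time)))

(defun add-days (universal-time days)
  (+ universal-time (* days 24 60 60)))
```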

With that out of the way, I needed some way to set up my basic task and developer objects. On a whim, I decided to use macros for this just to practice. Because the macro is defining new global variables for me to store each new hash in, I can just type the name of a developer or task in at the REPL prompt and see what’s going on with it– that can be a big help in debugging or just browsing the data structure. But writing even simple macros like this can take a little more time than it would to write a similar function.
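The macros themselves aren’t shown here, but the trick they pull is that each definition expands into a defparameter bound to a fresh hash table, which is what makes the name browsable at the REPL. A guessed reconstruction – the name DEFDEVELOPER and the keyword-property syntax are my assumptions, not the original code:

```lisp
;; Guessed shape of the defining macro: each use expands into a
;; DEFPARAMETER holding a fresh hash table, so typing the name at the
;; REPL shows the object. The name and property syntax are assumptions.
(defmacro defdeveloper (name &rest properties)
  (let ((hash (gensym "HASH")))
    `(defparameter ,name
       (let ((,hash (make-hash-table)))
         ,@(loop for (key value) on properties by #'cddr
                 collect `(setf (gethash ,key ,hash) ,value))
         ,hash))))

;; Usage: (defdeveloper bob :hours-per-day 6 :ratings nil)
;; after which typing BOB at the REPL returns the hash table.
```

A deftask macro would follow the same pattern with task-specific properties.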

With some test data set up with my macros, I can now use my new-found functional programming powers to operate on it. Here’s a function that accepts a predicate function as an argument:
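That function isn’t reproduced here either; my best reconstruction is a day-walker that collects the days in a range satisfying the predicate (the name LOOP-DAYS is mine):

```lisp
;; Reconstruction of the higher-order helper: walk a date range one day
;; (86400 seconds) at a time and collect the universal times for which
;; PREDICATE returns true. The name LOOP-DAYS is my own guess.
(defun loop-days (start end predicate)
  (loop for day = start then (+ day (* 24 60 60))
        while (<= day end)
        when (funcall predicate day)
          collect day))

;; e.g. (loop-days start end #'weekday-p) would collect the weekdays,
;; given some WEEKDAY-P predicate.
```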

With it, I could now write functions to determine the total number of working hours that a developer can expect to work during a given time period– taking into account half-days, days off, and weekends:
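Those functions boiled down to something like the following sketch, where the keyword arguments and the 8-hour default day are my assumptions:

```lisp
;; Sketch of the available-hours calculation: weekdays count for a full
;; day, listed days off count for nothing, and half-days count for
;; half. The argument names and 8-hour default are assumptions.
(defun weekday-p (universal-time)
  ;; DECODE-UNIVERSAL-TIME's seventh value is 0=Monday ... 6=Sunday.
  (< (nth-value 6 (decode-universal-time universal-time)) 5))

(defun working-hours (start end &key days-off half-days (hours-per-day 8))
  (loop for day = start then (+ day (* 24 60 60))
        while (<= day end)
        when (and (weekday-p day) (not (member day days-off)))
          sum (if (member day half-days)
                  (/ hours-per-day 2)
                  hours-per-day)))
```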

The core idea of Evidence Based Scheduling is to get a list of values that represent how accurate a developer’s estimates are. The cool thing about it is that if random distractions come up, a developer can just charge the time of such interruptions against whatever task he’s been working on. We’ll know how often such events occur, on average, and can use that information to determine our chances of hitting a specific release date. Here we set the ratings for all of the developers and then check out the results for our simplistic test data:
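The ratings themselves are just one velocity per completed task – estimated hours divided by actual hours. Representing a finished task as an (estimate . actual) pair is a simplification of mine:

```lisp
;; The "evidence": one velocity (estimate / actual) per completed task.
;; Representing a finished task as an (ESTIMATE . ACTUAL) pair is a
;; simplification for this sketch.
(defun velocities (completed-tasks)
  (mapcar (lambda (task) (/ (car task) (cdr task)))
          completed-tasks))

;; A developer who estimated 4 hours on a task that took 8 gets a
;; velocity of 1/2 for it:
;; (velocities '((4 . 8) (6 . 6) (10 . 5))) => (1/2 1 2)
```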

Now, using that “evidence,” we’ll run 100 simulations. Each time, we’ll cycle through the tasks on our to-do list and pull a random value from the associated developer’s rating list. Using that as a factor, we can get a guess as to how long each task will take in the simulation. Simply track the number of simulations that finish before the time is up to get a percentage chance of finishing the project on time:
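The real monte-carlo took the date range shown in the transcript below; to keep this sketch self-contained I pass in the available hours directly and represent each task as an (estimate . velocity-list) pair – both simplifications of my own:

```lisp
;; Sketch of the simulation. Each trial divides every task's estimate
;; by a velocity drawn at random from its developer's history, totals
;; the hours, and counts the trial a success if the total fits within
;; the available hours. Taking hours rather than a date range is a
;; simplification for this sketch.
(defun simulate-once (tasks)
  (loop for (estimate . velocities) in tasks
        sum (/ estimate (nth (random (length velocities)) velocities))))

(defun monte-carlo (tasks available-hours &optional (trials 100))
  (let ((successes (loop repeat trials
                         count (<= (simulate-once tasks)
                                   available-hours))))
    (format t "Chance of success: ~a~%"
            (round (* 100 successes) trials))))
```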
CL-USER> (monte-carlo (mmddyy 12 1 7) (mmddyy 12 31 7))
Chance of success: 34

(Note that the Common Lisp: The Language index has typically been the quickest way to find the names of functions I need to know. ANSI Common Lisp sometimes works as the next best thing to an O’Reilly book, but in both cases I don’t always get enough information to just take a built-in function and apply it correctly.)

So, how did the project go? Time spent working in Emacs seems to have been pretty rewarding. Even with the learning curve, I could get things done in about the same time as I would with other techniques. That was a real win: I was getting my work done and learning valuable skills at the same time. Tasks that would otherwise have been tedious were injected with some interesting intellectual challenges. But this project…? I’d said it would probably take me twice as long as in my old language… but the factor for working it through in Lisp was more like 5 or 6 times as long.

Where did the time go? I did write a few functions that would probably have been built into other languages’ libraries. To get used to playing with Lisp date numbers, I wasted time writing functions that I wouldn’t end up using. I wasted time writing some cutesy macros when more straightforward functions would have done the job. I built a new data structure when one that I’d already written would probably have been sufficient for the task and allowed me to think at a higher level of abstraction. The syntax for talking to hash tables got unnecessarily cumbersome– I should probably have used a reader macro to make that more expressive and just stick with it from now on, even though my code won’t be “vanilla” Lisp anymore as a result.
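For the record, the kind of reader macro I have in mind (my own sketch, not code from the project) would let {developer :key} read as a gethash form:

```lisp
;; One possible reader macro for hash access: {bob :rate} reads as
;; (gethash :rate bob), which also works as a SETF place. A sketch of
;; the idea, not code from the project.
(set-macro-character #\{
  (lambda (stream char)
    (declare (ignore char))
    (destructuring-bind (table key) (read-delimited-list #\} stream t)
      `(gethash ,key ,table))))

;; Make } a terminating delimiter, like the standard ).
(set-macro-character #\} (get-macro-character #\)))
```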

Two other issues were probably significant as well. Programming in a clean functional style requires thought. And the closer I got to finishing the problem, the harder I had to think and the more I had to keep in my head at once. Somehow, I don’t remember having to think so hard while writing mediocre database code. It may be that I’m still getting used to new ideas and tools, but at each step of working on the problem I worry about whether or not I’m doing things the “right way.” And because of the inherent power of Lisp, I can take steps to address any perceived deficiency. You’re not going to let that power just sit there ignored! Even after solving the problem, I still want to spend an additional chunk of time equivalent to what it took just to get to this point to really sit down and find the “right answer.” That kind of effort is next to worthless in most corporate shops where it’s hard to justify unit tests or even basic refactoring. No wonder Lisp doesn’t win in the “real world.”

I was much worse at guessing how long it would take to do something in Lisp than I’d imagined I’d be. Working SICP problems is far from the kind of problem solving you typically use in the “real world” when you just need to get something done. On the other hand, I’ve got a small wish-list of things I want that would make me a faster Lisp hacker. (I want, for instance, something for working magic with relational data.) If I could get the multiplier from 6x down to 2x, I could justify using it more often even when I’m under pressure.

Anyways, this code file shows what I wrote for this. If you’re trying to thrash something out, you might find something useful there to help you jump start a quick-and-dirty skunkworks project. There’s a lot of room for improvement there, so as you read your Lisp texts you might keep your eye out for tricks that could have been used to cut the program length in half or make it more expressive…. I’ll do the same and post a revised version if I get around to it…. (If you’re stuck coming up with ideas for Common Lisp projects to practice on, you can always just take this one and try to add a few more features to it– that might be easier than opening up a blank file to stare at. As a bonus, you get something you can actually use at your day job to manage projects.)

Sorry, this is slightly offtopic for this post – it’s about something we talked about earlier. A while ago I proposed that if you have higher-order functions you might be able to live without macros and thus use a more mainstream programming language like Python. I recently redid this: http://www.gigamonkeys.com/book/practical-building-a-unit-test-framework.html
in Python just to see how it feels, and it was easy and elegant; the result was something like:

check(
    (equals, 3, add, 1, 2),
    (equals, 1, subtract, 4, 3),
)

I was delighted – but what I realized is that it’s an almost direct application of Greenspun’s Tenth Rule: it’s nothing but the first step towards writing a Lisp interpreter. However, that does not necessarily mean it’s a wrong idea – often it’s easier to keep the existing code base and change the language itself (e.g. Wasabi) than the other way around. It could be a valid approach – but I realized there is NO way around Greenspun’s Tenth Rule. I can do it, and it can be OK to do it, but it’s important to realize and admit that this is what is actually happening.

Right, that’s just like that PHP guy who wanted to rewrite his system in Ruby but ended up taking the analysis and the techniques back to refactor his existing system. It still had to be a “surprise” skunkworks after-hours task– nobody would say “yes” to what he wanted to do. But he got it done.

So far I’m just using macros to make the code prettier. I’m not often doing things that are impossible in other languages. That can change: I’m comfortable enough with the Lisp ethos that I think I can get a lot more out of On Lisp if I read further in it now…

I’ve dealt with the same time issues you have when writing something, but in Smalltalk. For me it was finding the right object-method combinations to use to solve a problem. There are tons of them, but only a few that can do what I want. I imagine for you it’s trying to find the right functions. I did a blog post a while back on a case study I did between C# and Smalltalk. I first formulated the problem I would solve, and I did some experiments on how to solve it. I didn’t count the time for that, the reason being that I was still learning the language itself. It was one of my first projects in it. Once I nailed down a design, then I started counting time. Interestingly, even though the C# and Smalltalk solutions came to about the same number of LOC, it took me less time to implement the Smalltalk solution. One thing I chalked it up to was that, with all the time I had spent with Smalltalk, my C# skills had become rusty. Plus, I didn’t have to wait on the compile, run, test cycle.

I imagine you’re more comfortable with Lisp the language at this point, and are getting familiar with the functions.

I know what you mean about the business environment. Unfortunately with the way projects are managed, up front costs are everything. There’s no sense of paying up front and having the investment pay for itself later. In fact it’s the same principle no matter what language you’re working in. Good design takes time. Managers get nervous if you’re not cranking out code. As I said in my most recent blog post, “The real bang for the buck is in the design,” but most managers don’t understand this. As far as they’re concerned the only product that’s worth anything is the tangible end product. If you’re spending time thinking about it, that’s more time you’re spending “not making the machine do something”, which in their minds is all they’re after. Just make the machine do something. IMO this is Industrial Age thinking. A lot of companies have not truly entered the Information Age in the way they think.

I’ve had discussions with people about the economics of photovoltaic solar panels from time to time. I’m not all gung ho about them, but the economics are the same. You make an initial up front investment which is a LOT more than what you’re paying for electricity now, but the idea is they pay for themselves and then some over their lifespan, which I think is about 20-25 years. In the end you save money over conventional electricity, but you have to wait for that ROI. It’s difficult to get people, except for the environmentally conscious, to think in these terms. There are many people for whom the conventional model’s economics just work better. They can afford a monthly electricity bill. They can’t afford tens of thousands of dollars up front for a solar panel array.

But the code IS the design – what you call design in this context is perhaps a higher-level design. Actually, I think that’s the real point of OOP – that the higher-level design can go right into interfaces and abstract classes, so the design drives the later, more detailed coding.

There is clearly something wrong with it, as it didn’t bring the expected results – but I’m not exactly sure what. Perhaps it’s the top-down approach – it makes deep design changes hard. Perhaps the bottom-up approach is better. But with the bottom-up approach you don’t really design up front – rather, you write the utility functions and then think about how to combine them into a design.