This blog is about programming in general and some other topics here and there. Currently the focus is on Agile practices such as CI, BDD/TDD and other tools and practices that help companies turn out high-quality software fast.

Friday, November 28, 2008

While I did not formally attend QCon, I was invited to a couple of presentations by Greg Young. I saw some familiar faces there like Martin Fowler, Matthew Podwysocki, Jeff Brown, Jeremy Miller, Rod Paddock and others.

Eric Evans was very happy to see the Domain Driven Design track address some scalability and performance issues. Greg's (at first glance) militant emphasis on Command/Query Separation gave a clear and simple way to avoid scaling problems while concentrating on designing and refactoring toward a strong core domain. One key takeaway was that our applications usually serve roughly 10X more read operations than writes. That distinction is haphazardly represented in most designs we see today - which is why there is so much money to be made helping organizations scale up.

The one point I got from Greg earlier this year was to look at the business you are trying to automate with an IT-free mind. That is, how would this be done (or how was it done long ago) with no computers - just paper, pencils and people? The key information that comes out of that exercise is the SLAs. Does that sales report really need up-to-the-millisecond information?

Greg has a link on his blog to the same presentation. I feel this one was more polished and more pleasing to the eye, incorporating some of David Laribee's use of images instead of words. I'm hoping the videos of the presentations get published soon. InfoQ, from what I hear, likes to spread the release of videos out over time to keep the site flowing with new material on a consistent basis.

On Saturday, Eric was kind enough to meet up with a handful of us and talk about DDD and life in general. He also toured around with us for a little bit. We saw some great views of San Francisco and a bit of the ocean shore.

Tuesday, September 16, 2008

I've been very busy for the last 3 months or so, but now can start to blog again.

A few things have changed. One of them is source control. Git is working out really well, and I highly recommend people drop SVN in favour of it. You can get an overview and tutorial from many places, so I won't bother repeating any of that. For folks developing in a Windows environment, I can help a bit with some specific issues you might run into with the switch.

1. Upon installing msysGit, set the AutoCRLF option to false before you do anything, unless you are writing a multi-platform app. If you don't, you run the risk of having your CRLFs converted to LFs in the repository, after which Git will report file changes where there are none.
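If you missed the installer option, the setting lives in your global Git config. A minimal sketch of the relevant entry:

```
[core]
    autocrlf = false
```

Equivalently, run `git config --global core.autocrlf false` from the command line. Note this only affects files added after the change; anything already committed with converted line endings stays that way.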

2. Stop all that typing (unless you want to use the GUI exclusively via git gui):

If you want to configure bash on your machine, you could make aliases at that level - such as gci for "git commit".
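Alternatively, Git itself can hold the short forms, so they work in any shell, including cmd.exe. As a sketch, the alias names here (ci, co, st, br) are just suggestions, not anything Git defines for you:

```
[alias]
    ci = commit
    co = checkout
    st = status
    br = branch
```

Added to your global config (or via `git config --global alias.ci commit`), `git ci -m "message"` then behaves exactly like `git commit -m "message"`.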

3. Unless your SVN repo followed a strict trunk/branches/tags structure, don't bother with the standard-layout option when cloning SVN with git svn; it leads to more headaches in the end. It's a long-running process, so get it right the first time. Use --follow-parent to track moves, copies and renames.
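A minimal sketch of such a clone; the repo URL is a placeholder, and --stdlayout is deliberately omitted since the trunk/branches/tags convention wasn't followed:

```
git svn clone --follow-parent http://svn.example.com/repo my-repo
```

The --follow-parent flag tells git svn to walk back through SVN copies and renames so the imported history stays connected instead of starting fresh at each move.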

4. Get hosting. Unfuddle is great as a free host. Don't be scared off by the 200MB limit on the repository; Git is very efficient and will take about 1/30th of the space an SVN repo took. Unfuddle will email you when there are commits, and it parses commit messages as well, so you can close an issue (tracked in Unfuddle) with something like "closes #435" - a commit with that message will close issue 435. For $9/month you get more users, a 500MB limit and file attachments. The file attachments are a waste; I use www.drop.io to host files.

5. Set up your remote branches so you can simply "git ps" and "git pl" to send commits to and receive commits from the remote repo. If you have a remote repo for the remote team over in India, you could set up something like this in the .git/config file for an "india" remote:
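As a hedged sketch of what that config could look like - the URL and branch names here are placeholders, and the alias names "ps" and "pl" are just the short forms mentioned above:

```
[remote "india"]
    url = git@unfuddle.example.com:project.git
    fetch = +refs/heads/*:refs/remotes/india/*

[alias]
    ps = push india master
    pl = pull india master
```

With that in place, `git ps` pushes your commits to the india remote and `git pl` fetches and merges theirs.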

Tuesday, August 19, 2008

and seven seconds later on another thread: lock (dictionary) { if (dictionary.Contains("bum")) C.Out("defined"); }

... Contains returned false!

FTW??

... or was it IIS.

*** UPDATE Sept 16 ***

It was IIS. After adding tons of logging and tracking thread IDs, app domain IDs and process IDs, it became apparent that IIS would load more than one app domain or process for the app.

The long-term solution was to house the app in a Windows service that communicated over WCF. It was a quick refactor, and it opens the door to using WCF to push to clients. The new question is: does WCF hold threads? Or can you have 100,000 seldom-active clients for, say, a chat application?

Today at work the dreaded words were uttered: "Well, if it's not working by the end of the day, let's just go for the pull-every-second solution."

After moving all of the long-running processes to a pub/sub model, that was not pleasant to hear. I wasn't about to give up on that much energy poured into the current (and best) approach.

After adding tons of logging and hammering the server to see where the leaks sprang up, I cornered the culprit. If you are pulling from MSMQ ad nauseam in a multithreaded environment, you need to protect yourself from the thread-unsafe "Send" and "Receive" methods on your MSMQ object - usually by creating the object for just long enough to perform one operation.
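A minimal sketch of that per-operation pattern, assuming System.Messaging and a placeholder private queue path (the class and queue names here are illustrative, not from the original code):

```csharp
using System;
using System.Messaging;

// Send/Receive are not thread safe on a shared MessageQueue instance,
// so each operation gets its own short-lived queue object.
public static class SafeQueue
{
    private const string Path = @".\private$\orders";

    public static void Send(object body)
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Send(body);
        }
    }

    public static Message Receive(TimeSpan timeout)
    {
        // Throws MessageQueueException if the timeout elapses
        // with nothing in the queue.
        using (var queue = new MessageQueue(Path))
        {
            return queue.Receive(timeout);
        }
    }
}
```

The trade-off is the construction/disposal cost on every call, which is exactly what becomes unnecessary when only one thread ever touches the queue.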

The place where this was being done was not a multithreaded scenario, so I got rid of the creation and disposal of the MSMQ object between receives.

That did it.

The other lesson here is to be careful when you do have a multithreaded app pulling from MSMQ: you may see messages go missing. I know there are other MSMQ features that guarantee delivery, but I expect this integrity out of the box.

Tuesday, August 12, 2008

Just like we have issues mixing command and query when writing software, we run into similar issues when it comes to project management.

Say you have a project manager who is very technical. You would think you could do no better. But alas, you will have bottlenecked his potential by requiring him to service the "queries" he has to provide for management (status reports and the like).

In the opposite corner, you can have a non-technical PM giving "commands" to the developers without the proper knowledge of the consequences of his actions.

Even in the well-balanced scenario, you can overload a PM with a week or more's worth of "queries"... or ask for too much direction.

So, can you scale your PM? If the role is clearly defined and we have a separation of command and query, yes you can.

Monday, April 28, 2008

I was disappointed at how little $$ Microsoft was throwing at something as good as Spec#. Considering the millions that have been flushed down the toilet trying to make drag-and-drop do what it was not meant to do, it's a crying shame. Hopefully the tide is slowly turning and great endeavors like this will be supported properly by Microsoft in the future.

Sunday, April 27, 2008

At ALT.NET there was a good talk about the merits of graphical DSLs. BizTalk and SSIS come to mind when talking about this. Say we have some data work to do and we do it in SSIS. Once we have constructed our migration or data manipulation, we can save the project.

What we have in fact saved is the graphical representation of what we made. Once this solution is checked into our source repository, it is stored in a non-deterministic fashion: if we modify our solution and commit our changes again, there is no guarantee that everything we did not change will be in the same place.

To see this directly, you can diff the two versions. Since there is no way to visualize the differences, that information tells you nothing. You are forced to pore over the details of each property to see what has changed. The best hope is that the person who made the modifications left some good comments to go on.

So it seems like textual DSLs are the way to go. Once we have changed what we want, the diff between two versions will be very easy to analyze. This is good, but I believe there is a point in between. This is a hybrid solution that will use a textual DSL and a visualization tool to make a graph of what the text describes.

This can work the way Visio makes a class diagram for you. If you use the reverse-engineering tool, it will place classes wherever its algorithm thinks is best. That, of course, will not be ideal most of the time, and you will want to move things around so the information is better conveyed.

These adjustments are what can be cast aside into a secondary file. One would not care what changes that file went through in the repository; the real information is still stored in the textual representation. Hopefully, more attention to the preference file (storing changes only in an additive way) can make it worth versioning too.

A quick analogy can be drawn up to the .suo files that Visual Studio creates. Your solution has the same projects and settings. But the windows that you had open and a number of other preferences are stored in the suo file. These generally do not make it to the source repository.

If there were a way to use a textual DSL to describe your SSIS work, a lot of responsible programmers would be happy. Maintaining an SSIS project - or any other drag-and-drop-ridden solution - would be far easier.

Tuesday, April 22, 2008

I'm finally getting the nerve to blog again. This is probably due to how immersed in work I have been lately. The work I'm doing now is not the same old contract stuff - it's new and therefore exciting. Part of the challenge is seeing how Agile, Scrum and XP play a role in DDD, Messaging Architectures etc.

The tipping point was ALT.NET Seattle. My head is now sufficiently filled to nearly overflowing with information. Not putting "pen to paper" would be a great loss to me. If this helps others along the way then great! But the company at ALT.NET Seattle eclipses most of what I will say. I'll have to update my blog roll soon.

The Conference

The format was open spaces, which was a new experience for me, and I was very pleased with it. The participation is something that is missing from the traditional format. There was a lot of video taken; David Laribee posted some, as did Jeffrey Palermo.

I had a good chat with Scott Hanselman and Martin Fowler in between sessions about being a polyglot programmer. Scott made the point that "if you want to learn French you start reading books written in French" to draw a parallel to learning a new programming language. Unfortunately, for some things in the ALT.NET space - such as NHibernate and the Castle implementation of Active Record, which does your XML mapping via attributes - the real analogy is more along the lines of "if you want to learn Rumantsch you should read books written in Rumantsch". I'm not sure you'll find one in your library. The full discussion is on David Laribee's blog.

At other points I discussed using Git instead of SVN, the intricacies of moving to a message-based architecture, new things coming out of Microsoft Research like Spec#, and much more. I'll leave these for posts in the very near future. There were many really smart people there and I was very happy to be a part of the event.

A special thanks goes out to Scott Bellware for pushing this forward from day one of the idea of an "ALT.NET".