Archive for November, 2010

At this week’s Virtual Brown Bag, we tackled mocks, stubs, Rhino Mocks, and related things, since that was one of the most voted suggestions from our fellow VBBers. Mark Wilkinson had mentioned he could do a quick intro to Rhino Mocks since he had just presented on that topic, and that was great: the more people sharing, the better. Make sure to check out the notes and video.

I also wrote an article on this topic for CODE Magazine a couple of issues ago, and here’s a video of a presentation on the same topic that I gave at Houston TechFest last year.

Please keep your suggestions coming, and vote for the ones that are out there. We won’t have a Virtual Brown Bag this Thursday, since it’s Thanksgiving and most people are going to be taking the day off, but we’ll be back next week! Happy “oversized bird celebrations” to all of you.

I first saw Greg presenting about CQRS at DevTeach one or two years ago. Then I saw Udi’s presentation at TechEd Europe last year. I’ve also listened to the interview with Greg on Herding Code (twice), and I recommend you do too!

In fact, I recommend you check out what those guys have to say in the resources I’ve linked to in this post, because they’re a lot more knowledgeable on these things than I am. In this post I’ll just babble about things I have been thinking around this area.

It’s 2010 already… Wake up!

We are still doing several things in software the way we did them 20 years ago. That has got to stop! The other day, one of my Twitter buddies said something along these lines: “…trying to explain to my 8-year-old son what a floppy disk is as I tell him which one is the Save button…”.

Why do we still use floppy disks as the icon for Save buttons? I haven’t personally seen a floppy disk in years. My 10-year-old daughter has never seen one. Many non-computer-savvy people may have seen one years ago without even knowing what it was.

I’ve noticed that the Drawing Pad app on the iPad uses a USB plug image. Is that better?

Well, I guess so; most people using computers nowadays are likely to have used “pen drives” to copy data around.

On the flip side, I’ve noticed that GMail doesn’t even have a “save” button when a “contact” is being edited. I suppose that makes sense: why do I have to explicitly push a “save” button? Any app could just save stuff in the background as I work. If there’s any problem with the data (such as a business rule violation), then it could just tell me. Evernote also works like that: no need to push any button to save anything, I’ve never lost any data, and I can always find whatever I’m looking for.

Most Applications Don’t Really Represent the “Real World”

Software applications are supposed to automate and make easier things that we do in the so-called “Real World”. Most applications fail miserably at that.

Let’s think about what we do in real life when we’re “creating data” with pen and paper. For instance, maybe we’re filling out an expense report. Once we’re done, we don’t have an action for “saving” that data. The form may just sit there at our desk, until we take it to the person who’s responsible for collecting those reports.

Isn’t that similar to “saving the data”? I don’t think so.

I think something like “submit expenses for reimbursement” would be a lot more appropriate (provided that’s what the actual business workflow calls for when paying out reimbursements). Or maybe, if you’re unsure as to whether you’ve provided the correct data in the form, you may want to “submit expenses for review”, which may pipe your data through a different workflow.

In real life, do people validate your forms as you’re filling them out, or when you submit them to the person responsible for collecting them? It’s usually the latter. The form may have initial constraints for validation, though (“if you responded yes to question X, please provide some extra Y information”), in order to optimize the flow.

Once the form has been submitted, it may undergo different levels of validation, depending on how the data is going to be used. Again, it depends on the workflow.

So, bottom line regarding “save” is: applications should allow the user to create data, and decide what to do with it. The user may decide “hey, I’m not done yet… just hold on to it for me here, I’ll come back to it later with more details, and then I’ll send it to whoever or wherever I should when appropriate”.

Oh, But This Grid Can Handle Millions of Rows!!

I see people getting all excited about how this or that 3rd-party grid is capable of showing millions of rows. Why the heck would a user need that many rows in a grid?! The user would then have to scroll through all that data in order to find what he’s looking for.

Then the developer thinks “yeah, but this grid makes it very easy to set visual alerts so the user can spot the rows he’s interested in”. Why not tailor the app to allow the user to ask questions and get answers instead? In other words, when a user goes to a screen in an app, the intent isn’t something like “I want to see as many rows as possible”. Instead, the user is often looking for specific data in order to make some decisions.

Maybe the user has the following task at hand: “hey, I need to send out a thank-you card to all customers whose birthday is within the next month, but only to those customers who don’t owe us money”. The app should simply offer a “send out thank you card” option. No need for grids with millions of rows.

“So what about editable grids?”, one might ask. After all, those grids are all about allowing the user to perform blazing-fast data entry, right? Hmm, nope. Hardly ever have I seen a situation where the user goes cell by cell, row by row, changing data as she tabs her way through the entire grid. When a user wants to edit data, she normally has a very specific set of data that needs to be edited: “several customers have changed their number of dependents”, “the area code has changed from 123 to 987 on phone numbers for all customers in zip code 98765”. Why not allow the user to express her intent and have the app produce the UI best suited for fast data entry based on that intent?

Grids have traditionally sucked at allowing for “in-place” edit. It usually takes quite a bit of work to get the hosted controls (DatePickers, DropDowns, etc.) to work well. Why go through all that pain? Why not just give the user a UI tailored to each task that needs to be accomplished?

Have You Ever Been Deleted?

We have taught users to “delete” stuff, and now they want to delete everything, in ways that make no sense at all compared to the real world. Users want to “delete employee”, “delete product”, etc.

Have you ever had an employer walk up to you and say “you have been deleted!”? Probably not. Employees are “fired”, “transferred to another branch of the company”, etc. If somebody simply “deletes” an employee from the system, nobody other than that user (who may end up forgetting) will know why the employee no longer exists in the system. Udi Dahan has a great post about this. You should read it.

Earth: from Flat, to Round, to Tabular

The client says: “My customers come to my web store and place orders for some of our products”. Several developers barely hear the end of that sentence and are already thinking “yeah, I’ll have a customers table, with a primary key, with fields to store first name and last name, and an orders table with a foreign key that links an order back to a customer, and it’ll also have an order items table, which in turn has a foreign key linking it back to the orders table. The order items table also has a foreign key that links an item to a row in my products table….”.

Most developers tend to design the application with the database in mind, as if the world were tabular, and that design bleeds through the objects in the application, all the way up to the UI. Creating UIs modeled after a database… that’s why users think of “deleting” an employee, as opposed to “firing” one. We need to stop doing that.

Think of another example: the “employee screen” has a “salary” field. The user may go in there and change it. Easy. But what about the intent? Why does the user want to change the salary? There are probably several reasons for that:

maybe the employee has been promoted;

maybe he is now working only part-time;

maybe the salary was entered incorrectly originally and the user needs to fix it.

Whatever the case might be, there could be lots of things that should happen when the value changes:

maybe an email should be sent to both the employee and his manager;

maybe the payroll system needs to be updated;

maybe the accountant has to be informed.

These things shouldn’t be conveyed to the user simply by providing her a TextBox with the employee’s salary on it.
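To make the idea concrete, here’s a small sketch in TypeScript (the command names and follow-up actions here are hypothetical): each reason for a salary change becomes its own command, and a handler decides what should happen as a result.

```typescript
// Instead of a bare "salary" textbox, each business reason is its own command.
type SalaryChange =
  | { kind: "Promotion"; employeeId: string; newSalary: number }
  | { kind: "MovedToPartTime"; employeeId: string; newSalary: number }
  | { kind: "DataEntryCorrection"; employeeId: string; newSalary: number };

// Different reasons trigger different follow-up actions.
function handleSalaryChange(change: SalaryChange): string[] {
  const actions = [`update salary of ${change.employeeId} to ${change.newSalary}`];
  if (change.kind === "Promotion") {
    actions.push("email employee and manager", "notify payroll");
  } else if (change.kind === "MovedToPartTime") {
    actions.push("notify payroll", "inform accountant");
  } else {
    // A plain data-entry fix might only need the payroll system updated.
    actions.push("notify payroll");
  }
  return actions;
}
```

The UI can then offer “promote employee” or “correct salary” instead of a naked salary field, and the emails, payroll updates, and accounting notifications hang off the command, not off a TextBox.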

Another typical case of a UI driven by the database design is any screen with lots of “enabled/disabled” logic (“if this checkbox is checked, and that date is greater than X, and the sun and the moon… blah blah blah, then that control should be disabled”).

Several years ago I worked on an application that had a form with well over 50 controls, several of them checkboxes, and some crazy enabled/disabled logic. Every single developer on the team had worked on that screen, and everybody hated doing so, since even the smallest change could break behavior. The funny thing is that the controls were grouped together in the UI, conveying some sort of workflow. So why did we stick everything into a single form, instead of creating simpler, task-oriented forms?

Let me know your thoughts on all of this. I’ll sure have more posts on this subject.

A month or two ago, George was telling me about 42goals.com; as the site says, it’s “a simple tool to track daily goals”. I thought I’d give it a try.

People use it to set and track simple goals such as decreasing the amount of coffee they drink, setting distances they want to run, and other things of that sort.

I’ve used it for a month now, and am planning on using it for at least another couple of months. In the first month I’ve tracked things such as how many pomodoros I can accomplish in a month, how many pushups, and some other small things I’ve been trying to get better at.

I had set my expectations for the full month based on my guess as to how much I’d get done daily. As it turns out, my estimates were way off. My best estimate turned out to be about 56% of the actual number; the worst one was about 23%. I think the two reasons for it were:

I had set the goals a little higher than what I *thought* I was accomplishing (but not tracking)…

Some weekdays I’d just forget about the goals, or just be too tired to care. Most weekends I’d totally forget about everything.

For the second month, I’ve decided to track the same goals, but this time my estimates are based on the numbers I actually accomplished in the first month, increased by about 10%. So all I need to do is a little better than last month. If I can at least get into the habit of not forgetting about the goals, hopefully my numbers will improve within a few months.

As time goes by, I’ll probably change my list of goals, adding and/or removing some. Here are some of the goals I’m thinking of adding to the list:

It’s usually not easy to find good examples that explain the Liskov Substitution Principle (LSP) to developers. But really, how hard could it be? The definition is so simple:

“What is wanted here is something like the following substitution property: If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2 then S is a subtype of T.”

Say what? I don’t know about you, but that kind of definition usually flies way over my head. Some people use the Rectangle versus Square example to explain the principle, which is a good one, but that doesn’t necessarily relate to things we normally do.

A more practical definition, from Robert C. Martin, reads: “Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.”

Recently we ran across a violation of that principle in a project. We have an interface defined like so:
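Sketched in TypeScript for illustration (the original is C#; the exact signatures are assumed), the interface looks something like this:

```typescript
// A resource that can be loaded into memory and persisted afterwards,
// in case there were changes to it.
interface IPersistedResource {
  load(): void;
  persist(): void;
}
```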

It’s a very small interface that represents resources that can be loaded in memory, and persisted afterwards in case there were changes to it.

Let’s pretend we have the following implementations of that interface:
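A sketch of those implementations (TypeScript for illustration; the bodies are elided because, as noted below, the actual implementation doesn’t matter here):

```typescript
interface IPersistedResource {
  load(): void;
  persist(): void;
}

// Stand-in implementations; the real ones would read and write
// application and user settings from some backing store.
class ApplicationSettings implements IPersistedResource {
  load(): void { /* load application-wide settings */ }
  persist(): void { /* write application-wide settings back */ }
}

class UserSettings implements IPersistedResource {
  load(): void { /* load the current user's settings */ }
  persist(): void { /* write the current user's settings back */ }
}
```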

The actual implementation of those methods doesn’t matter here; just assume that the real implementation loads and persists application and user settings.

Somewhere in the application we have some way to retrieve a list of instances of implementations of that interface, kind of like this:
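Something along these lines (a sketch; the function name is made up here):

```typescript
interface IPersistedResource { load(): void; persist(): void; }
class ApplicationSettings implements IPersistedResource { load(): void {} persist(): void {} }
class UserSettings implements IPersistedResource { load(): void {} persist(): void {} }

// Somewhere in the application: gather every persisted resource we know about.
function getPersistedResources(): IPersistedResource[] {
  return [new ApplicationSettings(), new UserSettings()];
}
```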

Some place else, we have a method that takes in a list of those objects and calls Persist on them:
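A sketch of that SaveAll method:

```typescript
interface IPersistedResource { load(): void; persist(): void; }

// Persist every resource, trusting nothing but the interface.
function saveAll(resources: IPersistedResource[]): void {
  for (const resource of resources) {
    resource.persist();
  }
}
```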

And somewhere else we may use those methods, like so:
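Putting those pieces together, the usage might look like this sketch:

```typescript
interface IPersistedResource { load(): void; persist(): void; }
class ApplicationSettings implements IPersistedResource { load(): void {} persist(): void {} }
class UserSettings implements IPersistedResource { load(): void {} persist(): void {} }

function loadAll(resources: IPersistedResource[]): void {
  for (const resource of resources) resource.load();
}

function saveAll(resources: IPersistedResource[]): void {
  for (const resource of resources) resource.persist();
}

// Load everything up front; persist everything when appropriate.
const resources: IPersistedResource[] = [new ApplicationSettings(), new UserSettings()];
loadAll(resources);
saveAll(resources);
```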

Everything works great, until a new class is added to the system in order to handle, let’s say, some “special settings”:
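A sketch of that new class; note the Persist equivalent throwing (in the C# original, a NotImplementedException):

```typescript
interface IPersistedResource { load(): void; persist(): void; }

// Read-only settings: Load works, but Persist cannot honor its contract.
class SpecialSettings implements IPersistedResource {
  load(): void { /* load the special, read-only settings */ }
  persist(): void {
    throw new Error("NotImplementedException: SpecialSettings is read-only");
  }
}
```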

It looks like the Load method does whatever stuff it’s supposed to do in order to handle loading these special settings. The Persist method, on the other hand, throws a NotImplementedException. As it turns out, those settings are meant to be read-only, therefore, the Persist method can’t really do anything.

The system is told to load the new class along with the other ones that implement that same interface:
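The registration then puts the new class right next to the well-behaved ones (a sketch; the function name is assumed):

```typescript
interface IPersistedResource { load(): void; persist(): void; }
class ApplicationSettings implements IPersistedResource { load(): void {} persist(): void {} }
class UserSettings implements IPersistedResource { load(): void {} persist(): void {} }
class SpecialSettings implements IPersistedResource {
  load(): void {}
  persist(): void { throw new Error("NotImplementedException"); }
}

// SpecialSettings now rides along with the other implementations.
function getPersistedResources(): IPersistedResource[] {
  return [new ApplicationSettings(), new UserSettings(), new SpecialSettings()];
}
```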

Now when we run the app, everything still works fine until we hit the code that tries to persist all of those loaded resources, at which point we get a big, fat “NotImplementedException”.

One (horrible) way to address this would be to change the SaveAll method:
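The (horrible) version might look like this sketch, special-casing the offending type:

```typescript
interface IPersistedResource { load(): void; persist(): void; }
class SpecialSettings implements IPersistedResource {
  load(): void {}
  persist(): void { throw new Error("NotImplementedException"); }
}

// The (horrible) workaround: check the concrete type and skip it.
function saveAll(resources: IPersistedResource[]): void {
  for (const resource of resources) {
    if (resource instanceof SpecialSettings) continue; // skip that one!
    resource.persist();
  }
}
```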

If the specific resource being processed is of type SpecialSettings, we skip that one. Brilliant! Well, maybe not. Let’s look back at a simplified definition of the Liskov Substitution Principle:

“An object should be substitutable by its base class (or interface).”

Looking at the SaveAll method, it should be clear that “SpecialSettings” is NOT substitutable by its “IPersistedResource” interface; if we call Persist on it, the app blows up, so we need to change the method to take that one problem into consideration. One could say “well, let’s change the Persist method on that class so it won’t throw an exception anymore”. Hmm, having a method on a class that, when called, won’t do what its name implies is just bad… really, really bad.

Write this down: anytime you see code that takes in some sort of base class or interface and then performs a check such as “if (someObject is SomeType)”, there’s a very good chance that it’s an LSP violation. I’ve done that, and I know you have too; let’s be honest.

Another great definition for LSP comes from a motivational poster that the folks at Los Techies put together: “If it looks like a duck, quacks like a duck, but needs batteries, you probably have the wrong abstraction.”

So what’s the fix?

The fix here is to tailor the interfaces to what each client needs (the Interface Segregation Principle, or ISP). The LoadAll method (one client of those classes) is really only concerned with the “Load” capability, whereas the SaveAll method (another client) is only concerned with the “Persist” capability. In other words, this is what those clients need:
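Sketched in TypeScript, each client method now depends only on the capability it actually uses (the segregated interfaces are declared compactly here to keep the sketch self-contained):

```typescript
interface ILoadResource { load(): void; }
interface IPersistResource { persist(): void; }

// LoadAll only needs things that can load...
function loadAll(resources: ILoadResource[]): void {
  for (const resource of resources) resource.load();
}

// ...and SaveAll only needs things that can persist.
function saveAll(resources: IPersistResource[]): void {
  for (const resource of resources) resource.persist();
}
```

A read-only type simply never shows up in the list handed to saveAll, so there’s nothing to special-case.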

The SaveAll method takes in something tailored to its needs, IPersistResource implementations, and the same goes for LoadAll, which only cares about ILoadResource implementations (in the real app, the actual instantiation of these classes happens somewhere else). This is what the granular new interfaces look like:
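A sketch of the two granular interfaces:

```typescript
// The former IPersistedResource, split into two client-specific interfaces.
interface ILoadResource {
  load(): void;
}

interface IPersistResource {
  persist(): void;
}
```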

Yup, it’s pretty much the former “IPersistedResource” split up into two separate interfaces, tailored to their clients’ needs. Both the UserSettings and ApplicationSettings classes can implement the two interfaces, whereas the SpecialSettings class would only implement ILoadResource; this way, it isn’t forced to implement interface members it can’t handle.

People often ask what the most appropriate number of members in an interface is. In the real-world example I gave here, the original interface had only two members; one could say that was small enough, but as it turns out, it wasn’t. The IPersistedResource interface was doing too much (both loading *and* persisting stuff) given the clients that use its implementers. In the end, two interfaces with a single method each fit the bill a lot better. Interfaces with a single responsibility? Yup, the Single Responsibility Principle (SRP); as with design patterns, SOLID principles sometimes go hand in hand.

Several months ago, while at Barnes & Noble with the family, I decided to pick up a big 2000-piece jigsaw puzzle (ok, it may not be that big to some people, but I’m sure I had never picked up one with more than 200 pieces or so…). I figured it’d be something we’d enjoy doing together. I’ve always appreciated the “old” Manhattan skyline, from watching so many episodes of Friends, so that’s the image I chose.

The first month or two of this “project” went very smoothly, but boy, the several months after that dragged really badly. The main problem was the sky. Over 300 pieces of the puzzle probably go in there, and they all look pretty much the same! Fortunately, the wife was determined to finish it, picked up the slack, and managed to pull it off.

Now I just need to get some puzzle glue, put it on a frame, and hang it on the wall.

I hear we may be picking up a 5000-piece one. I’ll try to make sure we don’t get one where there’s a large section with pieces that all look the same.

Conferences, Speaking, etc.

Right at the start Alan brought up the fact that PDC 2010 was going on, with live streaming and all of that, but several of us decided to join the VBB instead. The idea of learning something useful, which you can start using immediately, over watching marketing conferences, seems to appeal to some of us.

That conversation reminded me of one of the reasons why I decided to take a break from speaking: I have no interest in speaking at big conferences anymore. In the past, I was asked to speak at a couple of conferences where I was given the topic, the content, the script, etc., and in both cases I didn’t care about the things I was talking about. Several people have told me they enjoy my talks because I’m very passionate. Well, it’s kind of hard to be passionate about something I don’t care about.

Also, several conferences are only interested in topics about whatever “the latest tools or technologies” are, regardless of whether those things have been tried out in the real world. They favor those topics over things that people actually need on a daily basis.

I’ve submitted session proposals to some conferences and heard back things like “we don’t want talks on object-oriented programming, patterns, or that kind of stuff; we want talks on whatever thing has just come out as a beta…“. Sometimes those things never see the light of day, or are phased out after one or two releases.

The interesting thing is that my most popular talks are usually the ones big conferences don’t care about. There’s a *huge* number of people out there who need help with OOP, patterns, writing clean code, refactoring, etc. That’s why I’ll probably just focus on speaking at selected user groups, CodeCamps, etc., where I know the organizers, they know me, and we agree on what topics are of interest to the attendees, as opposed to this or that vendor.

Speaking of user groups, the Houston C# User Group is currently looking for speakers. I’ve presented there before, and it looks like I’ll be there again early on next year.

Other Topics

There were some other topics we covered at the meeting, but I’ll save details for another post. For instance, I shared something around Liskov Substitution Principle, Interface Segregation Principle, etc., but I plan on blogging about it in the next couple of days.

JB also shared some stuff around Cucumber that looked cool. I’ll have to investigate it further.

Join us for the Virtual Brown Bag meeting tomorrow: one never knows what cool things will be shared there! Don’t forget you can also suggest topics and vote for the ones that are already there.