Of late, Sensei needs to keep a clear head. That has meant learning to segment ideas and really, really, really focus on streamlined features. This is hard. Not because Sensei has a plethora of great ideas. That would be a nice problem to have. Many times in software development you end up like this guy:

This is the state where you have things you want to accomplish, yet even when you pare things down to the “essential”, other essential, critical factors must be taken into consideration before you create a mess. This is life calling, and that string which suspends that giant sword that you noticed hovering over your head is about to snap. There is a good chance that you need more discipline, better execution tactics, far better honed chops, you name the metaphor. Sensei has been at this game for over 22 years, and still the speed that thought takes to become reality is way too slow.

With great sarcasm you can remind yourself that some of the best work lies ahead, but the reality is that you still need to fight to be fluent; you have to claw your way to a Zen state of mind / no-mind. So choose: the art of bushido or the art of BS. Or maybe work smarter and enjoy life.

Before Sensei leaves you, ponder this: does “being done” mean that you’ve dropped off a product but have to get on the phone in order to make changes? And now that you are struggling, why couldn’t you figure out how to invest the time, back when it was more critical, to become fluent with your productivity?

A central theme for 2010 has been fluency, or the continual practice of certain methods to such a degree that your performance improves and you produce increasingly polished, effective solutions. For software development this has meant tools to save time and increase quality. It also means keeping an eye toward making the users of your solutions more efficient as well. In the spirit of “fluent solutions”, Sensei will end the year with a post that examines how to create a data paging solution for the jQuery data grid plug-in DataTables.Net.

DataTables can turn an HTML table into a fully functional data grid like the one offered by Telerik. This plug-in offers client-side sorting and filtering/search, as well as support for server-side processing of data. It is an extremely feature-rich tool created by Allan Jardine, and is itself worthy of a series of posts. For this post on data paging Sensei recommends that you read through these examples to get an idea of what the data paging service needs to achieve.

Let’s get started with the goals we need to achieve when providing server-side data paging support:

Send data to the client in the multiples or “chunks” that the client requests, and respond when the size of the sets requested is changed by the user.

Re-order the data set if the user clicks on a column heading. Honor the data set size when returning the data.

Filter across all columns of data based on user input. Implement this as partial matches, and again, honor the data set size.

Remember this is about flexibility, so we have the additional goals of:

Create a solution that can be reused.

Provide a mechanism to accommodate any type of .Net class using generics.

| Type | Name | Info |
| --- | --- | --- |
| int | iColumns | Number of columns being displayed (useful for getting individual column search info) |
| string | sSearch | Global search field |
| boolean | bEscapeRegex | Global search is regex or not |
| boolean | bSortable_(int) | Indicator for if a column is flagged as sortable or not on the client-side |
| boolean | bSearchable_(int) | Indicator for if a column is flagged as searchable or not on the client-side |
| string | sSearch_(int) | Individual column filter |
| boolean | bEscapeRegex_(int) | Individual column filter is regex or not |
| int | iSortingCols | Number of columns to sort on |
| int | iSortCol_(int) | Column being sorted on (you will need to decode this number for your database) |
| string | sSortDir_(int) | Direction to be sorted – “desc” or “asc”. (Note that the prefix for this variable is wrong in 1.5.x, where iSortDir_(int) was used.) |
| string | sEcho | Information for DataTables to use for rendering |

Reply from the server

In reply to each request for information that DataTables makes to the server, it expects to get a well-formed JSON object with the following parameters.

| Type | Name | Info |
| --- | --- | --- |
| int | iTotalRecords | Total records, before filtering (i.e. the total number of records in the database) |
| int | iTotalDisplayRecords | Total records, after filtering (i.e. the total number of records after filtering has been applied – not just the number of records being returned in this result set) |
| string | sEcho | An unaltered copy of sEcho sent from the client side. This parameter will change with each draw (it is basically a draw count) – so it is important that this is implemented. Note that it is strongly recommended for security reasons that you ‘cast’ this parameter to an integer in order to prevent Cross Site Scripting (XSS) attacks. |
| string | sColumns | Optional – this is a string of column names, comma separated (used in combination with sName), which will allow DataTables to reorder data on the client-side if required for display |
| array array mixed | aaData | The data in a 2D array |

The data sent back takes the form depicted below. Note that aaData is merely an array of strings – there is no column information. This will present a challenge in that you will not be able to simply serialize a collection and pass back the results.
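As a concrete illustration, a reply for one draw might look like this (the values are made up):

```json
{
    "sEcho": "3",
    "iTotalRecords": 57,
    "iTotalDisplayRecords": 12,
    "aaData": [
        ["Kirk", "James", "North East"],
        ["Campbell", "Bruce", "Mid-West"]
    ]
}
```

Notice that each element of aaData is nothing more than an array of strings with no property names attached.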

As you may be aware, if you wish to use ASP.Net web services to serialize JSON you must POST to the service and instruct it to interpret your parameters as JSON. DataTables will POST variables as value pairs and this won’t work for us when POSTing to a web service. We’ll have to translate the variables to a usable format. Luckily DataTables allows us to intervene with the following code, where we create a JSON string by serializing a structure called aoData:

Our web service can now de-serialize aoData and parse the appropriate parameters. This gives us important items such as how many records to display, what columns to sort on, and what search terms should be applied in a filter.
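Since aoData arrives as an array of name/value pairs, a minimal sketch of how the service might represent and query them looks like the following (the class and method names here are illustrative, not the actual solution code):

```csharp
using System.Collections.Generic;
using System.Linq;

// Each element of aoData deserializes to a name/value pair.
public class NameValueParameter
{
    public string name { get; set; }
    public string value { get; set; }
}

public static class AODataHelper
{
    // Pull a single variable, e.g. "iDisplayLength", out of the pairs.
    public static string GetParameter(IEnumerable<NameValueParameter> aoData, string name)
    {
        var match = aoData.FirstOrDefault(p => p.name == name);
        return match == null ? null : match.value;
    }
}
```

Usage would then be along the lines of `int displayLength = int.Parse(AODataHelper.GetParameter(aoData, "iDisplayLength"));`.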

DataTablePager Class

DataTablePager.cs is the work horse of our solution. It will sort, filter and order our data, and as an extra, serialize the results in the format required by aaData. Here’s the constructor:

The parameter jsonAOData is the JSON string that contains the variables iDisplayStart, iDisplayLength, etc. These will be parsed by the method PrepAOData. The parameter queryable is the collection of records that will be filtered and parsed into JSON format required by DataTables.
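The full class ships with the solution download; a skeleton consistent with the description above might look like this (field names are illustrative):

```csharp
public class DataTablePager<T> where T : class
{
    private readonly IQueryable<T> queryable;
    private int displayStart;     // iDisplayStart
    private int displayLength;    // iDisplayLength
    private string echo;          // sEcho
    // ... sort columns, search terms, etc.

    public DataTablePager(string jsonAOData, IQueryable<T> queryable)
    {
        this.queryable = queryable;
        PrepAOData(jsonAOData);   // parse iDisplayStart, iDisplayLength, sEcho, ...
    }

    private void PrepAOData(string jsonAOData)
    {
        // Deserialize jsonAOData and populate the fields above.
    }
}
```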

The method Filter() coordinates all of the work. It’s pretty simple what we want to do: filter our data based on each column containing the search term, sort the result, then pull out the number of records we need to include in the page, and finally convert the collection into the format DataTables understands.

That said, there is some trickery that goes on in order to make this happen, because we are creating a solution that is going to work with any IQueryable that we supply. This means that the filtering and the sorting will need to be dynamic.

To make the filtering dynamic we will build expression trees that will convert each property to a string, convert the string to lower case, then execute a Contains method against the value of that property. The method GenericSearchFilter() called on line 16 accomplishes this with the following lines of code:
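The essence of that expression building can be sketched as follows (the real GenericSearchFilter() is in the download; names here are illustrative):

```csharp
// For each public property on T, build an expression equivalent to:
//   t => t.SomeProperty.ToString().ToLower().Contains(searchTerm)
var parameter = Expression.Parameter(typeof(T), "t");
var search = Expression.Constant(searchTerm.ToLower());

Expression[] propertyQuery = typeof(T).GetProperties()
    .Select(property =>
    {
        var propertyAccess = Expression.Property(parameter, property);
        var asString = Expression.Call(propertyAccess, "ToString", null);
        var asLower = Expression.Call(asString, "ToLower", null);
        return (Expression)Expression.Call(asLower, "Contains", null, search);
    })
    .ToArray();
```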

We get an array of Expressions that when executed will tell us if the value matches our search term. What we want is to include the item if ANY of the properties is a match, so this means we have to use an OR across all of the properties. That can be accomplished with:
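One way to fold the array into a single test, again as a sketch:

```csharp
// Include the record if ANY property matched: OR all the tests together.
Expression compoundOr = propertyQuery[0];
for (int i = 1; i < propertyQuery.Length; i++)
{
    compoundOr = Expression.OrElse(compoundOr, propertyQuery[i]);
}

// Equivalent one-liner:
// Expression compoundOr = propertyQuery.Aggregate(
//     (left, right) => Expression.OrElse(left, right));
```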

So with what is listed above we would be able to match all properties against a single search term. Pretty cool. But DataTables raises the bar even higher. If you were to go to the samples page and filter using multiple partial words, you would find that you could perform some very effective searches with phrases like “new chic”. This would select all records that had properties containing “new” OR “chic”. Imagine the scenario where your user wants to find all cities named “New York” or “Chicago”. We’ve all been there where we have a grid and can only search for one term, or worse, where we have to add a row to a search filter grid and constantly push a “query” button to perform our searches. DataTables does all of this with one search box – just type and the filtering begins.

GenericSearchFilter() handles that scenario. First the search term is parsed into individual terms if there is a “ ” (space) supplied in the string. This means we will have to perform the propertyQuery for each term that we have. To return all of the records that correspond to each term we still need to perform the OR in groups, but then we need to AND these predicates together so we can get all of the groups per individual term. Here’s the source, edited slightly for readability:
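The shape of that logic can be sketched like so (BuildOrGroup stands in for the per-property OR construction and is a hypothetical helper name):

```csharp
// Split "new chic" into ["new", "chic"]; OR the property tests within each
// term, then AND the per-term groups together.
var terms = searchTerm.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

Expression filter = null;
foreach (var term in terms)
{
    Expression orGroup = BuildOrGroup(term, parameter); // hypothetical helper
    filter = (filter == null) ? orGroup : Expression.AndAlso(filter, orGroup);
}

var predicate = Expression.Lambda<Func<T, bool>>(filter, parameter);
```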

So GenericSearchFilter will build a humongous expression tree for all the properties in your class. To make this usable for the Where we convert it using Expression.Lambda and our Where clause just goes about its merry way. Because we have used generics, you can supply any class from your assemblies. One caveat, and Sensei is trying to find a resolution: if you have a string property that is set to null, the expression tree fails. You’ll note that in the classes supplied in the sample, the properties of type string in the Tenant class are defaulted to empty in the constructor. A small price to pay for some great functionality. To sort our data we use the method ApplySort():

An extension method OrderBy will accept the name of a column and the sort direction as parameters. The parameter initial will indicate if we are sorting multiple times, so we can accomplish a multi-property sort with syntax like:
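A sketch of such an extension method follows; the version in the download may differ in its details:

```csharp
public static class DynamicOrderByExtensions
{
    // Order by a property named at runtime. 'initial' picks OrderBy vs ThenBy
    // so that calls can be chained for multi-property sorts.
    public static IQueryable<T> OrderBy<T>(this IQueryable<T> source,
        string property, string direction, bool initial)
    {
        var parameter = Expression.Parameter(typeof(T), "t");
        var propertyAccess = Expression.Property(parameter, property);
        var keySelector = Expression.Lambda(propertyAccess, parameter);

        string method = initial
            ? (direction == "asc" ? "OrderBy" : "OrderByDescending")
            : (direction == "asc" ? "ThenBy" : "ThenByDescending");

        var call = Expression.Call(typeof(Queryable), method,
            new[] { typeof(T), propertyAccess.Type },
            source.Expression, Expression.Quote(keySelector));

        return source.Provider.CreateQuery<T>(call);
    }
}
```

With that in place, `queryable.OrderBy("LastName", "asc", true).OrderBy("FirstName", "asc", false)` sorts by last name, then first name.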

All good things… It’s been a long ride, this post. A lot of code discussed, a lot of ground covered. The solution is here. As always, play around and see how this can help you. If anything breaks, tell Sensei. If you have improvements, tell Sensei. DataTables is a great tool for your arsenal; hopefully DataTablePager can help you integrate paging with large datasets as part of your solution offering.

Right now Sensei wants to sign off by toasting to you for wading through all of this, and for having the desire to build up your skills. Obtaining fluency in what you do is a hard road to travel, but it’s worth it because you get things done quicker and better with each session.

Sensei is going to kick it up a notch and provide you with a means to create test data without having to recompile your projects. This is ideal for when you want to create UI prototypes. DataBuilder uses CS-Script and NBuilder to create a web-based data generation tool that can read assemblies and will allow you to script a process that will generate test data in the form of JSON.

This adventure is split into two parts. First a quick demo, then instructions on how to configure DataBuilder for your environment. A deeper discussion of CS-Script and embedded scripting in .Net will be part of the sequel to this action/adventure, as we all know the second movie in the series is always the best!

Operating DataBuilder

In short you have three things to do:

Identify the assemblies that contain the objects you want to generate test data for. The path to the files can be anywhere on your system. For convenience there is a folder called Assembly that you can copy the files to. Multiple assemblies from different locations can be imported.

Create the import statements.

Create the code snippet with the NBuilder statements that will generate your data.

Here’s a screen shot of DataBuilder with each section that corresponds with the three goals stated above.

Note that after the end of the code that creates the objects, you need to include a statement

parameters["JsonDataSet"] = JsonConvert.SerializeObject(List);

Without that statement you will not get your data serialized. If you’ve entered the data as shown, hit the Build button and the resulting JSON is placed in the output box. That’s it. Looking through the output you’ll note that the first two sales dudes are James Kirk and Bruce Campbell, while the remaining records are completed by NBuilder.
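For reference, the script in the screen shot amounts to something like the following. NBuilder’s chaining names vary between versions – older releases used WhereTheFirst/Have in place of TheFirst/With – so treat this as a sketch:

```csharp
// Ten sales agents: the first two named explicitly, the rest generated.
var salesAgents = Builder<SalesAgent>.CreateListOfSize(10)
    .TheFirst(1)
        .With(x => x.FirstName = "James")
        .And(x => x.LastName = "Kirk")
    .TheNext(1)
        .With(x => x.FirstName = "Bruce")
        .And(x => x.LastName = "Campbell")
    .Build();

parameters["JsonDataSet"] = JsonConvert.SerializeObject(salesAgents);
```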

The very top section “CSScript Directives” is required by CS-Script. These are directives that instruct the CS-Script engine to include assemblies when it compiles the script. The imports section is pretty straightforward.

You’ll note that the script inherits from an interface. This is a convention used by CS-Script to allow the host and script to share their respective assemblies. Sensei will discuss that in the next post. The RunScript method accepts a Dictionary that contains the parameters. This will house the JsonDataSet that is expected for the screen to display the output of your data.

Advanced NBuilder Experiments
The beauty of NBuilder is that you can create test data that goes beyond “FirstName1”, and allows you to quickly create data that matches what the business users are used to seeing. If you think about it you should be able to generate test data that will exercise any rules that you have in the business domain, such as “Add 5% tax when shipping to New York”. With the scripting capability of DataBuilder you can create suites of test data that can evolve as you test your system. You could also use the JsonDataSet to create mocks of your objects, maybe even use them for prototyping your front end.

We’ll do a quick sample. Our scenario is to assign real regions to sales agents. Furthermore, we want to choose only a range of regions and assign them at random.
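NBuilder’s Pick class makes the random assignment simple; a sketch, where SalesAgent and its Region property are assumed to come from the sample assemblies:

```csharp
// A fixed range of real regions, assigned at random to each agent.
var regions = new List<string> { "North East", "Mid-West", "South", "West Coast" };

var salesAgents = Builder<SalesAgent>.CreateListOfSize(10)
    .All()
        .With(x => x.Region = Pick<string>.RandomItemFrom(regions))
    .Build();

parameters["JsonDataSet"] = JsonConvert.SerializeObject(salesAgents);
```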

Configure DataBuilder For Your Environment
Given that DataBuilder is loading assemblies you will want to run it on either your dev environment or on a test server where your co-workers won’t mind if you need to take IIS up and down. Also, you’ll want to work with a copy of your assemblies in case you need to make a quick change. There are times when IIS will not release a file, and if you need to make changes to the assemblies themselves it’s more convenient to copy them after you’ve re-compiled.

There are two settings you need to change in the Web.config to match your environment.

ScriptPath – Point this to the share where you want to save any scripts. DataBuilder will scour the directory and list anything you place in there.

FizzWarePath – This needs to point to the location of the NBuilder DLL. Most likely this will be the bin folder of the DataBuilder website. In the follow-up post Sensei will explain what this does.

Wrapping Up For Now

We covered a lot on the whirlwind tour of DataBuilder. There’s a lot more that is of interest, particularly with respect to the embedded scripting aspects provided by CS-Script. For now, have fun building your data sets. In the next installment we’ll cover the scripting aspect in more detail. For now, download and experiment. Here’s the source for DataBuilder with unit tests.

One of the enduring challenges for software developers and businesses is to create abstractions that accurately represent concrete rules for business operations. As opposed to operating like our tribal ancestors, where you had to kill a goat, start a fire and listen to the blind boy tell the tale told for thousands of years, today we’d like to be able to read stories ourselves. Hopefully the story that we read matches the reality of what we have implemented in our code. Many nested if statements can quickly make verifying that the code matches the story very difficult.

A fluent validation API can assist with this. Look at the code at the top of the post. You can show that to most people without having to get out the smelling salts. For your fellow developers it creates a succinct way to express precisely what the logic is. They’ll love you for it.

Janga is a fluent validation framework for creating such an API. There are three goals to be met here, and Janga fulfills them all:

Goal 1 – Be able to chain “When” clauses together. Each test – represented by a “When” clause – needs to be chained to the next.

Goal 2 – Accept a test on any object property where the test criteria is defined in the form of x <= y at runtime. The types of objects and their properties will not be known until runtime, so our framework must be able to analyze an object and construct a test against each property as it is presented. This is NOT the specification pattern, where you define delegates ahead of time.

Goal 3 – Flexibly handle errors by either halting on the first error, or by proceeding with each test and logging each error as it is encountered.

The code Sensei will present here fulfills all of these goals and gives us the fluent magic we see in the sample at the top of this post. Before we delve into the details, the sources for the ideas and explanations of Lambda Expressions, fluent apis, Expression trees, should be acknowledged and applauded, because they got Sensei thinking along the right path:

Creating this API is a twisted cluster-wack of a zen puzzle. The code for this solution consists of one class and three extension methods. We’ll make use of generics, delegates and expression trees to evaluate our When clauses. In the end we’ll see that with very little code we get a lot of mileage. It took Sensei a long time to wrap his head around how to piece all of these things together, so hopefully the explanation will be clear. Note that the solution has tests that demonstrate how to use the framework, so if you want to skip the madness and just try things out, go for it.

Goal 1: Chaining When clauses together

To get the ball rolling, there is an extension method Ensure that will accept the object you wish to evaluate and encapsulate that object in a Validation class.

Creating a chain of tests is accomplished with the Validation class and successive calls to the extension method When. Validation encapsulates the object you wish to test. In our examples that’s Employee. Employee will be passed on to When, When executes a test and stores the results in Validation. After the test, When returns Validation, and this creates the opportunity to execute another extension method.
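A sketch of that entry point and a chain built on it (member names here follow the description in this post; check the solution’s tests for the exact signatures):

```csharp
public static class ValidationExtensions
{
    // Wrap the object under test so When clauses can be chained off of it.
    public static Validation<T> Ensure<T>(this T item, string argName,
        bool proceedOnFailure)
    {
        return new Validation<T>(item, argName, proceedOnFailure);
    }
}

// Usage:
var result = employee.Ensure("employee", proceedOnFailure: true)
    .When("Age", Compare.GreaterThan, 21)
    .When("LastName", Compare.Equal, "Smith");
```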

Before we continue on with reviewing dynamic evaluation by the When clause, you could stop here and still have a useful mechanism for creating validation routines. That is, you could create an extension method for each validation you want to perform. One example could be:

So instead of Ensure.When you will use Ensure.LastNameContains(“Smi”). You will also have to create a new method for each condition. This is still quite expressive and would go a long way to keeping things organized. This would be more in the spirit of the specification pattern.
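Such a method might look like this sketch (Validation<T>’s members are assumed from the description above):

```csharp
public static Validation<Employee> LastNameContains(
    this Validation<Employee> validation, string fragment)
{
    // Fail the chain when the wrapped Employee's LastName misses the fragment.
    if (!validation.Value.LastName.Contains(fragment))
    {
        validation.IsValid = false;
        validation.Errors.Add("LastName does not contain " + fragment);
    }
    return validation;
}
```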

Goal 2: Dynamically Evaluating Tests at Runtime

As stated, the “tests” are performed with extension method When. When accepts the Validation object, along with propertyName and the propertyValue that you are testing. The enum Compare determines the type of test to perform. The comparisons are:

The magic of When stems from the use of Expression trees as delegates. As defined on MSDN, an expression tree is:

Expression trees represent code in a tree-like data structure, where each node is an expression, for example, a method call or a binary operation such as x < y.

You can compile and run code represented by expression trees. This enables dynamic modification of executable code, the execution of LINQ queries in various databases, and the creation of dynamic queries.

This gives you the ability, at runtime, to dynamically evaluate an expression in the form of x == y, also referred to as a binary expression. And in our case, we wish to evaluate: Employee.Age == 45. The delegate takes care of presenting the type of the Expression and its components to the runtime engine.

Marc Gravell explains the difference between a delegate and an Expression as:

The delegate version (Func<int,int,bool>) is the belligerent manager; “I need you to give me a way to get from 2 integers to a bool; I don’t care how – when I’m ready, I’ll ask you – and you can tell me the answer”.

The expression version (Expr<Func<int,int,bool>>) is the dutiful analyst; “I need you to explain to me – if I gave you 2 integers, how would you go about giving me a bool?”

In standard programming, the managerial approach is optimal; the caller already knows how to do the job (i.e. has IL for the purpose). But the analytic approach is more flexible; the analyst reserves the right to simply follow the instructions “as is” (i.e. call Compile().Invoke(…)) – but with understanding comes power. Power to inspect the method followed; report on it; substitute portions; replace it completely with something demonstrably equivalent, etc…

.NET 3.5 allows us to create “evaluators” with Lambda Expressions compiled as delegates that will analyze an object type, the comparisons we can make, and the values we want to compare dynamically. It will then execute that tiny block of code. This is treating our code as a set of objects. A graph representing this tree looks like so:

Each node on the tree is an Expression. Think of this as a “bucket” to hold a value, a property or an operation. For the runtime engine to know what the type and parameters of the Expressions are, we create a delegate from the Lambda expression of that node. In other words, we let the compiler know that we have an expression of type Employee and will evaluate whether Employee.Age is equal to 45.

To accomplish the magic at runtime, you need to set up “buckets” to hold Employee.Age or Employee.FirstName and their values with their respective type for evaluation. Furthermore we want to be able to evaluate any type of binary expression, so our Expression will make use of generics and a tiny bit of reflection so that we will have code that “parses” the object and its properties dynamically.

The type of comparison is determined by the enum Compare. Once these steps are completed we convert the expression into a delegate with the statement:

var executeDelegate = predicate.Compile();
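In context, the surrounding steps might be sketched like so (the Compare members shown are assumptions based on the description):

```csharp
// Build t.<propertyName> <op> <propertyValue> as an expression tree,
// compile it to a delegate, and execute it against the wrapped object.
var parameter = Expression.Parameter(typeof(T), "t");
var property = Expression.Property(parameter, propertyName);
var constant = Expression.Constant(propertyValue);

Expression comparison;
switch (compare)
{
    case Compare.Equal:
        comparison = Expression.Equal(property, constant);
        break;
    case Compare.GreaterThan:
        comparison = Expression.GreaterThan(property, constant);
        break;
    case Compare.LessThanOrEqual:
        comparison = Expression.LessThanOrEqual(property, constant);
        break;
    default:
        throw new ArgumentOutOfRangeException("compare");
}

var predicate = Expression.Lambda<Func<T, bool>>(comparison, parameter);
var executeDelegate = predicate.Compile();
bool passed = executeDelegate(validation.Value);
```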

If you are worried about performance and the use of reflection, note that the use of static will greatly minimize this impact. Basically you’ll take the performance hit on the first run but not on the subsequent runs.

Goal 3: Error Reporting

For error reporting, Validation requires the name of the object with the property ArgName, and asks that you specify whether you wish to halt when there is an error. This is accomplished with ProceedOnFailure. An error log is created when you wish all tests to complete despite their respective results. When you want to halt on the first error and throw an exception set the ProceedOnFailure to false.

Reporting the errors themselves takes place in each When clause, and this is implemented at the end of the When extension method.

Finally we need to return the Validation object so that we can chain another When operation.

To recap, When is a dynamic filter where at runtime, code is evaluated and created on the fly to analyze and execute a tree representing code as an object. The expression trees can be applied to any object and evaluate the object’s properties. Holy snikes!!! If that doesn’t scare you, how ‘bout chaining When’s together by always returning a Validation object so that you can continue to apply another extension method to it. Twisted Zen mind torture indeed, since we have complicated-looking code so that we can write less complicated “business code”.

The Wolf Credo:
Respect the elders
Teach the young
Cooperate with the pack
Play when you can
Hunt when you must
Rest in between
Share your affections
Voice your feelings
Leave your mark.*

What have you done to nurture your team? Are you the resident Elvis, and if the newbies make the cut they’ll graduate from a Mort to be the next King, hand plucked by you from millions and millions of people? Can I get a little ka-ra-te with that?

What makes you an Elvis, and are you a bloated drunk Elvis at the end, or the bad-ass 1970 version who can jump start anything? Elvis in 1970 practiced the Wolf Credo. Watch the documentary Elvis: That’s the Way It Is (2001 edition), just the first half hour. This short half hour will show you Elvis, after years of being away from touring, ready to return to touring again in an attempt to restart his career. The first half hour of the movie focuses on the few weeks of rehearsals before the debut concert. Elvis had a fluent, incredible means of communicating with his band members and backup singers. With a glance, a gesture, a wink, a new song would spring up. Maybe Elvis would say a quick word, hum a note, and suddenly a bass line would kick in, and not more than three beats later, the entire band and Elvis are playing a tune complete with improvs. While playing Little Sister, Elvis nods and issues “Get Back”, and off the group goes playing Get Back from the Beatles. Congruent would be the best word to describe the synchronization that each member had.

Elvis nurtured that vibe. They all keyed off of him, for to the band he was Elvis, not the King. He led by being a focal point, but not necessarily an ostentatious leader. When you watch the practice sessions where Elvis worked on the orchestrations of each song it is clear that he could communicate what he wanted, and worked with his band members to produce the product he envisioned.

But in order to function like this unit, each member has to practice. You, as pack leader, have to pick the scales, the arpeggios, the rudiments that you want to be second nature so that your team, the young ones and old warriors can produce what you want, fluently.

Like this:

What do you do when your Scenario or User Story just sucks? You’ve haggled with your peers over how to implement it, the user has changed tunes and come over to your side of things by realizing that they want two things at the same time, but now that you’ve listened to everybody and re-worked your logic, you’ve just spent 6 or 7 extra hours testing. Now, you doubt that anybody really knows what the original intent of your use case was, because there are so many different variants and vagaries from all the meetings, emails, and hallway tests.

Now succumb to the brain death of Sarbanes-Oxley. Where is the traceability in all the discussion threads? How do you prove that you have what you want and that transactions are preserved and yada-yada-yada it just works? Before the project you thought that your team was like these guys:

But in reality you are this crew:

Sensei won’t pretend that there is a cool Zen technique to avoid hard work or failure. Maybe this type of failure of communication is a test of your core skills and your “fluency”. Look at Elvis’ team. They’re practicing. They’ve been over the material again and again and again. That’s three agains for each of the yadas. To get to that point where they can adjust to his direction they’ve done much on their own time acquiring skills. Years of practice and adjustment.

Your project is like that path to acquiring a skill set, gaining mastery, being fluent. You have to build for flexibility, for change. You CAN NOT give in to YAGNI just because this week you think you know all the answers. You won’t create a fan base that way. And just because something is written down does not mean that it’s set in stone. Remember Moses and the stone tablets? Even though he could part the waters he still had to go up the hill twice. Things will go wrong, but if you put in the time, your adjustments, while painful after a long haul, won’t be that bad. 6 hours could have been 6 days. Be thankful you have good partners.

In the past Sensei has written insane tomes regarding time travel and how your best intentions really get you nowhere. The story today is about getting to 11, which as Nigel says is one more than ten, putting you over the top. Consider for a moment the times that you really think you’re like this guy to the right. Yep, you think you have a Martin Fowler-sized audience when you are coding. The scientists of the future will study my code and say “Here, this is the start of the great insight. How interesting.” In reality you are like Spinal Tap, unaware of how absurd you can be. Code too complex? But it goes to 11! Most blokes keep it at 10, but when you need to put it over the top, you take it up a notch. That extra notch. That’s 11.

Here’s a thought – what about 6? Is it viable? Can you be flexible by doing a 6, just good enough to not paint yourself into a corner? “Perfection is a process, viable is an end state.” As a developer you may not be able to judge what 6 is. If you’re in tune with your fan base you’ll know, but that can only come from wisdom born out of great mistakes. For those of you who study Budo, you may recall the concept of short and long and how relative scale can shift your advantage. Your opponent may have a sword and you only a dagger. Short and long makes a big difference, but you can alter that equation with a small maneuver. Once you’re inside and beyond the sword’s cutting range you have the advantage, as your dagger is now long enough to finish the skirmish. Change the scale.

Years back Sensei was given the task of reducing the shelf space utilized by paper by 25%. The CFO arrived at this goal via the scientific method. It was scientific since at Sensei’s company, if you didn’t do what the CFO said, it was axiomatic that you were in deep doo-doo. Laws of hierarchy and all. Now imagine rooms filled with documents related to contractors, accounts receivable, human resources, legal contracts, project management, etc. Yah, DISPARATE is the word. Not meta data, just a meta-mess.

Now in the best of all worlds where you need to get to 11, you would have time to survey all document types and refine each attribute set before you design your system for document categorization. This foundation becomes your data model in a database, and many would claim that you should create a data table per document type to house the varying number of attributes. But you have 2 million sheets of paper to scan, and in 12 months re-construction at your offices begins, so you need to be able to walk into a room and quickly categorize all documents, throw them into boxes, scan them, and automatically assign the meta data to the document and store the thing. Oh, and if you miss a document type or need more attributes you don’t want to go back to your database, add or modify a table, re-gen your data access layer, and add the attribute to your screen, all before you adjust your categorization. And remember, you need to ship out 80 to 100 boxes every 2 weeks, so you need to keep the data entry flowing. Finally, you are told that some projects can have up to 50 different types of documents, but no one is sure to what degree the project documentation is complete, so the number of document types per project is not known and NOBODY HAS THE TIME TO GO THROUGH THE SHELVES AND CREATE DOCUMENT TYPES BEFORE ANY DATA ENTRY IS POSSIBLE!

Play the song, ’cause it adds to the excitement!!

Several key decisions solved this mess, and the solution was simple enough that temps could walk into a room, categorize, and pack documents into boxes for scanning. The error rate ranged between 1 and 5% per department. These were not solutions cranked up to 11; they were 6’s:

No change to database schema or screens will be made, ever. A document was modeled with four tables: a base Document table, a Document-Type table, a Document-Attribute table that contained all attributes per Document-Type, and finally a Document-Attribute-Value table where the meta data was stored. This way each Document-Type could be created with simple data entry. One data entry screen was developed that could create data controls on the fly per attribute type.

Import data from existing systems. The meta data for your documents resides in many of your accounting, job cost, and budget systems. Once document types are known, dumping data from accounts receivable and/or accounts payable allows you to assemble thousands of cover-sheets for all invoices. Quite literally you create a stack of paper for all possible types of invoices for all accounts, walk into a room, pull documents off the shelf, attach coversheets, and keep the sheets you didn’t use. Now, since the unused sheets have a bar-code, run these through your bar-code reader and create delete records for what you didn’t use. Now you have a complete, accurate manifest of what was on the shelf and what was packed away. When the scanned images come back you can inspect them against the manifest.

People can work better with paper. As stated in the last bullet point, creating all possible types of document coversheets per account or project and printing them allows you to quickly categorize all documents. With minimal or no data entry and a stack of coversheets, anyone can now go through shelves and associate the coversheets with the appropriate documents. In other words, the subject matter experts have a tangible, traceable system that they can hand off and supervise someone who can do the grunt work. Not sure where you finished with your categorization? Just look at your stack of coversheets. Want to inspect accuracy? Grab a document and compare it to the categories printed on the coversheet.

What? Process management with paper? That sucks! No, it really doesn’t. You see, a 6 to you really is an eleventy-one for your user community, who is really busy. Yep, you have to be smart with your database design by focusing on one key area and that’s it. The rest of the effort is imports with SSIS packages, CSV files, and printed coversheets. But it’s easy for the users to use paper, and that keeps a flow going. 2 million sheets of paper scanned in a year. Maybe a 6 isn’t all that bad after all.

Get ready for the sound of one hand clapping, but first, fire off the song as it gets your head straight.

Some of you want to be Elvis too much. Sensei’s going to tell you a story so you know what he’s talking about. You see, users of your apps are waaaay smarter than you, and spend more time in their fields than you ever hope to do. You need a little love. It’s called fluent interaction. Fluent. Interaction. Lord have mercy.

Process mapping helps, but in the end that takes you to overly scientific abstractions, and while user stories help some, they, too, stray when you are the sole author. You in the chair, just the important details from the user, but mostly you. Should you consider yourself not Mort but an Elvis, you may want to ask yourself which Elvis you want to be:

Kick-ass Karate Elvis

Drug Ridden Elvis Wanna Be

Back to the story. Last episode, in a spate of productivity and a dose of SQL-NoSQL fever, Sensei created a slim document management solution that can be quickly applied to an existing framework with minimal impact to database schema and code base. Sitting around the conference room table the comment arose from Annie, the project lead from the Sales group:

“Why do I have to save a commission record first before I can attach a document? That interrupts my flow. I want to put in everything that I want and save, period. No dialog box thingy prompting to save first, come back and do something else. Why can’t we just do it?”

Long silence. The sound of one hand clapping.

One of Sensei’s report-to’s jumped in: “Because in order to associate the document to the commission you have to save that commission first in the database, then take the id from the record and associate it with the document. This allows you to retrieve it later on.”

Annie: So. Can’t that just happen behind the scenes? If it’s two steps the sales gal won’t do it. She’s got calls to make.

Sensei drifted out into research land, or as normal people call it, he spaced out for a bit. NetFlix sprang to mind, iPhone too, where you delete, it does it, but you can bring it back. Take the confirmation response out of the equation. Give the user a chance to undo their mess, but don’t get in their way. It’s fun to pretend to be the King, but what a wake up slap. The technology was right, but the user wasn’t seeing the benefit because “putting the stuff in was too clunky”. Sensei went and did what Annie wanted. Annie thinks it’s great. Good technology made better by the user, not the King.
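What Annie asked for can be sketched as a soft delete with an undo, rather than a confirmation prompt: save behind the scenes so the document has an id to attach to, delete without asking, and keep the record around so one click brings it back. The class, method, and field names below are made up for illustration; the point is the pattern, not the API:

```python
class CommissionStore:
    """Toy in-memory store demonstrating the no-prompt, undoable workflow."""

    def __init__(self):
        self._records = {}
        self._next_id = 1

    def save_with_attachment(self, commission, document):
        # One step from the user's point of view: the commission is saved
        # behind the scenes so the document has an id to associate with.
        record_id = self._next_id
        self._next_id += 1
        self._records[record_id] = {
            "commission": commission,
            "documents": [document],
            "deleted": False,  # soft-delete flag instead of a dialog box
        }
        return record_id

    def delete(self, record_id):
        # No "Are you sure?" prompt -- just do it, but keep the row around.
        self._records[record_id]["deleted"] = True

    def undo_delete(self, record_id):
        self._records[record_id]["deleted"] = False

    def active(self):
        # What the user sees: only records not soft-deleted.
        return {rid: r for rid, r in self._records.items() if not r["deleted"]}


store = CommissionStore()
rid = store.save_with_attachment({"rep": "Annie", "amount": 500.0}, "contract.pdf")
store.delete(rid)
print(len(store.active()))   # 0 -- gone from the user's view...
store.undo_delete(rid)
print(len(store.active()))   # 1 -- ...but one click brings it back
```

The sales gal never sees two steps or a prompt; the two-phase save and the soft-delete flag live entirely behind the scenes.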

Fluent. Interaction. Lord have mercy. You see, Annie’s right, and user stories, UML and other brain death would never capture the essence of her perspective, particularly after she used the software. Yeah, soft deletes are great theory, but you are not thinking like a user. In order to be a better King, you gotta give the concert they want to hear. You have to know that the fans have created you, have shaped your persona. You have to know your fans, almost be them.

Elvis had a come back concert in 1968 but it almost didn’t happen as there was a huge fight with NBC. The network insisted that the show would be like a Bing Crosby special given that the air date was during the Christmas holiday season. Elvis wanted an intimate environment where he could perform up close, live with his fans. He thrived off of close contact with his fans. Know your audience. Elvis was right, and it helped re-launch his singing career and revive his legend. It was one of his best performances. For the fans.

You need to listen to your users. Spend the time to hone your craft, but work even harder to make them fans. What do they need? Is the concert for them or for you? Are you learning just to be smart or for their benefit? Fluent solutions require interaction with the fans. Thank you. Thank you very much.