Tag Archives: work life

I hate rush jobs. “Haste makes waste” was already two centuries old before Ben Franklin co-opted it, and it hasn’t lost any of its truth even as parts of our world have sped up. It is, alas, an easy little rhyme that we too often forget to apply.

Recently I had an awful rush job–a web page that varied significantly from our standard site templates, had been poorly scoped and spec’d, and that was in flux even as we neared a deadline that had been pushed up by a week. I had to cut a lot of corners to get it to work without having to tinker with code that might have affected other pages; one of the corners I cut was on a video embedded in the page. Our site shows videos in a lightbox (we use the jQuery Fancybox implementation), but the code for doing so is deeply entangled with lots and lots of things that weren’t appropriate for this page. The page owners didn’t want the lightbox, and with time running out I decided that digging into our tangled code for one part of the page wasn’t worth the effort; instead, I went with an iframe from a video sharing site and moved on to other things. I hate iframes that show external content, but sometimes you need to compromise to meet a deadline.

The problem with a quick fix like this, though, is that it will eventually come back to bite you, especially if other shortcuts were taken along the way, like skimping on testing. Anyone who’s been doing software development for more than a few months learns quickly that testing is something that you don’t rush; indeed, the importance of solid software has even made the news recently. But “test early and often” is another of those seanfhocail (old proverbs) we ignore all too often.

In post-deployment testing (the worst kind), the site owners discovered that the iframe didn’t scale on devices like the iPhone. And they really wanted it to scale. So while flurries of emails labeled “URGENT” (a word that should be banned from email headers; if I got to do a rewrite of the Outlook client, it would have a routine that automatically trashes any message with that word in its subject) flew around, I did some web searches to see what smart people have done. And I found a nice, simple, smart answer.

The best implementation of a responsive iframe that I found was in Anders Andersen’s Responsive embeds article, based on this A List Apart article. In short, the iframe is wrapped in a responsive div, and CSS overrides the absolute height and width parameters that video sharing sites frequently include in the embed code they hand you.

For example, this is what the code Vimeo provides for the video below looks like, with the width and height parameters for the iframe set in absolute pixels:

The video looks fine in a full-screen browser, but look at it on a smaller device (or even shrink your browser window), and you’ll see that it quickly becomes unusable. (I’ve bumped up the height and width a bit from the original to make the behavior more obvious.)

Using Anders’ solution, though, the video works much, much better as its space shrinks:
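For reference, the wrapper technique from those articles looks something like this (the class name and the 16:9 padding percentage are illustrative, and the src is a placeholder; adjust the ratio to match your video):

```html
<style>
  /* The wrapper reserves a 16:9 box whose height scales with its width. */
  .video-wrapper {
    position: relative;
    padding-bottom: 56.25%; /* 9 / 16 = 0.5625 */
    height: 0;
    overflow: hidden;
  }
  /* The iframe stretches to fill the wrapper instead of using fixed pixels. */
  .video-wrapper iframe {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
  }
</style>

<div class="video-wrapper">
  <iframe src="https://player.vimeo.com/video/..." frameborder="0" allowfullscreen></iframe>
</div>
```

The trick is that percentage padding on a block is calculated from its width, so the padding-bottom keeps the box at a fixed aspect ratio as the page narrows.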

The thing I most like about this is that it’s a pure CSS solution. I’ve also seen some JavaScript solutions, but dropping JavaScript onto a page to solve a problem like this makes me nervous, especially a page that already has quite a bit of JavaScript lurking on it. Weakly-typed, non-compiled scripting languages need to be used with great caution …

Of course, the real solution is to apply the same kind of discipline to website development that other aspects of software development receive: attention to requirements, testing, project management, and cross-functional coordination would go a long way toward saving us from bad rush jobs and unmaintainable spaghetti code. But that’s a harder fix to apply than some CSS and HTML …

I’ve worked on a variety of projects in my IT career, from small Lotus Notes applications where I’ve been the business analyst, developer, tester, system administrator, and support engineer, to multi-year enterprise initiatives with far-flung teams and huge project management systems where my role has been very narrowly defined. I’ll admit to a preference for the former–I like to write code, and waiting weeks or months for the first solid requirements to trickle in is painfully dull. But all of the projects, like most IT projects, have been plagued by the usual disconnects, missed opportunities, and frustrations with delivering what the users really want on time and on budget.

My current project, though, has been surprisingly successful. We released our first version in November, after two months of development and testing on top of about six months of thorough analysis (most of which happened before I joined the project), and since then we’ve released new and improved versions on a monthly schedule. The customer has been pleased, the application has been solid, and we continue to meet the users’ expectations.

What’s the secret?

To a great extent, it’s due to a very talented team of developers, testers, project managers, analysts, and business users. We work together well, have open and honest communication, and set up realistic and reachable goals for each release. The problem with talent, though, is that it’s not necessarily reproducible; you can’t bank on having good people in every role, or even on having good people at the top of their game most days. A project that relies on talent alone is bound to fail eventually.

What has really worked for this project is a philosophy of continual improvement. Our driving principle has been, to borrow a line from Jeff Atwood, version 1 sucks; ship it anyway.

My current workplace doesn’t have a formal “methodology” for development, no waterfall gate-checks or Scrum masters, at least that I’ve encountered. There are rudimentary project controls and such to meet corporate governance requirements, but development teams are left largely to organize their own efforts. As a result, we’ve landed on some practices that borrow heavily from various flavors of “agile” development without professing the full “agile” theology; the guidelines that I’ve found work best on this project, and that may be reproducible on other projects, are pragmatic and contingent, flexibly implemented within a loose framework. This may not work everyplace, on every project, and it doubtless has some scalability issues, but for a mid-sized project with an aggressive schedule, these are some practices that have worked for us:

Manage the requirements to the schedule: hit the dates by containing the enhancements

We have a huge list of things we’d like the application to do, ranging from simple tweaks to pipe-dream fantasies. They’re all good requirements, all worth meeting because they represent what the users really want. But they’re not all going to go into the first, second, or third release.

Instead, we’ve promised a monthly release with at least one major system enhancement, and as many smaller enhancements as can be realistically squeezed into the time frame. Like the big rocks fable suggests, we focus on the one big thing first, and then categorize the other requirements as hard, challenging, or low-hanging fruit. Once the big requirement for the next release is ready, we knock off the smaller requirements as time permits, always mindful that no small enhancement should jeopardize the big one. It sucks to leave low fruit on the branch, but we keep our spirits up in the knowledge that we’ll have a long harvest season if we keep the customer happy.
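For the literal-minded, the triage we do by hand could be sketched as a greedy fill: commit the one big rock first, then add small ones that fit the remaining capacity. This is a toy Java illustration; the class name, method, and day-counts are all invented for the example, not taken from our project.

```java
import java.util.ArrayList;
import java.util.List;

public class ReleasePlanner {

    // capacity and costs are in developer-days; smallCosts should be offered
    // in priority order (low-hanging fruit first).
    public static List<Integer> plan(int capacity, int bigCost, List<Integer> smallCosts) {
        List<Integer> chosen = new ArrayList<>();
        if (bigCost > capacity) {
            return chosen; // the big rock doesn't fit: the release needs rescoping
        }
        int remaining = capacity - bigCost;
        chosen.add(bigCost); // the big rock always goes in the jar first
        for (int cost : smallCosts) {
            if (cost <= remaining) { // no small rock may jeopardize the big one
                chosen.add(cost);
                remaining -= cost;
            }
        }
        return chosen;
    }
}
```

With fifteen days of capacity and a ten-day big rock, `plan(15, 10, List.of(3, 4, 1))` takes the three-day and one-day items and leaves the four-day one on the branch.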

A little spice and sizzle helps, though

The “one big rock” is usually a meat-and-potatoes affair, and it’s always filling and nutritious. But we’re also sure to include a little spice among the smaller enhancements. Refreshing the style sheet, adding a more attractive screen layout, or providing an extra screen of administrative information on the application’s performance is often cheap, easy, and low risk, but it’s very useful for maintaining customer satisfaction. The users may not notice that you’ve shaved an average two seconds off the web service response time and implemented a really nifty sorting algorithm–indeed, you’d better hope they don’t notice those things, because their only evidence should be when they fail–but they’ll ooh and ahh over a nicer interface.

Track every requirement, no matter how small

Indeed, make your requirement-tracking as granular as possible. Break the big requirements up into bite-sized chunks, and build good estimates for them (this is where something like the Pomodoro Technique can really shine). You don’t know which rocks are big and which are small unless you track them, and you don’t know if you need to scale back your release features unless you do estimates.

Open up the black box and let everyone see the work list

Having a good requirements and bug-tracking system is critical to managing in a progressive-enhancement environment. We’re using FogBugz, but other tools–Roundup and Bugzilla come to mind–are also useful. Even a shared spreadsheet is better than nothing. The key requirement is that everything is on the table and visible to the entire project team; having the project progress available at a glance, and maintained in real time, is the only way to keep everyone honest and ensure that releases happen on schedule.

Build plenty of testing time into the schedule

I’ve known thin-skinned developers who don’t like testers. Personally, I’d rather have someone on my team find my bugs before a customer does: it’s easier and cheaper to fix problems before your release date, and a good, thorough tester can be the difference between a product that people love and one that makes their jobs harder. In our current project schedule, we have a code cut-off date a week before the release date, after about three weeks of development; this should be adjusted for larger and more complicated projects, with even more time dedicated to serious testing.

Release early and often

That “three weeks of development” in our project is really three weeks of development and testing. As soon as you have something to show, even if you know it’s not ready for release, get it out there for your testing team to break. If you’ve got users who can spend time looking at things that are in development, so much the better: unvarnished responses to early iterations can flesh out requirements and ensure that you’re meeting the customer’s needs. There’s nothing worse than releasing something that’s been carefully developed, thoroughly tested, and still misses the customer’s core requirements. During the last two weeks of development on my current project, I’m deploying something to the shared development environment nearly every day (and if I had an automated build and deploy system, I’d be checking in updates even more often).

Build a solid architecture in the beginning, and build out modularly

My current project lends itself well to continual improvement, because it was architected from the beginning to be modular. It’s a service-oriented architecture that uses the Apache Commons Configuration framework to abstract the business logic into XML documents. It’s developed to Java interfaces and abstract classes as much as possible, with an eye toward identifying and reusing patterns; if something can be accomplished through XML rather than code, that’s the direction we go.

SOA is a good fit for continual enhancements because the application layers can be clearly separated from each other; you’re less likely to break something if you don’t have to touch it. But the same principles apply to any development platform: make code small, abstract, and reusable, and avoid great big tangles of spaghetti. If you can’t see the whole method on your screen without scrolling, don’t adjust your monitor resolution: break the code up and look for the patterns.

The next release will be better

Whether the customer is pleased as punch, or grinding their teeth in angst, that’s the appropriate response: the next release will be better. The requirements we missed in this release are first on the docket for the next; the new requirements that have emerged from the testing rounds have been captured and scheduled for future deployments; there’s nowhere to go but up.

Provided that you set a pattern of actually delivering on this promise, the customer will be willing to accept that they won’t have the perfect system out of the gate. And if the customer is involved at every stage, and has a hand in the requirements triage and testing, they’ll be happy to play along with incremental enhancements. Something is almost always better than nothing, and unless they’ve got the self-control of toddlers they’ll be willing to defer some of their gratification, especially if they get a little taste of real improvement at each release.

I can imagine projects where these guidelines would fail to deliver: big enterprise initiatives with lots of interrelated parts are hard to release quickly in small pieces. At the same time, though, this may be as much a matter of perspective as of scale: if the project is too big for continual enhancement, maybe it’s really two or three or ten projects that need to be broken up and managed independently, with a longer test and integration period set aside to mesh the components. If it’s possible to deliver a little bit more often, rather than a lot after a really long time, my bias is toward the former: it keeps everyone working, ensures that real requirements are identified early, and shines some light into IT’s darkest boxes.

Task management is the bane of my existence. I’m easily distracted by shiny objects, prone to flights of fancy, and generally unreliable about estimating the time required to complete a project. In this, I don’t think I’m much different from the average software developer; we tend to be overly-optimistic about how easy a job will be, forgetting how many blind alleys we wander down before finding the “easy” road to success.

The Pomodoro Technique (named after the whimsical tomato-shaped kitchen timer with which Francesco Cirillo perfected it) boils down to a very simple strategy:

Make a list of the things you need to finish.

Set your timer for 25 minutes and do one thing–only one thing, and only that thing–until the timer rings.

Take a 5 minute break.

Repeat.

After four of these 25-minute sessions (the unit is called a “pomodoro”), take a 15 minute break.

Repeat.

Simple, but surprisingly hard to do.
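If you want the cadence spelled out, it can be sketched as a little schedule generator. This is a toy Java illustration of the rhythm described above, not anything from the official technique; the class and method names are invented.

```java
import java.util.ArrayList;
import java.util.List;

public class PomodoroSchedule {
    static final int WORK = 25, SHORT_BREAK = 5, LONG_BREAK = 15;

    // Returns the sequence of intervals, in minutes, for the given number of
    // pomodoros: work, then a short break, with a long break after every fourth.
    public static List<Integer> intervals(int pomodoros) {
        List<Integer> out = new ArrayList<>();
        for (int i = 1; i <= pomodoros; i++) {
            out.add(WORK);
            out.add(i % 4 == 0 ? LONG_BREAK : SHORT_BREAK);
        }
        return out;
    }
}
```

Four pomodoros come out as 25, 5, 25, 5, 25, 5, 25, 15: two hours of focused work inside a two-hour-and-ten-minute block.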

Multitasking has become the standard mode for most of us in this technologically-connected, ever-accelerating world. If we’re not doing two or more things at once, we feel like we’re not getting anything done. But a great deal of new research suggests that multitasking doesn’t really make us more productive; indeed, it can make us less productive.

Humans, it turns out, don’t run multi-core processors. We’re more like the old Windows 3.1 “multitasking” model: fast at switching between tasks, but really not capable of truly doing more than one thing at a time. As a result, the multiple things that we do suffer when we switch contexts, and we end up doing several things poorly instead of one thing well.

The Pomodoro Technique is explicitly anti-multitasking. During the “pomodoro” period, you just work on your task list: no checking e-mail, taking calls, surfing the web, talking to co-workers. It’s a heads-down, fully-focused sprint to the end of the task, with a much deserved rest at the end. And for someone who’s used to “busy-ness,” it’s exhausting.

My first few days trying the technique were frustrating and tiring. I felt the oppression of the ticking clock, the twitching desire to check my e-mail once or twice just in case, and an exhausted relief at the end of a session. But then I started to internalize the “pomodoro” time unit; I found myself adapting to the rhythm quite naturally, and the time allotted to a task seemed to expand into an ample amount. Indeed, I’ve begun to feel time slow down in a surprising way: if I glance at my timer now and see that I still have five minutes left, I know that I can still get quite a bit done.

The technique also helps me be a little smarter in how I do my job. Before, when I wasn’t letting the timer set my pace, I was prone to investigate many more blind alleys. I might end up losing an entire day to one bad decision, backing out hours of effort. But knowing that I have a limited amount of time in which to finish my task, I now opt for the simpler approach. Rather than re-architect a huge chunk of code, I’ll stop and think things through, and usually find that a simple, elegant, and easily-implemented solution is available.

That five-minute break at the end is just as important as the twenty-five minutes before. It’s the time to take care of the coffee cup and the restroom, to stretch and crack knuckles and read the news, but it’s also time to switch off the task-oriented part of the brain and let the unconscious burble up a bit. If a “pomodoro” ended in frustration, with a task unfinished and more tasks piling up, I’ll often find this five-minute break from the project lets me come back refreshed and more likely to see my way out.

The Pomodoro Technique works very well in some phases of software development, and is easily incorporated into Agile and other iterative methodologies. When I have a nice collection of features to implement with clear requirements, or when I’ve got a set of bugs that the testing team has sent my way, the Pomodoro Technique excels: my tasks are clearly defined, easy to organize, and I can put my head down and get to work. And I could imagine a development team adopting the technique very effectively (especially if they use a timer in meetings: meetings that end when the buzzer goes off! There’s a concept worth implementing … ).

I’m not sure, though, that it’s as easy to apply to every technology job. I’ve had positions in the past that were much more reactive, where my daily work was largely driven by incoming requests and messages from customers and co-workers. In a culture that insists on “urgency” as a core value, the Pomodoro Technique’s managing of interruptions and “protection” of the “pomodoro” could be problematic. Of course, there are more problems than just task management in a culture like that …

I’m also unsure that it is as easy to apply in the more nebulous “design” phases of a project. There’s a good deal of exploration, guess-work, and fiddling about that goes into discovering an architecture for a reasonably-sized application. When I’m developing something from scratch, with sketchy requirements and an unfamiliar environment, it’s difficult to identify the kinds of clear-cut tasks that the Pomodoro Technique demands. And fuzzy, indeterminate tasks are exactly the kind of thing that lead one down the rabbit hole of multitasking chaos.

The other trap that I find myself trying to avoid is biting off less than I can chew. Given the time constraints, and the focus on completing a task (or set of clearly-defined tasks) before moving on, I sometimes defer larger jobs that might span multiple “pomodoros.” I feel a bit like a member of a millenarian cult, unsure whether to start darning my socks because the Lord might return while I’m in the middle of it.

All in all, though, I’ve found it to be a productive technique. By turning off the multitasking trap and setting boundaries around my work, I’ve managed to become far more productive in a few weeks than I would have expected. I only hope that no one notices and starts to expect it to continue …

I should note a few of the tools I’ve found useful in exploring the Pomodoro Technique:

Task list worksheets (and a free download of the book) are available at the official Pomodoro Technique website; I use the paper worksheets and a pen rather than any fancy-schmancy technology, though there are quite a few programs based on the method (even an iPhone app or two if that’s your thing).

A simple timer program, Egg Timer Plus 3.12, from Sardine Software. Though available in a free version, I recommend spending the $5 to get the license: I created three pre-set timers (25, 5, and 15 minutes), and set different sounds to go off at the end of each. It was about the same price as a real egg timer, and less likely to get lost.

This whimsical introductory slideshare from Staffan Nöteberg makes the case for the technique, and offers a good thumbnail sketch of its methods.

All in all, a pretty cheap investment for some very good initial returns.

1. I highly recommend the Martini Shot podcast to anyone who works in a technical or creative role. Mr. Long is a veteran sitcom writer who made his mark with “Cheers”; his brief spots are largely about the plight of Hollywood writers, which may seem a far cry from the life of the code monkey but is actually quite applicable. Sitcom writers get “notes,” we get bug reports; they have producers, we have project managers; they produce their work in lonely seclusion only to face the ignorant and capricious criticism of the suits in the corner office, and we … well, we do the same thing. Almost every week I find some little nugget of wisdom and insight that, if it doesn’t improve my work, at least gives me a wry and knowing chuckle.

A SCOUT IS A FRIEND TO ALL, AND A BROTHER TO EVERY OTHER SCOUT, NO MATTER TO WHAT SOCIAL CLASS THE OTHER BELONGS. If a scout meets another scout, even though a stranger to him, he must speak to him, and help him in any way that he can, either to carry out the duty he is then doing, or by giving him food, or, as far as possible, anything that he may be in want of. A scout must never be a SNOB. A snob is one who looks down upon another because he is poorer, or who is poor and resents another because he is rich. A scout accepts the other man as he finds him, and makes the best of him — “Kim,” the boy scout, was called by the Indians “Little friend of all the world,” and that is the name which every scout should earn for himself.

I subscribe to the tenets of Java, to the gospel of write-once-run-everywhere (even though, like so many other gospels, this one has never been successfully implemented in the real world). I believe in the strong typing of objects, the early binding of classes, and the clear separation of application tiers, world without end, amen.

I also attend the church of open source software, with its mutable prayer books and receptivity to revelation in the latter days. I believe in web standards, in XML validation, and in the W3C. These are my fervently-held creed, the values that guide my every line of code, and I scoff at the infidels who would disagree.

Except, of course, when I have to work with those infidels, and when those infidels have really great ideas or services I’d like to leverage. In both enterprise and web development, there’s no true church to which everyone subscribes, no matter how hard architecture steering committees try to make it so. Like it or not, your Java code has to co-exist with COBOL and Perl, your PHP needs to be VBScript-aware, and if your .NET can work with LotusScript, so much the better. We live in a diverse technology ecosystem, and we had best be able to extend a hand or accept assistance from programming worlds that are bizarrely alien to our preferred approach.

Write code that can be leveraged across the enterprise, learn the lingua franca of your environment’s technology, and strive to “play nice” in the bigger sandbox.

Once upon a time, I wrote a web portal using Lotus Domino. It was pretty well self-contained, using Domino data sources and APIs with just a smattering of JavaScript. Because everything it did happened on Domino servers, it was fast and self-contained, easy for a Domino expert to maintain in the standard Lotus tool set. And, like so many things that belong to a single programming ecosystem, it was not especially scalable or flexible.

Opening up the portal to other environments was an arduous refactoring effort, but it paid off. With a clear separation of tiers–business logic in web services accessed via SOAP over HTTP, the presentation layer in modular portlets that relied on CSS and JavaScript, and agnostic container-managed data sources–I was able to swap out pieces and make adjustments to the application without breaking a lot of interconnected pieces. There was still some Domino in the mix for a long time–a web service that delivered e-mail information–but the portal was far less dependent on Domino, so when the decision to switch mail platforms came along, the code-level work to maintain the functionality was minimal. Even better, the services that ran the portal could be accessed by other applications, offering reusability, one of programming’s holy grails.

When developing services that are going to be leveraged across platforms, in places you may not even imagine possible in the initial design, simplicity is the key to flexibility:

Receive and return primitives: sending objects over the wire in SOAP is both expensive and limiting. You can have a strong object model on the service side and on the client side, but in the communication layer, break things into integers and strings. Converting a .NET object into a Java object is tricky at best; passing complex types between Java and ColdFusion or PHP is a recipe for disaster.
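As a sketch of what that looks like in practice, here’s a toy Java service (all names and values invented for illustration): the rich Order object stays on the service side, and only an int and a String ever cross the wire.

```java
public class OrderService {

    // Wire-friendly: a primitive in, a string (an ISO date here) out.
    public static String getShipDate(int orderId) {
        // The rich Order object never leaves the service layer.
        Order order = findOrder(orderId);
        return order.shipDate;
    }

    // The object model stays private to the service side.
    private static class Order {
        final int id;
        final String shipDate;
        Order(int id, String shipDate) { this.id = id; this.shipDate = shipDate; }
    }

    private static Order findOrder(int orderId) {
        // stand-in for a real database or back-end lookup
        return new Order(orderId, "2010-03-15");
    }
}
```

A ColdFusion or PHP client can consume an int-in, string-out operation without ever needing to deserialize a Java type.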

Use lightweight service transports: my original Domino portal used a lot of DIIOP, an IBM flavor of CORBA; it caused me grief to no end because it was so resource-intensive, more than a little buggy, and difficult to use outside the IBM environment. XML-RPC was a good, if clunky, middle step, allowing a LotusScript service to work easily with a Java client. SOAP (originally Axis, later XFire) was better; REST, with its reliance on the standard HTTP protocol model, would have been better yet. If you can expose your API as a set of URLs, you’ll find many more friends than if you require a lot of proprietary components.

Use XML (with a DTD) for sharable data: most modern programming languages can use XML easily; and even languages like COBOL and LotusScript can work with XML by brute-force text parsing. It’s the closest thing we have to Esperanto in the programming world, and a great tool for cross-platform communication. Even if you don’t think the data will be shared across platforms, using XML up front will save you a lot of work when the inevitable request for integration comes along.
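Here’s a minimal Java sketch of that exchange using the standard javax.xml parser: one side emits a small document, the other reads a value out of it. The element names are invented, and for brevity this skips the DTD validation step you’d want in real use.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlExchange {

    // Parses a small XML document and returns the text of its first
    // <status> element.
    public static String readStatus(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName("status").item(0).getTextContent();
        } catch (Exception e) {
            throw new IllegalArgumentException("unparseable XML", e);
        }
    }
}
```

The same document could just as easily be produced by COBOL string concatenation or consumed by a PHP script; the format, not the platform, is the contract.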

There’s nothing wrong with text files: for simple document management and configuration settings, plain text (or properties/.ini files) are a great cross-platform solution. ASCII is supported on most platforms, and on those where it isn’t (I had to work with EBCDIC a few times on the z/OS platform with Domino), it’s usually possible to convert without too much pain. Even better than its accessibility to programming platforms is text’s accessibility to human eyes: a lot of troubleshooting can be done in vi or Notepad if an application’s data and configuration is in plain text.
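In Java, the venerable java.util.Properties class makes plain-text configuration nearly free; this toy example (keys invented) reads a setting out of the kind of file you could also inspect, or fix, in vi or Notepad.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class TextConfig {

    // Looks up one key in standard key=value properties text; returns null
    // if the key is absent.
    public static String lookup(String fileContents, String key) {
        try {
            Properties props = new Properties();
            props.load(new StringReader(fileContents));
            return props.getProperty(key);
        } catch (IOException e) {
            throw new IllegalStateException(e); // won't happen for an in-memory reader
        }
    }
}
```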

I’m still a believer in the Java model, of course, and I like to see Java code used to do great things. But I’ve learned that a dose of ecumenism is a useful tool for keeping the peace and making friends across the spectrum of technology confessions.

A SCOUT IS LOYAL to the King, and to his officers, and to his country, and to his employers. He must stick to them through thick and thin against anyone who is their enemy, or who even talks badly of them.

Robert Baden-Powell, Scouting for Boys, 1908

Loyalty is another of the Scout values that is in short supply these days. Employees and managers look on each other with mutual suspicion, if not downright paranoia, and we are all quick to cut the human nexus for short-term gain. A narrow instrumentalism defines our work relations, and threatens to bleed over into our personal lives as well.

While blind loyalty is certainly as much to be avoided as extreme self-interest, the Scout recognizes that a lack of loyalty is a huge risk. Loyalty is a corollary of trustworthiness; it is trustworthiness over time: someone who can be trusted tomorrow as well as today is someone who exhibits the brand of loyalty Baden-Powell expounded in 1908.

Programming by the Scout Law is unlikely to cause big changes in our work culture, but it can at least get us into the habit of thinking about how loyalty can be a value in information technology. We may not be able to inspire loyalty in our managers or our co-workers, but we can encourage loyalty in our code.

Loyal code is both backwards-compatible and future-proof; it recognizes that other code depends on it, and keeps the contract intact.

In enterprise development, dependencies build up quickly. A sign of a successful component is that it is reused widely, often in ways the original developer never knew or imagined. This can be a benefit–if you already have a good wheel, you don’t have to spend a lot of time making something else round–but it can also introduce a significant risk, especially if the reused component is changed capriciously.

There are practices in programming that strive to reduce this risk. One approach is Contract Programming, where the relationships between code components are defined formally and the “contract” is strictly enforced at run-time. This is especially useful in a distributed, service-oriented environment, where code components are developed by different teams, and even different companies. It also requires significant discipline to define the preconditions, post-conditions, and class invariants at design-time.
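The flavor of the approach can be shown with contract checks enforced by hand (a toy Java example with invented names; real design-by-contract tooling generates checks like these from formally declared conditions):

```java
public class Withdrawals {

    // Contract: preconditions are amount > 0 and amount <= balance;
    // postcondition is that the returned balance is never negative.
    public static int withdraw(int balance, int amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        if (amount > balance) throw new IllegalArgumentException("insufficient funds");
        int newBalance = balance - amount;
        assert newBalance >= 0 : "postcondition violated"; // enabled with -ea
        return newBalance;
    }
}
```

A client that violates the contract fails loudly at the boundary, at run-time, rather than corrupting state somewhere far downstream.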

Perhaps requiring a little less rigor, but still a good way to ensure that changes in code don’t break the relationships between modules, is test-driven development. In this methodology, tests are written before the code, and the tests are executed often in the development iterations. If tests are written explicitly to ensure that the integration points are functioning, problems in the implicit contract can be uncovered before they become serious.

Of course, it’s still possible for us to be careless about our obligations even if we’re defining contracts or running frequent tests. This is where a loyalty-oriented approach to writing shared components comes into play. A loyal mindset encourages these practices:

Make as few methods public as possible; the fewer functions you expose, the fewer chances you have to break the contract.

Keep your parameters and return values simple; passing a single simple bean with primitive getters and setters keeps your interface cleaner and easier to extend than a long list of parameters that can change over time and break dependencies.

Do good design up front; understand how your methods are likely to be used, and then imagine how they might be used in the future: will your code scale if someone comes up with a novel use for your modules?

Use deprecation when you’re planning to retire a function, rather than simply yank the rug out from under a client; provide fair warning that a method signature is going to change, and continue to support older interfaces for as long as is feasible. Code isn’t, and shouldn’t be, forever, but it should be dependable.
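The deprecation point can be sketched in a few lines of Java (the names are invented for illustration): the old signature stays in place, marked deprecated and delegating to its replacement, so existing callers keep working while new callers migrate.

```java
public class GreetingService {

    /** @deprecated use {@link #greet(String, String)} with an explicit locale. */
    @Deprecated
    public static String greet(String name) {
        return greet(name, "en"); // honor the old contract by delegating
    }

    // The replacement method carries the new, more flexible signature.
    public static String greet(String name, String locale) {
        return locale.equals("fr") ? "Bonjour, " + name : "Hello, " + name;
    }
}
```

Compilers surface the @Deprecated annotation as a warning, which is fair notice to clients without yanking the rug out from under them.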

And if you’re on the other side of the equation, making use of a shared API or service, respect the obligation to use the service wisely. Make sure you’re using the service in a reasonable way, without over-taxing it; if you require additional scalability, try to work with the service’s owner before you start to make extreme use of it. I’ve had web services die because they were forced to respond to a multi-threaded, distributed system when they were originally designed to handle a simple queue; it’s not much fun to fix that sort of problem after the client has already gone into production.

If you’ve already built your applications to be trustworthy, the next obvious step is to extend that trustworthiness over time into code that earns the loyalty of other applications. Respect the implicit contracts between components, and perhaps you’ll help to nudge us toward respecting the implicit contracts between people, too.

One of the hats I wear besides “programmer” is “Scout leader.” I’ve got a Cub Scout den of a dozen Wolf Scouts, and I serve as our Pack’s Assistant Cubmaster. It’s a side job I do because (a) I love my sons, who are enthusiastic Cub Scouts, and (b) I came up through Scouting myself: my father and grandfather were both Cubmasters, and I earned my Eagle in 1984. The basic life lessons of Scouting, especially the twelve points of the Scout Law, are about as succinct and universal a code of ethics as you could ask for. 1

This is the first of a series of posts exploring how the twelve points of the Scout Law–a Scout is trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, and reverent–can be applied to information technology in general, and programming in particular. It will likely be a somewhat irreverent look–I also subscribe to the notion that a Scout has fun–and, I hope, useful, whether you’re a Boy Scout Programmer or not.

If a scout says “On my honour it is so,” that means it is so, just as if he had taken a most solemn oath. Similarly, if a scout officer says to a scout, “I trust you on your honour to do this,” the Scout is bound to carry out the order to the very best of his ability, and to let nothing interfere with his doing so. If a scout were to break his honour by telling a lie, or by not carrying out an order exactly when trusted on his honour to do so, he would cease to be a scout, and must hand over his scout badge and never be allowed to wear it again.

Robert Baden-Powell, Scouting for Boys, 1908

Trust is generally hard to come by these days, especially in the business world. Every word that comes from management or analysts or consultants has to be parsed and weighed and heavily salted; plain talk and honest dealing are quaint notions in these brutal times.

While I think that much of our current discontent could have been avoided if there had been a few more Boy Scouts in MBA programs, there’s not much that we programmers can do to affect the moral compass of a system that’s trying to navigate without a needle. But maybe we can look inward a bit, and consider ways in which we could be a bit more trustworthy ourselves. I’ve got one small suggestion that may not shift the economy back onto a moral footing, but might help developers sleep better at night.

Do in your code exactly what your code says it will do; no more, no less.

Not long ago, I was refactoring some old (circa 2001) code. It was a Lotus Notes application with a web interface that used the Sun LDAP APIs to manage some LDAP nodes. It was one of my first Java applications–at the time, Java was the only option if you wanted Notes and LDAP to communicate–and it showed in more than just the way I handled string buffers.

The object model was a little over-complex, but workable; the encapsulation was pretty awful, exposing the application to too much of the specific LDAP implementation to be very portable. But the worst thing by far was the side effects.

I had the basic understanding of “getters” and “setters” from my Java crash course, but I didn’t understand how to keep them away from each other. Too many of my “getter” methods inadvertently called a “setter” method (or, worse, manipulated data internally, in great big blocks of unreadable nested “if” statements). You’d think you were innocently asking for a piece of data from LDAP, when unbeknownst to the calling method you were actually writing a whole tree of nodes. Yuck!

The error handling was pretty awful, too. The methods either threw esoteric LDAP exceptions, or caught generic java.lang.Exception objects, or swallowed their errors and just kept chugging. From the outside, you really had no way of knowing if something worked (even if you could figure out what that something really was).

The first thing I did was pry the read/write functionality apart. The new public “get” methods did just that–they “got” data out of LDAP and returned it, either as primitives or simple objects, or as bean-like structures that were easy to understand. The public “set” methods prepared a bean that could be written, and the only methods that could actually modify data were named, surprisingly enough, “save” or “delete.” The side-effects were removed, so no one using the API could accidentally modify LDAP.
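The refactored shape looks something like this sketch. The names are illustrative, not the original Notes/LDAP code, and the in-memory map stands in for the real directory: reads return a simple bean and cannot touch the store, and the only methods that write are the ones whose names say so.

```java
import java.util.HashMap;
import java.util.Map;

// Simple bean with no hidden behavior: getters get, setters set.
class PersonEntry {
    private String uid;
    private String mail;
    public String getUid() { return uid; }
    public void setUid(String uid) { this.uid = uid; }
    public String getMail() { return mail; }
    public void setMail(String mail) { this.mail = mail; }
}

// Stand-in for the LDAP-backed store; a Map replaces the real directory.
class PersonStore {
    private final Map<String, PersonEntry> backing = new HashMap<>();

    // "get" only reads; calling it can never modify the store.
    public PersonEntry get(String uid) {
        return backing.get(uid);
    }

    // All writes funnel through clearly named methods.
    public void save(PersonEntry entry) {
        backing.put(entry.getUid(), entry);
    }

    public void delete(String uid) {
        backing.remove(uid);
    }
}
```

With this split, a caller asking for data can't accidentally rewrite a tree of nodes: the type system and the method names together make the contract explicit.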

Cleaning up the error handling went a long way toward making the changes trustworthy. Note that Baden-Powell says that a Scout does his task “to the very best of his ability,” not that he succeeds every time. There are vagaries–network, memory, fire, flood–that even the best application can’t handle, and plenty of contingencies that any application can predict. Rather than trying to soldier on when things fail, know when to report back a useful error and stop the application from damaging your data. Subclassing the correct exceptions, setting useful error messages, and warning the API user what kinds of problems might arise so they can handle them appropriately make your code worthy of trust.
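That error-handling cleanup can be sketched like this. The exception class and lookup logic are hypothetical (the string result stands in for a real LDAP read), but the shape is the point: declare a specific checked exception, attach a message the caller can act on, and stop rather than soldier on with bad data.

```java
// A specific, documented exception instead of generic java.lang.Exception
// or silently swallowed errors. Name and hierarchy are illustrative.
class DirectoryException extends Exception {
    public DirectoryException(String message, Throwable cause) {
        super(message, cause);
    }
}

class DirectoryReader {
    /**
     * Callers are told exactly what can go wrong, and get a message
     * they can act on instead of a mystery failure.
     */
    public String lookup(String uid) throws DirectoryException {
        try {
            if (uid == null || uid.isEmpty()) {
                throw new IllegalArgumentException("empty uid");
            }
            return "cn=" + uid + ",ou=people"; // stand-in for a real LDAP read
        } catch (RuntimeException e) {
            // Report back honestly and stop, rather than chugging along.
            throw new DirectoryException("lookup failed for uid '" + uid + "'", e);
        }
    }
}
```

Because the exception is checked and specific, the compiler itself reminds every caller that this operation can fail, and the message tells them why.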

The interesting thing about this exercise was that the trustworthy code–the code that did just what it said it would do, no more and no less–was lighter, simpler, and more portable than the overly complex code that I started with. It ran faster, too. One could say that this was because of better OO design and six or seven years of hard-won experience, but I think it was because it adhered first to B-P’s core principles.

1By universal, I really do mean universal. I disagree strongly with the current BSA policies excluding gay and atheist/agnostic Scouts and Scouters. If you’ve got a passion for working with kids, a willingness to spend hours sanding Pinewood Derby cars or tying knots, and an enthusiasm for helping boys grow, then I want you to be a Scout leader. If a kid is prone to questioning things, exploring the meaning of life, and honestly grappling with the path to adulthood, then I think Scouting is a safe, supportive, and fulfilling place for him to do it. Questions of doctrinal belief and sexuality simply don’t enter into the day-to-day efforts of running a successful Pack or Troop. Scouting in America survived for almost a hundred years without such tests and exclusions, and I’m confident that in time we’ll move past the current policies.

I’d also note that I simply can’t square the BSA’s policies with the spirit of the Scout Oath and Law. In Baden-Powell’s original formulation, “a Scout is a friend to all, and a brother to every other Scout.” That doesn’t leave a lot of wiggle room for discrimination.

The programmer needs lots of tools to succeed: IDEs, compilers, text editors, debuggers, and various other software geegaws and doodads. We’ll talk about some of these in upcoming posts. But the most important tool, and the one too often overlooked, is social capital.

This is a challenge for many of us: let’s be honest, there’s a lot of truth to the stereotype of the socially-awkward computer geek. Most of us, if given the choice, would love to spend our days heads down, working on code, without the messiness of human interaction. Unfortunately, that’s not a choice that many of us have. Whether in a large shop or small, we need to interact with a wide array of people to get our work done. Customers, testers, DBAs, system administrators, managers … Our code goes nowhere, and does no good, unless we’re able to work across all the IT and business disciplines.

There are good ways to work with people, and bad ways. Most of us are good at the bad ways: standoffishness, obtuseness, and one-upmanship come naturally. These are all good ways to burn social capital to no good end. So here are a few ways I’ve found of building social capital instead, which you can then spend in more useful ways.

Offer to Help

Do you know something your co-workers don’t, that might help them with their projects? Share it. Even if their problem has nothing to do with your project, the insight will be appreciated if offered humbly and with no quid pro quo. You’ll build a reputation as an expert, and generate goodwill, with even the smallest bits of help.

Ask for Help

More even than offering to help, asking for help builds social capital. People like to feel smart, and like to be helpful, even the curmudgeons. This can be a great way to mentor junior colleagues: rather than pontificating on your way of doing something, involve them in the solution process and ask for their insights. They’ll learn by teaching, feel like an equal member of the team, and may surprise you with their skills.

Share the Credit

Give credit where credit is due. In most IT operations, many hands go into creating success; when the spotlight lands on you, acknowledge the help you received. Sharing credit is a good way to bring a new team member on board, and to repair the bridges that often become a little rickety between IT teams.

Take the Blame

“I didn’t do it. Nobody saw me do it. You can’t prove anything.” It’s funny coming from Bart Simpson, but not so funny from an IT professional. Odds are that someone did see you do it, or can prove it, unless you’re especially good at covering your tracks. So your best bet when you screw up is to take the blame and make amends.

One of my biggest screw-ups happened about three weeks into my job as a Lotus Notes developer. I made some changes to our mail purge agent to support calendaring and scheduling when we upgraded to Notes 4.6; I didn’t have any test data older than the purge window, so I commented out the “purgeDays=90” line and replaced it with “purgeDays=14”. Of course, I failed to switch back before the change went into production, and the hungry agent began chewing up an extra 76 days’ worth of e-mail. I had quite a welcoming committee waiting for me in the morning…

By admitting to this colossal blunder, I was able to salvage a little bit of my pride and also got the opportunity to learn, over the next week, a lot of things about tape backups, Notes replication, and mail server administration, and had the lessons of code review and good testing practices drummed into my head. Had I tried to dodge the blame, I would have ruined my relationship with the Notes administrators and, no doubt, I’d have started on a new job search.

Not every work environment is so forgiving of screw-ups; ‘fessing up may be a quick route to an exit interview. But that may not be so bad, either, even in economically grim times; do you really want to work someplace where honesty is so badly rewarded?