High Tech talks with High, T.
https://timhigh.wordpress.com
How to Tell the Database Your Web App Username
https://timhigh.wordpress.com/2012/10/11/how-to-tell-the-database-your-web-app-username/
Thu, 11 Oct 2012

One problem I’ve run across a lot in the past is wanting to use the username of the current application user for some function in the database. Some of the more common reasons I’ve run across include:

Logging the username for auditing

App-specific permissions checks at the query level

Database-level per-user SLA restrictions

Real-time monitoring of database activity

Unfortunately, as anyone who’s worked with a web site or other application that shares database connections knows, that information is never available from the database unless the application explicitly provides it. A database can only tell you the name of the DATABASE user that is connected, and that’s the same one for all of our application users.

The application can explicitly pass this value for every action, but that can end up being a world of pain, especially if your application is already pretty evolved by the time you decide it’s time to start adding some auditing to it. You may have to go through every statement like this:
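A sketch of what that per-statement tagging looks like, with made-up table and column names:

```sql
-- Every single write has to carry the app username explicitly:
UPDATE orders
   SET status = 'SHIPPED',
       modified_by = 'alice'   -- ...repeated in every statement, everywhere
 WHERE order_id = 42;

INSERT INTO audit_log (table_name, row_id, action, app_user)
VALUES ('orders', 42, 'UPDATE', 'alice');
```

Multiply that by every INSERT, UPDATE and DELETE in the application and the pain becomes clear.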

This is the sort of thing that smells like an aspect to me. I want to add the “auditing” aspect to my already-existing code without having to rewrite everything. Fortunately, databases already provide a very simple mechanism to do that sort of thing: triggers. Unfortunately, to write one would require the database to know who my application user is without me telling it on every update…

Restating the problem: it would be great if my application could tell the database some global information about the current session, much like a web app stores user session data, without having to repeat it on every statement. It turns out this isn’t an impossible request, at least with the two databases below!

User-defined Variables in MySQL

I spent a couple of hours recently looking for a solution to this kind of problem for my app, which uses a MySQL database. I had a look at MySQL session variables, both static and dynamic, to see if there was something in there I could hijack for my purposes. The closest I came was the @@identity variable, which I could set to a numerical user id, but which would unfortunately be overridden on any INSERT (after all, that’s what it’s for).
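What did work is a plain user-defined variable: anything set with SET @name = value is scoped to the connection, persists until it closes, and is visible to every statement and trigger that runs on it. A minimal sketch (the name @app_user is my own convention, not anything standard):

```sql
-- Run once, right after the connection is opened:
SET @app_user = 'alice';

-- Any later statement or trigger on this same connection can read it:
SELECT @app_user;   -- 'alice'
```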

The best part about this solution, besides its simplicity, is its flexibility. With this mechanism, your application can report anything it wishes to the database that should affect its overall behavior. For example, when running batch synchronization processes, rather than have to manually switch off each and every trigger, I tell the database to run in “synchronization mode” when I open the connection:

SET @synchronization = 1;
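A trigger can then consult both variables: log the application user, and skip auditing entirely when the connection is in synchronization mode. A sketch with hypothetical table and column names:

```sql
DELIMITER //
CREATE TRIGGER orders_audit AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
  -- @app_user and @synchronization are set by the application at connect time
  IF @synchronization IS NULL THEN
    INSERT INTO audit_log (table_name, row_id, app_user, changed_at)
    VALUES ('orders', NEW.order_id, @app_user, NOW());
  END IF;
END//
DELIMITER ;
```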

Oracle ClientID

I’m not really sure whether the previous solution will work in Oracle (or in other databases) because I haven’t tried it. It turns out Oracle provides a session attribute that is actually suited to this purpose: the client identifier (client_id).

DBMS_SESSION.SET_IDENTIFIER($user_id);
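For instance, once the identifier has been set at connect time, it can be read back anywhere in the same session via SYS_CONTEXT, including inside triggers. The read-back call is standard Oracle; the literal ‘alice’ is just for illustration:

```sql
BEGIN
  DBMS_SESSION.SET_IDENTIFIER('alice');
END;
/

-- Read it back from any statement or trigger in the same session:
SELECT SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER') FROM dual;
```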

The downside of this approach is it isn’t nearly as flexible as the approach above. But it has one ENORMOUS advantage: it is visible from the Oracle Enterprise Manager (OEM) screens (at least from 10g on). What this means is that you can view real-time (or historic) performance stats, drill down to see the top queries, zoom in to the worst, and see the actual user that is executing those queries! (more on a particularly interesting use of this feature in another blog post)

How to plug it in

It’s great that these databases offer a way to provide this information, but it doesn’t come for free. Your application still needs to explicitly set these values in the connection some time before it needs them. But, assuming you are following the DRY Principle like a good boy or girl, this shouldn’t be too much of a problem. In apps that explicitly open the connections, like a simple PHP site or a Java app that does its own connection management, this command can be executed right after the connection itself is created.

In apps that use connection pooling, this can be a little more complicated. In Java apps, I have found that the connection pool libraries often provide some sort of event callback mechanism, so the user can be set in the “beforeGetConnection” method or whatever. In more extreme cases, it may be necessary to resort to a more explicit use of an AOP framework, or explicitly get connections from a single source (e.g. static method, horror of horrors). When a framework is involved, the framework itself may require a little bit of hacking, or, in the case of Ruby or Python, some sort of monkey-patching to the framework classes.
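As a toy sketch of the monkey-patching variant: the Pool and Connection classes below are stand-ins for whatever pooling library you actually use, and @app_user is my own variable name, not anything standard.

```python
class Connection:
    """Minimal stand-in for a DB-API connection."""
    def __init__(self):
        self.statements = []

    def execute(self, sql):
        # A real driver would send this to the database; here we just record it.
        self.statements.append(sql)


class Pool:
    """Minimal stand-in for a connection-pooling library."""
    def get_connection(self):
        return Connection()


def announce_user(pool, current_user):
    """Patch pool.get_connection so every connection announces the app user."""
    original = pool.get_connection

    def get_connection():
        conn = original()
        # With a real driver you'd use parameter binding; string formatting
        # keeps the stand-in simple.
        conn.execute("SET @app_user = '%s'" % current_user())
        return conn

    pool.get_connection = get_connection
    return pool


pool = announce_user(Pool(), lambda: "alice")
conn = pool.get_connection()
print(conn.statements[0])  # SET @app_user = 'alice'
```

The same shape works whether the hook is an official callback from the pool library or a patch you apply yourself; either way, the rest of the application never has to know.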

One way or another, it may be hard to get to, but shouldn’t require more than a couple of lines of code. Definitely better than rewriting your whole app!

Everyauth support for Facebook Canvas App with Node.js
https://timhigh.wordpress.com/2012/04/21/everyauth-support-for-facebook-canvas-app-with-node-js/
Sat, 21 Apr 2012

I’m working on a Facebook version of my online debates app, http://gruff.co. It’s written in Node.js and uses everyauth for authentication support. Unfortunately, everyauth doesn’t appear to offer support for Facebook canvas apps; only for authenticating via Facebook within your own site.

I just created a fork of everyauth and added rudimentary support, so if, like me, you’ve been desperately combing the net for a solution, give it a try!

4) Run “npm install” – it should download, unpack and install my version of everyauth
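For step 4 to install the fork rather than the registry version, package.json presumably needs to point at the fork’s repository; a hypothetical dependency entry (the URL here is a placeholder, not the actual fork location):

```json
{
  "dependencies": {
    "everyauth": "git://github.com/yourfork/everyauth.git"
  }
}
```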

5) Make sure your findOrCreateUser() knows how to look up/save the user via the OAuth user data that is supplied by Facebook via the canvas page post (you should probably print out the values just to test, or look at the Chrome dev console/Firebug reports).

That should do it! Note that there are a lot of TODOs in there, like passing on any querystring params that are sent to the canvas page, and verifying the signature from Facebook. I’ll probably need to do those before this can actually be added to the project, but I have already sent a pull request to get it into the official version.

OracleTC now on Github!
https://timhigh.wordpress.com/2012/02/24/oracletc-now-on-github/
Fri, 24 Feb 2012

After years of promising, I finally managed to get around to doing a little code sanitizing and posting the code to OracleTC in a public place:

The Golden Rule for Coding Standards
https://timhigh.wordpress.com/2011/07/22/the-golden-rule-for-coding-standards/
Fri, 22 Jul 2011

It can be really tedious to define coding standards down to the finest detail. The following rule covers about 80% of what you really need to know:

Do unto your code as you would have others do

This rephrasing of the Golden Rule gets to the heart of what standards are all about. Whatever code you write, realize it is something that you are inflicting on other developers further down the line. Think of it as your legacy, because legacy code is what it is. Imagine yourself having to maintain the code (and you may, in fact, have to), and ask yourself if that’s code you can work with.

Also keep in mind that to some degree, you are setting the standards for all work that follows, especially considering the corollary part of this rule:

Do unto the code as others have already done

If code already exists in the project, chances are there’s a slew of already-established coding conventions. Before writing a single line of code in a new project, take a look around at what’s already there. Ask yourself some questions, like:

How is everyone formatting their code?

Which libraries are they already using?

Are there already examples of how to do what I have to do?

What are the overall design patterns, application layers, and so on that are already in force?

and, lastly,

Will I have to come up with some new conventions or solutions that others will have to follow?

By looking around, you can save yourself a LOT of time in terms of decision-making for insignificant issues (8-ball decisions), and may come up with examples that solve the majority of your work right off the bat. If you find that you need to create some new conventions, try to follow the “feel” of the code that has already been established. Otherwise, you’re good to go – just remember the Golden Rule!

Camel Hair, the A-Team and Programmer Cross-pollination
https://timhigh.wordpress.com/2010/03/22/camel-hair-the-a-team-and-programmer-cross-pollination/
Mon, 22 Mar 2010

“He who does not know the real Nsaa buys the fake of it”

The Joys of Cross-Pollination

One subject that comes up often in my work, but in many different guises, is the benefits of “cross-pollinating” your team. It’s a no-brainer that knowledge dissemination across your team results in more consistent work, more flexibility in terms of resource assignments, and in the end, better programmers all around. The ways of disseminating that knowledge range from brown-bag lunch presentations (which generally work for 1 or 2 lunches before they run out of steam) to required code reviews to the extreme programming approach of pair programming. I know a lot of people that cringe at this technique for any number of reasons, but even in places I’ve worked where I’ve encountered unmovable resistance to this method (either because it’s a “waste” of resources, or because the programmers themselves don’t like it), I have always been able to get an exception to the rule under one circumstance: as a way to quickly transfer knowledge from one more experienced developer to another. That’s because there’s no documentation in the world that can replace having someone explain something to you AS YOU DO IT.

Cross-pollination doesn’t end at training, however. And neither does the purpose of pair programming. Common practice states that if there’s a disparity of knowledge, the least “experienced” developer is the one who should be at the keyboard, to ensure that they don’t get passed over and left behind. But it also states that the pairs should be swapped with some frequency. The exact frequency is up to the team, but I generally hear anywhere from 2 hours to one day, max. The reason for this is, in part, that “more” or “less experienced” isn’t just a question of who’s older: it’s a matter of context. I, for example, have been working in Java for more than a decade. But when I work on a Rails project, PHP, or something out of my comfort zone, I have pretty much everything to learn from someone who’s been working with the technology longer than I have. And when I roll onto a new project, I have to learn everything about its process, business logic, architecture, etc.

Of course, even “equal” developers have a lot to teach each other. I can’t count the number of times I was watching a presentation, doing a code review, or sitting at someone’s table when I saw their fingers hit some magic key combination that has since saved me hours of pointing and clicking. When you work that closely with others, you pick up new keyboard shortcuts, find out about cool tools, see what blogs they’re reading, and on and on. I’ve been trying out ways to replicate this kind of knowledge transfer for distributed teams using Enterprise 2.0 tools and techniques, but nothing so far can match what you get out of working on the same machine as someone else.

Cross-pollination works on other levels, too. I consider myself very lucky for the time I spent working for Sapient. That’s where I learned what it is that an architect does, and chose my personal career path in life. It’s also where I learned how teams get inspired, and how a process can work to bring people of different disciplines together. I’m also lucky to have worked with fantastic people here in Brazil, where I learned how a process (yes, even a “heavy” process like RUP with CMMI) can work, where I learned about TDD, Continuous Integration, and more importantly, how to stay current and involved in the community at large.

In the meantime, I hear endless stories about software development shops that are stuck in the 20th-century mindset in terms of how to develop software. Everyone’s heard of TDD, but no one bothers to do it. Everyone says they do agile, but no one seems to get it right. I had a recent conversation with a good friend of mine who said that he had just been offered a promotion to be manager of the whole development team after only 6 months at a fairly successful small software and consulting company because he’d just turned around one of their most important projects from a glaring failure to a raging success. What did he do? He introduced the “radical” concepts of unit testing, continuous builds, and functional testing (and I don’t mean automated functional tests – I mean testing the software AT ALL). “In the land of the blind, the one-eyed man is king.”

The A-Team

This is all old news, but what do I make of it? The first thing that came to mind was that there’s a great opportunity here to start a company specializing in fixing up run-down software factories and development teams. There are consultants out there doing the mentoring thing, but the ones I’ve seen generally focus on a couple of weeks of training, then they cut and run. It might be interesting to put an A-Team spin on things to send in some experts that will actually work with the team to get all the right tools installed, and then make sure everyone is properly “cross-pollinated” before leaving. My ex-team at Sakonnet seems to have gotten its share of fame in the local market, since there were a number of employers hoping to snatch us up as soon as they heard we were available, so maybe I could pull us all back together again for this mission – as long as I get to be the one with the cigars.

But thinking about this a little deeper, it occurs to me that it’s a pretty sad thing that there could be such disparity between work environments. I was discussing with my “one-eyed” friend about why this happens, and I came to the conclusion that people only learn what they have to, and only what they are exposed to. There is a saying in Ghana that “He who does not know the real Nsaa [a coarse cloth made from camel hair] buys the fake of it”. And people that have NEVER worked in a place that encourages process improvements and the quest for better practices won’t know what they’re missing. Any newcomer that aspires to modernize their work area is fighting against the tide, and if they want to make any permanent changes, they have to do it quick. As soon as they themselves get comfortable in their new environment, it’s all over. Perhaps the biggest enemy to following best practices and continuous improvement is safe, comfortable, long-term employment. I’ve been reading articles on promoting what people are calling “Employment 2.0” (when people start coining the phrase “Sex 2.0”, I’m unplugging the internet. Umm… nevermind). The idea is that by loosening ties to one single job, you increase competitiveness, you let the cream rise to the top, blah blah blah. And you INCREASE CROSS-POLLINATION.

Cross-Pollination as… a Business Model?

So there it is: it’s good to cross-pollinate your developers with each other. It can also be good to cross-pollinate them with developers from OTHER companies. This already exists: in conferences, which may be sponsored by your employer. Also, in outside professional groups, like user groups, programming Dojos and the like. But… what if we could do this as a business? What if we could combine the idea of the A-Team with employer-supported open cross-pollination? You send in a crack team of two or three senior developers to fix up a dev team’s practices. You also get a team of less-seasoned developers to help out, for CHEAP, or even for FREE. Why? Because some other company is loaning them to the cause as a form of boot camp training for their own employees. When the short-term gig is over, they get them back, knowing that they have gained some real on-the-job experience working with the A-Team. These sorts of developer loan-outs could be staggered as well. A company might loan out some key employees in advance of having the A-Team mentor the whole development department, or they may send them out on a project afterwards as a type of refresher (or because they promised to do the A-Team one favor some time in the future as payment…).

I don’t know if the idea above would really work. I’m just starting to think this through. It could be that with Employment 2.0 on the horizon, no one will want to invest in their employees anymore (in that case, it might be a good investment for you to loan out YOURSELF…). But imagine what regular company cross-pollination could do for the software and IT industry as a whole. If these assumptions are true:

There is great disparity between the productivity of different software development teams

This disparity is caused by a lack of awareness and experience in better practices

then it seems to follow that we would all have something to gain by getting out of our comfort zones every now and then and making ourselves a junior to someone else’s senior developer. I pity the fool who doesn’t. I’ll save any Hannibal quotes for when I have this plan working…

Redmine Arch Decisions 0.0.9 released
https://timhigh.wordpress.com/2010/03/01/redmine-arch-decisions-0-0-9-released/
Mon, 01 Mar 2010

Just a quick note to let you know that version 0.0.9 of the Redmine Arch Decisions plugin has just been released. There is no new functionality in this release. Instead, I have taken the time to work on the recently-promised compatibility with Redmine 0.9.x (more specifically, I worked on “trunk”, which is currently 0.9.2). It was hell to get all the tests working (one of those cases when they are more of a pain in the butt than a help), and there were some other changes that had to be made, so I’ve given up on the idea of trying to maintain backwards compatibility. Instead, I have created a separate branch for the 0.8.4 version of Redmine (which I may or may not try to maintain).

More information about the plugin and this release can be found below:

Announcing the Arch Decisions plugin for Redmine
https://timhigh.wordpress.com/2010/02/23/announcing-the-arch-decisions-plugin-for-redmine/
Tue, 23 Feb 2010

I’ve been silent for a long time on this blog for two important reasons:

I’ve decided not to post anything unless I really have some value to add

I’ve been spending my spare time working on an open source plugin for the Redmine platform

So, without further ado, I’d like to announce the release (of version 0.0.8!) of the Redmine Arch Decisions plugin! At Sakonnet, my previous gig, they were using Quickbase to track tasks, specs, and just about everything. It was a snap to add in a new feature to track “architecture” (or technical) decisions, configure notifications for collaboration, and hook them up to our issue trackers for reference and follow-up. I wrote about this tool in a previous blog post, and I’ve commented before that I couldn’t imagine working on software again without it. Well, when the time came to move on, guess what? No tool for tracking my “arch decisions”.

Fortunately, my current employers at Integritas are open to trying out new ideas, and are using the Rails-based Redmine for their issue tracking. Redmine, as with Rails in general, has a fairly usable plugin framework, and it was a great opportunity for me to get my hands dirty with RoR, so I jumped to it. Now, on the date of the release of the 8th version of my plugin (which we have been using for our projects), I feel I’m ready enough to announce it to anyone who’s looking for a way to record their technical decisions (and discuss them before they get made) without the overhead of stiff formal documents.

The following is a very brief overview of what you get in Redmine Arch Decisions 0.0.8:

Arch Decisions

Listing of Arch Decisions

The plugin includes a listing of the Arch Decisions themselves, which are currently limited to the scope of a single project. The ADs have an ID, a status, a summary, and a “Problem Description” field for more detailed information on the context of the decision. ADs currently follow a very simple workflow that isn’t being enforced, but is still useful:

Not Started

Under Discussion

Decision Made

Work Scheduled (implies that issues and/or tasks have been registered to track the implementation)

Implemented (implies that all said issues and/or tasks have been completed, or at least to the satisfaction of the scope of the decision)

Canceled

Deprecated (implies that there’s another AD out there somewhere to replace it)

Arch Decisions also have a text field called “Resolution” that should be filled out when the status is changed to “Decision Made”. The resolution should explain what the final decision was, summarize why that decision was made, and provide any additional guidance to any developers who will be making sure the AD gets implemented.

Basic information for an Arch Decision

In addition to those basic text fields, there are also important supplemental elements embedded within the decisions that play an important role in the documentation and decision-making process (note that these are a new feature that I didn’t have in the old Quickbase version):

Factors

Factors associated with an AD

One of the most important benefits of tracking technical decisions in this way is the possibility of making all decision points and trade-offs explicit. There are so many reasons why this is important:

You can see in one place all the reasons for which a decision was made

You can weigh them against one another so that no one gets fixated on a single reason

You can truly validate your assumptions by making them visible and discussing them individually

If any of these reasons change in the future, you can go back and check to see if your decision is still valid

Taking a cue from Craig Larman and others, I call these reasons “Factors”. A factor can be just about anything – a requirement, a hunch, a feature, a factoid – that can be used as a justification for a particular decision. In my personal experience, I have seen these factors tossed about with reckless and wanton abandon, littering the sacred grounds of a design discussion. The RAD plugin attempts to put a little order to this chaos by giving you one place to record this information. In general, it can be detrimental to the flow of a discussion to continuously stop to record these factors, but it can be extremely productive to let the fur fly in the heat of the moment, and then carefully pick out the key factors afterwards when you’re ready to clean house.

Factors have a status, which is important in showing which ones have been “challenged” (by marking them as “Validated” once the discussion has completed), including ones that were later shown to be incorrect assumptions (“Refuted”). There is even a text field called “Evidence” wherein the user can record exactly how they came to the conclusion regarding the validity (via external URLs, quotes from a discussion, or even a lame but honest “because Tim said so”).

Also importantly, factors can be reordered on the AD view page by simply dragging a row and placing it in the order desired. This allows you to explicitly declare which factors have a greater weight or priority, which comes in useful when a trade off must be made.

One interesting thing to note about factors is that they may have varying scopes. Some may be very specific to the Arch Decision at hand (e.g. “We will get a big bonus if we pick Strategy A!” or “The coin said ‘heads'”). Some may relate to more than one AD (e.g. “The company has mandated that we use open source tools for this project”). Still others may be “global truths” that can even be applied across multiple projects (e.g. “Amazon EC2 does not support multicast between instances” (can this one be refuted yet?)). Factors can be created on their own (via the separate Factors tab), or right in the AD itself. In the latter case, they are automatically given a scope of “Arch Decision”. But this can be changed to something a little more broad. When this happens, the Factor can then be added to multiple ADs as appropriate.

Strategies

Strategies for an AD

What’s a decision without options to choose from? As with factors, my experience has been that people are good at tossing out ideas, but less good at remembering what they were later on. Or understanding anyone’s ideas but their own. So the RAD plugin also separates out a section just to track what those alternatives were that everyone proposed. Each one has a “short name”, which can be useful as reference (a little better than “wait, are you talking about the one where the command comes in as a message which is then republished, or the one where you stick the command in the database and then you have a periodic task to look them up?”), plus a slightly longer summary. Then there is a detailed description for what that strategy would really entail.

Importantly, strategies can then be officially “rejected”, with an explanation as to why (in the future, it might be interesting to point to the key Factors). When this happens, they show up at the bottom of the list, with a big red “X” so that no one is confused as to whether or not that possibility is still being discussed (nor why it was rejected).

In some cases, you have a “there can only be one” situation, where a decision could only be considered to have been made when all the other competing strategies have been rejected. In this case, the Resolution will really just be a rewrite of the surviving strategy and its implications. In other cases, you might have multiple winners, each of which composes a part of the final resolution. I find this is especially the case when you are making decisions regarding standards – some will be rejected, while others will be accepted and adopted.

Tracking

An Issue with two related ADs

With this release, ADs can finally be associated with Redmine Issues. This is very important for tracking and governance (making sure the decision gets carried out, and that it is still followed in later implementations). It’s also true that during the course of making a decision, work has to be done on the side. Thus, the association between ADs and issues includes the “type” of relationship that an Issue bears to the AD:

Task – the work is a task related to making the decision (e.g. for research)

Proof of Concept – partial implementation projects that are required to prove whether or not a particular strategy is viable

Implementation – software development work intended to implement a decision (e.g. the creation of a framework according to the design specifications stipulated by the resolution)

Governed – implementation of the issue is expected to follow the guidelines laid out by a (possibly previously-existing) decision

Since I often work with issue trackers other than Redmine (and have been too lazy to implement a real integration), it’s also possible to define an Issue by an external URL rather than via a Redmine ID. Although the external tracker won’t have a back reference to the AD, and the AD won’t be able to report on the status of the issue, it’s certainly better than having no link at all.

Collaboration

The heart of the original idea for Arch Decisions was the ability to provide a voice to everyone involved in a decision. Ivory tower type architects would do well to take heed and use this tool. Developers don’t always like to have their instructions handed to them on a silver platter (especially when they think a bowl would be better for the soup they’re expected to eat). The RAD plugin gives developers the chance to speak up by posting comments in the Discussion sections (in fact, there’s one for each Factor and Strategy as well as the main AD itself, for those times when you need to focus on a specific subject). It also gives other project members a chance to respond, since there is a “watch” feature, and change notifications can go out via email.

In the previous incarnation of Arch Decisions, there was also a button on each issue so that a developer could raise a red flag whenever there was an implementation detail that needed to be discussed. Thus, the discussion could go both ways, so that architects are not always kept in the dark about what the developers are doing, and what they need to know. This worked very well at my last place of work. Unfortunately, I haven’t implemented this feature yet, but I’m sure it won’t be long before I do.

Final Details

Installing the plugin is very straightforward: just download Redmine and follow its basic instructions, then download the plugin, stick it in the vendor/plugins folder, and run “rake db:migrate_plugins” to set up the database. I’ll provide a more extensive guide in another post, but hopefully that’s enough to get you started. Unfortunately, the plugin only works with version 0.8.4 of Redmine. I’d like to get it working for 0.9.x soon, so if that’s important to you, give me a holler to get off my butt.

I’ve got more tips and details to discuss about the plugin, so I’ll try to get around to that as soon as possible. Until then, let me know if you have any feedback, and I really wish you the best in your future decisions!

Complexity Creep: Data Scavenger
https://timhigh.wordpress.com/2009/05/17/complexity-creep-data-scavenger/
Sun, 17 May 2009

You’re given a simple task: get some XML data from a URL or web service, convert it to something else, and send it off downstream to some other system. Easy enough, right? Somewhere in the middle of your “80 percent done” report, you realize that the original XML is missing one silly little field – the customer’s middle name, the timezone on the date, the preferred nickname for their online avatar, whatever. Unfortunately, this little detail is critical for completing your task at hand.

If there’s no way to get this information, there’s very little you can do besides ask for an enhancement and wait. But very often, the data’s there for the taking; you just have to go out and get it from somewhere else. Just as often, however, that “somewhere else” is not as accessible as you’d like. If you’re lucky, you can just pull another object or rowset out of the same database. If you’re not, good luck with that “80 percent” thing…

Here are some of the complications you may come across in a data scavenger hunt:

The data’s there, but you have to torture it out of data structures not meant to provide it (e.g. it’s buried in a string of text with no standard format, or in a numerical field with no clear designation for varying units)

The data’s on a remote server: performance may suffer, and you have to deal with making the call in the first place (if it’s even possible), handling errors, and so on

You don’t have the privileges to access the data

The data is not guaranteed to be transactionally consistent (you may be getting some stale data, or the new data reflects changes that aren’t seen by the rest of your data set)

The data is in a log file, system configuration file, admin-only database tables, or some other unholy “do not touch this!” artifact
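The first complication above – torturing data out of a structure never meant to provide it – often boils down to regex-mining a free-text field. Here’s a hypothetical Java sketch; the field contents, pattern, and method name are all invented for illustration:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A made-up example of data scavenging: the middle name you need
// only exists buried inside an unstructured comments field.
public class ScavengerDemo {
    private static final Pattern MIDDLE_NAME =
            Pattern.compile("middle name[:\\s]+([A-Z][a-z]+)");

    public static Optional<String> scavengeMiddleName(String freeText) {
        Matcher m = MIDDLE_NAME.matcher(freeText);
        // If the note's author phrased it differently, we get nothing --
        // the fragility is the whole point.
        return m.find() ? Optional.of(m.group(1)) : Optional.empty();
    }

    public static void main(String[] args) {
        String comments = "Cust prefers email. middle name: Quincy (per phone call)";
        System.out.println(scavengeMiddleName(comments).orElse("<unknown>"));
        // prints: Quincy
    }
}
```

It works until the day someone types “mid. name Quincy” instead, which is exactly the kind of fragility these scavenger hunts buy you.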

Each one of these little beauties is a mini rat’s nest of complexity creep all its own. What do you do if your application doesn’t have privileges to get the data? Implement a “run-as” feature just to get the one field? Hard-code the root user password? Convince the guys on the other side of the fence to give you a user, and wait?

In some cases, you may actually be able to modify the code itself to fit your needs. But that requires a new release of the software, which, if you’re accessing it from the outside, may not be an option (I’ll be posting another Complexity Creep article on this one some time soon). Also, this can lead to problems of its own: the “universal interface”, or the one-size-fits-all façade. I’ve worked with interfaces like this, where every new feature begets a new variation on the same old methods, just to avoid the need for scavenger hunting. It happens with database views, and in XML, too: to avoid making multiple remote requests, you keep adding new fields to your XML entities, until your Company XML document includes a list of all employees barfed up somewhere in the nested tags, complete with their “Address2” field and their in-flight meal preferences. This solution is the evil twin of the Data Scavenger: the Data Tar Baby.

Data scavenging is a common problem in integration projects, which is one of the reasons they can be so tough. But it can happen when building monitoring utilities, reporting features, or just about anywhere else information is required. Unfortunately, when you work in the “information technology” field, the odds are pretty high you’ll come across this more often than you’d like. And yet, when you do, it always seems to be that one last insignificant detail that turns your routine query into a scavenger hunt.

Complexity Creep: Aspect Infiltration
https://timhigh.wordpress.com/2009/05/15/complexity-creep-aspect-infiltration/
Fri, 15 May 2009

The other day, I was working on enhancements to a type of “job execution harness” that executes parallel tasks in our system. We had started out with the concept of “jobs”, essentially individual commands in a Command design pattern, and had recently evolved the idea of “groups” of jobs for managing dependencies between sets of parallel tasks. (Note: just for fun, I’m testing out yUML for the images here.)

Harness schedules Groups and executes Jobs

As with pretty much any complexity creep story, this one starts out with a pretty simple and elegant design. You basically had an execution harness, which took care of scheduling the jobs to be executed, and the jobs themselves, which were totally independent POJOs with no knowledge of each other, nor of the harness. The harness also provided monitoring capabilities, reporting the start and end of each group and of the individual jobs.

Harness executes Jobs and reports to Harness Monitor

We were living in separation-of-concerns bliss until the day we were given a new requirement: to support monitoring of job “steps”. “Um, what’s a job step?” we asked. It turns out that some of these jobs can take a looong time to run (several hours). Users wanted a way to see what was going on during these jobs, in order to get a feel for when they would be done and whether everything was going OK.

Harness executes Jobs which contain Steps...

We wanted at all costs to preserve the harness-agnostic nature of our jobs. We thought about breaking the bigger jobs up into smaller jobs, but unfortunately, it wasn’t possible since they are essentially atomic units of work. We considered solutions for providing some sort of outside monitor which could somehow generically track the execution, but these steps were basically execution milestones that only the job itself could know. Finally, we knew we were defeated, and gave in: because of one little logging requirement, we were going to have to introduce some sort of callback mechanism to let the once harness-agnostic jobs signal to the harness whenever a step was completed.

Harness and Job both report to Harness Monitor

From a pure layering perspective, you can see that we are in trouble if we are trying to keep all the harness code (in orange) in a top layer, and the business code below. So, what are some possible solutions to the problem? We could:

Let the monitor know about the progress of the Job steps through some indirect method (e.g. through special text log statements, or indirect indications via data in the database). While it would avoid placing any explicit compile-time dependencies on the Job class to the harness, it would create a very fragile “know without knowing” relationship between the Jobs and the harness. Nasty.

Create a special “StepLoggingJob” abstract class that these Jobs would extend in order to gain access to some special logging facilities. Basically, these Jobs would no longer be POJO classes, in the sense that I used the term, since they would have to extend a harness-specific framework class. Unfortunately, this also introduces a circular dependency.

Inject a special “StepLogger” utility class into the Jobs, either as a class member, or as a parameter on their “execute()” (or whatever) method

Option 1: Job writes special logging messages to a common store

Option 2: Job extends StepLoggingJob

Option 3: Job calls a StepLogger which reports to the Monitor

Note that we still haven’t really solved the problem… the Job class still requires something in the “harness layer”. If we were using a dynamically typed language, we could do something of a mix between option 1 and option 3 by using duck typing (the Job would know it was getting SOMETHING that could log, but wouldn’t have to know it’s from the harness layer). In order to really separate the dependencies in Java, which we use, we have to create a new layer, the “harness API layer”, and place only the StepLogger interface there:

Job knows only about the StepLogger interface
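To make that final arrangement concrete, here’s a minimal Java sketch under my own assumptions – the post never shows actual code, and everything below except the StepLogger and execute() names is hypothetical. The Job sees only the interface in the thin “harness API layer”; the harness supplies the implementation:

```java
import java.util.ArrayList;
import java.util.List;

// The only harness type the Job ever sees, living in the "harness API layer".
interface StepLogger {
    void stepCompleted(String stepName);
}

// The Job stays a plain class: no base class, no compile-time link to the
// harness implementation -- just the API-layer interface passed to execute().
class LongRunningJob {
    public void execute(StepLogger logger) {
        // ... first milestone of the atomic unit of work ...
        logger.stepCompleted("data extracted");
        // ... second milestone ...
        logger.stepCompleted("data loaded");
    }
}

public class HarnessDemo {
    // Stands in for the harness: it injects a StepLogger whose
    // implementation reports back to the harness monitor.
    public static List<String> runJob() {
        List<String> monitor = new ArrayList<>();
        new LongRunningJob().execute(step -> monitor.add("step done: " + step));
        return monitor;
    }

    public static void main(String[] args) {
        System.out.println(runJob());
        // prints: [step done: data extracted, step done: data loaded]
    }
}
```

The circular dependency is gone: the harness layer depends on the API layer and on the Jobs, and the Jobs depend only on the API layer.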

So, what happened? To summarize: because we wanted just a little more logging, we were forced to introduce a whole new layer into our application, and break the concept of 100% harness-agnostic commands. Is this an isolated case? Of course not. You see it all over the place, and logging is a great example of this. Have you ever heard someone talk about aspect-oriented programming (AOP) and give logging as an example? It’s PERFECT! With some simple configuration, you can automatically enable logging on all your methods without a single line of code. So you can get rid of a ton of boilerplate code related to logging, and focus on just the business logic, right? Wrong. If that were true, we all would have thrown our logging libraries in the garbage years ago. Instead, Log4J is still one of the most heavily used tools in our toolbox. Why? Because aspects work by wrapping your methods (and so on) with before and after logic, but they can’t get INSIDE the methods themselves.

The really useful logs are written by the method itself
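You can demonstrate the wrapping limitation with a plain JDK dynamic proxy standing in for an AOP framework (the names here are all invented for illustration). The wrapper can log before and after execute(), but the step milestones inside the method body are invisible to it – only the body itself can log them:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

interface BatchJob {
    void execute();
}

public class AspectDemo {
    public static final List<String> LOG = new ArrayList<>();

    public static void main(String[] args) {
        BatchJob real = () -> {
            // Only the method body knows about these milestones;
            // no before/after wrapper can log them for us.
            LOG.add("step: loaded input");
            LOG.add("step: transformed data");
        };

        // The "aspect": a proxy that wraps every call with entry/exit logging.
        BatchJob proxied = (BatchJob) Proxy.newProxyInstance(
                BatchJob.class.getClassLoader(),
                new Class<?>[] { BatchJob.class },
                (Object p, Method m, Object[] a) -> {
                    LOG.add("before " + m.getName());
                    Object result = m.invoke(real, a);
                    LOG.add("after " + m.getName());
                    return result;
                });

        proxied.execute();
        System.out.println(LOG);
        // prints: [before execute, step: loaded input, step: transformed data, after execute]
    }
}
```

The two “step” lines come from inside the body, not from the proxy – remove them and no amount of aspect configuration brings them back.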

I call this Aspect Infiltration: when your non-business infrastructure creeps into your business code. You can see this elsewhere, as well: in the J2EE container whose transaction control isn’t fine-grained enough for you (introduce a UserTransaction interface); in the security service that isn’t sufficiently detailed (give the code access to a security context), and so on. It’s a common issue for any container that wants to do all the work for you. There will come a time when the business code itself just knows better. And you’d better be ready to give it the API that it needs.

Complexity Creep
https://timhigh.wordpress.com/2009/05/14/complexity-creep/
Thu, 14 May 2009

I’ve been silent in this blog for a while now, not only because I’ve been busy with family, work, and organizing IASA encounters, but also because I’m reluctant to rehash anything that’s been written before. Fortunately, I think I’ve come across something worth the pixels it’s written on. While working on software design over the years, I’ve noticed a common pattern, no matter how unrelated the design tasks are: every design starts out really simple and elegant, but at some point it can grow into a warted, perturbed perversion of the original idea. In some cases, in response to a new requirement or complication, a new solution latches on to the side of the core design, like a barnacle on the keel of a ship. Other times, the whole design can be turned belly-up, like the Poseidon after a storm. What’s fascinating to me is how the great upheavals in design are often caused by the smallest additional requirements.

Everyone has heard of the concept of “scope creep”, when the requirements seem to grow faster than you can code. I want to write about what I’d like to call “complexity creep”, those moments when a tiny little requirement can mean a whole lot of extra work for the development team, or even turn your basic design concepts on their head.

Since this will be the beginning of a series of posts, inspired by some of the most agonizing moments from my ongoing work as a software architect, I won’t post any examples here. Look instead for my next posts, coming soon, on two “patterns” (oh no! YADSDPC – Yet Another Damned Software Design Pattern Catalog*) of complexity creep: “Aspect Infiltration” and “Data Scavenger”. If you come across any moments of your own, please let me know!

* Note that I use the term “pattern” here loosely, as in a group of unrelated issues with recognizable commonalities. I’m not really planning on documenting these like software design patterns.