Archive for the ‘Projects’ Category

Grandpa: Nothing gave Buttercup as much pleasure as ordering Westley around.
Buttercup: Farm boy, polish my horse’s saddle. I want to see my face shining in it by morning.
Westley: As you wish.
Grandpa: “As you wish” was all he ever said to her.
Buttercup: Farm boy, fill these with water – please.
Westley: As you wish.
-The Princess Bride

Here to serve.

That sounds fantastic when somebody says it to me. It’s a bit harder for me to say, even though I know full well that when I serve others I often benefit as much as, if not more than, the one I’m serving. My focus here is not about you or me but about software. Software that can be a service to others.

Wow, really!? …and that’s just scratching the surface. This is not a new idea by any means. There seems to be an ebb and flow to centralizing and decentralizing the elements that make up a given application. What I have found is that, in and of themselves, these approaches are neither right nor wrong. They are what they are. It’s only within a given context that a level of appropriateness comes into play. Hardware capabilities, intranet and internet speeds, accessibility, user expectations, and usage patterns all start telling a story and create a setting in which to place an application.

Distributed applications are cheered and jeered at the same time. Sometimes by the same people. They represent an incarnation of DRY (Don’t Repeat Yourself). As such they can be awesome. “You need widgets? Call the Widget Service!” The ease with which you can get data and functionality can be very impressive. Pulling an application apart into a hundred remote calls can also be something else. Slow.

If you have a single application it may not make sense to go the SOA route. You can still design and build this way: think separate, but deploy combined, making everything a local call without the overhead of serialization. This pattern really works best when serving multiple applications that have shared needs or need shared data.

If the functionality is not intended to be shared with multiple applications do not make it a service. Don’t pull in consultants and fire up a major ESB project with crazy orchestration to negotiate and transform data between disparate systems when there’s only one consumer. We’ll talk more about application design another time. Right now I’m focused on services.

What is a service?

To me it’s just an application. Instead of the consumer being a user, it’s another application. It’s also data with behavior. Hey… that sounds like encapsulation from the OOD school. Go figure. It is. At its core SOA is a very object-oriented approach to designing an application. This is why you can design this way regardless of whether you’re deploying as a separate service or as an integrated business unit within your application. You should be thinking this way already. We’re just gonna pull part of the application out so others can call it too.

This service should offer value. It should be strongly cohesive and fully encapsulated. The service should only do what it is named to do. A ‘beach finding’ service should not know anything about gas stations. It knows beaches. You should not have any beach logic in your application; it lives in the beach service. If you want to know where a beach is, ask the service. If you want to know how popular it is, ask the service. If you want to know how to get there… ask the mapping service; the beach service doesn’t understand directions.

I like to think of services as Legos. Very simple to use and combine. By connecting a few together I have a working application. That’s when the magic happens. The real benefit is that the service is independent of your application and has its own delivery and maintenance schedule. If it’s tested and solid, that part of your app is already done. Don’t do it again. It’s free for everybody after the first consumer. When your library is big enough you can build a robust new application quickly and focus primarily on the business needs at hand. That’s adding value!

From a deployment standpoint I like my service layer to be out of the DMZ and locked up tighter. It’s an extension of your data and is equally sensitive. Lock it up and protect it as best you can. Sure this is a hardware issue but at some point we’re going to actually deploy something on a real box and it’ll be subject to real users in the real world. I recommend doing a security audit periodically so you know where you stand.

If you’ve followed our RESTful talks you should already be thinking of making RESTful services. Keeping them stateless will allow you to load balance and scale this tier as needed with minimal effort. Be careful: you may be stateless, but if you’re caching data (yeah, yet another topic to dig into), make sure you’re either caching it externally or have a way to synchronize across instances.

I’d be remiss not to state that your RESTful service should probably return plain JSON constructs. They’re so easy to use these days. Most languages have utilities available to generate and consume JSON, so you don’t even have to sweat this one. So your service can now be consumed by anyone, anywhere, and… wait. No, I don’t really like that all that much. I’m going to go back to locking up my service tier. If I really want this exposed externally, I recommend creating a gateway application to handle those chores. Keep your tiers intact, and in ship shape. You’ll do just fine.

Previously I’ve mentioned that we’re on a mission (part 1, part 2). We’re instituting a new platform. A central part of this initiative is a remote service layer available to all of our applications. There is a lot of talk out there about services, orchestration, and RESTfulness, but who’s talking about what is IN that service?

Me, that’s who. Buckle up as we’re about to begin part 3 of our journey!

I was originally going to discuss the dangerous phenomenon of scope creep: the scope of a project expanding over its lifecycle. While this is indeed something to watch for, and can lead to disaster, I’m going to go in a different direction for this entry. Recent conversations within our team brought to light a more pertinent topic. One that still gets to the heart of the matter I wanted to address!

Top Down means starting with the business use case, breaking it down into parts, and implementing those parts. From a services standpoint, this means that once the application has defined what it needs, appropriate services are created to satisfy those needs.

This is where you’ll see odd service names, or service names that mirror business units. You’ll also see something else if you look closer: methods that don’t quite fit together. If you take the business use case out, do the calls seem to be a jumbled mess? Do they cross specific areas or domains?

Let’s define some more terms:

Separation of Concerns
This is the practice of ensuring there is minimal to no overlap in functionality between any two systems – in our case applications and services.

Areas of Responsibility (AOR)
This is a military term that defines a geographic area that a commander has authority over. For our purposes it refers to logical concepts and models, and to which application or service is the commanding officer.

If the only thing that makes the individual method calls make sense together is their common consumption by a single application, you have a warning sign. Break the key ideas into groupings. These groupings can lead you to see the real AORs.

Something that may not be visible, and is one of the harder ideas to grasp, is what you’re missing with Top Down design. You can meet the needs as requested and be successful. You had a request and you responded. Huzzah. You consolidated logic and optimized performance. You’ve applied caching and load balancing and have a robust system. These are all incredibly valuable and worthy in their own right… but would you believe that there’s more?

This is where Bottom Up design comes in.

Step away from the originating application and become the service. You really need to step into these shoes and now look at the world from this point of view. Ask the hard questions! Should my Beach service respond with ratings? Yeah, that makes sense. It’s information that is very specific to the beach. When I want the rating I’ll most naturally want to start there to see if I can find this data.

Should it respond with directions? Should it contain its own address? Oh, now we’re approaching the grey area.

How many other objects have an address? How many other objects will need to find directions? These concepts are related to the beach, but are not in the beach’s domain. I recommend keeping them out.

The beach can have a URI to an address object served by the address service. This is logical. The address service can return latitude and longitude coordinates. This is also logical. And the mapping service can give me directions, even if that service is really just Google Maps. Does Google know about my Beaches? Anything about my rating scheme? Nope, it doesn’t have to and is more powerful because it doesn’t.
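To sketch what that linking can look like, a hypothetical beach resource might carry URIs instead of embedded address data (all field names and URLs here are invented for illustration):

```json
{
  "id": 42,
  "name": "Hanalei Bay",
  "rating": 4.8,
  "links": {
    "self": "https://api.example.com/beaches/42",
    "address": "https://api.example.com/addresses/9001"
  }
}
```

The beach service owns the name and rating; following the address link reaches the address service, whose latitude and longitude can in turn be handed to the mapping service for directions.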

Fiercely protect what is and is not in the domain of each service. When you do this something grand can now occur. Emergent applications.

When your services are designed to meet the application’s needs, yet remain flexible and locked into a core definition of what each service represents, they can be used by others. Used in ways that you didn’t design. This is the critical part. If you create a Bucket service intended to be used at the beach, you could have success defining how much sand it can hold. But what a missed opportunity! The service that simply defines the bucket and the rules for the bucket is more powerful. If that bucket can be used to hold water, oil, rocks… whoa. How about turning it upside down to be stacked? You didn’t think of that one, but the service should. Not. Care. Define the entity and let the use cases and applications define the rest.

A system of services that can be leveraged for uses beyond their design. New concepts and ways of assembling applications that were previously impossible. This is what we’re doing. We’re starting to see the potential. We’re fighting for the services because they can’t fight for themselves.

Elwood: It’s 106 miles to Chicago, we got a full tank of gas, half a pack of cigarettes, it’s dark… and we’re wearing sunglasses.
Jake: Hit it.
-The Blues Brothers

I’m on a mission. I’ve been tasked with stewarding a new paradigm in how we build our software here at Lampo. There are many facets to this and being inspired by Kevin’s recent posts I, too, am going to try exposing the process as I go through it. I welcome you to join me on my journey and hopefully we’ll learn something along the way.

A new paradigm.

Have you introduced something new to your organization? It can be met with criticism or fanfare… or both. I’m fortunate to work at a company that believes in the book Who Moved My Cheese. It’s required reading for our employees. The essence is that change is inevitable and there are ways to enjoy the process. I recommend reading this lil’ classic if you haven’t already. Some change ends up moving a lot of cheese. Changing the fundamental architecture of all our applications falls into this bucket.

What is this change?

Simply put, we are moving to a service-oriented architecture. That’s overly simplified and doesn’t encompass the whole strategy; “One Platform” is more like it. Our existing software is not bad by any means. There’s nothing fundamentally wrong with it. It’s exactly what has made us successful. If you’ve ever had the thought of changing how things are, take a moment to remind yourself that your company’s success is based on the state of things today. It’s successful. The new change is untested, and high risk. Proceed with caution!

Why change?

Yes we have been successful. Our current apps have brought us to where we are. Will they get us to where we want to go? No. We’re growing and are experiencing growing pains. This is where change comes in!

We have a fair number of applications. We also have a fair number of capabilities that are similar across many of these applications and are not truly specific to any one app. This includes, but is not limited to, concepts such as handling content, managing users, security, user interface, and user experience. Kevin’s recent posts on Implementing Modular Web Design talk about one of these new foundational concepts. We are very decentralized and desire to move to a centralized system.

We’re weaving SOA into our fabric because it fits in with our objectives and our end goal to give hope. Always, always, always know why you’re in business and what you are trying to achieve. This change isn’t something we’re doing because it’s popular, or a great way to impress peers. We chose a direction that fit our team and fit our mission.

Fundamental changes are risky ventures. I’ve heard them called “science projects”, and anyone bringing one up was asking for trouble. I’ve known some companies that have almost sunk their business with a “major re-write”. Embracing change also means understanding what’s at risk – both good and bad.

This type of open discussion has helped form our strategies around introducing these changes. This level of change is deep. It affects our estimates, project management, developer communication, code management, testing, deployment process, and governance. Risk management needs to be ever present when every stage of the development life cycle is affected.

Be Confident!

We have an amazing team. I’m confident in my teammates, the mission, and myself. I have no doubt we will be successful. I’m not going to measure success by adherence to a spec or standard on how to implement SOA. It’s about abstracting common elements into a reusable toolkit in order to deliver better products quickly.

Elwood: It’s got a cop motor, a 440 cubic inch plant, it’s got cop tires, cop suspensions, cop shocks. It’s a model made before catalytic converters so it’ll run good on regular gas. What do you say, is it the new Bluesmobile or what?
-The Blues Brothers

Modular Web Design by Nathan Curtis and Web Anatomy: Interaction Design Frameworks that Work by Robert Hoekman Jr. and Jared Spool are great resources for UX developers. They focus on the theory, philosophy and process of designing modularly. Unfortunately, these books don’t go into any detail about how to actually implement, in code, a modular web design system in a web app. Some excerpts in Modular Web Design hint at what other companies have done, but there are no concrete steps to guide the process of implementation.

In this post I hope to outline a global strategy that is programming-language agnostic, easy for the end-user (in this case other developers) to implement, and hopefully elegant in its abstract design.

As a UX Engineer I approach code architecture the same way I approach a wireframe design. I start with what I want the end use case to look like and work my way back through the details. In this case my end users are my fellow developers and the goal for them is a super-easy implementation of reusable components that borders on the magical. Think jQuery’s “write less. do more.” motto on steroids.

I want developers to be able to write one line of code anywhere in the view tier of the web app and have that magically transformed into a fully functional component complete with CSS and JS wirings when the page is actually rendered by the server.

The end goal is one line code snippets that transform into HTML, CSS, and JS at runtime.

In order to achieve this, you need something in the request cycle that’s looking for these one-liners and triggering the component engine (which renders the HTML, CSS, and JS) to spring into action. Ideally this is some kind of string parser connected to your view composition engine, preferably pretty late in the composition cycle so the components are one of the last things rendered by the server before the user sees the page.

The high-level request flow looks something like this: a request comes in, the view composition engine assembles the page, the output parser scans the composed view for snippets, the component engine renders each one, and the finished page goes back to the user.

Once the ability to detect snippets and return components in their place has been enabled, you can begin to break down the input and output processes further.

At some point in the input process, the snippet that the output parser is scanning for will need to be broken down into arguments that can be passed to the component engine. The syntax of the snippet will be discussed in a later post, but for now let’s assume the snippet is transformed into something the component engine can understand. We may also want to add rules at the input stage to check for things like component dependencies, or to ensure that there aren’t too many of one type of component on the page (more than one site-wide navigation component is probably not a good idea).

The component engine receives the appropriate arguments, works its magic, and returns HTML and paths to static assets. The HTML will need to be inserted in the same location that the snippet was previously. The static assets will require some more advanced routing.

Best practice is to have CSS references in the head and any static JS files referenced immediately before the closing body tag. There will need to be something at the output stage that can manage the creation and injection of the necessary <link> and <script> tags.

That’s pretty much it for the high-level overview. Obviously there’s more complexity that can be added like preventing the addition of duplicate JS and CSS files or combining those static assets into one file to lower the number of http requests, but that’s my vision for a basic implementation. A one line snippet becomes HTML, CSS, and JS all wired together. It’s modular, reusable, and wicked fast for implementation.
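A minimal sketch of that flow, with invented names and a made-up {{component:…}} snippet syntax (the real syntax comes in the next post), might look like:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the output-parsing stage: find snippets in the composed
// view, swap in component markup, and inject the collected assets.
public class SnippetParser {

    private static final Pattern SNIPPET = Pattern.compile("\\{\\{component:(\\w+)\\}\\}");

    // De-duplicated asset lists collected while rendering components.
    private final Set<String> cssFiles = new LinkedHashSet<>();
    private final Set<String> jsFiles = new LinkedHashSet<>();

    // Scan the composed view for snippets and replace each one with
    // the HTML returned by the component engine.
    public String render(String composedView) {
        Matcher m = SNIPPET.matcher(composedView);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(out, Matcher.quoteReplacement(renderComponent(m.group(1))));
        }
        m.appendTail(out);
        return injectAssets(out.toString());
    }

    // Stand-in for the component engine: returns markup and records
    // the static assets the component depends on.
    private String renderComponent(String name) {
        cssFiles.add("/static/css/" + name + ".css");
        jsFiles.add("/static/js/" + name + ".js");
        return "<div class=\"component " + name + "\"></div>";
    }

    // Inject <link> tags before </head> and <script> tags before </body>.
    private String injectAssets(String page) {
        StringBuilder links = new StringBuilder();
        for (String css : cssFiles) {
            links.append("<link rel=\"stylesheet\" href=\"").append(css).append("\"/>");
        }
        StringBuilder scripts = new StringBuilder();
        for (String js : jsFiles) {
            scripts.append("<script src=\"").append(js).append("\"></script>");
        }
        return page.replace("</head>", links + "</head>")
                   .replace("</body>", scripts + "</body>");
    }
}
```

A real implementation would hang off the view composition engine rather than raw regex scanning, but the shape is the same: detect, render, and route assets to the right spot in the page.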

In my next post I’ll discuss the syntax of the invocation snippets. Specifically, strategies to prevent naming collisions, how to efficiently pass configuration parameters, and more. What are your thoughts on this high-level implementation? Would you do something different? Have you implemented something similar? If so, post a comment below.

A couple of us are working on a new project that involves Salesforce CRM. We recently came upon a few technical challenges related to our need to keep a subset of the data stored in the cloud in-sync with our internal application servers. These include:

Limited number of API calls we can make per 24-hour period. Salesforce charges a per-user license fee, and each license grants a certain number of API calls. If our project scales as we expect it to, we could easily exceed the number of calls we’re allotted.

Keeping development data in sync with our internal development server and test data with the test server.

We came up with two initial solutions:

Have our internal systems poll the cloud for change.

Have the cloud send a message to our systems informing us of a change.

The first approach requires a balancing act: how up-to-date the information needs to be on our side vs. how many API calls we can make. If we poll every second, we would consume 86,400 calls per day – more than we will probably have allotted when we launch. We also can’t consume 100% of our API calls on polls: once we detect that something needs to sync, we need calls to download the changed objects, and we also need calls to periodically send data to the cloud.

The second approach seems to be the better one, as outbound messages don’t count against daily API limits. Also, we anticipate the synced object types would only ever incur a few hundred changes per day, far fewer than the thousands of polling calls we would have to make. The problem then becomes how to implement the sync in a way that works in our production ‘org’, our ‘sandbox’, and the ‘developer accounts’ that we developers are using. The way our web stack is structured, however, only allows third parties to reach our production web environment. We could come up with our own way to queue messages in production intended for other levels, but we would need to be even more concerned about security and we would likely be duplicating something that the marketplace already provides.

It turns out someone does: Amazon.

I’ve been familiar with Amazon’s cloud computing offerings for some time now, just have never been able to utilize them with previous employers. Amazon has a service known as SQS, or Simple Queue Service, that “offers a reliable, highly scalable, hosted queue for storing messages as they travel between computers.”

Furthermore, SQS is cheap: $0.01 per 10,000 requests (send and receive are considered separate), and about $0.10 per GB of transfer. Far cheaper than buying additional Salesforce licenses.

Salesforce has its own language known as Apex, which runs on their servers and has a syntax very similar to Java’s. SQS messages are fairly simple, with Query- and SOAP-based APIs available. The one complexity is the means of signing a message: SQS messages include HMAC signatures, generated with a private key you establish with Amazon, that prevent messages from being forged.

The SQS implementation is quite simple. An Apex trigger exists on each object that we need to sync. That trigger enqueues a message containing the record type, the ID of the changed record, and a timestamp. This message goes to an SQS queue that corresponds to the environment (dev, test, prod, etc.). A scheduled task on our end polls SQS every few seconds for changes.

How do you sign a message in Apex that conforms to SQS specs? Apex does have some good built-in libraries, including a Crypto class that even has an AWS example in its documentation (though for a different service using a much simpler authentication scheme). Here is the solution I came up with:
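The signing steps translate directly across languages; sketched here in Java (Apex’s Crypto.generateMac is the analogous call), with hypothetical hostnames, paths, and keys:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of SQS Query-API request signing (signature
// version 2): sort the parameters, build the canonical string-to-sign,
// then HMAC it with the AWS secret key and Base64-encode the result.
public class SqsSigner {

    public static String sign(String secretKey, String host, String path,
                              Map<String, String> params) throws Exception {
        // Parameters must be sorted by name and percent-encoded.
        TreeMap<String, String> sorted = new TreeMap<>(params);
        StringBuilder query = new StringBuilder();
        for (Map.Entry<String, String> e : sorted.entrySet()) {
            if (query.length() > 0) query.append('&');
            query.append(encode(e.getKey())).append('=').append(encode(e.getValue()));
        }

        // Canonical string: VERB \n host \n path \n canonical query.
        String toSign = "GET\n" + host + "\n" + path + "\n" + query;

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] raw = mac.doFinal(toSign.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(raw);
    }

    // AWS-style percent encoding: like URL encoding, but spaces become
    // %20 (not +), '*' becomes %2A, and '~' is left alone.
    private static String encode(String s) throws Exception {
        return URLEncoder.encode(s, "UTF-8")
                .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
    }
}
```

The resulting signature rides along as the Signature parameter of the request; Amazon recomputes it server-side with your shared secret and rejects anything that doesn’t match.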

Imagine never having to hand-code a contact form ever again. Now I hear you, you’re saying “Kevin, I’m an awesome developer, I don’t hand-code contact forms, I’ve got code snippets, templates, and frameworks that allow me to quickly implement contact forms, why would I want a component library?” The problem with snippets, templates, and frameworks is three words that make every developer cringe: “copy and paste.”

Imagine working on a site that has 20+ contact forms all for different business modules within the company. Now imagine that you need to change something about all 20 of those forms. Now you’re saying, “But Kevin, CSS rocks my socks off, I reuse stylesheets like a champ!” To which I say, what if you need to alter the markup? What if you need to attach a bit of javascript to transform your contact forms for mobile devices? With copy and paste there’s just no global hook at the component level.

Another argument favors existing component libraries. Most of these are JavaScript based: jQuery UI, YUI, LivePipe, etc. They’re great, but there’s still a good deal of configuration involved. You need to write the markup, create the JS bindings between the markup and the library, and then download a theme or write some CSS to make it all look pretty. The type of component library I’m talking about does all these things for you.

One line of code gets you markup, JS, and CSS. All wired up to work beautifully together and provide you with usability-tested, consistent, semantically-correct components to use throughout your web app. Plus you have the control to change and upgrade components on the fly and all users of your components will receive the updates. No need to do a search and replace (AKA search and destroy). It just happens.

Have I sold you yet? Do you want to build one of these? Does it sound as amazing as a triple rainbow unicorn?

Well it did to me. So how do you implement something like this? Hoekman and Spool’s book provided me with some great theory, but nothing code based. Curtis’s book does an awesome job of describing implementation for a design department (another benefit, another post), but also left me hanging when it came to coding.

So that’s where we begin. In the next post I’ll discuss how we approached the problem of actually implementing the Lampo component library lovingly known as “Gutenberg UX.”

That’s the origin story for our component library. Have you implemented something similar or experienced any drawbacks implementing jQuery UI or another javascript component library? Comment on this post and share your story with us!

Tim: There he is!
King Arthur: Where?
Tim: There!
King Arthur: What? Behind the rabbit?
Tim: It *is* the rabbit!
-Monty Python and the Holy Grail

This. Is. Ridiculous.

After SpringOne I was anxious to try Grails with RabbitMQ on my own. I downloaded the complete bundle for Windows and set it up. I’ve never run an Erlang application so I felt a bit funny, but it was simple and painless. I set an environment variable for Erlang (ERLANG_HOME). Then I just hard coded a ‘base’ directory for logging and whatnot in Rabbit (server.bat). Yeah, I could have set another env var there too, but I was too anxious.

I started the server.bat file and had RabbitMQ running. Though it was lonely.

Sir Bedevere: Well, now, uh, Lancelot, Galahad, and I, wait until nightfall, and then leap out of the rabbit, taking the French by surprise – not only by surprise, but totally unarmed! -Monty Python and the Holy Grail

I jumped into STS and ran ‘install-plugin rabbitmq’
A simple config setting (Config.groovy):
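Something along these lines wires the plugin to a local broker (values shown are the RabbitMQ defaults; treat this as a sketch and check the plugin docs for your version):

```groovy
rabbitmq {
    connectionfactory {
        username = 'guest'
        password = 'guest'
        hostname = 'localhost'
    }
}
```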

Clever code never did no-one nay but no good. Not know-how, not no-way.

Yeah, I’m not sure what I just said either. I do know that “clever” and “slick” are words that I don’t care to hear about code, or a coding solution to a given problem. Code needs to be simple. How simple? As simple as possible [but no simpler, yadda yadda yadda]. “But my code IS simple” you say. I’m done.

Cool.

…just one more thing. Do all your methods only do one, and only one, thing? Yes, I said all.

Every method (or function) should accomplish only what its name says it does. If you have a method “saveThing”, it should only save that thing. It should not check for something else, optionally set a flag, make some other call, then save the thing, and return the price of tea. It should only save the thing. This means you will have many very small methods.

At a high level this will lead to something really powerful: readable code. You see, as your fine granularity builds upward, you are essentially utilizing the ‘composition design pattern’ on a micro scale.

What are some signs that you may have code that can be refactored into its own method?

Length. If a method is growing beyond 10 lines, take a close look and make sure it’s still doing only one job. As methods grow they want to do more than they should. You can’t convince me that a 1,000-line method is as small as it can get.

Unused parameters or only using one or two variables off an object. Too much data will lead to putting process in the wrong place. If you don’t have the data you can’t do the wrong thing with it. Don’t pass in a few extra objects just in case. If you don’t need them pull them out. Use static analysis to remove unused and dead parameters (and code blocks). If you really only need to do work on a username… just pass in the string, and not the whole user object.

Comments. If you feel the need to explain a section of code inline, it should be its own function with a well-suited name. You either use too many comments (rare, and not cool, but that’s another topic) or you use one or two in confusing places. That’s the tingly feeling you’re learning to notice that says “put this code in its own method”.
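As a tiny hypothetical sketch (all names invented), the saveThing idea looks like this once each step gets its own method:

```java
// Each extracted method does the one thing its name claims, and
// saveThing reads like a table of contents instead of a tangle.
public class ThingService {

    public void saveThing(Thing thing) {
        validate(thing);
        applyDefaults(thing);
        persist(thing);
    }

    private void validate(Thing thing) {
        if (thing.name == null || thing.name.isEmpty()) {
            throw new IllegalArgumentException("Thing needs a name");
        }
    }

    private void applyDefaults(Thing thing) {
        if (thing.category == null) {
            thing.category = "uncategorized";
        }
    }

    private void persist(Thing thing) {
        // Stand-in for the real data-access call.
        thing.saved = true;
    }

    public static class Thing {
        public String name;
        public String category;
        public boolean saved;
    }
}
```

Notice there’s nothing to comment: the method names carry the explanation, which is exactly the point.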

This is the path to better code. Code that does what it says it does, nothing more and nothing less. It sounds so simple, but is the essence of a truly powerful system. Keep your code dumb, you’ll thank me for it.

If you listen to Dave Ramsey at all, you know he’s all about wise investing. The daveramsey.com development team has made a significant investment in a new RESTful service layer. We’ve invested time in building it so that our site can continue to scale and grow to reach millions more with the message of Financial Peace. And, we’ve invested in the Mach-II framework by writing the code for the new REST Endpoint for version 1.9, so that simple and powerful RESTful services are available to the entire ColdFusion developer community.

Not only that, we’re investing in an upcoming ColdFusion conference, BFusion ’10. I’ll be presenting a session titled “Building a RESTful Service Layer in ColdFusion“. Well over a dozen Dave Ramsey developers will be making the trek from Nashville to beautiful Bloomington, IN for BFusion/BFlex this year. The facilities, speakers, content, and networking opportunities are very valuable. It’s a smaller conference so it’s easy to connect with industry leaders and really dig in to learn some new things. If you’re doing anything with ColdFusion or Flex, or think you might want to get started, this conference is well worth your time.

Recently, I had a fun problem to work on with our homegrown mapping application. I wanted to create a POI that would bring multiple images together to create an icon on the fly so we didn’t have to make two or three dozen of them. Normally you just embed a custom image to use as a POI icon. With all the types of things we might add to an icon, though, the swf could get very large and the icons could become unmanageable. Think of it as if you were going to show a radio station on your map. Perhaps you want a pretty little FM or AM graphic on it depending on the type of station. Traditionally you would have to make two or three graphics to cover all the combinations, then embed those graphics into the system. This way we have three separate graphics, and they are smaller in file size, so they don’t add as much weight to the application. In short, think of it as Photoshopping with code.

So here comes the how-to. It seems pretty simple now that I am writing it, but it was a bit of a pain to figure out. In English, what I want to do is bring in the image as a class, turn it into a Bitmap, then get the BitmapData out of it. Call merge() on the BitmapData, pass in the proper information, and get back some new BitmapData. We will then turn that into a Bitmap and put it into a map icon.

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Point;
import flash.geom.Rectangle;

/* We can set this as a class/global variable so the functions know what
   kind of multiplier we need. It tells merge() how strongly to combine
   the two images (0xff = full strength). */
private var mult:uint = 0xff;

// initials and fontSize are used by the text-drawing step shown in the
// next snippet. The badge declaration below is a reconstruction; the
// original post lost that line, so BadgeAsset is a placeholder for
// whatever embedded graphic class you use.
private function createPoiBitmap(mainPoi:Bitmap, initials:String, fontSize:Number = 10):Bitmap
{
	// This is the main graphic for the POI that I need to get.
	var bmdMainPoi:BitmapData = mainPoi.bitmapData;

	// The badge graphic (e.g. the embedded AM/FM asset) to merge on top.
	var bmdBadge:BitmapData = (new BadgeAsset() as Bitmap).bitmapData;

	// This is the bitmap that will house the whole picture.
	var holder:BitmapData = new BitmapData(45, 34, true, 0x00000000);

	// Rectangles defining the region to pull from each source bitmap.
	var rectItm:Rectangle = new Rectangle(0, 0, bmdMainPoi.width, bmdMainPoi.height);
	var rectBadge:Rectangle = new Rectangle(0, 0, bmdBadge.width, bmdBadge.height);

	// Points where each merged item will be placed in the holder.
	var ptItm:Point = new Point(0, 0);
	var ptBadge:Point = new Point(13, 0);

	// Add the items to the holder image.
	holder.merge(bmdMainPoi, rectItm, ptItm, mult, mult, mult, mult);
	holder.merge(bmdBadge, rectBadge, ptBadge, mult, mult, mult, mult);

	// Make the holder image a bitmap and return it for the map icon.
	var bm1:Bitmap = new Bitmap(holder);
	return bm1;
}
The following piece of code is actually adding the Text into the image. It is going to load up an object that I am caching so it doesn’t have to build it every time. I took out that if statement so you could see how it works.