Salesforce and Amazon SQS

A couple of us are working on a new project that involves Salesforce CRM. We recently ran into a few technical challenges related to keeping a subset of the data stored in the cloud in sync with our internal application servers. These include:

The limited number of API calls we can make per 24-hour period. Salesforce charges a per-user license fee, and each license grants a certain number of API calls. If the project scales as we expect it to, we could easily exceed that allotment.

Keeping development data in sync with our internal development server, and test data with the test server.

We came up with two initial solutions:

Have our internal systems poll the cloud for changes.

Have the cloud send a message to our systems informing us of a change.

The first approach requires a balancing act: how up to date the information needs to be on our side versus how many API calls we can afford. If we polled every second, we would consume 86,400 calls per day, more than we will probably have allotted when we launch. We also can't spend 100% of our API calls on polls: once we detect that something needs to sync, we need additional calls to download the changed objects, and we periodically need calls to send data to the cloud as well.
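The budget math behind that trade-off can be sketched quickly (the intervals below are illustrative, not our actual settings):

```python
# Back-of-the-envelope API budget for the polling approach.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def polls_per_day(interval_seconds):
    """API calls consumed per day when polling at a fixed interval."""
    return SECONDS_PER_DAY // interval_seconds

# Polling every second burns 86,400 calls on polls alone.
print(polls_per_day(1))   # 86400
# Even a generous 15-second interval still costs thousands of calls.
print(polls_per_day(15))  # 5760
```

Every one of those calls is spent just asking "did anything change?", before a single changed record has been downloaded.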

The second approach seems to be the better one, as outbound messages don't count toward the daily API limits. Also, we anticipate that the synced object types would only ever incur a few hundred changes per day, far fewer than the thousands of polling calls we would have to make. The problem then becomes how to implement the sync in a way that works in our production 'org', our 'sandbox', and the 'developer accounts' that we developers are using. The way our web stack is structured, however, third parties can only reach our production web environment. We could come up with our own way to queue messages in production intended for the other levels, but we would then have to be even more concerned about security, and we would likely be duplicating something the marketplace already provides.

It turns out someone does: Amazon.

I’ve been familiar with Amazon’s cloud computing offerings for some time now, but have never had the chance to use them with previous employers. Amazon has a service known as SQS, or Simple Queue Service, which “offers a reliable, highly scalable, hosted queue for storing messages as they travel between computers.”

Furthermore, SQS is cheap: $0.01 per 10,000 requests (sends and receives are counted separately), and about $0.10 per GB of transfer. That is far cheaper than buying additional Salesforce licenses.
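To put that in perspective, a rough estimate at those rates, using the few-hundred-changes-per-day figure from earlier (the exact volume is an assumption):

```python
# Rough SQS cost at the 2011 prices quoted above.
REQUEST_PRICE = 0.01 / 10_000   # dollars per request
messages_per_day = 500          # assumed change volume

# Each message costs one send request plus one receive request.
daily_cost = 2 * messages_per_day * REQUEST_PRICE
print(round(daily_cost, 6))  # about a tenth of a cent per day
```

Even with polling SQS itself every few seconds for new messages, the total stays in the pennies-per-month range, and none of it counts against the Salesforce API allotment.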

Salesforce has its own language known as Apex, which runs on their servers and has a syntax very similar to Java’s. SQS messages are fairly simple, with Query- and SOAP-based APIs available. The one complexity is signing a message: each SQS request includes an HMAC signature, computed with a secret key you establish with Amazon, that prevents messages from being forged.

The SQS implementation is quite simple. An Apex trigger exists on each object that we need to sync. That trigger enqueues a message containing the record type, the ID of the changed record, and a timestamp. The message goes to an SQS queue that corresponds to the environment (dev, test, prod, etc.). A scheduled task on our end polls SQS every few seconds for changes.
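The message itself needs nothing beyond those three fields; a sketch of the payload the trigger could enqueue and the consumer would read back (the field names and sample ID are illustrative assumptions, not our actual schema):

```python
import json
from datetime import datetime, timezone

def change_message(record_type, record_id):
    """Build a sync-notification payload: record type, ID, timestamp.
    Field names here are illustrative assumptions."""
    return json.dumps({
        "type": record_type,
        "id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# The consumer only needs the type and ID to fetch the full record
# from Salesforce with a regular API call; the timestamp lets it
# discard notifications older than data it has already synced.
payload = json.loads(change_message("Account", "001A000001XyZzQ"))
```

Keeping the payload to identifiers rather than full record data also means a lost or duplicated message is harmless: the consumer always re-fetches the current state from Salesforce.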

How do you sign a message in Apex that conforms to the SQS spec? Apex does have some good built-in libraries, including a Crypto class that even has an AWS example in the documentation (though for a different service, using a much simpler authentication scheme). Here is the solution I came up with:

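In Apex, the building blocks are `Crypto.generateMac` and `EncodingUtil.base64Encode`; the signing scheme itself (AWS Signature Version 2, which the SQS Query API uses) is sketched below in Python for clarity. The access key, queue path, and message body are placeholders:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_sqs_request(method, host, path, params, secret_key):
    """AWS Signature Version 2: percent-encode each parameter per
    RFC 3986, sort by name, join into a canonical query string,
    then HMAC-SHA256 the string-to-sign and base64 the digest."""
    canonical = "&".join(
        quote(k, safe="") + "=" + quote(v, safe="")
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join([method, host, path, canonical])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Placeholder request; real values come from your AWS account/queue.
params = {
    "Action": "SendMessage",
    "MessageBody": "Account:001A000001XyZzQ",
    "AWSAccessKeyId": "AKIDEXAMPLE",
    "SignatureVersion": "2",
    "SignatureMethod": "HmacSHA256",
    "Timestamp": "2011-03-08T15:44:00Z",
    "Version": "2009-02-01",
}
sig = sign_sqs_request("GET", "queue.amazonaws.com",
                       "/123456789012/dev-sync-queue", params, "secret")
```

The signature is then appended as a `Signature` parameter on the request. In Apex the same string-to-sign would be fed to `Crypto.generateMac('hmacSHA256', ...)` with the secret key as a Blob, and the result base64-encoded with `EncodingUtil`.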


This entry was posted on March 8, 2011 at 3:44 pm and is filed under Projects.