Friday, January 20, 2017

You're an external system to Salesforce. Stuff happened, and now there are a dozen dirty records that need to be updated in Salesforce to reflect the changes. An active Salesforce Session ID (a.k.a. access token) is available for making API calls. All the records have their corresponding Salesforce Ids, so a direct update can be performed. Ignore for the moment that the records might also be deleted and in the Recycle Bin, or fully deleted (really truly gone).

To further complicate matters, there is a quagmire of triggers, workflow, and validation on the objects in Salesforce. This is a subscriber org for a managed package, so you can't just fix those.

Which API do you use to update those records in Salesforce? Pick a path:

REST API PATCH requests

Postmortem:

Each request round trips to Salesforce, processes all the triggers, workflow, and validation on each individual record, and returns the result. Individually, each request takes only a couple of seconds, but collectively they take far too long for the waiting user.
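The one-request-per-record pattern can be sketched as below. This is a minimal sketch: the instance URL, API version, object, Ids, and field values are all placeholders.

```python
# Sketch of the one-PATCH-per-record REST approach. The instance URL,
# API version, and field values below are placeholders.
INSTANCE_URL = "https://na1.salesforce.com"
API_VERSION = "v39.0"

def build_patch_requests(sobject_type, dirty_records):
    """Build one PATCH request spec per dirty record.

    dirty_records maps a Salesforce Id to a dict of field updates.
    """
    requests = []
    for record_id, fields in dirty_records.items():
        requests.append({
            "method": "PATCH",
            "url": f"{INSTANCE_URL}/services/data/{API_VERSION}"
                   f"/sobjects/{sobject_type}/{record_id}",
            "body": fields,  # JSON-encoded in the real call
        })
    return requests

# A dozen dirty records means a dozen round trips.
dirty = {f"001xx0000000{i:03d}": {"Name": f"Account {i}"} for i in range(12)}
specs = build_patch_requests("Account", dirty)
print(len(specs))  # 12 separate round trips
```

Each spec here would become its own HTTPS request, which is exactly why the wall-clock time scales linearly with the record count.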

Bulk API

It's primarily billed as a way to asynchronously load large sets of data into Salesforce. Let's see how we go with only 12...

You have a harrowing brush with death by API ceremony. If the asynchronous gods favor you it is a timely update. Otherwise disgruntled users tear you limb from limb as they get fed up of waiting for the results to come back.

Results:

There are five API calls to be made to complete this operation on a good day. If things go bad then you might be waiting longer than expected. You need to keep polling the API for the job to complete before you can get the results back. You're also burning five API calls where you could be using one to complete the entire operation.
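Those five calls can be enumerated as below. This is a sketch following the Bulk API v1 endpoint shape; the async API version and the job/batch Ids are placeholders.

```python
# Sketch of the five Bulk API round trips needed for one small update job.
# The async API version and the job/batch Ids are placeholders.
BASE = "/services/async/39.0"

def bulk_update_call_sequence(job_id="750xx0", batch_id="751xx0"):
    """The five round trips for a single-batch Bulk API update."""
    return [
        ("POST", f"{BASE}/job"),                                   # 1. create the job
        ("POST", f"{BASE}/job/{job_id}/batch"),                    # 2. add the batch of records
        ("POST", f"{BASE}/job/{job_id}"),                          # 3. close the job
        ("GET",  f"{BASE}/job/{job_id}/batch/{batch_id}"),         # 4. poll the batch status (n times)
        ("GET",  f"{BASE}/job/{job_id}/batch/{batch_id}/result"),  # 5. fetch the results
    ]

for method, path in bulk_update_call_sequence():
    print(method, path)
```

Step 4 is the variable one: on a bad day you keep polling it until the batch finally completes.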

Review:

With only a single call to check the batch status, it came back at a respectable 3,350 ms total for all the API calls. That doesn't include any of the overhead on the client side. There could be some variance here while waiting for the async job to complete.

Apex REST Web Service

OK, I'll be honest, after all those Bulk API calls I'm exhausted. Also, I can't just deploy an Apex Web Service to the production org I was benchmarking against.

Your fate is ambiguous because the narrator was too lazy to test it. Go to page 0.

Review:

Performance is probably "pretty good"™ with only one API call and one transaction that can use the bulkification in the triggers. However, you'll need to define the interface, maintain the code, and create tests and mocks.

Revised Results

I had some time to revisit this, create an Apex REST web service in the sandbox, and test it.

It takes a bit more effort to create the Apex class with the associated test methods and then deploy them to production. The end result is a timely response.

Revised Review:

In an ideal world the Apex REST web service would be streamlined to the operation being performed. I cheated a bit and created it with the same signature as the composite batch API. It also bypasses any sort of error checking or handling.
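For reference, the composite batch shape the Apex service mimicked looks roughly like the sketch below. The object name, Ids, and fields are placeholders; the whole payload goes to Salesforce in a single POST.

```python
import json

# Sketch of a REST Composite Batch request body that updates a dozen
# records in one API call. Object name, Ids, and fields are placeholders.
def build_composite_batch(sobject_type, dirty_records, api_version="v39.0"):
    """dirty_records maps a Salesforce Id to a dict of field updates."""
    return {
        "batchRequests": [
            {
                "method": "PATCH",
                "url": f"{api_version}/sobjects/{sobject_type}/{record_id}",
                "richInput": fields,
            }
            for record_id, fields in dirty_records.items()
        ]
    }

dirty = {f"006xx0000000{i:03d}": {"Amount": 100 + i} for i in range(12)}
payload = build_composite_batch("Opportunity", dirty)
# POSTed to /services/data/v39.0/composite/batch in a single call
print(len(payload["batchRequests"]))
```

Note that it's one API call but not one transaction: each subrequest still commits separately, which is why the composite batch timing suffered so badly against this org's triggers.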

To give a relative benchmark in the same sandbox, the SOAP API took 3,172 ms. That gives a time of around 4,500 ms in "production time".
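The extrapolation just scales the sandbox number by the ratio between the two SOAP measurements (3,172 ms in the sandbox versus the 4,262 ms production figure in the summary below):

```python
# Scale a sandbox timing into approximate "production time" using the
# SOAP API, which was measured in both environments, as the yardstick.
soap_sandbox_ms = 3172
soap_production_ms = 4262

scale = soap_production_ms / soap_sandbox_ms
print(f"scale factor: {scale:.2f}")  # roughly 1.34

def to_production_time(sandbox_ms):
    """Rough extrapolation; assumes the org-to-org slowdown is uniform."""
    return sandbox_ms * scale
```

It's a crude assumption (the two orgs differ in more than raw speed), but it gives a ballpark for comparing against the other production timings.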

Summary

Let's recap how long it took to update our dozen dirty records:

REST API PATCH requests — 24,216 to 33,096 ms

REST API Composite batch — 20,053 ms

REST API Composite tree — n/a for updates

SOAP API update call — 4,262 ms

Bulk API — 3,350 ms = 617 ms + 964 ms + 1,291 ms + n*242 ms + 236 ms

Apex REST Web Service — 4,517 ms (extrapolated from sandbox)

I was expecting the SOAP API to fare better against the Bulk API with such a small set of records and one API call versus five. But they came out pretty comparable.

Certainly as the number of records increases the Bulk API should leave the SOAP API in the dust, especially with the SOAP API needing to start batching every 200 records.

The other flavors of the REST API are pretty awful when updating multiple records of the same type, as they get processed in individual transactions. To be fair, that's not what they are intended for.

Your results will vary significantly, as the subscriber org I was testing against had some pretty funky triggers going on. Those triggers were magnifying the impact of sub-request transaction splitting by the composite batch processing. I wouldn't usually classify 4 second responses as "timely". It's all relative.

Also, I could have been more rigorous in how the timing measurements were made, e.g. averaging multiple runs. It's pretty difficult to get consistent times when there are so many variables in a multi-tenanted environment. Repeated calls could easily show a ±500 ms variance between calls.

Tuesday, January 10, 2017

I thought I'd touch on some of the security considerations that should be made when working with JavaScript from Visualforce.

Cross Site Scripting (XSS)

The risk of cross site scripting is always something to be aware of when developing web applications and needs to be considered when using JavaScript in Visualforce as well. Generally speaking, you want to prevent untrusted user input from being reflected back into JavaScript code.

As an example - say you were trying to read a page parameter into JavaScript with the following (Example only - DON'T DO THIS):

If you load this in the browser and include a &userParam=bar in the query string then the resulting HTML is:

Great! But what if someone puts something malicious into the URL? Something like 1';alert('All%20your%20Salesforce%20are%20belong%20to%20us');var%20foo='2. Here is the result in the browser:

And again from the page source:

So definitely not what we wanted. Notice how the apostrophe characters in the user input allow the code to take on an entirely different meaning. It would be very easy to extend the example to submit the current session cookies to an external resource - compromising your Session ID and pretty much everything else from there.

The solution here is to use JSENCODE to encode any text before reflecting it into JavaScript. This will use a backslash to escape any unsafe JavaScript characters, such as the apostrophe (').
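The effect can be illustrated with a simple backslash-escaping function. This is an illustrative Python stand-in for the idea, not the actual Apex JSENCODE implementation (which handles more characters than shown here):

```python
# Illustrative stand-in for what JSENCODE does: backslash-escape the
# characters that would let user input break out of a JS string literal.
# Not the real implementation - JSENCODE escapes more than this.
def js_encode(text):
    result = []
    for ch in text:
        if ch in ("'", '"', "\\"):
            result.append("\\" + ch)
        elif ch == "\n":
            result.append("\\n")
        elif ch == "\r":
            result.append("\\r")
        else:
            result.append(ch)
    return "".join(result)

# The (URL-decoded) malicious parameter value from the example above:
malicious = "1';alert('All your Salesforce are belong to us');var foo='2"
print("var foo = '" + js_encode(malicious) + "';")
```

With the apostrophes escaped to \', the payload stays inert inside the string literal instead of terminating it and executing.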

JavaScript Remoting with escape: false

When using JavaScript Remoting, beware of using {escape: false} in the configuration and loading the result into the DOM. Much like the basic XSS example above, it can be used to inject executable JavaScript into the page.

Avoid the urge to hack the Salesforce DOM

Not so long ago developers would put JavaScript in the sidebar so it would load with every page. This could then be used to manipulate the standard Salesforce DOM to do all sorts of things, such as showing/hiding components, adding additional validation, or changing their presentation.

Salesforce took umbrage with this approach as it opened up all sorts of consistency and security problems. As such, they shut the practice down in the Summer '15 release. Needless to say, those who had used the sidebar to manipulate the page found their workaround hack solution no longer working. Best to avoid these sorts of unsupported shenanigans if you can as they will be closed off sooner or later.

Protect the Session Id / AccessToken

The Session ID is the key to the kingdom. If someone can get hold of yours, they can interact with Salesforce like you would*. If you can avoid exposing it in Visualforce, then do so, e.g. use JavaScript Remoting rather than interacting with the APIs directly.

Depending on the context where you request it, you can get a "second class session id" that doesn't grant you the same level of access as a full "first class" UI session type.

For instance, a Session ID from Visualforce can be used to make API calls, but can't be used to access the full web UI (such as via frontdoor.jsp).

Checking the User Session Information page can show the different Session Types that get created. In the example below note the TempVisualforceExchange Session that was created from the Parent UI Session.

As per the previous post, I needed a trigger recursion mechanism that could prevent an infinite loop but still handle subsequent changes made to the Opportunity by workflows. Using the hashCode of the Opportunity worked. Then I got the following error from the managed package's after insert trigger:

This was odd, how could the after insert trigger be falling over on existing records that used the Opportunity Id in the composite key? The Opportunity had only just been inserted.

The subscriber org in this case had created their own after insert trigger on Opportunity that created additional OpportunityLineItem records. This resulted in a sequence of events that went something like:

1. DML Insert Opportunity

2. Subscriber's After Insert Trigger on Opportunity

3. DML Insert OpportunityLineItems

4. Subscriber's After Update Trigger on Opportunity

5. Managed Package After Update Trigger on Opportunity

6. Managed Package After Insert Trigger on Opportunity

Note how the subscriber's after insert/update triggers went first, and the changes to Opportunity.Amount resulting from the new OpportunityLineItem records updated the Opportunity records. As a result, the managed package's After Update trigger fired before its After Insert trigger.

The sequence of events again, to hopefully provide some clarity:

1. Opportunity is inserted

2. Subscriber's after insert trigger code inserts OpportunityLineItems for the Opportunity

3. Managed package Opportunity After Insert trigger fires and attempts to insert records without checking for existing records (because it is an insert, so there shouldn't be any related records yet - which doesn't hold up in practice).

4. The insertion of the related records fails, as they were already created by the Opportunity After Update trigger that occurred when the OpportunityLineItems were inserted.
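The failure mode can be sketched as a tiny simulation. All the names here are hypothetical stand-ins; the real logic lives in Apex triggers, but the ordering problem is the same.

```python
# Toy simulation of the trigger ordering bug: the managed package's
# after-update handler creates the related records first, then the
# after-insert handler blindly re-inserts them. Names are hypothetical.
related_records = set()  # related records keyed by Opportunity Id

def managed_after_update(opp_id):
    # Defensive: only create the related record if it doesn't exist yet.
    if opp_id not in related_records:
        related_records.add(opp_id)

def managed_after_insert(opp_id):
    # Buggy assumption: "it's an insert, so no related records can exist".
    if opp_id in related_records:
        raise Exception(f"Duplicate related record for {opp_id}")
    related_records.add(opp_id)

opp_id = "006xx000000000A"
# The subscriber's after insert trigger inserts OpportunityLineItems,
# which updates the Opportunity and fires the managed after-update first...
managed_after_update(opp_id)
try:
    # ...so the managed after-insert trigger now finds existing records.
    managed_after_insert(opp_id)
except Exception as e:
    print(e)  # Duplicate related record for 006xx000000000A
```

The fix, as the postmortem below concludes, is for the after-insert handler to check for existing records rather than assume the ordering.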

I'd made the incorrect assumption in the After Insert trigger that I didn't need to check for existing records with a lookup to the Opportunity, as none could exist before the insert transaction completed.

In hindsight it is clear enough, but it wasn't much fun to figure out from the observed behavior. It's also something I was sure I'd encountered previously - as it turns out, in A Tale of Two Triggers.

So, repeating the moral of the story from my previous post in the hopes I'll remember it:

Don't assume that the After Insert trigger will occur before the After Update trigger.