I recommend using a JOIN operation rather than an IN (subquery)
predicate.
For example:
SELECT o.ID
, o.Value
, o.RequestId
FROM Observations o
JOIN Requests r
ON r.ID = o.RequestId
WHERE r.UniqueIdentifier = '123456'
AND r.UniversalServiceId = '1234'
For optimum performance, suitable indexes would be:
... ON Requests (UniversalServiceId, UniqueIdentifier)
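Spelled out as DDL, that index plus a supporting index on the join column might look like this (the index names are mine; adapt them to your naming convention):

```sql
-- Hypothetical index names; the column order mirrors the WHERE clause.
CREATE INDEX IX_Requests_Service_Unique
    ON Requests (UniversalServiceId, UniqueIdentifier);

-- Supports the join probe from Requests into Observations.
CREATE INDEX IX_Observations_RequestId
    ON Observations (RequestId);
```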

If you think you have too many of these:
$('html').on('click', '.class', function(){ });
you could try refactoring them into one handler:
$('html').on('click', function (e) {
    if (e.target.classList.contains('class')) {
        // e.target is the clicked element
        // do something here
    }
    if ($(e.target).is('.class2')) {
        // you can wrap e.target in a jQuery object
        // and use the same jQuery API on it
    }
});
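The same delegation idea can also be expressed without jQuery, as a map from class names to handlers dispatched by one listener. A DOM-free sketch (the names and the fake element are made up for illustration):

```javascript
// One delegated handler dispatching on a class -> handler map.
const handlers = {
  class: (target) => `clicked ${target.className}`,
  class2: (target) => `also clicked ${target.className}`,
};

function onDelegatedClick(target) {
  // Run every handler whose class the clicked element carries.
  return Object.keys(handlers)
    .filter((cls) => target.classList.contains(cls))
    .map((cls) => handlers[cls](target));
}

// Minimal stand-in for a DOM element, so the sketch runs without a browser.
const fakeEl = { className: 'class', classList: { contains: (c) => c === 'class' } };
const results = onDelegatedClick(fakeEl);
```

In a browser you would call `onDelegatedClick(e.target)` from a single `click` listener on a common ancestor.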

No, threading is not an effective solution to pipeline bubbles. The
granularity doesn't fit: context switching takes hundreds of cycles,
whereas the sort of stall caused by a naive implementation of bitonic
sorting comes in 2-4 cycle pieces.
That said, it's not clear what your use case is, or where the
bottleneck will turn out to be, so multiprocessing could still help.
Only one way to find out: measure.
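As a concrete way to "find out", time the hot loop before reaching for parallelism. A minimal harness sketch (here in JavaScript, with a stand-in compare-and-swap workload; your real kernel would go in its place):

```javascript
// Profiling sketch: measure the serial hot path first, so you know
// whether the per-task cost is large enough to amortize thread overhead.
// The workload below is a made-up stand-in, not a full bitonic sort.
function compareAndSwap(a, i, j, ascending) {
  if ((a[i] > a[j]) === ascending) {
    const t = a[i]; a[i] = a[j]; a[j] = t;
  }
}

const data = Array.from({ length: 1024 }, () => Math.random());
const start = process.hrtime.bigint();
for (let pass = 0; pass < 1000; pass++) {
  for (let i = 0; i + 1 < data.length; i += 2) {
    compareAndSwap(data, i, i + 1, true);
  }
}
const elapsedNs = Number(process.hrtime.bigint() - start);
```

If the measured per-element cost is only a handful of cycles, the context-switch overhead above will dominate any threaded version.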

You cannot avoid this latency, because JavaScript is asynchronous.
When you call getAngles, your plugin crosses to the native side,
retrieves some data, and returns the result in the callback (as it
should). Meanwhile, the JS code keeps running (it does not block its
own execution) and executes the second alert. This is neither wrong
nor bad behaviour; on the contrary, it is exactly how it should work.
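A minimal sketch of what is happening, with a simulated getAngles (setTimeout stands in for the native round trip, and the returned value is made up):

```javascript
// Simulated async plugin call: the callback fires only after the
// (simulated) native side returns, never during the current call.
function getAngles(callback) {
  setTimeout(() => callback({ azimuth: 42 }), 10);
}

const order = [];
getAngles((angles) => {
  order.push('callback: ' + angles.azimuth);
});
order.push('second alert'); // runs first: the call above does not block
```

Anything that depends on the angles must therefore live inside the callback (or a promise chained to it), not in the code that follows the call.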

In simple words, latency is network delay (the time taken by the
network while transferring data).
In JMeter, latency is the time from when the request is sent to the
server until the first byte of the response reaches the client
(JMeter). If the response time is very low, you won't get a precise
measure of latency; if the response time is high, you will probably
get a correct measure.
In JMeter, latency shares the measure
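Put as arithmetic, with made-up timestamps in milliseconds:

```javascript
// Latency = time to first byte; response time = time to last byte.
// The timestamps below are hypothetical example values in milliseconds.
function measure(sentAt, firstByteAt, lastByteAt) {
  return {
    latency: firstByteAt - sentAt,
    responseTime: lastByteAt - sentAt,
  };
}

const sample = measure(0, 120, 450);
// sample.latency is 120 ms, sample.responseTime is 450 ms
```

Latency is always less than or equal to the response time, since the first byte cannot arrive after the last one.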