PEnDAR – Performance ENsurance by Design, Analysing Requirements (TSB reference: 132304). Why? Cost/performance hazards become visible late in the development process – too late to save some projects! This is a multi-billion-dollar problem worldwide. There is pressure to re-purpose commodity infrastructure for safety/mission-critical objectives, and hence a need to be able to articulate a safety case.

Constraints on system developers: cosmic, ludic and ecological. Constraints on applicability of new approaches: established procedures, market inertia etc.

We went over this picture in the first webinar. The core of the approach is to understand the relationship between the delivery of outcomes and the consumption of resources. Other aspects (exceptions/failures, scaling, variability/correlations) can then be incorporated into the model.

A Quantitative Timeliness Agreement (QTA) is a relationship between the demand (the applied load, including its pattern) and the delivered quality impairment (expressed as a probability distribution, ∆Q). Opportunity costs between systems sharing the same resources, and successive refinements, won't be considered in this webinar.
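As a concrete illustration, a QTA can be thought of as a mapping from offered load to the ∆Q bound the system must deliver at that load. The sketch below (Python; all names and numbers are hypothetical placeholders, not taken from the webinar or any PEnDAR tooling) captures just that shape:

```python
from dataclasses import dataclass

@dataclass
class DeltaQBound:
    """An illustrative bound on quality impairment (hypothetical structure)."""
    percentiles: list    # [(delay_s, min_prob), ...]: e.g. 95% within 200 ms
    max_failure: float   # probability mass allowed never to arrive at all

# A toy QTA: the ∆Q the system must deliver at each offered load level.
qta = {
    # offered load (requests/s) -> required ∆Q at that load
    100:  DeltaQBound(percentiles=[(0.100, 0.50), (0.200, 0.95)], max_failure=0.01),
    1000: DeltaQBound(percentiles=[(0.150, 0.50), (0.300, 0.95)], max_failure=0.02),
}
```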

This can be combined with a corresponding analysis of the resource consumption.

We consider a very generic remote procedure call in which a front end (which could be a web browser, an embedded sensor, a smart meter, etc.) engages in a transaction across a network with a back end that includes an interaction with a database of some sort. This is illustrated in this slide, which shows the system components in blue and the typical sequence of events as numbered circles. RPC is a common design pattern, seen in DNS lookups, web requests, IoT sensor systems and billing systems (e.g. smart meters).

We measure the performance of the system on the basis of passage times between the labelled observation points A–F, which we characterise using improper random variables, ∆Q.
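One way to make 'improper random variable' concrete: a ∆Q can be held as a discretised delay distribution whose total probability mass may be less than one, the missing mass being the probability that the outcome never arrives. A minimal sketch (assuming a 1 ms discretisation; all numbers are illustrative):

```python
import numpy as np

DT = 0.001  # bin width: 1 ms (an assumed discretisation)

def make_delta_q(delays_s, probs, grid_len=1000):
    """Build an improper ∆Q: pdf[i] is the probability that the passage
    time falls in bin i; any mass missing from sum(pdf) is failure."""
    pdf = np.zeros(grid_len)
    for d, p in zip(delays_s, probs):
        pdf[int(round(d / DT))] += p
    return pdf

def failure_prob(pdf):
    return 1.0 - pdf.sum()

# Example: 95% of passages take ~20 ms, 4% take ~80 ms, 1% never complete.
dq = make_delta_q([0.020, 0.080], [0.95, 0.04])
print(failure_prob(dq))  # ~0.01
```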

From a quality impairment perspective, we can ‘roll up’ the last stage of the process into a ∆Q – from the viewpoint of the rest of the system, how the transition from C to D occurs is irrelevant, we are just interested in how long it might take and how likely it is to fail (its ∆Q).

In the same way we can combine the network portions with the back end to give a composite ∆Q.

Finally we can roll up the front end behaviour to give the ∆Q for the whole system. This is the quality impairment from the ‘user’ point of view, which is where we can start to impose requirements.

This is where we need to bring in a quantitative intent: how rapidly and reliably we want the system to perform. Here we choose some numbers for the sake of an example, which are plotted on the right as a CDF.
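Checking a computed ∆Q against such an intent is then a pointwise comparison of CDFs. A sketch, continuing the representation above (the requirement numbers here are placeholders, not the ones plotted on the slide):

```python
import numpy as np

DT = 0.001  # 1 ms bins, as in the earlier sketch

def meets_requirement(pdf, requirement):
    """True if the ∆Q's CDF dominates every (delay_s, min_prob) point."""
    cdf = np.cumsum(pdf)
    return all(cdf[int(round(d / DT))] >= p for d, p in requirement)

# Placeholder intent: 50% within 150 ms, 95% within 500 ms, 99% within 900 ms.
requirement = [(0.150, 0.50), (0.500, 0.95), (0.900, 0.99)]
```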

We show the state diagram for the front-end process. Note that not every system provides an explicit notification of failure, as we have here. We'll get round to C and D later!

If N = 4 (the maximum number of attempts) we can unroll the process like this. We have a non-preemptive first-to-finish synchronisation between receiving the response and the timeout. Which route is taken is a probabilistic choice, shown here with blue and green examples; it depends on the ∆Q of the B–E path, represented here by the 'wait for response' states.
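The unrolling can be computed directly in the discretised representation: each attempt either beats the timeout (the mass of the response ∆Q below the timeout) or costs one timeout period and repeats. A sketch, with an assumed 200 ms timeout for illustration:

```python
import numpy as np

DT = 0.001  # 1 ms bins (assumed discretisation)

def unroll_retries(resp_pdf, timeout_s=0.200, attempts=4):
    """First-to-finish of response vs. timeout, retried up to `attempts`
    times. resp_pdf is the (possibly improper) ∆Q of the B–E 'wait for
    response' path; the 200 ms timeout is an illustrative assumption."""
    cut = int(round(timeout_s / DT))
    win = resp_pdf.copy()
    win[cut:] = 0.0                      # the mass that beats the timeout
    p_timeout = 1.0 - win.sum()          # late arrivals plus outright losses
    total = np.zeros(len(resp_pdf) + attempts * cut)
    for k in range(attempts):            # k earlier attempts each cost one timeout
        total[k * cut : k * cut + len(win)] += (p_timeout ** k) * win
    return total   # still improper: the missing mass is 'all N attempts fail'
```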

The network characteristics assume communication between a UK-broadband-connected front end and a US-East-Coast cloud-located server. Combining ∆Qs means convolving the corresponding probability distributions – this is mathematically quite straightforward and computationally inexpensive.
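In the discretised representation, sequential composition is a single call to np.convolve; because the distributions are improper, the failure probabilities multiply through automatically. A sketch (the network and server numbers here are illustrative stand-ins, not the measured characteristics used in the webinar):

```python
import numpy as np

DT = 0.001  # 1 ms bins (assumed discretisation)

def compose(dq_a, dq_b):
    """Sequential composition of two ∆Qs: convolve the discretised PDFs.
    The combined success mass is the product of the two success masses –
    failure anywhere fails the whole passage."""
    return np.convolve(dq_a, dq_b)

# Illustrative stand-in for one-way network transport: ~45 ms base delay
# with a little jitter and 0.2% loss (assumed numbers, not measured).
net = np.zeros(200)
net[45:50] = 0.998 / 5           # uniform jitter over 45-49 ms; rest is loss

server = np.zeros(200)
server[5] = 1.0                  # 5 ms service time, for the sake of example

b_to_e = compose(compose(net, server), net)   # request out, serve, reply back
```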

The first curve shows the server response ∆Q, the second its convolution (twice, once for each direction of network transport) with the network transport ∆Q. On the right we see the result of combining this with the behaviour of the front end, compared with the performance requirement.

Having constructed a model, it is now easy to experiment with varying some parameters.

The server performance is the same but the network has higher delays – equivalent to moving the server from the US East Coast to the US West Coast, with more variety of routing choices. We can see on the right that the resulting overall ∆Q no longer meets the requirement.

We now add in a probability of the server following a 'slow path' (which could be the result of load exceeding an aspect of the virtualisation constraint), modelled as a uniform delay of between 15ms and 150ms. On the left we have five ∆Q curves for the server response, with the slow path probability varying from 0% (the original curve) to 25%, 50%, 75% and 100%. On the right we see the ∆Q resulting from combining this with the front-end behaviour; even a 25% chance of following the slow path breaks the performance requirement.
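The slow path is a probabilistic mixture: with probability p the server response acquires an extra delay drawn uniformly from 15–150 ms, and with probability 1 − p it is unchanged. A sketch in the same representation:

```python
import numpy as np

DT = 0.001  # 1 ms bins (assumed discretisation)

def with_slow_path(server_pdf, p_slow):
    """Mixture of the original server ∆Q with a slow path that adds a
    uniform 15-150 ms delay, taken with probability p_slow."""
    slow_extra = np.zeros(151)
    slow_extra[15:151] = 1.0 / 136       # uniform over 15..150 ms inclusive
    slow = np.convolve(server_pdf, slow_extra)
    fast = np.concatenate([server_pdf, np.zeros(len(slow) - len(server_pdf))])
    return (1.0 - p_slow) * fast + p_slow * slow

# The five curves on the slide: p_slow = 0, 0.25, 0.5, 0.75, 1.0
```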

In this short example we have touched on several aspects of the problem space we described in the first webinar.

Virtualisation forces detailed costing of resource usage – different from the typical situation of 'sunk cost' in a piece of hardware. If a server has been paid for, we don't care whether its memory is 25% or 75% used, but if we have to pay for the memory in a virtual server, such a factor of 3 may be significant!

Applying a performance methodology absolutely requires some bounds on what is acceptable. 'Better than the competition' or 'no worse than last week' will do, provided these things can be measured. Minimum possible ∆Qs can easily be estimated and added up to determine whether requirements are at all feasible (for example, whether it's possible to have the server on another continent). This puts bounds on how much slack any real implementation has, and thus indicates where performance hazards might appear (and hence where mitigation might be helpful).
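For instance, a transatlantic server placement can be sanity-checked with nothing more than the speed of light in fibre (a back-of-envelope sketch; the distances and floors are rough assumptions):

```python
# Feasibility floor (all numbers are rough, illustrative assumptions):
# sum the minimum possible delay of each stage, compare with the requirement.
light_in_fibre_km_per_ms = 200          # ~2/3 of c in glass
uk_to_us_east_km = 5500                 # rough great-circle distance

min_one_way_ms = uk_to_us_east_km / light_in_fibre_km_per_ms   # ~27.5 ms
min_rtt_ms = 2 * min_one_way_ms                                # ~55 ms
min_server_ms = 1.0                     # assumed floor on service time

floor_ms = min_rtt_ms + min_server_ms
print(floor_ms)   # ~56 ms: any requirement tighter than this is infeasible
```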

Preparation of the model used here was a day's work; creating all the graphs shown takes < 1s of processing on a laptop. Decomposing the requirements into subsystem requirements can also be used to establish operating limits that can be continually measured.