Branch Ranging, a case study

In addition to its wholesale trade, the company - one of the most diligent inventory science practitioners in the world -
ran a network of well-stocked branches.

The company's range was over 250,000 SKUs (product lines).

A branch might stock one tenth of the full range.

Beyond a few fast movers, branch lines move at a glacial rate.
Even the fastest movers sell only once per branch per day.

The company were applying techniques they had honed on fast and medium movers, but doing so at branch level.
This meant they were constantly operating below the Threshold of Forecastability: the rate of sale below which any reforecast is
more likely to do harm than good.
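To make the idea concrete, here is a minimal sketch. The threshold value (0.5 units per week) and the function are purely illustrative assumptions; the real TOF, and how it is derived, are specific to the client's data.

```python
# Illustrative sketch only: we assume a hypothetical Threshold of
# Forecastability (TOF) of 0.5 units sold per week at a branch.
TOF_UNITS_PER_WEEK = 0.5  # invented figure, not the real threshold

def should_reforecast(weekly_sales):
    """Only reforecast SKUs whose average rate of sale exceeds the TOF.

    Below it, the 'signal' is really chance, and a reforecast is more
    likely to do harm than good.
    """
    avg_rate = sum(weekly_sales) / len(weekly_sales)
    return avg_rate >= TOF_UNITS_PER_WEEK

fast_mover = [3, 5, 2, 4, 3, 6, 4, 5]  # steady branch seller
slow_mover = [0, 0, 1, 0, 0, 0, 0, 1]  # 'lightning strikes'

print(should_reforecast(fast_mover))  # True
print(should_reforecast(slow_mover))  # False: leave the forecast alone
```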

Little did we realise just how many knock-on problems reforecasting was causing …

Those knock-on problems

The problems were so widespread and interlocked that we've split this case into two.
The 'narrow' TOF (Threshold Of Forecastability) issue, which can sometimes be treated standalone, is available
here
Here we look at some of the other things whose impact - rather than existence - caught us by surprise.

Before we do, let's park TOF. Below TOF, sales are 'lightning strikes' yet mathematical models interpret them as a trend.
To put it a catchier way "If we could predict such chance events (sales of slow movers), we'd all be rich and the casinos poor."

A lot depends on the forecasts.

Most fundamental is the decision to range (stock in) branch or not.
If stocked, then safety stock, min and max, pack size all follow.
If recommended for withdrawal from stock, the issue of capability looms. Does the recommendation get past the (human) gatekeeper?
If it does, can the centre handle the resulting withdrawals?

Then, if the orders are 'spiky', should we apply a (meaningless) average or some other method?
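As a toy illustration of why the average is meaningless on a spiky line (all the numbers are invented):

```python
import statistics

# Invented daily order quantities for a 'spiky' branch line: mostly
# small trade-counter orders, with one project-sized spike.
orders = [1, 1, 2, 1, 1, 40, 1, 2, 1, 1]

mean_qty = statistics.mean(orders)      # 5.1 - a size no real order has
median_qty = statistics.median(orders)  # 1.0 - the typical order

print(mean_qty, median_qty)
```

Sizing a pick, a pack, or a min on that 5.1 serves neither the everyday single-unit orders nor the occasional spike.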

And then, how do we communicate, inculcate and control the new rules?

While details of the solution are client confidential, we can still outline the principles ….

The gap between min and max branch stock sets the average pick. This was corrupt because it tried to mix two very different 'drivers':
the volatility (or 'spikiness') of demand, and the product vs. inventory holding cost trade-off.
Spikiness was overplayed. It is the bane of every inventory controller's life, and they universally over-react to it.
Yet the underlying trade-off, once we unpick the components, is very simple:

Should we stock enough for the 'little and often' orders?

If yes, do we stock enough to service the spikes?

The answer is not intuitive, although one thing is crystal clear. Systems which do not make the distinction
('little and often orders' vs 'large and infrequent' aka 'mixed retail and wholesale' or 'spares vs project', all surprisingly common)
do both order streams badly.
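A minimal sketch of making that distinction, assuming an invented size cut-off of 10 units between the two streams (a real system would set the cut-off per SKU from its order history):

```python
# Hypothetical split of one SKU's order history into two streams.
SPLIT_QTY = 10  # assumed cut-off, for illustration only

order_history = [2, 1, 3, 25, 1, 2, 60, 1, 3, 2]

little_and_often = [q for q in order_history if q < SPLIT_QTY]
large_infrequent = [q for q in order_history if q >= SPLIT_QTY]

# Stock the branch for the frequent small orders; decide separately
# (for example, supply from the centre) whether to cover the spikes.
print(little_and_often)  # [2, 1, 3, 1, 2, 1, 3, 2]
print(large_infrequent)  # [25, 60]
```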

Order size hadn't been maintained - it needs to be.
It might control pack size; when it does, the safety stock calculation is a lot simpler.
When it isn't maintained, or is maintained unevenly, or is calculated using the wrong type of average, we stock the line yet still fail.
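The 'wrong type of average' matters more than it looks. A small sketch with invented numbers, contrasting two legitimate but different averages of order size:

```python
# Illustrative only: two ways to 'average' order size. Using the wrong
# one corrupts pack-size and safety-stock calculations downstream.
orders = [1, 1, 1, 1, 96]  # four single-unit orders and one bulk order

# Per-order average: the size of a typical order.
per_order = sum(orders) / len(orders)                # 20.0

# Demand-weighted average: the order size a typical *unit* ships in,
# i.e. how the bulk of the demand actually arrives.
weighted = sum(q * q for q in orders) / sum(orders)  # 92.2

print(per_order, weighted)
```

Neither 20.0 nor 92.2 describes the everyday order of 1; pick the average to suit the question, not by habit.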

Our suspicion is that a learned discourse on pack size got overlooked, and someone subsequently bodged the min and max
lookup tables to take care of spikes, order size and volatility (standard deviation) all at once.
To protect themselves, we might expect them to hold stock cover for the 'worst worst case'. Whatever the reason, that's what happened.

While I'm a fan of simplifying solutions so they are easy to maintain and run, Einstein had this one dead right.
"Everything should be as simple as possible. But not simpler."

How bad was it? (Some people love this stuff. I'm normally focused on 'how good could it be',
but sometimes clients need to be able to laugh at where they were; it helps them focus on where to go.)
The gatekeeper, who could judge (say) 30 range/de-range changes a day:

Was presented with more than a thousand.

Of those, more than 30 would be suggestions for de-ranging items which had only been suggested for ranging the month before.

Would, over time, see recommendations to range or de-range nearly 100,000 different items.
The chance of them making a 'knowing' judgment (surely the most
valuable use of a gatekeeper) was low.

'Over-customisation', based on lightning strikes (forecasting below TOF), had led to a scattergun customer offer.
By way of analogy: what if the Cambridge M&S stocked all the trousers, but Peterborough had all the sports jackets and Huntingdon all the shoes?

Over-customisation of safety stock based on standard deviation
(which is discredited anyway; there's a much better method) led to erratic maximums.

The min/max bodge meant a high max usually had a high min, so generating 'busy picks'.