Why Asset Management Leads to More Tree-caused Interruptions

If you are a regulator and you hear your utilities say “we don’t do trimming by cycle; we use an asset management approach where VM work is prioritized based on reliability,” you should see this as a red flag. Maybe you’ve already heard this and have experienced an unacceptable level of tree-related outages but were unable to find the specific fault in this thinking and approach.

If you are a senior utility executive, I’m going to show you why an asset management approach applied to VM is going to lead to deteriorating reliability and consequently, a conflict with your regulatory agency and customers.

First, let me ask: when asset management is applied to VM, what is the asset being managed? If your answer is the service, that's not a bad answer, but I'd bet dollars to doughnuts that asset management is not meeting the objective. If you are thinking in terms of hardware, that focus is wrong.

Asset management could be applied to VM if you see the asset as the condition of the utility forest and you understand how that condition changes. Think in terms of when a line is installed. A right of way is cleared. Dead and diseased trees that could fall into the line are removed. The utility forest has been managed to provide a certain air space, and the risk of tree in-falls has been reduced. It is the utility forest in that condition that is the asset.

Over time, more trees will become decadent hazard trees, and tree growth will decrease the separation between trees and conductors. With that, the effectiveness and the value of the asset erode. When VM work is performed, it increases the value of the asset. So there is a maintenance cycle regardless of whether you use a time frame or outage statistics to trigger VM work.

Wherever I’ve encountered asset management applied to VM, the trigger for VM work has been the effect tree-related outages have on SAIFI or SAIDI. What is wrong with this approach is that outage statistics are after the fact; in fact, well after the fact.

To illustrate, I'll share some data from a project back in the days when reliability centered maintenance was all the rage. This chart was produced by looking at the clearance established on pruning at over 43,000 sites. Growth was then forecast based on tree type (conifer or deciduous) and growth habit (crown or lateral). The graph shows when the trees would breach the limit of approach.

You can see from the chart that, certainly for the first three years, outage statistics would not trigger action. A change of 20% or less in tree-related outages is easily obscured by the annual variation in the frequency and intensity of minor storms. According to this chart, it would probably be between years 4 and 6 that alarms are raised. However, that is not really the case, as work by Bill Rees's group at BG&E, by ECI (presented here by Ward Peterson of Davey Resource Group) and by John Goodfellow has shown that a tree making contact with a single distribution conductor is not likely to cause an electrical fault. Electrical faults on distribution arise when tree parts bridge phases. So it would not be until year 5 that any problem would be indicated by tree-related outages. The earliest you could follow up would be the next year, year 6. If you decide to wait for a clear trend, add another couple of years.

There are two issues with this. First, by the time tree-related outages show a clear trend of deteriorating reliability, we are not only on the exponentially expanding part of the curve but at the upper end of it. Work will have been deferred 3 to 6 years, and possibly more, beyond the optimum timing, and customers receive a lower level of reliability than the utility ever intended. Second, there will be higher future maintenance costs. Based on work by Browning and Wiant, pruning work deferred for three to four years results in cost increases ranging from 40% to 69%. Based on the above data, deferring work for three years beyond the implied 3-year pruning cycle results in over 90% of sites breaching the limits of approach. In other words, using an asset management approach where VM work is triggered by outage experience is a lose-lose proposition: the customer experiences a higher incidence of outages and ultimately has to pay more to remedy deteriorating and unacceptable service.
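The cost side of that lose-lose can be put in concrete terms. The sketch below applies the Browning and Wiant deferral penalty (40% to 69% for work deferred three to four years) to a hypothetical per-site baseline cost; the baseline figure is a placeholder for illustration, not a number from the source.

```python
# Illustrative arithmetic only: the 40-69% deferral penalty is the
# Browning and Wiant range cited in the text; the baseline per-site
# cost is a hypothetical placeholder.

baseline_cost_per_site = 100.0          # hypothetical unit cost at optimum timing
penalty_low, penalty_high = 0.40, 0.69  # cost increase after 3-4 years of deferral

deferred_low = baseline_cost_per_site * (1 + penalty_low)
deferred_high = baseline_cost_per_site * (1 + penalty_high)

print(f"On-cycle cost:        {baseline_cost_per_site:.0f} per site")
print(f"Deferred 3-4 years:   {deferred_low:.0f} to {deferred_high:.0f} per site")
```

Whatever the actual unit cost, the proportional penalty is what matters: every dollar of pruning deferred past the optimum window comes back as roughly a dollar and a half of future work.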

As already stated, the appropriate pruning cycle can be extracted from the above chart. It implies an average maintenance cycle of three years. By the end of the third growing season, about 10% of the locations would have trees breaching the limit of approach if they were not pruned. If work is deferred, over the next three years the total of sites with trees breaching the limit of approach would expand to over 90%. That is why, when I see 15-20% hotspots (breaching limits of approach) in the field, I consider the program to have passed the tipping point. Further work deferrals will have very negative consequences in terms of reliability, storm damage and future costs. For the service area from which the graph data is drawn, if a 3-year maintenance cycle were applied, only the current-year work area would ever have sites breaching the limit of approach: 10% × 1/3, or roughly 3% of sites system-wide. Good and properly funded VM programs experience few if any grow-in outages (0-2% of tree-related outages).
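The steady-state arithmetic above can be sketched in a few lines. The breach fractions (10% by year 3, over 90% after a further three years of deferral) are the illustrative figures from the chart discussion, not new data.

```python
# Minimal sketch of the on-cycle vs deferred comparison, using the
# figures quoted in the text.

breach_at_year_3 = 0.10  # ~10% of sites breach by end of the third growing season
work_areas = 3           # system split into three annual work areas on a 3-year cycle

# On-cycle: only the area due for pruning this year approaches the 10%
# level; the other two thirds were pruned more recently.
on_cycle_systemwide = breach_at_year_3 / work_areas

# Deferred three years beyond the cycle: every area drifts toward the
# 6-year mark, where over 90% of sites breach.
deferred_systemwide = 0.90

print(f"On a 3-year cycle, sites breaching system-wide: {on_cycle_systemwide:.1%}")
print(f"After a further 3-year deferral:                {deferred_systemwide:.0%}+")
```

The contrast is the whole argument in miniature: a disciplined cycle caps system-wide breaches near 3%, while waiting for outage statistics to trigger work lets the figure climb toward 90%.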

On the surface, applying asset management to VM appears logical and prudent. However, it only appears so in ignorance of how the VM workload changes. Unfortunately, by the time it becomes apparent that the approach is not providing satisfactory reliability, it requires an enormous investment in VM to get the program back on track. That puts senior utility management on the horns of a dilemma: while the asset management approach may be failing miserably, obtaining substantially increased funding will require admitting that the approach was wrong and, in so doing, exposing the utility to the possibility that the regulator will require shareholders to pay for the remedy.