Fortnightly - Benchmarking (http://www.fortnightly.com/tags/benchmarking)
Benchmarking Storm Restoration (http://www.fortnightly.com/fortnightly/benchmarking-storm-restoration)
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Power outages caused by major storms have captured national attention again. First Quartile Consulting (1QC) conducted a series of webinars on storm restoration as part of our 2012 benchmarking program. One area of inquiry was the average restoration time for major storms.</p>
<p>We set out to answer a few basic questions about storms: Why benchmark storms? What is the frequency and average response times for different magnitudes of storms? What magnitude of storms should be benchmarked?</p>
<p>We asked for basic data on all storms with greater than 1 percent of customers out, and for detailed time-series data on the two worst storms. Our analysis identified methodologies for analyzing what we termed “major storm” response (10 to 20 percent of customers out). Those methodologies include the following elements: storm response as an aggregate of customers out and restoration rate; time-series methodology and coincident peak; normalizing the time series as a percentage of coincident peak; composite benchmarks; and managing effects on the work force.</p>
<h4>Why Benchmark Storms?</h4>
<p><a href="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-1-H.jpg" target="_blank"><img alt="" src="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-fig1.jpg" style="width: 320px; height: 293px; float: left; margin: 5px 10px;" /></a>We identified multiple reasons for benchmarking storms from the companies’ point of view: 1) Regulatory bodies and the public will seek comparative data after major events, and companies should be armed with supportable data before those requests are made; 2) companies can use the data to gain insight into their relative performance and improvement opportunities; 3) by sharing data and using a common methodology, companies can improve their decision-making during major events, answering questions such as: What would be a reasonable global ETR (estimated time of restoration) goal for this event? And how many outside resources will we need to ensure that we meet that goal? And 4) tracking the frequency and impact of storms over the years might provide some interesting insight into the impact of global climate change.</p>
<p>We asked 25 participating utilities for data on all storms over the last five years (2007 through 2011) in which customer interruptions exceeded 1 percent of customers. This threshold was somewhat arbitrary, but it represents a significant storm with fairly widespread damage – one for which internal company resources would be mobilized but that probably wouldn’t require mutual assistance. We received varying levels of participation on the questions, but a total of 13 companies provided information on 500 storms in which more than 1 percent of customers were interrupted.</p>
<p><a href="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-2.jpg" target="_blank"><img alt="" src="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-fig2-L.jpg" style="width: 321px; height: 287px; float: right; margin: 5px 10px;" /></a>As Figure 1 shows, the 13 companies reported approximately 500 storms of this magnitude during the five-year period. As might be expected, storms that affect lower numbers of customers are more common. All but two of the reporting companies experienced at least one storm with more than 10 percent of customers out. About half the companies experienced a storm with more than 20 percent of customers out during this period.</p>
<p>One client<a href="http://spark.fortnightly.com/fortnightly/benchmarking-storm-restoration?page=0%2C5#fn1" name="footnote 1" title="Thanks to Jesse Medlock at Oncor for these descriptions" id="footnote 1"><sup>1</sup></a> provided the following rough categorization scheme:</p>
<ul>
<li><strong><em>“Significant”:</em></strong> Up to 10 percent of customers out. Work queues are deep, but mutual assistance is not required. The backbone holds up reasonably well except in localized areas. Reclose, isolate, and switch operations are probably effective. The longest outage is 24 to 48 hours.</li>
<li><strong><em>“Major”:</em></strong> 10 to 20 percent of customers out. Damage is widespread and probably requires mutual assistance. The response curve suffers because work queues sit idle until off-system resources arrive. Still, the backbone generally holds, and reclose, isolate, and switch operations are generally effective. More feeders are affected and some must wait longer to be switched around, but all in all the outages exhibit an exponential-decay shape in the restoration curve. The longest outage is around 100 hours.</li>
<li><strong><em>“Catastrophic”:</em></strong> With more than 20 percent of customers out, this is the storm the company never forgets. It takes out transmission, damages substations, and snaps feeder poles. It breaks the backbone. This is Ike, Katrina, Sandy, tornado clusters, and extreme ice events. The restoration curve looks almost linear. Utilities can’t get adequate mutual assistance as quickly as they would like because their neighbors are badly damaged as well; in these situations, utilities are all competing for the same mutual aid resources. Work queues are deep, and they just sit.</li>
</ul>
<p>One measure of restoration time is CAIDI – the average minutes that an affected customer is without power. As shown in Figure 2, the average CAIDI increases with the number of customers out. This might be due to the severity of the damage or to constraints on available manpower. CAIDI is about 100 to 400 minutes for a 1- to 10-percent storm; it increases to about 500 minutes (more than eight hours) for a 10- to 20-percent event; and it reaches 1,000 or more minutes (more than 16 hours) for a 20-percent event.</p>
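The CAIDI arithmetic is simple enough to sketch: total customer-minutes of interruption divided by total customers interrupted. The outage records below are invented for illustration, not drawn from the benchmarking data set.

```python
# CAIDI: average minutes without power per affected customer.
# Each record is (customers_affected, minutes_out) -- hypothetical values.
outages = [(1200, 90), (800, 240), (300, 600)]

customer_minutes = sum(n * m for n, m in outages)   # total customer-minutes
customers_interrupted = sum(n for n, _ in outages)  # total customers affected
caidi = customer_minutes / customers_interrupted
print(round(caidi, 1))  # about 208.7 minutes
```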
<p>The different categories of severity and response times suggest different methodologies. For “significant” events, traditional responses and measurements can still be used. Most of these storms are not “major” events as classified by IEEE and are reported in typical SAIDI, SAIFI, and CAIDI methodology.</p>
<p>“Major” storms typically fall outside the 2.5-beta threshold specified by IEEE and are excluded from normal reliability reporting. These events are rarer, but occur often enough to benefit from a time-series analysis to identify typical response times and areas for improvement.</p>
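The IEEE exclusion referenced here is the “2.5 beta” method of IEEE Std 1366: take the natural log of each day’s SAIDI over the assessment period, and exclude days whose SAIDI exceeds exp(mean + 2.5 × standard deviation). A rough sketch with invented daily SAIDI values follows; real assessments use roughly five years of daily data.

```python
import math

# IEEE 1366 "2.5 beta" major-event-day threshold (sketch).
# daily_saidi: hypothetical daily SAIDI values in minutes per customer.
daily_saidi = [0.5, 0.8, 1.0, 1.2, 1.5] * 4 + [100.0]

logs = [math.log(s) for s in daily_saidi]
alpha = sum(logs) / len(logs)                                     # log-mean
beta = (sum((x - alpha) ** 2 for x in logs) / (len(logs) - 1)) ** 0.5
t_med = math.exp(alpha + 2.5 * beta)          # major-event-day threshold

major_event_days = [s for s in daily_saidi if s > t_med]
print(round(t_med, 1), major_event_days)
```

Days above the threshold are treated as major event days and reported separately from routine reliability statistics.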
<p>Past the 20 percent level, “catastrophic” storms are one-of-a-kind types of storms that occur infrequently, maybe every 10 to 20 years, with different levels of severity. The response times for these events will generally be reported to the federal and state regulatory agencies and will be heavily scrutinized. While benchmarking will provide some global comparisons, it is unlikely that annual benchmarking results will be useful.</p>
<p>A discussion of the public and private databases and classification schemes is beyond the scope of this article. But suffice it to say that there is as yet no agreed upon taxonomy, terminology, or methodology.</p>
<h4>Methodology for the Worst Case</h4>
<p>We asked for the time-series data on the two worst storms each company experienced in the same five-year period. We received detailed data from 11 companies on a total of 22 worst storms. It was not really surprising that we did not find an average storm or average restoration time. The storms varied on a variety of different parameters. These worst storms ranged from 1 percent to more than 20 percent of customers out at the peak. Much of our analysis concentrated on eight storms where 10 to 20 percent of the customers were out at the peak; these were not the largest storms, but offered enough data points to make reasonable like-size storm comparisons.</p>
<p>Our worst-storm analysis concentrated on a small percentage of the 500 overall storms reported, but was heavily weighted toward larger storms. A total of 22 storms were reported over the five-year period. We ranked the storms by percentage of customers out at peak: 12 were between 1 and 10 percent; eight between 10 and 20 percent; and two were greater than 20 percent (including 100 percent for a company in the path of Hurricane Ike). For comparison with the larger sample of 500 storms, note that the percentage of customers out at peak understates the total percentage of customers interrupted over the course of the storm by a factor of two or three (<em>See Figure 3</em>).</p>
<p><a href="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-3.jpg" target="_blank"><img alt="" src="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-fig3-L.jpg" style="width: 321px; height: 245px; float: left; margin: 5px 10px; border-width: 0px; border-style: solid;" /></a>We asked companies to categorize their two worst storms and got a mix of responses, such as: Wind storm (5) with winds exceeding 60 mph; Thunderstorm (5) including straight line winds, hail, possible tornados, and violent thunder-lightning storms with high winds; Hurricane (3) including Irene, Aug. 28, 2011; Monsoon (3) with rain and wind; Snow (2), specifically a wet snowstorm; Ice (2) with “galloping” transmission lines; and Other (2), specified only by date.</p>
<p>Five methodologies were applied to better analyze and understand storm response. Each element in the methodology is briefly described.</p>
<p>1) <em>Storm Response as an Aggregate of Customers Out and Restoration Rate:</em> The first step in an analysis is to gather time series data on the cumulative number of customers interrupted by hour and the cumulative number of customers restored by hour for each storm. The difference between these values at each hour is the number of customers still out. A typical curve for a particular storm might look like the example below. Each storm will have its own signature, but note that the gap between interrupted and restored is quite large, leaving many customers without power for extended periods of time. Understanding what happens in this gap is a potential improvement area for a utility: is it the time to wait for the storm to blow over? To mobilize resources? To safely commence work? To do a complete damage assessment?</p>
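The first element reduces to a simple subtraction: given hourly cumulative interruptions and restorations, the difference at each hour is the number of customers still out. The series below are hypothetical, chosen only to show the shape of the computation.

```python
# Customers still out each hour = cumulative interrupted - cumulative restored.
# Hypothetical hourly cumulative counts for one storm.
cum_interrupted = [0, 5000, 12000, 15000, 15500, 15500, 15500]
cum_restored    = [0,  500,  3000,  7000, 11000, 14000, 15500]

still_out = [i - r for i, r in zip(cum_interrupted, cum_restored)]
peak = max(still_out)
peak_hour = still_out.index(peak)
print(still_out)        # [0, 4500, 9000, 8000, 4500, 1500, 0]
print(peak, peak_hour)  # peak of 9,000 customers out at hour 2
```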
<p><a href="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-4.jpg" target="_blank"><img alt="" src="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-fig4-L.jpg" style="width: 322px; height: 282px; margin-left: 10px; margin-right: 10px; float: right;" /></a>Efforts to model this process using reliability functions, such as failure rates and repair response are beyond the scope of this analysis, but are actively being pursued by utilities and others.</p>
<p>2) <em>Time Series Methodology – Coincident Peak:</em> One way to understand the storm and the restoration effort is to plot a time series for the number of customers still without power. As shown in Figure 4, this simple curve is actually the net of new customers without power and the number restored in each period. One way to compare different storms is to show them all with a coincident peak – in other words, to align the timelines so that they coincide at the peak number of customers out.</p>
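The coincident-peak alignment amounts to re-indexing each storm’s hours relative to its own peak. A sketch with two invented storm series:

```python
# Re-index each storm's "customers still out" series so hour 0 is its peak,
# letting different storms be overlaid on one coincident-peak chart.
# Hypothetical hourly series for two storms.
storms = {
    "storm_A": [100, 900, 1400, 800, 300, 50],
    "storm_B": [2000, 5000, 3500, 1200, 400, 100, 20],
}

aligned = {}
for name, series in storms.items():
    p = series.index(max(series))  # hour of peak customers out
    # map each observation to an hour offset relative to the peak
    aligned[name] = {h - p: v for h, v in enumerate(series)}

# Both storms now peak at relative hour 0.
print(aligned["storm_A"][0], aligned["storm_B"][0])  # 1400 5000
```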
<p>To make better apples-to-apples comparisons, Figure 5 shows seven big storms where 10 to 20 percent of customers were out at the peak. (Note: the legend shows the company number and a letter to designate the particular storm.) In the effort to mine the data for useful comparisons, we eliminated one storm in this range that had a false peak – <em>i.e., </em>a second wave of outages occurred that exceeded the first peak. Also, companies were not uniform in reporting the number of hours prior to the peak. As a result, some companies ran out of hourly buckets and we had to do some smoothing. (Note: the time buckets change to 12-hour buckets at the tail of the graph, which makes the data look like it falls off a cliff at the end.)</p>
<p><a href="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-5.jpg" target="_blank"><img alt="" src="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-fig5-L.jpg" style="width: 323px; height: 282px; margin: 5px 10px; float: left;" /></a>Even with these data problems, we were able to develop a reasonable picture of storm restoration activity. Again, there was no average storm. Some ended quite quickly; some dragged on; some had a secondary peak. Most did peter out after 48 hours, but some had customers out for five days. The best performance was exponential, whereby the restoration was more rapid in the early hours than in the later hours (perhaps due to sectionalizing and bringing on big blocks of customers early). The worst performance was almost linear, with a constant restoration rate.</p>
<p><a href="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-6.jpg" target="_blank"><img alt="" src="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-fig6-L.jpg" style="width: 322px; height: 307px; float: right; margin: 5px 10px;" /></a>3) <em>Normalizing the Time Series as a Percent of Coincident Peak:</em> For the seven worst storms we analyzed (where 10 to 20 percent of customers were without power at the peak of the storm), we normalized the time series as a percentage of peak. The resulting graph (<em>Figure 6</em>) allowed the performance to be seen more clearly, but also magnified the differences in performance. The average trend line is very nearly linear, until the end, where the time buckets go from one-hour to 12-hour buckets.</p>
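Normalizing to percent of peak is then a one-line transformation per storm. The series below is hypothetical:

```python
# Express a "customers still out" series as a percentage of its own peak,
# so storms of different absolute size plot on the same 0-100 scale.
still_out = [100, 900, 1400, 800, 300, 50]  # hypothetical hourly counts

peak = max(still_out)
pct_of_peak = [round(100.0 * v / peak, 1) for v in still_out]
print(pct_of_peak)  # [7.1, 64.3, 100.0, 57.1, 21.4, 3.6]
```

This magnifies relative differences in restoration rate, which is exactly the effect noted for Figure 6.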
<p>Figure 7 shows key benchmarking statistics measured from peak.</p>
<p>4) <em>Composite Benchmarks:</em> Several companies have reported success in using restoration curves for their historical storms as reasonably accurate predictors of future performance.</p>
<p><a href="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-7.jpg" target="_blank"><img alt="" src="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-fig7-L.jpg" style="width: 322px; height: 205px; margin: 5px 10px; float: left;" /></a>Our experience with composite performance from multiple utilities is not so sanguine. The control lines around the average curve are quite wide – as much as +/- 100 percent for most of the time line. The lesson is perhaps that an individual utility can learn how its particular system responds to the types of storms it typically sees by developing and analyzing its own restoration curves. For the composite panel, the types of storms are quite different as are the demographics of each system and service territory. Also, companies design their systems differently based on their most probable severe weather risks; as one Canadian company commented, “we design our system for bad winter storms.”</p>
<p>The benchmarks from the composite panel might still be useful to bracket an individual company’s performance. The quartile values for restoration are shown on Figure 8. Note the impact of a second peak in the data that was experienced by two of the companies.</p>
<p>A company that consistently performs worse than the panel might investigate the factors contributing to this performance. It should be understood, however, that the control limits are very wide; a company’s position relative to the panel reflects storm severity, system integrity, and other factors as well as the efficiency of the storm restoration process itself, and should not be taken directly as a measure of that process.</p>
<p>5) <em>Work Force Effects:</em> The restoration rate would also logically seem to be affected by the available workforce. The critical resources are linemen and tree personnel, but other internal resources can be significant – <em>e.g.,</em> damage assessment, drivers, wire guards. We asked companies to report, for each storm, the maximum number of full-time equivalent (FTE) personnel in these categories, and to distinguish between internal resources (company employees and regular contractors) and external personnel brought in.</p>
<p><a href="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-8.jpg" style="text-decoration: underline;" target="_blank"><img alt="" src="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-fig8-L.jpg" style="margin: 5px 10px; width: 322px; height: 287px; float: left;" /></a></p>
<p>We received useful FTE data for 12 worst storms. Figure 9 shows a ratio of the maximum number of FTEs per 1,000 customers out at peak; the range is from 1 to 38 (including one very high value, and one very low). This ratio does not necessarily measure worker productivity, since some restoration work can be done automatically and in large sections.</p>
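The Figure 9 ratio reduces to a single division. The numbers below are invented for illustration:

```python
# Maximum restoration FTEs per 1,000 customers out at peak (the Figure 9
# ratio). Hypothetical values for one storm.
max_fte = 450  # peak count of linemen, tree crews, and other personnel
customers_out_at_peak = 120_000

fte_per_1000 = max_fte / (customers_out_at_peak / 1000)
print(fte_per_1000)  # 3.75
```

A value of 3.75 would sit toward the low end of the 1-to-38 range reported by the panel.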
<p>Figure 9 provides a general sense that linemen and tree workers are about evenly matched, that other internal resources can be significant, and that external resources were generally a small factor for most storms. Companies that reported no internal tree trimmers probably misunderstood our instructions to include regular contract crews as internal resources.</p>
<p>The chart is sorted so that the smallest storms are at the top of the graph, and the largest at the bottom. There is some apparent correlation between maximum FTEs and storm size; it would seem reasonable that bigger storms are more constrained on resources.</p>
<p><a href="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-9.jpg" style="text-decoration: underline;" target="_blank"><img alt="" src="http://www.fortnightly.com/sites/default/files/article_images/130701-Spark-fig9-L.jpg" style="margin: 5px 10px; width: 321px; height: 285px; float: right;" /></a></p>
<p>Several of the data points were paired – four companies provided two data points each. When the storms were of similar size, the ratio of maximum FTEs to customers out at peak was similar, which suggests consistency in response.</p>
<h4>Understanding Outage Performance</h4>
<p>Our effort to benchmark major storm restoration helped us develop several interesting findings. While we concluded that there was no such thing as an average storm, we did identify ways to analyze storm restoration response that should be of assistance to utilities in understanding their performance. Additional storm data collected through an annual T&amp;D benchmarking program will allow us to update the analysis.</p>
<p>Additional research is needed in the area of understanding the underlying impact of storms on failures in the system, as well as the utility repair response. There is also a need to inventory existing databases on storm performance and develop some consistent guidelines.</p>
<p>Composite benchmarking data, when presented in a consistent way, can give a utility an idea of its comparative performance. However, it should be understood that differences in storm restoration performance may be attributable to differences in weather, geography, system design, and system condition, as well as differences in how companies organize and manage their storm restoration efforts.</p>
<p><strong><em>About the Authors: Tim Szybalski</em></strong><em> (<a href="mailto:tim.szybalski@1qconsulting.com?subject=Spark%20article">tim.szybalski@1qconsulting.com</a>) is a director at <a href="http://1stquartileconsulting.com">First Quartile Consulting (http://1stquartileconsulting.com)</a>, a management consulting firm that also performs annual benchmarking studies for North American utilities. <strong>Dave Carter </strong>(<a href="mailto:dave.carter@1qconsulting.com?subject=Spark%20article">dave.carter@1qconsulting.com</a>) recently retired from We Energies and is now a part-time contributor to First Quartile Consulting’s benchmarking studies and consulting projects. The authors acknowledge the contributions of Jesse Medlock, Robert Jones, and Rocky Morris at Oncor.</em></p>
<hr align="left" size="1" width="33%" />
<p><a name="fn1" id="fn1"></a>1. <em>Thanks to Jesse Medlock at Oncor for these descriptions</em></p>
</div></div></div><div class="field field-name-field-import-deck field-type-text-long field-label-above"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even">Superstorms call for superior responses.</div></div></div><div class="field field-name-field-byline field-type-text field-label-above"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even">By Tim Szybalski and David Carter, First Quartile Consulting</div></div></div><div class="field field-name-field-import-image field-type-image field-label-above"><div class="field-label">Image:&nbsp;</div><div class="field-items"><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/Benchmarking-storms.jpg" width="1500" height="994" alt="" /></div></div></div><div class="field field-name-field-subtitle field-type-text field-label-above"><div class="field-label">Subtitle:&nbsp;</div><div class="field-items"><div class="field-item even">Superstorms call for superior responses.</div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix">
<div class="field-label">Tags:&nbsp;</div>
<div class="field-items">
<a href="/tags/superstorm-sandy">Superstorm Sandy</a><span class="pur_comma">, </span><a href="/tags/hurricane-irene">Hurricane Irene</a><span class="pur_comma">, </span><a href="/tags/outage-response">outage response</a><span class="pur_comma">, </span><a href="/tags/restoration">restoration</a><span class="pur_comma">, </span><a href="/tags/benchmarking">Benchmarking</a><span class="pur_comma">, </span><a href="/tags/reliability">Reliability</a><span class="pur_comma">, </span><a href="/tags/major-storm">major storm</a> </div>
</div>
<div class="field field-name-field-intro-text field-type-text-long field-label-above"><div class="field-label">Intro Text:&nbsp;</div><div class="field-items"><div class="field-item even">There&#039;s no such thing as an average superstorm, and every situation requires a unique strategy. But data on utilities&#039; performance in response to major storms helps identify best practices -- and lessons learned. </div></div></div><div class="field field-name-field-publishing-date field-type-datetime field-label-above"><div class="field-label">Publishing Date:&nbsp;</div><div class="field-items"><div class="field-item even"><span class="date-display-single">Friday, June 28, 2013 (All day)</span></div></div></div>
Posted Tue, 25 Jun 2013 19:45:50 +0000 (mburr)
Rates, Reliability, and Region (http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region)
<div class="field field-name-field-import-deck field-type-text-long field-label-inline clearfix"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Customer satisfaction and electric utilities.</p>
</div></div></div><div class="field field-name-field-import-byline field-type-text-long field-label-inline clearfix"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even"><p>William P. Zarakas, Philip Q Hanser, and Kent Diep</p>
</div></div></div><div class="field field-name-field-import-bio field-type-text-long field-label-inline clearfix"><div class="field-label">Author Bio:&nbsp;</div><div class="field-items"><div class="field-item even"><p><b>William P. Zarakas</b> and <b>Philip Q Hanser</b> are Principals with <i>The Brattle Group</i>. <b>Kent Diep</b> is a Research Analyst at <i>The Brattle Group</i>.</p>
</div></div></div><div class="field field-name-field-import-volume field-type-node-reference field-label-inline clearfix"><div class="field-label">Magazine Volume:&nbsp;</div><div class="field-items"><div class="field-item even">Fortnightly Magazine - January 2013</div></div></div><div class="field field-name-field-import-image field-type-image field-label-above"><div class="field-label">Image:&nbsp;</div><div class="field-items"><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/1301-FEA4-fig1_0.jpg" width="1050" height="805" alt="Figure 1 - Summary of Variables Included In Empirical Analysis" title="Figure 1 - Summary of Variables Included In Empirical Analysis" /></div><div class="field-item odd"><img src="http://www.fortnightly.com/sites/default/files/1301-FEA4-fig2.jpg" width="1006" height="1135" alt="Figure 2 - SAIDI vs. Customer Satisfaction" title="Figure 2 - SAIDI vs. Customer Satisfaction" /></div><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/1301-FEA4-fig3.jpg" width="1006" height="1135" alt="Figure 3 - Rates vs. Customer Satisfaction" title="Figure 3 - Rates vs. 
Customer Satisfaction" /></div><div class="field-item odd"><img src="http://www.fortnightly.com/sites/default/files/1301-FEA4-fig4.jpg" width="1005" height="1075" alt="Figure 4- Residential Customer Satisfaction" title="Figure 4- Residential Customer Satisfaction" /></div><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/1301-FEA4-fig5.jpg" width="1006" height="1078" alt="Figure 5 - Outage Duration" title="Figure 5 - Outage Duration" /></div><div class="field-item odd"><img src="http://www.fortnightly.com/sites/default/files/1301-FEA4-fig6.jpg" width="1425" height="1584" alt="Figure 6 - Summary of Regression Results" title="Figure 6 - Summary of Regression Results" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>It’s no surprise that customer satisfaction is increasingly important to retail electric utilities. Satisfying customers was important during the old days of utility regulation, when utility customers had little if any choice concerning their electricity supplier. It’s even more important today, when customers can invest in equipment to bypass the grid in whole or in part, and it will inevitably be more pronounced in the future, when distributed generation options become more widespread and affordable.</p>
<p>The Brattle Group’s recent research on customer satisfaction, based largely on an empirical analysis, studied the relationships across a data set that included: measures of customer satisfaction, indicators of electric system reliability, and utility cost structures as well as system characteristic and demographic variables. This analysis confirmed some of the views that have been widely held by utility managers, but which were based more on a sense of conventional wisdom than backed up by the data. It provided a few surprises as well, which are important to take into account as utilities brace for mounting competition in retail markets and develop strategies to enhance satisfaction among their customers.</p>
<h5 class="p4"><b>Defining Satisfaction</b></h5>
<p>Customer satisfaction largely depends on whether a company’s products or services fulfill a customer’s expectations—<i>i.e.,</i> whether it meets, exceeds, or falls short. Quantifying customer satisfaction involves accumulating specific customer perceptions, measured through surveys—typically using a 5- or 10-point scale, ranging from “extremely dissatisfied” to “extremely satisfied”—that are presented at various levels of aggregation.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/4#1" title="1. The most common scales used to measure customer satisfaction are classical “Likert” scales, which describe the range of possible attitudes from “very dissatisfied” to “very satisfied” using numeric values.">1</a></sup></strong></p>
<p>It’s fairly common practice for companies to survey customers in order to understand how customers perceive the service they receive; it’s even more widespread in recent years with the evolution of Internet and app-based survey instruments. Surveys frequently pay significant attention to non-price dimensions, especially in price-competitive environments—such as airlines and retail banking—as companies look for ways to differentiate themselves against competitors.</p>
<p>Historically, electric utilities haven’t been directly subject to price competition for electric products due to geographic franchise arrangements—although cross-fuel competition in many areas could be quite fierce. It could be argued that, with nowhere else to turn, customers had few alternatives to their local utility, thereby reducing the importance to utility management of satisfied customers. However, even the most short-sighted utility managers recognized that satisfying customers was important and that it needed to be included as an element of business strategy. For one reason, state regulatory commissions typically required utilities under their jurisdiction to conduct customer satisfaction surveys—which were taken into account in rate and other proceedings. For another, bond and equity analysts also looked at current and projected rates, as well as other customer issues when rating investments in electric utilities.</p>
<p>Currently, the threat of losing customers due to increased competition and potential bypass of the electric distribution system through distributed generation is driving electric utilities’ interest in customer satisfaction. Investment in utility infrastructure is projected to increase as growth in sales is declining; at the same time, alternatives to the electric grid are becoming more widespread and cost competitive. Also, the rates for delivering electric power are almost always volume-based, which means that defections of customers can have a large impact on unit rates. As a result, attracting and retaining customers to keep prices affordable is more important than ever.</p>
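The volume-based rate point can be made concrete with a toy calculation: a largely fixed delivery revenue requirement spread over fewer delivered units raises the unit rate for the customers who remain. All figures below are hypothetical.

```python
# Why customer defection raises unit rates: a mostly fixed delivery revenue
# requirement is recovered over declining volumes. Hypothetical numbers.
revenue_requirement = 100_000_000  # $/year, largely fixed costs
deliveries_mwh = [2_000_000, 1_900_000, 1_800_000]  # shrinking deliveries

unit_rates = [round(revenue_requirement / mwh, 2) for mwh in deliveries_mwh]
print(unit_rates)  # [50.0, 52.63, 55.56] in $/MWh
```

A 10 percent drop in volume pushes the unit rate up by roughly 11 percent, which in turn strengthens the incentive for further bypass.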
<p>Another development that has brought utility customer satisfaction to the forefront is the use of benchmarking studies, which compare levels of customer satisfaction across utilities. High scores in benchmarking studies can show that utilities are recognized by their customers as being the best in class. This notion of comparing levels of customer satisfaction across utilities can be perplexing to many utility managers. Utilities typically serve all of the retail customers in a defined geographic area on an exclusive basis; some residential—as well as small commercial—customers reside in the same utility service area for all of their lives. This means that customers aren’t necessarily in a position to directly compare their utility’s performance against other utilities, as they would be able to rank their experiences with banks or gas stations. That is, they might not know how good or bad they have it. Nonetheless, utility customers certainly have views about the quality and value of electric services, which are voiced, sometimes vociferously, and best-in-class comparisons have become an embedded part of grading companies.</p>
<p>As a result, utilities have expended considerable effort to understand the drivers of high customer satisfaction ratings, and have undertaken initiatives to improve their scores. They have enhanced their staffs, implemented new information systems, and retained experts to help them strengthen their relationships with customers. Many of their initiatives were borrowed from the best practices of customer-facing industries, including development of user-friendly web interfaces, investment in state of the art customer care centers, and training to make employees more empathetic to the plights of their customers. Other initiatives were more specific to electric utility operations, notably enhancing the electric distribution system in order to provide more reliable service. Finally, and certainly not least, numerous utilities have focused on reducing their cost structures in order to demonstrate to customers that they are delivering as much value per dollar as possible.</p>
<p>Most of the above-referenced initiatives—except, of course, for the cost-reduction initiatives—can be expensive. Thus, utility managers and budgeters frequently seek to trade off costs against benefits; that is, to target the initiative that will provide the biggest bang—or increase in customer satisfaction—for the buck. In some cases the answer might be obvious, but more often it's elusive, because a number of factors are at work. One utility might improve its standing among its customers by upgrading its distribution system, while another might do better by improving its customer interfaces or customizing marketing programs for a segment of particularly concerned customers. The conventional wisdom—<i>i.e.,</i> delivering highly reliable electric service at a low price—might provide good overall direction, but it doesn’t provide an actionable plan for addressing customer satisfaction at any particular utility.</p>
<h5 class="p4"><b>Industry Benchmarks</b></h5>
<p>Perhaps the most widely-known benchmark of customer satisfaction comes from J.D. Power and Associates, which surveys customers in a variety of industries and develops scores for the participating companies. For the electric utility industry, customer satisfaction scores were developed for nearly 125 electric utilities, including public utilities—<i>i.e., </i>municipals and cooperatives—as well as investor-owned utilities.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/4#2" title="2. The most recent J.D. Power survey included a panel of 124 electric utilities, 85 of which were investor-owned and 39 were non-investor-owned utilities. The panel was smaller in 2006 and 2007, with roughly 80 public and investor owned electric utilities. Residential customer satisfaction is developed on a 1,000-point scale. In 2012, the average score among the electric utilities included in the study was 625.">2</a></sup></strong> Many utilities also survey their customers on their own, the results of which are treated confidentially. The J.D. Power survey is among the few instruments that compare utilities’ customer satisfaction on a consistent basis and are publicly available.</p>
<p>J.D. Power produces an annual report that provides a ranking of the utilities included in the study,<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/4#3" title="3. J.D. Power also provides awards to the top performers in several categories, including those based on size and geographic region.">3</a></sup></strong> summarizes the results, and provides insight into the trends in utility customer satisfaction scores. For example, a series of storms in 2011 appears to have had a significant effect on customer satisfaction, specifically with respect to power quality and reliability as well as communications related to outage restorations. In some cases, utilities might be able to act almost immediately on study findings. However, in many cases—such as improving levels of power quality and reliability, which might require construction, development, and implementation of new systems—addressing problem circumstances can take years. Further, it can take some time—perhaps years—for customers to fully register the effects of hard or soft system enhancements, especially since customers tend to notice the bumps in the road more than the smooth stretches.</p>
<p>Utilities have long puzzled over the levers of customer satisfaction. Specifically, they face the classic balancing act between cost and quality. They can engineer a bullet-proof distribution system that would deliver very high levels of reliability regardless of the many perils it faces—including ice storms, hurricanes, errant drivers, and even damage caused by squirrels and birds—but it would likely come at a very high cost, especially if such hardening included undergrounding a significant percentage of their distribution systems. Thus, utilities have long sought an algorithm that illuminates the customer trade-off of price versus quality of service. Further, they’re interested in whether other levers, such as investment in customer service systems and customized product offerings, might better fulfill their customers’ expectations.</p>
<p>The Brattle Group’s analysis seeks to confirm or refute the views widely held by utility managers concerning the key factors that determine customer satisfaction. It compiled a data set that covers utility performance (<i>e.g.,</i> financial, system operations and customer satisfaction scores), levels of investment, operations and maintenance expenditures, and demographic characteristics (primarily concerning geography and customer density) for a panel of roughly 30 investor-owned electric utilities located throughout the United States, covering a period of six years.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/4#4" title="4. In addition to the customer satisfaction scores from J.D. Power, data included in this analysis come from several sources, primarily Form 1 reports filed by electric utilities to the Federal Energy Regulatory Commission (FERC) and from reliability reports made public by state regulatory commissions or from electric utilities themselves. Not all utilities have publicly available information concerning customer satisfaction scores or consistent reliability indicators. Thus, the size of the data set is limited by the public availability of consistent data.">4</a></sup></strong> The primary factors considered in the analysis are summarized in Figure 1.</p>
<p>Based on common utility wisdom, a quick look at these data might be expected to show directly observable relationships between customer satisfaction and the various explanatory variables summarized above. For example, an electric utility that consistently invested in and maintained its distribution systems—as evidenced by above average levels of spending—might be expected to realize high levels of reliability. And if that same utility also had invested and maintained customer service systems and had low rates, it would achieve high customer satisfaction results. Finally, those relationships could be stretched into a matrix or algorithm, through which utility managers could manage their way to strong customer satisfaction. For example, perhaps they could spend a little less on, say, distribution infrastructure per year, in order to keep rates down without triggering noticeable levels of system degradation, with the overall result of happier customers.</p>
<p>All of this seems to make sense. However, as shown in Figures 1 and 2, scatter plots of any two variables don’t present any clear pictures. Part of the explanation for this might lie in the complexity among relationships. Few if any utilities simultaneously achieve the combinations of spending, reliability, and rates to clearly make the case.</p>
<p>Figures 2 and 3 depict the relationships between customer satisfaction scores with reliability and price, respectively—both hypothesized to be important explanatory variables of customer satisfaction. These scatter plots indicate that the majority of observations fall within a fairly tight range. However, fitting a trend line within the scatter would be challenging at best. Furthermore, scatter plots of two variables at a time—<i>i.e.,</i> customer satisfaction scores versus a single independent variable—don’t begin to explain the relative significance of a single explanatory variable compared to other such variables.</p>
<h5 class="p4"><b>Interpreting Empirical Analysis</b></h5>
<p>A review of the data included in the set confirmed definite differences across utilities concerning customer satisfaction scores as well as some of the key variables that might explain it—such as the extent of power outages. Figures 4 and 5 illustrate the distribution of J.D. Power customer satisfaction scores (based on surveys of residential electric customers) and the duration of power outages (SAIDI measured including major events) for the utilities included in the panel.</p>
<p>The figures indicate that these data tend to be fairly tightly distributed, which means that differences across utilities might not be directly observable through a graphic or visual inspection. They also indicate that explaining the determinants of customer satisfaction might require expressing some of the dependent variables in natural log form.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/4#5" title="5. It is clear that SAIDI scores are asymmetrically distributed, and appear to be approximate a log normal distribution. This means that we can change the form of SAIDI to log normal—or ln (SAIDI)—to better express its distribution in a regression analysis.">5</a></sup></strong></p>
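<p>The log transform noted above can be illustrated with a few lines of Python. This is a minimal sketch using made-up SAIDI values (not data from the study), showing how the natural log pulls a storm-driven outlier back toward the rest of the distribution:</p>

```python
import math

# Hypothetical SAIDI values (outage minutes per customer per year) for
# five utilities. Illustrative numbers only, not data from the study.
saidi = [90.0, 110.0, 135.0, 180.0, 600.0]  # one storm-heavy outlier

# The natural log compresses the long right tail, so the transformed
# variable is closer to symmetric and better behaved in a regression.
ln_saidi = [math.log(x) for x in saidi]

# The outlier sits about 4.4x above the median in raw terms...
raw_ratio = saidi[-1] / saidi[2]        # 600 / 135, about 4.44
# ...but only about 1.3x above it in log space.
log_ratio = ln_saidi[-1] / ln_saidi[2]  # about 6.40 / 4.91, roughly 1.30
```

<p>The same compression is what makes a trend line easier to fit in a regression: a single storm-heavy year no longer dominates the sum of squared errors.</p>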
<p>A regression analysis confirmed much of the conventional wisdom concerning customer satisfaction and also provided a few additional insights as to causation.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/4#6" title="6. Regression analyses—assuming that the results are statistically significant—provide an indication of the importance of an independent variable in explaining changes in the dependent variable. As a general practice, the results of a regression are summarized by displaying the coefficient of the independent variables considered, as well as indicating the degree to which those variables are statistically significant (as measured by t-scores).">6</a></sup></strong> This analysis used utility customer satisfaction score as the dependent variable, with independent variables including: price, reliability, spending on distribution systems, spending on customer service, the density of population in the utility’s service area, and the U.S. geographic region where the utility is located.</p>
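<p>The shape of such a regression can be sketched with ordinary least squares on a synthetic panel. Everything here is hypothetical: the variable names, the coefficient values, and the simulated data are ours, chosen only to mirror the direction of the reported findings, not to reproduce them.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 180  # roughly 30 utilities x 6 years, as in the panel described above

# Hypothetical regressors (names and magnitudes are ours, not the study's).
price = rng.normal(12.0, 2.0, n)       # average cents per kWh
ln_saidi = rng.normal(4.9, 0.4, n)     # natural log of outage duration
cust_spend = rng.normal(40.0, 8.0, n)  # customer-service dollars per customer

# Simulate satisfaction scores in which reliability matters more than price,
# mirroring only the direction of the findings discussed in this article.
score = (700.0 - 25.0 * ln_saidi - 1.5 * price + 0.1 * cust_spend
         + rng.normal(0.0, 5.0, n))

# Ordinary least squares: stack a constant column plus the regressors.
X = np.column_stack([np.ones(n), price, ln_saidi, cust_spend])
beta, _, _, _ = np.linalg.lstsq(X, score, rcond=None)
# beta[1:] recovers roughly (-1.5, -25, 0.1), up to sampling noise.
```

<p>In practice the coefficients would of course be estimated from real panel data, and their t-scores examined before drawing any conclusions.</p>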
<p>A summary of results is included in Figure 6. The key findings fall into four areas. First, the analysis indicated that, indeed, system reliability—as measured by the duration of service interruptions, their frequency, or both—significantly explains customer satisfaction scores. Furthermore, a separate but related regression showed that spending by utilities on their distribution systems was significantly correlated with achieved levels of reliability. This confirms general understanding of the cycle and effect of utility investment and operations and maintenance spending: achieving high levels of reliability requires consistent investment and spending.</p>
<p>Second, the analysis showed that rates—as measured by average residential revenue per kWh—play a significant role in explaining why customers rank utilities at a high or low level with respect to customer satisfaction. However, rate levels are less of a determinant than system reliability. In order to make the customer satisfaction scores more meaningful, the analysis standardized the customer satisfaction variable,<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/4#7" title="7. Standardizing a variable involves centering it about the sample’s mean value and dividing it by the sample’s standard deviation. This yields a customer satisfaction variable that is measured relative to the panel of observations (i.e., not in absolute terms).">7</a></sup></strong> which allowed the effects of the independent variables upon the dependent variable to be compared directly. As indicated in Figure 6, improvements in reliability could increase customer satisfaction scores by roughly 0.23 standard deviations from the mean, while a slight decrease in rates would improve scores by less than 0.01 standard deviations. This suggests that, for the panel overall, customers might forgive their utility if rates go up, as long as they perceive that the service they receive is improving or at least consistently reliable.</p>
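<p>Standardizing the dependent variable, as described in footnote 7, is a simple z-score transformation. A minimal sketch with illustrative scores (chosen to center on the 625 panel average noted in footnote 2):</p>

```python
import statistics

# Hypothetical satisfaction scores for a small panel (1,000-point scale).
# Illustrative values only, centered on the 625 average noted earlier.
scores = [601, 618, 625, 632, 649]

mean = statistics.mean(scores)  # 625.0
sd = statistics.stdev(scores)   # sample standard deviation, about 17.68

# Each score expressed as standard deviations from the panel mean, so the
# effect sizes of different regressors can be compared on a common scale.
z = [(s - mean) / sd for s in scores]
```

<p>On this scale, a regression coefficient of 0.23 means an improvement of 0.23 standard deviations relative to the panel, rather than 0.23 raw points on the 1,000-point scale.</p>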
<p>Third, geography and location provide statistically significant explanations of customer satisfaction scores. In fact, the regression analysis indicated that the single biggest impact on overall customer satisfaction scores comes from geographic variables—which was a somewhat unexpected finding.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/5#8" title="8. The analysis used “dummy” variables through which the electric utilities included in the panel were assigned to the Northeast, Southeast, Midwest, Southwest or Northwest.">8</a></sup></strong> Specifically, utilities in the Northeastern U.S. are statistically at a disadvantage compared to utilities located elsewhere in the U.S. when customers rate their levels of satisfaction. The coefficient for utilities in the Northeast is statistically insignificant—<i>i.e., </i>it’s essentially zero—while the coefficients for all other regions are positive and statistically significant. That suggests an unfortunate locational distinction for Northeastern utilities. Comparatively, they’re starting at ground zero and need to work their way up from there, whereas utilities in the other parts of the country begin above the mean. It’s possible that this geographic effect reflects cultural predispositions; it also might be the result of cross-correlations with storm-related service interruptions.</p>
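<p>The regional "dummy" variables described in footnote 8 can be sketched as follows. The helper below is hypothetical, but it shows the standard construction: one 0/1 indicator per region, with one baseline category (here, the Northeast) omitted so that its effect is absorbed into the regression constant and each remaining coefficient reads as a shift relative to a Northeastern utility.</p>

```python
REGIONS = ["Northeast", "Southeast", "Midwest", "Southwest", "Northwest"]

def region_dummies(region, baseline="Northeast"):
    """Return 0/1 indicators for every region except the omitted baseline.

    Dropping one category avoids perfect collinearity with the constant
    term; each remaining coefficient then measures the satisfaction shift
    relative to a utility in the baseline region.
    """
    return {r: int(region == r) for r in REGIONS if r != baseline}

# A Midwestern utility gets a 1 on the Midwest indicator only; a
# Northeastern utility gets all zeros, i.e., the baseline case.
row = region_dummies("Midwest")
```

<p>With the Northeast as the baseline, the article's finding that all other regions carry positive, significant coefficients translates directly into "everyone else starts above the Northeast."</p>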
<p>Somewhat related to geography, the analysis showed that population density of a utility’s service area—<i>i.e.,</i> a proxy for how many customers are served per mile of utility distribution system—is another statistically significant explanatory factor and positively associated with customer satisfaction. However, the effect of the density of the distribution system upon customer satisfaction scores is less than the impact stemming from geographic location.</p>
<p>Finally, electric utilities’ spending on their customer service functions is statistically significant, but explains little. This came as a surprise in light of recent findings associated with reviews of utility performance in response to last year’s storms in the Mid-Atlantic and Northeast. Those studies found that customer frustration was tied to poor communications by utilities, frequently more so than to physical restoration efforts and results. Thus, those utilities that spent more on their customer service functions—in the form of system upgrades and other resources—would be expected to have happier customers.</p>
<p>This part of the regression results likely reflects data and measurement issues more than it supports a finding that spending on customer service doesn’t matter. The variable included in the regression simply captures dollars spent per customer and per kWh of sales. It might be fair to infer that higher levels of spending on customer service can be associated with more sophisticated systems. However, it doesn’t necessarily mean that those utilities have better communications with their customers—especially during crucial events.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/5#9" title="9. The analysis also considered lagging the customer service variable—e.g., t-1, t-2, etc.—which captured the impact that past spending has on current levels of customer satisfaction. Results for the lagged variable were similar to the results for the contemporaneous variable.">9</a></sup></strong></p>
<h5 class="p4"><b>Analysis in Practice</b></h5>
<p>At its highest level, this analysis confirms the primary suppositions underlying why some utilities succeed in achieving high customer satisfaction ratings. It supports the logical hypothesis that good service—<i>i.e.,</i> high levels of reliability, or low SAIDI—combined with low prices are key to satisfying customers.</p>
<p>Clearly there’s merit in developing empirical support for what common sense tells us must be so. However, the finding above is a prescription that can be applied to virtually any business; by itself, it provides little actionable direction to improve a utility’s customer satisfaction rating. In practice, recommending that utilities keep service levels up and prices down is about as useful as advising a stock broker to buy low and sell high.</p>
<p>The primary goal in conducting this research and analysis is to use it to develop actionable recommendations for electric utility managers.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/5#10" title="10. More so than incorporating our research into the academic literature. In order to be seriously considered among academic economists, the analysis will need to be fortified further—requiring elaboration upon the statistical dimensions of the analysis to better estimate the regression coefficients, the extent of their explanatory power, and the covariance across independent variables.">10</a></sup></strong> The analysis provides three key insights that can be used by utilities to improve customer satisfaction scores.</p>
<p>First, all customers expect reliable electric service at the lowest prices possible. Meeting this expectation requires system-wide investments and initiatives. Comparatively reliable service and reasonably priced delivery services, then, become the common denominators that electric utilities must provide in order to satisfy customers and regulators overall. This will satisfy a segment of customers; however, going above and beyond this foundation level of service must be addressed on an incremental basis.</p>
<p>Second, location matters. This means that customer needs and expectations vary across geographies, even among utilities with similar levels of reliability and rates. It also suggests that best practices—aimed at improving customer satisfaction scores—aren’t always portable. At first blush, the analysis might appear to indicate that some drivers of customer satisfaction are beyond the control of the utility. However, that doesn’t mean utilities in the Northeast should succumb to despair. Instead, it suggests that utilities have to proactively address these disconnects with their customers through additional customer research and analysis and more effective communications and interactions.</p>
<p>Third, recognizing variances might be more important than understanding averages. The regression analysis estimated variances and standard deviations across the panel of utilities. Likewise, customer preferences vary within utilities. While it’s possible to find the mix of cost and service that will generally satisfy customers at a common denominator level, there’s probably room to meet the expectations of a sub-segment of customers that are looking for higher levels of service. For example, a sub-set of the overall residential customer segment is interested in realizing greater energy efficiency or receiving higher quality power, and is willing to pay extra for it.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/5#11" title="11. More accurately, these customers are willing to make an initial investment—either directly or through their electric utility—with the expectation of realizing benefits in the form of lower overall costs in the future or higher levels of power quality.">11</a></sup></strong> These customers will be more satisfied with their utility because it enabled them to realize their goals, even though it came at a cost. By addressing the expectations of these customers separately—or incrementally—the utility also can dodge a bullet; it won’t upset its foundation customers by applying a system-wide upgrade, thereby increasing rates.</p>
<p>Utilities can realize such incremental improvements in customer satisfaction through market segmentation and other approaches. Utility marketing programs that address energy efficiency and power quality are considered to be successes because they show the utility understands the needs of a segment of its customers, and it applies tools necessary to help.<strong><sup><a href="http://www.fortnightly.com/fortnightly/2013/01/rates-reliability-and-region/page/0/5#12" title="12. Energy efficiency programs involve saving customers money by improving the efficiency of electricity consumption, ranging from caulking leaky windows in older homes to the mass replacement of light bulbs with LEDs in large warehouses. Programs that address power quality and voltage fluctuations also require an investment, frequently in an uninterruptible power supply that automatically switches the customer off the grid if it detects a transient condition on the line.">12</a></sup></strong> Plus they’re developed in an iterative fashion; that is, the programs are neither pushed by product developers nor pulled by segment managers, but instead are developed in response to customer demand.</p>
<p>Customer segmentation is hardly new to the electric utility industry. Utilities track a range of data in order to provide service and to bill customers, notably locations and energy consumption. Most utilities segment their customers based on these two criteria, in part because it’s useful when developing load forecasts, and in part because it’s the primary data that’s readily collected and available. From a customer satisfaction standpoint, segmenting customers along these lines doesn’t necessarily assist the utility in gaining insight into what it takes to satisfy those customers, nor does it lead to actionable strategies. This is primarily because customers who share common levels of electricity consumption and those who live in common locations have other characteristics that more fully define their expectations from their electric utility.</p>
<p>Customer segmentation by itself, however, is only meaningful if the utility can act to improve satisfaction in those segments—that is, if it has tools in place, or under development, to address customer needs and expectations. Segmentation can be enhanced, refined, or even outright changed, if utilities develop new tangible tools to address other unmet customer needs. For example, new programs enabled by smart meters, the smart grid, and services related to plug-in hybrid electric vehicles will require that utilities apply more sophisticated segmentation tactics to tailor programs to meet customer expectations.</p>
<p>Without this connection between segments and programs, however, segmentation is an academic exercise; utilities might be able to develop more nuanced, and perhaps more interesting segmentations of their customers, but they will lack the ability to improve their customers’ satisfaction.</p>
<h5 class="p4"><b>Beyond Conventional Wisdom</b></h5>
<p>This analysis provides an empirical basis for some of the conventional wisdom concerning the drivers of customer satisfaction assumed by utility managers. It also places these drivers in context. Most of the electric utilities in the panel have achieved relatively consistent and acceptable levels of reliability—in terms of the frequency and duration of service interruptions—which led to these factors being statistically significant. However, the tight cluster of these observations led to low coefficient values, suggesting that improvements in reliability wouldn’t move customer satisfaction scores that much. The same is true for rate reductions. This doesn’t mean that reliability and rates aren’t important to customers; quite the contrary is true. Customers have come to expect that utilities provide electric service within a certain band of reliability and rates. Low rates—or rates that are as low as possible—plus reliable service then becomes the common denominator of a utility’s customer satisfaction strategy.</p>
<p>The geographic region of a utility’s service territory plays a strong role in customer satisfaction, the strongest of all of the independent variables considered. This could be interpreted to suggest that achieving high levels of customer satisfaction is out of the control of the utility in question. However, such an interpretation would be overly simplistic. Instead, this part of the regression results indicates that customer satisfaction is largely driven by utility attention to the specific issues facing its unique customer base.</p>
<p>Is it possible to improve upon low customer satisfaction scores? Of course, but it might take time to overcome embedded customer biases. This will be particularly true for electric utilities in the Northeastern U.S., which are starting out with lower customer satisfaction scores than is the case for utilities located elsewhere in the country. Regulators and other observers need to keep this point in mind when gauging progress going forward.</p>
<p>In addition to meeting the common denominator of reliable electric service at low rates (or at least without notable increases in rates), electric utilities can improve their customer satisfaction scores by correcting observed deficiencies (such as communications and customer interactions) and by tailoring marketing programs to meet the expectations of specific customer segments, making those programs tangible enough to address specific customer needs. Generalized programs might make good sound bites, but they aren’t actionable enough to improve satisfaction levels for any particular group of customers.</p>
<h5 class="p4"><b>Endnotes:</b></h5>
<p class="p8"><a name="1" id="1"></a>1. The most common scales used to measure customer satisfaction are classical “Likert” scales, which describe the range of possible attitudes from “very dissatisfied” to “very satisfied” using numeric values.</p>
<p class="p8"><a name="2" id="2"></a>2. The most recent J.D. Power survey included a panel of 124 electric utilities, 85 of which were investor-owned and 39 were non-investor-owned utilities. The panel was smaller in 2006 and 2007, with roughly 80 public and investor owned electric utilities. Residential customer satisfaction is developed on a 1,000-point scale. In 2012, the average score among the electric utilities included in the study was 625.</p>
<p class="p8"><a name="3" id="3"></a>3. J.D. Power also provides awards to the top performers in several categories, including those based on size and geographic region.</p>
<p class="p8"><a name="4" id="4"></a>4. In addition to the customer satisfaction scores from J.D. Power, data included in this analysis come from several sources, primarily Form 1 reports filed by electric utilities to the Federal Energy Regulatory Commission (FERC) and from reliability reports made public by state regulatory commissions or from electric utilities themselves. Not all utilities have publicly available information concerning customer satisfaction scores or consistent reliability indicators. Thus, the size of the data set is limited by the public availability of consistent data.</p>
<p class="p8">5. SAIDI scores are asymmetrically distributed, and appear to approximate a log-normal distribution. This means that SAIDI can be expressed in natural log form—or ln(SAIDI)—to better capture its distribution in a regression analysis.</p>
<p class="p8">6. Regression analyses—assuming that the results are statistically significant—provide an indication of the importance of an independent variable in explaining changes in the dependent variable. As a general practice, the results of a regression are summarized by displaying the coefficient of the independent variables considered, as well as indicating the degree to which those variables are statistically significant (as measured by t-scores).</p>
<p class="p8">7. Standardizing a variable involves centering it about the sample’s mean value and dividing it by the sample’s standard deviation. This yields a customer satisfaction variable that is measured relative to the panel of observations (<i>i.e.</i>, not in absolute terms).</p>
<p class="p8">8. The analysis used “dummy” variables through which the electric utilities included in the panel were assigned to the Northeast, Southeast, Midwest, Southwest or Northwest.</p>
<p class="p8">9. The analysis also considered lagging the customer service variable—<i>e.g.</i>, t-1, t-2, etc.—which captured the impact that past spending has on current levels of customer satisfaction. Results for the lagged variable were similar to the results for the contemporaneous variable.</p>
<p class="p8">10. More so than incorporating our research into the academic literature. In order to be seriously considered among academic economists, the analysis will need to be fortified further—requiring elaboration upon the statistical dimensions of the analysis to better estimate the regression coefficients, the extent of their explanatory power, and the covariance across independent variables.</p>
<p class="p8">11. More accurately, these customers are willing to make an initial investment—either directly or through their electric utility—with the expectation of realizing benefits in the form of lower overall costs in the future or higher levels of power quality.</p>
<p class="p8">12. Energy efficiency programs involve saving customers money by improving the efficiency of electricity consumption, ranging from caulking leaky windows in older homes to the mass replacement of light bulbs with LEDs in large warehouses. Programs that address power quality and voltage fluctuations also require an investment, frequently in an uninterruptible power supply that automatically switches the customer off the grid if it detects a transient condition on the line.</p>
</div></div></div><div class="field field-name-field-article-category field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Category (Actual): </h3><ul class="links"><li class="taxonomy-term-reference-0"><a href="/article-categories/customer-engagement">Customer Engagement</a></li><li class="taxonomy-term-reference-1"><a href="/article-categories/rate-cases">Rate Cases</a></li><li class="taxonomy-term-reference-2"><a href="/article-categories/security-reliability-cip">Security, Reliability &amp; CIP</a></li><li class="taxonomy-term-reference-3"><a href="/article-categories/strategy-planning">Strategy &amp; Planning</a></li></ul></div><div class="field field-name-field-members-only field-type-list-boolean field-label-above"><div class="field-label">Viewable to All?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-article-featured field-type-list-boolean field-label-above"><div class="field-label">Is Featured?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-image-picture field-type-image field-label-above"><div class="field-label">Image Picture:&nbsp;</div><div class="field-items"><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/1301-FEA4.jpg" width="1500" height="868" alt="" /></div></div></div><div class="field field-name-field-fortnightly-40 field-type-list-boolean field-label-above"><div class="field-label">Is Fortnightly 40?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-law-lawyers field-type-list-boolean field-label-above"><div class="field-label">Is Law &amp; Lawyers:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix">
<div class="field-label">Tags:&nbsp;</div>
<div class="field-items">
<a href="/tags/customer-satisfaction">Customer satisfaction</a><span class="pur_comma">, </span><a href="/tags/brattle-group">Brattle Group</a><span class="pur_comma">, </span><a href="/tags/reliability">Reliability</a><span class="pur_comma">, </span><a href="/tags/competition">competition</a><span class="pur_comma">, </span><a href="/tags/infrastructure">Infrastructure</a><span class="pur_comma">, </span><a href="/tags/benchmarking">Benchmarking</a><span class="pur_comma">, </span><a href="/tags/saidi">SAIDI</a><span class="pur_comma">, </span><a href="/tags/system-average-interruption-duration-index">System average interruption duration index</a><span class="pur_comma">, </span><a href="/tags/saifi">SAIFI</a><span class="pur_comma">, </span><a href="/tags/system-average-interruption-frequency-index">System average interruption frequency index</a><span class="pur_comma">, </span><a href="/tags/caidi">CAIDI</a><span class="pur_comma">, </span><a href="/tags/customer-average-interruption-duration-index">customer average interruption duration index</a><span class="pur_comma">, </span><a href="/tags/jd-power-and-associates">J.D. 
Power and Associates</a><span class="pur_comma">, </span><a href="/tags/power-quality">power quality</a><span class="pur_comma">, </span><a href="/tags/outage-restoration">Outage restoration</a><span class="pur_comma">, </span><a href="/tags/utility-performance">utility performance</a><span class="pur_comma">, </span><a href="/tags/demographic">demographic</a><span class="pur_comma">, </span><a href="/tags/system-reliability">system reliability</a><span class="pur_comma">, </span><a href="/tags/distribution-system">distribution system</a><span class="pur_comma">, </span><a href="/tags/population-density">population density</a><span class="pur_comma">, </span><a href="/tags/communication">Communication</a><span class="pur_comma">, </span><a href="/tags/customer-segment">customer segment</a><span class="pur_comma">, </span><a href="/tags/market-segmentation">market segmentation</a><span class="pur_comma">, </span><a href="/tags/customer-interaction">Customer interaction</a> </div>
</div>
Sat, 29 Dec 2012 19:09:51 +0000 meacott 16386 at http://www.fortnightly.com
Benchmarking PM Practices
http://www.fortnightly.com/fortnightly/benchmarking-pm-practices
<div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>As part of an annual electric T&amp;D benchmarking program, First Quartile Consulting (1QC) has surveyed project management practices for several years. In addition, 1QC has conducted consulting projects to help several utilities improve their project management processes. The 2012 1QC benchmarking survey and results from previous years provide insights into the project manager’s (PM) role (including role in construction), staffing benchmarks, outcome measures, and software tools.</p>
<p><strong><span style="color:#b22222;"><span style="font-size: 16px;">PM Role and Results</span></span></strong><br />Most large utilities with significant T&amp;D capital budgets have implemented some form of project management organization. One of the differences among utilities is which projects get assigned a PM. Last year about half the companies assigned a PM to all capital projects (although the process might be simplified for small jobs); this year companies are more likely to have established various cost or complexity thresholds. Utilities are still searching for the right trade-off between PM overhead and the resulting benefits of better-managed projects. <em>(See Figure 1.)</em></p>
<p>PMs generally work in a matrix organization, and their authority can be described as limited, strong, or balanced. In a strong matrix the PM has more ability to control the functional resources assigned to the project, including more direct day-to-day control. Companies are more likely to report PM authority as “balanced” or “limited” now than in the past – which isn’t surprising given the functional orientation of most utilities. <em>(See Figure 2.)</em></p>
<p><span style="color:#b22222;"><span style="font-size: 16px;"><strong>PM Staffing</strong> </span></span><strong><span style="color:#b22222;"><span style="font-size: 16px;">Benchmarks</span></span></strong><br />Staffing ratios reported in the survey over the last few years have remained fairly constant. The average annual budget managed per PM is $28 million, with a median value of $19 million (vs. $25 million and $16 million in 2011), although there’s a fairly significant range between companies and between project managers, depending on the mix of projects and other factors. The number of support staff (cost and schedule analysts) has been roughly equal to the number of PMs, although it is influenced by the complexity of the supporting project management scheduling and cost software.</p>
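<p>A sketch of how the budget-per-PM statistics above are derived from survey responses; the per-PM figures below are hypothetical, chosen only to illustrate how the mean ($28 million) can sit well above the median ($19 million) when a few PMs carry very large portfolios.</p>

```python
# Hypothetical survey responses: annual capital budget managed per PM, in $M.
# (Illustrative values only -- not actual 1QC survey data.)
budgets_millions = [10, 14, 16, 19, 25, 40, 72]

mean_budget = sum(budgets_millions) / len(budgets_millions)

sorted_b = sorted(budgets_millions)
n = len(sorted_b)
median_budget = (sorted_b[n // 2] if n % 2
                 else (sorted_b[n // 2 - 1] + sorted_b[n // 2]) / 2)

print(f"Average budget per PM: ${mean_budget:.0f}M")    # pulled up by large portfolios
print(f"Median budget per PM:  ${median_budget:.0f}M")  # closer to the typical PM
```

<p>The gap between the two statistics is itself informative: a long right tail of very large projects pulls the average well above what a typical PM manages, which is why both values are worth reporting.</p>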
<p><strong><span style="color:#b22222;"><span style="font-size: 16px;">Outcome Measures</span></span></strong><br />The argument for formal project management is that it achieves better outcomes than informal processes. We have no definitive answer to prove or disprove this argument, although most utilities act as if they believe it’s true. The traditional outcome measures for projects are on-time, on-budget, and to-specifications. Unfortunately, the measures are hard to benchmark.</p>
<p>• <strong>Time:</strong> Most survey respondents report that more than 80 percent of projects are completed on the scheduled date – but there’s no agreement on when to freeze the due date.</p>
<p>• <strong>Cost:</strong> Similarly most companies create cost estimates at various stages in the process: 1) conceptual; 2) after preliminary engineering; and 3) after final design and release to construction. We’ve surveyed companies on accuracy goals at each project phase and received relatively consistent results, as shown in Figure 3. We’re still somewhat surprised, however, when we ask for actual performance against goals by phase; most utilities don’t measure performance against the estimate at each phase.</p>
<p>• <strong>Spec:</strong> Measuring the “to-specification” performance is also difficult to benchmark; a likely metric would be number of change orders, but the discipline in managing change orders varies greatly among utilities.</p>
<p>One portfolio measure that has emerged from survey and consulting work is “Percent jobs walked-in” – a measure of whether companies do what they say they’re going to do. One analogy is a trip to the grocery store. Say you give your son $20 to go to the store and buy bread and milk, and he returns with soda and chips and $1 in change. Using a portfolio measure, he was within 5 percent of budget – but in fact 100 percent of the activity was “walked in” at the last minute. Although “walked in” is an extremely important measure of outcomes, it’s difficult to benchmark; survey respondents say walk-ins account for 20 percent of jobs, but they have different starting points for finalizing the budget.</p>
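<p>The grocery-store analogy can be made numeric. The sketch below (with a hypothetical job list) shows how an aggregate budget-variance measure and a walked-in measure can tell opposite stories about the same portfolio.</p>

```python
# Hypothetical portfolio: the "grocery store" example as data. Each job has a
# planned flag (was it in the budget when the budget was frozen?) and a cost.
jobs = [
    {"name": "soda",  "actual": 11.0, "planned": False},
    {"name": "chips", "actual": 8.0,  "planned": False},
]
portfolio_budget = 20.0  # the $20 handed over for bread and milk

total_spent = sum(j["actual"] for j in jobs)
budget_variance_pct = abs(portfolio_budget - total_spent) / portfolio_budget * 100
walked_in_pct = 100 * sum(1 for j in jobs if not j["planned"]) / len(jobs)

print(f"Budget variance: {budget_variance_pct:.0f}%")  # within 5% of budget...
print(f"Jobs walked in:  {walked_in_pct:.0f}%")        # ...yet 100% unplanned
```

<p>The aggregate measure alone would score this portfolio as a success; only the walked-in measure reveals that none of the planned work was done.</p>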
<p><strong><span style="color:#b22222;"><span style="font-size: 16px;">Construction Role</span></span></strong><br />The construction role appears to be one of the most important, and most problematic, issues among the utilities we survey. Most utilities recognize the importance of having the PM involved very early in the planning process for large capital projects, but have difficulty engaging the construction folks earlier. Constructability and operability reviews are becoming more common, but the relationship between construction and engineering is often strained. The PM is seen as a key bridge between the groups – and companies are trying to find organizational solutions to involve construction, where most of the costs are incurred. Simply getting good status information is a challenge for most utilities that do not have well-developed construction management functions. When contractors are involved in design and engineering, the problems are exacerbated.</p>
<p><strong><span style="color:#b22222;"><span style="font-size: 16px;">Software Choices</span></span></strong><br />As in previous years, MS Project was the primary PM scheduling software in this year’s survey, although Primavera was a close second. Almost all the companies rely on other systems that are cobbled together to consolidate cost, budgets, labor-hours, and construction status into useful reports. Even more challenging for most utilities is the ability to adequately forecast workload and resource demands. A schedule can’t succeed without capacity planning (<em>e.g.,</em> resources and outage constraints) – and the software hasn’t proven up to the challenge for most utilities. <em>(See Figure 4.)</em></p>
<p><strong><span style="color:#b22222;"><span style="font-size: 16px;">Tracking Practices</span></span></strong><br />Although project management practices at utilities have remained fairly stable over the last several years, it’s important to track changes over time. Further, identifying best practices can help utilities adapt to their changing workloads and project demands.</p>
<p><span style="color:#b22222;"><strong>ABOUT THE AUTHOR: </strong></span><em>Tim Szybalski (<a href="mailto:tim.szybalski@1qconsulting.com?subject=Spark%20article">tim.szybalski@1qconsulting.com</a>) is a director at <a href="http://1stquartileconsulting.com">First Quartile Consulting (http://1stquartileconsulting.com)</a>, a management consulting firm that also performs annual benchmarking studies across electric T&amp;D and customer service for North American utilities. Szybalski’s career includes more than 15 years as an engineer and manager for SDG&amp;E and PG&amp;E.</em></p>
</div></div></div><div class="field field-name-field-import-deck field-type-text-long field-label-above"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even">Tracking changes in project management roles and outcomes.</div></div></div><div class="field field-name-field-byline field-type-text field-label-above"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even">By Tim Szybalski, First Quartile Consulting</div></div></div><div class="field field-name-field-import-image field-type-image field-label-above"><div class="field-label">Image:&nbsp;</div><div class="field-items"><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/civilengineering2.jpg" width="400" height="255" alt="Benchmarking PM Practices" /></div><div class="field-item odd"><img src="http://www.fortnightly.com/sites/default/files/121220-Spark-fig1.jpg" width="1011" height="609" alt="Project Management: Project Types and Sizes" /></div><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/121220-Spark-fig2.jpg" width="1011" height="552" alt="Project Manager&#039;s Authority Level" /></div><div class="field-item odd"><img src="http://www.fortnightly.com/sites/default/files/121220-Spark-fig3.jpg" width="1011" height="472" alt="Substation Project Accuracy Goals" /></div><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/121220-Spark-fig4.jpg" width="1011" height="597" alt="Project Management Software Preferences" /></div></div></div><div class="field field-name-field-subtitle field-type-text field-label-above"><div class="field-label">Subtitle:&nbsp;</div><div class="field-items"><div class="field-item even">Tracking changes in project management roles and outcomes.</div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix">
<div class="field-label">Tags:&nbsp;</div>
<div class="field-items">
<a href="/tags/project-management">project management</a><span class="pur_comma">, </span><a href="/tags/epc">EPC</a><span class="pur_comma">, </span><a href="/tags/engineering-procurement-and-construction">Engineering procurement and construction</a><span class="pur_comma">, </span><a href="/tags/architect">architect</a><span class="pur_comma">, </span><a href="/tags/engineering">engineering</a><span class="pur_comma">, </span><a href="/tags/benchmarking">Benchmarking</a> </div>
</div>
<div class="field field-name-field-intro-text field-type-text-long field-label-above"><div class="field-label">Intro Text:&nbsp;</div><div class="field-items"><div class="field-item even">Project management plays an important role in ensuring favorable outcomes at utility projects. An annual benchmarking survey shows how utility PMs are adapting to their changing workloads and project demands over time. </div></div></div><div class="field field-name-field-publishing-date field-type-datetime field-label-above"><div class="field-label">Publishing Date:&nbsp;</div><div class="field-items"><div class="field-item even"><span class="date-display-single">Thursday, December 20, 2012 (All day)</span></div></div></div>Thu, 20 Dec 2012 17:25:19 +0000 mburr 16382 at http://www.fortnightly.com
Labor Costs and the Rate Case
http://www.fortnightly.com/fortnightly/2012/03/labor-costs-and-rate-case
<div class="field field-name-field-import-deck field-type-text-long field-label-inline clearfix"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Incentives, staffing, and benchmarking in a tight economy.</p>
</div></div></div><div class="field field-name-field-import-byline field-type-text-long field-label-inline clearfix"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even"><p>David W. Sosa, Ph.D., and Virginia Perry-Failor</p>
</div></div></div><div class="field field-name-field-import-category field-type-text field-label-inline clearfix"><div class="field-label">Category:&nbsp;</div><div class="field-items"><div class="field-item even">Talent</div></div></div><div class="field field-name-field-import-bio field-type-text-long field-label-inline clearfix"><div class="field-label">Author Bio:&nbsp;</div><div class="field-items"><div class="field-item even"><p><b>David W. Sosa</b>, Ph.D. is a vice president and <b>Virginia Perry-Failor</b> is a manager, both in the San Francisco office of Analysis Group, Inc.</p>
</div></div></div><div class="field field-name-field-import-volume field-type-node-reference field-label-inline clearfix"><div class="field-label">Magazine Volume:&nbsp;</div><div class="field-items"><div class="field-item even">Fortnightly Magazine - March 2012</div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>In several recent utility rate cases, regulators, under pressure to contain rate increases, have disallowed a portion of a utility’s claimed employee compensation expenses, citing local economic conditions and the need for austerity. Ratepayers should of course expect that the costs that lie behind the rate remain “just and reasonable.” However, if a utility is unable to recover reasonably incurred costs through its rates, its overall costs might rise, jeopardizing its financial health. Future ratepayers might end up paying more for service. Quality of service ultimately might suffer. Moreover, management’s ability to keep the ship running might be compromised if companies are denied flexibility to adopt viable alternative compensation packages, or if certain components of employee compensation are inappropriately disallowed.</p>
<p>In the typical rate case, the utility offers evidence that its employee compensation costs are reasonable. If the evidence proves insufficient, regulators may choose to disallow certain requested costs. The regulator must review the evidence and consider how a cost allowance will affect rates. However, if regulators focus on specific components of employee compensation—without adequately considering the reasonableness of total costs—then the rate order might do financial harm to the utility, and, in the long term, to ratepayers.</p>
<p>Utilities can choose different ways to present labor costs to regulators to best support their claims of reasonableness—even as regulators, too, can and should consider a range of factors in reviewing compensation and utility revenue requirements. Here, we look at both sides of the rate-making process, and discuss some key trends in utility compensation practices.</p>
<h4>Trends in Cost Management</h4>
<p>A utility’s employee compensation typically comprises cash compensation—salary and incentives—and non-cash compensation, including pension and retirement plans, medical and dental care, and other benefits. The Bureau of Labor Statistics (BLS) reported that through September 2011 approximately 61 percent of employee compensation at utilities came in the form of cash wages and salaries, while the remaining 39 percent represented benefit costs.<sup>1</sup> Across all industries, the costs of non-cash compensation have climbed swiftly, prompting utilities and other employers to deploy a range of strategies for managing these expenses. Examples include retirement plan restructuring; increased use of incentive-based compensation; and reductions in headcount.</p>
<p>First, utilities have switched employees from defined-benefit pension plans to defined-contribution pension plans, thereby shifting pension funding responsibility to employees. From 1980 through 2008, the proportion of private wage and salary workers participating in defined benefit pension plans fell from 38 percent to 20 percent.<sup>2</sup> Over the same period, the percentage of workers covered by a defined contribution pension plan—that is, an investment account established and often subsidized by employers but owned and controlled by employees—rose from 8 percent to 31 percent.</p>
<p>Second, utilities have extended incentive compensation to more employees and increased the amount of total compensation at risk by implementing plans that link a portion of an employee’s compensation to his or her achievement of individual and companywide goals. A recent Towers Watson survey of utility compensation, which was cited in a decision by the Indiana Utility Regulatory Commission, reported that “93 percent of the individuals in exempt-level positions were eligible for annual incentives.”<sup>3,4</sup></p>
<p>Third, through a variety of mechanisms, including hiring freezes and severance programs, many utilities have reduced employee headcount in recent years. The BLS reports that total employment in utilities fell from around 600,000 in 2001 to 555,000 as of November 2011.<sup>5</sup> However, as with all workforce initiatives, utilities must be careful that any changes made don’t compromise safety, reliability, and quality of service.</p>
<p>At the same time that utilities seek to rework their employee compensation plans to better control costs, they’re also facing a wave of retirements and, as a result, a shortage of qualified workers in many areas. Between 2009 and 2015, approximately 46 percent of skilled technicians and 51 percent of engineers in the utility sector will become eligible for retirement.<sup>6</sup> Some employees have deferred retirement in light of economic conditions; still, the replacement of these skilled workers is a growing problem. Moreover, industry-wide goals to “replace aging infrastructure and achieve modernization objectives”<sup>7</sup> mean that utilities will need to add staff over and above the replacements for those retiring—including, perhaps, different resources at a time when younger qualified workers and trainable employees are in short supply.</p>
<p>In fact, utilities across the country are participating in new initiatives for identifying and training qualified candidates; the Center for Energy Workforce Development’s members include more than 80 energy-related enterprises, including utilities, but it takes time to adequately prepare employees for certain industry roles. For example, it can take 10 to 12 years to fully train a lead lineman.<sup>8</sup> Meanwhile, many U.S. universities have scaled back their electrical engineering programs, and many foreign graduate students are finding attractive opportunities in their home countries, causing the pipeline of engineering talent to run low.<sup>9</sup> These labor market conditions limit the talent pool available to utilities and put upward pressure on the levels of compensation needed to attract and retain qualified employees.</p>
<h4>Tools for Regulator Review</h4>
<p>In determining rate changes, regulators must take into account the full range of economic challenges and the remedies that utilities are employing to combat them. More specifically, regulators should focus on total compensation, as well as the trend of expenses in the recent past.</p>
<p>In particular, however, regulators must stay mindful of factors that tend to make a simple apples-to-apples comparison perhaps less indicative than it might otherwise appear, such as: 1) offsetting tradeoffs between cash- and non-cash compensation schemes; 2) the financial value of goals achieved or missed under incentive compensation plans; 3) employee productivity as affected by conservation or efficiency programs; and 4) how industry benchmarking can be affected by the diversity of economic conditions among local utility service territories.</p>
<p>When regulators evaluate individual components of employee compensation, they must be careful to account for the fact that companies are changing the mix of cash and non-cash compensation. Increases in one component of compensation might offset decreases in another.</p>
<p>For example, a utility might increase employee cash salaries to offset the non-cash effect of shifting employees from a defined-benefit pension plan to a defined-contribution pension plan. The appropriate question for regulators to address is: How will changing the levels of total employee compensation affect rates? Regulators’ examination of one particular component without adequate emphasis on total costs might be misleading.</p>
<p>Regulators also must take a similarly holistic approach to evaluating incentive compensation. The objective of these programs should be to encourage individual and collective employee behavior that benefits ratepayers as well as the company. Incentive compensation programs will obviously vary across utilities, based on management objectives and company-specific circumstances. To be most effective, however, and to support the recovery of program costs, these programs should have clearly defined goals and objective measurement criteria. Program goals might include improved reliability, customer service, expense management, and financial performance. For their part, regulators need to be transparent about the extent to which they consider financial criteria—which benefit ratepayers as well as shareholders—acceptable program metrics for compensation expense to be recoverable.</p>
<p>Some utilities have seen increases in employee productivity over the past several years, and that’s a significant benefit for ratepayers. As employees work longer and harder, they reduce output-adjusted compensation costs, all else being equal. However, evaluations of productivity can be complicated when utilities are attempting to reduce output—for instance, developing energy efficiency and conservation-related resources, which is increasingly becoming the industry norm. Productivity is traditionally measured according to level of output—electricity sales, for instance—per unit of labor input; more output per unit of labor input would denote an increase in productivity. However, gains in energy efficiency might cause a decline in electricity sales per unit of labor input—and productivity, by this measure, will appear to be declining as well, even though employees are performing effectively. For this reason, standard labor productivity metrics might not capture the full scope of employee effort and achievement, thereby understating labor productivity.</p>
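<p>A minimal numerical sketch of the distortion described above, using hypothetical figures: holding the workforce constant while an efficiency program trims sales makes measured productivity fall even though employee effort is unchanged.</p>

```python
# Traditional labor productivity: output (MWh sold) per full-time employee.
# All figures below are hypothetical, for illustration only.
def labor_productivity(mwh_sold: float, ftes: float) -> float:
    return mwh_sold / ftes

before = labor_productivity(mwh_sold=5_000_000, ftes=1_000)  # 5,000 MWh/FTE
# Same workforce, but an energy-efficiency program cuts sales by 4 percent:
after = labor_productivity(mwh_sold=4_800_000, ftes=1_000)   # 4,800 MWh/FTE

change_pct = 100 * (after - before) / before
print(f"Measured productivity change: {change_pct:.1f}%")
# The metric reports a 4% decline even though employees performed effectively.
```

<p>Any standard output-per-input metric will behave this way when reducing output is itself a program goal, which is why it can understate labor productivity at utilities with active efficiency programs.</p>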
<p>Benchmarking can help regulators understand employee compensation cost levels and trends, and determine whether requested cost recovery is reasonable. Benchmarking also can assist regulators in evaluating more detailed questions, such as: How does the target utility compare to peers in terms of labor productivity, or in terms of cash compensation?</p>
<p>In particular, peer group benchmarking compares the business performance and practices of a company to those of comparable companies. This technique, which companies, market analysts, and regulators often rely on to evaluate operational and financial performance, can be used to assess indicators of overall company performance as well as the performance of specific activities relative to peers.</p>
<p>However, another benchmark is being introduced in rate cases with greater frequency: the comparison between measures of utility compensation and measures of local economic conditions, including wages and employment. Although regulators might find it useful to look at the local wages of workers who have skills similar to utility employees, general wage and employment rates aren’t appropriate benchmarks for evaluating employee compensation costs, for several reasons. As described above, the utility labor force is highly specialized and characterized by a scarcity of qualified personnel. Utilities compete with one another, regionally and even nationally, for employees to fill many positions. In the ratemaking context, evidence regarding total compensation costs—including over time and relative to other comparable companies—is critical. Regulators might also be interested in evidence regarding the utility’s salary structure and individual components of compensation. However, it’s critical to evaluate these measures relative to the appropriate benchmarks, which must be derived from comparable companies and not merely on the basis of geographic proximity.</p>
<p>Identifying an appropriate benchmark group—or panel of comparable companies—will allow regulators to focus on the regional or national labor market in which a particular utility competes. It also will provide a reliable context for evaluating both the level and format of utility compensation expenses. Companies should be aware that regulators might be tempted to interpret a benchmark as a bright line, so it might be important to discuss the statistical properties of the benchmark sample in any interpretation of results.</p>
<p>Two principal steps are involved in peer-group benchmarking.</p>
<p>• <i>Normalization:</i> The evaluator should determine whether the cost or performance measures at issue can be directly compared across companies, or whether a common means of measurement must be established for presentation to regulators. In the case of employee compensation, these costs will vary based on a number of factors including customers served, geographic region, and degree of vertical integration. Therefore, aggregate measures of employee compensation expense must be normalized—that is, transformed into a common unit of measurement—before a meaningful comparison can be made between the subject company’s performance and the performance of companies in the benchmark group. For employee compensation costs, measures of output, including sales and customers, are the commonly used normalization measures. Another normalization factor is number of employees.</p>
<p>• <i>Panel construction:</i> Once a common basis of comparison has been established, the evaluator needs to construct the panel of companies—a list of “comparables,” in real-estate parlance—against which financial or service-level performance can be compared. The selection criteria will depend on the objective of the exercise. For example, regulators might want to conduct a broad evaluation of a utility’s performance relative to the entire electric industry. That would require a benchmark group that includes as large a group of utilities as possible, screening for company characteristics that are relevant to the particular compensation measure at issue. As a general matter, the selection criteria for benchmark companies would be based, in part, on company characteristics that affect expense levels, such as degree of vertical integration and lines of business.</p>
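<p>The two steps above can be sketched in a few lines: normalize each company’s compensation expense by a common output measure (customers served, here), then place the subject utility within the screened panel. All company names and figures below are hypothetical.</p>

```python
# Hypothetical peer-group benchmark: total compensation expense per customer.
peers = {
    "Utility A": {"comp_expense": 410e6, "customers": 1.2e6},
    "Utility B": {"comp_expense": 690e6, "customers": 2.1e6},
    "Utility C": {"comp_expense": 300e6, "customers": 0.9e6},
}
subject = {"comp_expense": 520e6, "customers": 1.5e6}

def per_customer(company: dict) -> float:
    """Normalization step: dollars of compensation expense per customer."""
    return company["comp_expense"] / company["customers"]

panel = sorted(per_customer(c) for c in peers.values())  # panel construction
subject_ratio = per_customer(subject)

print(f"Subject utility: ${subject_ratio:,.0f} per customer")
print(f"Panel range:     ${panel[0]:,.0f} to ${panel[-1]:,.0f} per customer")
```

<p>Note that the spread of the panel matters as much as the subject’s position within it; as discussed above, a benchmark point estimate shouldn’t be read as a bright line.</p>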
<p>Since any given geographic area will likely have only one regulated electric utility and one regulated gas utility, companies must recruit skilled workers regionally and nationally. Factoring in the previously mentioned labor challenges utilities face, regulators will need to benchmark salary ranges by job description; this lens should reflect the regional and national labor markets in which utilities compete for talent. The commonly used sources for such data include industry-specific and broad-based compensation surveys. To the extent that utilities have outsourced positions that require lower skill levels and draw from local markets—for example, non-critical security services—those positions wouldn’t factor into employee compensation costs.</p>
<p>Some U.S. regulatory commissions have explicitly acknowledged that utilities’ employee compensation strategies are developed to attract, retain, and motivate employees, and that the proper concern of regulators is whether a utility can demonstrate that the overall level of employee compensation expenses is reasonable. These regulators have established criteria, including market labor rates, for evaluating reasonable compensation levels, but they recognize that the allocation of the package over its various components, including incentive compensation, is a matter best left to management. The Massachusetts Department of Public Utilities (MDPU) offers an example of this approach.</p>
<p>The MDPU sets forth evaluation criteria that explicitly recognize “that the different components of compensation are to some extent substitutes for each other and that different combinations of these components may be used to attract and retain employees.” Utilities are required to demonstrate that their costs conform to those criteria and that their total unit-labor cost “is minimized in a manner supported by their overall business strategies.” Utilities are also required to compare their costs against a market-based standard.<sup>10 </sup></p>
<p>Regulators in Indiana and Nevada also have considered overall compensation against established evaluation criteria. In Indiana, regulators evaluated Vectren South’s compensation package, including incentive compensation up to a board-approved level, and found that it was at the low end of the competitive range in the market, relative to comparable companies. As a result, Indiana regulators approved the utility’s compensation request.<sup>11</sup> Similarly, the Public Utilities Commission of Nevada has evaluated a combined compensation package of payroll and benefit costs. The commission found that Sierra Pacific had actually reduced its payroll and benefit costs by about $16 million, “reflecting the reduction in growth that has occurred during the recession,”<sup>12</sup> and approved Sierra Pacific’s compensation request.</p>
<h4>What Utilities Should Do</h4>
<p>Given the complex compensation issues involved, and the competing claims of stakeholders in rate proceedings, utilities need to anticipate the issues that intervenors and regulators are likely to focus on and develop a record that establishes the reasonableness of employee compensation expenses. Utilities’ compensation presentations should offer regulators clear and concise information regarding levels of total employee compensation over time and compared with other utilities. As much as possible, these presentations should conform to prior commission decisions and should reflect concerns about current economic conditions. To the extent changing circumstances justify departures from prior regulatory precedent, these departures should be identified, and the justification for the change should be clearly articulated. Among other things, the utility should be able to identify changes in employee compensation and explain to regulators why these changes have occurred and why the observed expenses are reasonable.</p>
<p>Also, to the extent that a utility has been able to reduce employee compensation costs through discrete initiatives, such as severance programs or initiatives that improve labor productivity, regulators might be tempted to appropriate some or all of the expense savings prior to the rate effective period, on behalf of ratepayers. However, this treatment is short-sighted because regulatory lag—the time between when a utility initiative begins generating expense savings and when that savings is passed on to consumers via rates—creates incentives for utilities to implement cost-savings initiatives with uncertain outcomes. If an initiative is successful, the utility will have the opportunity to capture some of the expense savings before they’re passed on to ratepayers, compensating the company for some of the assumed risk.</p>
<p>Utilities should remind regulators that regulatory lag benefits ratepayers and encourage commissions to take a forward view rather than attempting to capture expense savings retroactively. Additionally, employee compensation levels might reflect rising productivity—for example, staff reductions might have contributed to increased productivity, which benefits ratepayers. Individual compensation might have risen to reflect improved performance, even though aggregate compensation has fallen. Utilities can assist their commissions to place individual compensation levels in context by offering statistics that describe productivity through time.</p>
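The incentive created by regulatory lag can be seen in a stylized calculation. The sketch below uses entirely hypothetical figures (savings, cost, and lag period are illustrative assumptions, not drawn from any case in this article):

```python
# Stylized calculation (hypothetical figures): regulatory lag lets a utility
# capture several years of expense savings before rates are reset, which is
# what compensates it for taking on an initiative with uncertain outcomes.
annual_savings = 4_000_000   # $/yr expense reduction if the initiative succeeds
upfront_cost = 6_000_000     # one-time cost (e.g., a severance program)
lag_years = 3                # years until the next rate case passes savings on

# With lag: the utility keeps the savings until rates reset.
utility_capture = annual_savings * lag_years - upfront_cost
print(f"net gain to utility during the lag: ${utility_capture:,}")

# Without lag (retroactive clawback): the utility bears the cost, keeps no
# savings, and so has no incentive to attempt the initiative at all.
no_lag_outcome = -upfront_cost
print(f"net outcome under retroactive capture: ${no_lag_outcome:,}")
```

Under these assumptions the utility nets $6 million during the lag, whereas retroactive capture leaves it $6 million worse off, which is the short-sightedness the text describes.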
<h4>Endnotes:</h4>
<p>1. <a href="http://www.bls.gov/iag/tgs/iag22.htm" target="_blank">http://www.bls.gov/iag/tgs/iag22.htm</a></p>
<p>2. Butrica, Barbara, Howard Iams, Karen Smith and Eric Tober, “The Disappearing Defined Benefit Pension and Its Potential Impact on the Retirement Incomes of Baby Boomers,” <i>Social Security Bulletin</i>, Vol. 69, No. 3, 2009.</p>
<p>3. State of Indiana, Indiana Utility Regulatory Commission, Final Order, Cause No. 43839, p. 48.</p>
<p>4. <a href="http://data.bls.gov/timeseries/CMU2034400000000D?data_tool=XGtable" target="_blank">http://data.bls.gov/timeseries/CMU2034400000000D?data_tool=XGtable</a></p>
<p>5. <a href="http://data.bls.gov/timeseries/CES4422000001?data_tool=XGtable" target="_blank">http://data.bls.gov/timeseries/CES4422000001?data_tool=XGtable</a></p>
<p>6. <i>The Center for Energy Workforce Development 2009 Survey</i>, June 2009.</p>
<p>7. Wanda Reder, president, IEEE Power &amp; Energy Society and vice president of power systems services at S&amp;C Electric, Feb. 18, 2009.</p>
<p>8. DOE’s <i>Workforce Labor Trends in the Electric Utility Industry Report to the United States Congress</i>, pursuant to Section 1101 of the <i>Energy Policy Act</i> of 2005.</p>
<p>9. <em>Ibid</em>.</p>
<p>10. Order of The Commonwealth of Massachusetts, Department of Public Utilities in Petition of Massachusetts Electric Company and Nantucket Electric Company, pursuant to G. L. c. 164, § 94, and 220 C.M.R. § 5.00 et seq., for a General Increase in Electric Rates and Approval of a Revenue Decoupling Mechanism, DPU 09-39, Nov. 30, 2009.</p>
<p>11. State of Indiana, Indiana Utility Regulatory Commission, Final Order, Cause No. 43839, Approved April 27, 2011, p. 50. (Published at 289 PUR4th 9.)</p>
<p>12. Public Utilities Commission of Nevada, Order, Docket 10-06001, Dec. 23, 2010.</p>
</div></div></div>
Thu, 01 Mar 2012 05:00:00 +0000
Ontario's Failed Experiment (Part 2)
http://www.fortnightly.com/fortnightly/2009/08/ontarios-failed-experiment-part-2
<div class="field field-name-field-import-deck field-type-text-long field-label-inline clearfix"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Service quality suffers under PBR framework.</p>
</div></div></div><div class="field field-name-field-import-byline field-type-text-long field-label-inline clearfix"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Francis J. Cronin and Stephen Motluk</p>
</div></div></div><div class="field field-name-field-import-bio field-type-text-long field-label-inline clearfix"><div class="field-label">Author Bio:&nbsp;</div><div class="field-items"><div class="field-item even"><p><strong>Frank Cronin</strong> (<a href="mailto:fjcroninecon@verizon.net">fjcroninecon@verizon.net</a>) is an economic consultant residing in Acton, Mass., and <strong>Stephen Motluk</strong> (<a href="mailto:smotluk@uniserve.com">smotluk@uniserve.com</a>) is an economic consultant in Toronto.</p>
</div></div></div><div class="field field-name-field-import-volume field-type-node-reference field-label-inline clearfix"><div class="field-label">Magazine Volume:&nbsp;</div><div class="field-items"><div class="field-item even">Fortnightly Magazine - August 2009</div></div></div><div class="field field-name-field-import-image field-type-image field-label-above"><div class="field-label">Image:&nbsp;</div><div class="field-items"><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/article_images/0908/images/0908-fea-ontario-fig-1.jpg" width="1028" height="684" alt="" /></div><div class="field-item odd"><img src="http://www.fortnightly.com/sites/default/files/article_images/0908/images/0908-fea-ontario-fig-2.jpg" width="1028" height="641" alt="" /></div><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/article_images/0908/images/0908-fea-ontario-fig-3.jpg" width="1368" height="838" alt="" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><em>[Editor’s note: Fortnightly published <a href="http://www.fortnightly.com/fortnightly/2009/07/ontarios-failed-experiment-part-1">Part I of this article</a> in the July 2009 issue. In that installment, the authors described how the Energy Competition Act in 1998 restructured Ontario’s utilities and charged the Ontario Energy Board with implementing performance-based ratemaking (PBR) to maintain service quality. The authors assert that, despite the board’s intentions, service quality has declined in the province.]</em></p>
<p>The Ontario Energy Board’s (OEB) experience with service quality regulation (SQR) of electric distributors has its origins in the OEB’s <i>2000 Electricity Distribution Rate Handbook</i>. In terms of SQR, this document largely was based on the Implementation Task Force Report’s<sup> 1</sup> recommendations.</p>
<p>Survey work by the task force found that more than 60 large and medium utilities, covering over 80 percent of distribution customers, had been collecting historical reliability data. However, a number of smaller utilities, some with only hundreds of customers, didn’t have historical data. In the end, for a variety of reasons, the task force recommended that only minimum customer-service standards be applied to the local distribution companies (LDCs) during the first generation. The levels of the minimum standards were determined through a survey of the LDCs. For reliability, the “standards” were weaker still: LDCs with historical data were simply to keep their performance within the range of whatever it had been during the preceding three years. The task force noted: “The OEB will review the PBR submissions to ensure compliance with the established benchmarks.” LDCs without reliability data were to begin collecting it; the task force recommended that those utilities’ benchmarks be set using peer-group averages.</p>
<p>However, despite the reluctant acceptance of the lowest common denominator for SQR by the implementation task force, the general expectation was that the OEB would move quickly, possibly even early in the first generation, but no later than the beginning of the second generation following the initial three-year PBR term, to set reliability-performance targets based on a more reasoned and judicious rationale than “just do whatever it was that you were doing.”</p>
<p>Indeed, the principles of just and reasonable rates would require that service quality and reliability standards be explicitly formulated as part of the sale of access by distributors to customers. And, the OEB itself stated its intent to move expeditiously: “Upon review of the first year's results, the OEB will determine whether there is sufficient data to set thresholds to determine service degradation for years 2 and 3.”<sup>2 </sup> Unfortunately, it’s now 2009 and the same nominal standards that applied in 2000 still apply today. As interpreted by some LDCs, however, the standards actually are lower today than in 2000.</p>
<p>In its initial PBR rate setting guidelines, the OEB spelled out the reasoning behind the standards:</p>
<p>…the Board’s approach to encourage the maintenance of service quality during the first generation PBR plan is to apply minimum standard guidelines for customer service indicators, and to apply a utility’s historic performance as its specific service reliability standards. Where a utility has not monitored service reliability in the past, it is required to initiate monitoring and reporting of the indices. (7-2)</p>
<p>Thus for the system average interruption duration index (SAIDI) and system average interruption frequency index (SAIFI), “All planned and unplanned interruptions of one minute or more should be used to calculate this index. Utilities that have at least 3 years of data on this index should, at minimum, remain within the range of their historic performance.” (7-6, 7-7)</p>
<p>With respect to service degradation and remedial action, the OEB noted:</p>
<p>In the absence of historical service quality data, it is not possible to identify service degradation during the first year of the PBR plan. However, upon review of the first year's results, the Board will determine whether there is sufficient data to set thresholds to determine service degradation for years 2 and 3. When established, the Board will issue these thresholds and any utility whose performance falls below these thresholds will be required to file a remedial action plan. (7-10)</p>
<p>It is anticipated that by the second generation PBR plan, there will be sufficient data collected to set industry service-quality performance standards. Once these standards have been established, PBR incentive mechanisms with economic consequences will be introduced around the service quality indicators. (7-10)</p>
<p>However, it appears this work hasn’t been completed.</p>
<h4>OEB’s 2003 Review</h4>
<p>The OEB noted its responsibility with respect to service and reliability, as well as the necessity to evaluate prices hand-in-hand with the actual service and reliability delivered to customers. In August 2003, the OEB began a review of service-quality regulation. The OEB acknowledged: “Section 1 of the Ontario Energy Board Act, 1998 states … The Board, in carrying out its responsibilities under this or any other Act in relation to electricity, shall be guided by the following objectives: ... 3. To protect the interests of consumers with respect to prices and the reliability and quality of electricity service.”</p>
<p>Furthermore, the OEB noted that the issues of distribution prices and service quality are integrally linked together. “… [A] determination of just and reasonable rates must take into account the adequacy and level of service quality …”</p>
<p>The August 2003 notice reviewed the OEB’s initial PBR decision and specification of service/reliability indicators. Speaking of the standards in the 2000 Handbook, the notice said, “For most SQIs, the Board approved initial minimum standards. The Board determined that other aspects of service quality regulation, including remedial action and/or financial consequences of service degradation, should be considered, but that a proper assessment… required experience with the measurement and reporting of the SQIs.”</p>
<p>The notice discussed recent developments regarding second generation PBR:</p>
<p>…the Board advised stakeholders of the planned phased development of a second-generation PBR (“PBR II”) plan. A review of currently reported service quality indicators and associated standards, as well as consideration of other indicators and elements of service quality regulation, were identified as one of the components of PBR II plan development….As electricity distributors have been reporting their service performance for three years now, the Board considered it timely to review the SQIs and to further develop service quality regulation applicable to electricity distributors…</p>
<p>The notice listed the issues for review: review of the existing service quality indicators; consideration of additional or replacement indicators; the frequency and the periodicity of reported performance; defining degraded service and regulatory responses to service degradation (remedial action reports, possible financial consequences); urban and rural, large and small, and other distinctions in reporting or standards; and, the form and purpose of service quality audits in a comprehensive SQR plan (remedial plans and financial rewards or penalties).</p>
<p>Subsequently, Ontario Energy Board staff released a paper, “Service Quality Regulation for Ontario Electricity Distribution Companies” (2003 staff report). Importantly, an associated staff discussion paper reaffirmed the link between quality and rates: just and reasonable rates must consider the quality of the service provided: “Service quality regulation is integral to economic rate regulation, to setting ‘just and reasonable’ rates. From the perspective of the users or customers of the service, there must be a consideration of the ‘value’ of the product or service, where value is defined as the product or service meeting or exceeding the needs and expectations of customers relative to the price charged.”<sup>3 </sup></p>
<p>The 2003 staff report noted that under cost-of-service (CoS) regulation, firms’ incentives weren’t at odds with service quality, because firms earned a return on investments and prudent and necessary costs were passed along to customers. The staff discussion paper noted that under CoS the review process was usually annual, and embedded a review of service quality and reliability.</p>
<p>“ …Such reviews occurred periodically—often annually. Service quality could be reviewed as part of the revenue requirement and rate application, with consideration of how existing operational expenses and planned capital investments would contribute to the maintenance or improvement of service quality. Poor service quality could also be a factor considered by the regulator in reducing the allowed revenue requirement (without exacerbating the situation by the utility cutting costs and services in response to reduced revenues)…Also, the ‘rate base’ concept of CoS regulation, some argue, provides an incentive for the firm to overinvest and provide ‘gold-plated’ service, and so service degradation is thus seen as less of a risk under CoS regulation.”</p>
<p>Commenting on PBR, the 2003 staff report noted that differing incentives might result in cost containment degrading service. Under PBR, the OEB staff noted a greater need for ongoing monitoring of service performance:</p>
<p>“… PBR differs from CoS in that it provides incentives for a firm to improve its productivity … Another advantage to PBR is … less frequent detailed reviews … With less frequent detailed reviews, there is an increased need for ongoing monitoring of service performance, to ensure that any problems that do occur are addressed … Also, the incentives inherent in PBR … could result in … degraded service. Service quality monitoring serves as a counterbalance to ensure that adequate service is maintained … In some PBR plans … the service performance of the firm may be a parameter affecting rates … In other plans, aggregate penalties, or the existence of service guarantees and rebates, link the firm's financial performance to its service performance…”</p>
<p>However, after issuing the report, the OEB took no further action on service quality until 2008.</p>
<h4>Redefining ‘Mandatory’ Standards</h4>
<p>The OEB described the January 2008 staff paper<sup>4</sup> as an initial step in a consultation process designed to assist the OEB in determining an appropriate set of electricity distributor service quality requirements (ESQR). However, prior to any consultation or regulatory process in this proceeding, the staff discussion paper stated on page 3: “The Board has concluded that it will implement a ‘standards approach’ to service quality regulation. Under the ‘standards approach,’ compliance with the performance standard is mandatory and can be enforced through the Board’s compliance process.”</p>
<p>Inexplicably, however, the paper doesn’t propose standards for service reliability. While “Board staff acknowledges that system reliability is critical for customers” (<i>p. 30</i>), “Board staff proposes that these Original SQIs [the reliability indicators] not become mandatory ESQRs at the present time but be retained in a modified form for monitoring and reporting purposes” (<i>p. 23</i>).</p>
<p>The report ignores the <i>2000 Rate Handbook</i> and OEB Decision that established mandatory reliability standards. In 2000, the OEB stated the reasoning behind the standards as follows: “… the Board’s approach to encourage the maintenance of service quality during the first generation PBR plan is to apply minimum standard guidelines for customer service indicators, and to apply a utility’s historic performance as its specific service reliability standards. Where a utility has not monitored service reliability in the past, it is required to initiate monitoring and reporting of the indices.” (<i>7-2</i>)</p>
<p>Thus for SAIDI and SAIFI, “All planned and unplanned interruptions of one minute or more should be used to calculate this index. Utilities that have at least three years of data on this index should, at minimum, remain within the range of their historic performance.” (<i>7-6, 7-7</i>). There’s nothing unclear about this order: “Board’s …PBR plan is to … apply a utility’s historic performance as its specific service reliability standards.” This was confirmed by the OEB’s August 2003 notice which noted that in 2000 “the Board approved initial minimum standards.”</p>
<p>The 2008 report was, according to OEB staff, based on a review of other jurisdictions, and found a greater incidence of monitoring than of service-quality incentives and standards. No data or analysis was offered to support this statement, however, and it’s clear that many jurisdictions worldwide that have adopted incentive regulation also have adopted SQR.</p>
<p>In fact, the report <i>Electricity Distribution Quality of Service, October 2007</i> states: “Ofgem considers quality of service to be one of its key priorities in network regulation …2006/07 was the fifth year that the DNOs [Distribution Network Operators, the UK nomenclature for LDCs] faced financial incentives on their quality of service performance …”<sup> 5</sup></p>
<p>In addition to the U.K., incentive-based SQR exists in many other European jurisdictions, and in jurisdictions such as Australia. For example, the Council of European Energy Regulators (CEER) noted in its third <i>Benchmarking Report on the Quality of Electricity Supply</i> (2005):</p>
<p>Price-cap regulation without any quality standards or incentive/penalty regimes for quality may provide unintended and misleading incentives to reduce quality levels. Incentive regulation for quality can ensure that cost cuts required by price-cap regimes are not achieved at the expense of quality….The increased attention to quality incentive regulation is rooted not only in the risk of deteriorating quality deriving from the pressure to reduce costs under price-cap, but also in the increasing demand for higher quality services on the part of consumers…. a growing number of European regulators have adopted some form of quality incentive regulation over the last few years.<sup> 6 </sup></p>
<p>The January 2008 letter from the OEB also states, “Until ... the sector gains experience with any new or modified service quality indicators or requirements, it is in the Board’s view premature to move to an incentive approach.”</p>
<p>But the OEB is now in its 10th year of collecting reliability data, which is more than sufficient time to gain experience. Indicators such as SAIDI and SAIFI are standards used for monitoring and regulating service quality around the world. These indicators have been used by the Ontario distributors’ association for at least 15 years, and by individual LDCs for much longer.</p>
<p>The staff discussion paper offers only a cursory analysis of reliability for 2004 through 2006. This analysis calculates sector, rural, and urban averages, as well as OEB peer-groups’ averages. It’s unclear whether these averages are simple arithmetic averages across reporting companies, or weighted averages calculated from actual customer-hours of interruption and total number of customer interruptions divided by the number of customers served.<sup>7</sup></p>
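The distinction matters: the two averaging methods can give materially different sector indices. A minimal sketch, using entirely hypothetical utility data (the customer counts and interruption figures are illustrative, not Ontario's):

```python
# Hypothetical data: each LDC reports customers served and total
# customer-hours of interruption for the year.
ldcs = [
    (500_000, 600_000),  # large urban utility: 1.2 h per customer
    (20_000,   80_000),  # small rural utility: 4.0 h per customer
    (5_000,    30_000),  # very small utility:  6.0 h per customer
]

# Per-utility SAIDI (hours) = customer-hours interrupted / customers served.
saidi = [hours / cust for cust, hours in ldcs]

# Simple arithmetic average across reporting companies.
simple_saidi = sum(saidi) / len(saidi)

# Customer-weighted average: aggregate customer-hours over aggregate customers.
total_cust = sum(c for c, _ in ldcs)
weighted_saidi = sum(h for _, h in ldcs) / total_cust

print(f"simple SAIDI:   {simple_saidi:.2f} h")    # small utilities dominate
print(f"weighted SAIDI: {weighted_saidi:.2f} h")  # typical customer's experience
```

In this sketch the simple average is roughly 3.7 hours while the weighted average is roughly 1.4 hours: the simple average over-weights small utilities with poor reliability, while the weighted average reflects the average customer's experience. Which one a benchmark uses changes who appears to fail it.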
<p>The discussion paper does examine the reliability performance of LDCs relative to various proposed benchmarks such as sector average or peer group average performance over the last three years. It finds that anywhere from 25 to 50 percent of Ontario distributors fail these benchmarks; furthermore, LDCs that fail typically have a reliability performance that is 50 to 100 percent worse than the selected average. What is clear from the data is that a very wide variation in reliability performance exists among LDCs, even within the OEB’s peer groups. Yet, this finding fails to elicit any apparent concern on the OEB’s part for the customers experiencing such degraded reliability. No explanation is offered for the fact that many customers of many LDCs are experiencing significantly lower reliability than customers of similar LDCs. What about performance over the whole period since the inception of incentive regulation (IR)?</p>
<p>The discussion paper sheds no light on whether LDCs are in compliance with the reliability guidelines established by the OEB in 2000. In fact, since its introduction of IR in 2000, the OEB has failed to confirm that LDCs operating under this regime are compliant with the mandated service-quality standards; this despite the fact the OEB repeatedly has stated that a reliable supply of power is necessary for just and reasonable rates. Indeed, the cursory analysis reported by staff would be unable to address current or past compliance.</p>
<p>The staff analysis is based on reliability data for 2004 through 2006 only. The paper indicates, “The following information is based on the reliability data filed under the RRR for the three years 2004 - 2006. Because the data reported in the earlier years may not have been reported consistently or calculated properly, staff has removed any statistics that appeared to be unreliable. This approach may result in a slightly less than completely precise and comprehensive analysis, but staff believes that the analysis based on this more selective data represents a more accurate picture of general trends.”<sup> 8</sup></p>
<p>Yet, this is data collected by these same utilities for at least 15 years and reported to the Implementation Task Force in 1999 and to the OEB in its required filings since 2000.<sup>9</sup> However, in choosing to reject use of its own data prior to 2004, the OEB not only misses a significant degradation in 2004 through 2006 compared with 2000 through 2003, it misses an earlier and equally significant degradation in 2000 through 2003 compared with the pre-IR 1993 through 1997 period. Only by examining performance relative to the pre-IR period could the OEB determine compliance. The OEB sees no degradation in large part because it has chosen to eliminate the periods of higher reliability performance from its comparison.</p>
<p>The OEB doesn’t report what tests had been performed to determine that the data reported in the earlier years hadn’t been reported consistently or calculated properly. It’s unclear what methodology was used to remove statistics that appeared to be unreliable. The earlier data comes from the same population as the later data and therefore can be jointly used to assess the 2000 to 2007 trend, as well as to assess performance relative to the pre-IR period used in 2000 to set standards.</p>
<h4>Reliability of Ontario LDCs</h4>
<p>What has been the performance of the electricity distributors in Ontario relative to the minimum standards established in 2000? This question is, unfortunately, not addressed in the discussion paper, nor in any public OEB analysis. Based on the first-generation standards, each LDC must keep its reliability performance within the range of the three-year period preceding the PBR. The OEB evidently has conducted no analysis on LDCs’ compliance with the standards.</p>
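Such a compliance analysis would be straightforward. A minimal sketch of the test the first-generation standard implies, with hypothetical index values (the function name and figures are illustrative, not from any OEB filing):

```python
# Minimal sketch (hypothetical figures) of the compliance test implied by the
# first-generation standard: an LDC complies if its current index falls within
# the range it recorded in the three years preceding the PBR.
def within_historic_range(pre_pbr_values, current):
    """pre_pbr_values: the three pre-PBR annual index values (e.g., SAIDI)."""
    return min(pre_pbr_values) <= current <= max(pre_pbr_values)

pre_pbr_saidi = [1.1, 1.4, 1.6]  # hypothetical LDC, hours per customer-year
print(within_historic_range(pre_pbr_saidi, 1.5))  # within the historic band
print(within_historic_range(pre_pbr_saidi, 2.3))  # degraded beyond the band
```

Running the same check per LDC, per index, over each PBR year is all the standard requires, which underscores how little analysis would have been needed to verify compliance.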
<p>What was the reliability of Ontario distributors in the mid-late 1990s prior to the start of the OEB’s PBR? Two sources of data exist to examine this question. One set of data from the industry was published from 1991 onwards. A second set of data was collected by the OEB’s Implementation Task Force in 1999.</p>
<p>Since 1991, the former Ontario Municipal Electric Association (MEA) collected and published performance metrics from its members, including reliability indices. This data included returns from almost all large and medium-sized utilities, serving 75 to 85 percent of customers in the province <i>(see Figure 1)</i>.</p>
<p>During development of its first-generation PBR, the OEB’s Implementation Task Force undertook several surveys of the utilities, including reliability performance. Responses from more than 60 utilities serving 81 percent of customers provided annual data on reliability <i>(see Figure 2)</i>.<sup>10</sup></p>
<p>Figures 1 and 2 present these results for both municipal utilities, as well as for a composite index representing both municipal and non-municipal distributors. For municipal utilities the mean of SAIDI is 1.22 and the mean of SAIFI is 1.46—quite consistent with the results of non-municipal distributors. Looking at the PBR performance standard, for SAIDI the average three-year high value is 1.59; for SAIFI, the average three-year high is 1.84. For the industry composite index, the mean SAIDI figure is 2.07 with an upper bound of 2.53. For SAIFI, the mean is 1.36 with an upper bound of 1.75.</p>
<h4>Reliability Under PBR</h4>
<p>In January 2008, the OEB released its discussion paper on reliability, its only publicly-released analysis of LDC performance since the 1999 task force report. The paper employs only data from 2004 through 2006 to examine LDC performance. Further, no pre-PBR data and no data for the first three years of the PBR are examined. The paper states:</p>
<p>The following information is based on the reliability data filed under the RRR for the three years 2004 - 2006. Because the data reported in the earlier years may not have been reported consistently or calculated properly, staff has removed any statistics that appeared to be unreliable. This approach may result in a slightly less than completely precise and comprehensive analysis, but staff believes that the analysis based on this more selective data represents a more accurate picture of general trends.<sup> 11</sup></p>
<p>Note the OEB’s use of the term “may not have been reported consistently or calculated properly.” There’s no discussion regarding what tests have been performed to determine that the data reported in the earlier years “may not” have been reported consistently or calculated properly. It’s unclear what methodology was used to remove statistics that appeared to be unreliable. The adjustments to the data by OEB staff need to be explored in more detail.</p>
<p>The OEB’s 2008 discussion paper doesn’t examine the reliability performance of LDCs relative to the mandate ordered in its <i>2000 PBR Decision</i>; what the paper does do is casually examine performance against several external averages calculated over the latest three years. What about performance over the full 2000 to 2007 period under IR?</p>
<p>As noted above, within its service-quality proceeding, the OEB questioned the robustness of the 2000, 2001, 2002, and 2003 data and refused to include this in its analysis. These data were filed by LDCs as part of their regulatory requirements (<i>i.e.</i>, initially under PBR data-filing requirements and then under reporting and record-keeping requirements (RRR), which incorporated the data-filing requirements initiated under the first-generation PBR). Such data had been published by the MEA for a decade and a half without any concerns being expressed by stakeholders. Individual municipal utilities had been collecting this data for 20 years or more. No evidence of any kind has been offered by the OEB to support their contentions. No information has been provided by the OEB as to why, prior to 2004, the reliability data isn’t acceptable.</p>
<p>In fact, the OEB has employed the 2002 to 2006 RRR data, of which reliability is a small part, as the foundation of its entire electric distribution regulatory framework across multiple proceedings and years: the 2005 through 2006 cohort analysis,<sup>12</sup> 2006 through 2009 cost comparison and benchmarking,<sup>13</sup> and 2007 through 2009 third-generation IR rate setting.<sup>14</sup> According to the OEB consultant’s cost benchmarking report:<sup>15 </sup></p>
<p>The econometric model that we developed was based on the largest sample of data available. This, as we have seen, is in keeping with good econometric practice since a larger sample reduces the variance of parameter estimates and thereby helps us develop models with more variables and more flexible forms. The full sample period available was 2002-2006. We included in the sample data for all companies for which requisite data of good quality were available for at least two of the four years.</p>
<p>If the 2002 and 2003 RRR data is acceptable to the OEB in these applications (including all the problems aired in those proceedings on capital costs, capitalization, outsourcing, leasing, and embedded distributors), then straightforward data on interruptions, whose definitions have been constant for decades, should be easily amenable to analysis on reliability trends and compliance. Furthermore, the OEB’s consultant has used the 2002 and 2003 reliability data rejected in the 2008 staff report in sophisticated econometric model estimation. And the OEB has used this same reliability data in extensive applications during its cost benchmarking proceeding:<sup>16</sup> “Extensive data are available today on the operations of Ontario power distributors which are potentially useful in benchmarking their performance. The OEB is the primary source of such information… At the time of our updated study, OEB operating data from 2002 to 2006 were available.”</p>
<p>Commenting on their estimation with the reliability data, the OEB’s consultant stated <em>(p.67)</em>: “We should also note that some of the results from the first stage econometric models for the reliability variables were sensible. In the research using SAIDI as the dependent variable…we found that SAIDI was generally higher (suggesting low reliability) for companies that had more rural and less undergrounded systems and used less capital.”</p>
<p>In fact, the report noted the benefit of additional data (<i>p. 66</i>): “Additional years of data for the estimation of the cost and quality models would also be helpful.” Why not 2000 and 2001 data?</p>
<p>In fact, in the OEB’s most recent release of reliability data (June 24, 2008), data for the largest distributor is missing from 2002 through 2006.<sup>17</sup></p>
<p>The OEB is willing to employ the 2002 and 2003 reliability data in its cost benchmarking that would determine each LDC’s future annual revenue. Yet, the OEB reports that it will not use this same data for its reliability-trend analysis since this data “may not have been reported consistently or calculated properly.” If the data is good enough for rate setting, it should be sufficient for trend analysis. If accepted, the OEB’s position would mean that no compliance test could be conducted and no historical analysis prior to 2004 could be performed.</p>
<p>Starting in 2000, the OEB collected this reliability data annually (but reported on a monthly basis) from LDCs as stipulated in the OEB’s PBR rate guidelines. OEB staff have made notable comments about the accuracy of the data collected in 2000 and 2001 as well as 2002 and 2003. A detailed examination of this data yielded no systematic deficiency, just the usual data cleanup issues—<i>i.e.</i>, duplicate records, missing data, and occasional entries that appear inconsistent, such as monthly data reported at annual rates. These cleanup items occur more frequently for some of the very smallest LDCs (such as those that were subsequently acquired by Hydro One). However, all of these issues are easily resolved.<sup>18 </sup></p>
<p>The authors examined the reliability data filed by the 80 to 100 LDCs over the 2000 to 2007 period to judge whether the 2000 through 2003 data is consistent with the 2004 through 2006 data. First, they performed an informal comparison of reported values for each LDC. Second, four tests were conducted to gauge whether the distributions were normal; all four found the annual distributions normal. Additional tests included “t” tests, “F” tests, sign tests, analysis of variance, and Tukey’s HSD (honestly significant difference) post-hoc analysis. The clear conclusion supports the hypothesis that all years of data from 2000 to 2007 come from the same population.<sup>19</sup> Therefore, if the OEB is willing to use 2004, 2005, or 2006, it must also use 2000, 2001, 2002, or 2003. What do data from this period show?</p>
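<p>The population tests described above can be illustrated concretely. The following Python sketch computes a one-way ANOVA F statistic from first principles, one of the several tests the authors applied; the SAIDI figures are invented for illustration and are not drawn from the OEB filings.</p>

```python
# Illustrative sketch only: a one-way ANOVA F statistic, asking whether
# annual SAIDI samples could plausibly come from the same population.

def anova_f(groups):
    """One-way ANOVA F statistic for a list of samples (one per year)."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group and within-group sums of squares
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    if ssw == 0:
        return 0.0 if ssb == 0 else float("inf")
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical SAIDI observations for four filing years (invented values)
saidi_by_year = {
    2000: [1.10, 1.30, 1.20, 1.40],
    2001: [1.20, 1.40, 1.15, 1.30],
    2002: [1.30, 1.20, 1.40, 1.10],
    2003: [1.20, 1.30, 1.10, 1.40],
}
f_stat = anova_f(list(saidi_by_year.values()))
# A small F statistic gives no grounds for treating the years as
# different populations.
```

<p>The authors’ fuller battery adds “t” tests, sign tests, and Tukey’s HSD post-hoc analysis, but the logic is the same: compare variation between years with variation within them.</p>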
<p>For municipal LDCs, the post-PBR SAIDI average for each year exceeds the pre-PBR average of 1.22, except in the first two years of the PBR <i>(see Figure 3)</i>. By the end of the period, the final three-year average is 1.79, 46 percent higher than the pre-PBR average. In three separate years, the weighted average exceeds the upper bound standard of 1.59, and in one year it equals that standard. In 2002, the result exceeds the upper bound standard by 49.7 percent. The final two years exceed the standard by a wide margin. The composite post-PBR results significantly exceed the pre-PBR average of 2.07 in each year. The final three-year average is 6.01, 190 percent higher than the pre-PBR average. Results in each year also exceed the upper bound standard by a wide margin.</p>
<p>The post-PBR municipal SAIFI average for each year except one exceeds the pre-PBR average of 1.46 <i>(see Figure 4)</i>. For municipal LDCs, the final three-year average is 1.86, 27.4 percent higher than before PBR. In four years, the weighted average exceeds the upper bound standard of 1.84. The composite post-PBR results significantly exceed the pre-PBR average of 1.36 in each year. The post-PBR average of 2.31 exceeds the pre-PBR average by 70 percent. The final three-year average of 2.52 exceeds the pre-PBR average by 85 percent. Results in each year exceed the upper bound standard by a wide margin, in some cases by more than 50 percent.</p>
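<p>For readers unfamiliar with the mechanics behind these comparisons, the following Python sketch shows how SAIDI and SAIFI are computed for a single LDC and then customer-weighted across the sector. All outage records, customer counts, and resulting figures are hypothetical; only the 1.59 standard comes from the text.</p>

```python
# Minimal sketch of the index arithmetic behind the comparisons above.
# All data here is invented for illustration.

def saifi(outages, customers_served):
    """Interruptions per customer: customers interrupted over customers served."""
    return sum(affected for affected, _ in outages) / customers_served

def saidi(outages, customers_served):
    """Average interruption duration (hours) per customer served."""
    return sum(affected * hours for affected, hours in outages) / customers_served

def customer_weighted(values, customers):
    """Sector-wide index, weighting each LDC by its customer count."""
    return sum(v * c for v, c in zip(values, customers)) / sum(customers)

# Each outage record: (customers interrupted, duration in hours)
ldc_a = [(200, 2.0), (100, 1.0)]   # LDC A serves 1,000 customers
ldc_b = [(5000, 3.0)]              # LDC B serves 10,000 customers

sector_saidi = customer_weighted(
    [saidi(ldc_a, 1_000), saidi(ldc_b, 10_000)], [1_000, 10_000])
UPPER_BOUND = 1.59  # the municipal SAIDI standard cited in the text
exceedance = sector_saidi / UPPER_BOUND - 1  # fraction above (or below) the standard
```

<p>The weighting matters: a large LDC’s performance dominates the sector index, which is why results for the largest distributors drive the composite figures discussed above.</p>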
<p>These reliability indexes indicate significant service degradation across the province over the past eight years, a degrading of reliability performance for the electricity distribution sector as a whole. It’s critically important to examine this degradation and the reasons behind it. Have LDCs stopped being concerned about reliability given the <i>laissez faire</i> regulatory attitude displayed by the OEB? Have LDCs been forced to make budgetary cuts because of insufficient revenues under IR and unrealistic expectations on the part of shareholders regarding their dividend payments to provincial and municipal coffers?</p>
<p>Has the focus of LDCs been distracted by a policy environment that is always changing as government and regulator bounce from one idea to the next? Recently, the provincial government, through its regulator the OEB and through legislative changes, has initiated the eighth set of sweeping regulatory changes in 10 years affecting the electric distribution sector.<sup>20 </sup></p>
<p>The evidence confirms that LDCs have suffered operationally over this period; sector-wide distribution productivity growth under the OEB’s decade-long restructuring and IR has been significantly negative, unlike the positive, broad-based productivity growth from 1988 to 1997. And, perhaps not surprisingly given the focus on O&amp;M, allocative efficiency has declined as well.</p>
<p>More troubling have been the incentives embedded in these frameworks. The 2004 staff report detailed the OEB’s thoughts on achieving further efficiencies in distribution: “The Board’s objective is to consider if further efficiencies are available, and if so, how to achieve them. … the paper identifies approaches available to the Board to drive further efficiencies in the electricity distribution sector.” Consistent with the theme of government over the last decade, the paper and later OEB policies place heavy reliance on O&amp;M savings from policy-directed consolidation (<i>i.e.</i>, forced and incented mergers) but offer little to substantiate the savings expectations. These new shareholder-initiated amalgamations have publicly touted the benefits of the government’s consolidation policy: The primary benefit, they claim, is the significant reduction in operational (<i>i.e.</i>, O&amp;M) costs. In a <i>Toronto Star</i> article, “Wave of Hydro Mergers Forecast,” some recent merger experiences were discussed.<sup>21</sup> The article noted the accepted wisdom that previous mergers had produced “tangible cost savings.” The authors’ research finds diseconomies of scale; however, it also shows significant scope economies for outputs and inputs.</p>
<p>The 2004 staff report and subsequent reports (<i>e.g.</i>, the 2006 <i>Christensen</i> report, and the 2007 and 2008 <i>Pacific Economics </i>reports) focus on O&amp;M-based operational efficiencies associated with technical efficiency (<i>i.e.</i>, achieving the maximum output-to-input ratio) while ignoring capital (about half of total costs) and associated allocative inefficiency. Consistent with the OEB’s focus on O&amp;M, research on the post-PBR efficiency of the LDCs finds that allocative inefficiency has increased since 1997. But, whereas the gold-plated networks of the 1990s had robust reliability performance, the new, more inefficient networks have degraded reliability due to non-optimal O&amp;M expenditures.</p>
<p>Most troubling is the fact that the OEB now has formalized its IR based on O&amp;M benchmarking without considering inter-utility differences in labor capitalization policies or reliability performance. Ignoring differing capitalization policies will distort O&amp;M comparisons and create differences in benchmark outcomes that are figments of accounting alone. Ignoring reliability in the O&amp;M benchmarking will incent LDCs to cut O&amp;M even if it degrades reliability and harms customers. Indeed, the authors’ econometric research over the past decade finds that O&amp;M reductions were significantly related to reduced reliability.<sup>22 </sup></p>
<p>But, what about individual LDCs and their performance relative to the standards set by the <i>2000 Electricity Distribution Rate Handbook</i>? Some LDCs aren’t compliant with their performance standard established in 2000. On average, Ontario LDCs have been experiencing a deterioration of reliability over the 2000 to 2007 period. Furthermore, even though we noted deterioration in the 2005 to 2007 period above relative to the 2000 to 2002 period, for some LDCs their 2000 to 2002 reliability performance had degraded from their pre-PBR performance. Unfortunately, and in contravention of the OEB’s 2000 reliability mandate, some LDCs are using the post-PBR degradation to establish new, lower standards based on their most recent three-year performance. However, the OEB’s decision in 2000 was to establish a minimum floor for reliability. The intent wasn’t to establish a rolling three-year moving average where the reliability standard itself would degrade.<sup>23</sup></p>
<h4>Keeping Ontario’s Promise</h4>
<p>On average, Ontario LDCs have been experiencing a deterioration of reliability over the 1995 to 2007 period. Furthermore, on average, LDCs aren’t compliant with their standards established in 2000. Indeed, performance deteriorated during the 2005 through 2007 period relative to the 2000 through 2001 period; 2000 through 2001 reliability performance itself had degraded significantly from that of the 1995 through 1998 period. As reliability degraded, LDCs appear to have used the worsening performance to implement ever-lower “rolling standards.”</p>
<p>There’s clear evidence of reliability degradation in the OEB’s data, enough to question the assertion in the staff discussion paper that there are no concerns with reliability in the province. With the existing data, however, it’s impossible to attribute cause for the degradation. As indicated in the staff discussion paper and as mandated by the OEB’s decision in 2000, all service reductions, regardless of cause, are used to calculate the interruption indexes.</p>
<p>The OEB and the Ontario government have played a role in this degradation. Implicitly, the <i>laissez faire</i> regulatory attitude displayed by the OEB since 2000 has abetted the deterioration. Explicitly, the OEB’s growing fixation on partial cost benchmarking, as opposed to the total benchmarking advocated in the 1st Generation PBR, has directly incented LDCs to curtail O&amp;M expenditures so as to improve their benchmarking score. Our own research on costs, reliability and investment found such curtailments degraded reliability. The OEB in 2003 reminded stakeholders that its legislative mandate requires the OEB: “To protect the interests of consumers with respect to prices and the reliability and quality of electricity service.” If the OEB has failed to protect consumers with respect to reliability, can rates be presumed just and reasonable?</p>
<p>Reliability may have been affected by causes beyond an individual LDC’s ability to control—for example, loss of supply from the transmission system. Indeed, the Implementation Task Force argued that LDCs should be held accountable for the failures under their control (<i>p.36</i>): “One other factor that needs to be considered when calculating the indices is the effect of external causes. These causes include outages and interruptions on the transmission system, and on feeders used jointly with another utility... [T]he reliability indices reported by a utility should be adjusted so that they truly represent situations under its control.”</p>
<p>Therefore, as part of the original reliability-indicator reporting requirements established in the first <i>Distribution Rate Handbook</i>, LDCs are required to record the reason for each supply interruption, but not to report it to the OEB. This requirement was continued in the <i>2006 Electricity Distribution Rate Handbook</i>. The OEB should require LDCs to provide this data retroactively to 2000 so that the historical data available to the OEB can be used to determine how much of the degradation originates outside the network and how much reflects interruptions within the LDCs’ ability to control. Both operational and regulatory-governance remedies then need to be implemented and monitored to see whether they bring subsequent performance within mandated limits.</p>
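<p>The adjustment the Task Force describes amounts to recomputing the indexes after filtering out interruptions whose recorded cause lies outside the LDC’s control. The following Python sketch illustrates the mechanics; the cause labels and outage records are hypothetical, not the OEB’s actual cause codes.</p>

```python
# Sketch of the Task Force's proposed adjustment: recompute SAIDI after
# excluding interruptions attributed to external causes (e.g., loss of
# transmission supply). Labels and records invented for illustration.

EXTERNAL_CAUSES = {"loss of supply", "joint-use feeder"}

def saidi_split(records, customers_served):
    """Return (total SAIDI, SAIDI from controllable causes only).
    Each record is (customers interrupted, duration hours, cause)."""
    total = sum(a * h for a, h, _ in records) / customers_served
    controllable = sum(
        a * h for a, h, cause in records if cause not in EXTERNAL_CAUSES
    ) / customers_served
    return total, controllable

records = [
    (3000, 4.0, "loss of supply"),      # transmission outage, not the LDC's fault
    (500, 2.0, "defective equipment"),  # within the LDC's control
    (200, 1.0, "tree contact"),         # within the LDC's control
]
total, controllable = saidi_split(records, 10_000)
external_share = 1 - controllable / total  # share of SAIDI outside LDC control
```

<p>With cause-coded records back to 2000, the same split could be run on the historical data to apportion the observed degradation between the network and external supply.</p>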
<p>Standards and penalties have been shown to be effective in blunting the perverse incentives under IR. In the United States, Ter-Martirosyan found that utilities with IR, but without standards, reduce their expenditures by 37 percent throughout the time period of the analysis. On the other hand, utilities with IR and standards and penalties increased their expenditures in every year by 17 percent. The former utilities were found to have had a 64-percent increase in SAIDI and a 13-percent increase in SAIFI. The latter utilities were found to have had a 26-percent decrease in SAIDI and a 23-percent decrease in SAIFI.</p>
<p>In the long run, our preference in Ontario is to develop an incentive approach that internalizes the cost of supply interruptions so that LDCs can supply a socially optimal level of reliability. Such regimes have been successfully implemented by a number of European regulators. These efforts have been well documented; CEER is shortly due to release its fourth benchmarking report on such efforts among member countries. In the short run and in the absence of such an incentive regime, Ontario’s distributors should face financial penalties for noncompliance with mandated minimum-reliability standards. After all, the OEB itself stated that by 2003 it would be in a position “to set industry service quality performance standards. Once these standards have been established, PBR incentive mechanisms with economic consequences will be introduced around the service quality indicators” (<i>2000 Handbook, p.7-10</i>). Now, in 2009, the OEB should follow through on its decade-old promise. Hopefully, it’s not too late.</p>
<p> </p>
<h4>Endnotes:</h4>
<p>1. <i>Report of the OEB Performance Based Regulation Implementation Task Force</i>, May 18, 1999.</p>
<p>2. OEB, Service Quality, <i>2000 Electric Distribution Rate Handbook</i>, March 9, 2000, p. 7-10.</p>
<p>3. <i>Service Quality Regulation for Ontario Electricity Distribution Companies: A Discussion Paper,</i> Ontario Energy Board staff, Sept. 15, 2003 (downloaded from <a href="http://www.oeb.gov.on.ca" target="_blank">http://www.oeb.gov.on.ca</a>).</p>
<p>4. Staff Discussion Paper<i>, Regulation of Electricity Distributor Service Quality</i> (Board File EB-2008-0001), Jan. 4, 2008.</p>
<p>5. Ofgem, <i>2006/07 Electricity Distribution Quality of Service Report</i>, Oct. 31, 2007, p.1.</p>
<p>6. CEER, <i>Third Benchmarking Report of Quality of Electricity Supply, </i>2005, p.31.</p>
<p>7. Another appropriate metric would be weighted-average by customer numbers.</p>
<p>8. Staff Discussion Paper, fn 7, p.25.</p>
<p>9. Reliability data spanning the period from 2000 to 2007 have been assembled from the Board’s annual PBR filings for 2000 and 2001, as well as from the RRR data for 2002 to 2007 for each utility. We have conducted time series statistical tests to examine whether or not the pre-2004 reliability data is different from the 2004 to 2006 data used by the Board. We were unable to reject the null hypothesis of no difference, <i>i.e.</i>, for statistical purposes, the data appear to come from the same universe.</p>
<p>10. According to the Board, “Utilities that have at least 3 years of data…should, at minimum, remain within the range of their historic performance.” (7-6, 7-7) In this instance, the average for municipal utilities during PBR should be no higher than 1.59 for SAIDI and 1.84 for SAIFI. These standards are based on a customer weighted mean of upper boundary performances during the prior three years.</p>
<p>11. Staff Discussion Paper, fn 7, p.25.</p>
<p>12. Christensen Associates, <i>Methods and Study Findings: Comparators and Cohorts Study for 2006 EDR</i>, October 2005.</p>
<p>13. Pacific Economics Group, <i>Benchmarking the Costs of Ontario Power Distributors</i>, April 2007.</p>
<p>14. <i>Calibrating Rate Indexing Mechanisms for Third Generation Incentive Regulation in Ontario</i>, February 2008.</p>
<p>15. <i>Benchmarking the Costs of Ontario Power Distributors,</i> March 2008, p.43.</p>
<p>16. <i>Id.</i> at p.36.</p>
<p>17. Fortunately, we do have filings from prior years for this LDC. It is clear that the post-PBR period, and in particular the last few years, have seen a very significant deterioration in its reliability. This LDC had a relatively good reliability record pre-PBR. In recent OEB proceedings this LDC (and others) voiced concern that budget constraints prevented replacing substantial assets deployed decades ago. Our statistical model of reliability, O&amp;M, and additions finds that such underinvestment degrades reliability.</p>
<p>18. Missing data, <i>etc.</i>, also occur in the 2002-2006 data the Board used for its cost comparison and benchmarking. This is not enough in itself to judge that the data is unusable. In addition, data identified as inconsistent for a particular LDC can be easily verified or corrected with the LDC.</p>
<p>19. See Cronin, F. and Motluk, S., “An Analytical Look at Service Reliability Degradation.”</p>
<p>20. Unfortunately, the Government’s pronouncements, proposals and policies often have been inconsistent, misguided, and counterproductive. These include: Bill 35, the <i>1998 Energy Competition Act</i>; the <i>2000 OEB PBR Decision</i> (OEB, 2000a); Bill 100, the <i>Minister’s Directive to the OEB</i>, and the OEB <i>Decision in the Proceedings on the Minister’s Directive in 2000 </i>(OEB, 2000b); 2002’s <i>Action Plan</i> and <i>Bill 210</i>; the February 2004 <i>OEB Discussion Paper on Further Efficiencies</i> (OEB, 2004); Ontario Ministry of Energy, <i>Electricity Transmission and Distribution in Ontario — A Look Ahead</i>, Dec. 21, 2004. (EDTO); Christensen Associates, <i>Methods and Study Findings: Comparators and Cohorts Study for 2006 EDR</i>, October 2005; Pacific Economics Group, <i>Benchmarking the Costs of Ontario Power Distributors</i>, April 2007, and finally, <i>Calibrating Rate Indexing Mechanisms for Third Generation Incentive Regulation in Ontario</i>, February 2008.</p>
<p>21. Tyler Hamilton, “Wave of Hydro Mergers Forecast,” <i>Toronto Star</i>, Oct. 21, 2006.</p>
<p>22. Cronin, F. J. and S. Motluk, <i>Modeling Electric Distributor Costs, Investment, and Reliability,</i> (in process). We used Ontario LDCs’ data to estimate a three-equation model. We find that LDCs with higher O&amp;M expenditures also have higher reliability (lower SAIDI, <i>etc.</i>). Older networks, networks with lower shares of underground lines, and networks with less capital tend to have lower reliability.</p>
<p>23. See Ottawa Hydro Holdings, Inc., <i>2006 Annual Report</i>, p.22 for a discussion of a rolling average to set reliability standards. We might note that in this case, the LDC’s reliability performance is good.</p>
</div></div></div>
Sat, 01 Aug 2009 04:00:00 +0000

Building the Next Generation Utility
http://www.fortnightly.com/fortnightly/2009/01/building-next-generation-utility
<div class="field field-name-field-import-deck field-type-text-long field-label-inline clearfix"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Fundamental changes require bold strategies.</p>
</div></div></div><div class="field field-name-field-import-byline field-type-text-long field-label-inline clearfix"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Jack Azagury, et al.</p>
</div></div></div><div class="field field-name-field-import-bio field-type-text-long field-label-inline clearfix"><div class="field-label">Author Bio:&nbsp;</div><div class="field-items"><div class="field-item even"><p><b>Jack Azagury</b> and <b>Tim Porter</b> are partners and <b>Andre Begosso</b> is a senior manager with Accenture’s utilities industry group consulting practice. Email Jack at <a href="mailto:jack.azagury@accenture.com">jack.azagury@accenture.com</a>, Tim at <a href="mailto:timothy.p.porter@accenture.com">timothy.p.porter@accenture.com</a> and Andre at <a href="mailto:andre.p.begosso@accenture.com">andre.p.begosso@accenture.com</a>. The authors acknowledge the contributions of Curtis Bech, Accenture consultant.</p>
</div></div></div><div class="field field-name-field-import-volume field-type-node-reference field-label-inline clearfix"><div class="field-label">Magazine Volume:&nbsp;</div><div class="field-items"><div class="field-item even">Fortnightly Magazine - January 2009</div></div></div><div class="field field-name-field-import-image field-type-image field-label-above"><div class="field-label">Image:&nbsp;</div><div class="field-items"><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/article_images/0901/images/0901-FEA4-fig1.jpg" width="1488" height="1065" alt="" title="Figure 1" /></div><div class="field-item odd"><img src="http://www.fortnightly.com/sites/default/files/article_images/0901/images/0901-FEA4-fig2.jpg" width="1594" height="1064" alt="" title="Figure 2" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The regulated utility industry in the United States faces an unprecedented dilemma, caught between the conflicting demands of its shareholders, the environment, and society. While environmental and societal pressures will continue to intensify, the targets remain uncertain and the methods for meeting and financing these requirements are less than clear. What is clear is the magnitude of the challenge ahead. The United States is facing a massive demand for investment in its electric system, from generation to distribution. An estimated $400 billion to $1 trillion will be needed over the next 20 years to upgrade our electrical infrastructure. The rate impact to the average retail customer would run between $75 and $200 per year, and that does not even consider any additional cost resulting from stricter environmental legislation. Rate increases of this magnitude are untenable.</p>
<p>The current credit crisis also complicates the challenge of powering the future. Not only will it have an immediate impact on bad debt and revenue recovery, but, like many other businesses, a utility needs financing to build and upgrade its asset base. What is unique about this industry, however, is that within the current regulatory construct, the deployment of capital is the most prominent avenue for growth. The credit crisis could result in delays to any proposed infrastructure investments, thus reducing rate-base growth and, with that, earnings per-share growth.</p>
<p>In addition, utilities must prepare for an inevitable decline in energy sales as societal demand for energy efficiency mounts. Consumer awareness of energy consumption is increasing, and empowered consumers will demand more services (from their utilities or other providers) to allow them to effectively curtail peak loads and manage consumption. As a result, the consumer will move from being a price taker to a price setter.</p>
<p>In light of such challenges, utilities are faced with an unprecedented and conflicting set of demands. They must be a significant part of the carbon solution, while also solving the asset infrastructure challenge economically. They must identify and steward sustainable, economically viable energy resources through the next decades, and proactively assist customers’ efforts to manage their energy costs. They must renew their organizations’ skills and talent base, and continue providing reasonably predictable earnings growth.</p>
<p>While many utilities have embarked upon efforts to define a path toward the next generation utility, these efforts often are siloed initiatives driven by the generation, transmission and distribution (T&amp;D) or customer segments of the organization. Addressing the upcoming challenge will require a coordinated and integrated set of decisions so as not to sub-optimize the end-to-end value chain. Eight critical themes across the generation, T&amp;D and customer elements of the value chain will shape the future of our industry.</p>
<h4>The Generation Dilemma</h4>
<p>With the elections behind us, the country is on a path toward a significant shift in energy and environmental policy. Even though greenhouse gas (GHG) legislation will provide much-needed certainty, the path forward remains complex. The government has to balance environmental demands while managing a significant budget deficit and an economic recession that leaves little room to increase rates. This dilemma was illustrated in the way multiple renewable portfolio standard initiatives were defeated at the ballot box, including Proposition 7 in California. Clearly, companies in the industry will have fewer degrees of freedom in shaping their generation portfolios as regulation and legislation play an increasing role in mandating standards and fuel types, in some cases defying fundamental technological constraints.</p>
<p>In the short term, energy efficiency and conservation provide the most cost efficient and highest impact solution to addressing the environmental challenge. In the medium to long term, there will be a gradual evolution of the generation portfolio, with coal, gas, nuclear and renewables all playing a significant role in powering the nation.</p>
<p>• <b>Theme 1:</b> As renewable energy sources increase, winning utilities will need to be at the forefront of shaping the renewable generation landscape. The new administration is pledging more than $150 billion to be invested in clean technology over the next ten years. The question facing the industry is not whether this will have a significant impact, but rather how a utility should position itself to reap the benefits from one of the most promising growth drivers for the U.S. economy. Clean technologies will reshape the energy industry in the same way the Internet reshaped the IT industry. Winning utilities cannot afford to remain in the mainframe age as new entrants reap the benefit from clean tech growth.</p>
<p>Utilities will need to take a proactive stance toward a renewable strategy. An analysis of research and development (R&amp;D) spending shows that while utilities represent 5.23 percent of U.S. capital spending, they only account for 0.067 percent of U.S. R&amp;D spending. Although much of the R&amp;D in the industry is performed by equipment manufacturers, the DOE and academic centers, utilities will have to take greater control over the evolution of technology. They will need to work with start-ups and research centers to drive and understand how to leverage new technology into their portfolios. In addition to centralized renewable sources, customer-driven distributed generation will play a core part of the country’s future renewable portfolio, and will provide either a competitive threat or an opportunity for growth.</p>
<p>• <b>Theme 2:</b> Gas will be the transition fuel of choice over the next 10 years or more. While demand for power will grow at a much slower rate, the industry needs to alleviate pressure on reserve margins and baseload capacity over the next 10 to 15 years. This will be compounded as old coal and dual-fuel units continue to be retired to meet new environmental regulations. In the short to medium term, new load requirements will be filled by natural gas for three reasons. It is the only option with short enough construction lead times, allowing utilities to meet immediate power requirements and alleviate short-term tightness in reserve margins. It is needed to support renewable energy sources such as wind and solar. And natural gas is the only politically acceptable choice.</p>
<p>• <b>Theme 3:</b> While nuclear will play a greater role in our baseload portfolio, coal plants will remain critical to meeting our future energy needs and will remain a significant source of value for their owners.</p>
<p>Gas alone will not alleviate the baseload challenge we will face in the coming years. And even though nuclear will play an increasing role, its contribution will not be felt for decades. Coal plants will be an integral part of the generation solution for many years to come and remain a significant source of value for their owners. Despite increasing social and political pressures against legacy fuels and power production methods, coal is simply too cheap and readily available to ignore.</p>
<p>There are several reasons for this. First, starting with the first Clean Air Act in the early 1970s, the United States has grandfathered existing power plants with respect to any environmental legislation. Second, the U.S. coal generation fleet still has substantial remaining life, with an average age of 35 years against a typical asset life of 50. Replacing these plants with more environmentally friendly technologies in a very short period, say 10 to 15 years, would be prohibitively expensive for ratepayers. Third, stripping coal out of the generation mix would push other options—nuclear and natural gas—beyond their sustainable limits. Switching completely from coal to nuclear would require adding roughly 300 nuclear units to the current 104. Current projections show plans for at most 40 new nuclear units. While this represents a major addition to our current nuclear fleet, it is a far cry from the number required to replace coal. And finally, the underlying fundamental coal-to-gas spread <i>(see Figure 1)</i> is likely to persist and possibly even increase. GHG emissions prices would need to be prohibitively high in order to reverse this fuel spread.</p>
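<p>The coal-to-nuclear arithmetic can be sanity-checked with a back-of-envelope calculation. The sketch below is illustrative only: the capacity and capacity-factor figures are round assumptions chosen for the exercise, not data from this article.</p>

```python
import math

# Back-of-envelope check on the "~300 nuclear units to replace coal" claim.
# Every figure below is a round assumption for this sketch, not a number
# taken from the article.
coal_capacity_gw = 315        # assumed installed U.S. coal capacity
coal_capacity_factor = 0.72   # assumed coal fleet-average capacity factor
nuke_unit_gw = 0.85           # assumed average size of one nuclear unit
nuke_capacity_factor = 0.90   # assumed nuclear capacity factor
HOURS_PER_YEAR = 8760

# Annual energy produced by the coal fleet, and by one nuclear unit, in TWh
coal_twh = coal_capacity_gw * coal_capacity_factor * HOURS_PER_YEAR / 1000
twh_per_unit = nuke_unit_gw * nuke_capacity_factor * HOURS_PER_YEAR / 1000

units_needed = math.ceil(coal_twh / twh_per_unit)
print(units_needed)  # on the order of 300, vs. roughly 40 units planned
```

With these assumed inputs the estimate lands near 300 units, consistent with the order of magnitude cited above; varying the assumptions shifts the result but not the conclusion that planned construction falls far short of replacing coal.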
<p>Thus, regardless of ultimate GHG emission standards, coal is unlikely to be eliminated, or significantly diminished, even in the case of a nuclear renaissance. We will, however, see acceleration in the development of advanced clean-coal technologies, including, but by no means limited to, carbon capture and sequestration (which is 20-plus years away).</p>
<p>• <b>Theme 4:</b> Given the increasing difficulty in siting and getting environmental and regulatory approvals for infrastructure investments in generation, utilities must focus on operational excellence to extend the life of assets and extract more value. Historically, our generation fleet has exhibited significant variability in performance, but there are significant opportunities to improve operating performance. Benchmarking analyses commonly show 40-percent deviation in performance around the mean for units of identical make and similar vintage <i>(see Figure 2)</i>. In addition, and just as revealing, there are 1 to 5 percent gaps in performance across the same unit when operated by different operators within an organization (normalized for weather and other conditions outside the operator’s control). Utilities should work toward closing these gaps by adopting lean manufacturing and other operational-excellence techniques as a core part of their strategy to drive revenue growth through operational improvements.</p>
<h4>The T&amp;D Conundrum</h4>
<p>The challenges on the generation part of the value chain are intimately tied to those found in T&amp;D. Utilities are struggling to determine how to upgrade the existing grid, understand the threat of demand destruction so they can size the system for peak demand, expand and adapt the grid to accommodate distributed generation and renewables, and enable evolving customer needs and expectations.</p>
<p>• <b>Theme 5:</b> An integrated perspective on the smart grid will drive the future for T&amp;D despite untested financial benefits and large technological uncertainty in the distribution system. The U.S. electrical grid was built during the middle of the 20th century. Now, more than 50 years later, infrastructure components are operating well beyond their designed lives. The past decades have been characterized by massive underinvestment in T&amp;D. System upgrades are inevitable and overdue, and this is where smart technologies enter the equation. While many utilities refer to their AMI-smart meter programs as smart-grid programs, smart grid actually encompasses three major components: smart metering, smart grid and smart in-house technology.</p>
<p>Smart-metering technology is the starting point for any smart grid strategy. In the last few years the technology has reached a point where the business case for action is clear and supported by a broad range of benefits including improved data collection, remote connect and disconnect, better theft detection, more accurate load forecasting, improved power-quality monitoring and outage management, and superior customer service. In addition, smart meters support real-time pricing that will allow for peak load shaving and alleviate strain on the utility grid, thus improving reliability and decreasing outage frequency.</p>
<p>Smart-grid technologies are broad ranging and include distributed sensors, remote control devices, smart substations, power stabilization software and pattern recognition software, to name a few. The business case for smart grid technology is not yet as attractive as for smart meters due to the longer term and unproven nature of the benefits and uncertainty surrounding costs. However, numerous utilities, driven by a core belief in smart grid within the C-suite, undertake deployment pilots supported by a well founded, albeit long term, financial business case. Xcel Energy’s Boulder smart grid city pilot is one of the more significant examples today. In most of these early deployments, the projected benefits are tied to automatic detection and response to network problems and faults, reduced outage frequency and duration, decreased energy loss and theft, improvements in power quality monitoring and rectification and reliability, and reduced expenditure through condition-based maintenance.</p>
<p>Smart in-house technology is still in the early stages of maturity and includes programmable controllable thermostats, smart appliances (<i>e.g.,</i> PHEV controllers), communications hubs and home area networks. While we are still in early pilot stages, the benefits contemplated further reinforce those attained through smart metering by driving the next level of automated demand response.</p>
<p>Smart technologies will allow the customer to have a better understanding of electricity usage, which will translate into a reduction in the information asymmetry between the utility and its customers. This will lead to an estimated 10 percent reduction in consumption. Reduction in electricity requirements will ripple through transmission and distribution as well as generation. An end-to-end approach to any strategy and business case is therefore critical.</p>
<p>Thus, although the business case for smart grid is viable, it requires: 1) continued improvements in technology, specifically around interoperability; 2) an end-to-end perspective from generation to retail; 3) a long-term investment horizon; and most important, 4) a shift in the regulatory framework to ensure shareholders are adequately rewarded for the investment. The business case will vary dramatically from one utility to the next and the sensible way forward lies in a large-scale pilot approach where technology is deployed at a city or municipal level and tested under real-world conditions.</p>
<p>• <b>Theme 6:</b> Regardless of which smart technologies are selected, utilities will need to proactively reshape the regulatory compact. The current regulatory compact was created at the beginning of the 20th century with the goal of spurring the construction of a complete and operational electrical grid. It guarantees a rate of return for every dollar of capital investment as well as reimbursement for reasonable expenses associated with operating the infrastructure.</p>
<p>While the societal benefits of reduced energy consumption are clear, utilities will experience a compounding decline in power sales and revenue. Furthermore, a reduction in peak load will reduce both the wear and tear on existing equipment, as well as the need for new infrastructure. Both will reduce the need for capital investment and inhibit the utility’s main avenue for growth. Under the current compact, utilities cannot satisfy their responsibility to shareholders to generate growth while simultaneously embarking on energy efficiency and demand-response programs that destroy demand. In an industry accustomed to 1 to 2 percent annual growth, the impact from energy-efficiency programs that could potentially reduce consumption from existing customers by 10 percent (albeit compensated by new customer growth) will be significant.</p>
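<p>The arithmetic behind that squeeze is easy to sketch. The figures below are illustrative assumptions for the example (1.5 percent annual customer growth, a 10 percent per-customer efficiency cut phased in over ten years), not data from this article.</p>

```python
# Illustrative sketch: customer growth vs. efficiency-driven demand cuts.
# All rates are assumptions for the example, not figures from the article.
growth_rate = 0.015      # assumed annual customer growth
efficiency_cut = 0.10    # assumed per-customer consumption cut, fully phased in
years = 10

customers = (1 + growth_rate) ** years   # customer-base index after a decade
baseline_index = 100 * customers         # sales index with no efficiency programs
sales_index = baseline_index * (1 - efficiency_cut)

print(round(baseline_index, 1), round(sales_index, 1))
```

Under these assumptions, sales that would have grown roughly 16 percent over the decade instead end up only about 4 percent above today's level: most of a decade of customer growth is consumed by the efficiency programs, which is exactly the revenue problem the compact must address.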
<p>To many, the answer is decoupling, already enacted in many states. But decoupling alone will be insufficient as it does not completely address the threat of demand destruction and capital deferral. Along with any decoupling of rates, a provision is needed that allows the utility to recover some of the revenue it stands to lose. Utilities under such a compact would be kept financially whole while being able to aggressively promote measures that address societal demands. This regulatory compact could not last forever because the revenue requirement associated with deferred capital investments is an escalating cost the consumer should not have to bear in perpetuity. Accordingly, this regulatory compact should have a clear expiration that would provide utilities time to transform their operations. Utilities will need to be proactive in shaping this regulatory agenda. While a departure from the current, fairly safe, regulatory compact creates risks and uncertainty, a stalling or delay strategy will create significantly greater risk and be disadvantageous to shareholders.</p>
<p>• <b>Theme 7:</b> The implementation of energy efficiency and smart technologies will change the role of the customer from price taker to price setter, requiring very different capabilities from the utility.</p>
<h4>Customer as Price-Setter</h4>
<p>First, the customer will demand more accurate billing. This means eliminating estimated meter reads, increasing the consumers’ trust in meter-read accuracy, and improving the accuracy of bill forecasting. Second, as consumer awareness increases and new technologies—including newer distributed generation and PHEVs—continue to mature, the customer will require power that is more reliable than what can be provided by today’s grid. Specifically, this means fewer outages, more accurate outage restoration estimates, and a reduction in the cost of outages due to either more proactive identification and restoration or implementation of new battery or backup technologies. Third, customers will demand the ability to manage their energy consumption through a real-time view of energy prices. Eventually the smart grid will reach into the home and manage demand by directly controlling appliances and HVAC equipment.</p>
<p>So, what does an empowered customer really mean for the utility? Consumption will be driven by customers and influenced by externalities, such as their desire to reduce energy expenditures or carbon emissions <i>(see sidebar, “Changing Customer Expectations”)</i>. Furthermore, consumption patterns no longer will be as clear as they once were and this will make load forecasting more difficult. In other words, utilities will be in yet another difficult position where they must maximize shareholder value while simultaneously accepting more risk without any corresponding financial reward.</p>
<p>• <b>Theme 8:</b> Utilities will need to fundamentally change their customer-facing capabilities if they are to become key players in the provision of green solutions to their customers. While many utilities are focused on improving customer service and JD Power scores for current basic energy services, winning utilities will need to adopt a game-changing strategy. They must not only provide great customer service on current products, but also aggressively position themselves to provide a much broader set of services around energy efficiency, sustainability and demand response. This would include time-of-use tariffs, micro-generation leasing, HVAC leasing, energy audits, energy monitoring, carbon offsetting, and large-scale property-portfolio efficiency management, to name a few.</p>
<p>While research shows consumers are willing to pay for additional, environmentally focused services, and are willing to entertain offers from utilities, it also reveals that utilities face a credibility gap relative to other organizations when it comes to the environment <i>(see Figure 3)</i>. This is compounded by the fact that the value chain for energy services is both extremely fragmented and extremely competitive, including organizations in the construction, energy provision, energy services, OEM and technology industries (<i>e.g., </i>Google, Microsoft, Cisco). Not only is the competition evolving but so is the target customer base. Municipalities are playing an increasingly important role and looking at their energy providers as a way to create a green proposition, often threatening municipalization. While most utilities face a significant hurdle in terms of capability building, they also are well positioned in the value chain as current owners of the customer relationship. Utilities will need to drive change along two fronts, by first building customer-service capabilities that rival those provided in other sectors, and second by proactively shaping the regulatory framework to drive a change in allowed operating scope beyond traditional services.</p>
<h4>The Next-Gen Utility</h4>
<p>Utilities face a shifting landscape where demand destruction and revenue loss are very real possibilities. In the face of these challenges three strategic options exist.</p>
<p>• <b>Lead the way:</b> Proactively shape a products and services portfolio, enabling greater energy efficiency and load management while influencing the regulatory compact, embracing the carbon challenge, moving toward the smart technologies and understanding the changing nature of the relationship with the customer.</p>
<p>• <b>Follow the leaders:</b> Maximize the current regulatory framework while laying the foundation for change by pursuing very limited forays into renewables and smart technologies, focusing on providing adequate service to the customer.</p>
<p>• <b>Resist change:</b> Attempt to delay the advent of new regulation, focusing on short-term value optimization with limited to no investment in long-term capabilities or technologies.</p>
<p>The industry faces inevitable changes. While the precise timing and form could differ, the end result is clear. Utilities using a lead-the-way strategy while exercising investment prudence will maximize shareholder value in the medium to long term. This strategy does not require being a first mover with regard to technology deployment, which doesn’t map well to utility core strengths or the regulatory environment. But it does require proactively embracing the changing landscape by: 1) Restructuring the asset portfolio to meet the evolving generation stack including, but not limited to, a clear path for the deployment of renewables and the agility to respond to technological progress; 2) Recasting the T&amp;D network to make it intelligent, two-way, and able to foster real-time demand management at scale, including customer-premise energy network management; 3) Reinventing the customer value proposition and experience to drive demand management at scale, with a customer-tailored mix of commodity and energy-utilization services; and 4) Proactively shaping the legislative and regulatory compacts, including changes in operating scope that move beyond the meter, as well as changes in earning structure beyond decoupling.</p>
<p>Piecemeal efforts will not produce the necessary outcomes. Bold leaders will drive their organizations to adopt a highly integrated, effective, efficient and extended operating model while delivering results along the way to earn regulators’ trust and support. This transformation and degree of turmoil is akin to that experienced within the telecom industry over the last 15 years, which has led to a complete redefinition of the winners and losers.</p>
<p>Crystallizing a dramatically different future vision and shaping and executing a roadmap thereto will define the legacy of this generation of industry leaders. While the challenge ahead is more significant and fraught with more uncertainty than previously faced in the history of the industry, it is surmountable. It all starts with that very first critical decision to lead, follow, or be left behind.</p>
</div></div></div><div class="field-collection-container clearfix"><div class="field field-name-field-sidebar field-type-field-collection field-label-above"><div class="field-label">Sidebar:&nbsp;</div><div class="field-items"><div class="field-item even"><div class="field-collection-view clearfix view-mode-full field-collection-view-final"><div class="entity entity-field-collection-item field-collection-item-field-sidebar clearfix">
<div class="content">
<div class="field field-name-field-sidebar-title field-type-text field-label-above"><div class="field-label">Sidebar Title:&nbsp;</div><div class="field-items"><div class="field-item even">Changing Customer Expectations</div></div></div><div class="field field-name-field-sidebar-body field-type-text-long field-label-above"><div class="field-label">Sidebar Body:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Future consumption will be driven by customers and their requirements in four major areas.</p><p><b>Ability to Manage and Reduce Costs</b></p><p>• Reduce bills by changing electricity use patterns and selecting from flexible tariff structures.</p><p>• Manage costs by viewing real-time consumption as usage changes.</p><p>• Reduce bills by allowing the smart grid to automatically adjust appliance settings.</p><p>• Reduce bills by participating in voluntary conservation and demand-response programs.</p><p><b>More Accurate Bills</b></p><p>• No more estimated reads.</p><p>• More accurate usage forecasts.</p><p><b>More Reliable Power</b></p><p>• Fewer power outages.</p><p>• More trust in the accuracy of restoration estimates during outages.</p><p>• Reduced impact of outages through proactive sensors and self-healing equipment as well as widespread battery or backup equipment.</p><p><b>Reduced Environmental Impacts</b></p><p>• Quantify carbon emission footprint reduction through real-time information.</p><p>• Participate in broader, end-to-end energy-efficiency programs.</p></div></div></div> </div>
</div>
</div></div></div></div></div><div class="field field-name-field-article-category field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Category (Actual): </h3><ul class="links"><li class="taxonomy-term-reference-0"><a href="/article-categories/distributed-generation">Distributed Generation &amp; Microgrids</a></li><li class="taxonomy-term-reference-1"><a href="/article-categories/strategy-planning">Strategy &amp; Planning</a></li><li class="taxonomy-term-reference-2"><a href="/article-categories/transmission">Transmission</a></li></ul></div><div class="field field-name-field-members-only field-type-list-boolean field-label-above"><div class="field-label">Viewable to All?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-article-featured field-type-list-boolean field-label-above"><div class="field-label">Is Featured?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-image-picture field-type-image field-label-above"><div class="field-label">Image Picture:&nbsp;</div><div class="field-items"><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/article_images/0901/images/0901-FEA4.jpg" width="1358" height="1500" alt="" /></div></div></div><div class="field field-name-field-fortnightly-40 field-type-list-boolean field-label-above"><div class="field-label">Is Fortnightly 40?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-law-lawyers field-type-list-boolean field-label-above"><div class="field-label">Is Law &amp; Lawyers:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix">
<div class="field-label">Tags:&nbsp;</div>
<div class="field-items">
<a href="/tags/ami">AMI</a><span class="pur_comma">, </span><a href="/tags/benchmarking">Benchmarking</a><span class="pur_comma">, </span><a href="/tags/cisco">Cisco</a><span class="pur_comma">, </span><a href="/tags/clean-air-act">Clean Air Act</a><span class="pur_comma">, </span><a href="/tags/doe">DOE</a><span class="pur_comma">, </span><a href="/tags/ev">EV</a><span class="pur_comma">, </span><a href="/tags/evs">EVs</a><span class="pur_comma">, </span><a href="/tags/ghg">GHG</a><span class="pur_comma">, </span><a href="/tags/google">Google</a><span class="pur_comma">, </span><a href="/tags/it">IT</a><span class="pur_comma">, </span><a href="/tags/microsoft">Microsoft</a><span class="pur_comma">, </span><a href="/tags/oem">OEM</a><span class="pur_comma">, </span><a href="/tags/phev">PHEV</a><span class="pur_comma">, </span><a href="/tags/xcel-energy">Xcel Energy</a> </div>
</div>
Thu, 01 Jan 2009 05:00:00 +0000
Trading on Carbon: How Markets Will Save the World
http://www.fortnightly.com/fortnightly/2007/01/trading-carbon-how-markets-will-save-world
<div class="field field-name-field-import-deck field-type-text-long field-label-inline clearfix"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Utilities should plan for U.S.-wide CO<sub>2</sub> emissions restrictions that will be more effective than state efforts.</p>
</div></div></div><div class="field field-name-field-import-byline field-type-text-long field-label-inline clearfix"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Chuck Chakravarthy and John Rhoads</p>
</div></div></div><div class="field field-name-field-import-bio field-type-text-long field-label-inline clearfix"><div class="field-label">Author Bio:&nbsp;</div><div class="field-items"><div class="field-item even"><p><b>Chuck Chakravarthy</b> and <b>John Rhoads</b> are senior executive and consultant, respectively, at Accenture. Contact Chakravarthy at <a href="mailto:s.r.chakravarthy@accenture.com">s.r.chakravarthy@accenture.com</a> and Rhoads at <a href="mailto:john.r.rhoads@accenture.com">john.r.rhoads@accenture.com</a>. The authors would like to acknowledge the contributions made to this article by their Accenture colleagues William Pott, Nate Turner, and Andrew Wickless.</p>
</div></div></div><div class="field field-name-field-import-volume field-type-node-reference field-label-inline clearfix"><div class="field-label">Magazine Volume:&nbsp;</div><div class="field-items"><div class="field-item even">Fortnightly Magazine - January 2007</div></div></div><div class="field field-name-field-import-image field-type-image field-label-above"><div class="field-label">Image:&nbsp;</div><div class="field-items"><div class="field-item even"><img src="http://www.fortnightly.com/sites/default/files/article_images/0701/images/0701-FEA2-fig1.jpg" width="1285" height="808" alt="" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>If carbon dioxide (CO<sub>2</sub>) emissions restrictions are mandated at a federal level, the method almost certainly will be a cap-and-trade system based on both the European Union and United States emissions trading systems. A cap-and-trade system likely will be chosen over other alternatives for four fundamental reasons: 1) dramatic success of the U.S. SOx and NOx cap-and-trade systems; 2) compatibility with other regional trading frameworks; 3) economic efficiency in distributing credits; and 4) business acceptance due to flexibility of abatement options.</p>
<p>The U.S. SOx and NOx cap-and-trade system, implemented in 1995, has been hailed widely as a success and has familiarized U.S. companies with emissions trading. However, the major problem with the SOx/NOx program is that it does not restrict a local geographic concentration of polluting sources. SOx and NOx are “local pollutants” that cause the most damage when concentrated within a specific geography. The problem of local concentration does not apply to CO<sub>2</sub>, since CO<sub>2</sub> restrictions are designed to reduce global rather than local concentrations of atmospheric CO<sub>2</sub>.</p>
<p>To mitigate climate change, a reduction in CO<sub>2</sub> is equally useful regardless of geography. This makes a cap-and-trade program even better suited to CO<sub>2</sub> than to SOx and NOx. The global scope of the climate-change problem justifies that restrictions be implemented at a federal rather than at a state level, and it ensures that the restrictions eventually will be extended to facilitate a global emissions-trading system.</p>
<p>Although the U.S. federal government has not yet taken steps to limit CO<sub>2</sub> emissions, many states have taken the initiative to develop their own restrictions.<sup>1</sup> In September 2006, California passed legislation that would reduce its current CO<sub>2</sub> emission levels by 25 percent by the year 2020, bringing the state’s emissions to 1990 levels. The legislation also mandates a reduction of CO<sub>2</sub> emissions to 80 percent below 1990 levels by 2050. To assist in this effort, California plans to participate in a cap-and-trade plan already in development with seven Northeastern states.<sup>2</sup></p>
<p>With multiple other states considering similar measures, it is easy to assume that federal CO<sub>2</sub> emission restrictions are no longer a question of “if” but “when.” These restrictions can be enacted in the following ways: 1) cap-and-trade; 2) carbon tax; 3) subsidies, grants or tax incentives; and 4) a combination of approaches.</p>
<h4>How Cap-and-Trade Works</h4>
<p>Cap-and-trade sets an annualized emissions limit over specific geographies and industries. Within this limit, or “cap,” firms would be able to sell or trade the rights to these emissions. This program imposes the lowest cost for a given cap since the industry is able to pursue the lowest-cost abatement options. An obvious benefit of this approach is that the emissions reductions are known with certainty. Credits also can be distributed through auction to offset the implementation costs associated with such a program. With a cap-and-trade program, regulators can set a ceiling price for credits based on the penalties imposed for over-emitters. This ensures that the cost of compliance under a cap-and-trade system would not become excessive.</p>
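<p>The least-cost property can be illustrated with a toy market. The firms, marginal abatement costs, and cap below are entirely hypothetical; the point is that under trading, the cheapest reductions clear first, wherever they sit, and the allowance price settles at the marginal cost of the last ton abated.</p>

```python
# Toy cap-and-trade clearing: firms with different marginal abatement costs
# meet an aggregate reduction at minimum total cost. All numbers are
# hypothetical, chosen only to illustrate the mechanism.
firms = {            # name: ($/ton marginal abatement cost, tons it can abate)
    "A": (12, 40),
    "B": (25, 30),
    "C": (60, 50),
}
required_cut = 60    # tons the cap removes from the aggregate baseline

remaining, total_cost, price = required_cut, 0, 0
# Abate the cheapest tons first, which is what trading accomplishes
for name, (cost, capacity) in sorted(firms.items(), key=lambda f: f[1][0]):
    abated = min(capacity, remaining)
    total_cost += abated * cost
    remaining -= abated
    if abated:
        price = cost  # clearing price = marginal cost of the last ton abated
    if remaining == 0:
        break

print(total_cost, price)
```

For comparison, forcing each firm in this toy example to cut 20 tons apiece would cost $1,940 instead of $980, which is the efficiency argument for letting firms trade.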
<p>Opponents of a cap-and-trade system say that it would set a limit on annual emissions improvement and therefore give firms an incentive to do only the bare minimum. A strict program could result in costs higher than the actual benefits intended, while a cap-and-trade system that includes a “safety valve” would not guarantee compliance.</p>
<p>A cap-and-trade program can be implemented from either of two different approaches—upstream and downstream trading. Upstream trading programs are implemented at the point where carbon enters the economy (<i>i.e.</i>, fossil-fuel importers and producers). Since fossil-fuel production is concentrated among relatively few players, implementation would be relatively easy. This scenario can be seen as most beneficial since it would place a cap on all potential carbon emissions. Opponents to upstream trading may point out that the impact is too far from the consumer, reducing exposure and incentives to reduce end-use emissions.</p>
<p>Downstream trading focuses on carbon emissions from end users (at the point of combustion). This type of program could be seen as beneficial because it is closest to the consumer and would therefore have the greatest exposure and impact. The difficulty in running such a program comes from the size and diversity of end users.</p>
<p>Initial downstream programs would regulate only the largest carbon emitters—electric utilities. This approach mirrors the system implemented in the European Union, as well as the U.S. SOx and NOx program discussed above. Given that the electric utilities market already is comfortable with SOx/NOx trading programs, implementation of a carbon-trading scheme would be relatively easy. The main concern with this approach is that, alone, it addresses only roughly 40 percent of total current carbon emissions.</p>
<h4>The Government Option: Carbon Tax, Subsidies, or Grants</h4>
<p>Another form of emissions control, a carbon tax, would levy a charge on each ton of CO<sub>2</sub> produced. This tax ideally would affect the price of all goods and services associated with CO<sub>2</sub> production. Proponents of this approach say that “a carbon tax would motivate consumers to control emissions up to the point where the cost of doing so was equal to the tax.” In this manner, the tax (cost) can be set to equal the damage created by CO<sub>2</sub>.</p>
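<p>The quoted logic can be made concrete with a small sketch: facing a fixed tax per ton, a firm abates every ton whose marginal cost is below the tax and pays the tax on the rest. The cost curve below is hypothetical, used only to illustrate the stopping rule.</p>

```python
# Hypothetical marginal-abatement-cost curve for one firm: a firm abates
# while abating a ton is cheaper than paying the tax on it.
tax = 30                               # $/ton carbon tax (assumed)
marginal_costs = [5, 12, 22, 35, 50]   # $/ton for successive tons abated
baseline_tons = len(marginal_costs)

abated = [c for c in marginal_costs if c < tax]    # tons cheaper to abate
bill = sum(abated) + tax * (baseline_tons - len(abated))

print(len(abated), bill)
```

Here the firm abates the three tons costing less than $30 and pays the tax on the remaining two, stopping exactly where marginal abatement cost crosses the tax rate.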
<p>Opponents of a carbon tax point out that it does not guarantee that the intended target of emissions reduction would be met. A further issue is what to do with the tax dollars generated by the carbon tax. While some propose to return them to those most affected by the tax through income and corporate tax breaks, doing so would partly negate the incentive to reduce emissions in the first place.</p>
<p>Subsidies, grants, and tax incentives would be seen as an expansion of the current U.S. legislative approach to controlling CO<sub>2</sub>. Additional incentives could be created to promote the development of less carbon-intensive technologies. Given that the rate of CO<sub>2</sub> emissions has increased steadily despite the existence of similar subsidies, this approach, in isolation, will not reduce the use of carbon-intensive fossil-fuel technologies.</p>
<p>Finally, the most politically acceptable and effective method of reducing emissions may combine all three approaches described here. The imposition of a cap-and-trade scheme is practically a certainty for the utilities industry, while a carbon tax may be most appropriate for the transportation sector. Additionally, the federal government likely will use any income from these programs to promote development of carbon-efficient technologies through subsidies and various incentives.</p>
<h4>Creating an Emissions Strategy</h4>
<p>Energy producers should take steps to prepare for the coming U.S. cap-and-trade system. Specific steps can be taken now to position a company to minimize the inevitable business expense and disruption that will be caused by the implementation of CO<sub>2</sub> emission caps.</p>
<p>Companies that will prosper in this new environment should have in place, prior to the implementation of an emissions trading system (ETS):</p>
<p>1. Mechanisms to influence regulatory and legislative actions;</p>
<p>2. CO<sub>2</sub> monitoring and reporting processes;</p>
<p>3. Strategic alliances;</p>
<p>4. Internal and external trading capabilities;</p>
<p>5. Pricing carbon emission credits (CECs) and abatement options as part of an acquisition strategy;</p>
<p>6. Systems for accessing and understanding abatement costs at each facility;</p>
<p>7. CEC price-dependent plans of action;</p>
<p>8. Technology and sequestration assessments;</p>
<p>9. Fuel-mix assessments; and</p>
<p>10. Reputational positioning plans.</p>
<p>Each company should strive to shape a regulatory framework that creates a competitive advantage. Strict and broad federal requirements will not necessarily place an undue burden on any individual player. They may, however, put the industry as a whole at a competitive disadvantage through substitution by industries and foreign competitors not covered by the cap-and-trade system. While foreign competition to U.S.-based electric generators is minimal, electric-power generators will be affected by decreased demand as their customers switch from electricity to other forms of energy not covered by a cap-and-trade system. This could mean a move from electric heating to natural gas, or a move by industrial customers to build small generating stations not covered by the cap-and-trade system.</p>
<p>It is in the best interest of all electricity producers to lobby for a broad program that covers multiple industries and a wide geography. Additionally, there are numerous specific regulatory factors that will determine the industry winners and losers. The early winners will be the companies best able to shape regulations, recognizing that the industry’s initial position and reaction will influence the structure and implementation of an expanded program.</p>
<p>Moreover, the implementation of any cap-and-trade system will require that point-source emissions be measured according to specified guidelines. In addition to these monitoring requirements, businesses likely will face stringent reporting requirements. Businesses should design, in advance, processes to ensure compliance by meeting monitoring and reporting requirements.</p>
<p>CO<sub>2</sub> emissions from power plants probably will need to be monitored by continuous emission monitors and reported to the U.S. Environmental Protection Agency (EPA) under guidelines similar to those specified under Title IV of the 1990 Clean Air Act Amendments. While this is a fairly straightforward process and the technology is readily available, it may require extensive industry-wide retrofitting that significantly could disrupt ongoing operations. Alternatively, emissions may be self-reported and independently audited, similar to the system adopted by the Chicago Climate Exchange, where emissions are audited independently by the National Association of Securities Dealers.</p>
<p>Implementing these systems early will reduce the risk of violations and will allow for exploitation of trading opportunities. By keeping ahead of the learning curve, businesses can avoid significant startup costs as all industry players compete for limited technology and expertise in redesigning their business processes and activities.</p>
<h4>Strategic Alliances</h4>
<p>Companies need to begin forming strategic alliances that can provide expertise and offsets under the new program. By forming alliances, companies can expand their possibilities greatly for cost-effectively implementing abatement. These alliances should include large contributors to CO<sub>2</sub> emissions (heavy manufacturing, process industries, and car companies), companies within the supply chain (coal producers and transport providers), and technology providers. Alliances also could include companies that have the potential to create offsets, such as those within the agricultural and forestry industries. Additionally, alliances could pave the way for more cost-effective trading, and could provide greater leverage in shaping regulations.</p>
<p>Industry players also must designate a group internally or expand their current emission-allowance trading group to conduct CEC trading. This same group should have the capability to trade credits externally, or it should designate an external broker with whom to partner. In preparing for regulations, creating an internal trading system often is the first and most effective way to gain experience with trading and to understand internal abatement options.<sup>3</sup> Forming such internal groups now will give a company the experience it needs in developing a functioning trading capability to take advantage of all internal and external trading options.</p>
<p>In addition, as the ETS comes into force and the price of CECs increases, operators must consider costs of abatement both internally and externally. CECs and cheap abatement options could play an important role as part of an overall acquisition strategy. For example, a carbon-intensive plant can be acquired, cheap abatement (or even closure) instituted, and the excess credits traded to another plant or sold on the open market. Savvy companies can use this mechanism to reduce otherwise significant abatement costs at existing plants.</p>
<p>Meanwhile, companies must have a detailed understanding of the CO<sub>2</sub> abatement costs at each facility. Under the cap-and-trade system, a company will benefit greatly by knowing where its abatement investment will have the most impact. Currently, cost-effective options for CO<sub>2</sub> abatement are rare, as they require changing fuel inputs, reducing output, or capturing and sequestering CO<sub>2</sub>. One tactic is to make plants “capture ready” or flex-fuel, essentially buying the option to more cost-effectively reduce CO<sub>2</sub> output in the future. Implementing a carbon capture and sequestration system has been shown to be more cost effective with integrated gasification combined cycle (IGCC) than with pulverized coal (PC).<sup>4</sup> Developing IGCC technology has been a core strategy of AEP in preparing for expected regulations.<sup>5</sup></p>
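The facility-level cost knowledge described above lends itself to a simple merit-order calculation: to meet a company-wide reduction target at least cost, abate at the cheapest plants first. The plant names, abatable volumes, and dollar figures below are hypothetical assumptions for illustration only.

```python
# Illustrative sketch (hypothetical fleet and costs): meet a
# company-wide reduction target by abating at the cheapest plants first.

def least_cost_plan(facilities, target_tons):
    """facilities: list of (name, abatable_tons, cost_per_ton).
    Returns (name, tons_abated, total_cost) tuples, cheapest first."""
    plan = []
    remaining = target_tons
    for name, tons, cost in sorted(facilities, key=lambda f: f[2]):
        take = min(tons, remaining)
        if take > 0:
            plan.append((name, take, take * cost))
            remaining -= take
    return plan

fleet = [  # name, abatable tons/yr, $/ton (all hypothetical)
    ("Plant A (PC coal)", 50_000, 45.0),
    ("Plant B (IGCC)",    80_000, 25.0),
    ("Plant C (gas CC)",  30_000, 60.0),
]
for name, tons, cost in least_cost_plan(fleet, 100_000):
    print(f"{name}: abate {tons:,} t for ${cost:,.0f}")
```

Here a 100,000-ton target is met entirely at the two cheapest plants; the expensive gas unit is left untouched. This is the logic behind knowing "where its abatement investment will have the most impact."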
<h4>The Cost of Compliance</h4>
<p>The following factors will determine the magnitude of cost imposed by the CO<sub>2</sub> cap-and-trade system on individual businesses:</p>
<p>1. Allocation vs. Auction;</p>
<p>2. Allocation Mechanism and Baseline;</p>
<p>3. Monitoring and Reporting Regulations;</p>
<p>4. Cost Pass-through;</p>
<p>5. Scope;</p>
<p>6. Offsets and Links to Other ETS Programs and Geographies; and</p>
<p>7. Top-level Cap and Reduction Timeline.</p>
<p>The most important driver of cost for holders of CECs will be the choice between allocation and auction. Auctioning credits is the most economically efficient way to distribute initial CECs. Auctioning 100 percent of CECs would:</p>
<p>1. Avoid the need to set a baseline allocation mechanism, thus avoiding the possibility of lobbying/favor seeking;</p>
<p>2. Provide a windfall to the issuing government agency; and</p>
<p>3. Require significant expenditure on the part of industries covered by the ETS.</p>
<p>The most likely allocation mechanism for existing sources will combine free allocation according to a set baseline with an auction of possibly 10 to 20 percent of CECs. Additionally, a policy probably will be set for allocating CECs for new sources. This allocation mechanism is probable due to the precedent set by the U.S. SOx/NOx and the EU ETS allocation mechanisms.<sup>6 </sup>These two mechanisms were designed to balance efficacy with political and regulatory support, and may act as templates for a future U.S.-based ETS.</p>
<p>Second, recognizing that the initial U.S. ETS may allocate the majority of the CECs, the choice of a baseline for allocating CECs largely will determine winners and losers. In any trading scheme, picking a baseline—the point from which emissions increases and reductions are measured—is controversial. The goal of energy producers will be not only to resist business disruption and high cost, but to fare better than direct competitors.</p>
<p>In choosing a baseline, regulators need to balance the needs of:</p>
<p>1. Rewarding CO<sub>2</sub> efficiency improvement; and</p>
<p>2. Rewarding CO<sub>2</sub> efficient producers.</p>
<p>In many ways, these needs are opposing. Should benchmarking be a driver towards CO<sub>2</sub> efficiency or a compensation for early action?<sup>7</sup> If the goal of CO<sub>2</sub> emissions restrictions is to reduce the carbon intensity of electricity production, the first need should not be considered. However, ignoring improvement when setting a baseline is unrealistic, since carbon intensity differs drastically across plants and producers. Balancing both needs when setting a baseline recognizes that the initial U.S. ETS will be a transitory program that will, in the long term, lead to shutting down carbon-inefficient methods of power production.<sup>8</sup></p>
<p>Setting the allocation mechanism for a U.S.-based ETS will have major strategic implications for regulated companies. Companies first must determine the baseline that will position them to gain maximum economic value as compared with competitors. Each company then will lobby for the baseline that is most favorable for its current and planned operations. We expect the power-generation industry to divide along fuel-mix lines, primarily between heavy coal users and light coal users. Heavy coal users, like AEP and Duke, will lobby for an “as is” allocation based on historic emission profiles. This means that coal plants will get significantly more CECs/MW output than would other types of plants. Light/non-coal users, such as PG&amp;E, will lobby for an “efficiency” allocation based on CO<sub>2</sub> efficiency (CO<sub>2</sub>/MW). This means that operators of coal plants may not be allocated their required number of CECs, and may have to buy CECs from operators that are allocated more credits than they need.</p>
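The stakes in that lobbying fight can be made concrete with a small, purely hypothetical two-plant example. Under an “as is” baseline, CECs are allocated in proportion to historic emissions; under an “efficiency” baseline, in proportion to output. All numbers below are assumptions for illustration, not actual plant data.

```python
# Hypothetical two-plant example of the two baselines described above.
# "As is": CECs proportional to historic emissions (favors coal).
# "Efficiency": CECs proportional to MWh output (favors low-carbon
# producers). All figures are illustrative.

plants = {               # historic emissions (t CO2), output (MWh)
    "coal": {"tons": 900_000, "mwh": 1_000_000},
    "gas":  {"tons": 400_000, "mwh": 1_000_000},
}
cap = 1_000_000          # total CECs issued under the program cap

total_tons = sum(p["tons"] for p in plants.values())
total_mwh = sum(p["mwh"] for p in plants.values())

for name, p in plants.items():
    as_is = cap * p["tons"] / total_tons      # grandfathered on emissions
    efficiency = cap * p["mwh"] / total_mwh   # benchmarked on output
    print(name, round(as_is), round(efficiency))
```

With equal output, the coal plant receives roughly 69 percent of the CECs under the “as is” baseline but only 50 percent under the “efficiency” baseline, which is why the two camps will lobby so hard over the choice.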
<p>Furthermore, companies that already have made significant investments in CO<sub>2</sub> abatement technology (including fuel switching and plant shutdowns) will lobby to get credit for past “investments.” Regulators will need to signal that they will consider past investment in abatement when determining an allocation mechanism. Failing to consider past investment would cause companies to delay abatement options until the allocation mechanism has been set. We expect that regulators also will be sensitive to the significant amount of money utilities already have spent on SOx and NOx abatement to meet the Phase II reductions mandated in the 1990 Clean Air Act. The cost of CO<sub>2</sub> cap-and-trade will be borne by the same companies that faced the greatest costs under the SOx and NOx programs. Regulators probably will account for this by giving companies a longer timeframe to come into compliance.</p>
<p>Another issue is how to allocate CECs to new plants. Since existing plants essentially will be given CECs, new facilities not allocated CECs will be at a significant cost disadvantage. Not allocating CECs to new plants would be a substantial barrier to entry and would keep newer, more energy-efficient plants from being built. For this reason, it is expected that new plants will be allocated, at no cost, at least a portion of their required emissions credits.</p>
<h4>Monitoring and Reporting Regulations</h4>
<p>Regulators must implement a system for compliance monitoring. The system must be sufficiently robust that all participants have confidence in regulators' ability to ensure compliance, manage data, and punish violators. These interests must be balanced against the cost of installing expensive and disruptive monitoring equipment. This becomes especially important as the ETS expands to cover smaller emitters, although at this stage it is unclear how small emitters will be monitored.<sup>9</sup></p>
<p>In addition, from an industry-wide perspective, implementation of any ETS scheme will act to increase production costs. To sustain profits, producers must have some ability to pass-through costs to customers. This is especially important for carbon-intensive producers. In this highly regulated industry, producers must have confidence that production costs associated with meeting ETS requirements will be understood by local regulators, who have the power to deny end-user price increases. It is ironic that the first phase of a U.S. ETS certainly will cover large fixed-point sources because “the energy sector has been considered to have the best possibilities to pass on costs for the allowances to the consumers and hence the allocation to this sector is often more restricted than the allocation to other sectors.”<sup>10</sup></p>
<p>The U.S. ETS will, like the EU ETS, initially cover large, fixed-point sources. While these sources may account only for a small proportion of overall CO<sub>2</sub> emissions, they will be the initial targets because they are best suited to monitoring and abatement. It is in the interest of electricity producers to advocate for as wide a net as possible, so that the economy as a whole helps to pay the price for CO<sub>2</sub> reductions, and so that electricity producers have a deeper trading market.</p>
<h4>A Global Plan and Timeline</h4>
<p>Reducing atmospheric CO<sub>2</sub> concentrations will require a worldwide, centuries-long effort. For a U.S. ETS to have any effect on long-term CO<sub>2</sub> concentrations, it will have to act as a first step in implementing a wider regime that covers all geographies and industries. From this standpoint, a reduction in CO<sub>2</sub> emissions is just as valuable regardless of where it occurs. This gives regulators incentives to link worldwide ETS, so that resources can be spent where they will have the most impact.</p>
<p>The ability to gain credits through offsets will help to limit the costs of abatement. By linking a U.S. ETS to the Kyoto Joint Implementation and CDM programs, companies greatly will expand their options for gaining emission credits. Linking the U.S. ETS to other programs and offering offsets greatly would increase support for an ETS since other players, such as agricultural companies, renewable energy producers, and brokers, would benefit.</p>
<p>At this time, it is unclear how the United States will set the top-level cap and timeline for emissions reductions. Many other industrialized countries have settled on the Kyoto goals for reducing CO<sub>2</sub> emissions to 1990 levels by 2012. While this has helped to set a baseline and a goal, this expectation is unrealistic. U.S. regulators will have to choose a baseline that leads to significant CO<sub>2</sub> reductions while also rewarding recent actions by producers to reduce emissions. It therefore should be expected that the baseline will be set at a level determined by historical emissions at least 10 years in the past.</p>
<p>The top-level timeline for reduction will be a large factor in determining the overall cost to the industry and economy for meeting the new cap, but will have much less effect on determining individual winners and losers. While it will be the producers who face the initial business disruption and costs, it is ultimately the energy users and the consumers of their products who will face price increases. From this perspective it is easy to understand the strong and unified resistance to carbon emissions restrictions from energy-intensive industries.<sup>11</sup></p>
<h4>Endnotes:</h4>
<p>1. See Appendix - State-Level Programs Addressing CO<sub>2</sub> Emissions Reductions.</p>
<p>2. See Appendix - Regional Greenhouse Gas Initiative (RGGI).</p>
<p>3. See article sidebar, “How Coal-Dependent Utilities Will Stay Clean.”</p>
<p>4. Ram C. Sekar, John E. Parsons, Howard J. Herzog, and Henry D. Jacoby, “Future Carbon Regulations and Current Investments in Alternative Coal-Fired Power Plant Designs,” MIT Joint Program on the Science and Policy of Global Change, Report No. 129, December 2005.</p>
<p>5. See “<a href="http://www.fortnightly.com/fortnightly/2007/01/how-coal-dependent-utilities-will-stay-clean">How Coal-Dependent Utilities Will Stay Clean</a>.”</p>
<p>6. The allocation mechanisms for Phase I of the EU ETS were determined at a national level allowing significant space for experimentation. The allocation mechanisms for Phase II and later will be much more standardized across the EU based upon specific experiences within Phase I EU ETS participant countries.</p>
<p>7. Lars Zetterberg, IVL, “Benchmarking: Creating Incentives for Abatement?,” CEPS Task Force on Review of the EU ETS, May 24, 2005.</p>
<p>8. For a detailed discussion of the theory behind different allocations, see A. Denny Ellerman and Ian Sue Wing, “Absolute vs. Intensity-Based Emission Caps,” MIT Joint Program on the Science and Policy of Global Change, Report No. 100, July 2003.</p>
<p>9. Monitoring small emitters probably would require a hybrid scheme whereby emission factors are used along with inspection certificates, similar to the urban emissions inspection regimes now in place for automobiles. See also: A. Denny Ellerman, Henry D. Jacoby, and Martin B. Zimmerman, “Bringing Transportation into a Cap-and-Trade Regime,” MIT Joint Program on the Science and Policy of Global Change, Report No. 136, June 2006.</p>
<p>10. Lars Zetterberg, “Analysis of National Allocation Plans for the EU ETS,” IVL Swedish Environmental Research Institute, August 2004.</p>
<p>11. See: The Alliance of Energy Intensive Industries publications, “Contribution to the EU Energy Strategic Review: Urgent Measures are Required to Improve the Functioning of Electricity and Gas Markets,” Brussels, 22nd September 2006 and “4th Annual Workshop on Greenhouse Gas Emission Trading” 4th &amp; 5th October 2004.</p>
</div></div></div>
Mon, 01 Jan 2007 05:00:00 +0000

Europe: Picture of a Stalled Competitive Model
http://www.fortnightly.com/fortnightly/2005/02/europe-picture-stalled-competitive-model
<div class="field field-name-field-import-deck field-type-text-long field-label-inline clearfix"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Several hurdles remain to further liberalization and full competition in the electricity sector.</p>
</div></div></div><div class="field field-name-field-import-byline field-type-text-long field-label-inline clearfix"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Bridgett Neely &amp; A.J. Goulding</p>
</div></div></div><div class="field field-name-field-import-bio field-type-text-long field-label-inline clearfix"><div class="field-label">Author Bio:&nbsp;</div><div class="field-items"><div class="field-item even"><p><strong>Bridgett Neely</strong> is a senior consultant with London Economics International and <strong>A.J. Goulding</strong> is president of London Economics International, which has conducted more than three dozen projects analyzing valuations, strategies, market opportunities, and potential transactions in Western and Eastern Europe.</p>
<p>This article is an excerpt from <a href="http://www.londoneconomicspress.com" target="_blank">Press European Power Directory</a>, a recent publication by London Economics.</p>
</div></div></div><div class="field field-name-field-import-volume field-type-node-reference field-label-inline clearfix"><div class="field-label">Magazine Volume:&nbsp;</div><div class="field-items"><div class="field-item even">Fortnightly Magazine - February 2005</div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Two major trends can be observed in Europe's electricity sector. The first is the increasing importance of private-sector participation in a sector traditionally viewed as belonging to the state. In the 15 years since the UK started to privatize its electricity sector, there has been a complete about-face on this issue, with almost all European countries privatizing certain elements of their electricity sectors, whether through the introduction of private independent power producers (IPPs), the opening of state-owned companies to private investment, or the outright sale of energy assets. France, which has struggled politically with its ambition to open EdF's capital to private investors and has delayed this step in recent years due to fierce union opposition, is a notable exception. However, other countries, such as Greece and Italy, have succeeded in selling portions of their state-owned utilities (PPC and Enel, respectively) without undue public opposition. The Accession countries (which include the 10 countries that recently joined the EU as well as those that hope to do so in the next few years), with only a few exceptions, also are selling assets.</p>
<p>The second major trend in Europe is that of the massive amount of merger and acquisition (M&amp;A) activity across the continent. In the mid to late 1990s, major national European utilities started to buy assets and companies in other Western European countries, with the objective of becoming powerful regionwide actors. The winners of this phase were the German actors RWE, E.On, French EdF, Italian Enel, Spanish Endesa and Iberdrola, Finnish Fortum, and Swedish Vattenfall. The second phase of M&amp;A activity has been the attempt by these companies to position themselves in Eastern Europe in anticipation of EU Accession.</p>
<p>At the same time, several hurdles remain to further liberalization and full competition in the European electricity sector. First, the wave of M&amp;A activity has resulted in an oligopoly of large European utilities on the European continent, which ultimately could stymie the development of full competition. Second, the continued persistence of regional markets, such as the Scandinavian area, the Iberian Peninsula, and Southeastern Europe also prohibits the flourishing of full competition across the entire continent. Finally, concerns about the level of actual competition in these markets continue, due to difficulties that new entrants have in accessing and keeping customers in a profitable manner. These are all issues that the EU will continue to address as it refines its policies and attempts to drive forward the integration process.</p>
<h4><b>Europe's Resource Mix: A Mixed Bag</b></h4>
<p>In general, the structure and composition of the generation sector differs among most European countries, whereas the market design for wholesale trading, transmission, and distribution is becoming more similar across the region. The following four sections delineate our cross-European comparison: generation, wholesale markets, transmission and distribution, and retail supply. Several characteristics define the structure and status of the generation segment in a given country or region: the amount of installed capacity related to current peak demand and expected peak demand in coming years; the composition of fuel drivers and ongoing economic availability of that fuel; the age and state of plant; and, given Europe's ambitious Kyoto targets, the availability of renewable resources and policies for encouraging their use.</p>
<p>Sufficient generation capacity is a crucial part of energy security. In general, a reserve margin of 15 percent of operational installed capacity above expected peak demand is required to ensure a reasonable level of reliability. By this measure, markets that are currently in an overcapacity situation in Europe include Austria, France, Germany, Lithuania, and Slovakia. All of these countries have reserve margins well above 15 percent and are significant exporters. Countries currently suffering a lack of sufficient capacity include Greece, Hungary, Ireland, Italy, and Spain.</p>
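The reserve-margin test used above is a simple ratio of installed capacity over expected peak demand. A minimal sketch, with illustrative rather than actual country figures:

```python
# Reserve margin = (installed capacity - peak demand) / peak demand.
# A margin above roughly 15 percent is the reliability rule of thumb
# cited above; the figures below are illustrative, not country data.

def reserve_margin(installed_mw, peak_demand_mw):
    return (installed_mw - peak_demand_mw) / peak_demand_mw

systems = {"Country A": (120_000, 95_000), "Country B": (52_000, 50_000)}
for name, (cap, peak) in systems.items():
    margin = reserve_margin(cap, peak)
    status = "adequate" if margin >= 0.15 else "tight"
    print(f"{name}: {margin:.1%} ({status})")
```

By this measure a system with 120 GW of capacity against a 95 GW peak sits comfortably above the threshold, while one with 52 GW against a 50 GW peak is tight and likely import-dependent.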
<p>Supply-demand balances change with time, so load forecasts are crucial in determining sufficient levels of generation capacity. There are pockets of very low growth in Europe where anticipated load growth through 2010 is not expected to exceed 1 percent per annum. Countries in this category include Denmark, England and Wales, Germany, and Sweden.</p>
<p>At the other end of the spectrum is a group of high load-growth countries, where growth rates are expected to be consistently 4 percent or higher per year through 2010. These countries include Bulgaria, the Czech Republic, Greece, Italy, Lithuania, the Netherlands, Portugal, Slovenia, and Spain. These countries thus will have to not only replace the assets that need to be retired in the next 10 years, but will need to add significant new plant to meet increased load. Likewise, the supply context is likely to change as certain countries retire all of their nuclear capacity (Germany and Belgium), and Eastern Europe shuts down its more polluting plants and replaces the nuclear plants that the EU has stated are not safe.</p>
<p>Europe's fuel sources are diverse, with many energy-rich countries: Germany, Poland, and Greece (coal); Scandinavia, Latvia, and Austria (hydro); the UK, the Netherlands, and Romania (natural gas); Estonia (oil shale); and Russia and Norway (gas and oil). The presence of natural resources can sometimes further complicate electricity reforms due to state ownership of natural resources, high levels of employment provided by natural resource industries, and union support for these industries. Hydroelectric resources also can complicate electricity policy. Hydro capacity usually decreases in the summer when rivers and lakes are somewhat drier. In addition, having a disproportionate percentage of hydroelectric generating capacity can leave a country exposed to price spikes during periods of poor hydrology, as occurred in Scandinavia in the summer of 2003.</p>
<p>At the same time, other countries that have practically no natural energy resources, such as France, Belgium, and Lithuania, became early proponents of nuclear power in the 1970s and 1980s. But nuclear power has become a divisive issue in Europe, with several countries, such as Germany, Belgium, and Italy, politically opposed to its continued use, while others, such as France and Finland, are committed to it as a reliable and environmentally safe energy source.</p>
<p>The nuclear question is particularly provocative in Eastern Europe, where some of the power plants built during the Soviet era are in dubious condition, yet few alternate supply sources exist. The EU is particularly concerned about the older Soviet reactors and has explicitly stated that Lithuania's Ignalina, Bulgaria's Kozloduy, and Slovakia's Bohunice V1 should be closed. In addition, the EU wants reactors in Bulgaria, the Czech Republic, Hungary, and Slovakia to be upgraded, and the reactors in Romania and Slovenia to be closely monitored. Yet, given that these countries need alternate sources of supply and technical assistance before the plants can be closed or upgraded, timely compliance with these nuclear safety regulations will be challenging.<b><sup>1</sup></b></p>
<p>The encouragement of renewable energy sources, long a goal of the EU, was translated into law in 2001 through Directive 2001/77/EC, which targeted increased production of electricity from renewable energy (from 14 percent to 22 percent by 2010). The EU endorses three main types of support mechanisms for renewable energy sources (RES). The first is a fixed feed-in tariff for all renewable energy and a guarantee that all electricity generated by RES will be dispatched. Such an approach is used in Austria, France, Switzerland, and Ukraine. The second approach requires suppliers or customers to buy a certain amount of RES-generated electricity, which can be in the form of a "green certificate."<b><sup>2 </sup></b>If the required quota of renewable energy is not achieved, the supplier is fined. Italy, the Netherlands, and the UK use this approach. The third option is a direct subsidy from the government to cover a portion of either capital or operating costs. While the Accession countries are not as advanced in implementing RES policies (and are not subject to the same strict Kyoto Protocol targets as Western Europe), certain countries, such as the Czech Republic, Estonia, and Latvia, have started to implement active RES policies.</p>
<p><b>Wholesale Markets: Organized Exchanges Dominate</b></p>
<p>Under a deregulated wholesale market design, generators can sell their output into a wholesale market or to a specific customer, whether a power marketer (who re-sells it to others), a retailer (who re-sells the output to end-users), or a direct customer (who uses the electricity). Bilateral trading, also referred to as over-the-counter (OTC) trading, is the most common way for these sales to occur. In addition, many deregulated markets have established centralized exchanges to provide a transparent framework for energy sales and to create a benchmark price index for the region. While organized exchanges are usually more visible to the public, the volume traded on exchanges is normally a fraction of bilateral trading activity.</p>
<p>In Europe, OTC markets and organized exchanges are often used together to create a hybrid wholesale market. Western Europe has numerous organized exchanges. Nord Pool, the Nordic power exchange, is the oldest, the largest, and the most liquid exchange in the region, and it sets prices for all four Nordic countries (Denmark, Finland, Norway, and Sweden). In recent years, exchanges have developed in Germany (EEX), France (Powernext), Austria (EXAA), and the Netherlands (APX), leading to increased information about wholesale price indications in the region.</p>
<p>Most of these exchanges operate in a similar manner, with an auction system for price formation where the intersection of demand bids and supply bids for each hour sets the market clearing price for all winning bids. Some exchanges, such as Nordpool and Powernext, also offer clearing services to ensure the financial security of transactions. The exchanges have close ties to the grid operators to ensure physical delivery where needed. The products offered by the exchanges vary as the exchange matures. Thus, Nordpool has the largest variety of products and services, including standard day-ahead and futures contracts, as well as balancing products. EEX and Powernext both offer blocks of hours and futures contracts in addition to standard day-ahead hourly contracts.</p>
<p>The UK differs from its continental European neighbors in that it does not have an organized exchange and all trading activity is bilateral. In 2001, the UK regulator, Ofgem, replaced the UK Power Pool, which had existed for 10 years, with the New Electricity Trading Arrangements (NETA), a decentralized set of arrangements with a sophisticated structure for balancing generation and consumption and settling imbalances. In addition, National Grid (NGC), the UK's transmission system owner and operator, manages an optional balancing mechanism (BM) to manage real-time imbalances and transmission constraints. Regardless of whether participants are active in the BM, there is a compulsory imbalance settlement process for all market participants.</p>
<p>In addition, there are markets that do not currently provide realistic pricing indications, such as OMEL in Spain and many of the nascent exchanges emerging in Accession countries, such as the Czech Republic, Poland, and Slovenia. Spain's OMEL is a mandatory pool for all plants larger than 50 MW that are not contracted through bilateral contracts. However, OMEL contains an implicit price cap due to the mechanism for recovering stranded costs, which gives an unfair advantage to incumbent Spanish generators. Moreover, Spain continues to have regulated tariffs that are very competitive with market rates, resulting in most eligible consumers purchasing at regulated tariffs instead of buying through the pool. Pool prices are therefore not representative of real competitive market dynamics in Spain.<b><sup>3</sup></b> The exchanges in Eastern Europe are new institutions, are highly illiquid, and still reflect the lack of significant competition in the generation sector in most of these countries.</p>
<p><b>European Price Convergence: The Ins and Outs</b></p>
<p>Due to insufficient interconnection capacity in Europe, regional differences in wholesale power prices remain. However, prices in western continental Europe, which includes Austria, France, and Germany, are converging. The average baseload price in 2003 in the three countries ranged from €29.23/MWh to €30.63/MWh. The convergence in prices among these markets is due to the large amount of available interconnection capacity that allows arbitrage among the different exchanges, thereby leading to highly correlated prices.</p>
<p>Nordpool's prices, while providing clear indications for the four Scandinavian countries, are driven largely by hydrological conditions. Unfortunately, 2003 was a year of poor hydrological conditions, and the average price of almost €37/MWh was substantially higher than in the continental European region. Under normal conditions, prices in Scandinavia are lower than in continental Europe.</p>
<p>Likewise, prices in the Netherlands are still affected by relatively tighter supply-demand fundamentals and the country's constrained import ability, and thus differ substantially from those of its Nordic and continental neighbors. As would be expected, the Netherlands' average baseload price in 2003, at €46.47/MWh, was substantially higher than on the other exchanges.</p>
<p>In addition, there are markets for products other than just energy. For example, there is a market for generation capacity in Belgium, France, and Ireland. These Virtual Power Plant Auctions (VPPs) were first mandated in France by the European Commission in exchange for the acceptance of Electricité de France's (EdF) acquisition of Energie Baden Württemberg (EnBW) in 2001. A purchaser of a VPP contract obtains a right, but not an obligation, to the output of virtual capacity, which entitles the contract holder to issue dispatch instructions and receive electricity output on the following day for delivery on the high-voltage grid. Though the capacity is not associated with any particular generation unit operated by EdF, it does have certain traits that mimic baseload or peaking generation characteristics. EdF is obliged to auction off 6,000 MW of capacity (6 percent of EdF's total installed capacity) for five years to any operator wishing to procure electricity produced in France. Likewise, in Belgium in 2003, Electrabel started to auction off 1,200 MW of capacity to open up its generation sector to more competition. In Ireland, ESB is auctioning off 400 MW of virtual capacity to open its generation sector.</p>
<p>Meanwhile, another market exists in Europe for interconnection capacity between countries. While some capacity is reserved on the basis of "priority lists," such as between France and Belgium, Germany, and Spain, there are also public auctions of capacity, such as on the interconnector between the UK and France. In the latter case, capacity is allocated based on the price bid. This approach also is used in the Netherlands, which has very congested import capacity and auctions its capacity to and from Germany and Belgium in annual, monthly, and daily auctions. There are also auctions at the German-Danish and German-Czech borders. The price of interconnector capacity usually is determined as the differential between the average or expected market prices in the two countries, and, as such, is higher in the direction from the cheaper market toward the more expensive one. As an illustration, in the 2003 auction for annual capacity rights between Germany and the Netherlands, 356 MW of capacity was auctioned between Tennet and the RWE network in Germany for an average price per megawatt of €59,130 toward the Netherlands and €920 toward Germany.<b><sup>4</sup></b></p>
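<p>The valuation logic described above can be sketched as follows. The prices and the simple spread-times-hours calculation are hypothetical illustrations of the principle, not the actual auction formula:</p>

```python
HOURS_PER_YEAR = 8760

def capacity_value_per_mw(price_from, price_to, hours=HOURS_PER_YEAR):
    """Rough per-MW value of interconnector rights: worth something only
    when power flows from the cheaper market toward the dearer one."""
    return max(0.0, price_to - price_from) * hours

# Hypothetical average prices (EUR/MWh) for a cheap and a dear market
cheap, dear = 29.50, 46.47

capacity_value_per_mw(cheap, dear)  # large positive value toward the dear market
capacity_value_per_mw(dear, cheap)  # zero value in the reverse direction
```

<p>This mirrors the asymmetry seen in the German-Dutch auction results, where rights toward the higher-priced Netherlands fetched far more than rights toward Germany.</p>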
<p>Finally, there are markets for real-time balancing energy in the UK and Scandinavia. Germany and Austria are establishing real-time balancing markets. Other ancillary services are usually procured on a bilateral basis, though increasingly accomplished through a competitive tender process.</p>
<p><b>European Transmission: Truly Independent?</b></p>
<p>Following the EU directives, almost all European countries have created fully independent grid operators that, even if owned by the incumbent utility, are managed independently, with sufficient "Chinese walls" in place to ensure full transparency and fair access. The only country that still has not accomplished this is Latvia.</p>
<p>Most countries in Europe have one transmission system operator (TSO) responsible for operating the entire nation's grid. However, there are a few exceptions, including Germany, Austria, and Denmark, where transmission operators are only responsible for certain regions and must coordinate with one another for cross-regional flows. The ownership of TSOs varies. In some cases, the state owns the TSO, such as in the Czech Republic and Norway. In other cases, the TSO is separated officially from the incumbent utility but continues to be owned by it, such as in France, Germany, and Austria. The TSO also can be owned by a separate private company, such as in the UK.</p>
<p>Most countries have several distribution system operators, depending on the size of the country and the original structure of the market. Many European countries had a heritage of small municipal distributors, resulting in hundreds of local distribution companies. With liberalization, many of these distributors are being acquired by the larger actors in the market, as is clearly evidenced in Germany, where E.On and RWE directly and indirectly control 70 percent of the country's distribution networks through their myriad holdings in local Stadtwerke.</p>
<p>Other countries, such as Poland, are restructuring these small distributors into bigger companies by grouping together as many as 30 local distributors under one umbrella organization. Distributors are often ripe for privatization, especially in Eastern Europe, where privatizing distribution has been a way to improve cash collection and thereby improve the financial viability of the entire electricity sector. In other regions, such as in Austria, Germany, the Netherlands, Scandinavia, and Switzerland, significant municipal ownership is likely to continue in the distribution sector.</p>
<p>Most countries have adopted clearly published tariffs for transmission and distribution grid use. Germany, with its original approach of negotiated tariffs, was the major exception, but it has been obliged by the EU to develop a regulated tariff regime. Tariff-setting regimes vary widely in Europe, with some countries, such as France, using a standard cost-of-service regime, while others, such as Austria, Denmark, Finland, Ireland, Italy, Netherlands, Norway, Portugal, Spain, Sweden, and the UK use a more ambitious performance-based rate-making (PBR) approach that requires the grid operator to regularly improve its efficiency. Accession countries, such as Hungary, Latvia, Lithuania, Slovakia, and Slovenia, also have started to widely implement PBR regimes. Romania and Bulgaria both intend to launch PBR regimes in the near future.</p>
<p>While most countries in the region have sufficient transmission capacity internally, some interconnections between countries are still insufficient. There are three distinct grid regions in Europe: the UK grid region, the Scandinavian region (Nordel), and the continental European region, usually referred to by the acronym of the group that coordinates it, UCTE (Union for the Co-ordination of Transmission of Electricity). UCTE incorporates all of Western Europe and is starting to link up with the countries of Eastern Europe.</p>
<p>Italy, the Iberian Peninsula, Greece, and the Netherlands do not have sufficient import capacity, resulting in relatively higher electricity prices in those regions. However, according to the UCTE reliability assessment for 2004-2010, no significant reliability problems are anticipated in the short to mid term in any European grid zones. Moreover, the construction over the next two to five years of major interconnections across Europe will help to improve reliability and give the more isolated regions additional access to the continental European market.</p>
<p>The anticipated interconnection projects include: Belgian-French border, Belgian-Netherlands border, Spanish-Portuguese border, French-Spanish border, Swiss-Italian border, and a variety of additional connections in Southeastern Europe.</p>
<p><b>Electric Competition: Why the Incumbents Dominate</b></p>
<p>In Western Europe, where the percentage of end consumers eligible to choose their electricity supplier averages more than 75 percent, a multitude of retail suppliers has emerged. Some are the affiliates of the major European utilities, while others have a specific regional focus or a particular niche market, such as green energy. However, the number of suppliers with more than 5 percent market share is relatively low, and incumbent utilities continue to dominate.</p>
<p>The UK, which arguably has the most competitive supply sector in the region, has seven suppliers with more than 5 percent market share, while Norway has five. Spain, Italy, Belgium, and Austria have four, while the remaining countries have fewer. Only the UK, Sweden, and Finland have a significant number of foreign-owned suppliers, at 64, 40, and 21 percent, respectively. Among the Accession countries, Slovenia is the only country that stands out, with six suppliers holding more than 5 percent market share and 20 percent foreign ownership of suppliers.<b><sup>5</sup></b></p>
<p>Switching rates for large industrial customers in Western Europe in 2002 ranged from a low of 5 percent in Belgium to 45 percent in Denmark, with 20 percent being average. In the countries where residential and/or small commercial customers were free to choose their supplier in Western Europe, the rate of switching in 2002 ranged from 2 percent in Ireland to 14 percent in Norway. Scandinavia and the UK all had 10 percent and above.<b><sup>6</sup></b> In the Accession countries, there has been limited switching for large industrials, with only Hungary reporting more than 50 percent switching for this customer segment.<b><sup>7</sup></b></p>
<p>The impact of the competition is telling: Prices to end-consumers are steadily decreasing. In the original 15 EU members, there was an average decrease of 11 percent for industrial and 6 percent for residential consumers from 1995 through 2002. The Accession countries have seen the opposite trend, with prices increasing as governments reverse years of subsidized electricity and begin requiring consumers to pay the full cost of electricity.</p>
<p><b>Endnotes:</b></p>
<p>1. Richards, Mark and Georgia Quick, "Eastern Europe, Nuclear and EU Accession," Nuclear Engineering International, Oct. 31, 2002.</p>
<p>2. The Renewable Energy Certificate System grants a renewable energy certificate (REC) for each megawatt-hour of renewable production from qualifying renewable resources. Each REC is unique, which ultimately enables these certificates to be transferred from owner to owner before being used as proof of generation, or exchanged for financial support.</p>
<p>3. Plans are currently under way to create an Iberian market that will integrate Spain and Portugal. The market structure will closely resemble that of Spain. The power pool will be called MIBEL, and there will also be a bilateral market. However, this market was due to start operations in April 2004, and it is still not in place. No new starting date has been set.</p>
<p>4. TSO-Auction BV, subsidiary of TenneT (the organization that manages the auction of the interconnector capacity); All prices for Dutch interconnection auctions available at <a href="http://www.tso-auction.org">www.tso-auction.org</a>.</p>
<p>5. Commission of the European Communities, "DG TREN Draft Working Paper: Third Benchmarking, EU Report on status of the Internal Market in Electricity and Gas," Brussels, March 2004.</p>
<p>6. Ibid.</p>
<p>7. Ibid.</p>
</div></div></div><div class="field field-name-field-article-category field-type-taxonomy-term-reference field-label-above clearfix"><h3 class="field-label">Category (Actual): </h3><ul class="links"><li class="taxonomy-term-reference-0"><a href="/article-categories/strategy-planning">Strategy &amp; Planning</a></li></ul></div><div class="field field-name-field-members-only field-type-list-boolean field-label-above"><div class="field-label">Viewable to All?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-article-featured field-type-list-boolean field-label-above"><div class="field-label">Is Featured?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-fortnightly-40 field-type-list-boolean field-label-above"><div class="field-label">Is Fortnightly 40?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-law-lawyers field-type-list-boolean field-label-above"><div class="field-label">Is Law &amp; Lawyers:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix">
<div class="field-label">Tags:&nbsp;</div>
<div class="field-items">
<a href="/tags/benchmarking">Benchmarking</a><span class="pur_comma">, </span><a href="/tags/commission">Commission</a><span class="pur_comma">, </span><a href="/tags/dg">DG</a><span class="pur_comma">, </span><a href="/tags/eon">E.On</a><span class="pur_comma">, </span><a href="/tags/esb">ESB</a><span class="pur_comma">, </span><a href="/tags/hydro">Hydro</a><span class="pur_comma">, </span><a href="/tags/hydroelectric">Hydroelectric</a><span class="pur_comma">, </span><a href="/tags/ipp">IPP</a><span class="pur_comma">, </span><a href="/tags/ma">M&amp;A</a><span class="pur_comma">, </span><a href="/tags/national-grid">National Grid</a><span class="pur_comma">, </span><a href="/tags/nuclear">Nuclear</a><span class="pur_comma">, </span><a href="/tags/ot">OT</a><span class="pur_comma">, </span><a href="/tags/rec">REC</a><span class="pur_comma">, </span><a href="/tags/renewable">Renewable</a><span class="pur_comma">, </span><a href="/tags/renewable-energy">Renewable Energy</a><span class="pur_comma">, </span><a href="/tags/res">RES</a><span class="pur_comma">, </span><a href="/tags/transmission">Transmission</a> </div>
</div>
Tue, 01 Feb 2005 05:00:00 +0000puradmin10838 at http://www.fortnightly.comOperations & Maintenance: Who Has the Best Margin?http://www.fortnightly.com/fortnightly/2004/10/operations-maintenance-who-has-best-margin
<div class="field field-name-field-import-deck field-type-text-long field-label-inline clearfix"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Operations &amp; Maintenance</p>
</div></div></div><div class="field field-name-field-import-byline field-type-text-long field-label-inline clearfix"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Peter Manos</p>
</div></div></div><div class="field field-name-field-import-volume field-type-node-reference field-label-inline clearfix"><div class="field-label">Magazine Volume:&nbsp;</div><div class="field-items"><div class="field-item even">Fortnightly Magazine - October 2004</div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><blockquote><h3>Operations &amp; Maintenance</h3>
<h4>The process of calculating meaningful benchmarks is fraught with pitfalls. </h4>
<p>Regulatory reporting requirements for major U.S. utilities provide a wealth of data for benchmarking studies. Both the Federal Energy Regulatory Commission (FERC) Form 1 for electric utilities and FERC Form 2 for gas utilities involve the reporting of more than 2,500 unique data points per utility per year, across diverse aspects of utility operations, maintenance, and finance. </p>
<p>But the actual process of calculating benchmarks so that the results are meaningful is fraught with potential pitfalls, especially if we do not consider what is going on "physically" behind the numbers. This can be illustrated by the following simple example that compares the annual growth rate in operations and maintenance (O&amp;M) expenditures for two hypothetical utilities. Let's assume that these two utilities have the same size asset base and same size customer base, and they report their electric or gas O&amp;M figures as shown in Table 1. </p>
<p>The calculations indicate a huge disparity: Utility A looks very inefficient, with O&amp;M expense growing at an average annual rate of 25 percent during the period, while Utility B appears very efficient, with O&amp;M expense shrinking at an average annual rate of 25 percent over the same period. But in reality, these utilities spent the same $400 million on O&amp;M during the three years in question. </p>
<p>Let's say that these utilities repeated the same pattern every three years. As shown in the table below, our analysis would arrive at the opposite result (negative 25 percent instead of positive 25 percent) for Utility A's O&amp;M annual growth value if we just happened to take the 1999 to 2001 slice of the same data. </p>
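<p>Since Table 1 is not reproduced here, the following sketch uses hypothetical spending figures (each totaling the same $400 million) to show how the choice of three-year slice drives the computed growth rate:</p>

```python
def avg_annual_growth(spend):
    """Mean year-over-year growth rate of a spending series."""
    growths = [(b - a) / a for a, b in zip(spend, spend[1:])]
    return sum(growths) / len(growths)

utility_a = [100, 150, 150]  # $M per year, hypothetical lumpy pattern
utility_b = [200, 100, 100]  # same $400M total over the window

avg_annual_growth(utility_a)  # +0.25: A looks inefficient
avg_annual_growth(utility_b)  # -0.25: B looks efficient

# If A repeats its pattern every three years, a later three-year slice
# of the very same behavior yields a negative growth rate instead:
later_slice = (utility_a * 2)[1:4]  # [150, 150, 100]
avg_annual_growth(later_slice)      # negative: the ranking flips
```

<p>Identical total spending thus produces opposite-looking growth benchmarks depending purely on where the reporting window falls.</p>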
<p>What is going on "physically" here? Although this example is intentionally exaggerated, it demonstrates a true source of error in this type of analysis, stemming from the fact that a considerable portion of O&amp;M costs do not occur in neat annual bundles. Major, planned plant outages, tree-trimming projects, and other O&amp;M tasks often occur in 18- to 24-month intervals rather than annual intervals. </p>
<p>To weed out this effect, W.B. Causey's benchmarking work employs multi-year running averages of certain costs to eliminate false skew due to O&amp;M timing differences between utilities.</p>
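<p>A minimal version of that smoothing, again with hypothetical figures, shows how a multi-year running average neutralizes timing lumps:</p>

```python
def running_average(series, window=3):
    """Multi-year running average that damps lumpy O&M timing effects."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Hypothetical spend repeating on a three-year cycle ($M)
lumpy = [100, 150, 150, 100, 150, 150]
running_average(lumpy)  # every 3-year window totals $400M, so all values equal
```

<p>After smoothing, utilities with identical multi-year spending but different outage timing land on the same benchmark value.</p>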
<p>As a side note, utilities are required to submit the full Form 1 to FERC if they have met any one of the following criteria in the prior three calendar years:</p>
<ul>
<li>1,000,000 MWh total annual sales;</li>
<li>100 MWh of annual sales for resale;</li>
<li>500 MWh of annual power exchanges delivered; and</li>
<li>500 MWh of annual wheeling for others.</li>
</ul>
<p>Multi-state utilities still report separately at the statewide subsidiary (pre-merger) level. As a result, we performed a separate amalgamation of holding company data to benchmark major holding companies against one another. A benefit of this amalgamation is that it allows for a meaningful comparison of general and administrative (G&amp;A) expenses. When comparing G&amp;A expenses between subsidiaries, it is not easy to separate a subsidiary's real G&amp;A from allocated G&amp;A, which comes down by fiat from the holding company, and which may not be distributed proportionately between the holding company's subsidiaries. In effect, once the G&amp;A snapshot is taken for the holding company instead of the subsidiaries, this potential source of skew is eliminated.</p>
<p>We performed benchmarking analyses specifically for gas and electric transmission and distribution business units, as well as separate studies for power generation. One of our most recent benchmarking analyses focused on electric distribution operations and maintenance expenditures. It compares the performance of all major U.S. electric utilities based on FERC Form 1 data. For this analysis, we selected 265 data points per utility per year, from the following major FERC schedules:</p>
<ol>
<li>Cash Flow, Balance Sheet &amp; Income Statement</li>
<li>Electric Operations &amp; Maintenance Expenses (broken out by business unit)</li>
<li>Electric Operating Revenue (including customer base data)</li>
<li>Material Supply</li>
<li>Number of Electric Department Employees</li>
<li>Plant in Service</li>
<li>Salary &amp; Wages Distribution</li>
<li>Electric Meters and Line Transformers</li>
</ol>
<p>The methodology has evolved extensively over several years, based on feedback from utility executives. We have found that two basic methods need to be employed, both with the type of multi-year data smoothing (averaging) discussed above:</p>
<p><b>Method 1:</b> "Ratio" Benchmarking:</p>
<ul>
<li>For each O&amp;M item (and inventory item) of interest, benchmark to the business unit's related asset; for example, specific distribution O&amp;M costs are divided by the corresponding distribution asset values to create ratios for comparison purposes.</li>
</ul>
<p><b>Method 2:</b> Growth Rate "Self" Benchmarking:</p>
<ul>
<li>Take the delta between the percent annual growth of the expenditure and the percent annual growth of the related asset; and</li>
<li>Compare this delta to the same figure for the other utilities.</li>
</ul>
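<p>The two methods can be sketched as follows; the dollar figures are hypothetical, and the actual study's asset mappings come from the FERC schedules listed earlier:</p>

```python
def growth(series):
    """Year-over-year growth rates."""
    return [(b - a) / a for a, b in zip(series, series[1:])]

def ratio_benchmark(om, assets):
    """Method 1: O&M spend per dollar of related plant, year by year."""
    return [cost / asset for cost, asset in zip(om, assets)]

def self_benchmark(om, assets):
    """Method 2: expenditure growth minus related-asset growth."""
    return [g_om - g_asset
            for g_om, g_asset in zip(growth(om), growth(assets))]

om = [40.0, 44.0, 50.0]            # $M distribution O&M (hypothetical)
assets = [1000.0, 1050.0, 1100.0]  # $M distribution plant in service

ratio_benchmark(om, assets)  # O&M per asset dollar each year
self_benchmark(om, assets)   # positive: spend outgrowing the asset base
```

<p>Because Method 2 compares each utility only against its own asset base before ranking, differences in regional labor rates or infrastructure age drop out of the comparison.</p>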
<p>Although these two methods generally yield similar relative rankings, Method 2 avoids the introduction of skew in the results when comparing very different utilities-such as a utility in the Northeast with an older infrastructure and higher labor costs versus a younger utility in a region with lower labor rates-because Method 2 compares utilities based upon "self benchmarks": Is your expenditure growing faster than your related asset base, and how does your self-benchmark compare to all of the other utilities?</p>
<p>It is of interest to look at how different utilities' benchmark standings improved after implementation of major software systems. Aside from direct O&amp;M costs, benefits also are evident with regard to inventory levels, salary and wages, and other items.</p>
<p>As one example, our recent study examined distribution-related O&amp;M costs within transmission and distribution (T&amp;D) for 176 major electric utilities that submitted Form 1 data during the past 10 years. This particular study focused on seven separate benchmarks for these utilities:</p>
<ol>
<li>Distribution-O&amp;M supervision and engineering</li>
<li>Distribution-O&amp;M on overhead lines</li>
<li>Distribution-O&amp;M on underground lines</li>
<li>Distribution-Total salary and wages</li>
<li>Total distribution operations expense</li>
<li>Total distribution maintenance expense</li>
<li>Distribution materials &amp; supplies</li>
</ol>
<p>Each of the seven O&amp;M expenditure areas was normalized to its related asset. In determining how best to make these calculations, we undertook extensive statistical analysis to ensure that the correlations between the O&amp;M data item and its corresponding benchmark asset were high. In addition, extensive checking of the input data was necessary to avoid introducing errors (e.g., from missing data points in certain years, or changes due to M&amp;A activities). </p>
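<p>The correlation screen described above amounts to computing a Pearson coefficient for each candidate O&amp;M-to-asset pairing; this sketch uses hypothetical series, not the study's data:</p>

```python
def pearson(x, y):
    """Pearson correlation between an O&M series and a candidate asset series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical: overhead-line O&M tracks overhead plant in service closely
overhead_om = [10.0, 11.0, 12.5, 13.0]       # $M per year
overhead_plant = [250.0, 270.0, 305.0, 320.0]  # $M plant in service

pearson(overhead_om, overhead_plant)  # near 1.0 -> a sound benchmark pairing
```

<p>A pairing with a weak coefficient would signal that the chosen asset is a poor normalizer for that expense item.</p>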
<p>We checked the resulting final methods for good year-to-year smoothness in the resulting benchmark standings for each utility. This required statistical analysis of all the utilities to uncover any outliers that had abnormal differences in the relative sizes of their yearly results. Large jumps, which should not occur, were investigated. </p>
<p>In some cases physical or business reasons were found for significant differences. As one example, when performing the same analysis for transmission and distribution together, as opposed to distribution alone, we found the inventory levels as a percent of assets were much higher for Georgia Power than its sister subsidiary, Alabama Power. It turns out that Georgia Power has long-term agreements in place with Oglethorpe for O&amp;M activities, and it holds T&amp;D equipment inventory for Oglethorpe to support that contract, thereby creating higher than expected inventory levels as a percent of Georgia Power's T&amp;D assets. </p>
<p>The detail level of the data we employed is well indicated by the following lists of elements within the selected O&amp;M FERC schedules.</p>
<p><b>Distribution Plant Data</b></p>
<ul>
<li>Structures &amp; Improvements-Additions</li>
<li>Structures &amp; Improvements-Retirements</li>
<li>Structures &amp; Improvements-Total at end of year</li>
<li>Station Equipment-Additions</li>
<li>Station Equipment-Retirements</li>
<li>Station Equipment-Total at end of year</li>
<li>Overhead Conductors &amp; Devices-Additions</li>
<li>Overhead Conductors &amp; Devices-Retirements</li>
<li>Overhead Conductors &amp; Devices-Total at end of year</li>
<li>Underground Conduit-Additions</li>
<li>Underground Conduit-Retirements</li>
<li>Underground Conduit-Total at end of year</li>
<li>Underground Conductors &amp; Devices-Additions</li>
<li>Underground Conductors &amp; Devices-Retirements</li>
<li>Underground Conductors &amp; Devices-Total at end of year</li>
<li>Line Transformers-Additions / Retirements / Year-end total</li>
<li>Services-Additions / Retirements / Year-end total</li>
<li>Meters-Additions / Retirements / Year-end total</li>
<li>Installations on Customer Premises-Additions / Retirements / Year-end total</li>
<li>Street Lighting &amp; Signal Systems-Additions</li>
<li>Street Lighting &amp; Signal Systems-Retirements</li>
<li>Street Lighting &amp; Signal Systems-Total at end of year</li>
</ul>
<p><b>Distribution Operations Expenses:</b></p>
<ul>
<li>Operation Supervision &amp; Engineering</li>
<li>Load Dispatching</li>
<li>Station Expenses</li>
<li>Overhead Line Expenses</li>
<li>Underground Line Expenses</li>
<li>Street Lighting &amp; Signal System Expenses</li>
<li>Meter Expenses</li>
<li>Customer Installations Expenses</li>
<li>Miscellaneous Expenses</li>
<li>Rents</li>
</ul>
<p><b>Distribution Maintenance Expenses:</b></p>
<ul>
<li>Maintenance Supervision &amp; Engineering</li>
<li>Maintenance of Structures</li>
<li>Maintenance of Station Equipment</li>
<li>Maintenance of Overhead Lines</li>
<li>Maintenance of Underground Lines</li>
<li>Maintenance of Line Transformers</li>
<li>Maintenance of Street Lighting</li>
<li>Maintenance of Signal Systems</li>
<li>Maintenance of Meters</li>
<li>Maintenance of Misc. Distribution Plant</li>
</ul>
<p>For comparison of one or more utilities against the same benchmark, we statistically normalized the results for all seven benchmarks so that the median value falls at the 50 percent mark, and created radar charts such as the one below. This provides a snapshot of how individual utilities performed against all seven benchmarks as numbered below. This example shows the 2003 results of a strong performer (Tucson Electric Power) and a weak performer (Idaho Power Co.) across all seven benchmarks listed above.</p>
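<p>One simple way to achieve that median-at-50-percent normalization is a percentile rank across all utilities; the scores below are hypothetical:</p>

```python
def percentile_rank(values):
    """Place each utility's raw benchmark score on a 0-100 scale so the
    median utility lands at the 50 percent mark."""
    n = len(values)
    return [100.0 * sum(other < v for other in values) / (n - 1)
            for v in values]

# Five hypothetical utilities' raw scores for one benchmark
percentile_rank([3.1, 1.4, 2.2, 5.0, 4.3])  # median score 3.1 maps to 50.0
```

<p>Because every benchmark is mapped onto the same 0-100 scale, the seven axes of a radar chart become directly comparable.</p>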
<p>Comparing this to the same results for these two companies in 2002 produces interesting results. The changes in the seven benchmarks from one year to the next are relatively small, typically less than 10 percent-a good indicator that the numbers are telling a realistic story.</p>
<p>Another way of viewing our results is to create a bell curve for all utilities across one benchmark's results for a single year. In Figure 3, Idaho Power is highlighted on the left and Tucson Electric on the right.</p>
<p>All 100 utilities in this benchmark were rated with the same methodology for 2002. For the vast majority of these utilities, the changes in performance for this benchmark were small between the two years-generally staying within 10 percent to 15 percent of their original percentile slot. </p>
<p>As a final example, for any one of the benchmarks, we can chart the performance of utilities across time. Again staying with our two example utilities, the results for Distribution O&amp;M on overhead lines, benchmarked to Total Distribution Overhead Plant in Service for each year, are shown in Figure 5.</p>
<p>The consistency in the relative standings of these two utilities across the above three views of the study's results provides confidence in the methodology employed for this benchmarking study. Results such as these can assist utilities in determining where to focus efforts to improve business processes, as well as in identifying where the greatest competitive advantage can be gained in any prospective upgrades of legacy enterprise software systems. In addition, utilities on an M&amp;A path can study their own benchmarks alongside those of the corresponding business units of acquisition candidates, to determine the optimal M&amp;A candidate based on complementarities in the operating performance of the two companies' various business units. </p>
<p><b>Articles found on this page are available to subscribers only. For more information about obtaining a username and password, please call our Customer Service Department at 1-800-368-5001.</b></p>
</blockquote>
</div></div></div><div class="field field-name-field-members-only field-type-list-boolean field-label-above"><div class="field-label">Viewable to All?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-article-featured field-type-list-boolean field-label-above"><div class="field-label">Is Featured?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix">
<div class="field-label">Tags:&nbsp;</div>
<div class="field-items">
<a href="/tags/alabama-power">Alabama Power</a><span class="pur_comma">, </span><a href="/tags/benchmarking">Benchmarking</a><span class="pur_comma">, </span><a href="/tags/commission">Commission</a><span class="pur_comma">, </span><a href="/tags/distribution">Distribution</a><span class="pur_comma">, </span><a href="/tags/federal-energy-regulatory-commission">Federal Energy Regulatory Commission</a><span class="pur_comma">, </span><a href="/tags/federal-energy-regulatory-commission-ferc">Federal Energy Regulatory Commission (FERC)</a><span class="pur_comma">, </span><a href="/tags/ferc">FERC</a><span class="pur_comma">, </span><a href="/tags/georgia-power">Georgia Power</a><span class="pur_comma">, </span><a href="/tags/idaho-power">Idaho Power</a><span class="pur_comma">, </span><a href="/tags/installation">Installation</a><span class="pur_comma">, </span><a href="/tags/ma">M&amp;A</a><span class="pur_comma">, </span><a href="/tags/td">T&amp;D</a><span class="pur_comma">, </span><a href="/tags/tucson-electric-power">Tucson Electric Power</a> </div>
</div>
Fri, 01 Oct 2004 04:00:00 +0000 puradmin 11094 at http://www.fortnightly.com
The Reliability Spending Conundrum
http://www.fortnightly.com/fortnightly/2004/03/reliability-spending-conundrum
<div class="field field-name-field-import-deck field-type-text-long field-label-inline clearfix"><div class="field-label">Deck:&nbsp;</div><div class="field-items"><div class="field-item even"><p>What is the right and prudent level of spending on service?</p>
</div></div></div><div class="field field-name-field-import-byline field-type-text-long field-label-inline clearfix"><div class="field-label">Byline:&nbsp;</div><div class="field-items"><div class="field-item even"><p>Daniel O&#039;Neill</p>
</div></div></div><div class="field field-name-field-import-volume field-type-node-reference field-label-inline clearfix"><div class="field-label">Magazine Volume:&nbsp;</div><div class="field-items"><div class="field-item even">Fortnightly Magazine - March 2004</div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><blockquote><h3>What is the right and prudent level of spending on service?</h3>
<p>Times have changed for electric utilities. The combination of deregulation, mergers, major storms, and widespread outages has shifted the industry's emphasis to reliability. That wasn't always true. Even 20 years ago, the growth of load was adding so much to ratebases and driving such large rate increases that regulators spent a lot of time reviewing plans for capacity additions, and challenging utilities for overspending. Because of these "prudency reviews," excessive costs sometimes were disallowed as additions to ratebase.</p>
<p>Generation today is deregulated in many states, and many states have excess supply. Meanwhile, concerns about reliability are escalating. Consequently, regulators are focused on whether utilities are spending enough money to ensure quality of service.</p>
<p>Trending, benchmarking, and modeling are three good tests utilities and regulators can use to determine the right amount of spending for the desired quality of service.</p>
<h3>Trending</h3>
<p>One of the most commonly used tests for spending prudency examines the trend of spending and service levels. Looking back over a specific time period, what has the utility spent on reliability, service, or system integrity? How does that compare to service level and performance over the same time?</p>
<p>Take five years of spending on reliability, service, or system integrity, and compare that with five years of service level performance. If the results look like Figure 1, where spending and service are both going down, there is a problem. The utility either is not spending enough or not spending it prudently.</p>
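<p>The trending test above amounts to fitting a trend line to each series and checking whether the two slopes tell a consistent story. A minimal sketch, using hypothetical five-year figures (spending in millions, service measured as SAIDI minutes, where higher is worse):</p>

```python
# Minimal sketch of the trending test: fit a least-squares trend to five
# years of spending and of a service indicator. The figures are hypothetical.

def slope(ys):
    """Least-squares slope of ys against years 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

spending = [52, 50, 47, 45, 43]     # $M per year on reliability programs
saidi    = [95, 99, 104, 112, 118]  # minutes; rising means worse service

# Both trends adverse: spending falling while service worsens -- the
# Figure 1 situation the text flags as a problem.
problem = slope(spending) < 0 and slope(saidi) > 0
```

<p>If spending is falling while the service indicator improves, or vice versa, the test is inconclusive rather than damning, which is the "opposite slopes" case discussed next.</p>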
<p>Some utility executives may consider trending a backward-looking, late-emerging test. If trending is known to be part of the utility's review process, it can cause forward-looking decision makers to take action in advance to avoid the situation. If a utility is decreasing its spending on service-related categories, it would do well to ensure that service levels are trending up. When the curves slope in opposite directions, the case that spending is inadequate or imprudent is less compelling.</p>
<p>When there is more than one indicator of spending or service level, the test is more complicated. For some utilities, "inconclusive" results may be considered good enough. However, for more progressive organizations, the objective (and challenge) is to ensure that the results tell the same positive story: spending is going down and service problems are going down too. This will help stabilize rates, continue service improvements, and beef up return on investment.</p>
<h3>Benchmarking</h3>
<p>The results of benchmarking to determine whether the spending level is right and prudent also can be either ambiguous or compelling. Presumably, a company that spends less than its peers on service-related problems, and which has a better service level, has no problem. But is that all there is to it?</p>
<p>Many companies want all their benchmarks to be in the first quartile or even the first decile-where "first" applies to the end of the scale with low costs and high levels of service. Other utilities may have a compact with their customers and regulators to find the right point along some tradeoff curve between cost and service.</p>
<p>Frequently in today's environment, rates are capped or expected to remain stable. In such circumstances, a company with low service-related spending and poor service may be pressured to explore ways to improve service without a rate increase. Part of any prudency review is determining whether the spending was efficient and effective in accomplishing what the customer wanted.</p>
<p>As with any benchmarking, arguments may be made that the peers are not really comparable. However, in electric generation, where benchmarking has been used extensively, a certain type of plant should be able to achieve a certain efficiency no matter where it is, adjusting for how it is dispatched. For transmission and distribution, regional differences in geography, climate, and even customer preferences can be used to argue that cost and service indicators are not comparable, so peers often are chosen from comparable territories.</p>
<p>Benchmarking can be particularly compelling when it is linked to a best practice. For example, if an electric company's peers are trimming trees on a 4-year cycle, trimming less than 25 percent of its miles per year would raise concerns, particularly if the company's tree-related customer interruptions are higher than its peers'.</p>
<p>For gas companies, the annual rate of replacement of leak-prone cast iron and bare steel tends to be 1 to 2 percent of a company's inventory when that inventory is over 500 miles. If a company were to replace only half of 1 percent, it would cause regulators to be concerned about long-term system integrity. And as companies with smaller inventories move toward more rapid replacement (some even adopting 10-year replacement goals), it puts pressure on the others to consider accelerating their policies too, even though replacement could drive up costs.</p>
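<p>The arithmetic behind both best-practice examples is the same: a cycle of N years implies touching 1/N of the system each year. A brief sketch (the 1.5 percent replacement rate is an illustrative value within the 1 to 2 percent range cited above):</p>

```python
# Cycle-length arithmetic from the two examples above: an N-year trim or
# replacement cycle implies touching 1/N of the system each year.

def annual_fraction(cycle_years):
    """Fraction of the system addressed per year on an N-year cycle."""
    return 1.0 / cycle_years

# A 4-year tree-trimming cycle -> 25 percent of line-miles per year.
trim_share = annual_fraction(4)      # 0.25

# Replacing 1.5 percent of leak-prone main per year implies roughly a
# 67-year full-replacement cycle; a 10-year goal implies 10 percent per year.
implied_cycle = 1.0 / 0.015          # about 66.7 years
fast_share = annual_fraction(10)     # 0.10
```

<p>Framed this way, the gap between a 1.5 percent replacement rate (a 67-year cycle) and a 10-year replacement goal makes clear why the faster movers put pressure on the rest.</p>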
<p>An analysis of prudency is incomplete without an examination of benchmark results. Even though the results may seem inconclusive or can be explained by differences in territory, the question has to be asked, "How does this compare with others?"</p>
<h3>Modeling</h3>
<p>A good modeling approach that relates the spending level to the service level is probably the best test of prudency. The model should not replace the trending and benchmarking tests, but it should be consistent with the story told by those two tests.</p>
<p>A model allows the decision makers to ask "what if" questions, and it helps them see what can be done to fix a problem. Not only can a model raise an alarm that costs are decreasing and service problems are rising, but a good model can tell you what spending level is required to fix the trend and achieve the desired level of service.</p>
<p>An effective model requires an appropriate degree of complexity. For starters, it needs to be a dynamic model that can predict how spending today and tomorrow will affect the level and the trend of service in the future. So it will probably have at its heart a set of difference equations (the discrete equivalent of the differential equations some of us dealt with in calculus) that can exhibit dynamic behavior. In addition, it should have some details about which programs address which indicators. For example, the model should show how tree trimming affects one aspect of electric reliability, as well as how adding new lines and substations affects another aspect of reliability. Ideally, the model should have a dual function of optimization and prioritization. That is, it should help determine not only the right level of spending, but also which programs should be funded and in what order.</p>
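<p>A toy version of such a difference-equation model can be sketched as follows. The decay and spending-effect coefficients are illustrative assumptions, not calibrated values; a real model would estimate them per program and per indicator.</p>

```python
# Toy difference-equation model of the kind described: next year's service
# indicator equals this year's level plus natural deterioration, offset by
# the effect of discretionary spending. Coefficients are illustrative.

DECAY = 4.0    # SAIDI minutes added per year if nothing is spent
EFFECT = 0.6   # SAIDI minutes removed per $M of reliability spending

def project(saidi0, spending_plan):
    """Project SAIDI year by year under a multi-year spending plan."""
    saidi = saidi0
    path = []
    for spend in spending_plan:
        saidi = saidi + DECAY - EFFECT * spend  # the difference equation
        path.append(round(saidi, 1))
    return path

# "What if" questions of the kind the text describes:
flat = project(100.0, [5, 5, 5])  # 4 - 3 = +1/yr: service keeps drifting up
more = project(100.0, [8, 8, 8])  # 4 - 4.8 = -0.8/yr: trend reverses
```

<p>Even this crude version exhibits the key property the text asks for: it predicts not just the level but the trend of service under a given spending plan, so it can answer what spending level is required to fix the trend.</p>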
<p>Typically, this type of model can be represented in a funding curve like that shown in Figure 2. On the horizontal axis, the curve shows the level of funding, starting from a basic minimum and increasing as additional discretionary programs are added. On the vertical axis, various measures might be used, from a simple point-scoring method to an appropriate valuation method that computes the incremental benefits associated with the incremental cost of each program.</p>
<p>The trouble with the point-scoring approach is that it can prioritize but not optimize. It can tell you which projects to do in what order, but it cannot tell you what the right level of spending should be. To accomplish that, the model must be able to predict the impact on service indicators and to value that service in such a way that when the incremental value/cost ratio equals 1.0, the right level of spending has been achieved. This method has been shown to be a good test of prudency, and it can subsume the other two tests. In fact, a good model can explain the trend in spending and performance and make the benchmarking moot.</p>
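<p>The prioritize-then-optimize logic can be sketched as a simple greedy pass: rank candidate programs by incremental value/cost ratio and fund down the list until the marginal ratio falls below 1.0. The program names, costs, and values below are hypothetical.</p>

```python
# Sketch of the optimization described above: rank programs by incremental
# value/cost ratio, then fund while the marginal ratio stays >= 1.0.
# Program names, costs ($M), and valued benefits ($M) are hypothetical.

programs = [
    ("tree trimming expansion",  4.0, 9.0),
    ("feeder automation",        6.0, 8.4),
    ("cable replacement",        5.0, 5.5),
    ("substation animal guards", 1.0, 0.7),  # ratio < 1.0: not funded
]

# Prioritization: order programs by value/cost, best first.
ranked = sorted(programs, key=lambda p: p[2] / p[1], reverse=True)

# Optimization: fund until the incremental value/cost ratio drops below 1.0.
funded = [name for name, cost, value in ranked if value / cost >= 1.0]
budget = sum(cost for name, cost, value in ranked if value / cost >= 1.0)
```

<p>The cumulative cost of the funded list traces out the horizontal axis of the funding curve, and the stopping point, where the marginal ratio crosses 1.0, is the "right level of spending" the text defines.</p>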
<p>One of the advantages of such a model is that it only gets better with time. As the relationships are proven out and refined, confidence in the model's predictions grows, and the conclusions become even more compelling and credible. The model's elaborate detail will allow you to fine-tune optimization and prioritization.</p>
<h3>Boards Care Too</h3>
<p>Utility executives will find that their boards of directors are equally concerned about achieving the right and prudent level of spending. Board members know that one of the key risks in utility finance can be the disallowance of costs for rate recovery. They also recognize that they may be forced to significantly increase spending to address a perceived or real service quality problem. Knowing that spending levels are right and prudent for ensuring service helps directors feel good about approving business plans.</p>
<p>So, what is the right and prudent level of spending? To make your case, and make it firmly, apply all three tests. If the first two support your case, that's good. If they are inconclusive, be prepared to explain why you think they are not relevant. In either event, rely on the third approach, modeling, to provide not only a conclusive answer but also a way to achieve the promised results, to monitor progress, and to adjust if necessary. To do anything else would not be prudent. And that means it would not be right.</p>
<hr />
</blockquote>
</div></div></div><div class="field field-name-field-members-only field-type-list-boolean field-label-above"><div class="field-label">Viewable to All?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-article-featured field-type-list-boolean field-label-above"><div class="field-label">Is Featured?:&nbsp;</div><div class="field-items"><div class="field-item even"></div></div></div><div class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above clearfix">
<div class="field-label">Tags:&nbsp;</div>
<div class="field-items">
<a href="/tags/benchmarking">Benchmarking</a> </div>
</div>
Mon, 01 Mar 2004 05:00:00 +0000 puradmin 10997 at http://www.fortnightly.com