
Part 1 of a 2-part series on improving reliability.
Grant McEachran, S&C Electric Company
“Worst-performing feeders,” “low-reliability feeders,” “worst-served customers”—a broad range of terminology is used in different regulatory jurisdictions across the globe, but the meaning is broadly the same. These terms all refer to parts of the electricity grid, and by extension the customers served by those parts of the grid, that experience markedly below-average levels of power reliability.
Detailed reliability reporting is embedded in most regulatory regimes worldwide, and it has been for some time. In many cases, utilities face reliability targets or are required to meet predefined standards, and in some jurisdictions, performance is incentivized through the use of financial rewards or penalties.
However, in the vast majority of cases the primary focus is on “average” performance, either on the feeder or, most commonly, across the whole network. The result is a less direct regulatory focus on worst-performing areas. This is changing, and this paper explores the reason for the increased focus on the worst-performing parts of the grid.
Utilities worldwide report reliability data. This is not surprising because, along with safety, customers consistently identify reliability as one of the most important aspects of utility performance. Most regulatory regimes continue to prioritize reporting “average” reliability. While such reporting is important, it masks a subset of customers experiencing markedly lower levels of performance.
Historically, there was less focus on areas of “worst performance.” This is changing. The reason is closely linked with aspects of reliability and resilience addressed in recent S&C Electric Company publications. In “Moving Beyond Average Reliability Metrics,” S&C considered how reliability metrics are evolving and demonstrated changing customer needs were driving the use of more customer-centric metrics. In addition, in “Trends in Reliability & Resilience – the Growing Resilience Gap,” S&C explored developments in reported reliability performance and, in doing so, highlighted the growing resilience challenges facing networks globally.
Building on our findings in those papers, this paper explores the increased focus on the worst-performing parts of the grid, how that focus is linked to wider trends in reliability and resilience, and why it is critical to the success of the energy transition.
Defining ‘worst performance’
A range of terminology is used for customers or parts of the grid that experience below-average levels of power reliability. For the most part, the terms used have a similar meaning, but there are some important differences.
In the U.S. and Canada, two terms are used interchangeably: “worst-performing feeders” and “worst-performing circuits.” In many U.S. states, utilities are required to report feeder-level reliability information and, in states such as Texas, to outline investment plans for addressing underperformance. In Texas, no distribution feeder serving 10 or more customers should have a System Average Interruption Frequency Index (SAIFI) or System Average Interruption Duration Index (SAIDI) value that is more than 300% greater than the system average of all feeders for two consecutive reporting years. Similarly, in Alberta, Canada, Rule 2[1] requires utilities to report annually on the 3% of circuits with the highest SAIDI values, identify the factors behind the poor performance, and describe the actions taken to improve reliability.
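The Texas-style screen described above can be sketched as a simple calculation: a feeder is flagged if its annual SAIDI exceeds the system average by more than 300% (i.e., more than four times the average) in two consecutive reporting years. The data structure and feeder names below are invented for illustration; a real screen would also apply the 10-customer floor and use the state's official index definitions.

```python
def flag_worst_feeders(saidi_by_year, threshold_pct=300):
    """Return feeders whose SAIDI exceeded the system average by more
    than threshold_pct in two consecutive reporting years.

    saidi_by_year: dict mapping year -> dict of feeder -> annual SAIDI
    (minutes). A hypothetical input shape, chosen for this sketch.
    """
    years = sorted(saidi_by_year)
    breaches = {}
    for year in years:
        feeders = saidi_by_year[year]
        avg = sum(feeders.values()) / len(feeders)
        # "More than 300% greater than" the average means > 4x the average.
        limit = avg * (1 + threshold_pct / 100)
        breaches[year] = {f for f, v in feeders.items() if v > limit}
    # Flag only feeders that breach in two consecutive years.
    flagged = set()
    for prev, cur in zip(years, years[1:]):
        flagged |= breaches[prev] & breaches[cur]
    return flagged
```

Note that because the outlier feeder itself inflates the system average, only severe underperformers clear a 4x-average bar; this is one reason relative screens of this kind behave differently from absolute targets.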
In Great Britain, the term “worst-served customers” (WSC) is used, and the definition has evolved over time. The present version defines a WSC as one experiencing on average at least four higher-voltage interruptions per year over a three-year period (i.e., 12 or more over three years, with a minimum of two interruptions per year). Distribution utilities receive a per-customer monetary allowance dependent on realizing a percentage improvement target.
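The current GB definition reduces to two conditions on a customer's three annual interruption counts, which can be sketched as follows (the input format is assumed for illustration):

```python
def is_worst_served(annual_counts):
    """True if a customer meets the GB worst-served customer criterion:
    12 or more higher-voltage interruptions over a three-year window,
    with at least two interruptions in each of the three years.

    annual_counts: list of three annual interruption counts.
    """
    assert len(annual_counts) == 3
    return sum(annual_counts) >= 12 and all(n >= 2 for n in annual_counts)
```

The per-year minimum matters: a customer with one catastrophic year (say, 10 interruptions) but otherwise good service is deliberately excluded, so the definition targets persistent rather than one-off underperformance.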
Finally, in Australia the terminology varies by jurisdiction. In South Australia, distribution utility SA Power Networks is required to report annually on “Low-Reliability Feeders” (LRFs), including actions to improve performance. The scheme defines LRFs as feeders within a particular region that have exceeded twice the mean unplanned SAIDI for two consecutive years. In Northern Territory, the Utility Commission’s Electricity Industry Performance Code requires Power & Water to report annually on the five “worst-performing feeders” and to provide details on associated remedial actions. Other states and territories in Australia use similar mechanisms.
Measuring reliability performance
Traditionally, the SAIFI and SAIDI measures defined in IEEE Standard 1366 have been favored for measuring reliability performance. While useful, both measures consider “average” performance. They reveal nothing about the experience of individual customers, including those on the worst-performing parts of the network.
As demonstrated in S&C’s “Moving Beyond Average Reliability Metrics” paper, this picture is changing. Customers Experiencing Multiple Interruptions (CEMI) is used increasingly throughout the U.S., while Florida Power and Light (FPL) is using the Customers Experiencing Multiple Momentaries (CEMM) metric to drive performance improvements for customers most affected by momentary interruptions. A number of utilities in Ontario, Canada, use the Feeders Experiencing Sustained Interruptions (FESI) metric, while Sweden and Finland use Customers Experiencing Long Interruption Durations (CELID).
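Of the customer-centric metrics above, CEMI is the simplest to state: CEMI_n is the fraction of customers served that experienced more than n sustained interruptions in the period (per IEEE 1366). A minimal sketch, with per-customer counts invented for illustration:

```python
def cemi(interruption_counts, n):
    """CEMI_n: fraction of customers experiencing more than n sustained
    interruptions. interruption_counts holds one count per customer."""
    affected = sum(1 for count in interruption_counts if count > n)
    return affected / len(interruption_counts)
```

Unlike SAIFI, this metric is unchanged by improvements to already-well-served customers, so it directly rewards fixing the worst-served tail of the distribution.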
These are just examples of approaches to measuring performance, but even this limited overview reveals something about how to approach “worst performance”:
“Worst performance” can be measured in different ways: Across all jurisdictions studied we found differences in measurement. Some were absolute measures, i.e., an aggregate position, whereas others were relative measures comparing to other feeders or customers or to a point in time. Such differences make sense because performance challenges will vary, but they make comparisons between jurisdictions difficult.
“Worst performance” is not a static measurement: Different challenges can emerge over time, such as population migration or changes in weather patterns, where performance levels improve in some areas and decline in others. Utilities will have varying levels of control over those factors.
There are different ways to support those experiencing “worst performance”: In some jurisdictions, the emphasis has been on monitoring and reporting. In others, utilities are required to have action plans for areas with lower levels of performance. Some have specific targets linked directly to funding. The approaches vary, but we have observed an increasing trend toward using incentive-based regulation in this area.
The number of jurisdictions focusing on “worst performance” has increased: While the approaches to measuring “worst performance” and regulatory responses may vary, the attention being given to “worst performance” is greater than it was 10 years ago. More regulators also have signaled their intention to consider introducing metrics in this area in the future.
Why attention is turning to ‘worst performance’
As with many aspects of energy policy, views on addressing the challenges associated with “worst performance” have been evolving. There are three main drivers of this change.
Driver 1: Climate Change
Our energy grids face an increasing threat from climate change. In some parts of the world, the challenges are hurricanes and high winds. In others, they are ice storms and heavy snow. Flooding and wildfires are also increasing in prevalence. The risks vary, but what these incidents have in common is they often have a disproportionately greater impact on areas already experiencing relatively poor performance.
In 2019, SA Power Networks in Australia identified that long rural feeders made up 122 of its 156 low-reliability feeders. Similarly, in Great Britain, Western Power Distribution (WPD) noted that customers experiencing high numbers of faults are “generally located on the end of long rural circuits or on remote parts of the network.” This is not surprising because many of the areas most exposed are at the grid edge. Investment in grid hardening is, therefore, likely to improve performance for many of the worst-performing feeders.
Driver 2: Changing Customer Needs
Energy customers are relying more on the power grid. This was acutely demonstrated during the Covid-19 pandemic, when our working, shopping, and schooling patterns changed, placing a greater strain on different parts of our grids because of a load shift from cities and office buildings to homes. Environmental goals also have a significant impact. Decarbonization of the power sector means a greater emphasis on electrifying heat and transport, and rising levels of electrification will further deepen reliance on the grid.
Meeting the increased demands on the grid and thereby supporting the energy transition means a greater emphasis on grid resilience. Because the parts of the grid with the worst-performing feeders are where the challenge will be most acute, performance-level improvements in these areas will ultimately determine success.
Driver 3: The importance of ‘equity’
The energy transition presents many opportunities, but an important consideration is to ensure all customers can benefit from those opportunities. This means no one should be excluded because of where they are connected to the grid.
A greater focus on economically and environmentally disadvantaged customers is evident in the U.S. For example, in a recent presentation, Illinois utility ComEd highlighted its worst- and best-performing circuits for 2021 in the context of performance in areas deemed Equity Investment Eligible Communities (EIEC), i.e., communities that would most benefit from investments to combat historic inequities. Figure 1 shows that a portion of the 1% worst-performing circuits could be found in both EIEC and non-EIEC areas on ComEd’s network. The utility also outlined its plans to target poor-performing circuits for improvements.
* 1% Worst Performing Circuit per IL Administrative Code, Title 83 Section 411
Reliability is particularly important for vulnerable customers. This is recognized in the distribution utility investment plans in Great Britain. Both Scottish & Southern Energy Networks (SSEN) and WPD indicated their intention to prioritize WSC based on the proportion of vulnerable customers per feeder.
In Part 2 of this article, we will explore how automation and undergrounding can help utilities improve network reliability and resilience. We will also discuss how performance standards are evolving to help utilities meet their customers’ needs today and into the future.
This topic was originally presented at the 2022 CIGRE Grid of the Future (GOTF) Symposium.
About the Author
Grant McEachran is a Regulatory Affairs Director at S&C Electric Company with 24 years of experience in the energy industry. In his current role, Grant is responsible for tracking and analyzing trends in electricity policy across a range of jurisdictions with a particular focus on North America, Europe and Australasia. Prior to joining S&C Electric, Grant was a senior economist for the British energy regulator, Ofgem, where he was responsible for developing and implementing key components of the current revenue/rate setting process – the RIIO framework. Grant also previously worked as an economist for the Commerce Commission in New Zealand.