In the previous article in this series, we discussed the many factors that impact energy output from solar arrays, and why improved insight into array performance is essential if lifetime energy yield is to match expectations.
We discussed Performance Ratio as a measure of actual output versus nameplate capacity, outlined that field statistics indicate a potentially wide range of performance (0.38-0.81) and noted the widely held belief that a PR of 0.80 is best in class. To continue to drive down solar levelized cost of energy (LCOE), the industry should be challenging this assumption and putting in place the technology and operations and maintenance (O&M) practices to set a new expectation of performance entitlement. Advanced panel level monitoring and optimization technology is now emerging that can help make arrays “intelligent,” enabling enhanced O&M practices that can help realize these heightened expectations.
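To make the Performance Ratio metric concrete, here is a minimal sketch of the standard calculation: measured AC energy divided by a reference yield derived from plane-of-array insolation and the array's nameplate rating. All figures below are hypothetical illustration values, not field data from the article.

```python
# Performance Ratio sketch: PR = actual AC energy / reference yield,
# where reference yield = plane-of-array insolation (kWh/m^2) divided
# by STC irradiance (1 kW/m^2), times nameplate DC capacity (kW).

STC_IRRADIANCE_KW_M2 = 1.0  # standard test condition irradiance

def performance_ratio(actual_kwh, insolation_kwh_m2, nameplate_kw):
    """Return PR for a period given measured AC energy, POA insolation,
    and the array's DC nameplate rating."""
    reference_kwh = (insolation_kwh_m2 / STC_IRRADIANCE_KW_M2) * nameplate_kw
    return actual_kwh / reference_kwh

# Hypothetical example: a 1 MW (1,000 kW) array receiving 1,800 kWh/m^2
# of annual insolation and delivering 1,350 MWh of AC energy.
pr = performance_ratio(actual_kwh=1_350_000,
                       insolation_kwh_m2=1_800,
                       nameplate_kw=1_000)
print(round(pr, 2))  # 0.75
```

A PR of 0.75 for this hypothetical site would sit squarely in the "contracted maintenance floor" territory discussed below, well short of the 0.80 the industry treats as best in class.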
This second article in the series explains how these new technologies can be used to diagnose array impairments with an unprecedented level of accuracy and specificity, allowing new site design and O&M practices to raise the bar on performance expectations.
There is an argument put forward by the designers and owners of larger-scale arrays that monitoring at the inverter or sub-array level is "good enough." It gives insight into large loss factors, such as inverter outages or blown combiner box fuses, and enables rapid response triggered by alerts. Periodically, engineers can do a "walk-through" of the array, visually inspecting panels and connections, pulling string fuses and testing string conditions. Once or twice a year on defined dates, the array will be cleaned. There may even be a provision to disassemble a few hundred panels a year and send them off for flash testing to track degradation rates against warranty commitments. This baseline O&M scheme is better than most deployed in the U.S. right now. It might cost $0.03/watt per year and, starting from an optimal performance ratio of 0.80, might be contracted to maintain the array at 0.75.
Why is this considered "good enough"? The O&M team is running virtually blind to everything except catastrophic availability loss. Yet according to data from NREL on array loss factors, these availability losses account for only about 2 percentage points of the yield lost against nameplate capacity. If we back out the losses that should be considered irrecoverable (approximately 10 percent, due to resistive wiring loss and DC-to-AC conversion loss), roughly 90 percent of the potential energy remains at stake. Industry belief is that a best-in-class array loses another 10 percent of this, a decently managed array another 5 percent, and an average array can be a further 10 percent down beyond that, operating at a performance ratio in the 0.65-0.70 range. Yet an inverter or sub-array monitoring system really only gives insight into limiting catastrophic availability loss, about 2 percent of the problem!
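The arithmetic above can be laid out explicitly. The percentages are the article's approximate figures, applied here as simple subtractions from nameplate for illustration rather than as a rigorous energy model:

```python
# Rough loss-budget arithmetic implied by the article's figures.
nameplate = 1.00
irrecoverable = 0.10        # resistive wiring + DC/AC conversion losses
availability = 0.02         # catastrophic outages: the slice that
                            # inverter-level monitoring actually sees
best_in_class_other = 0.10  # losses even a best-in-class array incurs
decent_mgmt_gap = 0.05      # additional gap for a decently managed array
average_gap = 0.10          # further gap for an average array

best_in_class_pr = nameplate - irrecoverable - best_in_class_other
average_pr = best_in_class_pr - decent_mgmt_gap - average_gap

# Of the total loss an average array suffers beyond the irrecoverable
# portion, only the availability slice is visible to coarse monitoring.
recoverable_loss = nameplate - irrecoverable - average_pr

print(round(best_in_class_pr, 2))  # 0.8
print(round(average_pr, 2))        # 0.65
print(round(recoverable_loss, 2))  # 0.25
```

On these numbers, an average array is leaving roughly 25 percentage points of recoverable yield on the table, of which inverter-level alerting addresses only the 2-point availability slice.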
The good news is that the winds of change are sweeping across major parts of the industry, driving a trend towards more granular monitoring. In many parts of Europe, and with some investors in the USA, it is no longer possible to get a large site financed unless monitoring is performed at least down to the string level. This is a positive trend for solar as a whole, but as we’ll discuss in this article, it still falls short of what can be achieved in terms of operational and financial performance improvement.
A solar array is an excellent sensor. All of the information needed to diagnose problems and define action to maintain an array at peak performance is present within the array. Whether the problem is blown fuses, defective bypass diodes, dirty panels, loose connectors, degraded panels or seasonal shade encroachment, the characteristic signals of all impairments are present within the voltages and currents flowing within the array and how they vary over time.
With current monitoring technology, even at string level, once an issue is suspected, an engineer is dispatched with a digital voltmeter (DVM). The engineer disconnects the suspect parts of the array, removes fuses or connectors, and starts measuring with the aim of diagnosing the fault. In a large array, chasing faults through the array hierarchy can be time consuming, but once a fault is isolated, the remedy can be assessed and plans made to source the replacement components needed to fix the problem.
With the advent of panel-level monitoring, we now have a multitude of virtual DVMs located throughout the array, performing these same accurate measurements constantly throughout the day. This happens while the array is operational, without the intrusive disconnects that human investigation requires. If we couple precision panel-level data with a complete architectural model of the array, we can build a comprehensive real-time picture of how the array is performing and use machine-based intelligence to recognize impairments, assess financial impact, understand the cost of remediation and recommend action. The O&M team can verify the action plan and have the components in hand, with a specific work plan, before they even visit the site.
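A minimal sketch of what such machine-based recognition might look like: compare each panel's measured DC power against the median of its string and flag persistent underperformers. The data, threshold, and the simple median-comparison rule are illustrative assumptions, not a description of any vendor's actual algorithm.

```python
# Crude panel-level impairment flagging: panels well below their
# string's median power are candidates for soiling, bypass-diode, or
# connector faults that inverter-level monitoring cannot localize.
from statistics import median

def flag_underperformers(panel_watts, threshold=0.90):
    """Return indices of panels producing less than `threshold` times
    the string median power (threshold is an assumed tuning value)."""
    baseline = median(panel_watts)
    return [i for i, w in enumerate(panel_watts)
            if w < threshold * baseline]

# One string of hypothetical readings: panel 3 shows a failed-bypass-
# diode-like drop (~1/3 of power lost), panel 7 is mildly soiled.
string_watts = [238, 241, 239, 160, 240, 237, 242, 210, 239, 241]
print(flag_underperformers(string_watts))  # [3, 7]
```

A production system would of course persist these flags over time, correlate them with irradiance and temperature, and match the signature shape to a specific fault class before recommending a work order.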
Before we discuss how such a system is able to differentiate between impairments and hence define a precise plan of action, let’s first consider the elements of the ideal comprehensive monitoring system. It contains four key elements:
Armed with a system that contains these key elements, let’s consider how some of the most common Performance Ratio impairments can be recognized. Some of these are simplified for ease of description. Note that NREL data on array impairments would indicate that effective management of the following five impairment categories can have a 40 percent or greater impact on the performance ratio of an array.