In the previous article in this series, we discussed the many factors that impact energy output from solar arrays and why it’s important to improve the insight into array performance if lifetime energy yield is to match expectations.
We discussed performance ratio (PR) as a measure of actual output versus nameplate capacity, noted that field statistics indicate a potentially wide range of performance (0.38-0.81), and cited the widely held belief that a PR of 0.80 is best in class. To continue to drive down solar levelized cost of energy (LCOE), the industry should be challenging this assumption and putting in place the technology and operations and maintenance (O&M) practices to set a new expectation of performance entitlement. Advanced panel level monitoring and optimization technology is now emerging that can make arrays “intelligent,” enabling enhanced O&M practices that can realize these heightened expectations.
This second article in the series explains how these new technologies can be used to diagnose array impairments with an unprecedented level of accuracy and specificity, allowing new site design and O&M practices to raise the bar on performance expectations.
There is an argument put forward by the designers and owners of larger-scale arrays that monitoring at the inverter or sub-array level is “good enough.” It gives insight into large loss factors, such as inverter outages or blown combiner box fuses, and enables rapid response triggered by alerts. Periodically, engineers can do a “walk through” of the array, visually inspecting panels and connections, pulling string fuses and testing string conditions. Once or twice a year, on defined dates, the array is cleaned. There may even be a provision to disassemble a few hundred panels a year to send off for flash testing, to monitor degradation rates against warranty commitments. This baseline O&M scheme is better than most deployed in the U.S. right now. It might cost $0.03/watt per year and, starting from an optimal performance ratio of 0.80, might be contracted to maintain the array at 0.75.
Why is this considered “good enough”? The O&M team is running virtually blind to everything except catastrophic availability loss. Yet according to NREL data on array loss factors, availability losses account for only 2 percent of the yield lost relative to nameplate capacity. If we back out the losses that should be considered irrecoverable (approximately 10 percent, due to resistive wiring loss and DC/AC conversion loss), roughly 90 percent of the potential energy is still at stake. Industry belief is that a best in class array loses another 10 percent of this, a decently managed array a further 5 percent, and an average array can be a further 10 percent down beyond that, operating at a performance ratio in the 0.65-0.70 range. Yet an inverter or sub-array monitoring system really only gives insight into limiting catastrophic availability loss: about 2 percent of the problem!
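To make that loss budget concrete, here is the arithmetic behind those figures, composing each loss multiplicatively on the previous one (an assumption made for this sketch; the article does not specify how the percentages compound):

```python
# The article's loss percentages, composed multiplicatively (a simplifying
# assumption for illustration, not a validated loss model).
nameplate = 1.00
irrecoverable = 0.10                          # resistive wiring + DC/AC conversion loss
potential = nameplate * (1 - irrecoverable)   # 0.90 of nameplate is in play

best_in_class = potential * (1 - 0.10)        # loses another 10% -> PR ~0.81
decent = best_in_class * (1 - 0.05)           # a further 5% -> PR ~0.77
average = decent * (1 - 0.10)                 # another 10% -> PR ~0.69 (the 0.65-0.70 range)
```

The compounded figures line up with the PR ranges quoted above, which is why an inverter-level system watching only the 2 percent availability slice is leaving most of the recoverable energy unobserved.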
The good news is that the winds of change are sweeping across major parts of the industry, driving a trend towards more granular monitoring. In many parts of Europe, and with some investors in the USA, it is no longer possible to get a large site financed unless monitoring is performed at least down to the string level. This is a positive trend for solar as a whole, but as we’ll discuss in this article, it still falls short of what can be achieved in terms of operational and financial performance improvement.
A solar array is an excellent sensor. All of the information needed to diagnose problems and define action to maintain an array at peak performance is present within the array. Whether the problem is blown fuses, defective bypass diodes, dirty panels, loose connectors, degraded panels or seasonal shade encroachment, the characteristic signals of all impairments are present within the voltages and currents flowing within the array and how they vary over time.
With current monitoring technology, even at the string level, once an issue is suspected an engineer is dispatched with a digital voltmeter (DVM). The engineer disconnects the suspect parts of the array, removes fuses or connectors and starts measuring, aiming to diagnose the fault. In a large array, chasing faults through the array hierarchy can be time consuming, but once a fault is isolated, the remedy can be assessed and plans can be made to procure the replacement components needed to fix the problem.
With the advent of panel level monitoring, we now have a multitude of virtual DVMs located throughout the array, performing these same accurate measurements constantly throughout the day. This is being done while the array is operational without requiring the intrusive disconnects necessary for human investigation. If we couple precision panel level data with a complete architectural model of the array, we can build a comprehensive real time picture of how the array is performing and use machine-based intelligence to recognize impairments, assess financial impact, understand the cost of remediation and recommend action. The O&M team can verify the action plan and have the components in-hand with a specific work plan, before they even visit the site.
Before we discuss how such a system is able to differentiate between impairments and hence define a precise plan of action, let’s first consider the elements of the ideal comprehensive monitoring system. It contains four key elements:
- An ability to collect voltage, current and power with a high degree of accuracy at the panel level. Since one of our objectives should be to track panel degradation, the measurement accuracy needs to be finer than the change we are trying to detect. For instance, to track panel warranty degradation rates of 0.7 percent per annum, a measurement accuracy of 0.5 percent or better is advisable. Collecting data at the panel level is critical because, as we’ll discuss later, the biggest cause of energy loss is mismatch across the array, and this starts at the panel level, with individual panels affecting the performance of string and sub-array neighbors.
- A monitoring system that understands the full array topology. This includes both the connectivity hierarchy and the physical layout of the array. This is critical, since some impairments are connectivity hierarchy based, such as blown fuses and weak strings, whereas others are spatial, for example soiling, shading or temperature gradients.
- A data collection rate that is sufficiently high to achieve the objectives of the monitoring system. For instance, if the goal is to identify intermittent connectivity or ground faults, the sampling interval needs to be short enough to catch such faults. Note also that the environmental sensors need to be sampled at the same rate as the voltages and currents so that all data can be normalized for accurate diagnosis.
- A machine-based analysis system that can use the wealth of data to recognize impairments and recommend prioritized action. Given the immense amount of data, this cannot be left to human vigilance.
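A minimal sketch of the data model implied by these four elements; the class and field names are illustrative, not drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class PanelSample:
    """One panel-level measurement, co-sampled with the environmental sensors."""
    panel_id: str
    timestamp: float      # epoch seconds; same sampling instant as irradiance/temperature
    voltage: float        # V; accuracy better than 0.5% to resolve ~0.7%/yr degradation
    current: float        # A
    irradiance: float     # W/m^2, for normalization
    temperature: float    # deg C, for normalization

@dataclass
class PanelNode:
    """Where a panel sits, both in the connectivity hierarchy and in space."""
    panel_id: str
    string_id: str        # hierarchy: panel -> string -> combiner -> inverter
    combiner_id: str
    inverter_id: str
    row: int              # physical layout, needed for spatial faults
    col: int              # (soiling zones, shading, temperature gradients)
```

Keeping hierarchical and spatial coordinates on every panel is what lets the analysis layer separate connectivity faults (blown fuses, weak strings) from spatial ones (soiling, shade).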
Armed with a system that contains these key elements, let’s consider how some of the most common Performance Ratio impairments can be recognized. Some of these are simplified for ease of description. Note that NREL data on array impairments would indicate that effective management of the following five impairment categories can have a 40 percent or greater impact on the performance ratio of an array.
Fuses and wiring faults:
A simple fuse or wiring fault is probably the easiest to detect: all panel voltages in the string move to Voc and the string current drops to zero. The most likely cause is a blown string fuse, although a cable disconnect cannot be ruled out. A more difficult fault is a permanent or intermittent leakage to ground. This is pinpointed by a drop in string current at some point along the string, isolating the fault to a particular panel or connector.
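A simplified detector for the open-string footprint might look like the following; the 2 percent Voc tolerance and 0.1 A current floor are illustrative thresholds, not field-calibrated values:

```python
def classify_string(panel_voltages, string_current, v_oc, tol=0.02):
    """Heuristic open-string diagnosis (illustrative thresholds).

    When every panel in a string sits at open-circuit voltage and no
    current flows, the string is open: most likely a blown string fuse,
    though a cable disconnect cannot be ruled out.
    """
    at_voc = all(abs(v - v_oc) / v_oc < tol for v in panel_voltages)
    if at_voc and string_current < 0.1:
        return "open string: suspect blown fuse or cable disconnect"
    return "string conducting"
```

A ground-leakage fault would need the per-panel current profile along the string rather than this single aggregate check.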
Panel damage and degradation:
Each type of fault has a distinct footprint. Defective bypass diodes show up as panel voltage drops of 33 percent or 66 percent relative to the string average. The same signature can be caused by a fallen leaf or heavy soiling, so panels exhibiting this behavior can be placed on a watch list until the next cleaning event, after which the fault is either confirmed or cleared. Panel degradation is usually a current impairment and is tracked using normalized data analyzed over time against the panel’s commissioning baseline and the array average. The effects of temperature, irradiance and differential soiling are backed out using filters. Confirmatory IV tracing can be performed using diagnostic user options prior to initiating warranty claim discussions.
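The bypass-diode footprint lends itself to a simple rule; this sketch uses an invented 5 percent tolerance band around the one-third and two-thirds voltage drops:

```python
def flag_bypass_diode(panel_v, string_avg_v, tol=0.05):
    """Flag panels whose voltage sits ~1/3 or ~2/3 below the string average,
    the footprint of one or two conducting bypass diodes. Hard local shade
    or soiling can mimic this, so flagged panels go on a watch list until
    the next cleaning. Tolerance is illustrative, not field-calibrated.
    """
    drop = 1 - panel_v / string_avg_v
    if abs(drop - 1 / 3) < tol:
        return "one bypass diode conducting (or ~1/3 of panel obscured)"
    if abs(drop - 2 / 3) < tol:
        return "two bypass diodes conducting (or ~2/3 of panel obscured)"
    return None
```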
Tracker alignment:
The monitoring system understands the hierarchy down to the tracker level, irrespective of string configuration. Each tracker can be considered a sub-array and compared with its neighbors over time. Alignment issues show up as current drops that are consistent across connected trackers but inconsistent with adjacent, non-connected trackers. Here the system backs out the effects of differential shading, soiling and dynamic cloud effects. Another interesting impairment diagnosis is timing errors in the algorithms used to prevent inter-row shading at the beginning and end of the day. The rows of panels at the bottom of a tracker will show the voltage and current drops associated with hard shade, with their string connectivity dictating whether the impairment shows up as a current drop or a string voltage dropout. Of course, the system can accurately detect this, since it understands both connectivity and spatial location.
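The neighbor comparison can be sketched as follows, assuming currents have already been normalized to back out shading, soiling and cloud transients; the 5 percent drop threshold and the neighbor-map structure are invented for illustration:

```python
def is_misaligned(tracker_id, currents, neighbor_map, drop=0.05):
    """Flag a tracker whose normalized current sits consistently below its
    physically adjacent (but electrically independent) neighbors -- the
    footprint of a tracker alignment error. Threshold is illustrative.

    currents:     dict of tracker_id -> normalized sub-array current
    neighbor_map: dict of tracker_id -> list of adjacent tracker ids
    """
    mine = currents[tracker_id]
    peers = [currents[n] for n in neighbor_map[tracker_id]]
    return mine < (1 - drop) * (sum(peers) / len(peers))
```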
Soiling:
A significant step forward in advanced monitoring systems is the ability to detect soiling across the entire array or in sub-array zones (spatial, not hierarchical). Progressive soiling has some very interesting characteristics that can only be properly detected by time-based analysis of normalized data, backing out the effects of known array impairments and mismatches. This is truly a major advantage of intelligent machine-based analysis. Light soiling initially shows up in the current domain, much like panel mismatch. However, as it progressively worsens (power >10 percent down), it moves into the voltage domain with a footprint very different from that of defective bypass diodes. Note that soiling is perhaps the biggest impact factor in large unshaded arrays, with power impacts of 20 to 50 percent not uncommon depending on environmental factors. Analysis shows that a threshold-based cleaning strategy triggered at 10 percent power loss for an array in Southern California would yield a 4 percent energy harvest increase and fully pay for a panel level monitoring system before considering any other benefits. The ability to perform zonal soiling analysis is particularly useful in large arrays, since array edges next to agricultural activity or a road can suffer much more from soiling, and there is no need to clean the entire array if only a portion is impacted.
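The zonal, threshold-based cleaning decision can be expressed in a few lines; the zone IDs and the modeled clean-array baseline are assumed inputs to this sketch:

```python
def zones_to_clean(zone_power, zone_baseline, threshold=0.10):
    """Schedule cleaning only for zones past the soiling threshold (the
    article's 10% power-loss trigger).

    zone_power:    dict of zone_id -> measured power
    zone_baseline: dict of zone_id -> modeled clean-array power

    An edge zone soiled by road dust or farm activity gets cleaned without
    washing the whole array.
    """
    return [zone for zone, power in zone_power.items()
            if 1 - power / zone_baseline[zone] >= threshold]
```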
Shade encroachment:
Most large scale arrays aren’t built with shading issues, but the ability to detect shading from encroaching vegetation (hedgerows and trees), understand the energy impact of such shade and make decisions on timely pruning can represent a meaningful financial improvement in some large arrays. In terms of detection, the voltage and current footprint mirrors that of heavy soiling, but it has a time-of-day component and a seasonal factor and is restricted to a tight spatial zone. Thus it is easy to differentiate from other, more static impairments.
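The time-of-day signature can be caricatured in a few lines (seasonal drift and spatial clustering omitted; the hour windows and the 2x factor are invented for illustration):

```python
def looks_like_shade(loss_series):
    """Crude shade-vs-soiling differentiator for one spatial zone.

    loss_series: list of (hour_of_day, fractional_loss) tuples.
    Encroaching shade recurs in the early or late hours and vanishes at
    midday; soiling is roughly flat across the day.
    """
    def avg(values):
        return sum(values) / len(values) if values else 0.0

    morning = avg([loss for hour, loss in loss_series if hour < 9])
    midday = avg([loss for hour, loss in loss_series if 11 <= hour <= 13])
    return morning > 2 * midday + 0.02
```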
Another interesting factor is that some impairments are amplified in low irradiance conditions, so the system can weight its analysis of certain defects toward low-light conditions, enhancing its ability to arrive at an accurate diagnosis.
Such a panel level monitoring system, backed by machine-based intelligent analysis and diagnosis, enhances large-site O&M by building a virtual network operations center (NOC). It enables real-time diagnosis of faults, logs all maintenance requirements and can dispatch resources when issues trigger predefined financial thresholds, using defined alert routing. Note that the array also knows when faults have been corrected, automatically clearing alerts and providing a foolproof mechanism for tracking the effectiveness of site O&M.
The advent of panel level electronics, combined with enhanced industry knowledge of array impairment characteristics and their effect on array output, presents a unique opportunity to redefine entitlement in terms of performance ratio. Precise machine-based monitoring removes the reliance on human vigilance, eliminates potential data overload, eliminates the time and effort spent hunting down issues, and maximizes the effectiveness of O&M dollars in mitigation and remediation.
The next article in this series will discuss how advanced panel level monitoring and selective optimization can be combined and scaled to create intelligent utility scale arrays that can push the envelope of performance ratio entitlement and enable next generation O&M models.