By David L Haak, global lead of enterprise architecture, Accenture Smart Grid Services
Data is the fundamental currency of the smart grid, and it will hit utility systems in a deluge. For context, think of grid data in terms of famous novels. If, every second, a legacy grid produces the data equivalent of one copy of A Tale of Two Cities by Charles Dickens, then a smart grid can produce the equivalent of some 846 copies of Leo Tolstoy’s War and Peace, a novel more than four times longer than the Dickens tome, according to playful calculations by Accenture staff.
Meanwhile, utilities have high hopes for smart grid performance. They’re hoping a smarter grid will help them tackle a host of problems, including an aging workforce, carbon concerns, reliability issues, demand growth, capacity constraints, competition and more.
To tackle these issues, the grid — and the utility’s business as a whole — must become more observable, controllable, automated and integrated. The smart grid also must facilitate improved asset and work management, as well as the integration of renewable energy sources, distributed generation and storage as components of the supply mix. And it will, provided utilities plan for it.
The smart grid uses sophisticated sensing, embedded processing, digital communications and software designed to generate, manage and respond to network-derived information. But the data that make the grid smart also make it difficult to manage, which means utilities must implement data management solutions to turn a potentially bewildering flood of data into useful operational information. To do this, utilities and their stakeholders need a holistic view of the data components and characteristics.
This view starts with clear understanding of how smart grid data are generated, what they consist of and the benefits they can deliver. Optimally, utilities will design smart grid functions with business objectives in mind, rather than designing a grid first and then seeking potential benefits after the fact.
In general, data management design should extract clean, consistent and well-understood information that drives targeted benefits for the business. In addition, utilities must ensure that grid data are governed, readily measurable and observable.
Since utilities historically have been unable to observe power distribution grids, developing a true smart grid requires the creation of an explicit grid-observation strategy. Parts of this strategy development already exist in most utilities, but the design will need to “close the loop” to optimize grid performance on a continual basis.
Creating such a strategy requires a solid understanding of master data, as well as the nature and flow of smart grid data through the organization. Data are initially generated by network devices such as meters and sensors, before being transported for storage and processing by various applications: the persistence phase.
Then they are transformed into actionable operations-oriented information for network and technical analysis, requiring new visualization capabilities. Finally, the resulting analytics suited to non-real-time operational consumption are integrated at the enterprise level to drive strategic decision making.
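To make the flow concrete, here is a minimal sketch that models the four phases (generation, persistence, transformation into operational views, and enterprise integration) as simple functions. All names and structures here are illustrative assumptions, not any particular utility system’s API.

```python
# A minimal sketch of the four-phase smart grid data flow described above.
# SensorReading, persist, to_operational_view and publish_enterprise are
# illustrative assumptions, not a product API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorReading:
    device_id: str        # meter or sensor that generated the data
    metric: str           # e.g., "voltage", "real_power"
    value: float
    timestamp: datetime

store: list[SensorReading] = []   # stand-in for a historian/database

def persist(reading: SensorReading) -> None:
    """Phase 2: transport and store raw network data."""
    store.append(reading)

def to_operational_view(metric: str) -> dict:
    """Phase 3: transform raw readings into operations-oriented information."""
    values = [r.value for r in store if r.metric == metric]
    return {"metric": metric, "count": len(values),
            "avg": sum(values) / len(values) if values else None}

def publish_enterprise(view: dict) -> None:
    """Phase 4: surface non-real-time analytics for strategic decisions."""
    print(f"[enterprise dashboard] {view}")

# Phase 1: a device generates data.
persist(SensorReading("meter-0042", "voltage", 239.6,
                      datetime.now(timezone.utc)))
publish_enterprise(to_operational_view("voltage"))
```

In a real deployment, the in-memory list would be a historian or meter data store, and the transformation step would be a technical analytic rather than a simple average.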
What actually makes up smart grid data? There are five separate classes of smart grid data, each with its own unique characteristics; a sketch of how the classes might be represented in software follows the list.
1. Operational data — Representing the electrical behavior of the grid, these include data such as voltage and current phasors, real and reactive power flows, demand response capacity, distributed energy capacity and power flows, plus forecasts for any of these data items.
2. Non-operational data — Reflecting the condition and behavior of assets, these include master data, data on power quality and reliability, asset stressors and usage, and telemetry from instruments not directly associated with grid power delivery.
3. Meter data — These data show total power usage and demand values such as average, peak and time of day. They don’t include items such as voltages, power flows, power factor or power quality data, which are sourced at meters but fall into other data classes.
4. Event message data — Comprised of asynchronous event messages from smart grid devices, these data include meter voltage loss/restoration messages, fault detection event messages and event outputs from various technical analytics.
5. Metadata — These overarching data are necessary to organize and interpret all the other data classes. Metadata include information on grid connectivity, network addresses, point lists, calibration constants, normalizing factors, element naming, and network parameters and protocols.
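To make the classification concrete, the following sketch — a minimal illustration rather than a reference design — tags records by data class and routes each to an assumed downstream consumer. The class names mirror the list above; the routing targets are placeholders.

```python
# A minimal sketch, assuming nothing beyond the five classes named above,
# of how records might be tagged by data class and routed downstream.
from enum import Enum, auto

class SmartGridDataClass(Enum):
    OPERATIONAL = auto()      # voltage/current phasors, power flows, forecasts
    NON_OPERATIONAL = auto()  # asset condition, power quality, stressors
    METER = auto()            # usage and demand values (average, peak, time of day)
    EVENT_MESSAGE = auto()    # asynchronous device events (outage, fault)
    METADATA = auto()         # connectivity, point lists, calibration constants

# Illustrative routing table: which downstream system consumes each class.
# The consumer names are assumptions for the example.
ROUTING = {
    SmartGridDataClass.OPERATIONAL: "scada_historian",
    SmartGridDataClass.NON_OPERATIONAL: "asset_management",
    SmartGridDataClass.METER: "meter_data_management",
    SmartGridDataClass.EVENT_MESSAGE: "event_processor",
    SmartGridDataClass.METADATA: "master_data_repository",
}

def route(record_class: SmartGridDataClass) -> str:
    return ROUTING[record_class]

assert route(SmartGridDataClass.EVENT_MESSAGE) == "event_processor"
```

In practice, each class would map to its own persistence architecture and latency tier, a point developed later in this article.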
Utilities face significant challenges across all five classes in applying smart grid data to their processes. Since raw data from smart grid devices and systems aren’t directly usable or even comprehensible, they need to be transformed into useful information before they can be acted upon.
Further complications include the need for some information to be used directly by automated systems, while other information must be presented to people in forms they can easily understand. Data also must be used on many different time scales, with cycle times ranging from milliseconds to months. And, information must be managed in a way that matches the local industry structure and regulatory requirements.
Given these factors, most utilities face four major data management challenges in developing smart grids.
The first is in matching the data acquisition infrastructure to the required outcomes. This includes decisions around issues such as the number, kind and placement of data measurement devices, the use of communication networks and data collection engines, and the chosen data persistence architectures.
The second challenge is in learning to apply new tools, standards and architectures to manage grid data at scale. This involves pursuing the development and adoption of new open standards for interoperability, creating and managing distributed data architectures, and applying new analytics tools to make sense of the flood of data.
The third challenge is transforming processes throughout the business to take advantage of smart grid technology. Accenture research suggests that smarter grids will impact up to 70 percent of retail/customer processes and transmission and distribution processes.
The fourth challenge is managing master data to enable the benefits of smart grid capabilities. As utilities enhance the customer experience through channel management, outage notifications and energy advice, effective master data management acts as the central nervous system that fosters success and growth.
Considering the deluge of smart grid data ahead, what approach should utilities take? First, utilities should ensure that the five data classes previously highlighted are reflected in the data integration architecture. Second, they must use the right analytics to turn the mass of data into usable information and business intelligence.
If designed properly, the data architecture will provide the capabilities utilities will need to deal with future change and evolution in their smart grids and business environment. To do this, the architecture will need to include not just data stores, but also elements such as master data management, services and integration buses to share data and information effectively.
Smart grid data architecture must provide a sound platform on which to apply relevant and sophisticated data analytics. Grid data are simply too voluminous for people to comprehend directly, and a large share will be used by systems without human intervention. Technical analytics are critical software tools and processes that transform raw data into useful, comprehensible information for operations decision making.
To date, Accenture has catalogued more than 200 smart grid analytics, grouped into several classes of technical analytics, such as:
- Electrical and device states (including traditional, renewables and distributed energy resources)
- Reliability and operational effectiveness (system performance)
- Asset health and stress (for asset management)
- Asset use (e.g., transformer loading)
When incorporating analytics into the data management design, two major considerations are data time scales (latency) and volume scalability. Due to varying application requirements, some analytics must be available at high speed and with low latency (milliseconds), primarily at the level of grid sensors and devices.
Others fall into the seconds-to-minutes range, including those for operational processes such as operational efficiency verification, real-time use optimization (load balancing) and outage management. Still others play out over hours, days, weeks or months.
To incorporate varying levels of latency accurately into the data management architecture, utilities should construct a data latency hierarchy. This enables data to be treated and analyzed differently on the basis of latency and applicability, ranging from the lowest-latency data, where real-time technical analytics feed into protection and control systems, to the highest-latency data, where operational analytics can feed into business intelligence management dashboards and reporting.
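One way to picture such a hierarchy is as a small ordered configuration, as in the sketch below. The tier boundaries and consumer names are illustrative assumptions, not recommended values.

```python
# A minimal sketch of a data latency hierarchy with four illustrative tiers.
# Boundaries and consumers are assumptions made for the example.
from dataclasses import dataclass

@dataclass(frozen=True)
class LatencyTier:
    name: str
    max_latency_s: float   # upper bound on acceptable end-to-end latency
    consumer: str          # where analytics at this tier feed

HIERARCHY = [
    LatencyTier("protection_control",    0.01,         "protection and control systems"),
    LatencyTier("operations",            60.0,         "outage and load management"),
    LatencyTier("operational_planning",  86_400.0,     "engineering analysis"),
    LatencyTier("business_intelligence", float("inf"), "management dashboards"),
]

def tier_for(latency_s: float) -> LatencyTier:
    """Pick the lowest tier whose bound accommodates the data's latency."""
    for tier in HIERARCHY:
        if latency_s <= tier.max_latency_s:
            return tier
    return HIERARCHY[-1]

print(tier_for(0.005).consumer)   # -> protection and control systems
print(tier_for(3_600).consumer)   # -> engineering analysis
```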
There also are a number of techniques utilities can use to drive benefits from the smart grid. One is complex event processing, a computing approach that continually runs static queries against multiple dynamic data streams. This enables a utility to manage the bursts of asynchronous event messages generated by smart grid devices and systems when an event (usually a problem) arises on the grid.
Complex event processing is not widely used in the utility industry, and it is a fundamentally different approach from the standard transaction management used universally today. However, it has proven to scale in other industries, such as financial services and airlines.
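As an illustration of the pattern, the sketch below runs one static query — several voltage-loss events from distinct meters within a short window implies a probable feeder outage — continually against a dynamic stream of event messages. The window length, threshold and event shape are assumptions made for the example.

```python
# A minimal complex event processing sketch: one standing query evaluated
# against a dynamic event stream. Threshold and window are illustrative.
from collections import deque

WINDOW_S = 60.0      # sliding window length in seconds
THRESHOLD = 3        # distinct meters reporting loss before we flag an outage

window: deque[tuple[float, str]] = deque()   # (timestamp, meter_id)

def on_event(ts: float, meter_id: str, kind: str) -> None:
    """Called for every asynchronous event message as it arrives."""
    if kind != "voltage_loss":
        return
    window.append((ts, meter_id))
    # Evict events that have slid out of the window.
    while window and ts - window[0][0] > WINDOW_S:
        window.popleft()
    if len({m for _, m in window}) >= THRESHOLD:
        print(f"probable outage detected at t={ts:.0f}s "
              f"({len(window)} loss events in {WINDOW_S:.0f}s)")

# Simulated burst of asynchronous event messages from smart meters.
for i, t in enumerate([0.0, 2.5, 4.1]):
    on_event(t, f"meter-{i}", "voltage_loss")
```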
Another valuable capability to consider is visualization, effectively a direct extension of analytics for the human eye and brain. By replacing hard-to-understand columns of streaming numbers with well-considered graphic depictions integrated from multiple sources, visualization platforms can provide instant comprehension and avoid “swivel-chair integration,” the process in which a human user re-keys information from one computer system to another.
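As a minimal illustration, assuming the matplotlib plotting library is available, the sketch below integrates two synthetic sources — a feeder voltage trace and device event markers — into a single view rather than two separate screens.

```python
# A minimal visualization sketch: two data sources on one integrated view.
# The data here are synthetic and for illustration only.
import matplotlib.pyplot as plt

seconds = list(range(0, 60, 5))
voltage = [240 - 0.3 * s for s in seconds]   # synthetic sag over one minute
event_times = [25, 40]                       # synthetic device events

fig, ax = plt.subplots()
ax.plot(seconds, voltage, label="feeder voltage (V)")
for t in event_times:
    ax.axvline(t, linestyle="--", label=f"device event @ {t}s")
ax.set_xlabel("time (s)")
ax.set_ylabel("voltage (V)")
ax.set_title("Integrated operational view (synthetic data)")
ax.legend()
plt.show()
```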
Accenture’s experience with smart grid data management has led to the following list of key best practices for developing and implementing smart grid solutions:
1. Recognize smart grid data classes and their characteristics to develop comprehensive smart grid data management and governance capabilities.
2. Consider how data sources can support multiple outcomes via analytics and visualization to realize the maximum value from the sensing infrastructure.
3. Consider distributed data, event processing and analytics architectures to help resolve latency, scale and robustness challenges.
4. Holistically consider the smart grid challenge when planning data management, analytics and visualization capabilities — not just advanced metering infrastructure — to avoid stranded investments or impaired capabilities.
5. Design data architectures that leverage quality master data and match data classes to analytics and application characteristics. A single giant data warehouse is rarely maintainable.
6. Look to new tools such as complex event processing to handle the challenges of processing new classes of data. Managing the smart grid data deluge with traditional transaction processing approaches is unlikely to scale.
7. Develop business process transformation plans at the same time as, and in alignment with, smart grid designs.
Following these leading practices can improve a utility’s chances of reaping optimal long-term returns from its smart grid investment.