Intuitive Visualizations and Proactive Performance Tracking
By Shawn Fountain, BRIDGE Energy Group
Analytics is an exciting and dynamic area, given its potential to help with both tactical and strategic business challenges. One area where analytics is being leveraged is improving operational performance. The difficulty is finding the right tools to do so practically and cost-effectively. The following guidelines can help an organization gain maximum value from its analytics investments:
1. Improve operational performance by addressing specific business needs to increase reliability, reduce risk and improve customer and regulatory satisfaction.
2. Focus on the end result, including the user experience, and make results available via intuitive and concise visualizations.
3. Avoid noise by providing the right information in the fewest clicks via an intuitive and concise user interface.
4. Increase productivity by automatically highlighting and prioritizing issues.
5. Establish clear ownership and expectations by setting goals and tracking performance against those goals.
In many ways these are common sense recommendations. Applying them effectively and consistently, however, is more difficult than it would seem. In the race to leverage analytics, there is a real risk of skipping a reasonable process or being impractical, and therefore failing to achieve the expected value.
For the purposes of this article, the term “analytics” is used in a broad sense. Some make a distinction between analytics, analysis and visualization. In this article, analysis is considered to be users looking at data in reports, dashboards and other visualizations to determine the proper actions to take. The visualizations are the views themselves. Users may create ad hoc views that might later be formalized in dashboards and other reusable visualizations. Analytics feeds information to visualizations and analysis. Analytics can be defined as using math to identify or predict issues. For example, summing up missing reads to get an aggregate missing read rate for a month would not be analytics in the formal sense. Using standard deviation compared to a peer average to identify slow meters would be. In this article, “analytics” is used generically to mean the use of data to identify or predict issues, with or without formal math.
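To make the distinction concrete, the sketch below flags “slow” meters whose read rates fall well below the peer average, the kind of standard-deviation test that qualifies as analytics in the formal sense. It is a minimal illustration: the function name, the 2.0 z-score threshold and the sample data are assumptions, not drawn from any particular utility system.

```python
import statistics

def flag_slow_meters(read_rates, z_threshold=2.0):
    """Flag meters whose read rate is far below the peer average.

    read_rates maps meter_id -> fraction of expected reads received.
    Returns meters more than z_threshold standard deviations below the mean.
    """
    rates = list(read_rates.values())
    mean = statistics.mean(rates)
    stdev = statistics.stdev(rates)
    return [m for m, r in read_rates.items()
            if stdev > 0 and (mean - r) / stdev > z_threshold]

# Nine healthy meters and one that reports far less often than its peers.
rates = {f"M{i}": 0.98 for i in range(1, 10)}
rates["M10"] = 0.62
print(flag_slow_meters(rates))  # ['M10']
```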
Address Specific Business Needs
FIGURE 1: Simplified Analysis and Analytics Business Value Framework
In some ways, big data and data analytics can be treated as exploring new ways to look at things: fishing for as-yet-unidentified issues and experimenting with ways to better predict future outcomes. Such generalist pursuits might yield value not otherwise obtained via a more focused approach. To better ensure results add value, however, efforts need to be focused on specific business needs and associated use cases. A formal framework, like the one shown in Figure 1, is needed to determine the proper effort and tool investments for the highest priority business benefits.
Let’s take last gasps from AMI meters as an example. How do you justify providing access to those data sets for analysis and analytics? How might those data sets serve the core, high-level utility drivers of safety, reliability and customer satisfaction? Integrating last gasps with the outage management system (OMS) to reduce outage times and expedite customer communications could increase reliability and customer satisfaction. Many false positives and duplicate last gasps exist, however. Analytics could be used to determine and continuously fine-tune the filtering necessary to send the right information. It is therefore fine to leverage the last gasp data as long as integration with the OMS in the short to mid term is a reasonable expectation. If there is no such expectation of an actionable outcome, then decisions should be made on more near-term and actionable investments.
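As a rough illustration of what such a filter might look like, the sketch below deduplicates repeat gasps from the same meter and only escalates an outage candidate to the OMS once multiple meters on the same transformer corroborate it. The five-minute window, the two-meter threshold and the event format are illustrative assumptions, not the article’s design.

```python
from collections import defaultdict

# Illustrative tuning values, not utility standards.
DEDUP_WINDOW_S = 300   # ignore repeat gasps from the same meter within 5 min
MIN_METERS = 2         # require 2 meters on a transformer before escalating

def filter_last_gasps(events):
    """Filter raw last-gasp events into OMS-worthy outage candidates.

    events: iterable of (meter_id, transformer_id, timestamp_seconds).
    Returns transformer IDs with enough corroborating meters.
    """
    last_seen = {}                      # meter_id -> last accepted timestamp
    by_transformer = defaultdict(set)   # transformer_id -> corroborating meters
    outage_candidates = []
    for meter, transformer, ts in sorted(events, key=lambda e: e[2]):
        # Drop duplicates: the same meter often emits several gasps.
        if meter in last_seen and ts - last_seen[meter] < DEDUP_WINDOW_S:
            continue
        last_seen[meter] = ts
        by_transformer[transformer].add(meter)
        # Escalate a transformer exactly once, when corroboration is reached.
        if len(by_transformer[transformer]) == MIN_METERS:
            outage_candidates.append(transformer)
    return outage_candidates

# Meter A gasps twice (duplicate); meters A and B corroborate transformer T1.
events = [("A", "T1", 0), ("A", "T1", 30), ("B", "T1", 45), ("C", "T2", 50)]
print(filter_last_gasps(events))  # ['T1']
```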
FIGURE 2: Keys to a Good User Experience
Focus on the End Result
The end result should be based on who will use and act on what is delivered. For our purposes here, that means the screens end users actually click through to improve operational performance. The main goal is to provide the right information in the right manner. Key success factors for a good experience are delivering results in an intuitive way and fitting as much relevant information as possible on each screen. Utilities and their consultants have the knowledge to determine the right information. Utilities and many utility-focused consultancies, however, often do not have the human factors skills to design a good user experience. As Figure 2 illustrates, to better ensure success, organizations should focus on the end result early in each project, including creating detailed mockups, which can also validate the visualization tool selection.
FIGURE 3: Formal Summary Screens vs. Additional Department-Specific Screens
Avoid Too Much Noise
Tremendous amounts of valuable data are available, and there are many valuable ways to look at that data. Making everything available to large numbers of people, however, can yield diminishing returns. An organization should fix the number of formal summary screens per area, and that number should be small. Other screens can be added outside the formal framework if needed. It is best, however, to have a primary, limited set of visualizations (Figure 3) that will be informative for the largest number of people. More specialized groups can have additional visualizations for their needs.
Significant time can be spent iterating on how best to fit and visualize the information on each screen and across screens. In many situations we have changed the originally planned visualization to make the information fit on the given screen. It is easy to put one view on a screen; it is another matter to fit four or more views on the same screen without scrolling. Other things to consider include omitting labels or graph keys to reduce clutter; after an initial review, users often no longer need them. The tradeoff is more upfront review in exchange for a cleaner user interface over the long term. The best approach is to question every item on the screen to confirm it needs to be there.
FIGURE 4: Visualization Screen of Forecasts vs. Actuals
Automatically Highlight Issues
With the variety of visualization tools available today, a rich and user-friendly view can be produced quickly via a drag-and-drop user interface. Getting multiple views on one screen with reasonable results is more problematic, for human factors design reasons rather than tool limitations. Many tools support tolerance-based color-coding. The tolerances are generally hardcoded, however, unless complex configuration or coding workarounds are applied. Alternatively, the color-coding might be relative to the data set rather than some other baseline. In such cases, an entity (e.g., a meter) with more issues than other entities will be color-coded red, even if its number of issues is below the tightest tolerance band and therefore not worth the user’s attention. This type of color-coding is static color-coding.
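A hardcoded, tolerance-based scheme of the kind most tools support might look like the sketch below; the 2 percent and 5 percent bands are illustrative assumptions, not industry standards.

```python
# A minimal sketch of static, tolerance-based color-coding.
# The hardcoded 2% / 5% tolerance bands are illustrative.
def static_color(issue_rate):
    if issue_rate < 0.02:
        return "green"
    if issue_rate < 0.05:
        return "yellow"
    return "red"

print(static_color(0.01))  # green
print(static_color(0.08))  # red
```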
Another type of color-coding, dynamic color-coding, applies more sophisticated rules. In many situations, dynamic color-coding is beyond the reach of the typical user or even super users. An example is highlighting when the number of issues falls outside an expected range for the base population, where that population is changing size (e.g., electric hookups in a new neighborhood).
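One way to implement such a rule, sketched below under stated assumptions, is a p-chart-style control limit that widens for small populations and tightens as the population grows. The three-sigma limit, the 3 percent baseline rate and the sample calls are illustrative choices, not the article’s specification.

```python
import math

# A minimal sketch of dynamic color-coding using p-chart control limits,
# assuming each entity reports (issue_count, population_size).
def dynamic_color(issue_count, population, baseline_rate):
    rate = issue_count / population
    # The limit widens for a small, growing population (e.g., a new
    # neighborhood) and tightens as the base population stabilizes.
    sigma = math.sqrt(baseline_rate * (1 - baseline_rate) / population)
    upper = baseline_rate + 3 * sigma
    return "red" if rate > upper else "green"

# A 5% issue rate is normal for 40 new hookups but abnormal for 4,000.
print(dynamic_color(2, 40, 0.03))     # green: within the wide limit
print(dynamic_color(200, 4000, 0.03)) # red: outside the tight limit
```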
Clear Ownership and Goals
It’s been said often that success is better ensured when there are clear goals and accountability. Executing on that, however, is not always successful. Visualization tools make it far easier to combine in one place the operational activities (the core visualizations) and the tracking of performance against the goals of the underlying investment.
As an example, let’s look at a project delivering a solar forecasting solution that leverages an analytics engine. The core requirement is to produce the solar forecast. There could easily be multiple visualizations showing the forecast aggregated up the electric grid (e.g., transformer, lateral, feeder, substation). The visualizations could show the forecast vs. actuals for the same period (see Figure 4). They could also incorporate a geospatial view to allow a user to quickly see where there was more or less solar kWh (compared to nameplate) and where the forecast differed from actuals. This allows the user to identify situations where geographical drivers lead to differing solar output, greater disparity between forecast and actuals, or both.
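Rolling a meter-level forecast up the grid hierarchy is conceptually simple; the sketch below shows one way it might work. The field names and sample rows are assumptions for illustration, not a real data model.

```python
from collections import defaultdict

# Each meter row carries its position in the grid hierarchy (illustrative).
rows = [
    {"meter": "M1", "transformer": "T1", "feeder": "F1", "substation": "S1", "kwh": 4.0},
    {"meter": "M2", "transformer": "T1", "feeder": "F1", "substation": "S1", "kwh": 3.5},
    {"meter": "M3", "transformer": "T2", "feeder": "F1", "substation": "S1", "kwh": 5.5},
]

def aggregate(rows, level):
    """Sum forecast kWh at a grid level (transformer, feeder or substation)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[level]] += row["kwh"]
    return dict(totals)

print(aggregate(rows, "transformer"))  # {'T1': 7.5, 'T2': 5.5}
print(aggregate(rows, "substation"))   # {'S1': 13.0}
```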
Many would be satisfied with this functionality, which would be valuable. There could be more value, however, if performance were tracked and reported in the same visualization area. For example, the forecast accuracy goal and the actual forecast accuracy could be displayed. In addition, a geospatial view could be added to pinpoint the need for different forecast models across the service territory.
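Tracking accuracy against a goal can be as simple as the sketch below, which compares a mean absolute percentage error (MAPE) against a target. The 10 percent goal, the choice of metric and the sample data are illustrative assumptions, not from the article.

```python
def mape(forecasts, actuals):
    """Mean absolute percentage error across paired forecast/actual values."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

ACCURACY_GOAL = 0.10  # e.g., forecasts within 10% of actuals on average

forecast_kwh = [120.0, 95.0, 150.0, 80.0]
actual_kwh = [110.0, 100.0, 140.0, 90.0]

error = mape(forecast_kwh, actual_kwh)
status = "meeting goal" if error <= ACCURACY_GOAL else "missing goal"
print(f"MAPE {error:.1%}: {status}")  # MAPE 8.1%: meeting goal
```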
Conclusion
Analytics can bring tremendous value toward materially improving operational performance. In the race to leverage analytics, utilities should avoid the risks of being impractical, losing focus on specific business benefits, and delivering functionality that is not actionable. To deliver on the promise of analytics, utilities must set clear goals and expectations, follow a formal process to assess and decide on analytics investments, and focus upfront on the user experience to realize benefits across the entire organization.
Shawn Fountain is principal consultant at BRIDGE Energy Group. He has more than 15 years of experience in the energy space with a focus on software implementations. Fountain is well versed in business process mapping and organizational transformation. He has deep experience in project management and project oversight roles and frequently serves as a subject matter expert for requirements analysis for software implementations.