A smarter approach to asset inspection for electric utilities

Image by Joss Rogers from Pixabay

By Reynaldo Gomez

In the US alone, utilities own 185 million poles, and those assets are aging much faster than maintenance budgets are growing. Add in the effects of increasingly destructive weather events like wildfires and hurricanes, and it becomes clear that utilities must think smarter about how they inspect and maintain their grid assets.

There are no silver bullets here, but one powerful tool remains underutilized by utilities: computer vision. It uses artificial intelligence (AI) and accelerated computing to understand and contextualize image and video data, which can then be used to automate visual inspection of the grid and greatly streamline operations.

Of course, with the promise of AI comes a lot of hype, and I will not add to it in this article by implying that AI can solve all of humanity’s problems. Instead, I will give a brief overview of computer vision and discuss how utilities can leverage these solutions to increase grid resiliency.

A primer on computer vision

Let’s say you wanted to teach a computer to recognize a picture of a STOP sign. You might start by defining the shape of an octagon, the color red, the shapes of the letters ‘S’, ‘T’, ‘O’, and ‘P’, and the shape of a metal pole. But what if you’re looking at the sign from the right? Or the left? What if it’s leaning to the side at an angle? What if it’s nighttime and there isn’t much light?

It quickly becomes clear that hard-coding what even a simple STOP sign looks like is burdensome. AI takes a different approach. Instead of teaching a computer to recognize a STOP sign by hard-coding every possible detail, we feed it thousands of images of STOP signs and let a neural network learn to break those images down into something the computer understands.

While oversimplified, this is the essence of computer vision, and it rests on three necessary ingredients: neural networks, big data with labels, and the compute power to combine them. Labeled data for computer vision means you have an image AND a label explaining what it shows (e.g., STOP sign, dog, cat, transformer).
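To make this concrete, here is a minimal sketch, in PyTorch, of what combining those three ingredients looks like in practice: a pre-trained neural network, a folder of labeled images (the folder names serve as the labels), and a GPU if one is available. The folder layout and class names are illustrative assumptions, not a specific utility dataset.

```python
# Minimal sketch (illustrative, not production code): fine-tune a pre-trained
# network on labeled images. Folder names act as labels, e.g.
#   data/stop_sign/..., data/transformer/..., data/other/...
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"  # the compute ingredient

# Labeled data ingredient: every image sits in a folder named after its class.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("data/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Neural network ingredient: ResNet-50 pre-trained on ImageNet, with a new
# output layer sized to our own classes.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # a few passes over the labeled images
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Notice that nothing about octagons, the color red, or letter shapes is hard-coded anywhere; everything the model learns about STOP signs or transformers comes from the labeled examples.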

The basic math and theory of neural networks have been around since the 1980s, and big data arrived in the mid-2000s. The first time all three ingredients were combined was in 2012, when Alex Krizhevsky used NVIDIA GPUs to train the now-famous AlexNet neural network and opened the floodgates of modern computer vision.

Since then, we’ve seen a Cambrian explosion of AI-based computer vision models tackling a wide range of problems, from detecting cancer cells to spotting manufacturing defects. Today, core computer vision tasks like image classification and object detection are mature, well-understood technology.

What about electric utilities?

It’s great that healthcare, manufacturing, oil and gas, and other industries are deriving value from computer vision solutions, but how can utilities take advantage of the technology today?

There are a few exemplary companies offering off-the-shelf solutions that can be deployed today. Noteworthy.ai developed an ingenious device that pairs a stereo camera with an NVIDIA Jetson edge GPU and mounts them on top of utility trucks. Every time a truck rolls out for any reason, the device automatically detects grid assets, marks their GIS locations, inspects for defects (e.g. broken cross-arms, corrosion, vegetation overgrowth), and takes inventory of pole components.
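For illustration only, the sketch below shows the general pattern such a truck-mounted system follows: read frames from a camera, run an object detector on the edge GPU, and tag confident detections with a GPS fix. This is not Noteworthy.ai’s actual software; the COCO-pretrained detector and the get_gps_fix() helper are stand-ins, and a real deployment would use a model fine-tuned on utility asset classes plus an onboard GNSS receiver.

```python
# Hypothetical sketch of a truck-mounted inspection loop. Not Noteworthy.ai's
# actual implementation; the detector and GPS helper are stand-ins.
import cv2
import torch
from torchvision.models import detection
from torchvision.transforms.functional import to_tensor

model = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def get_gps_fix():
    # Placeholder: a real system reads from an onboard GNSS receiver.
    return 40.44, -79.99

cap = cv2.VideoCapture(0)  # roof- or dash-mounted camera
records = []

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        result = model([to_tensor(rgb)])[0]
    # Keep confident detections and tag them with the truck's location.
    for label, score in zip(result["labels"], result["scores"]):
        if score > 0.8:
            lat, lon = get_gps_fix()
            records.append({"class_id": int(label), "lat": lat, "lon": lon})

cap.release()
```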

FirstEnergy, a utility serving more than 6 million customers, used this technology in a pilot program to track utility poles across its 269,000 miles of distribution lines. The team collected more than 5,000 high-resolution images of its poles, expanding its image database fivefold and helping the utility avoid wasted truck rolls and deliver power safely. A larger pilot project bringing computer vision to other FirstEnergy business units is currently in progress, tracking things like streetlights and vegetation growth around power lines.

Utilities everywhere are investing in internal data science teams capable of developing computer vision models for inspection. They have all the necessary ingredients! Open-source neural networks like ResNet-50 or YOLOv3 are readily available and come pre-trained with the ability to detect common objects.
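As a quick illustration of what “pre-trained” buys you, the snippet below loads published COCO-trained detection weights from torchvision (a stand-in for models like YOLOv3) and lists a few of the roughly 80 everyday object categories they already recognize. Utility-specific classes such as transformers or cross-arms still require fine-tuning on your own data.

```python
# What an off-the-shelf, pre-trained detector already knows out of the box.
# COCO-trained weights ship with roughly 80 everyday object categories.
from torchvision.models import detection

weights = detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
print(weights.meta["categories"][:8])
# e.g. ['__background__', 'person', 'bicycle', 'car', 'motorcycle', ...]
```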

Existing drone-based inspection programs provide large datasets of video and images, checking the ‘big data’ box. Accelerated compute power from GPUs is readily available on-premises and in all major clouds. The biggest remaining challenge is the need to label data, a subjective and time-consuming task. In the past, there was no way around manual labeling. However, recent advances in synthetic data generation make it possible to create hundreds of thousands of pre-labeled, photo-realistic images that can be used to train accurate AI models in a fraction of the time.
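To see why synthetic data arrives “pre-labeled,” consider the toy example below: because the script places the asset in the scene itself, it knows the class and bounding box by construction, so no human ever has to draw a box. Real pipelines use 3D rendering with randomized lighting, viewpoints, and backgrounds rather than simple 2D compositing; the file names and the ‘transformer’ class here are assumptions for illustration.

```python
# Toy synthetic-data generator: paste an asset cutout onto a background and
# record the label and bounding box that are known by construction.
import json
import random
from PIL import Image

background = Image.open("backgrounds/roadside.jpg").convert("RGB")
cutout = Image.open("assets/transformer_cutout.png").convert("RGBA")  # transparent PNG

samples = []
for i in range(1000):
    canvas = background.copy()
    # Randomize scale and position to mimic different viewpoints.
    scale = random.uniform(0.3, 1.0)
    w, h = int(cutout.width * scale), int(cutout.height * scale)
    obj = cutout.resize((w, h))
    x = random.randint(0, canvas.width - w)
    y = random.randint(0, canvas.height - h)
    canvas.paste(obj, (x, y), obj)  # alpha-composite the asset onto the scene
    canvas.save(f"synthetic/img_{i:05d}.jpg")
    # The label comes for free: class plus bounding box, no human annotation.
    samples.append({"image": f"img_{i:05d}.jpg",
                    "label": "transformer",
                    "bbox": [x, y, w, h]})

with open("synthetic/labels.json", "w") as f:
    json.dump(samples, f, indent=2)
```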

By leveraging computer vision to automate asset inspection, utilities can drastically reduce O&M costs, increase the value of their drone programs, and improve the resiliency of a rapidly aging grid.


About the Author

Reynaldo Gomez earned his BS in Nuclear Physics from the University of Texas in 2013 and is now earning an MS in Management Science and Engineering from Stanford. He spent three years at Schlumberger WesternGeco as a geophysicist before moving to IBM and now sits on the Energy team at NVIDIA. Reynaldo manages the partner ecosystem for the energy vertical with a focus on machine learning, deep learning, and high performance computing.
