What clean energy developers need to know about hosting capacity maps

A hosting capacity map is a visualization of available hosting capacity based on data from a hosting capacity analysis (HCA). (Courtesy: NREL)

Contributed by David Bromberg, Ph.D., Co-Founder and Chief Executive Officer, Pearl Street Technologies

With interconnection being the #1 generation and storage project killer, developers are turning to the popular idea of pre-screening potential project locations to identify sites with low prospective grid upgrade costs. This is commonly done via a “hosting capacity map.” Hosting capacity maps are literal maps of the grid with an estimated injection capacity at all (or a subset of) potential points of interconnection. The idea is simple: if the injection point has the capacity for 100 MW of new generation, then you can build a 100 MW project and incur zero grid upgrade costs. If you go above 100 MW, your project contributes to grid reliability issues and gets cost-allocated for the upgrades needed to fix them. Hosting capacity maps have gained so much traction as a possible way to improve interconnection success that even FERC is insisting that grid operators and utilities offer them for free on their public websites.

So, interconnection is a solved problem now, right? A hosting capacity value is calculated, a developer submits a project at or below that capacity, and it survives the interconnection process at zero cost.

Unfortunately, reality is a bit more complicated.


How hosting capacity values are computed

The goal of hosting capacity analysis is to identify the maximum project size for which no grid upgrades would be identified in an interconnection study. To do that, you have to run an analysis akin to an interconnection study (a simplified code sketch follows the list below):

  • First, you need to build power flow models in which all existing projects in the queue, except those in the open cluster you’re prospecting, are dispatched as prior-queued units following the relevant grid operator’s or utility’s study practices.
  • For a new project at a given POI, you then need to conduct an impact study assessment based on relevant study procedures, such as a DC contingency analysis.
  • Distribution factors are then used to compute the impact of the new project on existing facilities in the grid, such as the power flows on transmission lines and transformers.
  • The next step is to incrementally increase the size of the new project until you reach the maximum hosting capacity of the POI. You’ve reached the maximum hosting capacity when any further increase causes an overload on some line, in some contingency, in any of the power flow models, and the project meets whatever allocation-criteria threshold the grid operator/utility uses to deem it responsible for that overload.
  • You then have to repeat these steps to compute a hosting capacity value for each POI you’re prospecting.
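
To make the mechanics of that sizing sweep concrete, here is a minimal Python sketch. The distribution factors (PTDFs), base-case flows, line ratings, and the 20% allocation threshold are all hypothetical placeholders invented for illustration, not any grid operator’s actual study criteria; a real study would use full network models, all monitored facilities and contingencies, and the operator’s documented procedures.

```python
# Minimal, illustrative hosting-capacity sweep for a single POI.
# All data (PTDFs, base flows, ratings) and the allocation threshold
# are hypothetical placeholders, not real study criteria.
import numpy as np

def hosting_capacity_at_poi(ptdf, base_flows, ratings, alloc_threshold=0.2,
                            step_mw=10.0, max_mw=1000.0):
    """Increase injection at one POI until a monitored line overloads AND the
    project's distribution factor on that line exceeds the allocation threshold.

    ptdf       : (n_contingencies, n_lines) distribution factors of the POI
                 on each monitored line, per contingency case
    base_flows : (n_contingencies, n_lines) pre-project flows in MW
    ratings    : (n_lines,) line ratings in MW
    """
    size = 0.0
    while size + step_mw <= max_mw:
        trial = size + step_mw
        flows = base_flows + ptdf * trial            # superpose project impact
        overloaded = np.abs(flows) > ratings         # any line, any contingency
        responsible = np.abs(ptdf) >= alloc_threshold
        if np.any(overloaded & responsible):
            return size                              # last size with no triggered upgrade
        size = trial
    return size

# Toy example: 2 contingencies x 3 monitored lines
ptdf = np.array([[0.35, 0.10, 0.05],
                 [0.50, 0.12, 0.02]])
base_flows = np.array([[80.0, 40.0, 10.0],
                       [90.0, 45.0, 12.0]])
ratings = np.array([120.0, 100.0, 50.0])

print(hosting_capacity_at_poi(ptdf, base_flows, ratings))  # prints 60.0 for this toy data
```

In practice this loop would be wrapped in an outer loop over every POI being prospected and every power flow case, which is exactly why the computation is expensive to keep current.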

Set it and forget it? Not a good strategy

Congratulations! You have computed the hosting capacity for POIs across the grid. Take a moment to reward yourself, and then re-compute everything. Once you’re done with that, start over. We’re being somewhat facetious here, but this points to one of the most common complaints about hosting capacity maps: the data gets stale very quickly. If you compute hosting capacity values based on the state of the queue at one point in time and never re-run those computations as the queue changes, the values become very inaccurate.

We’ve pointed out in a previous article how quickly the queue can change (see Figure 1 below for a count of changes in Southwest Power Pool’s queue broken out by type of change and month, showing roughly 11 material changes per month). If there’s a 100 MW project at a POI, and it withdraws, what is the new hosting capacity at that and nearby POIs? If a 100 MW project enters the queue ahead of you at the POI you wanted, what is the new hosting capacity? If prior-queued projects enter interconnection agreements and fund the construction of new grid upgrades, how does that impact the hosting capacity? These are questions that are difficult to answer from stale data.

Figure 1: SPP Queue Changes from April-December 2022

When popularity is a problem

Yogi Berra – perhaps just as famous for his confusing witticisms as he is for his baseball career – was once asked why he stopped going to a restaurant in St. Louis. His reply? “Nobody goes there anymore. It’s too crowded.” We’ve all seen this in action in some form: word gets out about a favorite restaurant, a little-known shortcut, a local musician, etc., and suddenly what was once obscure is now hugely popular.

Hosting capacity maps are really no different. If one developer was in the know about 100 MW of capacity at a point of interconnection, they could theoretically submit a 100 MW project into the queue and make it through (barring changes in the queue that could negatively affect the project, as noted above). If 100 developers see a potential point of interconnection that could support 100 MW of capacity and they all submit a 100 MW project, who wins? Who knows. But certainly, not everyone who trusted the hosting capacity map.

This is related to a slightly more subtle issue. With a one-size-fits-all solution, every developer sees the same information and cannot test their own assumptions. If you have a pretty good idea of what projects from the queue might withdraw and you want to see how that changes the hosting capacity values, you’re stuck. The data is what it is, it’s static, and you can’t inject any assumptions.

So what is hosting capacity good for?

For all the issues noted above, it may seem like hosting capacity maps have no value. That isn’t totally fair. Depending on how realistic the assumptions were in generating the map, how up-to-date the information is, and how much other information you’re combining the data with (such as historical success metrics for a given POI), hosting capacity information could be a good starting point for assessing the quality of a project location from an interconnection perspective. You just need to take the data with a grain of salt (perhaps a shaker of salt).


A set-it-and-forget-it system vs. set-it-and-forget-it data

The key drawbacks we’ve discussed in this article are how quickly hosting capacity data can become stale and that it doesn’t allow developers to test their own assumptions. This is what happens with “set-it-and-forget-it” data, where you define your assumptions, run the calculations, and leave the data up in perpetuity. But imagine instead you had a “set-it-and-forget-it” system for computing hosting capacity. After you define your assumptions, such as what generators from the queue you want to include in the analysis, you trigger an automated process to build the underlying grid models and run the analysis on the fly. With such a system, you don’t have to worry about data going stale or being the same for everyone. If you can compute hosting capacity maps on demand based on the most recent inputs and your assumptions, you can update your map at any time.
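
As a rough illustration of what such an on-demand workflow could look like, here is a hedged sketch. The Assumptions structure and the latest_queue, build_cases, run_hosting_capacity, and refresh_map functions are all placeholders invented for this example, standing in for whatever tooling actually pulls the queue, builds the grid models, and runs the analysis.

```python
# Sketch of an on-demand hosting capacity pipeline: developer assumptions in,
# freshly computed values out. All names and data here are placeholders.
from dataclasses import dataclass, field

@dataclass
class Assumptions:
    exclude_queue_projects: set[str] = field(default_factory=set)  # assumed to withdraw
    include_upgrades: set[str] = field(default_factory=set)        # assumed in service

def latest_queue() -> list[str]:
    # Placeholder: in practice, pull the grid operator's current queue.
    return ["GEN-001", "GEN-002", "GEN-003"]

def build_cases(queue: list[str], assumptions: Assumptions) -> list[str]:
    # Placeholder: build power flow cases with prior-queued projects dispatched,
    # minus any projects the developer assumes will withdraw.
    return [p for p in queue if p not in assumptions.exclude_queue_projects]

def run_hosting_capacity(cases: list[str], pois: list[str]) -> dict[str, float]:
    # Placeholder: per-POI sweep like the earlier sketch; fixed values here.
    return {poi: 100.0 for poi in pois}

def refresh_map(pois: list[str], assumptions: Assumptions) -> dict[str, float]:
    """Rebuild models and recompute hosting capacity from the latest inputs."""
    cases = build_cases(latest_queue(), assumptions)
    return run_hosting_capacity(cases, pois)

# Re-run whenever the queue changes or the developer changes an assumption.
print(refresh_map(["POI-A", "POI-B"], Assumptions(exclude_queue_projects={"GEN-002"})))
```

The point of the sketch is the design choice, not the code: because the models are rebuilt from the latest inputs each time, the map reflects the current queue and each developer’s own assumptions rather than a single frozen snapshot.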

Key takeaways

Hosting capacity maps can be a handy tool for getting a rough, initial idea of a project’s interconnection viability. But if the map is static, goes stale, or doesn’t allow you to test new assumptions that could impact the hosting capacity results, take the map with a grain of salt: it may give you a false sense of confidence in your project’s success.
