By: David Riester
It’s mainstream knowledge that the gating factor for AI data center buildouts is, broadly speaking, power. From there, the discourse gets clumsy, disjointed, and often just plain wrong. I want to propose a framework for talking about data centers and power to improve decision making at both i) the societal/policy level, and ii) the infrastructure deployment level. Specifically, I think we need to call out three specific bottlenecks and solve problems through that lens:
1. Red Tape (of Connecting to Grid) – Specifically the large-load interconnection queue process. This is the sticky spot the DOE is aiming to address with its Advance Notice of Proposed Rulemaking directive to FERC [1].
2. Transmission/Deliverability – Simply the grid’s ability to deliver electrons to an aspiring data center. Often there is enough power nearby, but there is no way to get it to the data center’s point of interconnection (POI). This is the category in play when leaders talk about “investing in America’s transmission system”.
3. Power Scarcity – Is there enough electricity in a given area to support data center additions, without cannibalizing other users or causing regional price spikes?
Media coverage (and the actions of our federal government) tends to hammer on #1 and #2 - the red tape and the need to invest in America’s transmission infrastructure. Where the topic of plain old power sufficiency comes up, it’s usually in the context of facilitating colocated generation. I’m sure we’re all tired of the “Power-starved AI Industry Finding Solutions In The Most Unlikely of Places: Right Next Door” [barf] articles. Perhaps the simplicity of the power sufficiency problem renders it boring journalism – or perhaps folks fear that saying “solar” or “wind” out loud will get them on some sort of list. Whatever the reason, until last autumn few were talking about #3 much at all. It’s futile to pull out all the stops on the i) “red tape” and ii) grid upgrade fronts while ignoring the very basic electron shortage America increasingly suffers from.
Chris Wright’s DOE “ANOPR” to FERC (October 23rd) serves as a good launching point. It directly addresses number 1 (red tape), and I’m optimistic that the “final rule” will prove effective. Anyone in the data center development sphere knows that the current process by which large loads secure interconnection to the grid is long and opaque. Every ISO and/or utility does it a little differently, low barriers-to-entry cause comically long and chaotic queues, and there’s virtually no actionable information on costs or timing. So, while I’m sure the final rule will be imperfect, the DOE is pushing the right buttons with the ANOPR. FERC has a big task in front of it, as revealed by the strong resistance that emerged during the docket comment period, which fell into two major categories. The first: state-level authorities complaining of federal power usurpation. The second: utilities/RTOs noting the practical impossibility of properly studying and readying the physical grid for large data center interconnections in 60 days. Both complaints have merit, particularly the second. Nevertheless, I’m optimistic that FERC will find a palatable landing spot that accomplishes much of what the DOE order set out to do.
Yet, right on cue, in the ANOPR’s wake came a wave of appeals for more power from the artificial intelligence and hyperscaler communities, beginning with OpenAI’s letter to the White House Office of Science and Technology Policy [2]. A few excerpts that leapt out to me:
- “[The] limits on how much electricity the US can generate to power AI development threaten both our ability to seize this once-in-a-century opportunity, and our advantage on the most consequential technology since electricity itself.”
- "For the US, unlocking electrons will unlock our greatest national economic opportunity since electricity drove the latter half of the Industrial Age. Electricity is a strategic asset”
It pleads for letting open markets dictate which sources of power best address current demand, and shines a light on the United States’ recent power capacity additions relative to China’s (51 GW vs. 429 GW in ’24).
- “ ...we recommend that OSTP prioritize closing the ‘electron gap’ between the US and PRC”
- “[we recommend] setting an ambitious national target of building 100 GW a year of new energy capacity”
In the context of the severe natural gas turbine shortage and lead time problems - neither of which is about to change (more on that below) - it doesn’t take a power aficionado to infer that OpenAI is inviting the federal government to back off and let solar and wind capacity - the cheapest, fastest-to-market sources of power - come online (“encourage…the federal government to step back and let markets work”). That including the words “solar” and “wind” in the letter was too big a political risk for OpenAI is a tempting topic to dive into, but to avoid this unraveling into political bloviating, we’ll leave that for another day.
Indeed, we can all go blue in the face trying to cut the red tape, find ways for AI to jump the queue, and fast track permitting, yet still find many would-be data centers sidelined for sheer lack of electricity. Or, in a different variation of the same problem…sidelined because IF those same data centers get built, local electricity prices would spike intolerably.
The back half of this paper will take a hard look at our evolving, deepening power shortage problem in the context of current market (and yes, political) realities. But before diving into that, we should acknowledge the complexity of the broader power bottleneck problem, and the other two varieties thereof. Because, while I do very much want to shine a spotlight on the elephant in the room - power shortages - I don’t want to trade one oversimplification (“cut the data center red tape!”) for another (“just build more clean energy!”). It’s time we retrain ourselves to be more specific about “power access” as the biggest barrier to data center buildouts. Doing so will help America move from i) pointing in the general direction of an oversimplified problem, to ii) addressing the real underlying bottlenecks, with actions calibrated to the specific needs of a given locality.
AI Power Shortages: The Three Needs
Let’s look at three case studies, one sub-region for each of the three types of bottlenecks: i) red tape, ii) local transmission conditions, iii) local power capacity.
To do this, I asked for help from our preferred data/analytics provider, Orennia [3]. In the process of identifying a good “case study” region for each of the three bottlenecks, they prepared a “zoomed out” national dual-axis heat map, of sorts. Like many great visuals, it’s beautifully simple on the surface, with extraordinary data and analytics underneath.
Image: Regional Grid Fortitude and Power Sufficiency
So, here’s what you’re looking at: teal indicates better grid conditions; pink indicates better power availability; blue means both are good; grey means both are poor. So, for example: Nevada (except Vegas) suffers all bottlenecks, Montana has ample power but poor grid deliverability, New Jersey has mostly good transmission conditions but is short on power, Southern Michigan enjoys good grid and power conditions.
Zooming in from here, I asked Orennia to scour the country for one good example of each of the three primary bottlenecks, and create a dashboard designed to illustrate the local transmission and power circumstances for each.
Red Tape (Example: Denver)
Starting with a location that enjoys an abundance of power, and a local grid with deliverability across many vectors: Denver. To be clear, the claim here is not that Denver suffers from any specific red tape barriers; rather, because grid deliverability and power availability are favorable, federal and state efforts to remove barriers and cut the red tape are especially likely to accelerate data center buildout here - the other bottlenecks simply aren’t in play, and the conditions for data center deployment are very good.
In the images to come:
- Green power lines [4] have at least 50 MW of withdrawal capacity
- Grey power lines have less than 50 MW of withdrawal capacity
- Blue circles are firm power generators (existing or on track for near-term completion)
- Grey circles are renewable power generators (existing or on track for near-term completion)
- For scale: the dashed red circle is a 100 mile radius
- And in the second image for each location, the purple diamonds show data centers (existing or advanced in load interconnection process)

Looking at the same map, but with large load (mostly data centers) added, it’s somewhat surprising to discover comparatively few such facilities online/imminent (for comparison, the next two metro areas we will look at have 2.2 and 6.3 gigawatts of planned and operational data center capacity).

The availability of both generation and transmission in this area would suggest more data center activity than we see in reality. What gives? One factor may be the large load interconnection process: Xcel Energy (and the PUC) has implemented strict tariffs on data centers to protect residential ratepayers, and its load interconnection process requires large upfront security deposits with tight placed-in-service (PIS) timing requirements [5], which may deter investment in early-stage data center projects. Non-power factors play a role as well, such as water/fiber availability, or economic issues such as property taxes or sales taxes (for instance, unlike Arizona, Wyoming, and Utah, Colorado offers no sales tax relief on data center equipment). Each of these hurdles arises from valid concerns and competing priorities;
I don’t mean to be heedless in declaring these “red-tape” problems. The point is that, in this metro area, the opportunity for data center buildout is favorable from a power perspective because i) the grid is robust, and ii) there is enough generation to serve some new large loads. In areas like Denver, building more generation or transmission is not the critical path; the best way to get more data centers online would be to streamline the interconnection process as well as look into non-grid barriers to entry such as water, fiber, and other economic considerations.
Transmission/Deliverability Bottlenecks (Example: Kansas City)
Most everyone appreciates the fundamental challenge of physically getting power to load. You can have more electricity production than anyone could ever dream of but still be severely limited by the ability to move electrons from where they are produced to where that load would use them. This is a basic challenge - one lived daily by every person who works in the power sector – and sometimes by those who don’t (e.g., power outages). For those who don’t battle this problem professionally, extreme, “macro” examples can help conceptualize:
- There are many places in the US (West Texas, Pacific Northwest, Appalachia) where natural gas is nearly free, or even negatively priced [6]. Why don’t we build a couple dozen combined cycle plants in these spots and power up the whole country…or at least those regions? Well, in each case there isn’t all that much steady, long-term electricity demand in the immediate area, and energy export to other regions is limited by transmission lines. Moreover, even with free gas (reduced OPEX), the economics of combined cycle plants are shaky these days due to high CAPEX and supply constraints.
- The wind resource in the Great Plains is so abundant it could power the US many times over. Similar with the solar resource in the Desert Southwest [7]. These are both areas of the country with abundant vacant land available for development to generate a nation’s worth of power. Alas, the same problem: there are not enough transmission resources to move the electrons from these corridors to places with more electrical load. Even at a more regional level, the transmission available to take advantage of these wind/solar resources is wildly insufficient.
These large-scale phenomena are well-documented [8], but not relevant to the primary thrust of this article. They are shared because you’ll find “mini” versions of the same bottlenecks in smaller regions across the country, where actual power plants can’t get electricity to actual users.
Let's use the Kansas City area as an example. In the first image we can see a strong fleet of local generators, but little in the way of transmission lines with any headroom:

Add in the current and aspiring data centers, and the clear congestion problem pops out:

Compared with Denver, you’ll note that nearly all of the transmission lines connecting generation to load are shaded grey (i.e. do not have additional capacity available). Put simply: if Kansas City is to experience the data center explosion on the horizon, there’s a lot of work to do with respect to local transmission line upgrades. With money and time, this can all get done, but hyperscalers and data center developers need to be realistic about how much money and time it will take to get to the promised land. You can’t “hack” grid upgrades, and buying your way to the front of the line is a messy, unreliable, unpopular, and possibly illegal strategy.
Generation Shortage (Example: Columbus)
As stated, this is the most overlooked power bottleneck variant and the focus of this piece from here on out.
In areas where this is the problem, the transmission infrastructure has a lot of “headroom” (line capacity) and numerous “deliverability pathways” between i) generators (current or potential), and ii) potential large load installations. What the area doesn’t have is enough power.
Columbus, OH serves as a good case study in power scarcity. The grid is very robust, with substantial available capacity on most of the medium and high voltage lines in the area, a plethora of lines running from candidate generator locations in the surrounding area to existing (and would-be) load locations, and deliverability flexibility. Yet there is very little in the way of locally generated power.

The Columbus, OH area (served primarily by AEP Ohio and falling within PJM) does not generate significant power locally (our map highlights a 100-mile radius). Instead, it acts as a massive "load sink" that imports power from two specific regions: the Ohio River Valley to the South (lots of coal) and Eastern Ohio (mostly new natural gas). Franklin County itself has almost zero utility-scale generation. The electrons powering the metro area, and all the aspiring data centers under development, currently travel long distances from i) the 1.8 GW Cardinal Power Plant (coal), part of which is slated to retire in 2028, ii) the 2.6 GW Gavin Power Plant (coal) ~80 miles to the South, iii) Guernsey (1.8 GW of natural gas, 2023 build) 80 miles to the East, and iv) Hanging Rock/Washington (natural gas plants along the Ohio River way to the South).
The image below illustrates the region’s popularity amongst the data center crowd. The purple diamonds are data centers in the PJM large load interconnection process. In total that’s over six gigawatts of new load slated to interconnect into an electrical neighborhood with virtually no local generation.

The power mix is almost entirely firm power (coal and NG), which is precisely what has driven all this data center development. But it’s not enough, and it’s expensive. This set of circumstances screams for more clean energy capacity. There’s a good foundation of firm power, numerous places where solar/storage hybrid generators could interconnect without major grid upgrades, a huge wave of load demand coming fast, and local electricity rates that could really benefit from some cheaper electrons.
The PJM interconnection queue (generation side) has been closed for years amidst major queue reform. PJM is (understandably) very high on the list of greenfield market targets in the clean energy project development community, and that’s good. But let’s be frank – the pipeline of projects that could come online in the medium term is awfully thin, and no one should expect otherwise in a market only just now reopening its doors to new generation after years of stagnancy. So, where is the electricity going to come from? The prevailing assumption is that AEP Central Ohio is going to keep building out its continuously expanding gas generator fleet, milk the big coal plants for longer than planned, and rely on data centers “bridging” to capacity adds by building on-site, colocated plants.
Colocation
Ah yes…Colocation – the answer to all our grid capacity challenges, right? Take a guess how many operating large-scale data centers there are in the country with operating colocated natural gas plants. Zero. Seriously…go look it up. Remove the natural gas qualifier and the answer is maybe one (Digital Crossroads in Indiana). This calls for its own article, but the solve-everything-with-colocation fantasy is not going to defuse our power bottleneck problem. It is a mirage; a salve for a huge, burgeoning industry desperate to remove barriers and control its growth rate. And it makes for apparently irresistible media stories (“An AI Industry Starved For Power Takes Matters Into Their Own Hands”, etc.). But if plopping down a big power plant right next to your load center were easy, fast, and even vaguely economical, our power landscape would already look very different. I know, I know…these companies are big, powerful, full of special people, can afford cost inefficiency, so this time it’ll be different.
I know how this ends: a sprinkling of exceptions where colocated generation comes online – usually much later than envisioned and at huge cost – and many more failed efforts that result in a pivot to grid-sourced power two years after the data center hoped to come online. Where does that put us? Some mixture of i) stalled data centers waiting for power, and ii) new data centers soaking up swaths of regional power. In the latter case, what happens to local power prices? Well…PJM approved a special tariff [9] for data centers wherein they’re required to foot the bill for their price impacts and cover all network upgrades themselves. Ok, that’s all well and good; I’m optimistic measures like that will effectively shift much of the cost burden to the data centers. But this is a perfect example of how we (the royal we) dodge the question…the real problem. So, I ask again: where is the power going to come from?
Power Scarcity and Current Events
In the early summer Segue published a series of articles focused on the likely effects that the various proposed reconciliation bills would have on America’s power landscape. One exhibit in particular got a lot of attention, and led many to a horribly incorrect conclusion. Here it is:

By our methodology, the total power supply America could expect over the next 5 years – at the aggregate, national level – would land somewhere in the middle of the range of anticipated load growth during the same period. Again…on a national level. Because that blue line (a) lands within the range of load growth estimates, and (b) lands much higher than it did under earlier bill drafts, many folks less familiar with power markets concluded that we should be able to meet all this expected load growth from the AI boom. Segue even fielded a barrage of incoming requests for interviews and quotes from outlets – both media and political – expecting us to validate that the policy seemed calibrated to ensure America will have enough power.
Shame on us for poor framing. Of course, power supply and demand don’t meet on the national level. They meet on the local level. The more you “zoom in” geographically – into smaller electrical neighborhoods – the more likely you are to find a mismatch of supply and demand. Increasingly, within those tightened frames, supply shortages are likelier than demand shortages.
To bring this home: the percentage of “areas” [10] as defined by Orennia where projected supply growth is equal to or greater than projected demand growth is about 24%.
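To make the mechanics concrete, here is a minimal toy sketch of how such a share could be computed; the area names and MW figures below are invented for illustration and are not Orennia’s data or methodology.

```python
# Toy illustration: share of local "areas" where projected supply growth
# keeps pace with projected demand growth. All numbers are invented.
areas = {
    "Area 1": {"supply_growth_mw": 500, "demand_growth_mw": 300},
    "Area 2": {"supply_growth_mw": 200, "demand_growth_mw": 800},
    "Area 3": {"supply_growth_mw": 100, "demand_growth_mw": 400},
    "Area 4": {"supply_growth_mw": 900, "demand_growth_mw": 900},
}

sufficient = [name for name, v in areas.items()
              if v["supply_growth_mw"] >= v["demand_growth_mw"]]
share = len(sufficient) / len(areas)
print(f"{share:.0%} of areas project enough new supply to cover new demand")
```

The point of the exercise is simply that the statistic is computed area by area: a national surplus tells you nothing about how many individual electrical neighborhoods come up short.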
As the OBBB (the One Big Beautiful Bill Act) was being debated and drafted, there was a cacophony – from in-industry and major media outlets alike – of warnings that the bill would result in a short- and medium-term power generation shortage. These warnings were rooted in very simple logic and objective fact, and the direct ramifications were laid bare for everyone to see. A quick review:
- Artificial intelligence happens at data centers; and the scarce resource gating the buildout of data centers is power availability.
- These days about 90% of the new power being added to the electricity grid is solar, wind, or storage; even amidst the headwinds of last year, 92% of new power capacity was clean energy. [11]
- That is because these technologies have become i) the cheapest, and ii) the most available (fastest to build and place in service). Yes, subsidies contribute to that reality, but that’s rather beside the point. Perhaps we can have our small-minded tribal whine-fest about subsidies at a time when China’s NOT building 10x the amount of power we’re building?
- Natural gas generation is not a practical solution because i) it’s become rather expensive, ii) the lead time for most simple and combined cycle turbines is 4-5 years, and iii) gas pipeline capacity is a constraint in most of the US.
- Nuclear isn’t a real solution in the context of needing power to scale nimbly, quickly, and cheaply. If you disagree with that, you’re basically announcing yourself as outside the reality-based conversation. Look, we’re rooting for safe, cost-effective nuclear to show up (well, I am, in any event), but let’s touch base in seven years.
- Therefore, if capturing the once-a-century opportunity (economic, world-leadership, military advantage, scientific advantage, and on and on) artificial intelligence represents is a priority for the United States, then supporting the continued rapid integration of solar, wind, and storage is an obvious priority.
It probably goes without saying that this narrative did not produce the desired outcomes. Not only did the OBBB end up chopping clean energy off at the knees, but half a dozen other actions (or inactions) by the federal government since the passing of the bill have piled on the hurt, leaving the nation’s leading sources of new power generation well and truly screwed. We’re all smiling and pretending it’s going to be OK, but projects and companies are dying rapidly, and the pain looks likely to continue. And I don’t mean for “us” (the solar/storage/wind sector); I mean for anyone who uses power. Or anyone “long” AI. Or anyone worried about what it looks like for China to get the upper hand on an epoch-defining technology.
Let’s not sugar-coat things: the energy sector best positioned to increase power supply in the near term has been targeted and harassed at every opportunity. From the OBBB’s abrupt termination of the ITC, to DOI mandates, FEOC rules, Treasury guidelines, tariffs, terminated federal leases/permits, and the USDA loan program termination…it’s genuinely difficult to convey the severity of the attack to anyone not living it. America spent 20 years cultivating an energy transition that has been mostly successful. Between 2005 and 2023, the U.S. economy (GDP) grew by ~43%, while power sector carbon emissions fell by ~35% and solar/wind became the two lowest-LCOE sources in the energy mix [12]. We did it, and we did it just in time to “meet the moment” of a rapidly accelerating AI arms race…only to self-sabotage at the precise moment we should be reaping the rewards of hard-won modernization. We’ve apparently decided to rough up and demonize an American success story instead of celebrating and embracing the opportunities it affords.
And for what purpose? No, seriously…what is the aim? The major turbine manufacturers seem wholly uninterested in ramping up production [13]. Nuclear SMRs aren’t ready for reasons that have nothing to do with money or politics, so they have little to gain from the attack. Maybe the antagonistic stance of the government is meant to help struggling existing thermal power plants become more profitable? Or to make life easier for utilities that bear the heavy burden of integrating intermittent resources? While I’m sympathetic to the latter, that’s an awfully lopsided cost/benefit analysis if indeed those are the motivations behind the attack on clean energy.
Natural Gas Headwinds
Now is probably as good a point as any to take a brief detour to explain why natural gas cannot be our “savior”…
The assumption that natural gas plants are a low-cost commodity has been shattered by a hyperinflation of CAPEX. According to NextEra Energy’s 2025 market analysis, the cost to build a new Combined Cycle Gas Turbine (CCGT) facility has effectively tripled in just three years, soaring from a historical baseline of ~$785/kW (in 2022) to between $2,400 and $3,000/kW today [14]. This spike is driven by labor shortages, turbine costs, a 40% rise in EPC wrap rates, iron and steel costs, and the leap in high-voltage transformer costs [15]. The math is broken: a standard 1,000 MW gas plant that used to cost ~$800 million now requires a ~$2.5 billion upfront investment. As noted in GridLab’s recent report on CCGT costs and their market implications, this has forced utilities to abandon new builds in favor of buying old, inefficient plants, which does nothing to solve the net capacity shortage.
Even if capital were unlimited, the physical hardware is unavailable. The supply chain for "heavy frame" gas turbines (the 300+ MW class units required for baseload reliability) is completely saturated. GE Vernova confirmed in its December 2025 investor update that its order backlog had reached a record 80 GW against a current output of 20 GW/year, meaning the company is effectively "sold out" through 2029 [16]. Reports from IEEFA [17] and McCoy Power Reports confirm the same: the three major OEMs (GE Vernova, Siemens Energy, and Mitsubishi Power) are effectively "sold out" through 2029-2030. The lead time between signing a purchase order and receiving a turbine on-site has stretched from a historical 18 months to nearly 7-8 years for complex projects. Consequently, any "dash to gas" initiated today would yield zero electrons until the early 2030s, leaving the grid exposed to the immediate deficits of the late 2020s.
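For those who want to sanity-check the arithmetic, here is a back-of-the-envelope sketch using only the figures cited above (the $/kW range, the 1,000 MW reference plant, and the 80 GW backlog against ~20 GW/year of output). It is illustrative only, not a cost model.

```python
# Back-of-the-envelope check on the CCGT figures cited above. Illustrative only.

def plant_capex(capacity_mw: float, cost_per_kw: float) -> float:
    """Total upfront cost of a plant, in dollars (1 MW = 1,000 kW)."""
    return capacity_mw * 1_000 * cost_per_kw

old_cost = plant_capex(1_000, 785)    # ~2022 baseline (~$785/kW)
new_cost = plant_capex(1_000, 2_500)  # today, midpoint of the $2,400-$3,000/kW range
print(f"1,000 MW CCGT: ~${old_cost/1e9:.1f}B then vs. ~${new_cost/1e9:.1f}B now "
      f"({new_cost/old_cost:.1f}x)")

# Heavy-frame turbine backlog vs. annual output
backlog_gw, output_gw_per_year = 80, 20
print(f"Order book clears in ~{backlog_gw/output_gw_per_year:.0f} years of production "
      f"-> effectively sold out into the late 2020s")
```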

Yet, contrary to the "Field of Dreams" logic - where one might assume OEMs would rush to build new factories to meet surging demand - the "Big Three" have all chosen a different course. Scarred by the market crash of the early 2000s (when they overbuilt capacity and lost billions), executives at Siemens Energy and GE Vernova have explicitly stated (on 2025 earnings calls) that they will prioritize "margin over volume" and won’t chase a temporary bubble [18]. The Big Three are expanding capacity only incrementally (e.g., GE's modest target to move from 55 to 80 units/year) rather than flooding the market. Vernova can squeeze out an extra 4 GW by ramping up existing facilities such as its South Carolina expansion, but we’re talking about going from 20 GW now to 24 GW in 2028.
If you’re like me, you assumed there was a mad dash to sign turbine POs and supply agreements in the wake of the OBBB. This did not happen…because it had already happened - the first half of ’25 was indeed very robust, with 42.7 GW in orders. However, by the time the full severity of the OBBB (with respect to clean energy’s prospects to add capacity) became clear, the door had already slammed shut. In fact, Siemens’ year-over-year Q3 orders dropped 2.5% from ’24 to ’25 [19]. Utilities realized that placing an order in 2025 for a 2031 delivery was a futile solution to a 2026 problem.
What does this all mean with respect to creating the power abundance necessary to support rapid data center buildout? Well…it’s time to cut through the denial – there’s no natural gas bailout coming, no matter how badly some want that to be true. If we want to add power to the grid in the next five years, it simply must come from solar, storage, and wind. And in the long term, if we want low electricity prices – for data centers and consumers alike – we’re still going to want clean energy to make up most of the new power capacity mix.
Renewables Project Attrition
To understand what’s actually happened to our power supply pipeline since it became clear the government was going to try to assassinate renewable energy in the OBBB, let’s start with a projection Segue made after the OBBB was passed.


We estimated that 81 GW of solar, wind, and storage that would otherwise have been placed in service in the next ~5 years would be cancelled as a direct result of the OBBB. (The graph shows the cancellations according to when MWs were slated to be placed in service, not the time of cancellation.) Six months removed from the OBBB’s passing, the actual attrition pattern is emerging, and in the three quarters since the imminent bloodbath revealed itself, it’s clear that the “Great Contraction” is well underway. Here’s the high-level “before and after” graph showing the total GWs in the interconnection queue – from in-study to under construction (excluding suspended and operating) – at the end of 2024, compared to the end of 2025:

Ouch. That is a 27.5% reduction in potential clean energy capacity in 11 months.
For the sake of intellectual integrity, let’s acknowledge some essential context for digesting that statistic. There’s more going on here than just policy headwinds. For example, CAISO imposed a queue-winnowing process for Cluster 15 oriented around permitting, causing a more severe drop than macro conditions would have caused on their own. Similarly, it’s probably fair to say that PJM’s transition cluster and subsequent ~3-year queue closure would have resulted in significant shrinkage even without the clean energy attack of 2025. Most critically, setting aside ISO-specific conditions, we should acknowledge the big caveat: most of these interconnection queues were far larger than they should have been before 2025 happened. In other words, a lot of attrition was coming one way or another. For 10-15 years, wind, storage, and solar (mostly solar) development exhibited an irrational exuberance pattern produced by i) low barriers-to-entry, ii) over-capitalization, and iii) frothiness in the platform and project M&A markets. Should MISO have ever had a 300 GW queue for a market with projected peak demand of ~140 GW and existing generation of ~200 GW? Of course not. But with both retirements and power demand accelerating, a gradual reduction would have been healthier; I posit a leveling-out would be preferable to a plummet, under the circumstances. So, while I think it’s understandable to deem this a much-needed catalyst for culling bloated queues, we shouldn’t be cavalier about the market signal. Be careful what you wish for – for a country ostensibly seeking “energy abundance”, you don’t want queued power to drop by 27.5% in 11 months.
Double-clicking on an ISO that serves as a good microcosm, look at what has happened in the MISO queue this year:
Anyone who’s actively developing in MISO knows why this has happened. The sheer mass of projects in the queue combined with the pronounced need for significant transmission upgrades resulted in extremely imposing network upgrade cost estimates in interconnection studies. The game theory of such a situation is hard enough in an environment friendly to new clean energy installations – amounting to little more than a game of chicken with other queued projects sharing the upgrades – but when you add all the headwinds and uncertainty associated with ITC sunsets, tariffs, FEOC, elevated permit risk, and on and on…“staying in” is awfully intimidating.
Meanwhile, data center aspirations across MISO are hockey-sticking. From the “Data Center Delta” down south, to the “Silicon Prairie” in IA/MN/ND, to the southern hook around Lake Michigan (Mount Pleasant, WI to Jefferson, IN), regions within MISO already serve as major hyperscaler data center hosts, with more in the pipeline. While that power capacity freefall remains a crime-in-progress, here’s what’s happening in data center land:

What’s more, local grid conditions across MISO span the spectrum, and scattered about the region are dozens of pockets – similar to Columbus, OH shown above – where the grid is actually rather robust. With FERC’s eventual implementation of the DOE order likely to force a clearer pathway (procedurally speaking) to “spot load” grid interconnection, I expect a plentiful universe of “electrical neighborhoods” well-poised for data center additions, if the power shows up to serve them.
This “ships-passing-in-the-night” phenomenon is playing out across much of the country, with PJM, WECC, SERC, and CAISO standing out most starkly. ERCOT too, though its queue remains huge, and the ISO continues to react more nimbly than any other; when the macro conditions improve, ERCOT will probably be the first to see power growth keep pace with load growth. Interestingly, the SPP queue has continued to grow. The fundamentals-driven wave of solar and storage development (which started in earnest ~3 years back [20]) has had enough momentum to cut through the attack of ’25. I expect transmission constraints will be especially common in SPP, but that will indirectly lead to a power shortage too. With imposing, hard-to-predict network upgrades, developing through the IX study/deposit rollercoaster in SPP can feel a lot like the network upgrade (NU) roulette described for MISO above. That, in turn, is leading to extreme attrition.
In 2025, ~90% of the projects in the various SPP queues withdrew from those queues [21]. That’s 1,462 project withdrawals representing 310 GW. Now, to be fair, this was part of an intentional “backlog clearing process” that essentially forced queued projects to “shit” (sign an LGIA) or “get off the pot” (withdraw), and the withdrawal group was particularly wind-heavy. But the result was more extreme than anticipated – by SPP and the broader market – almost certainly a result of all the headwinds emerging during the clearing process. Some $33 billion worth of network upgrades had been shared amongst the withdrawn projects. As one such party, I can attest that many developers were tendered LGIAs which would probably have been signed (i.e. the upgrade costs were otherwise palatable) but for the unsavory macro conditions. A lot of that power would be of considerable utility right about now to the hyperscalers in, say, the Omaha area or GRDA and the load it serves, where the data center community has found the bottom of the power/capacity barrel [22].
Conclusion
To bring this back around to Segue’s original prediction of how much power capacity would be lost because of the OBBB (about 81 GW in the next 5 years), we can use Orennia’s data and analysis to produce a refreshed estimate that implicitly wraps in 2025’s other doozies. Orennia developed a useful metric within its data ecosystem called “Chance of Success”, where each development project is assigned a probability of becoming an operating power plant. Like any such metric applied to thousands of projects nationally, it’s by no means perfect. But, applied in the aggregate and taken with a grain of salt, Segue finds it useful.
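For readers unfamiliar with the concept, here is a minimal sketch of what a probability-weighted capacity estimate looks like in principle; the project names and probabilities are invented, and this is not Orennia’s actual “Chance of Success” model.

```python
# Illustrative only: a probability-weighted capacity expectation across a
# hypothetical queue. Project names and probabilities are made up.
projects = [
    # (name, queued capacity in MW, estimated chance of reaching operation)
    ("Solar A",   200, 0.60),
    ("Storage B", 150, 0.45),
    ("Wind C",    300, 0.20),
]

nameplate_mw = sum(mw for _, mw, _ in projects)
expected_mw = sum(mw * p for _, mw, p in projects)
print(f"Queued nameplate: {nameplate_mw} MW")
print(f"Probability-weighted expectation: {expected_mw:.0f} MW")
```

The gap between nameplate and expectation is the whole point: when policy and market headwinds push every project’s probability down, the expected capacity falls even if nothing has formally withdrawn yet.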

Where Segue initially estimated lost solar, wind, and storage power capacity at 81 GW, Orennia’s current, equivalent projection is ~141 GW. This feels about right to us. The isolated impacts of the OBBB are the biggest dish in the 2025 pupu platter, but all the other conditions (attacks) add up to a lot, and most of those hadn’t revealed themselves at the time we published our analysis in the summer.
That is a lot of power to not have in an environment where the most significant governor on AI advancement and implementation is the availability of power.
And, while this seems self-evident, I must emphasize that data centers having access to power requires there being enough power in the first place. We can – and should – continue the “cut the red tape!” and “invest in our grid infrastructure!” refrains…but, while both of those sentiments are “right”, they implicitly fail to acknowledge the obvious – we simply need more power!
Our national posture toward solidifying and maintaining global artificial intelligence leadership is in direct conflict with our posture toward clean power. We’ve made leadership in AI a national priority and specifically the key to securing a leg up on China (in every realm). Yet, while we’re taking some actions to facilitate this outcome via DOE orders to FERC, attempts at permitting reform, and investment in major transmission lines, we are simultaneously waging total war against the primary and best source of readily available power capacity additions. It’s like ordering a steamship “full steam ahead!” while ordering the stokers to dump all the coal into the ocean.
This does not need to be a conventional energy vs. clean energy thing. It’s about energy abundance. If you look around, you won’t find many voices in the clean energy community screaming “gas plants are evil!” from the mountaintops. And we know solar and wind are intermittent resources that cannot fulfill the power needs of new, or existing, load by themselves. We get it; and we are (generally) adopting a pragmatic attitude towards how the AI boom is likely to affect the power mix.
In that spirit, we’d do well as a society to try adopting broader notions of the “tribe” we’re each aligning ourselves with. Fear tends to make people contract psychologically and align with their smallest, tightest “tribe” – instead of national or species-level alignment, we retreat to “conventional energy,” or “Minnesotans”, or “Employees of Company X”, or “progressive”/“conservative”, just to give a few examples. It’s anthropological, even biological…so completely understandable. I do it just as much as anyone; surely this paper contains biases of my own arising from the same impulse. But history has shown us that “crossroads” moments – and I think the emergence of artificial intelligence is one – call for resisting the impulse to retreat to our smallest tribes…to step back and appreciate the bigger stakes and the need for broader alignment and cooperation across larger tribes.
America does well when Americans share an objective (the moon, defeating Hitler, etc.). Doing what we need to do to safely unlock the potential of AI while gleaning the benefits of being a global leader in doing so could be a moonshot we all get behind. Part of that push is meeting this moment with enough power generation to fuel it, without getting snagged in culture wars. This is an engineering and infrastructure deployment problem, not a virtue-signaling platform. We should be thoughtful, specific, and intellectually honest about the nature of our “power bottleneck”, and act to address these bottlenecks.
Are we doing this thing or not?
[3] Disclosure: Segue and Orennia share a common investor
[4] You may note all the “power lines” are straight vectors, which is not how the world works. The lines actually represent the available capacity on any pathway from one bus/substation to another, and show that universe of pathways as a single vector - which, though initially confusing to the eye, is more useful.
[8] https://gridstrategiesllc.com/wp-content/uploads/ACEG_Grid-Strategies_Fewer-New-Miles-2025_vF.pdf
[9] Citation
[10] For a sense of scale, Orennia’s “areas” are, on average, about the size of the Hexagons in the national map shown earlier
[12] Lazard 2025 report
[13] Siemens and GE Vernova investor call transcripts; Siemens Energy Strategy Update (Profitability & Growth)
[16] https://www.utilitydive.com/news/ge-vernova-gas-turbine-investor/807662/#:~:text=9%2C%202025.,25%25
[18] Siemens CEO, Christian Bruch, during Q2 earnings call
[20] For those who were doing SPP before it was cool, good one. I see you, Eolian. I don’t mean to imply no one was developing storage or solar in SPP before ’22, merely that the mainstream rush seemed to start around there.
[22] You’ll know you’ve found it when you’re trying to convert airplane jet engines into CCGT turbines