Distributed Generation Advantages and Disadvantages

Distributed generation generally means more than one power source feeding the same loads, including sources at multiple locations, but it can also mean stand-alone or isolated generation at the point of use. Typical examples under this definition are the generators and UPS systems at mission-critical sites such as data centers and laboratories. These can operate in complete isolation, in parallel with the utility grid, or in parallel as part of a local grid. Power can also be transferred between the utility grid and a local grid in either open-transition or closed-transition mode.
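The two transfer modes mentioned above differ in sequencing: open transition is break-before-make (the load sees a brief outage), while closed transition is make-before-break (both sources are momentarily paralleled, which requires them to be synchronized). A minimal sketch of that sequencing follows; the class and method names are illustrative, not any real controller API.

```python
# Sketch of open-transition vs. closed-transition transfer.
# All names here are invented for illustration.

class TransferSwitch:
    def __init__(self):
        self.connected = set()  # sources currently feeding the load

    def open_transition(self, from_src, to_src):
        """Break-before-make: the load sees a brief outage."""
        self.connected.discard(from_src)   # disconnect the old source first...
        self.connected.add(to_src)         # ...then connect the new one

    def closed_transition(self, from_src, to_src, in_sync):
        """Make-before-break: both sources are briefly paralleled, which is
        only safe if voltage, frequency, and phase angle are within limits."""
        if not in_sync:
            raise RuntimeError("sources not synchronized; cannot parallel")
        self.connected.add(to_src)         # parallel both sources...
        self.connected.discard(from_src)   # ...then drop the old one

sw = TransferSwitch()
sw.connected = {"utility"}
sw.closed_transition("utility", "generator", in_sync=True)
print(sw.connected)  # {'generator'}
```

During the closed transition the load never loses power, at the cost of needing synchronizing equipment; the open transition needs none but always interrupts the load briefly.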

Tying multiple generation sources together means that sufficient power can be made available for the entire load even where no single generator is sufficient by itself. This provides enough redundancy to take units off line for maintenance or to ride through the failure of one or more units, and it can leave reserve capacity for unexpectedly large loads. These are the chief advantages.
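The redundancy argument above can be sketched as the classic N-1 check: can the remaining units still carry the load after losing the single largest one? The ratings below are made-up numbers for illustration.

```python
def n_minus_1_ok(unit_ratings_kw, load_kw):
    """Return True if the load can still be served after losing the
    single largest generating unit (the N-1 criterion)."""
    if not unit_ratings_kw:
        return load_kw <= 0
    remaining = sum(unit_ratings_kw) - max(unit_ratings_kw)
    return remaining >= load_kw

# Three 500 kW generators serving an 800 kW load: losing any one unit
# still leaves 1000 kW of capacity, so the check passes.
print(n_minus_1_ok([500, 500, 500], 800))  # True

# A single 1000 kW unit has no redundancy at all.
print(n_minus_1_ok([1000], 800))           # False
```

Note that the single 1000 kW unit has more total capacity than the paralleled pair of 500 kW units would, yet fails the check: redundancy, not raw capacity, is what the criterion measures.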

However, there are also serious disadvantages. As the network or grid becomes more complex, it becomes increasingly difficult to analyze. Every loop is a Kirchhoff voltage loop, that is, a differential equation, and all loops must be solved as simultaneous equations to completely describe and predict the behavior of the network. Networks can quickly grow to hundreds, thousands, or even tens of thousands of loops.

It is frequently, and incorrectly, assumed that a steady-state analysis is sufficient to assess the adequacy of the network. This can be a fatal mistake. Beyond the problem of keeping the network stable and controlled under steady-state conditions, in an upset condition the loads redistribute as transients. Because the loads then redistribute at unpredictable and uncontrollable rates, it is possible to momentarily cross the time-current curve of protective devices such as fuses, circuit breakers, and protection relays, tripping other power sources off line. The result can be a global cascading network collapse: the initial fault causes transient overloads that radiate out from the fault point and cannot be contained or isolated quickly enough. This can happen even where the total steady-state capacity of the distributed network is far more than the load would suggest is necessary.
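A toy simulation, with entirely invented numbers, illustrates the cascade mechanism described above: when one line trips on overload, its flow redistributes onto its neighbors, which can push them past their own protective limits even though total capacity comfortably exceeds total load. Real networks redistribute flow according to impedance rather than evenly, so this is a sketch of the failure mode, not a power-flow model.

```python
# Toy cascade model: each "line" carries a flow and trips when its flow
# exceeds its protective limit. A tripped line's flow is redistributed
# evenly onto the surviving lines.

def simulate_cascade(flows, limits):
    flows = dict(flows)
    tripped = []
    while True:
        overloaded = [k for k in flows if flows[k] > limits[k]]
        if not overloaded:
            return flows, tripped
        line = overloaded[0]
        excess = flows.pop(line)       # the line trips...
        tripped.append(line)
        if not flows:
            return {}, tripped         # total collapse
        share = excess / len(flows)
        for k in flows:                # ...and its load spreads out
            flows[k] += share

# Total limit (300) comfortably exceeds total load (240), yet a single
# overloaded line brings down the whole network:
# A trips at 110 -> B rises to 125 and trips -> C rises to 240 and trips.
flows = {"A": 110, "B": 70, "C": 60}
limits = {"A": 100, "B": 100, "C": 100}
remaining, tripped = simulate_cascade(flows, limits)
print(tripped)    # ['A', 'B', 'C']
print(remaining)  # {}
```

The point of the example is the steady-state trap named above: summing capacities against the load says the network is fine, but the transient redistribution path is what actually determines whether it survives.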

An example of this was the blackout of the entire Northeast United States power grid in November 1965, triggered by the incorrect operation of a single protective relay near Niagara Falls. Nearly 40 years later, in August 2003, a similar event occurred, beginning with overloaded transmission lines in Ohio; the blackout extended through Northeastern Canada as well as the US. More recently, in September 2011, taking a single piece of equipment off line in Arizona caused a blackout for millions of customers in Southern California, Arizona, and Northern Mexico.

About the only advice is to build far more capacity into every part of a network than you expect to need, so that it can absorb these transients. That is not being done: the reliability of capital plant, both privately owned and in the public utility network, at both the local and regional level, has been taken for granted for decades. As capital plant ages and is operated ever closer to its rated capacity, the likelihood of a global cascading network collapse grows from a possibility to a near certainty. Getting these systems back up and on line can take a day or more, because the network must be brought back up gradually and controllably to avoid another unexpected collapse.
