This post is part of our Industry 101 Series, an ongoing campaign to provide a foundation of knowledge about our unique industry. To learn more about this campaign, please click here.


Of the physical components that comprise smart grid architecture, the most complex is the communications technology: the collection of communication network components that enables the flow of information throughout the grid.

Advanced metering with two-way communications has the potential to make meters a core element of an integrated system and better manage utility services. But what kind of communications are appropriate?

Advanced metering infrastructure (AMI) systems employ a wide array of communications technologies, including radio frequency (RF) mesh, power line carrier (PLC), RF point-to-point, and cellular. Since utilities must manage multiple communications networks, they often look for solutions that support a variety of applications and fully integrate into future operational plans.

Throughout the world, technology adoption varies by region. In Europe and North America, RF mesh and PLC are deployed most often, with Europe having a greater tendency toward PLC technology than the U.S. This is largely because grids in Europe connect many more homes per transformer than grids in the U.S. Data sent from the meter over the power line may not have to pass through a transformer to reach a collector, so some PLC technologies used in Europe operate in wider bandwidths. Technology choices are often driven by local regulations: many countries restrict the use of RF mesh technologies, as unlicensed frequencies have raised concerns about interference.


7.3.1 PLC

Power-line communication (PLC) is a communication method that uses electrical wiring to carry data simultaneously with alternating current (AC) electric power transmission or distribution.

This method is useful for utilities that want to make a more gradual migration to smart grid technology, because they can leverage existing power lines as the communications network. It should be noted, however, that power lines are not well suited for fast, near real-time data communications.

Troubleshooting signal issues can be difficult when poor connections, feeder switching, worn or faulty line hardware, and other power line issues cause signal interference. It can also be difficult to push meter data in any volume from the field into the SCADA system or other applications in near real-time.

While these types of networks were once the only option for rural locations, new technologies have increased radio coverage capabilities in most geographical areas.


Advantages:

  • It’s cost effective. Power-line communication can transmit over long distances. In North America, this can be a cost-effective advantage for utilities serving rural communities. Because it is a hard-wired system, topographical and other physical obstacles do not affect performance.
  • It leverages existing investments. Because PLC utilizes existing infrastructure, the utility owns the communications system, and there may be less of a learning curve involved in implementing this type of communication method.
  • It returns data useful in analyzing grid performance. This system uses the distribution network to send signals from the meter to the substation. Signal strength provides the utility with analytics that can help isolate and troubleshoot problems with insulators, transformers, and other grid devices.


Disadvantages:

  • Potential for network interference. Utilities with large industrial customers that introduce noise and harmonics on the power line have found that this noise can affect performance and distort communications.
  • Less bandwidth. Narrower available bandwidth can limit data capacity and the speed or rate at which data can be accessed in some applications.

7.3.2 POINT-TO-MULTIPOINT (Wireless)

Given the challenges with power line networks, point-to-multipoint networks began to gain popularity. These networks depend on high-power transmitters to talk directly to each endpoint (or repeater) on the network.

While point-to-multipoint networks are an improvement over power line networks, they still depend on a limited number of radio paths between the endpoints and a radio base station. This makes these networks more susceptible to signal fading or shadowing caused by hills, valleys, and radio-reflective or radio-absorbing obstructions. Sometimes, the only remedy is an additional high-power base station or repeater, which can be relatively expensive.

Some providers of point-to-multipoint solutions require FCC licensing for their high-power networks. While sometimes sold as an advantage due to the designated spectrum on which they operate, FCC-licensed spectrum has been subject to reallocation as recently as last year, when the VHF paging spectrum was “narrow banded.” As a result, any devices not capable of accommodating the new, narrower channel were left abandoned in the field or now require a mass hardware upgrade, a firmware upgrade, or possibly both.

Operating under an FCC license does give the user a certain degree of legal recourse if someone or something accidentally or purposely encroaches on the dedicated frequency. But this in no way guarantees a “clean” channel.

Tracking down sources of interference can be a difficult endeavor, and even then, FCC action is required to enforce spectrum protections once violators are identified.


Advantages:

  • It requires less infrastructure. One tower may be used to cover a large geographic territory.
  • It raises fewer interference issues. Since it is a licensed network, there is little concern about other devices interfering with network communications.
  • It’s easy to deploy. Since the network may be deployed from a few towers, or even a single tower, less infrastructure may be required.


Disadvantages:

  • Challenges of securing tower space. It may require installation or leasing of a tower.
  • It may result in transmission congestion. The more meters and devices communicating to a single point, the more likely network congestion and bandwidth limitations become.
  • It’s not self-healing or self-routing. Unlike an RF mesh system, which offers many communications pathways, a point-to-multipoint system has no built-in redundancy, so if the base station goes down, communication to thousands of meters may be lost. This can compromise grid performance and reliability.
  • It requires licensing. Along with a fee, a limited bandwidth is associated with each license. In addition, a second license may be required for distribution automation functions or other advanced grid management applications. The internal labor required to manage and maintain these licenses is also a consideration.

7.3.3 RF MESH (Wireless)

Many of today’s U.S. AMI deployments are built on an RF mesh framework. With wireless mesh networking technology, meters and other devices route data via nearby devices, creating a mesh of network coverage.

Mesh networks enable end devices to communicate with the collector through multiple hops if necessary. This characteristic enhances network performance in three ways. First, it provides a cost-efficient way to deploy and build a network that encompasses greater distances while requiring less transmission power per device. Second, it improves system reliability, since each end device can register with the collector via another communication path if the present path becomes inoperable. Third, by allowing end devices to act as repeaters, it is possible to deploy more nodes around a collector, thereby reducing the number of backhaul paths – a major cost factor.
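
The multi-hop idea can be illustrated with a toy routing sketch. This is a simplified, hypothetical example: real mesh protocols use link-quality metrics and dynamic routing tables rather than a plain breadth-first search, and the node names and radio links below are invented.

```python
from collections import deque

# Invented topology: who can hear whom over the radio.
# Note that m4 is out of the collector's direct range and
# can only reach it by hopping through neighboring meters.
links = {
    "collector": ["m1", "m2"],
    "m1": ["collector", "m3"],
    "m2": ["collector", "m3"],
    "m3": ["m1", "m2", "m4"],
    "m4": ["m3"],
}

def route(src, dst):
    """Shortest hop path from src to dst, or [] if unreachable."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

# m4 reaches the collector in three hops via m3 and m1.
path = route("m4", "collector")
```

If a node on that path drops out, re-running the search finds an alternate path through m2, which is the essence of the self-healing behavior mesh networks advertise.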

Wireless mesh networks were originally developed for military applications. Over the past decade, the size, cost, and power requirements of radios have declined, enabling multiple radios to be contained within a single mesh node and thus allowing for greater modularity; each radio can handle multiple frequency bands and support a variety of functions as needed, such as client access, backhaul service, and scanning.

Some later wireless mesh networks use nodes with more complex radio hardware that can receive packets from an upstream node and transmit packets to a downstream node simultaneously (on a different frequency or a different CDMA channel), which is a prerequisite for a switched mesh configuration.


Advantages:

  • It can be deployed regionally. RF mesh technology can be regionally distributed, so the operator can target specific areas without needing to deploy across the entire service territory.
  • It’s self-healing. If one module loses communication with the network, the network automatically finds another path to bring communications back to the head-end system, reducing the risk of the entire network going down.
  • It’s self-forming. The network’s intelligence enables the signal to find the optimal route back to the head-end system. This is particularly important in areas with many obstructions, such as mountains or high-rise buildings.


Disadvantages:

  • It may require more infrastructure. RF mesh technology may require more infrastructure than other options, especially in rural areas where meters are spread out across the service territory.
  • It may raise interference concerns. Because RF mesh uses unlicensed frequencies, it may raise concerns about interference, and some countries restrict use of the unlicensed spectrum on which RF mesh relies.



7.3.4 CELLULAR (Wireless)

Smart meter traffic is characterized by short session durations, limited mobility, and a very large number of devices. It is therefore not handled efficiently by existing wireless broadband access networks operated in the traditional way.

Broadband wireless networks provide ubiquitous wide-area coverage, high availability, and strong security and are, therefore, a strong candidate for handling smart meter communications. Wireless operators naturally see an enticing business opportunity in advanced metering infrastructure (AMI), because they stand to obtain additional revenue streams from existing cellular networks. Government agencies have encouraged such network sharing to reduce AMI’s energy footprint. Broadband wireless networks were not designed, however, to efficiently meet the traffic requirements of AMI.

Existing wireless broadband networks presuppose traffic that is typically modeled as consisting of individual sessions. Session duration exhibits a heavy-tailed distribution and is usually orders of magnitude larger than the packet timescale. That is, the length of sessions varies widely, and a typical session requires a great many packets. This allows each session to be treated as an independent connection, subject to admission control mechanisms, with associated signaling procedures for setting up radio and network resources. The signaling associated with connection setup represents minimal overhead compared to the total data transferred over the session duration.

In contrast, most AMI traffic is expected to originate from stationary devices, or devices with very limited mobility, and will consist of just a few payload packets exchanged between the meter and the meter data management system. Furthermore, in normal operations most meter traffic is expected to be regular rather than ad hoc: meters periodically report data on the uplink, downlink data from the management system may follow, and a long period of inactivity then follows until the meters next report data.

This deterministic behavior, coupled with potentially very long sleep durations between communication attempts with the network, allows for optimizing the operation of the meter so that it is scheduled to connect (or re-connect) to the wireless broadband network only at specific time instances and only for a limited period of time. During that connected interval, the meter and management system can exchange information as needed. We refer to this kind of system as time controlled scheduling.

The advantage of supporting time-controlled operation is that the meter is connected to the wireless network only for short intervals of time, as needed, allowing network resources to be managed more efficiently and a very large number of devices to be multiplexed onto a common base station. To contact a meter outside of its scheduled connection window, the protocol can be enhanced so that the network sends a notification to one or more neighboring meters that are in a connected state at that time; those meters in turn relay the request to the meter in question over a secondary wireless channel that uses unlicensed spectrum, such as ZigBee or Wi-Fi.
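
The time-controlled scheduling idea can be sketched briefly. The reporting period, window length, and even-spreading policy below are illustrative assumptions, not a description of any deployed system:

```python
# Each meter is assigned a fixed offset within the reporting period so
# that connection windows are staggered across the population instead of
# all meters contending for the base station at once.
REPORT_PERIOD_S = 3600   # meters report once per hour (assumed)
WINDOW_S = 10            # each meter stays connected ~10 seconds (assumed)

def connection_window(meter_id, n_meters, period_start):
    """Return (start, end) of this meter's next connected interval."""
    # Spread meters evenly over the period based on their ID.
    offset = (meter_id % n_meters) * (REPORT_PERIOD_S / n_meters)
    start = period_start + offset
    return start, start + WINDOW_S

# Example: 360 meters sharing one base station, period starting at t=0.
start, end = connection_window(meter_id=42, n_meters=360, period_start=0.0)
```

Outside its window the meter can sleep, and the base station knows exactly when each device will next appear.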

Congestion control is recognized as another challenge for AMI. The very large numbers of smart meters give rise to potential “traffic burst” scenarios, which can arise when large numbers of devices are simultaneously reacting to a common event, such as a power outage. To minimize the impact on the wireless broadband interface, an application layer congestion control protocol can detect the common event and stagger — that is to say, buffer or queue — transmissions from meters.

Each meter is assigned a probability (p) to transmit an alarm upon detection of a shared event. The meter will either queue (with probability 1-p) or transmit (with probability p) this event. If the message is queued, the meter will continue to monitor the air interface for an event notification from the network. This notification can be in the form of an explicit message sent from the base station or, alternatively, the base station may update the transmission probability p to 0. Upon receipt of such notification, which is sent only if another meter was able to successfully transmit the shared event notification to the station, the meter will discard the queued message.

If no such notification message is received after a random period of time, the meter will again attempt to see if the message should stay queued or be transmitted. The process is repeated until either the message is transmitted or an event notification is received from the base station. The algorithm can be generalized to allow for different event transmission probability values for different categories of shared events, with high priority given to more critical events. The delay handling can be different for different categories of shared events. The back-off delay can be made shorter for more critical events.
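
The probabilistic alarm scheme described above can be sketched as a small simulation. The meter count, transmission probability, and single-round model are assumptions made for illustration:

```python
import random

def shared_event_round(n_meters, p, rng):
    """One back-off round: return the meters that choose to transmit."""
    return [m for m in range(n_meters) if rng.random() < p]

def report_shared_event(n_meters, p, seed=1):
    """Repeat rounds until at least one meter transmits the shared alarm."""
    rng = random.Random(seed)
    rounds = 0
    while True:
        rounds += 1
        transmitters = shared_event_round(n_meters, p, rng)
        if transmitters:
            # The base station broadcasts a notification (or sets p to 0);
            # every meter still holding a queued copy discards it.
            return rounds, len(transmitters)
        # Nobody transmitted: all meters back off for a random delay and retry.

rounds, n_tx = report_shared_event(n_meters=1000, p=0.01)
# In expectation only about n_meters * p = 10 alarms hit the air
# interface, instead of 1000 simultaneous transmissions.
```

Giving critical event categories a larger p and a shorter back-off, as the text suggests, amounts to changing the parameters per category.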


Advantages:

  • Faster deployments. Cellular enables long-range communication and can be rolled out quickly using existing cellular infrastructure.
  • It leverages an existing network maintained by the cellular carrier. In most utility service territories, cellular already reaches the majority of customers.
  • It’s optimal for targeted applications. Cellular can be deployed cost effectively to support small groups of customers, or even a single customer.
  • It’s proven technology. In use for more than a decade, cellular technologies are well established and reliable, and they are continually improved upon, particularly with respect to security.
  • It’s secure. Because they already serve billions of customers worldwide, cellular networks extend the same safety and performance to utilities.


Disadvantages:

  • It may require head-end system changes. In the North American market, most widely deployed head-end systems are optimized for RF (mesh or point-to-point), PLC, or a combination of these. Incorporating a communications technology less widely used for AMI, such as cellular, may require modifications to the head-end solution.
  • It has obsolescence issues. Cellular networks tend to roll over before the end of the metering technology’s useful life, so many operators are concerned about how long a deployed technology will remain viable.
  • Network availability is a concern. The mission-critical communications that smart grid networks require demand nearly 100% network availability. When utilities share public cellular networks, they are often at the mercy of the carrier’s priorities in the event of an outage.
  • It can be unreliable. If a natural disaster impacts the cellular infrastructure, networks may become overburdened.


If you enjoyed this article, click here to start from the beginning of our Industry 101 Series.

Or to continue your journey, click here to access the next installment of our Industry 101 guide.







Since the beginning of the global movement towards electricity deregulation and market-driven pricing, utilities have been looking for a way to balance consumption and generation. Traditional meters only provide information for total consumption between meter reads. They provide no information as to when the energy was consumed at each metered site. Smart meters provide a way of measuring this site-specific information, allowing utility companies to charge customers different prices for consumption based on the time of day and the season.

Smart metering offers many potential benefits from the consumer’s perspective, including:

  •  An end to estimated bills, which can be a major source of complaints for many customers
  •  A tool to help customers better manage their energy consumption – smart meters provide up-to-date information on gas and electricity consumption to help people manage their usage and reduce their energy bills

Electricity pricing usually peaks at certain predictable times of the day and the season. In particular, if generation is constrained, prices can rise when power from other jurisdictions or from more costly generation methods is brought online. Proponents of variable pricing argue that billing customers at a higher rate for using energy during peak times will encourage them to adjust their consumption habits to be more responsive to market prices. Regulatory and market design agencies hope these price signals could delay the construction of additional generation, or at least the purchase of energy from higher-priced sources, thus controlling the increase of electricity prices. Whether low-income and vulnerable consumers will benefit from time-of-use tariffs remains a concern, however.

Another benefit of smart meters is the ability to connect and disconnect service and read meter consumption remotely. Not only does this save costs for utilities, the lack of manual meter readings also means the end of estimated bills. Smart meters offer additional possibilities for the future – such as improved time-of-day tariffs, offering cheaper rates at off-peak times to smooth out national energy usage throughout the day.
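
To make time-of-day pricing concrete, here is a minimal billing sketch over the interval reads a smart meter provides. The tariff periods and rates are invented for illustration and do not reflect any real utility's prices:

```python
# Hypothetical time-of-use tariff: $ per kWh by period (assumed values).
TOU_RATES = {
    "off_peak": 0.08,   # hours 22-23 and 0-6 in this sketch
    "mid_peak": 0.12,   # all remaining hours
    "on_peak": 0.25,    # hours 16-20 in this sketch
}

def period_for_hour(hour):
    """Map an hour of day (0-23) to its assumed tariff period."""
    if 22 <= hour or hour < 7:
        return "off_peak"
    if 16 <= hour < 21:
        return "on_peak"
    return "mid_peak"

def tou_bill(hourly_kwh):
    """Bill 24 hourly interval reads from a smart meter under the tariff."""
    return sum(kwh * TOU_RATES[period_for_hour(h)]
               for h, kwh in enumerate(hourly_kwh))

# A flat 1 kWh every hour: 9 off-peak + 5 on-peak + 10 mid-peak hours.
bill = tou_bill([1.0] * 24)
```

Shifting even part of that flat load out of the on-peak window lowers the bill, which is exactly the incentive the tariff is designed to create.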



In-home display (IHD) units provide energy customers with real-time energy consumption feedback. IHD units can acquire consumption information through a sensor with built-in RF and/or PLC. However, a more effective solution transmits information from a smart meter via a home area network.

Types of IHD units can vary from simple wall-mounted segment LCD displays to battery-operated products with color TFT displays and touchscreens. Advanced IHDs can display energy consumption advice from energy providers in addition to raw energy consumption information.

Features and Benefits of In-Home Displays:

In-Home Display Unit

  • Range of microcontrollers, from entry-level 8-bit to sophisticated ARM9 with embedded LCD graphics display controllers, provide flexibility to support any application.
  • Flexible touch solutions, from buttons and wheels to sophisticated touchscreens, provide support for a wide range of user interface features and capabilities.
  • Power line communications (PLC) system-on-a-chip (SoC) solutions with full digital implementation deliver best-in-class sensitivity, high performance, and high temperature stability.
  • Power-efficient solutions support battery-operated products.
  •  Low-power RF transceivers for connectivity.

In-home displays can range from a basic segment LCD to a more sophisticated color TFT. The display choice drives the processing power required, and the main microcontroller can range from an entry-level 8- or 32-bit microcontroller to a more powerful embedded MPU with an on-chip TFT LCD controller. As products become more sophisticated, so will the user interface.

The communications within the IHD depend on the implemented architecture of the HAN (typically RF or PLC). Wireless connectivity can also be supported via secure digital input/output (SDIO) cards.



Utilities can send commands to a smart meter by both radio and carrier-current communications, depending on the type of meter being used. For example, in California, the utilities presently deploying smart meters control them using a 902-928 MHz FHSS radio. The intended range and frequencies used for sending commands to a smart meter can also vary from utility to utility.

Each smart electric meter is equipped with a network radio. The radio periodically transmits the customer’s hourly meter readings to an electric network access point. This data is then transmitted to the utility through a dedicated radio frequency network. Radio frequency technology allows meters and other sensing devices to communicate and route data securely. The electric access points and meters create a mesh of network coverage.

Data collected at the access points from nearby electric meters is transferred to the utility through a secure cellular network. Radio frequency (RF) mesh-enabled devices, such as meters and relays, connect to other mesh-enabled devices. The devices function as signal repeaters, relaying data to access points. The access points gather the information, encrypt it, and send it securely to the utility over a third-party network. The RF mesh network sends data over long distances and varied terrain, and the mesh always seeks the best route to transmit data. This helps ensure that information travels from its source to its destination quickly and efficiently.


Home Network and Smart Meter

Home network and smart meter access points are tightly coupled. The term “home network” is not confined to a home; it applies to any closely located territory. The home network is controlled by the home area network (HAN), which connects smart appliances, electric vehicles, storage, and on-premises electricity generators to an access point – the smart meter, which can interface with these devices digitally. Working in concert, the devices allow load management at peak hours and overall energy control. Peak load management is a critical consideration in the electricity market due to the high associated costs; other forms of energy control, though nice to achieve in theory, cannot currently deliver the level of reliability that is required. The amount of data transferred at a given point will likely consist only of a number representing the instantaneous electricity use of each device, expressed in watts, so the bandwidth requirement usually falls between 10 and 100 kbps per device. The required bandwidth could grow considerably for large office buildings, so the chosen networking technology must scale.
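
A quick back-of-the-envelope calculation shows why scalability matters. The per-device range comes from the figures quoted above; the device counts are assumptions for illustration:

```python
# Per-device HAN bandwidth range quoted in the text, in kbps.
PER_DEVICE_KBPS = (10, 100)

def aggregate_kbps(n_devices):
    """Low and high aggregate HAN bandwidth estimates for n devices."""
    lo, hi = PER_DEVICE_KBPS
    return n_devices * lo, n_devices * hi

home = aggregate_kbps(20)       # a smart home (assumed 20 devices)
office = aggregate_kbps(2000)   # a large office building (assumed 2000)
```

A home stays in the low single-digit Mbps at worst, while the large building climbs into the tens or hundreds of Mbps, which is why the chosen networking technology must scale.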

Low-power, short-distance, and cost-effective technologies are well suited for on-site communications. Several choices are available: 2.4 GHz Wi-Fi (the IEEE 802.11 wireless networking protocol), ZigBee (based on the IEEE 802.15.4 wireless standard), IEEE 802.15.4g wireless smart utility networks (SUN), and HomePlug (a form of power line networking that carries data over the existing electrical wiring). Internet protocol (IP), based on uniform standardization, is widely used for communications on the premises.

It should be noted that in-home applications can leverage the smart grid, but they can also exist independently without being part of a smart grid. For instance, any meter – smart or traditional – can be connected to a HAN: a Wi-Fi-enabled sensor can read a traditional meter and send data to a web server to build many kinds of energy-related consumer applications. These applications, whether they use traditional meters or smart meters, enable consumer-facing functions without any communications technologies beyond those already installed in a typical internet-connected household.


Concentration Point

Information collected from a home network at an access point must then traverse to a concentration point as part of the smart grid. Data traversal is bi-directional, but the volume of data flowing from a concentration point down to a device will be lower than the volume flowing from the consumer side up to the utility. A concentration point can be a substation, a utility pole-mounted device such as a transformer, or a communications tower. Bandwidth requirements are in the 10-100 kbps range per device from the home or office. However, if appliance-level data points, as opposed to whole-home data, are transmitted to the concentration point, the bandwidth requirement will increase.

Initial solution installations relied on power line carrier (PLC), which transmits data and commands between devices and meters over existing power lines. PLC is the most common conduit. It is cost effective for utilities, especially in low-density areas where deploying wireless technology is not viable yet power lines are ubiquitous; deploying wireless technology makes an appealing business case only when expensive equipment installation can be shared, and deploying exclusive wireless technology across dispersed premises is cost prohibitive. However, in certain circumstances PLC is susceptible to interference, and it offers extremely low bandwidth – less than ~20 kbps – while real-time-data-intensive AMI requires bandwidth of up to 100 kbps per device. In dense cities, AMI deployments use 900 MHz wireless mesh networks for data transmission; connectivity between meters and collection endpoints is obtained via a dedicated network using unlicensed radio spectrum, run by the utility or a subcontractor. A star network is another wireless alternative, using fixed point-to-multipoint RF links over licensed spectrum and communication towers. Broadband communications options that support more bandwidth, such as IEEE 802.16e mobile WiMAX, broadband PLC, next-generation cellular technologies, and satellite technologies, are other possible choices. As data volumes grow, bandwidth requirements tend to rise.


Utility Data Center

Information flow from concentration points to the utility typically runs over a private network. A variety of technologies are available: fiber optic cable, T1 lines, microwave networks, or star networks can be used to send data from the hub to the utility. Sophisticated smart grid applications supporting two-way and frequent communication require bandwidth of at least 500 kbps to dispatch data from a concentration point to a utility. Currently, many AMI networks support only intermittent connectivity to the utility – data gets aggregated at a neighborhood node and is sent to the utility periodically. More bandwidth may be needed to support more functionality or more real-time connectivity.







6.3 Electric Vehicle Impact on the Grid

Electric cars are becoming more popular every day. What is motivating people to switch from traditional gasoline cars to electric or plug-in hybrid vehicles? A host of reasons are cited for the increase in popularity, including environmentalism, the cost of gasoline, the desire to distance ourselves from foreign energy sources, and the lower long-term cost of owning and maintaining electric vehicles. These reasons are a clear demonstration that electric vehicles are not simply a passing trend, but rather a monumental change in the way we think of daily transportation. Whether all of these are valid reasons to switch to electric vehicles is not the focus of this article. Instead, the focus of examination will be the size of the electric vehicle market, the effect of its increasing popularity, and the ways in which it changes how electric utilities manage their operations and grid.


Electric Vehicle Market

The following graph shows the total number of electric vehicles year over year. Over the last several years, the largest adopter of electric vehicles worldwide has by far been the United States. In fact, nearly a third of the world’s electric vehicles are being driven on U.S. roadways, putting U.S. electric utilities in a unique position to define the way in which electric vehicles are managed in relation to the grid. Assuming this trend of increasing electric vehicle ownership continues, changes must be made in order to manage the influx of electric vehicles being plugged into the grid to charge.

With stricter policies regarding automobile emissions, friendly policies in regards to alternatively fueled vehicles including tax breaks and incentives, and the rapid expansion of electric vehicle manufacturing, it is only logical to assume that the number of electric vehicles on the road will continue to increase at a rapid rate. Many new electric vehicle models are being released to the market, and the current models are already selling quite well. In the long term, all of this translates to a larger population of drivers using and owning electric vehicles.

An important factor in determining which countries upgrade their grids to deal with the increased demand and stress of EV charging is where electric vehicle ownership is concentrated within a country. It can be assumed that early adopters of electric vehicles will live in certain highly populated areas that offer charging centers and have the infrastructure available to accommodate electric vehicle charging. Currently in the U.S., the highest concentration of EV ownership is in California. Although the current electrical grid in the U.S. is said to be able to support 150 million battery-powered cars, electric vehicle adoption in concentrated areas can be problematic for local grids. A single fast charger can consume as much electricity and generate as much demand as an entire small household. If multiple people in one neighborhood install these devices, it creates issues for local grids and transformers sized for smaller neighborhoods. A few solutions address this problem; one is the modernization of the electrical grid.
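
The neighborhood-transformer concern can be sketched with rough numbers. The transformer rating, household peak, and charger draw below are assumed values chosen only to illustrate the effect:

```python
# Assumed figures for a small residential neighborhood.
TRANSFORMER_KVA = 50     # shared transformer rating (assumed)
HOUSEHOLD_PEAK_KW = 5    # average household peak draw (assumed)
FAST_CHARGER_KW = 7      # home fast-charger draw (assumed)

def transformer_headroom(n_homes, n_chargers):
    """kW of headroom left on the transformer (negative = overloaded).

    Treats kVA as roughly equal to kW for simplicity.
    """
    load = n_homes * HOUSEHOLD_PEAK_KW + n_chargers * FAST_CHARGER_KW
    return TRANSFORMER_KVA - load

# Eight homes fit with room to spare; add two EV chargers and the
# transformer tips into overload.
before = transformer_headroom(n_homes=8, n_chargers=0)
after = transformer_headroom(n_homes=8, n_chargers=2)
```

Each charger behaves like an extra household or more, which is why a few EV purchases on one street can matter even when the national grid has ample capacity.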



Many electric utilities are already in the process of upgrading their grids through increased capacity, smart meters, and many more enhancements. These upgrades, especially ones which deal with capacity, must be introduced in high-demand areas first, so many programs have been developed to help pinpoint areas which need priority upgrades. One example is a customer-driven program in which utility companies ask their customers to let them know if and when they purchase an electric vehicle. Electric utilities are also using smart meter information and data to determine peak charging times and areas which generate the highest demand, so these areas can be prioritized in grid upgrades.



One of the most significant impacts of electric vehicles is the impact on peak usage. Many people come home from their commute and plug in their cars. As discussed previously, this can be problematic in smaller neighborhoods where the local grid cannot handle the increased demand for electricity. It can be handled in a few ways, including delayed charging, time-of-use plans, and shifted usage plans. Demand spikes can cause issues for utility companies that may need to suddenly shift their energy production to another plant or suddenly increase production. These sudden increases not only stress the grid but can also cost a significant amount of money, as utilities must produce electricity to meet the increased peak in demand, especially considering most electric grids are currently built only for transmission and not storage. This means that any electricity that is produced and transmitted must be used when it is generated. This particular issue creates a unique opportunity for electric vehicles to fill gaps during peak demand times when they are not charging but are still connected to the grid.



Thus far, we have discussed the potential negative impacts on the electric grid brought about by the advent of electric vehicles and their increasing popularity. However, the rising popularity of electric cars also creates many positive impacts for the grid and for energy providers such as utility companies. We have already briefly mentioned some of the potential benefits for customers, including savings on fuel costs and the low maintenance costs of electric vehicles. But how does this benefit the utility companies?

The major opportunity for utility companies in the shift from fuel-powered to electric vehicles is the shift of an entire market from petroleum providers to utilities, allowing them to increase their bottom line while accommodating the emerging needs of their customers. Beyond increased revenue, EVs also provide an opportunity for cost savings. As previously mentioned, most grids cannot store electricity, so generation must be ramped up to meet peak demand. Electric vehicles, however, can store electricity, which means they could supply power back to the grid during peak demand hours. Peak demand is currently driven primarily by heating and cooling, which happen to peak at times when people are generally not driving or charging their vehicles. If enough electric vehicles are on the road and connected to the grid during those periods, they could help meet peak demand with the power stored in their battery cells.
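The vehicle-to-grid idea above can be sketched with simple arithmetic. The fleet size, participation rate, discharge power, and spare battery capacity below are all illustrative assumptions, not utility data:

```python
# Back-of-envelope sketch of vehicle-to-grid (V2G) capacity.
# All figures are illustrative assumptions, not measured values.

def v2g_capacity_mw(num_evs, participation, discharge_kw):
    """Aggregate power (MW) a fleet of grid-connected EVs could supply."""
    return num_evs * participation * discharge_kw / 1000.0

def v2g_energy_mwh(num_evs, participation, usable_kwh):
    """Aggregate energy (MWh) the same fleet could deliver from storage."""
    return num_evs * participation * usable_kwh / 1000.0

# Hypothetical city: 10,000 EVs, 25% plugged in and enrolled,
# each able to discharge 7 kW and spare 10 kWh of battery.
power = v2g_capacity_mw(10_000, 0.25, 7.0)    # 17.5 MW
energy = v2g_energy_mwh(10_000, 0.25, 10.0)   # 25.0 MWh

print(f"{power:.1f} MW for about {energy / power:.1f} hours")
```

Even under these modest assumptions, a fraction of a city's EVs could supply power on the scale of a small peaking plant for an hour or two.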

If electric cars are used to combat peak demand from heating, cooling, and other demand-generating events, how should utilities handle the increased demand from EV charging itself?



With the advent of smart meters and the smart grid, it is possible to determine when and how people use electricity. The most effective way to combat peak demand generated by EV charging is time-of-use plans, which reward customers who shift their usage to off-peak times with lower electricity rates. If electricity costs more during on-peak hours, people will be less inclined to charge their cars during those hours. In the long run, this can save utility companies a significant amount of money. It also encourages EV owners to use the delayed charging capabilities of their vehicles, which allow a car to be plugged in but not begin charging until a specified time. This approach, in combination with electrical grid upgrades, is the best way to mitigate the serious negative impacts that may arise from the rising popularity of EVs.
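The incentive behind a time-of-use plan is easy to see with numbers. The rates and charge size below are hypothetical, chosen only to illustrate the gap between on-peak and off-peak pricing:

```python
# Sketch of how a time-of-use (TOU) plan changes the cost of one EV charge.
# The rates and session size are hypothetical, for illustration only.

ON_PEAK_RATE = 0.30   # $/kWh, assumed evening rate
OFF_PEAK_RATE = 0.10  # $/kWh, assumed overnight rate

def charge_cost(kwh, rate_per_kwh):
    """Cost in dollars to deliver `kwh` of charge at a flat TOU rate."""
    return kwh * rate_per_kwh

session_kwh = 40.0  # roughly a 150-mile top-up for a typical EV

peak = charge_cost(session_kwh, ON_PEAK_RATE)       # $12.00
off_peak = charge_cost(session_kwh, OFF_PEAK_RATE)  # $4.00
print(f"on-peak ${peak:.2f} vs off-peak ${off_peak:.2f}")
```

A 3:1 rate spread turns every delayed charge into a direct saving for the customer and a shaved peak for the utility.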

Electric vehicles are increasing in popularity, and this steady growth in EVs on the road will have significant impacts on the electric grid. However, these impacts need not be negative if utility companies use their existing smart grid infrastructure to anticipate where upgrades are needed and use the data collected by smart meters to implement time-of-use plans, which can shift demand to off-peak times such as overnight charging. The advent of the electric vehicle is a unique and exciting opportunity, not only for increased utility revenue, but also to decrease our dependence on foreign energy sources and distance ourselves from carbon-emitting fuels such as petroleum.





Today, utility providers rely heavily on coal, natural gas, and oil for their energy. Fossil fuels are nonrenewable, finite resources that will eventually dwindle, becoming increasingly expensive and environmentally damaging to retrieve. In contrast, renewable energy resources are constantly replenished. They include biomass, geothermal, hydrogen, hydroelectric, solar, and wind.

Most renewable energy comes from the sun, either directly or indirectly. Solar energy is used directly with solar panels or solar heat exchangers. Indirectly, solar energy drives the winds, and plants convert it into carbohydrates that are later harvested for food or fermented to create transportation fuels like hydrogen and ethanol.

Today, an emphasis is being placed on creating clean and renewable energy resources as a means of reducing the carbon footprint and its effects on climate change. In 2015, the United States generated about four million gigawatt-hours (GWh) of electricity. About two-thirds of the electricity generated was from fossil fuels.

Major energy sources and percent share of total U.S. electricity generation in 2015:

  • Coal = 33%
  • Natural gas = 33%
  • Nuclear = 20%
  • Hydropower = 6%
  • Other renewables = 7%
      • Wind = 4.7%
      • Biomass = 1.6%
      • Solar = 0.6%
      • Geothermal = 0.4%
  • Petroleum = 1%
  • Other gases = <1%
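The 2015 shares listed above can be sanity-checked with a few lines of arithmetic, confirming that the renewable sub-categories add up to the "other renewables" figure and that fossil fuels supply roughly two-thirds of generation:

```python
# Quick consistency check of the 2015 U.S. generation shares listed above.

shares = {
    "coal": 33.0, "natural gas": 33.0, "nuclear": 20.0,
    "hydropower": 6.0, "petroleum": 1.0,
}
other_renewables = {"wind": 4.7, "biomass": 1.6, "solar": 0.6, "geothermal": 0.4}

renewable_total = sum(other_renewables.values())  # ~7.3, listed as "7%"
fossil_total = shares["coal"] + shares["natural gas"] + shares["petroleum"]

print(f"other renewables: {renewable_total:.1f}%")  # about 7%
print(f"fossil fuels: {fossil_total:.0f}%")         # about two-thirds
```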



When people think of renewable energy, they generally think of wind and solar. However, biomass, derived from plant material and animal waste, is one of the oldest sources of renewable energy. Biomass absorbs energy from the sun and regrows over a short period of time.

Compared to fossil fuels, which take millions of years to form, biomass has a clear advantage in the time it takes to replenish. Like fossil fuels, biomass is mainly used to create heat, either by burning the material directly or by converting it into transportation fuels for use in generators and engines.

In the United States, biomass fuels provide about 1.6% of the energy used for electricity generation and about 2.5% of the energy used in transportation fuels. Of this, 46% of the energy from biomass came from wood and wood products, 43% from biofuels, and 11% from municipal waste.

Examples of biomass energy include:

  • Wood and Plant Wastes—burned to heat buildings, produce heat, and generate electricity
  • Agricultural Crops—burned or converted to liquid biofuels
  • Biodegradable Garbage—burned to generate electricity in power plants
  • Animal Manure and Human Sewage—converted to biogas and burned as a fuel to generate electricity

BIOMASS POWER GENERATION

In biomass power plants, municipal waste, wood and wood waste, and biogas (predominantly methane) are burned to heat water and produce steam that runs a turbine and generates electricity. Additionally, biomass is burned to provide heat to industries and homes; burning wood in a fireplace is a great example.

Burning biomass isn’t the only way to use its energy; biomass can also be converted to other forms of energy. Transportation fuels such as ethanol and biodiesel are used to power automobiles, trains, and even ships. In the United States, corn and sugar cane are used as sources of ethanol: carbohydrate-rich crops are grown and fermented, and the resulting ethanol is blended with gasoline. Today, most cars can run on 10% ethanol, and some flex-fuel cars can run on E85 (85% ethanol).



U.S. Geothermal Resources

Geothermal energy is heat generated below the Earth’s crust. Molten rock, called magma, contains 50,000 times more energy than all of the natural gas and oil resources in the world. Because geothermal energy is accessible nearly everywhere, significant advances in technology have been made to tap into this renewable resource, ranging from complex power stations to small, simple pumping systems, each providing significant advantages over traditional energy resources. Methods of utilizing these resources include geothermal power plants and heat pumps/heat exchangers.

GEOTHERMAL POWER PLANTS

As of 2014, the United States has more than 3,300 Megawatts (MW) of installed generation capacity and is a global leader in this energy category. Eighty percent of this capacity is located in California, where more than 40 geothermal plants provide nearly 7% of the state’s electricity.

Geothermal power plants follow three basic designs. The first and simplest design, known as dry steam, uses steam directly from the geothermal source and transfers it through a turbine to generate electricity.

The second approach, known as flash steam, depressurizes hot water from deep reservoirs, "flashing" it into steam that drives a turbine to generate electricity. Despite the expense and difficulty of deep drilling, flash steam is the most common design.

The third approach, called a binary cycle system, passes hot water through a heat exchanger, which transfers the heat to a second liquid (such as isobutane); that liquid vaporizes and drives a turbine. This approach, commonly referred to as a closed-loop system, prolongs the life of the geothermal source by retaining the superheated water and reducing waste. Below is an illustration of all three approaches.


The Three Basic Designs for Geothermal Power Plants: Dry Steam, Flash Steam, and Binary Cycle

GEOTHERMAL HEAT PUMPS

Almost everywhere, the upper 10 feet of the Earth’s surface maintains a nearly constant temperature between 50°F and 60°F. Geothermal heat pumps tap into this resource to heat and cool buildings. As shown in the figure below, these systems consist of a heat pump, an air delivery system (ductwork), and a heat exchanger buried in the ground. In the winter, the heat pump extracts heat from fluid circulating through the ground loop and transfers it into the building to warm the air. In the summer, the process is reversed: heat is pulled from the building’s air, carried through the ground loop, and rejected into the relatively cool earth.

In regions with temperature extremes, ground-source heat pumps are the most environmentally clean and energy-efficient heating and cooling systems available. Far more efficient than electric heating and cooling, these systems can circulate as much as five times the energy they consume in the process.

The U.S. Department of Energy conducted a study and found that heat pumps can save average households hundreds of dollars in energy costs each year. The system typically pays for itself in eight to twelve years. Tax credits and other incentives can reduce the payback period to five years or less.
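The payback figures above follow from simple arithmetic. The cost premium and annual savings below are assumptions chosen to land in the quoted eight-to-twelve-year range:

```python
# Sketch of the payback math behind ground-source heat pumps.
# The premium and savings figures are assumptions for illustration.

def payback_years(upfront_premium, annual_savings, incentives=0.0):
    """Years for energy savings to repay the extra installation cost."""
    return (upfront_premium - incentives) / annual_savings

premium = 6000.0  # assumed extra cost vs. a conventional HVAC system
savings = 600.0   # assumed annual energy-bill savings ("hundreds of dollars")

print(payback_years(premium, savings))           # 10.0 years
print(payback_years(premium, savings, 3000.0))   # 5.0 years with incentives
```

A tax credit that covers half the premium cuts the payback period in half, which is how incentives bring it down to five years or less.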

Heat pump diagram

Today, more than 600,000 ground-source heat pumps supply climate control in U.S. homes and other buildings. Although significant, this is still only a small fraction of the U.S. heating and cooling market, and several barriers remain. Despite their long-term savings, geothermal heat pumps have higher upfront costs. Installing them in existing homes and businesses can be difficult, since it involves digging up areas around a building’s structure. Finally, many heating and cooling installers are simply not familiar with the technology.



Hydrogen is the simplest of all the elements, consisting of only one proton and one electron. Despite its simplicity, it is the most plentiful element in the universe; however, hydrogen doesn’t occur naturally as a gas on Earth. It is always combined with other elements in molecules. Water (H2O), for example, is a molecule made up of hydrogen and oxygen.

Hydrogen is also found in many organic compounds, notably the hydrocarbons that make up many fuels, such as gasoline, natural gas, methanol, and propane. Most of the hydrogen produced today is separated from natural gas using a process called natural gas reforming (steam methane reforming). An electrical current can also be used to split water into its components of oxygen and hydrogen, a process known as electrolysis.

Hydrogen in pure form is high in energy and can be burned to release that energy. The key advantage of pure hydrogen as a fuel is that burning it produces almost no pollution, making it a clean source of energy. Currently, energy from hydrogen is harnessed in two main ways: direct combustion of hydrogen fuel and hydrogen fuel cells.

PURE HYDROGEN

In the 1970s, NASA was looking for a fuel source that was high in energy and clean burning. Combined with oxygen, hydrogen was used to power rockets and the space shuttle. Today, some cars use liquid hydrogen as a fuel with promising results.


Hydrogen Fuel Cell

6.2.3.2    FUEL CELLS

A fuel cell combines hydrogen and oxygen to produce electricity, heat, and water. Similar to batteries, fuel cells convert the energy produced by a chemical reaction into usable electricity. Unlike batteries, as long as hydrogen is supplied, the fuel cell never loses its charge.
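The energy available from that reaction can be sketched with hydrogen's heating value. The 55% electrical efficiency below is an assumption; real stack efficiencies vary:

```python
# Rough sketch of the electricity a fuel cell can extract from hydrogen,
# using hydrogen's lower heating value (~33.3 kWh per kg) and an assumed
# 55% electrical efficiency.

H2_LHV_KWH_PER_KG = 33.3

def fuel_cell_output_kwh(h2_kg, efficiency=0.55):
    """Electrical energy (kWh) from `h2_kg` of hydrogen at given efficiency."""
    return h2_kg * H2_LHV_KWH_PER_KG * efficiency

# One kilogram of hydrogen:
print(f"{fuel_cell_output_kwh(1.0):.1f} kWh")  # ~18.3 kWh of electricity
```

For comparison, 18 kWh is on the order of what a U.S. home uses in half a day, from a single kilogram of fuel whose only direct byproducts are water and heat.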

Fuel cells are a promising technology for use as a source of heat and electricity for buildings and as a power source for the electric motors propelling vehicles. Fuel cells operate best on pure hydrogen, but fuels like natural gas, methanol, or even gasoline can be reformed to produce the hydrogen they require. Some fuel cells can even be fueled directly with methanol, without using a reformer.

THE FUTURE OF HYDROGEN

In the future, hydrogen could join electricity as an important energy carrier. An energy carrier moves and delivers energy in a usable form to consumers. Renewable sources like the sun and wind can’t produce energy all the time, but they could, for example, produce electricity and hydrogen, which can be stored until it’s needed.

Hydrogen could also be used as a fuel for zero-emissions vehicles, to heat homes and offices, and to fuel aircraft. However, before hydrogen can play a bigger role in energy production and become a widely used alternative to gasoline, many new facilities and systems must be built.



On Earth, water is constantly moving around in various forms. As an example, water evaporated from oceans combines to form clouds, which eventually condense and precipitate in the form of rain and snow. All this movement provides an enormous opportunity to harness useful energy. Hydroelectric generators take advantage of this movement to create electricity.



In the United States, hydropower has grown steadily, from 56 gigawatts (GW) of installed capacity in 1970 to more than 78 GW in 2011. To generate electricity from the kinetic energy of moving water, the water must move with sufficient speed and volume to spin a turbine. Roughly speaking, one gallon of water per second falling one hundred feet can generate one kilowatt of electricity. To increase the volume of moving water, impoundments or dams are used to collect it. An opening in the dam uses gravity to drop water down a pipe called a penstock. The moving water spins a turbine, which rotates magnets inside a generator to create electricity.
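The gallon-per-second rule of thumb above follows directly from the hydraulic power formula, P = ρ · g · Q · h (density × gravity × flow × head). A quick check:

```python
# Verifying the rule of thumb: one gallon per second falling one hundred
# feet yields roughly one kilowatt, before turbine losses.

RHO = 1000.0          # water density, kg/m^3
G = 9.81              # gravitational acceleration, m/s^2
GALLON_M3 = 0.003785  # one U.S. gallon in cubic meters
FOOT_M = 0.3048       # one foot in meters

def hydro_power_kw(flow_m3_s, head_m, efficiency=1.0):
    """Hydraulic power in kW for a given flow rate and head."""
    return RHO * G * flow_m3_s * head_m * efficiency / 1000.0

p = hydro_power_kw(1 * GALLON_M3, 100 * FOOT_M)
print(f"{p:.2f} kW")  # ~1.13 kW before losses, i.e. roughly one kilowatt
```

Real turbines capture 80-90% of this hydraulic power, which is why "about one kilowatt" is a fair rule of thumb.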

Since hydropower depends on rivers and streams for generation, the potential to use hydropower as a source of electricity varies across the country. For example, the Pacific Northwest (Oregon and Washington) generates more than two-thirds of its electricity from hydroelectric dams.

In addition to very large plants in western states, the United States has many smaller hydropower plants. In 1940 there were 3,100 hydropower plants across the country, though by 1980 that number had fallen to 1,425. Since then, a number of these small plants have been restored. As of 2013, 1,672 hydro plants (not including pumped storage) were in operation.

Hydropower can also be generated without a dam, through a process known as run-of-the-river. In this case, the volume and speed of the water are not augmented by a dam; instead, a run-of-river project spins the turbine blades by capturing the kinetic energy of the moving river. Projects with dams can control when electricity is generated, because the dam controls the timing and flow of the water reaching the turbines, so they can generate power when it is most needed and most valuable to the grid. Because run-of-river projects do not store water behind dams, they have much less control over the amount and timing of generation.

PUMPED STORAGE

Another type of hydropower technology is pumped storage. In a pumped storage plant, water is pumped from a lower reservoir to a higher reservoir during off-peak times, when electricity is relatively cheap, using electricity generated from other energy sources. Pumping the water uphill creates the potential to generate hydropower later: when power is needed, the water is released back into the lower reservoir through turbines. Some power is inevitably lost, but pumped storage systems can be up to 80% efficient; for every 10 MWh used to pump water uphill, about 8 MWh can be recovered. Currently more than 90 GW of pumped storage capacity is available worldwide, with about 20% of that in the United States. The need for storage to capture generation from high penetrations of variable renewable energy (e.g., wind and solar) for later use could increase interest in building new pumped storage projects.

THE FUTURE OF HYDROPOWER

Advances in ‘fish-friendly’ turbines and improved data collection techniques to increase the effectiveness of fish passage technologies create exciting new opportunities for the hydropower industry. If constructed and operated in a manner that minimizes environmental and cultural impacts, hydropower projects can provide low-cost, clean sources of electricity to urban and rural areas throughout the world. Harvesting the power from our rivers can be part of a smart and diverse set of solutions for reducing our dependence on fossil fuels, and the impact they have on our climate and public health. The ability to ramp up and down hydropower generation is a valuable source of flexible generation on the electricity grid, which can directly displace coal and natural gas, and help integrate larger amounts of variable renewable energy resources, like wind and solar power.


6.2.5    SOLAR

Solar energy is energy from the sun converted into thermal or electrical energy. It is the cleanest and most abundant renewable energy source available. A variety of technologies convert sunlight into usable energy; the most commonly used for homes and businesses are solar photovoltaics, concentrated solar heating, and passive solar heating.

SOLAR PHOTOVOLTAICS

Solar cells, also called photovoltaic (PV) cells, convert sunlight directly into electricity. PV gets its name from the process of converting light (photons) into electricity (voltage), known as the PV effect. In 1954, scientists at Bell Telephone discovered that silicon creates an electric charge when exposed to sunlight; soon, solar cells were being used to power space satellites and smaller items like calculators and watches. Traditional solar cells are made from silicon and tend to be the most efficient. Second-generation, or thin film, solar cells are made from amorphous silicon or non-silicon materials such as cadmium telluride, using layers of semiconductor materials only a few micrometers thick. Due to their flexibility, thin film solar cells can double as rooftop shingles and tiles, building facades, or the glazing for skylights.

CONCENTRATED SOLAR HEATING

Many of today’s power plants use fossil fuels as a heat source to generate electricity. However, a new generation of power plants with concentrated solar power systems uses the sun instead. The three main types of concentrated solar power systems are linear concentrators, dish/engine systems, and power towers.

  • Linear concentrator systems collect the sun’s energy using long, rectangular, curved mirrors tilted toward the sun, focusing sunlight on pipes (receivers) that run the length of the mirrors. The reflected sunlight heats a fluid flowing through the pipes, which is used to boil water in a conventional steam generator.
  • A dish/engine system uses a large mirrored dish similar to a satellite dish. The dish-shaped surface directs and concentrates sunlight onto a thermal receiver, which absorbs the heat and transfers it to the steam generator.
  • A power tower system uses a large field of flat, sun-tracking mirrors known as heliostats to focus and concentrate sunlight onto a receiver at the top of a tower, where a heat-transfer fluid is heated and used to generate steam for a steam generator.

PASSIVE SOLAR HEATING

Commercial and industrial buildings can use the same solar technologies used for residential buildings: photovoltaics, passive heating, daylighting, and water heating. Nonresidential buildings can also use solar energy technologies that would be impractical for a home. These technologies include ventilation air preheating, solar process heating, and solar cooling.

Solar water-heating systems are designed to provide large quantities of hot water for nonresidential buildings. A typical system includes solar collectors that work along with a pump, heat exchanger, and/or one or more large storage tanks. The two main types of solar collectors used for nonresidential buildings, evacuated-tube collectors and linear concentrators, can operate at high temperatures with high efficiency. An evacuated-tube collector is a set of many double-walled glass tubes and reflectors that heat the fluid inside the tubes; a vacuum between the two walls insulates the inner tube, retaining the heat. Linear concentrators use long, rectangular, curved (U-shaped) mirrors tilted to focus sunlight on tubes that run along the length of the mirrors, heating the fluid within.

Many large buildings need ventilated air to maintain indoor air quality, and in cold climates, heating this air can use large amounts of energy. A solar ventilation system can preheat the air, saving both energy and money. This type of system typically uses a transpired collector: a thin, black metal panel mounted on a south-facing wall to absorb the sun’s heat. Air passes through many small holes in the panel, mixes in a space behind the perforated wall, and is then drawn from the top of that space into the ventilation system.


6.2.6    WIND

Humans have been harnessing the wind’s energy for hundreds of years, from old windmills used for pumping water or grinding grain, to ships using sails to move. Today, the energy of the natural wind in our atmosphere is captured and converted into mechanical energy that drives a generator to create electricity.

Wind is the movement of air from an area of high pressure to an area of low pressure, caused by the uneven heating of the atmosphere by the sun, irregularities of the earth’s surface, and the rotation of the earth. To capitalize on wind power, wind turbines are installed in areas where winds are consistent all year round.

WIND TURBINES

Wind turbines, like windmills, are mounted atop steel tubular towers up to 325 feet tall. The tower supports both a “hub,” which secures the wind turbine blades, and the “nacelle,” which houses the turbine’s shaft, gearbox, generator, and controls. Each turbine is equipped with wind assessment equipment and will automatically rotate to face the wind and angle, or “pitch,” its blades to optimize energy capture. Usually, the hub has two or three propeller-like blades mounted on a shaft to form a rotor. Each blade acts like an airplane wing: when the wind blows, a pocket of low-pressure air forms on the downwind side of the blade, causing the rotor to turn. The gearbox then steps up the rotation speed of the rotor and spins an internal shaft connected to a generator that produces electricity.
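The power a turbine extracts follows a standard formula: the wind carries power P = ½ · ρ · A · v³ over the swept area, and the turbine captures a fraction Cp of it (the Betz limit caps Cp at about 0.59). The rotor size, wind speed, and Cp below are assumptions for illustration:

```python
import math

# Sketch of wind turbine output: P = Cp * 1/2 * rho * A * v^3.
# Rotor diameter, wind speed, and Cp are assumed example values.

AIR_DENSITY = 1.225  # kg/m^3 at sea level

def turbine_power_mw(rotor_diameter_m, wind_speed_m_s, cp=0.40):
    """Electrical power (MW) a turbine extracts from steady wind."""
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    wind_power = 0.5 * AIR_DENSITY * swept_area * wind_speed_m_s ** 3
    return cp * wind_power / 1e6

# A 100 m rotor in a steady 8 m/s wind:
print(f"{turbine_power_mw(100, 8):.2f} MW")  # ~0.99 MW
```

Because power scales with the cube of wind speed, a site with 10 m/s winds yields nearly twice the output of one with 8 m/s winds, which is why consistent, windy sites matter so much.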

Wind is a clean source of renewable energy that produces no air or water pollution. And since the wind is free, operational costs are nearly zero once a turbine is erected. Wind turbines can be used as stand-alone applications, or they can be connected to a utility power grid or even combined with a photovoltaic system. Mass production and technology advances are making turbines cheaper, and many governments offer tax incentives to spur wind-energy development.

In the United States, Texas leads with the most wind farms (42) and a combined wind generation capacity of 17,713 MW. The wind energy industry is booming: globally, generation more than quadrupled between 2000 and 2006, and at the end of 2015, global capacity reached more than 432,419 MW.





Distributed generation is energy generated by small devices near the end user; these systems are known as distributed energy resources (DER). The traditional electric grid in the United States consists of bulk generation located far from the concentrated customer base. This configuration is known as centralized energy, and a large transmission network is needed to transport the generated electricity long distances to the customer. In contrast, DER systems are decentralized energy sources: small, independent generators located near the consumer load. They are often renewable sources such as photovoltaic systems, wind turbines, geothermal systems, small hydro units, biomass sources, or biogas generators. Distributed generation has no standard definition, but it is commonly accepted that DERs are less than 100 megawatts (MW) in size, and they are usually less than 50 MW in order to connect at the voltage levels accommodated by the distribution network.



Distributed Generation

At the end of the 19th century, during the development of the electric grid, distributed generation accounted for all of the nation’s electricity needs in the form of direct current (DC) equipment, and only small pockets of the U.S. had access to electricity. The first commercial power plant, the Pearl Street Station in Manhattan, was a central generation station that was still localized with its customer base. The development of alternating current (AC) technology allowed electricity to be safely transported over much longer distances, enabling the centralized generation and transmission system of the 20th century. Demand for electricity grew exponentially during the early 1900s; large generating units were developed to meet those needs, and economies of scale lowered the cost of providing electricity to end users.

However, the second half of the 1900s experienced a leveling off of energy growth. Rising fuel prices and uncertain markets spurred research into alternative energy generation methods. In the mid-1990s, research from the past decade had produced economically-viable methods of small-scale electricity generation that could compete with the cheap electricity generated from large scale equipment. Additionally, the Energy Policy Act of 1992 sought to bring competition to the power industry, a concept previously unheard-of in this industry of natural monopolies, and this policy gave non-utility and private investors motivation to implement distributed generation technologies.

Today, some states have deregulated, wholesale energy markets, and many offer incentives to the end user for generating their own energy and exporting surplus electricity to the grid. During 2012, $150 billion was invested in distributed generation, and out of the total amount of new generating capacity added that year, distributed generation accounted for roughly 39 percent.  As distributed generation and smart grid technologies advance, central and distributed generation sources will integrate and complement each other to produce a safe, reliable, environmentally-sensitive, economically-sound electric grid.



Distributed generation technologies can take many forms. They can be mobile, such as generators on large ships, but we will focus on the impact of stationary DER modules that supplement energy from the centralized power grid. Units can be connected to the grid or kept off-grid and can be used for continuous, peak, or backup power. Furthermore, distributed generation looks different to industrial and commercial customers compared to residential users.

Distributed Generation Grid

Organizations are more likely to own small localized power plants, while individual energy consumers will own singular modules like a small solar panel array. Power plants on the local level can be connected directly to the public grid, producing electricity that is sold to the market, or kept off-grid, producing electricity used solely on-site. End-user DERs are most often connected to the grid from the customer’s side of the meter rather than islanded, and only surplus energy that the customer cannot use is placed on the public grid. The former type can generate up to 100 MW of power, while the latter usually generates less than 10 MW. In contrast, the average coal power plant has a power output of 500 MW, and a typical nuclear power plant outputs 1,000 MW. The most common types of distributed generation are described below.

Currently, wind turbines produce the most power from renewable resources, excluding hydro. Wind power is appealing because it does not require fuel and is therefore unaffected by fluctuating fuel costs. No forms of pollution are generated from wind turbines, and the ratio of power generation to operating cost is very favorable compared to other generation sources.  However, wind turbine installation has high initial costs, and the energy production is unpredictable and volatile.

Solar power systems are the most common DERs among residential owners because photovoltaic or thermal panels can be installed on the roofs of homes.  The power output of these units can be customized to fit the budget and energy needs of the individual customer. The standard stationary solar panel has no moving parts and therefore, requires less maintenance than other generators. They require no fuel and are quiet, unobtrusive additions to a residential home.
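Customizing a rooftop array to a home's energy needs comes down to a sizing estimate. The panel wattage, sun hours, and loss factor below are assumptions; actual values vary considerably by site:

```python
import math

# Rough sizing sketch for a residential rooftop PV array.
# Panel wattage, sun hours, and system losses are assumed example values.

def panels_needed(annual_kwh, panel_watts=400, sun_hours_per_day=4.5,
                  system_efficiency=0.85):
    """Number of panels to cover a home's annual electricity use."""
    kwh_per_panel_per_year = (panel_watts / 1000.0) * sun_hours_per_day \
                             * 365 * system_efficiency
    return math.ceil(annual_kwh / kwh_per_panel_per_year)

# A home using ~10,500 kWh/year (near the U.S. average):
print(panels_needed(10_500))  # 19 panels
```

Halving the budget simply halves the array: a customer can install ten panels now and add more later, which is the incremental flexibility that makes DERs attractive.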

Cogeneration, or combined heat and power (CHP), allows industrial businesses to capture and utilize heat from their processes that would otherwise be wasted. The average efficiency of fossil fuel generation is 35-37 percent, and about two-thirds of the lost energy is wasted heat. CHP systems can recapture this heat for use in the industrial process or for space and water heating, achieving overall efficiencies of up to 90 percent.
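The efficiency figures above can be reconciled with a one-line calculation. The heat-recovery fraction below is an assumption chosen to reproduce the quoted 90 percent:

```python
# Illustrating CHP fuel utilization: ~36% of fuel energy becomes
# electricity, and a large share of the rejected heat is recaptured.
# The heat-recovery fraction is an assumed example value.

def chp_total_efficiency(electrical_eff=0.36, heat_recovery=0.84):
    """Overall fraction of fuel energy put to use when waste heat is recaptured."""
    waste_heat = 1.0 - electrical_eff
    useful_heat = waste_heat * heat_recovery
    return electrical_eff + useful_heat

print(f"{chp_total_efficiency():.0%}")  # about 90% total fuel utilization
```

Recovering roughly five-sixths of the waste heat is enough to take a 36%-efficient generator to about 90% total fuel utilization.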

Fuel cells use chemical reactions, rather than combustion, to convert fuel into energy. They consist of a cathode and an anode with an electrolyte in between that allows charges to travel from one to the other. Water, heat, and (when hydrocarbon fuels are reformed) carbon dioxide are their only emissions, which makes fuel cells a cleaner energy source than other fossil-fuel power sources. Their high efficiency, low noise, and quick installation make them an appealing alternative, but they have high initial costs, require frequent maintenance, and still rely on fossil fuels.



Centralized power plants are often old, with outdated equipment that produces large amounts of greenhouse gas emissions, and the concentrated nature of those emissions can drastically harm the surrounding ecosystems. Most of the distributed generation in the U.S. comes from renewable energy sources and has significantly lower emissions than traditional coal power plants, a benefit for environmentally conscious customers.

Some industrial and commercial companies look to distributed generation as a way to ensure uninterrupted power and better power quality. The Electric Power Research Institute (EPRI) estimated that power disturbances cost U.S. companies $119 billion in revenue in 2007. Furthermore, 4-9% of electricity is lost to aging transmission technology and grid overload, and the electricity that does reach the customer often suffers from poor power quality, meaning fluctuations in voltage. Customers can limit their dependence on an expansive transmission network by investing in DER systems to use as backups or in parallel with the grid.

By providing localized power to the end user, distributed generation can reduce the electricity demand needed from bulk generation and remove some of the load from transmission lines, which is especially beneficial during peak times of demand. It is costly for utilities to produce and supply energy during peak demand.  They have to utilize extra power plants that might not be as efficient as their other generation sites, and the grid is often congested and overloaded. In some places, distributed generation can reduce enough peak demand from utilities that power plant and transmission expansions and upgrades are not needed to keep up with demand.

Limiting the need for new transmission and power plant investments is a significant motivation for the development of distributed generation technologies. Large power plants require significant capital investment, and fluctuating market conditions lead utilities to be cautious when making decisions to build new generation. Furthermore, building a new power plant increases a utility’s generation capacity by a large factor, but energy consumption has been increasing only moderately in recent years. This disconnect means utilities risk generating excess amounts of electricity, wasting valuable resources, and waiting several years to generate a return on their investment. On the other hand, DER systems allow the total generation capacity to be increased incrementally.

Lastly, centralized, fossil fuel-dependent energy networks present security risks that distributed generation can mitigate. A large power plant is an attractive target for attackers, and if one were damaged and taken offline, the resulting grid failure would disrupt its entire customer base and be slow to recover from. Having many small generators located near consumption reduces the value of any single target and gives the grid flexibility to respond to outages. Furthermore, distributed generation incorporates energy production from a variety of sources, including renewable energy; by diversifying the power supply, the economy becomes less sensitive to price fluctuations and fuel shortages.



First and foremost, a grid with significant amounts of distributed generation needs smart grid technologies in order to manage grid operations, maintain power quality, and balance the generation from all these sources with overall demand.  Some of the required capabilities include forecasting energy demand and availability of renewable energy generation, optimizing control of network switching, calculating generator schedules against controllable loads and storage capacities, and protecting communication and grid data across the network.

One of the biggest tasks to be tackled by the smart grid is the integration of unpredictable energy sources. This need is especially relevant to distributed generation because many DER systems contain renewable energy sources that are intermittent. The uncertainty of how much variable distributed generation the grid can handle is a factor that potential owners and investors have to consider. The main mitigation of this risk is the addition of energy storage units to DER systems, which allow excess energy to be stored and used at a later time. Currently, the most common form of electricity storage is lithium-ion batteries; their size and cost remain constraints that make energy storage an area needing further development, though battery costs are decreasing.

Furthermore, the capital investment required upfront for DER systems often puts them out of reach for the average residential consumer. Even when customers can afford DER systems, many states do not offer monetary compensation to customers who export their surplus energy back to the grid, which greatly lengthens the time before these customers see a return on their investment. Moreover, electricity customers in the U.S. are not typically encouraged to take an active role in managing their electricity use, and this lack of engagement does little to promote the adoption of small DER systems among residential customers. Customers are often uncertain about which local regulations apply and what steps they need to take to connect a solar panel or other energy source to the grid.

All in all, distributed generation is a growing sector of electricity generation. More businesses and residential customers are choosing to supplement their services from their utility with localized generation. While DER systems are not likely to replace centralized power stations any time soon, their presence introduces new challenges that will alter utility operations and business processes. Not only will the utility have to invest in smart grid technology, it will have to redefine its relationship with the customer.  Despite these challenges, distributed energy resources will contribute to a more resilient, reliable electric grid that benefits utilities and customers alike.


If you enjoyed this article, click here to start from the beginning of our Industry 101 Series.

Or to continue your journey, click here to access the next installment of our Industry 101 guide.





Transmission settlements are used for the same purpose as market settlements—to ensure that the supply and demand of power are in sync—but transmission settlements consider this balance from a grid perspective, rather than a consumer one.

Transmission settlements are settled between a transmission operator and a transmission customer, based on the customer’s use of the operator’s grid. Essentially, the transmission customer rents the grid, along with non-competitive ancillary services provided by the operator such as scheduling and voltage support. The charges for these transmission and ancillary services are based on a FERC-approved tariff, and the funds collected by the independent system operator (ISO) are distributed to the transmission owners or ancillary service providers.

Transmission settlement setups often predate the implementation of the energy market and can even exist in a regulated market. Most ISOs use a monthly invoicing process for transmission settlements, rather than the daily settlements used for market settlements, due to less market volatility and less granular data.

Let’s take the Midcontinent Independent System Operator (MISO) as an example and walk through its transmission settlement process:

  • Market participants’ use of the MISO transmission system and mandated, non-competitive ancillary services such as scheduling and voltage support are financially settled by the transmission settlements process.
  • Market participant charges for ancillary services and transmission are determined using the tariff approved by FERC.
  • The transmission owners and the providers of the mandated ancillary services receive the collected funds.
  • Transmission settlements utilize different applications than the market settlements.
  • Transmission settlements predate the market opening and continue to follow the existing transmission settlements schedule.
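The monthly settlement flow above can be sketched as a tariff rate applied to each customer's use of the system, plus charges for the mandated ancillary services. This is a minimal illustration only: the `monthly_invoice` function, the service names, and all dollar rates below are hypothetical, not MISO's actual FERC-approved tariff.

```python
# Hypothetical monthly transmission settlement: a tariff rate applied to each
# customer's transmission usage, plus ancillary service charges. All rates and
# service names are illustrative, not an actual FERC tariff.

TARIFF = {
    "transmission_per_mwh": 2.50,    # $/MWh of scheduled transmission use
    "scheduling_fee": 150.00,        # flat monthly charge ($)
    "voltage_support_per_mwh": 0.30, # $/MWh
}

def monthly_invoice(usage_mwh: float) -> dict:
    """Return an itemized monthly invoice for one transmission customer."""
    transmission = usage_mwh * TARIFF["transmission_per_mwh"]
    voltage = usage_mwh * TARIFF["voltage_support_per_mwh"]
    scheduling = TARIFF["scheduling_fee"]
    return {
        "transmission": transmission,
        "voltage_support": voltage,
        "scheduling": scheduling,
        "total": transmission + voltage + scheduling,
    }

invoice = monthly_invoice(10_000)  # a customer that moved 10,000 MWh this month
print(invoice["total"])  # 25000 + 3000 + 150 = 28150.0
```

In practice the collected totals would then be distributed to the transmission owners and ancillary service providers, as the bullets above describe.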





The prices generated for the day-ahead and real-time markets comprise several categories, each with associated fees. The figures below represent two independent system operators’ (ISOs’) wholesale energy costs for a full or partial year and show how much each fee contributes to the total cost.

Wholesale Energy Cost 2002

Shown at right is a breakdown of several of the main fees that most often make up energy costs.

Ancillary Services – Ancillary services support the transmission of electric power from seller to purchaser. The obligations of control areas, and of the transmitting utilities within them, make these services necessary to maintain reliable operation of the interconnected transmission system.

Some ISOs pay generators for their capacity on top of the cost of energy. These capacity charges serve as an incentive for generators to meet energy requirements in the market at all times, and the charges are then bundled into the energy prices customers pay. Capacity charges are usually calculated from a customer’s peak load contribution (PLC), their contribution to the system’s monthly peak demand, and from the installed capacity (ICAP) price that a load-serving entity must pay to guarantee capacity for its customers. Unforced capacity (UCAP) represents the actual available ICAP at any given time.
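As a rough sketch of how such a charge might be computed, a load-serving entity's monthly capacity cost can be modeled as its customers' peak load contribution multiplied by a capacity price. The `capacity_charge` function and all numbers here are hypothetical, not any ISO's actual methodology.

```python
# Illustrative capacity charge: a load-serving entity pays for capacity in
# proportion to its customers' peak load contribution (PLC).
# The price and PLC values below are hypothetical.

def capacity_charge(plc_mw: float, price_per_mw_day: float, days: int = 30) -> float:
    """Monthly capacity charge = PLC (MW) x capacity price ($/MW-day) x days."""
    return plc_mw * price_per_mw_day * days

# A retailer whose customers contributed 50 MW to the system peak,
# at an assumed clearing price of $100/MW-day over a 30-day month:
print(capacity_charge(50, 100.0))  # 150000.0
```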

Congestion Prices are transmission charges applied when the market becomes congested. Physical congestion cannot be allowed to occur in a transmission system, since pushing lines beyond their limits can cause faults and electrical fires. “Congestion” in this context means that demand on a transmission path exceeds its safe capacity, so some transactions must wait or be rerouted, which can result in higher prices.

Losses refer to the energy lost due to physical resistance in the transmission network. Marginal loss costs are determined by adding the load loss charges, net explicit loss charges, and net inadvertent loss charges, then subtracting the generation loss credits from this total.
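The marginal loss arithmetic just described can be written out directly; the dollar figures in this sketch are illustrative, not taken from any ISO's accounting.

```python
# Marginal loss cost as described above: the three loss charges are added
# together, then generation loss credits are subtracted.
# All dollar figures are illustrative.

def marginal_loss_cost(load_loss: float, net_explicit_loss: float,
                       net_inadvertent_loss: float,
                       generation_loss_credits: float) -> float:
    return (load_loss + net_explicit_loss + net_inadvertent_loss
            - generation_loss_credits)

print(marginal_loss_cost(1200.0, 300.0, 50.0, 400.0))  # 1150.0
```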

Unaccounted for energy (UFE) represents deviations due to unforeseen errors such as measurement errors, modelling errors, energy theft, load profile errors, and distribution loss differences.

UFE is calculated from the difference between the net energy delivered and the total metered demand.
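That difference is a one-line calculation; the megawatt-hour values in this sketch are hypothetical.

```python
# UFE = net energy delivered into the system minus the total demand actually
# recorded by meters. A positive value means some delivered energy went
# unmetered (measurement error, theft, unmodeled losses, etc.).
# The MWh values are hypothetical.

def unaccounted_for_energy(net_delivered_mwh: float, total_metered_mwh: float) -> float:
    return net_delivered_mwh - total_metered_mwh

print(unaccounted_for_energy(10_500.0, 10_280.0))  # 220.0
```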

PJM Wholesale Cost Full Year 2008





Day-Ahead Market

Electricity is continuously generated and consumed around the clock, but settlement periods are defined as distinct half-hour time frames. Electricity suppliers (retailers) assess the demand of their customers in advance for a given settlement period. Once this demand is calculated, they enter into contracts with one or more generation resources in advance to cover this basic minimum expected demand, known as base load. These contracts are essentially over-the-counter (OTC) contracts. If deviations from this base load are expected on a certain day, due to weather or other changing factors, the supplier may also buy electricity from power exchanges to shape the base load. These exchange transactions are generally done closer to the period in question, with the cut-off time being an hour before the period’s start; this cut-off is known as the gate closure for the settlement period.

The base load and exchange transactions are the expected standards in the market, but generators can offer to place additional capacity on the grid and set the price they would like to receive for it or request their normal capacity be reduced and set the price they would like to pay for that option. Similarly, suppliers can set a price to be paid for reduced demand or offer a price they would pay for increased demand. These contracts must also close before the gate closure.

Energy Settlement Process Key Steps

At or before the gate closure time, generators must notify the ISO of their contracted generation volumes, including base load, exchanges, and increased or reduced capacity for the given settlement period. Suppliers must do the same for contracted demand volumes. These notifications are known as final physical notifications (FPN).

Between the gate closure time and the beginning of the settlement period, the ISO compares forecasted demand and the FPNs for generation; if there is a discrepancy, it allows for offers and bids from generators and/or suppliers, depending on the need, to rectify. If the market is working as desired, at the beginning of the settlement period, expected generation should exactly meet expected demand.

Market Day

Electricity generators are expected to generate electricity matching the contracted volume, and the retailer’s customers are expected to consume exactly the generated electricity that the contracted volume covers. In reality, however, generators may produce more or less electricity, and the retailer’s customers may consume more or less, than contracted. Balancing the day-ahead (DA) market is the real-time (RT) market, in which participants can buy and sell energy throughout the operating day. Differences between the scheduled demand from the day-ahead market and the actual demand on the day of trade are balanced through the demand and supply of the RT market.

Day After

The imbalances mentioned before are the basis from which the ISO generates invoices to generators and suppliers. FPNs and bid acceptance data define the contracted volumes, and these expectations are compared against actual half-hourly interval consumption reads from the meter data management agents (MDMA) that give the actual volumes.

Electricity Imbalance Volume = Electricity Volume Consumed or Generated – (Bids/Offers Accepted + Total Contracted Volume)

Suppliers who overconsume or generators who underproduce are required to buy the deficit from the system at the set system buy price (SBP). Conversely, suppliers whose customers consume less than contracted, and generators who produce more than contracted, must sell the excess to the system operator at the system sell price (SSP). The ISO uses the SBP and SSP to generate invoices and sends these to the generators and suppliers.

Electricity Imbalance Amount = Electricity Imbalance Volume * Electricity Imbalance Price (SBP or SSP)
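The two formulas above can be combined into a small settlement sketch. The function names, the SBP/SSP values, and the volumes are all hypothetical; this only illustrates the arithmetic, not any ISO's actual settlement engine.

```python
# Sketch of the imbalance settlement described above. Imbalance volume is the
# actual volume minus the contracted volume (including accepted bids/offers).
# For a generator: negative volume (under-production) means buying the deficit
# at the system buy price (SBP); positive volume (over-production) means
# selling the surplus at the system sell price (SSP).
# All prices and volumes are hypothetical.

def imbalance_volume(actual_mwh: float, contracted_mwh: float,
                     accepted_bids_offers_mwh: float = 0.0) -> float:
    return actual_mwh - (contracted_mwh + accepted_bids_offers_mwh)

def imbalance_amount(volume_mwh: float, sbp: float, ssp: float) -> float:
    price = sbp if volume_mwh < 0 else ssp
    return volume_mwh * price

# A generator contracted for 500 MWh that delivered only 480 MWh:
vol = imbalance_volume(actual_mwh=480.0, contracted_mwh=500.0)
print(vol)                                         # -20.0 -> a 20 MWh deficit
print(imbalance_amount(vol, sbp=60.0, ssp=40.0))   # -1200.0 -> owes $1,200
```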

Generally, an ISO will generate market settlement invoices daily, based on the 48 settlement periods in a defined operating day. However, missing or estimated meter data may later be replaced by real reads, or market participants can dispute the charge. Either of these occurrences could cause the ISO to create altered or “trued-up” invoices based on the new information.



Hourly Prices From Demand Schedules

Independent system operators implement day-ahead (DA) and real-time (RT) energy markets for trading energy. The day-ahead market is an energy market in which participants can buy and sell electric energy at a set price for each hour of the following day. This market is driven by planning: supply offers and demand bids establish financially binding prices for the following day. A buyer, such as a utility, evaluates how much energy it needs to supply to meet demand the following day and how much it is willing to pay for it. A seller, such as a power plant, decides how much energy to provide and at what hourly price. This information is then developed into a standard supply and demand balance from which each hourly price is derived. The standard market design (SMD) issued by the Federal Energy Regulatory Commission (FERC) places ISOs in a position to receive energy and price bids, then sort and select them according to a given objective, which can range from lowest cost to minimizing changes to the original schedules.

Some ISOs have full network models that analyze the availability and cost of producing and delivering energy and integrate these costs into hourly prices called locational marginal prices (LMPs). The main benefit of the day-ahead market is that buyers and sellers can hedge against price volatility through an enforced price.

Locational marginal price is the cost to serve the next unit of load at a specific location with the lowest production cost while still evaluating transmission limit costs.

Locational Marginal Price = Generation Marginal Cost + Transmission Congestion Cost + Cost of Marginal Losses

On the other hand, market clearing price (MCP) is the cost to fulfill the energy demand without considering the transmission limitations. LMP reflects the MCP at each location.
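The LMP decomposition above is simple enough to compute directly. The numbers in this sketch are illustrative: at a node with no congestion and no marginal losses, the LMP collapses to the system-wide MCP.

```python
# LMP as given above: generation marginal cost plus congestion and
# marginal-loss components. Values ($/MWh) are illustrative.

def lmp(generation_marginal_cost: float, congestion_cost: float,
        marginal_loss_cost: float) -> float:
    return generation_marginal_cost + congestion_cost + marginal_loss_cost

mcp = 42.0  # assumed system-wide marginal cost with no constraints ($/MWh)
print(lmp(mcp, 0.0, 0.0))   # 42.0 -- unconstrained node equals the MCP
print(lmp(mcp, 6.5, 1.2))   # 49.7 -- congested, lossy node prices higher
```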



Balancing the day-ahead market is the real-time market, in which participants can buy and sell energy throughout the operating day. Differences between the scheduled demand from the day-ahead market and the actual demand on the day of trade are balanced through the demand and supply of the real-time market. Utilities can buy power to cover gaps in demand not originally planned for in their day-ahead schedule. These deviations can arise for many reasons, including departures from load and generation schedules and network model inaccuracies.

Discrepancies in consumption schedules are not always unintentional. If a buyer predicts that real-time market prices will be lower than day-ahead prices, it may deliberately submit a schedule with underestimated loads. Conversely, if a buyer expects real-time prices to be higher than day-ahead prices, it may submit an overestimated schedule and sell the excess energy back to the ISO at real-time prices. The ability to shift demand between the day-ahead and real-time markets provides a source of demand elasticity in the day-ahead market. It also creates price volatility, since real-time prices can spike while day-ahead prices remain relatively stable.

Deviations in generation schedules can happen for many reasons. Unexpected outages can easily cause the delivered energy to be less than what was expected. Also, energy generators have constraints including start-up time, shut-down time, minimum uptime, and minimum downtime. If a generator falls behind on any of these, they will depend on the real-time market to be able to make up the difference.



While the day-ahead and real-time markets are the most prominent among ISOs, they are not the only trading opportunities, and some ISOs do not offer both. The figure shown here compares four major northeastern ISOs. PJM and ISO New England follow the most common model, operating both real-time and day-ahead markets, while the New York ISO operates both of these as well as an hour-ahead market.

Having and maintaining multiple markets can cause nuances for an ISO. For instance, the NYISO used its security-constrained unit commitment (SCUC) to predict physically available flows as well as MCPs, while its balancing market evaluation (BME) generated hour-ahead prices. It also has a security-constrained dispatch (SCD) that performs a least-cost analysis of the available units every five minutes. While the SCUC and the BME used the same algorithm and stayed in sync, the SCD uses a different model, which can result in significantly differing day-ahead and real-time prices, mostly during shortages. The inconsistencies between the BME and SCD also caused discrepancies in the treatment of reserves: the BME set aside capability to entirely fulfill the energy demand for all reserves, while the SCD can use reserves in real time. In turn, real-time prices can be lower than what the BME originally forecasted.

The California ISO (CAISO) also faced issues when implementing technology to balance and maintain its markets. It implemented the Balancing Energy and Ex-Post Pricing (BEEP) system, which dispatches non-automatic generation control units in ten-minute intervals. Prior to starting market operation in 1998, CAISO enforced a cap on energy bids of $125/MWh in the real-time market, which became known as the BEEP cap. The cap was an interim measure to prevent predicted rises in MCPs due to issues with the BEEP system, and it was later raised to $250/MWh. Further issues with the BEEP system have kept the cap in effect, and predictions for lifting it are currently tied to the implementation of new systems.

Another issue frequently faced by ISOs is the lack of demand elasticity, the ability of customers to respond to prices by adjusting their demand. Demand elasticity is the most important protection customers have against market power. CAISO has identified additional sources of demand elasticity as the bridge to increased competitiveness and, eventually, the elimination of price caps.

While having multiple energy markets gives an ISO further trading options, responsibly maintaining and regulating these can be an extensive burden. Each ISO must constantly monitor their market in order to come up with the necessary rules and regulations that affect every energy market price.








The wholesale market refers to the buying and selling of power between generators and resellers. The resellers can include some or all of the following:

  • Electricity utility companies
  • Competitive power providers
  • Electricity marketers

The price for a wholesale market can be predetermined by a buyer and a seller through a bilateral contract, or it can be set by the organized wholesale market. The clearing price is determined by an auction in which generation resources offer a price at which they can supply electricity. A generation resource whose bid is successful, and which thus contributes its generation to meet the market demand, is said to clear the market. The cheapest resource clears the market first, followed by the next cheapest, until the demand is met. When supply matches the demand, the market is cleared.
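The merit-order auction just described can be sketched in a few lines: sort offers cheapest-first, accept them until demand is met, and let the last accepted offer set the clearing price. The `clear_market` function and the offer data are hypothetical, a simplified single-period model with no transmission constraints.

```python
# Merit-order market clearing as described above: offers are sorted
# cheapest-first and accepted until demand is met; the last (most expensive)
# accepted offer sets the clearing price. Offer data are hypothetical.

def clear_market(offers, demand_mw):
    """offers: list of (price_per_mwh, quantity_mw) tuples.
    Returns (clearing_price, accepted) where accepted maps
    offer index -> MW taken from that offer."""
    order = sorted(range(len(offers)), key=lambda i: offers[i][0])
    accepted, remaining, clearing_price = {}, demand_mw, None
    for i in order:
        if remaining <= 0:
            break
        price, qty = offers[i]
        take = min(qty, remaining)
        accepted[i] = take
        remaining -= take
        clearing_price = price  # last accepted offer sets the price
    return clearing_price, accepted

offers = [(55.0, 100), (20.0, 200), (35.0, 150)]  # ($/MWh, MW)
price, accepted = clear_market(offers, demand_mw=300)
print(price)     # 35.0 -- 200 MW @ $20 clears first, then 100 MW @ $35
print(accepted)  # {1: 200, 2: 100}
```

Note that the $55/MWh offer never runs: once supply meets demand, the remaining, more expensive resources stay out of the market.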



Electricity bought by the reselling entities in the wholesale market is then sold to the end consumers. With electricity reforms and the advent of competition, consumers may have the option of choosing their electricity supply company. Consumers who choose their supplier are known as choice consumers. On the other hand, consumers who do not choose a supplier are served by their incumbent utility through a service called provider of last resort (POLR).

All the retail markets are regulated at the state level.


Energy Market Simplified View

5.2.3 WORKING OF AN ELECTRICITY MARKET

The major players in an electricity market are:

  • Electricity generation resources
  • Electricity suppliers (retailers)
  • Electricity system operator (firms like ERCOT)
  • Domestic/commercial customers

The electricity supplier (retailer) is responsible for purchasing power from the wholesale market through either long-term contracts or several short-term agreements, selling it to the customers, and billing them for the electricity used. As discussed above, with the advent of competition, consumers can choose their supplier and suppliers can choose their generation resource.

The electricity system operator (like ERCOT) is responsible for:

  • Balancing the system in real-time, i.e. matching the demand and supply of electricity at the agreed frequency.
  • Calculating the imbalance for each supplier, i.e. the difference between the amount of electricity purchased and the total demand of the supplier’s customers, and sending out the invoices for the settlement period under consideration.





In the 1990s, many in the United States pushed for utility market reform, suggesting that moving to a deregulated market would increase market transparency, drastically reduce the likelihood of power over- or under-generation, drive down prices due to increased competition, and empower consumers to make informed decisions about their power needs.

Instead of the vertically integrated and heavily regulated utility market that has long been the standard in the U.S., deregulated markets require participating utilities to divest their generation and transmission operations, allowing them to focus only on distribution and billing. Generators sell power into the market, where retail companies purchase it and sell it on to end users.

The result of this deregulation is a large number of functionally separate market players, all of whom represent portions of the energy industry that would generally be handled by a single utility in a standard regulated market. A portion of the players needed to maintain a deregulated market are described below, but more may be present in a given independent system operator setup, based on the regional and federal mandates applicable to that market.



As stated previously, an independent system operator (ISO) is a regulatory organization, created on the recommendation of FERC Order 888, intended to provide oversight of utilities, transactional transparency, and non-discriminatory transmission access for the customers served by the utilities the ISO governs. The term regional transmission organization (RTO), introduced by FERC in Order 2000, is often used interchangeably with ISO; the two generally have the same goals.

In a deregulated utility market, the ISO is the central repository for information about the current and future state of the electric grid: how much power has been and will be generated by grid operators, the current market price at which retailers may sell power to consumers, and the load that consumers are currently using or forecasted to use in the future. Essentially, the ISO is the heart of the industry, taking in information from many different sources, rerouting it to the affected participants, and ensuring that everything is working as a synchronous whole.

National Operator



Market participants (MPs) are those who actively engage in the transmission, energy, and/or operating reserve markets overseen by the ISO. Market participants submit bids to purchase or supply transmission, energy, and/or operating reserve on the grid, using data gathered by agents who either work for the market participants or are contracted on their behalf, in compliance with the rules and regulations set out by FERC and the specific ISO. The market participant accepts financial responsibility for the transactions it submits and legal responsibility for any data it, or an agent on its behalf, submits to the ISO. Their participation in the bidding and offering process makes MPs direct parties to the settlement and settlement invoicing process.

Market participants often include generators, retailers, transmission operators, and transmission customers.




An asset owner (AO) is one who is responsible for assets that directly or indirectly impact the grid’s operations, including physical assets like power lines and meters, but also virtual assets like software, services, and people.

Agent Options for Market Participants

Market participants represent the asset owner’s interests in the energy market, and settlements are generated per asset owner. The division of how a specific company wants to represent itself to or settle with the ISO will define the scope of the asset owner.


5.1.4 AGENTS

Four agency relationships are available to all participants in the MISO system:

MDMA

Meter Data Management Agents (MDMAs) collect, validate, and store customer usage data as part of, or on behalf of, a market participant. This data is delivered to the ISO in a pre-defined format and is used to determine the actual volume of generation or consumption used when the ISO does settlement calculations.

SCHEDULING AGENT

The market participant who schedules the applicable transactions sent to the ISO is known as a scheduling agent (SA) or scheduling coordinator (SC).

Even though the scheduling agent should ensure that the participant’s bids comply with the timeliness and integrity standards set forth by the ISO, the participant is the one with a legal, financial, and operational relationship with the ISO. If any discrepancies occur, the ISO contacts the participant directly, not any agents who acted on their behalf.

SETTLEMENT AGENT

Market Settlement Agent

A market settlement agent (MSA) deals with responding to the settlement invoices that the ISO sends to a market participant. Again, they may be within the market participant’s organization or a contractor who acts on the participant’s behalf.

As with the scheduling agent, the market participant assumes legal, financial, and operational responsibility for any decisions the market settlement agent delivers to the ISO on the participant’s behalf.

Transmission Settlement Agent

In the transmission market, a transmission settlement agent (TSA) acts in the same manner as an MSA does in the energy market.

BILLING AGENT

A market participant may designate a billing agent as the one who accepts invoices and makes payments on behalf of the participant; this agent may be internal or external to the participant’s organization.

Similar to the previous agent roles, the market participant assumes contractual obligations to the ISO for any decisions made by the billing agent.



A Local Balancing Authority (LBA) provides timely hourly or half-hourly NAI data[1] to the ISO to support market settlements.

The most visible purpose of an ISO/RTO to consumers is its use as a central clearinghouse for grid transactions between its utilities, including transmission rights and day-ahead or spot market purchases of transmission and/or generation. These transactions are known as settlements, since the ISO is the medium through which goods, services, and payments are reconciled – similar to a financial institution. Deregulated markets can have both market settlements and transmission settlements, which are used to keep the grid’s supply and demand in sync from a consumer and operational standpoint.

[1] Net Actual Interchange (NAI) – the algebraic sum of all metered interchange over all interconnections between two physically adjacent balancing authority areas.

