Smart Grid Technologies

Authored by: Stuart Borlase

Smart Grids

Print publication date:  October  2012
Online publication date:  October  2012

Print ISBN: 9781439829059
eBook ISBN: 9781439829103

DOI: 10.1201/b13003-4

3.1  Technology Drivers

Stuart Borlase, Steven Bossart, Keith Dodrill, Joe Miller, Steve Pullins, Bruce A. Renz, and Bartosz Wojszczyk

3.1.1  Transformation of the Grid

Current transmission and distribution grids were not designed with the smart grid in mind. They were designed for the cost-effective, rapid electrification of developing economies. The requirements of the smart grid are quite different, and, therefore, reengineering of the current grid is imminent. This engineering work will take many forms, including enhancements and extensions to the existing grid, inspection and maintenance activities, preparation for distributed generation and storage, and the development and deployment of an extensive two-way communications system.

The “heavy metal” electric delivery system of transmission lines, distribution feeders, switches, breakers, and transformers will remain the core of the utility transmission and distribution infrastructure. Many refer to this as the “dumb” part of the grid. While some changes in the inherent design of these components can be made, for example, the use of amorphous metal in transformers to reduce losses, the “smarts” in the T&D system are typically related to advances in the monitoring, control, and protection of the “dumb” equipment. Substations therefore play an essential role as the operational interface to the T&D equipment in the field. Advances in technology over the years and the introduction of microprocessor-based monitoring, control, protection, and data acquisition devices have made a marked improvement in the operation and maintenance of the transmission and distribution network. However, changes in the way the T&D system is utilized and operated in a smarter grid will create significant challenges.

Since the invention of electric power technology and the establishment of centralized generation facilities, the greatest changes in the utility industry have been driven not by innovation but by system failures and regulatory/government reactions to those failures. Smart grid technologies have the potential to be the first true "game-changing" technology since alternating current supplanted direct current in the late 1800s. As an example, the design of today's power system took advantage of economies of scale through the establishment of large centralized generation stations. Supply and demand are continuously balanced by dispatching the appropriate level of generation to satisfy load; this operating model schedules the dispatch of generation to meet the day-ahead forecast load and remains the predominant method of balancing supply and demand today. One vision for optimizing the end-to-end system would entail not just supply dispatch but also a complementary dispatch of demand resources. Currently, generation is matched to supply consumer load plus a reserve margin, and expensive generating plants are often used to satisfy peak demand or supply reserve energy in the case of contingencies. Today, much is being done in the area of consumer demand management to leverage demand resources located behind the meter. Customer-owned generation, storage, and controllable loads that can be curtailed or turned on or off upon request can change the load as seen by the utility. Enabled by the smart grid, consumer demand management and this new dispatch model could significantly improve the optimization of the electric system, create new markets, and establish new consumer participation opportunities. The capability to dispatch consumer demand and alternative energy resources in the grid could also place downward pressure on the wholesale price of electricity.
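
To make the supply-plus-demand dispatch idea concrete, the sketch below treats curtailable demand as just another resource in a simple merit-order dispatch. It is purely illustrative; all resource names, capacities, and costs are invented, and real market dispatch is far more involved.

```python
# Illustrative merit-order dispatch in which curtailable demand (DR)
# competes with generators. All names and numbers are hypothetical.

def dispatch(load_mw, resources):
    """Dispatch the cheapest resources first until load is met.

    resources: list of (name, capacity_mw, cost_per_mwh); demand
    response appears as just another dispatchable resource.
    """
    schedule, remaining = [], load_mw
    for name, cap, cost in sorted(resources, key=lambda r: r[2]):
        take = min(cap, remaining)
        if take > 0:
            schedule.append((name, take, cost))
            remaining -= take
    marginal_price = schedule[-1][2] if schedule else 0.0
    return schedule, marginal_price

resources = [
    ("baseload_coal",      500, 30.0),
    ("combined_cycle_gas", 300, 55.0),
    ("demand_response",    100, 70.0),   # curtailable consumer load
    ("gas_peaker",         200, 120.0),  # expensive peaking plant
]

# With DR bid in, 850 MW of peak load clears at $70/MWh; without it,
# the $120/MWh peaker would have to run and set the price.
schedule, price = dispatch(850, resources)
print(schedule, price)
```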

From the transmission perspective, increased power exchanges and trading will add more stress to the grid. The smart grid challenge will be to reduce grid congestion, ensure grid stability and security, and optimize the use of transmission assets and low-cost generation sources. In order to keep generation, transmission, and consumption in balance, grids must become more flexible and more effectively controlled. The transmission system will require more advanced technologies such as flexible AC transmission systems (FACTS) and high-voltage DC (HVDC) to help with power flow control and ensure stability.

Substations in a smart grid will move beyond basic protection and traditional automation schemes to encompass distributed functional and communications architectures, more advanced local analytics, and the management of vast amounts of data. Intelligence will migrate from the traditional centralized functions and decisions at the energy management system and distribution management system (DMS) level to the substations and feeders in order to enhance the responsiveness of the T&D system. System operation applications will become more advanced in their ability to coordinate the distributed intelligence in substations and feeders in the field to ensure system-wide reliability, efficiency, and security.

As supply constraints continue, there will be more focus on the distribution network for cost reduction and capacity relief. Monitoring and control requirements for the distribution system will increase, and the integrated smart grid architecture will benefit from data exchange between smarter distribution field devices and enterprise applications. The emergence of widespread distributed generation and consumer demand response (DR) programs will also considerably impact utility operations. The smart grid will see an increase in utility- and consumer-owned resources on the distribution system. Utility customers will be able to supply electricity to the grid or consume electricity from the grid based on predetermined rules and schedules. This means that consumers will no longer be pure consumers but sellers or buyers of energy, switching back and forth from time to time. The grid will therefore have to accommodate two-way power flows and monitor and control the generation and consumption points on the distribution network. Distributed generation will come from disparate and mostly intermittent sources and be subject to great uncertainty. Real-time pricing and consumer demand management will require advanced analytics and forecasting of the electricity consumption of individual consumers. Figure 3.1 illustrates some expected transformations of the grid.

Smart grid technologies will generate a tremendous amount of real-time and operational data with the increase in sensors and the need for more information on the operation of the system.

Decentralized information technology is changing the rules in the electric utility industry. The impact of this change on the utility industry is analogous, in part, to what digitized information has done or is doing to the publishing and entertainment industries. While the exact circumstances of the utility industry are unique and nontrivial differences exist, the fundamentals remain: a disruptive technology creates operational opportunities and challenges, resulting in winners and losers among industry incumbents. To this point, much of the "business case" for smart grid technology is driven by reduced usage of traditional energy sources, which necessarily means that not every industry stakeholder will be aligned on this issue. Companies that are driven primarily by generation evaluate smart grid opportunities differently than companies that are driven primarily by power delivery, for reasons that are or should become obvious.

Figure 3.1   Transformation of the grid. (From Fan, J. and Borlase, S., Advanced distribution management systems for smart grids, IEEE Power & Energy Magazine, © March/April 2009 IEEE.)

3.1.2  Characteristics of a Smart Grid

The "smart grid" and similarly denominated programs have been proposed as an effort to integrate three critical developments in the future grid: expansion of the grid infrastructure to accommodate renewable resources and microgrids, penetration of information technology to implement full digital control in generation, transmission, and distribution systems, and development of new applications. Further, smart grid programs respond to political, public, and scientific community requests to deploy a high percentage of low-CO2-emitting renewable energy resources. Although there are a number of incarnations of these programs, perhaps the most widely used definition is from the U.S. Department of Energy (DOE). In June 2008, the DOE's National Energy Technology Laboratory (NETL) convened a diverse stakeholder meeting, and the stakeholders converged on seven principal characteristics that define the functions of the smart grid.

DOE's NETL has defined the main tenets of the smart grid as the following seven goals [2]:

  1. Enabling active participation by consumers in DR
  2. Accommodating all generation and storage options
  3. Enabling new products, services, and markets
  4. Providing power quality (PQ) for twenty-first century needs
  5. Optimizing assets and operating efficiently
  6. Self-healing from power disturbance events
  7. Operating resiliently against physical and cyber attack

First, it will enable active participation by consumers. The active participation of consumers in electricity markets will bring tangible benefits to both the grid and the environment. The smart grid will give consumers information, control, and options that allow them to engage in new "electricity markets." Grid operators will treat willing consumers as resources in the day-to-day operation of the grid. Well-informed consumers will have the ability to modify consumption based on balancing their demands and resources with the electric system's capability to meet those demands. Programs such as DR will offer consumers more options to manage their energy usage and costs. The ability to reduce or shift peak demand allows utilities to minimize capital expenditures and operating expenses while also providing substantial environmental benefits by reducing line losses and minimizing the operation of inefficient peaking power plants. In addition, emerging products like plug-in hybrid electric vehicles (PHEVs) and all-electric vehicles (EVs) will result in substantially improved load factors while also providing significant environmental benefits.

Second, it will accommodate all generation and storage options. It will seamlessly integrate all types and sizes of electrical generation and storage systems using simplified interconnection processes and universal interoperability standards to support a “plug-and-play” level of convenience. Large central power plants including environmentally friendly sources, such as wind and solar farms and advanced nuclear plants, will continue to play a major role even as large numbers of smaller distributed energy resources (DERs), including plug-in EVs, are deployed. Various capacities from small to large will be interconnected at essentially all voltage levels and will include DERs such as photovoltaic, wind, advanced batteries, plug-in hybrid vehicles, and fuel cells. It will be easier and more profitable for commercial users to install their own generation such as highly efficient combined heat and power installations and electric storage facilities.

Third, it will enable new products, services, and markets. The smart grid will link buyers and sellers together—from the consumer to the regional transmission organization (RTO)—and all those in between. It will facilitate the creation of new electricity markets ranging from the home energy management system (EMS) at the consumers’ premises to the technologies that allow consumers and third parties to bid their energy resources into the electricity market. Consumer response to price increases felt through real-time pricing will mitigate demand and energy usage, driving lower-cost solutions and spurring new technology development. New, clean, energy-related products will also be offered as market options. The smart grid will support consistent market operation across regions. It will enable more market participation through increased transmission paths, aggregated DR initiatives, and the placement of energy resources including storage within a more reliable distribution system located closer to the consumer.

Fourth, the smart grid will provide PQ for the twenty-first century society and our increasing digital loads. It will monitor, diagnose, and respond to PQ deficiencies, leading to a dramatic reduction in the business losses currently experienced by consumers due to insufficient PQ. New PQ standards will balance load sensitivity with delivered PQ.

The smart grid will supply varying grades of PQ at different pricing levels. Additionally, PQ events that originate in the transmission and distribution elements of the electrical power system will be minimized, and irregularities caused by certain consumer loads will be buffered to prevent impacting the electrical system and other consumers.

Fifth, it will optimize asset utilization and operate efficiently. Operationally, the smart grid will improve load factors, lower system losses, and dramatically improve outage management performance. The availability of additional grid intelligence will give planners and engineers the knowledge to build what is needed when it is needed, extend the life of assets, repair equipment before it fails unexpectedly, and more effectively manage the work force that maintains the grid. Operational, maintenance, and capital costs will be reduced, thereby keeping downward pressure on electricity prices.

Sixth, it will anticipate and respond to system disturbances (self-heal). It will heal itself by performing continuous self-assessments to detect and analyze issues, take corrective action to mitigate them, and, if needed, rapidly restore grid components or network sections. It will also handle problems too large or too fast-moving for human intervention. Acting as the grid’s “immune system,” self-healing will help maintain grid reliability, security, affordability, PQ, and efficiency. The self-healing grid will minimize disruption of service by employing modern technologies that can acquire data, execute decision-support algorithms, avert or limit interruptions, dynamically control the flow of power, and restore service quickly. Probabilistic risk assessments based on real-time measurements will identify the equipment, power plants, and lines most likely to fail. Real-time contingency analyses will determine overall grid health, trigger early warnings of trends that could result in grid failure, and identify the need for immediate investigation and action. Communications with local and remote devices will help analyze faults, low voltage, poor PQ, overloads, and other undesirable system conditions. Then appropriate control actions will be taken, automatically or manually as the need determines, based on these analyses.
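
The self-assessment loop described above can be pictured schematically as assess, detect, respond. The sketch below is only a schematic rendering of that idea; the measurements, limits, and control actions are hypothetical placeholders, not an actual utility algorithm.

```python
# Schematic self-healing loop: assess telemetry, detect issues, choose
# a corrective action. All limits, element names, and actions are
# hypothetical placeholders.

V_LOW, V_HIGH = 0.95, 1.05   # acceptable per-unit voltage band
THERMAL_LIMIT = 1.0          # per-unit line loading limit

def assess(telemetry):
    """Return a list of detected issues from real-time measurements."""
    issues = []
    for line, loading in telemetry["line_loading"].items():
        if loading > THERMAL_LIMIT:
            issues.append(("overload", line))
    for bus, volts in telemetry["bus_voltage"].items():
        if not V_LOW <= volts <= V_HIGH:
            issues.append(("voltage", bus))
    return issues

def respond(issue):
    """Map each detected issue to a (placeholder) corrective action."""
    kind, element = issue
    if kind == "overload":
        return f"reroute power around {element} via automated switching"
    return f"adjust transformer taps/capacitors near {element}"

telemetry = {
    "line_loading": {"line_12": 1.08, "line_13": 0.70},
    "bus_voltage": {"bus_4": 0.93, "bus_5": 1.00},
}
for issue in assess(telemetry):
    print(respond(issue))
```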

Seventh and finally, the smart grid will operate resiliently against attack and natural disaster. The smart grid will incorporate a system-wide solution that reduces physical and cyber vulnerabilities and enables a rapid recovery from disruptions. Its resilience will deter would-be attackers, even those who are determined and well equipped. Its decentralized operating model and self-healing features will also make it less vulnerable to natural disasters than today’s grid. Security protocols will contain elements of deterrence, detection, response, and mitigation to minimize impact on the grid and the economy. A less susceptible and more resilient grid will make it a more difficult target for malicious acts.

Table 3.1   DOE Seven Characteristics of a Smart Grid

| Today's Grid | Principal Characteristic | Smart Grid |
| --- | --- | --- |
| Consumers do not interact with the grid and are not widely informed and educated on their role in reducing energy demand and costs | Enables consumer participation | Full-price information available; consumers choose from many plans, prices, and options to buy and sell |
| Dominated by central generation; very limited distributed generation and storage | Accommodates all generation and storage options | Many "plug-and-play" DERs complement central generation |
| Limited wholesale markets, not well integrated | Enables new markets | Mature, well-integrated wholesale markets; growth of new electricity markets |
| Focus on outages rather than PQ | Meets PQ needs | PQ a priority, with a variety of quality and price options according to needs |
| Limited grid intelligence is integrated with asset management processes | Optimizes assets and operates efficiently | Deep integration of grid intelligence with asset management applications |
| Focus on protection of assets following a fault | Self-heals | Prevents disruptions, minimizes impact, and restores rapidly |
| Vulnerable to terrorists and natural disasters | Resists attack | Deters, detects, mitigates, and restores rapidly and efficiently |

These seven characteristics represent the unique yet interdependent features that define the smart grid. Table 3.1 summarizes these seven points and contrasts today’s grid with the vision for the smart grid.

These seven points have come to define the smart grid for many, although there are variants of the list that emphasize such points as encouraging renewable resources deployed in the transmission, subtransmission, and distribution systems; the use of sensors and sensory signals for direct automatic control; accelerating automation, particularly in the distribution system; and intelligently (optimally) managing multiobjective issues in power system operation and design. The seven DOE elements cited may be viewed more generically as making the grid:

  • Intelligent: capable of sensing system overloads and rerouting power to prevent or minimize a potential outage; of working autonomously when conditions require resolution faster than humans can respond and cooperatively in aligning the goals of utilities, consumers, and regulators
  • Efficient: capable of meeting increased consumer demand without adding infrastructure
  • Quality focused: capable of delivering the PQ necessary (free of sags, spikes, disturbances, and interruptions) to power our increasingly digital economy and the data centers, computers, and electronics necessary to make it run
  • Accommodating: accepting energy from virtually all fuel sources including solar and wind as easily and transparently as coal and natural gas; capable of integrating any and all better ideas and technologies (e.g., energy storage technologies) as they are market-proven and ready to come online
  • Resilient: increasingly resistant to attacks and natural disasters as it becomes more decentralized and reinforced with smart grid security protocols
  • Motivating: enabling real-time communication between the consumer and utility so consumers can tailor their energy consumption based on individual preferences, like price and/or environmental concerns
  • Green: slowing the advance of global climate change and offering a genuine path toward significant environmental improvement
  • Opportunistic: creating new opportunities and markets by means of its ability to capitalize on plug-and-play innovation wherever and whenever appropriate

3.1.3  Smart Grid Technology Framework

Beyond a specific, stakeholder-driven definition, smart grid should refer to the entire power grid from generation, through transmission and distribution infrastructure all the way down to a wide array of electricity consumers. The concept of a smart grid embraces all the monitoring, control, and data acquisition functions across the T&D and low-voltage networks with the need for more advanced integration and meaningful information exchange between the utility and the electricity network and between the utility and customers. The smart grid will therefore be an enabler of system-wide solutions in the areas of network operations, asset management, distributed generation management, advanced metering, enterprise data access, public and private transport, etc. (Figure 3.2).

The smart grid is a framework for solutions. It is both revolutionary and evolutionary in nature because it can significantly change and improve the way we operate the electrical system today, while providing for ongoing enhancements in the future. It represents technology solutions that optimize the value chain, allowing us to drive more performance out of the infrastructure we have and to better plan for the infrastructure we will be adding. It requires collaboration among a growing number of interested and invested parties in order to achieve significant, system-level change. The smart grid will embrace more renewable energy, public and private transport, buildings, industrial complexes, and houses; increase grid efficiency; and transfer real-time energy information directly to consumers, empowering them to make smarter energy choices.

Figure 3.2   Smart grid technologies span the entire electric grid. (© Copyright 2012 GE Energy. All rights reserved.)

From a high-level system perspective, the smart grid can be considered to contain the following major components:

  • Smart sensing and metering technologies that provide faster and more accurate response for consumer options such as remote monitoring, time-of-use (TOU) pricing, and demand-side management (DSM)
  • An integrated, standards-based, two-way communications infrastructure that provides an open architecture for real-time information and control to every endpoint on the grid
  • Advanced control methods that monitor critical components, enabling rapid diagnosis and precise responses appropriate to any event in a “self-healing” manner
  • A software system architecture with improved interfaces, decision support, analytics, and advanced visualization that enhances human decision making, effectively transforming grid operators and managers into knowledge workers

Interoperability between the different smart grid components is paramount. A framework can be used that defines the components at three levels: the electricity infrastructure level, the smart infrastructure level, and the smart grid solution level (Figure 3.3). At each of these levels, different applications exist that need to interoperate among themselves (horizontally) and with the levels above or below (vertically).

Figure 3.3   Smart grid technology framework.

The smart grid will provide a scalable, integrated architecture that delivers not only increased reliability and capital and O&M savings to the utility but also cost savings and value-added services to customers (AMR/advanced metering infrastructure [AMI]/ADI). A well-designed smart grid implementation can benefit from more than just AMI. Numerous technologies now touted under the smart grid banner are currently implemented to various degrees in utilities. The smart grid initiative uses these building blocks to drive toward a more integrated and long-term infrastructure that is intended to realize incremental benefits in operational efficiency and data integration while leveraging open standards. For example, building on the benefits of an AMI with extensive communication coverage across the distribution system helps to improve outage management and enables integrated volt/VAr control (IVVC). In addition, a high-bandwidth communications network provides opportunities for enhanced customer service solutions, such as Internet access through a home area network (HAN), and a more attractive return on investment. New smart grid–driven technologies, such as advanced analytics and visualization, will continue to offer incremental benefits and strengthen a renewed interest in the consumer interface, AMI, DSM, and other customer-centric technologies, such as PHEVs.
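
As a toy illustration of the IVVC idea mentioned above, the rule-of-thumb sketch below uses feeder measurements of the kind an AMI backhaul makes available to decide capacitor switching. Real IVVC is a coordinated optimization across many devices; the thresholds and device model here are invented for illustration.

```python
# Toy integrated volt/VAr control (IVVC) decision rule for a single
# feeder capacitor bank. Limits and thresholds are hypothetical.

V_MIN, V_MAX = 0.95, 1.05   # per-unit service voltage band

def ivvc_step(end_of_feeder_v_pu, reactive_flow_mvar, cap_on):
    """Return a switching decision for one capacitor bank."""
    if end_of_feeder_v_pu < V_MIN and not cap_on:
        return "switch capacitor ON"    # support sagging voltage
    if end_of_feeder_v_pu > V_MAX and cap_on:
        return "switch capacitor OFF"   # relieve overvoltage
    if reactive_flow_mvar > 2.0 and not cap_on:
        return "switch capacitor ON"    # cut VAr drawn through the feeder
    return "no action"

print(ivvc_step(0.94, 1.2, cap_on=False))  # -> switch capacitor ON
```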

Many industry reports define a wide range of smart grid technologies. These technologies can be broadly captured under the following areas:

  • Low carbon: for example, large-scale renewable generation, DERs, EVs, carbon capture and sequestration (CCS)
  • Grid performance: for example, advanced distribution and substation automation (self-healing), wide area adaptive protection schemes (special protection schemes), wide area monitoring and control systems (PMU-based situational awareness), asset performance optimization and conditioning (CBM), dynamic rating, advanced power electronics (e.g., FACTS, intelligent inverters, etc.), high-temperature superconducting (HTS), and many others
  • Grid-enhanced applications: for example, DMSs; EMSs; outage management systems (OMS); DR; advanced applications to enable active voltage and reactive power management (IVVC, CVVC); advanced analytics to support operational, nonoperational, and BI decision making; DER management; microgrid and virtual power plant (VPP); work force management; geospatial asset management (GIS); KPI dashboards and advanced visualization; and many others
  • Customer: for example, AMI, home/building automation (HAN), EMSs and display portals, EV charging stations, smart appliances, and many others
  • Cybersecurity and data privacy

Within the smart grid technology landscape, a broad range of hardware, software, application, and communications technologies are at various levels of maturity. In some cases, the technology is well developed (proven performance over time); however, in many areas, the technologies are still at an early stage of maturity and have yet to be deployed at scale.

Many stakeholders determine smart grid technology selection and rollout based on the following factors (a toy scoring sketch follows the list):

  1. Business risk
  2. Technical risk
  3. Technical functionality, capability availability, and maturity
  1. Business risk: whether the utility is an innovative leader or a fast follower in the rollout of smart technology:
    • Innovative leaders (invest to lead) set a course that enables the utility to achieve an industry technology leadership position across all business units, based on investment in new and, very often, time-unproven technologies.
    • Fast followers (deploy when justified) direct all business units to deploy smart technology only when it can be economically justified at a defined level of maturity.
  2. Technical risk: The technology deployed in any particular location and grid system will vary based on a number of factors such as technology maturity, the complexity of integrating with existing/legacy systems and technologies, existing network performance, financial analysis, customer preference and acceptance, and state and/or federal regulatory influence. This suggests a smart grid design built around the following criteria:
    • Core architecture consisting of mature technologies with time-proven performance and higher/most certain delivered overall benefits
    • Interoperable and scalable architecture to enable integration with existing/legacy systems, maximize future flexibility, and minimize risk of technology obsolescence
    • Secure and open standard architecture
    • New technologies added incrementally as they mature and/or are cost justified
  3. Technical functionality, capability availability, and maturity: Smart grid technology selection needs to be defined through the wide range of functionalities and capabilities. Effective selection of available and mature functionalities and capabilities should support the following objectives:
    • Business requirements: deliver well-defined and quantifiable smart grid benefits to all stakeholders
    • Architecture and integration: integrate them as one cohesive end-to-end scalable and interoperable solution
    • Performance: proof of performance in support of full-value realization
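
The three factors above lend themselves to a simple weighted scoring exercise when comparing candidate technologies. The sketch below is purely illustrative; the weights, candidate technologies, and scores are invented, not drawn from the text.

```python
# Illustrative weighted scoring of candidate smart grid technologies
# against the three selection factors above. Weights and scores are
# hypothetical; higher scores mean lower risk or higher maturity.

weights = {"business_risk": 0.3, "technical_risk": 0.3, "maturity": 0.4}

candidates = {
    "AMI rollout":     {"business_risk": 4, "technical_risk": 4, "maturity": 5},
    "wide-area PMU":   {"business_risk": 3, "technical_risk": 3, "maturity": 3},
    "microgrid pilot": {"business_risk": 2, "technical_risk": 2, "maturity": 2},
}

for name, scores in candidates.items():
    total = sum(weights[factor] * scores[factor] for factor in weights)
    print(f"{name}: {total:.2f}")   # e.g., AMI rollout scores highest
```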

Table 3.2 provides a summary of key functionalities and capabilities that can be considered for a wide range of smart grid technologies. The presented functionalities and capabilities can be grouped into four categories: infrastructure, metering, grid, and home/building.

A high-level review of the smart grid technology functionalities and capabilities landscape suggests representative maturity levels and development trends as shown in Table 3.3. This assessment is based on the scale/level of deployed technologies in existing smart grid projects across the globe.

The development of new capabilities and enabling technologies will be critical to fulfilling the grand promise of the smart grid. Smart grid investments should be directed toward holistic grid solutions that will differentiate utility smart grid initiatives. Smart grid, however, is more than simply new technology. Smart grid will have a significant impact on a utility’s processes. Perhaps more importantly, it is also about the new information made available by these technologies and the new customer–utility relationships that will emerge. Enabling technologies such as smart devices, communications and information infrastructures, and operational software are instrumental in the development and delivery of smart grid solutions. Each utility customer will begin the smart grid journey based upon past actions and investments, present needs, and future expectations.

Table 3.2   Smart Grid Technology Functionalities and Capabilities

Infrastructure

  1. Communication and security: Underlying communications to support real-time operational and nonoperational smart technology performance.

  2. Embedded EVs, large-scale renewable generation, DERs: Integration of a high penetration of EVs, large-scale renewable generation, and DERs can lead to situations in which the distribution network evolves from a "passive" system (local/limited automation, monitoring, and control) to one that actively (global/integrated, self-monitoring, semiautomated) responds to the various dynamics of the electric grid. This poses a challenge for the design, operation, and management of the power grid, as the network no longer behaves as it once did. Consequently, the planning and operation of new systems must be approached somewhat differently, with a greater amount of attention paid to global system challenges. In addition, integration of large-scale renewable energy resources presents a challenge for the dispatchability and controllability of these resources. Energy storage systems can offer a substantial contribution to alleviating such potential problems by decoupling the production and delivery of energy.

Metering

  1. Remote consumer price signals: Function that provides TOU pricing information.

  2. Granular energy consumption data/information: Function with the ability to collect, store, and report customer energy consumption data/information for any required time intervals or near real time.

  3. Identify outage location and extent remotely: Metering function capable of sending a signal when the meter goes out and identifying itself after power restoration.

  4. Remote connection, disconnection, reconnection: Function capable of remotely switching a smart asset "on" and "off."

  5. Remote configuration: Function capable of being remotely configured for functionality changes and firmware and software updates.

  6. Optimize retailer cash flow: Ability for a retail energy service provider to manage its revenues through more effective cash collection and debt management.

Grid

  1. Embedded sensing, automation, protection, and control:
     Wide area system monitoring and advanced system analytics: a real-time, PMU-based grid monitoring system combined with advanced analytics consisting of intelligent fault and outage detection. PMU-based state estimation enables real-time dynamic and static system stability analysis, risk and margin evaluation, power system optimization, special protection scheme arming, etc., providing planners, system operators, and engineers with the capability to effectively predict severe grid disturbances that could lead to major power system outages and blackouts.
     Wide area adaptive protection, control, and automation: a protection philosophy that permits and seeks to make adjustments in various protection functions automatically in order to make them more attuned to prevailing power system conditions. The purposes of adaptive protection include mitigating wide area disturbances, improving power system transmission capacity, improving power system reliability, and changing the operational criterion from "the power system should withstand the most severe credible contingency" (the n-1 or n-2 criterion) to "the power system should withstand the most severe credible contingency followed by protective remedial actions from the wide area protection/emergency control system."

  2. Advanced system operation: The modern grid relies on fully or semiautomated grid operation with a certain level of human intervention provided from the control centers. Advanced system operation tools comprise dynamic security assessment and wide area monitoring system (WAMS) and control capabilities.

  3. Advanced system management: Advanced asset management enables two key smart grid capabilities: optimum equipment performance leading to effective asset utilization, accomplished by implementing real-time, dynamic rating applications at the grid level, which allows for planned transfer capabilities and operation of grid assets above the manufacturer's "nameplate" ratings; and maintenance efficiency of network components, attained by implementing condition- and performance-based maintenance.

  4. Advanced system planning: Smart grid system planning considering real-time system impacts from large-scale integration of renewable energy resources, high penetration of distributed generation, and chargeable and dischargeable EVs.

  5. Intentional islanding (microgrids) and aggregated load and generation management (VPP): Intentional islanding and/or grid-parallel operation of an electric subsystem. Allows for optimum, multiple load/generation balancing to enable reliable and cost-effective operation.

Home/building

  1. Aggregated DR: Aggregation of demand to reduce peak load and help balance the system more efficiently.

  2. EMS: Ability to control in-home appliances, distributed generation, and EVs to provide optimum energy consumption.

Table 3.3   Smart Grid Technology Landscape

| No. | Functionalities and Capabilities | Maturity Level | Development Trend |
| --- | --- | --- | --- |
| 1 | Communication and security | Developing | Fast |
| 2 | Embedded EVs, large-scale renewable generation, DERs | Developing | Fast |
| 3 | Metering | Mature | Fast |
| 4 | Embedded sensing, automation, protection, and control | Developing | Fast |
| 5 | Advanced system operation | Developing | Moderate |
| 6 | Advanced system management | Mature | Fast |
| 7 | Advanced system planning | Developing | Moderate |
| 8 | Intentional islanding (microgrids) and aggregated load and generation management (VPP) | Developing | Moderate |
| 9 | Home/building | Developing | Fast |

3.2  Smart Energy Resources

Thomas Bradley, Johan Enslin, Régis Hourdouillie, Casey Quinn, Julio Romero Aguero, Aleksandar Vukojevic, Bartosz Wojszczyk, Alex Zheng, and Daniel Zimmerle

3.2.1  Renewable Generation

3.2.1.1  Regulatory and Market Forces

Many countries across the world, including the United States, have developed regulations to enable the integration of more renewable energy into the overall generation portfolio mix. These include renewable energy portfolio standards (RPS) together with interconnection initiatives such as renewable tax credits and feed-in tariffs. Some of these requirements for renewable energy are so aggressive that utilities are concerned about the grid performance and system operational impacts of the intermittent nature of renewable generation (e.g., wind and solar).

In 2002, California established its RPS program, with the goal of increasing the percentage of renewable energy in the state's electricity mix to 20% by 2017. On November 17, 2008, Governor Arnold Schwarzenegger signed Executive Order S-14-08, requiring that California utilities reach the 33% renewable goal by 2020. Achieving a 33% RPS by 2020 would reduce generation from nonrenewable resources by 11% in 2020. This is currently the most aggressive RPS proposed by any U.S. state. Other state governments have similar, although lower-penetration, yet still aggressive RPS allocations [1].

As electric utilities prepare to meet their states' RPS, for example, 33% by 2020 in California, and to comply with the Global Warming Solutions Act of 2006 (AB 32), it becomes evident that U.S. utilities must adapt their engineering, planning, and operational practices in order to maintain high levels of service, reliability, and security. The state initiatives require integration of significantly higher levels of renewable energy, such as wind and solar, which exhibit intermittent generation patterns. Due to the geographic location of renewable resources, the majority of the expected new renewable generation additions will be connected via one or two utilities' transmission systems. This presents unique challenges to these utilities, as the level of planned intermittent renewable generation in relation to their installed system capacity reaches unprecedented and disproportionate levels compared to other utilities in the state.

Entities in the United States such as the CEC, NERC, CAISO, NYSERDA, SPP, and CPUC have initiated and funded several studies on the integration of large amounts of renewable energy, and most of these studies concluded that at 10%–15% intermittent renewable energy penetration, traditional planning and operational practices will be sufficient. However, once a utility exceeds 20% penetration of renewable resources, a change in engineering, planning, and operational practices, along with the development of a smarter grid, may be required. These studies support continuing transmission and renewable integration planning studies and recommend that smart grid demonstration projects be conducted by the different power utilities.

The United States, and especially California, has a different set of electric system characteristics than Europe, but there is no experience or research in Europe that would lead us to think it is technically impossible to achieve 20%–30% intermittent penetration levels at most U.S. utilities. Long transmission distances between generation resources and load centers characterize the network in the United States, especially in the WECC region. There are now areas in Europe with high penetrations of intermittent renewables, especially wind generation, at levels of around 30%–40%.

Large-scale wind and solar generation will affect the physical operation of the grid. The areas of focus include frequency regulation, load profile following, and broader power balancing. The variability of wind and solar regimes across resource areas, the lack of correlation between wind and solar generation volatility and load volatility, and the size and location of the wind plants relative to the system in most U.S. states suggest that impacts on regulation and load-following requirements will be large above 20% penetration levels [1].

The European experience has taught us that integrating these levels of wind resources has consequences for network stability that have to be addressed as wind resources reach substantial levels of penetration. The major issue categories are as follows:

  • New and in-depth focus on system planning. Steady-state and dynamic considerations are crucial.
  • Accurate resource and load forecasting becomes highly valuable and important.
  • Voltage support. Managing reactive power compensation is critical to grid stability. This also includes dynamic reactive power requirements of intermittent resources.
  • Evolving operating and power balancing requirements. Sensitivity to existing generator ramp rates to balance large-scale wind and solar generation, providing regulation and minimizing start–stop operations for load-following generators.
  • Increased requirements on ancillary services. Faster ramp rates and a larger percentage of regulation services will be required, which can be supplied by responsive storage facilities.
  • Equipment selection. Variable-speed generation (VSG) turbines and advanced solar inverters have the added advantage of independent regulation of active and reactive power. This technology is essential for large-scale renewable generation.
  • Strong interconnections. Several large energy pump-storage plants are available in Switzerland that are used for balancing power. Larger regional control areas make this possible.

Technical renewable integration issues should not delay efforts to reach the renewable integration goals. However, focus has increased on planning and research to understand the needs of the system, for example, research on energy storage options.

Studies and actual operating experience indicate that it is easier to integrate wind and solar energy into a power system where other generators are available to provide balancing power and precise load-following capabilities. The greater the number of wind turbines and solar farms operating in a given area, the less variable their aggregate production. High penetration of intermittent resources (greater than 20% of generation meeting load) affects the network in the following ways [1]:

  • Thermal and contingency analysis
  • Short circuit
  • Transient and voltage stability
  • Electromagnetic transients
  • Protection coordination
  • Power leveling and energy balancing
  • Power quality

The largest barrier to renewable integration in the United States is the lack of sufficient transmission facilities, and the associated regional cost allocation, to access renewable resources and connect them to load centers. Other key barriers include environmental pressure and technical interconnection issues such as forecasting, dispatchability, low capacity factors, and the impacts of intermittency on regulation services.

In the United States, the major renewable resources are remote from the load centers in California and the Midwest states. This creates the need for major new transmission facilities across the country. Wind and solar renewable energy resources normally have capacity factors between 20% and 35%, compared to higher than 90% for traditional nuclear and coal generation. These low capacity factors place an even higher burden on already scarce transmission capacity. Identification, permitting, cost allocation, approval, coordination with other stakeholders, engineering, and construction of these new transmission facilities are major barriers and are costly and time consuming.

Although energy production using renewable resources is pollution free, wind and solar plants need to be balanced with fast-ramping regulation services such as peaker or hydro generation plants. Existing regulation generation is too slow and pollutes much more when providing ramping regulation service. The increased requirements for regulation services counteract the emission savings from these renewable resources. Currently, the frequency regulation requirement at the CAISO is around 1% of peak load dispatch, or about 350 MW. This is mainly supplied by peaker generating plants and results in higher emission levels. It has been calculated that around 2% regulation would be required to integrate 20% wind and solar resources by 2010, and 4% to integrate 33% renewables by 2020 [1].
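
The regulation figures quoted above follow directly from the stated percentages of peak load. A quick back-of-envelope check, assuming the roughly 35 GW CAISO peak implied by "1% of peak, or about 350 MW":

```python
# Back-of-envelope check of the regulation requirements cited above.
# The peak load is inferred from "1% of peak ~ 350 MW"; the pairings
# of renewable share to regulation share come from the text.

peak_load_mw = 350 / 0.01   # ~35,000 MW implied CAISO peak

for renewable_share, regulation_share in [(0.00, 0.01), (0.20, 0.02), (0.33, 0.04)]:
    regulation_mw = regulation_share * peak_load_mw
    print(f"{renewable_share:>4.0%} renewables -> ~{regulation_mw:,.0f} MW regulation")

# Output: 0% -> ~350 MW, 20% -> ~700 MW, 33% -> ~1,400 MW
```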

With the integration of wind and solar generation, the output of fossil fuel plants needs to be adjusted frequently to cope with fluctuations in output. Some power stations will be operated below their maximum output to facilitate this, and extra system balancing reserves will be needed. Efficiency may be reduced as a result, with an adverse effect on emissions. At high penetrations (above 20%), wind and solar energy may need to be "spilled" or curtailed because the grid cannot always utilize the excess energy.

3.2.1.2  Centralized and Distributed Generation

In the early days of the electric industry, power generation comprised a series of small generators installed at large customer facilities, towns, and cities. As the demand for reliable electricity supply increased and the industry developed, the need for larger generators and interconnected power systems grew as well. Large-scale centralized generation dominated the power industry for decades until growing environmental and socioeconomic concerns, and rising interest in power system efficiency improvement, favored the construction of smaller-scale generation facilities (particularly those of a renewable nature) closer to customer loads over the construction of large power plants and long transmission lines. This trend, prompted, for instance, by the Public Utilities Regulatory Policy Act (PURPA) of 1978 and the Energy Policy Act of 1992, has led to the emergence of the distributed energy resource (DER) concept, which includes distributed generation (DG), distributed storage (DS), and other customer energy resources implemented through programs such as demand response, load management, etc.

DERs are distribution-level energy sources that have smaller generating capacities than utility-scale generation resources. Examples include reciprocating diesel engines, natural gas–powered microturbines, large batteries, small to utility-scale renewable generation (photovoltaic [PV], wind, etc.), and fuel cells. DG usually refers to generation-only (not storage) energy resources at the distribution level. There are many potential configurations for DERs, from basic backup functions all the way up to a full microgrid.

Providing backup has been the most basic and prevalent application of DERs. Backup generators are usually small diesel generators designated as support for specific loads. Under this configuration, the grid has primary responsibility for providing power; the backup generator only kicks in when the grid has been compromised. Figure 3.4 shows privately owned and utility-owned backup generators powering the grid during an outage. The problem with these configurations is that they lead to what is called low asset utilization, since the backup generators do not run unless the grid is unavailable. Because they have relatively low asset utilization rates, the cost of delivered energy over the lifetime of backup generators tends to be very high. Those high costs drive private backup generator customers to opt for smaller backup generators that are generally not large enough to pick up the entire load. When an outage occurs, most of the load must be dropped with only critical loads, such as emergency lighting, remaining active. Furthermore, these critical loads are often on a separate circuit, meaning that even if the backup generators were large enough, they would not be able to power regular loads. Utilities follow similar logic, putting backup generators only on circuits where critical operations, such as hospitals or high-tech businesses, are located. In many situations, it would be helpful to the system to have the DER operating much of the time. But without embedded intelligence in these resources, they cannot be effectively integrated into the rest of the system.

Figure 3.4   Privately and utility-owned backup generator configurations.

The last two decades have seen a resurgence of grid-connected DG: both independent power producer (IPP) and utility-owned DG have started to dot distribution grids around the world. This DG application has the objective of supplying service to the grid in a continuous fashion, that is, in the same way as conventional centralized generators. The main differences of this approach are the location (close to the loads), installed capacity (smaller size), and type or lack of ancillary services (e.g., voltage regulation and frequency regulation) that the DG provides. Furthermore, it requires interconnection with the distribution system using synchronous, induction, or electronically coupled generators. This can represent a significant challenge, since distribution systems have historically been designed to be operated in a radial fashion, without any special considerations for DG, and it may lead to impacts that affect the operation of both distribution systems and DG, particularly for intermittent DG such as solar PV and wind. Smart grid technologies can play a significant role in facilitating the integration of DG and mitigating impacts on the distribution grid.

3.2.1.3  Technologies

There are several renewable sources of electric energy (generically called renewables). The main difference between renewables and conventional energy sources is that renewables provide energy that is considered cleaner with respect to pollution. Another distinguishing difference is that renewable energy sources do not deplete natural resources in the process of producing power. The third difference is that renewables are scalable to the appropriate size, from single-house applications all the way up to large-scale installations that can supply power to thousands of homes. Some of the most common renewable energy resources are introduced in the next sections.

3.2.1.3.1  Solar PV

Solar PV generation has experienced tremendous growth in recent years due to growing demand for renewable energy sources. PV generates electric power in solar panels exposed to light by converting the energy of the sun's rays. Solar panels consist of solar cells containing PV material that exhibits the photovoltaic effect: a solar cell exposed to light transfers electrons between different bands inside the material, which results in a potential difference between two electrodes and causes direct current (DC) to flow.

There are several main PV applications, such as solar farms, buildings, auxiliary power supplies in transportation devices, stand-alone devices, and satellites. Utilities around the world started incorporating solar farms into their generation portfolios mostly during the last decade. In order to incorporate a solar farm into a utility power system, a DC-to-alternating current (AC) converter (inverter) is needed, along with the corresponding protective relaying. The main issue with PV is intermittency. Since PV is a variable power source that cannot always be counted on, several efforts have been undertaken to increase the reliability of PV. One of the most successful has been adding battery storage: electric energy is stored during off-peak hours or curtailment periods and then used when PV output is not available.

PV has historically been used in smaller-size applications because of the low efficiency of solar cells. However, major advances in design, materials, and manufacturing have made the PV industry one of the fastest growing energy sources. Today, solar PV represents less than 0.5% of total global power generation capacity.
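
To first order, the conversion chain just described reduces to P = G × A × η: irradiance times panel area times cell efficiency. The numbers below are typical round values chosen for illustration, not data from the text.

```python
# First-order PV output estimate: P = G * A * eta.
# All values are typical round numbers for illustration.

G = 1000.0    # irradiance under full sun, W/m^2
A = 1.6       # panel area, m^2
eta = 0.15    # cell efficiency (~15%, reflecting the modest
              # efficiencies discussed above)

p_dc_watts = G * A * eta
print(f"DC output per panel: {p_dc_watts:.0f} W")   # ~240 W
```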

3.2.1.3.2  Solar Thermal

Solar thermal energy (STE) is a technology that converts solar energy into thermal energy (heat).

There are three collector levels, based on the temperatures they reach: low, medium, and high. In practice, low-temperature collectors are placed flat to heat swimming pools or provide space heating, medium-temperature collectors are flat plates used for heating water or air, and high-temperature collectors are used for electric power production.

Heat is a measure of the thermal energy that a particular object contains, and three main factors, specific heat, mass, and temperature, define this value. In essence, heat is accumulated from the sun's rays hitting the surface of the object and is then transferred by either conduction or convection. Insulated thermal storage enables STE plants to produce electricity on days with no sunlight. The main downside of STE plants is efficiency, which is a little over 30% at best for solar dish/Stirling engine technology, while other technologies are far behind.
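
The three factors named above combine in the standard sensible-heat relation Q = m·c·ΔT. A small worked example, using the textbook specific heat of water (an assumption, not a figure from the text):

```python
# Sensible heat stored in a mass: Q = m * c * dT.
# Example: water heated in a solar thermal collector loop.

m = 100.0     # mass of water, kg
c = 4186.0    # specific heat of water, J/(kg*K)
dT = 30.0     # temperature rise, K

Q_joules = m * c * dT
print(f"Stored heat: {Q_joules / 1e6:.1f} MJ")   # ~12.6 MJ
```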

3.2.1.3.3  Wind

Wind power is obtained by using wind turbines to convert the energy of the wind into electricity. Even though wind energy dates back centuries, when it was used to propel ships, today's applications are geared more toward utilities and the supply of power to larger regions. The main drivers of success for any wind farm are the average wind speed in the area and close proximity to the transmission system.

Wind energy is a highly desirable renewable energy source because it is a clean technology that produces no greenhouse gas emissions. The main downsides of wind power are its intermittency and the visual impact it has on the environment. During normal operation, all of the power of the wind farm must be utilized when it is available. If it is not used, the wind farm is either curtailed or the power generated is used to charge battery energy storage (BES), if storage is associated with the wind farm.

Wind power is higher at higher wind speeds, but since wind speed constantly changes, power comes and goes in short intervals. Inconsistency in power output is the main reason why wind farms cannot be used in a utility's base-load generation portfolio. The capacity factor of a wind turbine ranges anywhere from 20% to 40%.
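
The sensitivity to wind speed noted above comes from the cubic wind power relation P = ½·ρ·A·v³·Cp. The sketch below runs the numbers for a hypothetical turbine; the rotor size, power coefficient, and capacity factor are illustrative assumptions.

```python
# Wind power: P = 0.5 * rho * A * v^3 * Cp.
# Turbine parameters below are illustrative assumptions.
import math

rho = 1.225        # air density, kg/m^3
rotor_d = 80.0     # rotor diameter, m (hypothetical turbine)
A = math.pi * (rotor_d / 2) ** 2
Cp = 0.40          # power coefficient (Betz limit is ~0.593)

for v in (6.0, 12.0):                         # doubling wind speed...
    p_mw = 0.5 * rho * A * v**3 * Cp / 1e6
    print(f"{v:>4.1f} m/s -> {p_mw:.2f} MW")  # ...yields ~8x the power

# Annual energy for a 2 MW machine at a 30% capacity factor:
rated_mw, capacity_factor = 2.0, 0.30
print(f"~{rated_mw * capacity_factor * 8760:,.0f} MWh/year")
```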

3.2.1.3.4  Biomass and Biogas

Who would have thought that one day technology would be able to produce electricity from fuel made of living and dead biological material? Dead trees, wood chips, and plant or animal matter used for the production of fibers, chemicals, or heat all count as biomass. Technologies for converting biomass to electrical energy include releasing energy in the form of heat or electricity and conversion to a different form such as combustible biogas or liquid biofuel. The downside of biomass as a fuel is increased air pollution. The biomass industry has recently experienced an upswing, and electricity produced by biomass plants accounts for around 1.4% of the total U.S. electricity supply. Another form of biogas can be produced from algae: algae produce oil that can be converted for industrial use, as well as a biomass that can be converted to a synthetic natural gas, which can be used to generate electricity.

3.2.1.3.5  Geothermal

Geothermal power is extracted from the earth through natural processes. There are several technologies in use today, such as binary cycle power plants, flash steam power plants, and dry steam power plants. The main issue with geothermal power is the low thermal efficiency of geothermal plants, even though the capacity factor can be quite high (up to 96%).

Geothermal plants vary in size. Geothermal power is reliable and cost effective (no fuel), but the initial capital costs associated with deep drilling and earth exploration are the main factors deterring higher penetration of geothermal resources.

3.2.1.3.6  Wave Power

There are two types of ocean power that can be harnessed: wave power and tidal power. Wave power captures the energy of waves on the ocean surface and converts it into electricity. Wave farms have already been installed in Europe. Currently, this type of renewable does not have significant penetration because it is highly variable and requires large wave energy converters to be deployed. The first such farm in the United States is expected to be a wave park in Reedsport, Oregon. The PowerBuoy technology to be used for this project consists of modular, ocean-going buoys; the rising and falling of the waves causes the buoys to move, creating mechanical energy that is converted to electric energy and transmitted to shore through an underwater transmission line.

3.2.1.3.7  Hydro

Hydropower plants use the energy of moving water as the main source for producing electricity. Falling water, driven by gravity, strikes the blades of the rotor, causing the rotor to turn and thus producing electricity. Hydropower plants are typically built either in places where water is scarce but fast moving (such as mountainous areas) or in valleys where water is abundant but slow moving.

3.2.1.3.8  Fuel Cells

A fuel cell is an electrochemical cell that converts a source fuel into electric energy. A reaction within the cell between a fuel and an oxidant, in the presence of an electrolyte, generates electricity. Reactants flow into the cell and reaction products flow out of it, while the electrolyte remains within it. Fuel cells must be replenished with fuel and oxidant from outside.

3.2.1.3.9  Tidal Power

Tidal power converts the energy of tides into electricity. The most common tidal power technologies are tidal stream generators and tidal barrages. Tidal stream generators rotate underwater, producing electricity from the kinetic energy of tidal streams. A tidal barrage uses a dam located across a tidal estuary to produce electricity from the potential energy of water: water flows into the barrage during high tide and is released through a set of turbines during low tide. New technologies such as dynamic tidal power, intended to exploit a combination of the kinetic and potential energy of tides, are being discussed and evaluated.

3.2.1.3.10  Combined Heat and Power

Base-load or combined heat and power (CHP) operation modes give the DER primary responsibility for supporting the load. CHP takes its name from the fact that heat from the DER can be used to supply heat locally, increasing the overall efficiency of the system. The DER operates nearly continuously, but the load usually remains connected to the grid, especially if it is too large to be supported by the DER alone. A grid outage does not significantly change the operating mode of the DER, although the DER may be required to drop part of the load if it cannot support the load in its entirety. Figure 3.5 shows a base-load/CHP DER unit supporting a load during normal operation and during a grid outage.

This configuration offers much higher asset utilization than the backup generator configuration because the DER is running essentially at all times. This improves the economics of the DER purchase compared to backup generation, but it may not improve the economics of the operation as a whole: the constant use of a base-load or CHP system means its energy (fuel) costs can run nearly as high as the cost of the equipment itself, whereas backup generation has low energy costs because it is seldom used.

For example, if power from a backup generator costs $0.50/kWh to produce 5% of your energy use, and grid electricity costs $0.10/kWh for the other 95%, then the blended energy cost is $0.12/kWh. In contrast, if power from a base-load generator costs $0.15/kWh and comprises 60% of your energy use, and grid electricity costs $0.10/kWh and comprises the other 40%, then the blended rate is $0.13/kWh. Therefore, the cost of production from the base-load or CHP DER application matters more than the cost of the backup generation, because it represents a larger fraction of the total energy costs.
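
The blended-rate arithmetic above is easy to verify; this minimal sketch simply reproduces the two cases from the text:

    # Blended energy cost = sum of (price x share of energy) over sources
    def blended_rate(*price_share_pairs):
        return sum(price * share for price, share in price_share_pairs)

    # Backup case: $0.50/kWh for 5% of energy, grid $0.10/kWh for 95%
    print(blended_rate((0.50, 0.05), (0.10, 0.95)))   # 0.12 $/kWh

    # Base-load/CHP case: $0.15/kWh for 60%, grid $0.10/kWh for 40%
    print(blended_rate((0.15, 0.60), (0.10, 0.40)))   # 0.13 $/kWh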

It is clear that for a base-load or CHP DER application to be economical, the generator must be carefully chosen and matched with a load in a way that delivers energy at a lower cost than the grid. One way to accomplish this is to use the excess heat from the DER to warm a nearby industry or residential area. This delivers value through heat as well as electricity. Despite an aggressive build-out of CHP and base-load applications in the 1970s, there are many viable candidates in the United States and a great deal more in developing countries.

Figure 3.5   Base-load or CHP generator supporting local node during normal operation and outage.

3.2.1.4  Renewable Energy Needs in a Smart Grid

To integrate high penetration levels of intermittent renewable resources, such as California's target of 33% by 2020, several planning and operational solutions should be followed. There is no silver bullet; integration requires a combined effort on three major levels that can be used in a smart grid strategy [1]:

  • Generation mix to utilize different complementary resources
  • Advanced smart grid transmission facilities, including fast-responding energy storage, FACTS, HVDC, WAMPAC, etc.
  • Smart grid applications on distribution networks, including distribution automation, fast demand response, distributed resources (DRs) on the distribution feeders, distributed energy storage, controlled charging of plug-in hybrid electric vehicles (PHEVs), demand-side management (DSM), etc.

The purpose of increased transmission planning is to identify complete and preferred transmission plans and facilities to integrate these high levels of renewables. The clear goal is to develop a staged transmission expansion plan, facilities, and storage options to integrate these potential renewable penetration levels.

Most of the models for these advanced wind and solar facilities have not yet been fully developed and validated. The generator models for wind and solar generation technologies need to be upgraded and validated to include short-circuit models and dynamic variance models such as cloud transients and short-term wind fluctuations.

The European experience with high levels of intermittent resources, up to 80% penetration, does not transfer fully because of differences in U.S. grid design and load density. The integration of renewable energy at this scale will have significant impact, especially if energy storage devices (central and distributed) and FACTS devices are utilized to counterbalance the influence of the intermittent generation sources. Utilities and ISOs in the United States should conduct RD&D projects and commence studies to fulfill their obligation to accurately and reliably forecast the impacts on future system integrated resource planning. Because of the long lead time for some of the proposed technology solutions, it is recommended that utilities engage these challenges sooner rather than later: if technical challenges manifest, a timely solution cannot be implemented if studies, demonstration installations, and field tests still have to be conducted. Additionally, utilities should study all conceivable options that may severely affect transmission system integrity and stability; otherwise, they may experience unintended consequences from unforeseen technical issues resulting from high penetrations of new renewable energy sources.

3.2.2  Energy Storage

Energy storage in general is a very old concept, even though it was not recognized as such. For instance, solar energy has been transformed and stored in the form of fossil fuels that are used today in a large number of applications. Energy storage concepts have not been widely applied to power systems until recently due mainly to technological and economic limitations given the large volumes of energy that typically are of interest in the power industry. Some exceptions are pumped hydro and uninterruptible power supply (UPS) systems. However, energy storage concepts have been commonly applied to other areas of electrical engineering such as electronics and communications, where the amounts of energy to be stored are easier to manage.

3.2.2.1  Regulatory and Market Forces Driving Energy Storage and Smart Grid Impact

Grid energy storage, the ability to store energy within the power delivery grid, can arguably be regarded as the "holy grail" of the power industry. It is expected to play a key role in facilitating the integration of DRs and plug-in electric vehicles (PEVs) and in fully enabling the capabilities, higher efficiency, and operational flexibility of the smart grid. The main challenge with electric energy is that it must be used as soon as it is generated or else converted into other forms of energy. Storage systems accumulate energy during times when their assistance is not required and later dispatch it into the power system for certain periods of time, decreasing the demand for generation and assisting the system when needed.

The ability to store energy in an economic, reliable, and safe way would greatly facilitate the operation of power systems. Unfortunately, high costs and technology limitations have constrained the large-scale application of storage systems. Historically, pumped hydro has been the most common energy storage technology at the power system level. Nevertheless, the last two decades have seen the emergence and practical application of new technologies such as battery systems and flywheels, prompted by the increasing need to integrate intermittent resources and PEVs, growing demand for high reliability (for instance, via implementation of microgrids), and the need to find alternative technologies to provide ancillary services and system capacity deferral, among others. There is growing interest worldwide in this area, and regulatory mechanisms and incentives are being proposed and debated, such as the U.S. Congress Storage Act of 2009 (S. 1091), which called for amending the Internal Revenue Code to

  • Allow a 20% energy tax credit for investment in energy storage property directly connected to the electrical grid (i.e., state systems of generators, transmission lines, and distribution facilities) and designed to receive, store, and convert energy to electricity and deliver such electricity for sale
  • Make such property eligible for new, clean, renewable energy bond financing
  • Allow a 30% energy tax credit for investment in energy storage property used at the site of energy storage
  • Allow a 30% nonbusiness energy property tax credit for the installation of energy storage equipment in a principal residence

There are several main applications for energy storage systems, including frequency regulation, spinning reserve, peak shaving/load shifting, and renewable integration.

3.2.2.1.1  Frequency Regulation

In practice, there is always a mismatch between generation and load in a power system, and this mismatch results in frequency variations. System operators continuously match generation to load so that the frequency stays as close as possible to 60 Hz. Frequency variability is further increased by the addition of renewables such as solar and wind. Every power system is required to maintain frequency within desired limits; large deviations from 60 Hz cause unwanted system instability and can bring the whole system down. As noted earlier, system operators balance generation and load by varying the output of appropriate generating units based on the system frequency. This type of regulation is called frequency regulation. In addition to supplying the forecast load, utility operators always carry an extra amount of generation known as spinning reserve. This reserve must be large enough to provide power for frequency regulation as well as to cover the tripping of the largest generating unit in the system without interrupting service. The amount of regulation capacity is mostly based on historical records and may vary with factors such as time of day and time of year.

One basic difference between regulated and deregulated markets is that deregulated markets include a market for ancillary services such as frequency regulation. In this market, the reserve capacity of each generating unit can be bid, and the market price is paid both for the capacity reserved for regulation and for the energy actually provided. The system works as follows: the control system for a particular balancing authority sets the output of each generation asset. The system computes the difference between power output and load demand (adjusted with a frequency error bias), called the area control error (ACE). From this signal, another signal called automatic generation control (AGC) is derived and sent to the regulation service provider, which in turn adjusts its power output based on the received AGC signal. A request to raise frequency requires the provider to supply additional power to the grid, which for an energy storage system corresponds to discharging into the grid. Conversely, a request to lower frequency requires the provider to absorb power from the grid, which corresponds to charging the storage system.
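
The ACE/AGC chain described above can be sketched as follows. The sign convention follows the standard ACE definition; the bias constant and all numeric values are illustrative assumptions:

    # ACE = (NI_actual - NI_scheduled) - 10*B*(F_actual - F_scheduled),
    # with the frequency bias B (MW per 0.1 Hz) negative by convention.
    NOMINAL_HZ = 60.0
    BIAS_B = -50.0  # MW per 0.1 Hz (assumed)

    def area_control_error(ni_actual_mw, ni_sched_mw, freq_hz):
        return (ni_actual_mw - ni_sched_mw) - 10.0 * BIAS_B * (freq_hz - NOMINAL_HZ)

    def storage_agc_command(ace_mw):
        # Positive ACE signals over-generation: the storage charges.
        # Negative ACE signals under-generation: the storage discharges.
        return ("charge" if ace_mw > 0 else "discharge", abs(ace_mw))

    # Frequency sagging to 59.98 Hz yields a discharge command of 10 MW.
    print(storage_agc_command(area_control_error(0.0, 0.0, 59.98)))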

In the past, thermal generators and hydro facilities have been used to provide frequency regulation because of their fast response, which is needed for effective regulation. However, this was not the most economic dispatch because of the losses and the increased wear and tear on the generating sources. In addition, these are base-load plants, so their output had to be reduced to provide frequency regulation capacity, which in turn forced higher-cost generating units online to support the load. Energy storage that provides frequency regulation allows better optimization of generation assets. In addition, every MW of renewable resources added to the system is estimated to require a 3%-10% increase in regulation service.

3.2.2.1.2  Spinning Reserve

As mentioned earlier, the total generation in a utility's region equals the load demand plus some spinning reserve. The amount of spinning reserve is equal to or larger than the output of the largest power-producing unit connected to the system, plus some margin; the reason is the need for immediate power if the largest unit trips off-line. Because it takes a certain amount of time to start any generating unit, energy storage systems provide an additional benefit: they can be deployed immediately. In reality, during high-load periods the majority of thermal and hydro units are dispatched and run at their maximum and therefore cannot be used as spinning reserve, so additional units are needed to provide it. During light- or medium-load conditions, these generating units run below maximum output, with the difference designated as spinning reserve. Committing generating resources to spinning reserve is mandatory, but it increases operating costs and decreases efficiency. Energy storage systems help reduce the spinning reserve provided by thermal and hydro units and allow dispatchers to set operating points at maximum levels during economic dispatch. Similar to the frequency regulation market, deregulated markets include a spinning reserve service market in which generation owners bid to provide this service. The main downside of energy storage systems is that they can provide output only for a limited, not unlimited, amount of time: after a storage system begins supplying energy, additional generating units must be brought online before its output runs out in order to avoid service interruptions.
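
A minimal sketch of the reserve-sizing logic just described, with hypothetical unit ratings; it also shows why storage can only bridge the gap until slower units start:

    # Spinning reserve sized as largest online unit plus a margin (values assumed)
    largest_unit_mw = 800.0
    margin_mw = 100.0
    reserve_required_mw = largest_unit_mw + margin_mw   # 900 MW

    # A 200 MW / 200 MWh storage system can carry its share of reserve
    # only until quick-start generation comes online.
    storage_power_mw, storage_energy_mwh = 200.0, 200.0
    bridging_hours = storage_energy_mwh / storage_power_mw
    print(reserve_required_mw, bridging_hours)          # 900.0 MW, 1.0 h at full output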

3.2.2.1.3  Peak Shaving/Load Shifting

Load demand is always changing, and utilities employ different techniques to predict daily load curves. The major inputs to load estimation are temperature, load demand during the last 7-10 days, and historical data. Based on the estimated load curves, an economic dispatch is created to identify the generating units that will supply the needed power, along with spinning reserve and an allowance for uncertainty in the load estimate. Every generating unit has an operating cost per kWh associated with it, and economic dispatch is based on these costs. Units with the lowest operating cost are used for base load and run most of the time; nuclear, hydro, and modern coal plants, for example, are used almost exclusively for base load. Note that these units also have the highest capital cost of construction. To cover peak demand, a utility must bring online its higher-operating-cost generating units; plants with combustion turbines (CTs), for example, might be used for only a few hours in the whole year to cover the peak load. To level demand and shift energy usage toward off-peak hours, energy must first be stored during periods of low demand, when the cost of generation is low, and then supplied from the storage systems to the grid during peak times.
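
The load-shifting economics can be sketched as a simple arbitrage calculation: charge at the off-peak cost of generation, discharge at the on-peak cost, and account for round-trip losses. All prices and the efficiency below are illustrative assumptions:

    offpeak_price = 0.04      # $/kWh generation cost off-peak (assumed)
    onpeak_price = 0.18       # $/kWh on-peak (assumed)
    roundtrip_eff = 0.85      # fraction of stored energy recovered (assumed)
    shifted_mwh = 100.0       # energy delivered during the peak

    charge_cost = (shifted_mwh / roundtrip_eff) * 1000.0 * offpeak_price
    discharge_value = shifted_mwh * 1000.0 * onpeak_price
    print(discharge_value - charge_cost)   # ~$13,300 net benefit per cycle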

3.2.2.1.4  Renewable Integration

Energy storage, power electronics, and communications have a key role to play in mitigating the intermittency and ramping requirements of large-scale wind and solar energy penetration. Since their inception, wind and solar technologies have made major breakthroughs and become more reliable, and utilities are steadily incorporating these two renewable resources into their generation portfolios. However, the biggest issue associated with wind and solar power is the unpredictability and variability of their output; these technologies also require regulation. Solar and wind energy production is not dispatchable and typically results in high levels of power and associated voltage fluctuations. Common problems in remote wind production areas include low capacity factors for all the wind farms, impacts of line contingencies on wind farm operations, curtailment of wind farm outputs during high production times, and high ramp rate requirements [1].

In most urban regions, flat-plate PV collectors are predominantly used for solar generation and can exhibit power production fluctuations, including a sudden (seconds timescale) loss of the complete power output. With partial clouding of a PV array, large power fluctuations can also appear at the output of the PV solar farm, with large power quality impacts on distribution networks. It is clear that at large-scale penetration levels these power variations can produce several power quality and power balancing problems. Cloud cover and morning fog require fast ramping and fast power balancing on the interconnected feeder. Furthermore, several solar production facilities are often planned in close proximity on the same electrical distribution feeder, which can result in high levels of voltage fluctuation and even flicker. Reactive power and voltage profile management on these feeders are common problems in areas with high penetration levels.

Energy storage systems can be used to smooth the power output of renewable sources by limiting its rate of change: the storage system adds or removes power as needed. One of the most promising solutions for mitigating these integration issues is a hybrid of fast-acting energy storage and a STATCOM in a smart grid solution; several fast-reacting energy storage solutions are currently available on the market. To mitigate the wind and solar integration problems described above, the energy storage device needs to be fast acting, with a storage capability of typically 15 min to 4 h, combined with a STATCOM rated larger than the battery power requirement to provide adequate dynamic reactive power capability. Figure 3.6 shows a STATCOM-BESS application for mitigating wind farm integration issues [2] (a ramp-limiting sketch follows the component list below). The main components and technical characteristics of this smart energy storage solution are as follows:

  • 8 MW/4 h battery
  • 20 MVAr inverters for BESS and STATCOM
  • Integrated control and HMI (human-machine interface) of STATCOM and BESS system
  • Substation communications interface for integrating the BESS solution into a distribution automation and ISO market participation environment
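
As noted above, the smoothing strategy limits the rate of change of the renewable output, with the battery absorbing or injecting the difference. The sketch below implements such a ramp-rate limiter; the profile values and ramp limit are illustrative, not a description of any specific product:

    # Ramp-rate limiter: grid injection may change by at most max_ramp_mw per step;
    # the battery covers the difference (+ = charging, - = discharging).
    def smooth(raw_mw, max_ramp_mw):
        out, battery = [], []
        level = raw_mw[0]
        for p in raw_mw:
            step = max(-max_ramp_mw, min(max_ramp_mw, p - level))
            level += step
            out.append(level)
            battery.append(p - level)
        return out, battery

    wind_mw = [8.0, 8.2, 3.5, 3.8, 7.9, 8.1]    # sudden lull and recovery (assumed)
    print(smooth(wind_mw, max_ramp_mw=1.0))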

3.2.2.2  Centralized and Distributed Energy Storage

Energy storage applications can be centralized or distributed. The selection of the type of solution and technology for an application is a function of the type of problem to be addressed and a series of technical and economic considerations such as ratings, size and weight, capital cost, life, efficiency, and per-cycle cost. Figure 3.7 shows a summary of the installed grid-connected energy storage technologies worldwide.

Figure 3.6   Basic Schematic of STATCOM-BESS application. (From Enslin, J., Dynamic reactive power and energy storage for integrating intermittent renewable energy, Invited Panel Session, Paper PESGM2010-000912, IEEE PES General Meeting, Minneapolis, MN, July 25–29, 2010.)

Figure 3.7   (See color insert.) Installed grid-connected energy storage technologies worldwide as of April 2010. (From Current Energy Storage Project Examples, California Energy Storage Alliance (CESA), http://www.storagealliance.org/presentations/CESA_OIR_Storage_Project_Examples.pdf)

Figure 3.8   Energy storage requirements for electric power utility applications. (Data from Sandia Report 2002-1314; Electricity Storage Association (ESA), Utility Support, http://www.electricitystorage.org/technology/technology_applications/utility_support/)

Centralized energy storage applications consist of large, MW-size facilities usually connected at transmission system voltage levels; these applications are typically used to provide ancillary services during short periods of time (e.g., seconds or minutes) and for integration of intermittent renewable generation. Distributed energy storage (DS) consists of kW- and smaller MW-size facilities connected at distribution system voltage levels, at distribution substations, on feeders, or in customer facilities; this includes applications such as community energy storage (CES) and vehicle-to-grid (V2G). CES is a concept that is increasingly being studied, with applications ranging from 25 to 75 kWh and devices similar to pad-mounted distribution transformers. Distributed energy storage is typically used for integration of intermittent renewable generation, distribution reliability improvement, and capacity deferral; it therefore requires longer storage times (e.g., minutes or hours), as shown in Figure 3.8. This application is also increasingly being considered for the integration of PEVs. The Electricity Storage Association (ESA) provides a comprehensive description of the recommended applications and the advantages and disadvantages of each technology, which are summarized in Table 3.4 and Figure 3.8 and discussed in the next sections.

The coordinated implementation of smart grid technologies such as distributed energy storage, communications, control, power electronics, and power system technologies allows the seamless integration of intermittent DG and adds further capabilities to it, including controllability (i.e., dispatchability) and firmness. These capabilities can be used for capacity planning applications (e.g., capacity deferral), increased operational flexibility during outages (intentional islanding), and reliability improvement. Furthermore, DS in the smart grid context may be used to mitigate impacts caused by both DG (especially PV) and PEVs; this idea is described in general terms in Figures 3.9 and 3.10.

3.2.2.3  Technologies

Energy storage methods can be divided into several groups: chemical, electrical, electrochemical, mechanical, thermal, and biological. Some of the most common examples of energy storage systems connected to the utility power grid include the following:

  • BES
  • Superconducting magnetic energy storage (SMES)
  • Flywheel energy storage (FES)
  • Compressed air energy storage (CAES)
  • Ultracapacitors
  • Pumped hydro

Table 3.4   Energy Storage Technology Comparisons

Source: Electricity Storage Association (ESA), Technology Comparison, http://www.electricitystorage.org/technology/storage_technologies/technology_comparison

These energy storage systems are investigated separately later in this chapter.

Figure 3.9   (See color insert for b only.) Conceptual description of grid energy storage. (a) Network power flows and (b) Energy storage and release cycles. (From Wikipedia, Grid Energy Storage, http://en.wikipedia.org/wiki/Grid_energy_storage)

3.2.2.3.1  Battery Energy Storage

BES is mostly used for load leveling, peak shaving, and frequency regulation. Today there are two types of batteries, distinguished by their chemical structure. The first, called power batteries, are capable of fast charge/discharge and are mainly used for frequency regulation. The second type has slower charge/discharge times and is used for load leveling and peak shaving.

3.2.2.3.2  Superconducting Magnetic Energy Storage

SMES stores energy in the magnetic field created by the flow of DC current in a superconducting coil that has been cryogenically cooled below its superconducting critical temperature. An SMES system consists of three parts: a bidirectional AC/DC inverter system, the superconducting coil, and a cryogenic refrigerator. DC current charges the superconducting coil, which then stores magnetic energy until it is released by discharging the coil. The bidirectional inverter converts AC to DC power and vice versa during the charging and discharging cycles. The cost of SMES is high today because of the superconducting wire and the refrigeration energy use, and its main use is reducing loading during peak times.
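
The stored energy follows E = (1/2)LI^2; a quick check with assumed coil parameters (not values from the text):

    # Energy in a superconducting coil: E = 0.5 * L * I^2
    inductance_h = 0.5       # henries (assumed)
    current_a = 2000.0       # amperes (assumed)
    energy_j = 0.5 * inductance_h * current_a ** 2
    print(energy_j / 3.6e6, "kWh")    # 1.0e6 J, roughly 0.28 kWh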

Figure 3.10   Potential application of grid energy storage for mitigation of PV-DG and PEV impacts. (a) PV-DG energy stored and (b) PHEV and BEV energy released. (From Agüero, J.R., Steady state impacts and benefits of solar photovoltaic distributed generation (PV-DG) on power distribution systems, CEATI 2010 Distribution Planning Workshop, Toronto, Canada, June 2010.)

The main technical challenges associated with SMES are its large size, the mechanical support required due to high forces, superconducting cable manufacturing, the infrastructure required for installation, the critical current and critical magnetic field levels at which the superconducting properties of materials break down, and health effects of exposure to large magnetic fields.

3.2.2.3.3  Flywheel Energy Storage

FES operates on the principle of rotational energy: a flywheel is accelerated to a very high speed, and maintaining rotation at that speed stores energy. When energy is demanded from the system, the flywheel's rotational speed is reduced. To reduce friction, the rotor is placed in a vacuum chamber and is connected to an electric motor or generator. FES is not affected by temperature changes, and the stored energy is easily calculated, but the main danger is explosion of the flywheel if its tensile strength is exceeded.
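
The "easily calculated" stored energy is E = (1/2)Iw^2 for a flywheel of moment of inertia I; the disk dimensions and speed below are assumptions for illustration:

    import math

    # Solid-disk flywheel: I = 0.5 * m * r^2; stored energy E = 0.5 * I * w^2
    mass_kg, radius_m = 100.0, 0.5            # assumed rotor
    inertia = 0.5 * mass_kg * radius_m ** 2
    omega = 30000.0 * 2.0 * math.pi / 60.0    # 30,000 rpm in rad/s (assumed)
    energy_j = 0.5 * inertia * omega ** 2
    print(energy_j / 3.6e6, "kWh")            # ~17 kWh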

3.2.2.3.4  Compressed Air Energy Storage

Energy generated at one point in time (off-peak) can be stored and used at another (peak), and CAES represents one viable option. There are three types of air storage: adiabatic, diabatic, and isothermal. Adiabatic storage retains the heat produced by compression and returns it to the air when the air is expanded to generate power. Diabatic storage dissipates a portion of the heat as waste; for the air to be used after it is removed from storage, it must be reheated prior to expansion in the turbine that powers the generating unit. Isothermal storage operates at constant temperature by utilizing heat exchangers, which account for some losses.
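
For the isothermal case, the ideal compression work (ignoring exchanger losses) is W = p1*V1*ln(p2/p1); the cavern pressure and air volume below are illustrative assumptions:

    import math

    # Ideal isothermal compression work for air: W = p1 * V1 * ln(p2 / p1)
    p1 = 101_325.0           # Pa, ambient pressure
    p2 = 70.0 * p1           # ~70 bar storage pressure (assumed)
    v1 = 1000.0              # m^3 of ambient air compressed (assumed)
    work_j = p1 * v1 * math.log(p2 / p1)
    print(work_j / 3.6e9, "MWh")   # ~0.12 MWh per 1000 m^3 of ambient air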

Most CAES systems currently in operation do not utilize the compressed air to directly generate electricity [8]. Rather, the compressed air is fed into simple-cycle CTs, reducing the compression work in the standard recuperated Brayton cycle. In this mode, the CAES system serves to precompress combustion air during off-peak periods, improving the output of the CT during on-peak periods.

3.2.2.3.5  Ultracapacitors

Ultracapacitors or supercapacitors are sources of DC energy, so a bidirectional AC/DC inverter is needed to connect them to the power grid. Because of their fast charge/discharge rates, ultracapacitors are used mainly to ride through short interruptions and voltage sags.

Unlike batteries, where energy is stored chemically, ultracapacitors store energy electrostatically. An ultracapacitor consists of two electrodes, called collector plates, suspended in an electrolyte. A dielectric separator placed between the collector plates prevents charge from moving from one electrode to the other. An applied potential difference between the two collector plates causes negative ions in the electrolyte to be attracted to the positive collector plate and positive ions to collect on the negative collector plate.
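
The electrostatically stored energy is E = (1/2)CV^2; a quick check with cell-level values typical of large ultracapacitors (assumed, not from the text):

    # Energy in a capacitor: E = 0.5 * C * V^2
    capacitance_f = 3000.0   # farads (assumed large cell)
    voltage_v = 2.7          # rated cell voltage (assumed)
    energy_j = 0.5 * capacitance_f * voltage_v ** 2
    print(energy_j)          # ~10,900 J, i.e., only about 3 Wh per cell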

Ultracapacitors have several advantages and disadvantages compared to batteries. Disadvantages include a lower amount of energy stored per unit of weight, more complex control and switching equipment, high self-discharge, the need for additional voltage balancing, and safety issues. Advantages include long life, low cost per cycle, good reversibility, high charge/discharge rates, high efficiency, and high output power.

3.2.2.3.6  Pumped Hydro

The pumped hydro storage method stores energy in the form of water pumped from a reservoir at a lower elevation to a reservoir at a higher elevation. Pumping is done during off-peak hours, when the cost of the electricity needed to run the pumps is low; during high-demand periods, the water is released through turbines. Pumped hydro is the highest-capacity storage system currently available and is used for load flattening, frequency control, and reserve generation. However, the cost of building pumped hydro storage is very high.
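
The recoverable energy is gravitational potential energy, E = rho*V*g*h times the turbine efficiency; the reservoir volume, head, and efficiency below are assumptions:

    # Pumped hydro energy: E = rho * V * g * h * efficiency
    rho, g = 1000.0, 9.81            # kg/m^3, m/s^2
    volume_m3 = 1_000_000.0          # upper reservoir volume (assumed)
    head_m = 300.0                   # elevation difference (assumed)
    turbine_eff = 0.90               # generation efficiency (assumed)
    energy_mwh = rho * volume_m3 * g * head_m * turbine_eff / 3.6e9
    print(energy_mwh)                # ~736 MWh per full reservoir cycle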

3.2.2.3.7  Thermal Energy Storage

Thermal energy storage consists of a series of technologies that store thermal energy in reservoirs (e.g., using molten salt or ice) when electricity production is cheap (e.g., off-peak, when most electricity is produced by efficient and relatively inexpensive "base" units) and release it for heating or cooling purposes when electricity production is expensive (e.g., on-peak, when electricity is produced by costly "peaking" units). This equates to electricity production savings and/or T&D capacity deferral due to load shaving.

Recent developments in thermal storage have investigated conversion of stored heat directly into electricity, using Brayton or Rankine cycles [9]. Work on these systems has been catalyzed by thermal storage systems utilized for concentrating solar power, where excess heat captured during the day is stored for power generation in the evening. Round-trip efficiency of electrical-thermal storage remains problematic, with typical verified efficiencies below 30%. As a result, much attention is currently focused on increasing the temperature of thermal storage to greater than 500°C, utilizing phase-change materials to reduce system size and augmenting thermal storage material to improve thermal conductivity within the storage tanks.

3.2.3  Electric Vehicles

3.2.3.1  Regulatory and Market Forces Driving Electric Vehicles and Smart Grid Impact

With the implementation of smart grid technologies and the associated improvements in the reliability, sustainability, security, and economics of the electric grid comes the opportunity to include vehicles as active participants in the smart grid. Although electrification of segments of the transportation energy sector does not require any technological or systemic advancement of the electric grid beyond what is presently available, the large scale of the transportation energy sector will pose long-term challenges to the legacy systems of the electric grid, along with considerable opportunities for improved power, energy, and economic management in a smart grid system.

Electric transit (including electric trains and catenary trolleybuses) has a long history of integration with the electric grid and has traditionally operated at large, centralized scales, "tethered" to the grid; these technologies require a more-or-less continuous supply of electricity during operation of the vehicle. The introduction of high-density energy storage has brought a watershed change in electric transportation: distributed, small vehicles operating in an untethered mode. PEVs are defined as vehicles that can store and use electricity from the electric grid. There are many types of PEVs under development by automakers. Electric vehicles (EVs) are vehicles whose only source of motive energy is stored in batteries; by definition, all EVs are PEVs, charging their storage system, typically batteries, from the grid (Figure 3.11).

Hybrid electric vehicles (HEVs) combine conventional engines and electric drive trains to provide motive power from either internal combustion fuel or energy stored in batteries. In conventional HEVs, all motive energy is supplied by the internal combustion engine (ICE), with the battery providing limited energy buffering during operation; these vehicles do not interact with the grid. In contrast, PHEVs are PEVs that can charge their batteries from the electric grid, allowing either liquid fuel or grid electricity to be the ultimate energy source for the vehicle [10]. Plug-in fuel cell vehicles (PFCVs) are fuel cell EV hybrids whose electrical energy storage system can be charged from the electric grid; as with PHEVs, either the fuel cell reactants or grid electricity can be the ultimate source of motive energy [11,12].

Relative to a conventional internal combustion vehicle or conventional HEV baseline, there are numerous benefits that come with the electrification of transportation energy through PEVs [10]:

Figure 3.11   Degree of Vehicle Electrification. (Discussion of the Benefits and Impacts of Plug-In Hybrid and Battery Electric Vehicles, Electric Power Research Institute, MIT Energy Initiative paper, Draft 6, April 2010, http://web.mit.edu/mitei/docs/reports/duvall-hybrid-electric.pdf)

  • Reduced petroleum consumption
  • Lower life-cycle greenhouse gas and criteria pollutant emissions
  • Lower fueling costs
  • Lower life-cycle cost of ownership

Because of these potential benefits, there is a steady and growing interest in the development of PEVs.

Numerous traditional and entrepreneurial automakers have research, development, and limited-production plug-in vehicle programs, and Mitsubishi, General Motors, Nissan, and others have launched production programs. The rate of introduction of PEVs into the world vehicle fleet is expected to increase due to pressure from regulators such as the U.S. Environmental Protection Agency, the California Air Resources Board, and others. The increasing commercial and private investment in PEVs will drive a corresponding investment in the electrical infrastructure serving them. This investment will include public and in-home electric charger installations, which will incorporate passive or active forms of communication to facilitate the integration of large fleets of PEVs onto the electric grid.

The following sections examine the potential impact of PEVs on the existing grid, describe methods of using smart grid technologies to alleviate foreseen problems, and investigate potential opportunities to enhance the performance of the electric grid using PEVs.

3.2.3.2  Technologies

3.2.3.2.1  Battery Electric Vehicles

A battery electric vehicle (BEV) is a type of EV that uses rechargeable battery packs to store electrical energy and an electric motor (DC or AC, depending on the technology) for propulsion. It is intrinsically a PEV, since the battery packs are charged via the electric vehicle supply equipment (EVSE), that is, by "plugging in" the BEV. The North American standard for EV electrical connectors is SAE J1772, maintained by the Society of Automotive Engineers (SAE) [13]. The standard defines two charging levels: AC Level 1 (120 V, 16 A, single-phase) and AC Level 2 (208-240 V, up to 80 A, single-phase). Additional work is being conducted on standardizing Level 3 (300-600 V, up to 400 A, DC). A variety of technologies are used to manufacture the battery packs, including lead acid, lithium ion, and nickel metal hydride. The technical requirements of these batteries differ from those of conventional vehicles and include higher ampere-hour capacity, power-to-weight ratio, energy-to-weight ratio, and energy density. Since BEVs have no combustion engine, their operation depends entirely on charging from the electric grid; therefore, uncontrolled charging cycles at large market penetration levels may significantly impact power distribution systems. Commercial examples of this type of vehicle are the Nissan Leaf, Mitsubishi MiEV, and Tesla Roadster. The main criticism of BEVs is their reduced driving range (between 100 and 200 mi before recharging) compared with conventional vehicles (>300 mi) [14].
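
A rough implication of the J1772 charging levels above is that charge time scales inversely with charging power. The pack size, Level 2 current, and charger efficiency below are assumptions for illustration:

    # Approximate charge time = pack energy / (V * I * efficiency)
    def charge_hours(pack_kwh, volts, amps, efficiency=0.9):
        return pack_kwh / (volts * amps * efficiency / 1000.0)

    pack_kwh = 24.0                          # BEV-class pack (assumed)
    print(charge_hours(pack_kwh, 120, 16))   # AC Level 1: ~13.9 h
    print(charge_hours(pack_kwh, 240, 30))   # AC Level 2 at 30 A: ~3.7 h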

3.2.3.2.2  Hybrid Electric Vehicles

An HEV is a type of EV that uses a combination of a conventional ICE and an electric motor for propulsion. HEVs use different technologies to improve efficiency and reduce emissions, including regenerative braking, using the ICE to generate electricity to recharge the batteries or power the electric motor, and relying on the electric motor most of the time while reserving the ICE for propulsion only when needed. Commercial examples of this type of vehicle include the Toyota Prius and the Honda Insight. HEVs are not PEVs, since they operate autonomously without recharging their batteries from the power grid; therefore, no grid impact is expected from the proliferation of this type of EV. HEVs are, as of 2011, the highest-selling EVs in the market: U.S. HEV sales for the 2009-2010 period exceeded 250,000 units, as shown in Figure 3.12.

Figure 3.12   (See color insert.) U.S. HEV sales from 1999 to 2010. (From U.S. DOE alternative fuel vehicles (AFVs) and hybrid electric vehicles (HEVs), http://www.afdc.energy.gov/afdc/data/vehicles.html#afv_hev)

3.2.3.2.3  Plug-in Hybrid Electric Vehicles

A PHEV is a type of EV that has an ICE and an electric motor (like an HEV) and a high-capacity battery pack that can be recharged by plugging-in the car to the electric power grid (like a BEV). There are two basic PHEV configurations [16]:

  • Series PHEVs or extended range electric vehicles (EREVs): only the electric motor turns the wheels; the ICE is only used to generate electricity. Series PHEVs can run solely on electricity until the battery needs to be recharged. The ICE will then generate the electricity needed to power the electric motor. For shorter trips, these vehicles might use no gasoline.
  • Parallel or blended PHEVs: Both the engine and electric motor are mechanically connected to the wheels, and both propel the vehicle under most driving conditions. Electric-only operation usually occurs only at low speeds.

The main advantage of PHEVs with respect to BEVs is their longer driving range. With respect to conventional vehicles, the advantages are reduced fossil fuel consumption and greenhouse gas emissions; however, the price of a PHEV is higher. Commercial examples of PHEVs are the Chevrolet Volt and the Fisker Karma.

3.2.3.3  Vehicle to Grid

3.2.3.3.1  Utilization of EVs for Grid Support

The first generation of PEVs is expected to be more costly than traditional vehicles, with early estimates placing the premium at $10,000 or more. Because of this expected additional cost, research has been conducted to determine whether PEVs can provide additional services to help offset the expense. Studies have shown that vehicles sit unused, on average, for more than 90% of the day [17]. Using this fact, researchers have studied the ability of PEVs to provide grid support services as a source of revenue for the vehicle owner; if this revenue helped offset the initial cost of the vehicle, it could increase the incentive for consumers to purchase PEVs. The primary means proposed for monetizing the capabilities of PEVs is participation in a deregulated ancillary service market. Studies to date have determined that frequency regulation is the component of the ancillary service market most compatible with plug-in vehicle capabilities and the one that will provide the largest financial incentive to vehicle owners [18-20].

There are two primary types of power interaction possible between the vehicle and the electric grid. Grid-to-vehicle charging (G2V) consists of the electric grid providing energy to the plug-in vehicle through a charge port; G2V is the traditional method for charging the batteries of EVs and PHEVs. A V2G-capable vehicle has the additional ability to provide energy back to the electric grid, giving the grid system operator the potential to call on the vehicle as a distributed energy and power resource.

In order for PEVs to achieve widespread near-term penetration in the ancillary service market, the two primary stakeholders in the plug-in vehicle ancillary service transaction must be satisfied: the grid system operator and the vehicle owner. The grid system operator demands industry-standard availability and reliability for regulation services. The vehicle owner demands a robust return on the investment in the additional hardware required to perform the service.

Since PEVs are not stationary but instead have stochastic driving patterns, these resources possess unique availability and reliability profiles in comparison to conventional technologies providing ancillary services. In addition to this, the power rating of an individual plug-in vehicle is significantly less than the power capacity of conventional generation systems that utilities normally contract for ancillary services. These key aspects of PEVs create unique challenges for their integration and acceptance into conventional power regulation markets to provide ancillary services.

The connection between the grid system operator and the PEV to provide grid support services can be classified as one of two types that have been proposed to date: a direct, deterministic architecture and an aggregative architecture. The direct, deterministic architecture, shown conceptually in Figure 3.13, assumes that there exists a direct line of communication between the grid system operator and the plug-in vehicle so that each vehicle can be treated as a deterministic resource to be commanded by the grid system operator. Under the direct, deterministic architecture, the vehicle is allowed to bid and perform services while it is at the charging station. When the vehicle leaves the charging station, the contracted payment for the previous full hours is made, and the contract is ended. The direct, deterministic architecture is conceptually simple, but it has recognized problems in terms of near-term feasibility and long-term scalability.

Figure 3.13   Example V2G network showing geographically dispersed communications connections under the direct, deterministic architecture. (From Quinn, C. et al., J. Power Sources, 195(5), 1500, 2010.)

First, there exists no near-term information infrastructure to enable the required line of communication. The direct, deterministic architecture cannot use the conventional control signals that are currently used for ancillary service contracting and control because the small, geographically distributed nature of PEVs is incompatible with the existing contracting frameworks. For example, the peak power capabilities of individual vehicles (1.8 kW [11], 17 kW [22]) are below the 1 MW threshold that is required of many ancillary service hourly contracts [23].
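
The scale mismatch is easy to quantify from the per-vehicle powers cited above: clearing a 1 MW hourly contract requires aggregating tens to hundreds of vehicles.

    import math

    # Vehicles needed to reach a 1 MW ancillary service threshold
    threshold_kw = 1000.0
    for per_vehicle_kw in (1.8, 17.0):       # per-vehicle powers cited above
        n = math.ceil(threshold_kw / per_vehicle_kw)
        print(per_vehicle_kw, "kW each ->", n, "vehicles")   # 556 and 59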

In the longer term, the grid system operator might be required to centrally monitor and control all of the PEVs subscribed in the power control region, a potentially overwhelming communications and control task [24]. As millions of vehicles engage and disengage from the grid, the grid system operator would need to constantly update contract status, connection status, available power, vehicle state of charge, and driver requirements to quantify the power it can deterministically command. This information would then need to be fed into the operator's market system to determine contract sizes and clearing prices.

The aggregative architecture is shown conceptually in Figure 3.14. In the aggregative architecture, an intermediary is inserted between the vehicles performing ancillary services and the grid system operator. This aggregator receives ancillary service requests from the grid system operator and issues power commands to contracted vehicles that are both available and willing to perform the required services. Under the aggregative architecture, the aggregator can bid to perform ancillary services at any time, while the individual vehicles can engage and disengage from the aggregator as they arrive at and leave from charging stations. This allows the aggregator to bid into the ancillary service market using existing contract mechanisms and compensate the vehicles under its control for time that they are available to perform ancillary services. As such, this aggregative architecture attempts to address the two primary problems with the direct, deterministic architecture.

Figure 3.14   Example V2G network showing geographically dispersed communications connections under the aggregative architecture. (From Quinn, C. et al., J. Power Sources, 195(5), 1500, 2010.)

First, the larger scale of the aggregated power resources commanded by the aggregator and the improved reliability of aggregated resources connected in parallel allow the grid system operator to treat the aggregator like a conventional ancillary service provider. This lets the aggregator use the same communications infrastructure for contracting and command signals that conventional ancillary service providers use, eliminating the concern of additional communications workload placed on the grid system operator.

In the longer term, the aggregation of PEVs will allow them to be integrated more readily into the existing ancillary service command and contracting framework, since the grid system operator needs to communicate directly only with the aggregators. The communications network between the aggregator and the vehicles is of a more manageable scale than the network required under the direct, deterministic architecture. The aggregative architecture is therefore more extensible, as it allows the number of vehicles under contract to expand by increasing the number of aggregators, the size of aggregators, or both. Since many distribution utilities are installing "advanced metering" systems that allow two-way communication with individual consumers, these utilities could potentially enter the ancillary service market by providing such aggregation services using their metering communications networks.

From the perspective of the grid system operator, the aggregative architecture represents a more feasible and extensible architecture for implementing PEVs as ancillary service providers. For the system operator, the aggregative architecture is an improvement relative to the direct, deterministic architecture because it allows PEVs to make use of the current market-based, command and control architectures for ancillary services. Aggregators can control their reliability and contractible power to meet industry standards by controlling the size of their aggregated plug-in vehicle fleet, thereby providing the grid system operator with a buffer against the stochastic availability of individual vehicles. This allows the aggregator to maintain reliability equivalent to conventional ancillary service providers including conventional power plants. Because the payments from the grid system operator for ancillary services are equal for both architectures, the direct, deterministic architecture offers no apparent advantages from the perspective of the grid system operator.

From the perspective of the vehicle owner, the direct, deterministic architecture is preferred relative to the aggregative architecture. The initial allowable investment for the aggregative architecture is approximately 40% of the initial allowable investment for the direct, deterministic architecture [21]. The substantially higher initial investments allowed by the direct, deterministic architecture suggest that the average vehicle owner will prefer the direct, deterministic architecture.

These divergent preferences of the vehicle owners and the system operator highlight a fundamental problem that must be overcome before PEVs can be successfully introduced into the ancillary service market. The differing requirements of the stakeholders make only the aggregative architecture acceptable to both parties: the direct, deterministic architecture is unacceptably complex, unreliable, and unscalable for utilities and grid system operators, while the aggregative architecture more than halves the revenue that vehicle owners can accrue but still allows a positive revenue stream. Only the aggregative architecture is mutually acceptable to all stakeholders, and it provides the more feasible pathway to near-term utilization of PEVs for ancillary service provision.

3.2.3.3.2  Utilization of EVs for Energy Buffering

There exists a daily load cycle for the U.S. electric grid. In general, the grid is relatively unloaded during the night and reaches peak loading during the afternoon hours in most U.S. climates. Balancing authorities dispatch power plants to match power generation to the time-varying load, and different types of generation resources are dispatched to meet different portions of the load. Nuclear and large thermal plants are typically dedicated to relatively invariant "base-load" power, while dispatchable generation with fast response rates (e.g., CTs), hydropower, and energy storage can be dispatched to meet predicted and actual load fluctuations. By combining generation types, the control authority meets the time-varying load with time-varying power generation while satisfying constraints imposed by environmental requirements, emission caps, transmission limitations, power markets, generator maintenance, unplanned outages, and more.

Even at relatively low market penetrations, plug-in vehicles will represent a large new load on the electric grid, requiring the generation of more electrical energy. In one set of scenarios analyzed by NREL researchers, a 50% plug-in market penetration corresponded to a 4.6% increase in grid load during peak hours of the day [25]. When vehicle charging and discharging can be controlled, other studies have found that as many as 84% of all U.S. cars, trucks, and SUVs (198 million vehicles) could be serviced using the present generation and transmission capacity of the U.S. electrical grid [26]. Controlling the electrical demand of PEVs will determine the infrastructure, environmental, and economic impacts of these vehicles. Smart grid technologies can provide the control, incentives, and information to enable a successful transition to PEVs, but these technologies must reconcile the requirements of the electricity infrastructure with the expectations and economic requirements of the vehicle owner.

The simplest and most effective means of controlling the energy consumption of PEVs is direct utility control of charging times. Under this scenario, the utility would allow consumers to charge only during off-peak hours. By filling the nightly valley in electrical load, PEVs would reduce the hourly variability of the load profile. This has the effect of improving the capacity factor of base-load power plants, reducing total emissions, reducing costs, and eliminating the load growth due to plug-in vehicle market penetration. From a utility perspective, direct control of vehicle charging is ideal. From a consumer perspective, the willingness of vehicle owners to tolerate utility control of charging times depends on the type of plug-in vehicle being considered. For BEVs, the charger is the only source of energy for the vehicle, and being limited to charging during off-peak periods would significantly limit the usability of the vehicle and reduce its consumer acceptability. PHEVs can operate with normal performance but reduced fuel economy when charging is not available; the degree to which consumers would tolerate increased fueling costs due to utility control of charging is under debate.

A more acceptable means of using smart grid technologies to control the energy consumption of plug-in vehicles is to provide incentives for off-peak charging through a time-of-use (TOU) rate, an electricity rate structure in which the cost of electricity varies with time. Smart grid technologies such as advanced metering and consumer information feedback are necessary conditions for implementation of TOU tariffs. TOU rates are generally designed to reflect the fact that electricity is more expensive during the day (when the grid is heavily loaded) and less expensive during the night (when the grid is lightly loaded), so as to incentivize conservation of electricity during the day. Special TOU rate structures have been designed for EV use to encourage EV owners to charge their vehicles at night, thereby conserving electricity during hours of peak demand, and these legacy EV TOU rate structures have also been made available to PHEV owners. In theory, TOU rates should be able to provide an economic incentive for plug-in vehicle owners to charge their vehicles at night. In practice, TOU rates can provide robust economic incentives for EV owners to charge during off-peak periods because electricity is the only fuel cost for EVs. When applied to PHEVs with low all-electric range, however, TOU rates can only partially compensate for the increase in vehicle fuel consumption caused by delaying charging until off-peak periods; for PHEVs with high all-electric range, TOU rates are very effective at incenting off-peak charging. In summary, the goal of controlling the energy consumption of many PEVs cannot be achieved solely by incenting off-peak charging through TOU rates [27].
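
The TOU incentive arithmetic for an EV (where electricity is the only fuel cost) can be sketched as follows; the rates and charge size are illustrative assumptions, not a published tariff. For a low all-electric-range PHEV, the avoided gasoline cost of charging immediately can outweigh this spread, which is the limitation noted above.

    # Cost of one charge under an assumed TOU tariff
    charge_kwh = 10.0                        # energy per charge (assumed)
    onpeak_rate, offpeak_rate = 0.30, 0.08   # $/kWh (assumed TOU spread)
    print("on-peak:  $", onpeak_rate * charge_kwh)    # $3.00
    print("off-peak: $", offpeak_rate * charge_kwh)   # $0.80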

These results do not necessarily suggest that an increase in peak load is inevitable with the introduction of PEVs. Instead of the smart grid being used only to enable consumer controls, punitive pricing structures, and price volatility, it must be used to engage consumers in understanding how they can improve the sustainability and economy of the vehicle/grid system. Consumer education and real-time information exchange between the utility and consumers will be a critical component of controlling the energy consumption rate and timing of plug-in vehicles.

3.2.4  Microgrids

The microgrid has become a much-discussed concept within the smart grid evolution. The microgrid market is in its infancy, and it is difficult to estimate its size and potential with any precision. Pike Research, a smart energy practice specializing in new energy technologies, has attempted to estimate the potential of the microgrid market. It foresees the number of microgrids increasing from about 100 today to 2,000 by 2015, with local generation increasing from 422 MW in 2010 to 3 GW in 2015. The following sections describe what a microgrid may bring to end consumers and utilities, what the drivers and challenges are, and what the requirements may be for microgrid automation.

3.2.4.1  Microgrid Definition

A microgrid is an integrated energy system consisting of interconnected loads and DESs that can operate connected to the grid or in an intentional island mode. The objective is to ensure better energy reliability, security, and efficiency. Solutions of this kind have typically been available for improving energy reliability and efficiency in industrial plants, commercial buildings, and military or university campuses. A new breed of microgrid is now becoming a reality for utilities wishing to integrate local generation or implement grid relief solutions in areas that are poorly served by the transmission grid. Microgrids may be a quick alternative to building or reinforcing transmission lines. The main challenges for utilities are to guarantee grid reliability, stability, and security and to optimize energy efficiency.

Scale and location of the microgrid are important factors. Microgrids should be constructed at the low-voltage (LV) or medium-voltage (MV) level. The key defining characteristics of a microgrid are as follows:

  • Provides sufficient and continuous energy to a significant portion of the internal demand
  • Has its own internal control and optimization strategy
  • Can be islanded and reconnected with minimal service disruption
  • Can be used as a flexible controlled entity to provide services/optimization for the grid or the energy market
  • Applicable to various voltage levels (usually 1–20 kV)
  • Has storage capacity

To avoid confusion with other grid components, a microgrid is defined as NOT

  • One microturbine in a commercial building (this is only DG)
  • A group of individual generation sources that are not coordinated but run optimally for a narrowly defined load
  • A load or group of loads that cannot be easily separated from the grid or controlled (facility/building management)

A microgrid’s capacity to self-manage and define operational strategies concerning islanding mode and self-reorganization makes it the ultimate “smart grid” offering.

3.2.4.2  Microgrid Drivers

Several drivers are pushing toward the strong and rapid deployment of microgrids:

  • Environmental incentives: In most countries, owners and operators of microgrid DG capacity should benefit from governmental incentives designed to promote renewable implementation. This makes microgrids a new and efficient way to develop renewable energies and attain the goals many countries have set in this area (e.g., the European Union 2020 plan).
  • Cost-effective access to electricity: Microgrids could constitute a relatively cheap and efficient step toward rural electrification. For many emerging countries, low rural population density and high electrical infrastructure prices represent too big a hurdle to completely electrify a territory. Microgrids could be a more gradual solution to solve this issue.
  • Reliability: In areas where grid saturation is a problem and causes black- or brown-outs regularly, microgrids could offer a solution to alleviate pressure without heavy investment in large-scale power plants and high-voltage power lines. This, in turn, would give the end consumer a more reliable electrical supply.
  • Security: The islanding capacity is also one way to improve grid resilience in case of unforeseen difficulties, an important factor for several sensitive end consumers such as military bases, hospitals, or server farms.
  • Energy efficiency: DG has the benefit of reducing losses from long-distance electrical transmission. Losses currently vary from 7% (in the most advanced countries) to 25% and more in several emerging countries.
  • Renewable energy implementation: Microgrids could be a strong accelerator of renewable energy implementation. Many countries have set ambitious goals in this matter (the 2020 plan for European countries, for example), and microgrids constitute one way to achieve those goals. More generally, they offer an easy path for integrating renewables and accelerating the development of smart grids.
  • Progress in energy storage technologies: Energy storage is a vital part of the microgrid system. The storage market is expected to grow sevenfold by 2015, by which point it should represent a market of about $2.5 billion [28]. Electric vehicles can also be seen as plug-and-play storage capacity and, for this reason, may have a large impact on microgrid development and research in the coming years.


Figure 3.15   Microgrid benefits. *If owners of the micro-generation capability are neither DSO nor consumer. (EU More Microgrids project highlights, December 2009, www.microgrids.eu)

3.2.4.3  Microgrid Benefits

It is clear that microgrids may bring many benefits, especially when renewable energy is generated and consumed locally. Microgrids should reduce electrical losses, increase grid stability and security, and, as a whole, reduce spending for both consumers and distribution system operators (DSOs). Indeed, microgrids may simultaneously benefit DSOs, end consumers, and microgeneration operators (this last group may of course be the DSO or an association of end consumers). The following sections briefly detail these benefits (Figure 3.15).

3.2.4.3.1  Economic Benefits

  • Load consumer benefits: Microgrid automation systems can encompass relatively complex price-setting mechanisms. It is possible to imagine systems in which dynamic pricing software calculates in real time the cheapest source of energy: main grid electricity or local generation sources (e.g., rooftop PV panels or a wind farm integrated into the microgrid).
  • Microgeneration benefits: Many countries have introduced incentives to accelerate the implementation of renewable energies. Such schemes usually include a subsidized price allowing the owner of a renewable energy generation system (PV, wind, small hydro, biomass) to sell the electricity produced back to the electric company at a higher-than-market price. This can also be considered a microgrid benefit.
  • Network spending reduction: As mentioned earlier, in areas where the existing infrastructure is under high demand or where there is no existing electrical infrastructure (e.g., rural areas in developing countries), microgrid implementation could represent a much cheaper alternative to investment in transmission infrastructure. Network spending is thereby reduced or at the very least postponed.

3.2.4.3.2  Environmental Benefits

Greenhouse gas reduction: Microgrids may rely heavily on local renewable energy sources. Furthermore, DG drastically reduces the electrical losses incurred on HV transmission lines (losses that can be translated into tons of CO2 emission reductions).

3.2.4.3.3  Technical Benefits

  • Peak load shaving: Dynamic pricing coupled with the availability of local generation may be a powerful tool to shave or shift loads. It has been shown that dynamic peak shaving can lessen peak load demand by 10% and general consumption by up to 15%.
  • Reliability enhancement: Thanks to their potentially high-quality local automation capabilities, microgrids should improve general grid stability and electricity reliability.
  • Voltage regulation: The ability to draw energy either from the grid or from local sources can help improve voltage quality, provided that the right automation solutions are in place.
  • Energy loss reduction: As explained earlier, local generation reduces the need to transmit electricity over long distances, thus reducing energy losses.

Obviously, the identification of microgrid benefits is a multiobjective and multiparty coordination task that will depend strongly on business structures and models. However, it seems clear that microgrids have a lot to offer to all players of the electrical grid.

3.2.4.4  Challenges to the Development of Microgrids

A number of factors can slow down the development and deployment of microgrids. These challenges can be divided into two groups: technical and legal.

The first technical challenge concerns balance management between load and generation. To improve grid efficiency, it is important that the DG be scheduled, that demand be controlled, and that clear systems and safeguards be implemented to determine when to consume the local generation and when to store it. All of this also implies that the grid be able to forecast energy production and make "smart" decisions based on those forecasts. The microgrid must also be able to evaluate its reserves very precisely so as to make sound decisions at multiple time horizons: a week, a day, 15 min, a few seconds.
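
The balance-management logic described above can be summarized in a simple priority rule evaluated at each horizon. The following is a minimal sketch under assumed inputs (forecast generation, forecast load, and storage state); a real microgrid controller would add forecast uncertainty, reserve margins, and economic optimization:

    # Minimal sketch of a microgrid balance decision for one interval.
    # The priority rules and all input values are illustrative assumptions.

    def balance_decision(forecast_gen_kw, forecast_load_kw,
                         storage_soc_kwh, storage_cap_kwh):
        """Decide how to use local generation for the next interval."""
        surplus_kw = forecast_gen_kw - forecast_load_kw
        if surplus_kw > 0:
            # Excess local generation: store it if there is room, else export.
            if storage_soc_kwh < storage_cap_kwh:
                return ("charge_storage", surplus_kw)
            return ("export_to_grid", surplus_kw)
        # Deficit: discharge storage first, then import the remainder.
        if storage_soc_kwh > 0:
            return ("discharge_storage", -surplus_kw)
        return ("import_from_grid", -surplus_kw)

    # The same rule can be re-evaluated at each horizon mentioned above.
    for horizon in ("week", "day", "15 min", "seconds"):
        print(horizon, balance_decision(120.0, 90.0, 40.0, 100.0))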

Protection and safety represent other technical issues. Pilot programs are currently designed to observe how a grid reacts when it is switched from a normal mode of operation to an islanded mode (and vice versa). Research has not yet precisely evaluated the consequences of unplanned outages and the reorganization that would be needed within the microgrid. A second point concerns "black start," the process of restoring a power station to operation without relying on external energy sources. The final challenge concerns two-way electricity flows. One idea concerning microgrids is that if multiple microgrids were grouped together, they could feed each other with energy. For this system to function, there must be two-way electricity flow at the transformer. In most current cases, this is not possible due to security rules and equipment capability.

The third technical challenge concerns everything to do with the interconnection between the microgrid and the main grid. There needs to be real-time monitoring of power flow; precise measurement tools to calculate and monitor key electrical characteristics (voltages, flows, angles); Volt VAr analysis; controlled frequency and harmonics in islanded mode; and, of course, reconnection procedures after an islanded period. For the moment, pilot programs have not highlighted any specific challenges concerning these points, but as complexity and size increase with larger real-world microgrids, difficulties may arise, so it is important to keep all these issues in mind.

The final technical challenge concerns the information and communication requirements linked to the electrical aspects of the microgrid. It is important to design a data management system that is capable of handling all the data generated by a complete microgrid while avoiding redundant operations and remaining economical in proportion to the size of the microgrid.

One last idea concerning microgrids is to create a dynamic peer-to-peer market to transfer electricity (and settle the corresponding payments) inside the microgrid between the various generators/consumers of electricity. This last point raises various technical issues, and many more tests have to be conducted before an operational peer-to-peer microgrid is created.

Legal and regulatory challenges are numerous: Who operates the microgrid, and what are the operator's responsibilities? Can microgrid participants sell their electricity to other participants, to other microgrids, to the transmission system operator? These challenges also need to be addressed by pilot projects.

3.2.4.5  Microgrid Pilot Projects

Currently, operational microgrids are all pilot programs, generally small scale and with no proven return on investment. However, these various projects should be seen as the first steps to prove the business model and the technical feasibility on a large scale.

Currently, three areas in the world have started implementing several microgrid programs: Europe, Japan, and North America.

Europe subsidized a major research program called More Microgrids (with a budget of €8.5 million) during the 2006–2010 period. Funded partly by the European Union and partly by the private sector, the project implemented eight microgrids in various locations across Europe, both North and South. Most are small-scale low-voltage pilot projects intended to research technical issues and feasibility, and two of the programs are operated in laboratories. However, one project stands out by its size: Bornholm Island in Denmark. Bornholm is a Danish island with 28,000 inhabitants. Electricity is generated locally from low- and medium-voltage sources (oil, coal, and wind), and the island is also linked to Sweden by an underwater high-voltage cable. After an accident in which this underwater cable was cut, the grid became islanded, becoming in effect a microgrid for several months. This is, up to now, the only "microgrid" of this size and voltage level in the world. The next step in European research is to develop projects of larger scale to identify and solve size-related issues. In 2010, the European Commission launched a call for demonstration programs within Framework Program 7 (Energy 2010, 7.1.1: large-scale demonstration of smart distribution networks with DG and active customer participation). Two ambitious projects called Ecogrid and GRID4EU are starting at the end of 2011. One example of a GRID4EU subproject is NiceGrid: the specification and deployment of a medium-voltage microgrid in a new district of Nice (France) with a strong concentration of PV generation.

In Japan, the main research institute in charge of microgrid research is NEDO (the New Energy and Industrial Technology Development Organization). Japan is the world leader in the sheer number of microgrids, but again, most of them are very small, and they focus mostly on the integration of renewable energies. Private research is led by Mitsubishi Electric Corporation (MELCO), which has already developed several specific microgrid products (inverters and management systems).

North America probably has the most advanced research when it comes to microgrids. Canada, through the CANMET Energy and Technology Research Center, has several pilot microgrid programs, focusing especially on DER integration standards and codes as well as net metering.

In the United States, the Consortium for Electric Reliability Technology Solutions (CERTS) and the Power Systems Engineering Research Center (PSERC) are the two main research institutes. With the stimulus grants given out by the Obama administration, the total budget for smart grid research increased to approximately $8.1 billion in 2009. About 10%–15% ($800 million) should go toward various microgrid research projects. Research in the United States is mostly developed for institutional projects (military or university campuses): out of the 455 MW of current microgrid capacity reported by Pike Research, 320 MW is from campus microgeneration.

The rest of the world is not as advanced in microgrid research. However, other countries in Asia and the Middle East have started investing in this type of technology. Several projects, some of them highly ambitious, such as the Masdar Smart City in Abu Dhabi, have been launched (at least at the specification stage). Microgrids have also attracted a lot of interest in China over the past few years.

As a whole, microgrid programs are at a very early stage, and each project is at most a few MW. Europe and Asia are mostly developing community or industrial/commercial microgrids, while North America is focusing more on institutional and campus projects.

Case Study 1: San Diego Gas and Electric Microgrid Project

The San Diego Gas and Electric microgrid project, with support from the Department of Energy (DOE) and the California Energy Commission (CEC), has begun one of the largest-scale microgrid demonstration projects in the United States. The project, which is being implemented in the desert community of Borrego Springs, will combine distribution-side technologies with consumer-side technologies, improve system reliability, and reduce peak load by more than 15%. These technologies include DER resources, advanced energy storage, residential solar panels, and demand response resources. The project has received about $10 million in federal and state funding, with a projected total cost of about $15 million over 3 years. The project will incorporate an advanced microgrid controller that will integrate and optimize the utility-side DER with consumer-side resources such as demand response. The microgrid controller will also coordinate with other utility systems such as the distribution management system (DMS), outage management system (OMS), and advanced metering infrastructure (AMI). This kind of integrated microgrid controller program could potentially be expanded to the rest of the utility's distribution system to improve DER management across the utility's service area.

3.2.4.6  Types of Microgrid

The most numerous identified types of microgrid are, in this order, institutional microgrids (hospitals, university, or military campuses), followed by commercial/industrial microgrids (factories, server farms, commercial malls, business towers), and finally community grids (multiple houses or apartment buildings, some commercial buildings). The last segment is very small today, but a huge increase is expected once regulatory and business barriers are lifted.

One possible customer segmentation is the following:

Blue ocean: This segment consists of areas not yet connected to the country's main grid, where there is no preexisting infrastructure. One of the main drivers in this case is the possibility of supplying good-quality electricity without investing in transmission lines.

Network relief: Areas where the main grid is saturated, with consequent problems of voltage stability and peak demand, are the key markets for this segment. Microgrids will increase stability and defer expensive investments in large-scale infrastructure.

Energy security: This segment covers all institutions for which it is strategically important to receive stable, good-quality electricity without interruption. Hospitals, military campuses, refineries, and the like can potentially be islanded for long periods of time in case of a main grid outage. Within this category, it is possible to define subcategories depending on criticality: hospitals, for example, typically need a higher level of service than industries (a few seconds of interruption can cost a life).

Energy efficiency: This segment's main motivations are environmental concerns and the profits made by selling renewable energy. In this particular case, a microgrid is only one possible solution, since the islanding characteristic that defines a microgrid is optional here. This segment may include university campuses, office buildings, small communities, etc.

From a technical point of view, different categories of microgrids can also be defined according to voltage level. Depending on the microgrid architecture, the voltage levels involved, and the components included (substations, feeders, etc.), automation processes and telecommunication tools may differ greatly. Figure 3.16 shows four possible categories.


Figure 3.16   Microgrid categories. (US DOE/CEC Microgrids Research Assessment, Navigant Consulting Inc., May 2006.)

3.2.4.7  Building Blocks of a Microgrid

A typical microgrid will contain physical systems, control systems, and interfaces with other systems at the utility.

3.2.4.7.1  Physical Systems

A microgrid is composed largely of off-the-shelf physical components. None of the physical components discussed here are specific to a microgrid application. In fact, most of these same physical systems are used in other ways by the utilities. It is the combination of the physical systems under an advanced control scheme that creates a microgrid application. Nonetheless, it is worth understanding the basic building blocks of the system. These include the following:

  • Sensors: Sensors, and more generally information input, are required to determine whether criteria for islanding or reconnecting have been met. Sensors are the eyes and ears of the microgrid.
  • Switches: Intelligent switches are a vital part of the microgrid because they allow quick reconfiguration of the components in the microgrid. Switches allow the microgrid to electrically disconnect or reconnect with the grid, section-off areas of the circuit, or bring various DER components on or off-line.
  • Power electronics: Power electronics allow for DC-to-AC or AC-to-DC conversions, as well as voltage changes for DER components. This allows DER such as energy storage or microturbine generators to be grid-connected despite nongrid-conforming generating modes.
  • Energy storage: Energy storage helps smooth rapid changes due to external events or characteristics of DER in the microgrid. For example, in the event of a blackout, energy storage can help to support the system for a few minutes while generators start up. In order to perform these functions, the storage must be sufficiently large and able to respond quickly.
  • Generators: Generators can take a variety of forms but are most commonly diesel- or natural gas–based combustion engines. These generators are necessary even when renewable alternatives are available because they provide consistent energy that can be relied on with relatively high certainty.
  • Protection equipment: Protection equipment is always necessary, regardless of whether or not the DER is configured in a microgrid. Nonetheless, there are special precautions that must be taken in a microgrid because of the added level of complexity involved with becoming a separate electrical entity. Specifically, the major issues that must be accounted for are protection when disconnecting and reconnecting with the grid, and ensuring that there is an appropriate level of fault detection and protection in each part of the islanded microgrid system. Control systems for protection equipment will also have to be modified to fit the operating paradigm of the microgrid.
  • Metering: Advanced metering must be in place at the substation and preferably in the residential neighborhood so that conditions and power flow can be monitored in real time.

3.2.4.7.2  Control Systems

Taking the example of microgrid category D (multiple medium-voltage feeder microgrid), three functional levels of automation can be identified:

  • Local controllers are necessary to control individual components of the microgrid: load controllers, energy storage controllers, and microgeneration source controllers. These local controllers respond to orders sent by the microgrid controller and react to real-time conditions (detection of a fault, etc.) in order to guarantee the reliability of the different components. Protection relays and smart switches are also part of this level of automation.
  • Local microgrid controllers provide real-time monitoring and control functions for all the components within their control boundaries. Their main objective is to ensure power reliability and quality. They may also provide some basic local optimization functions based on the economics of local generation and storage.
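
The division of labor between these two levels can be sketched as follows. The class and method names are purely illustrative, not a vendor API: the microgrid controller dispatches setpoints, while local controllers both follow those orders and react autonomously to real-time conditions:

    # Sketch of the two-level automation hierarchy described above.
    # All names are illustrative.

    class LocalController:
        """Controls one component: a load, storage unit, or generation source."""
        def __init__(self, name):
            self.name = name
            self.setpoint_kw = 0.0

        def apply_setpoint(self, kw):
            # Order received from the microgrid controller.
            self.setpoint_kw = kw

        def on_local_fault(self):
            # Local controllers also react to real-time conditions on their own.
            self.setpoint_kw = 0.0
            print(f"{self.name}: fault detected, output curtailed")

    class MicrogridController:
        """Monitors and dispatches all components within its control boundary."""
        def __init__(self, local_controllers):
            self.locals = local_controllers

        def dispatch(self, setpoints_kw):
            for ctrl, kw in zip(self.locals, setpoints_kw):
                ctrl.apply_setpoint(kw)

    mgc = MicrogridController([LocalController("storage"),
                               LocalController("diesel_genset")])
    mgc.dispatch([25.0, 50.0])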

3.2.4.7.3  Interfaces with Other Systems

The microgrid controller relies on other systems to deliver information to it, as well as execute some of its requests. In some cases, the utility can use its systems to deliver commands to the microgrid controller. Though it can operate independently, the microgrid controller is of most use to the grid when it understands the external pressures on the system and reacts accordingly. Interfaces with the DMS/OMS and the AMI system are particularly important.

3.2.4.7.4  Microgrid General Functions

The architecture described earlier is functional, meaning that it may have to be adapted depending on the size of the microgrid and the regulatory constraints. In any case, it is mandatory that all of the following functions be fulfilled somewhere in the microgrid:


Figure 3.17   Possible microgrid automation architecture.

  • Manage exchange with the main grid (disconnection, reconnection, market functions, emergency control if required).
  • Operate reliably for microgrid consumers in islanded mode and in connected mode.
  • Optimize asset utilization (DG, load, transformers) within the microgrid while in islanded mode or in connected mode.

3.2.5  Energy Resources Integration Challenges, Solutions, and Benefits

Large-scale distributed renewable generation is planned to form part of the renewable energy portfolio. The increased use of DERs, including storage and PHEVs, results in bidirectional power flows and protection issues on utility distribution systems that were not designed to accommodate active generation and storage at the distribution level. The technologies and operational concepts needed to properly integrate distributed resources (DRs) into the existing distribution feeders must be addressed with smart grid solutions to avoid negative impacts on system reliability and safety.

The electric distribution system is sensitive to DR based on several factors, the most important being the size, location, and type of the DR, as well as the characteristics of the distribution system itself. Even though DR can be connected anywhere on the system (substation, primary feeder, low-voltage or secondary network, customer premises), its location has the biggest impact on the distribution system: the farther the DR's point of common coupling (PCC) is from the substation, the weaker the system to which it is connected. DR connection affects the distribution power system in several ways, including short-circuit current levels, the location of the DR point of interconnection (POI) (at the substation, on primary lines, on secondary lines), system losses, reactive power flow, lateral fusing, reverse power flow, islanding, and voltage and frequency control.

Every DR system consists of the following major parts:

  • Prime movers—the primary source of power. Several prime movers are available today, such as reciprocating engines, combustion turbines (CTs), microturbines, wind turbines, PV systems, fuel cells, and storage technologies.
  • Power converter—the means by which power is converted from one form to another. Synchronous generators, induction generators, doubly fed asynchronous generators, inverters, and static power converters are examples of power converters.
  • Transformer, switches, relays, and communications devices—these devices enable the protection of the DR from the distribution system and vice versa.

There are several types of DR interconnection systems, which can be divided into the following groups:

  • Inverter-based systems—these systems are used in battery, fuel cell, PV, microturbine, and wind turbine applications. Some systems, such as batteries, fuel cells, and PV, generate DC power, and the inverter is a bidirectional DC/AC converter. Microturbines generate high-frequency AC power that is converted to DC and then back to AC at 60 Hz.
  • Systems that run in parallel with the distribution system, whose interconnection requires synchronization with the common bus. These systems are used in load peak shaving, emergency power supply, prime power, and cogeneration.

3.2.5.1  Integration Standards

Most small and large-scale distributed renewable generation resources are currently governed by the IEEE 1547 [28] set of standards, which includes references to UL 1741 for interconnection on low-voltage networks. These standards were developed toward the end of the 1990s, when DG, especially distributed PV and wind generation, was at very low penetration levels. IEEE 1547 describes the interconnection issues of DG resources in terms of voltage limits, anti-islanding, power factor, and reactive power production, mainly from a safety and utility operability point of view.

There are, however, concerns about some of the practical impacts of the IEEE 1547 standard on distribution feeder design, operation, and safety. These include reactive power injection, voltage regulation, voltage ride-through, and power quality when high levels of inverters penetrate the distribution network without any coordinated control. Several IEEE working groups are currently developing application notes and setting the requirements for a future update to IEEE 1547.

Larger wind generation facilities above 10 MW are now required to have low-voltage ride-through (LVRT) capability to increase system reliability. New generation interconnection requirements have been adopted by FERC as part of FERC Order 661, docket RM05-4-0000 NOPR, mainly for large wind and solar power facilities larger than 20 MW. These provisions have been updated and adopted as Appendix G to the LGIA [30]. FERC now also requires renewable energy plants to provide sufficient dynamic voltage support and reactive power if the utility's system impact study shows that this is needed to maintain system reliability. This implies that wind generators should have dynamic reactive capability over the entire power factor range and that dynamic reactive power capability may be required in any instance.

There is also an industry-wide initiative under way in the Smart Grid Interoperability Panel [31]. This initiative is coordinated by NIST and EPRI. The main purpose of this panel is to develop interconnection and communications requirements for DRs, including PV, energy storage, and demand response.

Some of the standard communication and protocol profiles for PV generation and storage systems use DNP3 and IEC 61850. The purpose of defining a standard communication profile is to make it easier to interconnect DRs with increased security levels.

3.2.5.2  Renewable Generation Integration Impacts

Studies and actual operating experience indicate that it is easier to integrate PV solar and wind energy into a power system where other generators are available to provide balancing power, regulation, and precise load-following capabilities. The greater the number of intermittent renewable generators operating in a given area, the less variable their aggregate production.
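
A back-of-the-envelope calculation illustrates this smoothing effect. If N equally sized plants have uncorrelated output fluctuations, the relative variability of their aggregate output falls as 1/sqrt(N); the 20% single-plant figure below is an assumption for illustration:

    # Geographic smoothing of intermittent generation, assuming N equally
    # sized plants with uncorrelated output fluctuations.
    import math

    plant_std_pct = 20.0  # assumed std. dev. of one plant's output, % of rating

    for n in (1, 4, 16, 64):
        aggregate_std_pct = plant_std_pct / math.sqrt(n)
        print(f"{n:3d} plants: aggregate variability ~ {aggregate_std_pct:.1f}%")

In practice, nearby plants see correlated weather, so the improvement is smaller than this idealized bound suggests.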

A summary of the main renewable generation integration issues is provided first, followed by a more in-depth description.

Typical T&D system–related problems include [32] the following:

  • PV and wind capacity factors in the range of 15%–30%.
  • No dispatch capability for PV solar and wind farms without storage.
  • Ultrafast ramping requirements (400–1000 MW/min).
  • Most existing PV inverters do not provide reactive power and voltage support capability.
  • Existing PV inverters do not have LVRT capability.
  • Most PV plants are noncompliant with the FERC large generator interconnection procedure (LGIP).
  • IEEE 1547 provides contradictory guidance on LVRT and nonislanding requirements.
  • Reactive power management and coordination within feeders are not designed with high PV and wind production in mind.
  • Power quality, especially voltage fluctuations, temporary overvoltage (TOV), flicker, and harmonics, may fall outside IEEE 519 and other standards' limits.
  • Lack of coordinated control of existing reactive power support.

High penetration of intermittent resources (greater than 20% of generation meeting load) affects the network in the following ways:

  • Power flow and reactive power: Interconnected transmission and distribution lines must not be overloaded. Reactive power should be generated throughout the network, not only at the interconnection point, and should be compensated locally along the feeders. Due to PV and wind power variations and required ramp rates larger than 1 MW/s, fast-acting reactive power sources should be employed throughout the feeders and the network.
  • Short circuit: The impact of additional generation sources on the short-circuit current ratings of existing electrical equipment on the network should be determined. PV inverters normally do not contribute short-circuit duty to the feeder networks.
  • Transient stability: The dynamic behavior of the system during contingencies, sudden load changes, and cloud-induced disturbances can affect stability and power quality. Voltage and angular stability during these system disturbances and production variations are very important. In most cases, fast-acting reactive-power compensation equipment, including SVCs and distributed STATCOMs, is required to improve the transient stability and power quality of the network. PV array clouding in larger PV plants may require energy storage facilities to smooth the PV plant output.
  • Electromagnetic transients: Studies of these fast operational switching transients require a detailed representation of the connected equipment, capacitor banks, their controls and protections, the converters, and DC links. Due to PV power fluctuations, this network equipment may switch much more often than originally intended.
  • Protection and islanding: Investigate how unintentional islanding and reverse power flow may affect existing protection schemes, philosophies, and settings. Large levels of PV production will reverse power flows during certain times of the day, and protection circuits need to be able to protect the distribution feeders under these conditions. Problems have been reported with PV inverter nonislanding circuitry in regions with high PV power production.
  • Power leveling and energy balancing: Due to the fluctuating and uncontrollable nature of wind power, as well as the lack of correlation between PV generation and load, PV power generation has to be balanced with other very fast controllable generation sources. These include gas, hydro, or renewable power-generating sources, as well as fast-acting energy storage, to smooth out fluctuating power from wind generators and increase the overall reliability and efficiency of the system. The costs associated with capital, operations, maintenance, and generator stop-start cycles have to be taken into account.
  • Power quality: Fluctuations in PV and wind power production and the strength of the T&D network at interconnection points have direct consequences for power quality. Large power fluctuations may cause voltage variations outside the regulation limits, as well as violations of flicker and other power quality standards.
  • Other DER facilities: Several other DER technologies, including PEVs, CHP generation, and distributed energy storage, are currently being integrated on distribution feeders as part of smart grid initiatives. Coordination of these DER devices is crucial to determine their combined impacts on the distribution feeders and networks.

3.2.5.2.1  Intermittency

In most urban regions, PV flat-plate collectors are predominantly used for solar generation and can experience power production fluctuations up to and including a sudden (seconds timescale) loss of complete power output. PV generation penetration within residential and commercial feeders approaches 4–8 MW per feeder. With partial PV array clouding, large power fluctuations result at the output of the PV solar farm, with large power quality impacts on distribution networks [32].

During cloudy and foggy days, large power fluctuations are measured on feeders with high penetration levels and can produce problems with voltage quality, protection, uncoordinated reactive power demand, and power balancing. Cloud cover and morning fog require fast ramping and fast power balancing. Furthermore, several other solar production facilities are often planned in close proximity on the same electrical distribution feeder, which can result in high levels of voltage fluctuation and even flicker on the feeder. Reactive power and voltage profile management on these feeders are common problems in areas with high penetration levels.

Feeder automation and smart grid communications are therefore crucial to solve these intermittency problems.

3.2.5.2.2  Short-Circuit Levels

Short-circuit current levels vary greatly with the impedance of the feeder and the length of the conductor. The addition of DR affects short-circuit current values, thus inadvertently affecting relay settings. One measure of interest is the ratio of the rated output current of the DR to the available current at the POI. For DRs at feeder primary voltage levels, if this ratio is ≥1%, the DR will have a noticeable impact on voltage regulation, power quality, and voltage flicker. If the DR is at secondary or low-voltage levels, a ratio of <1% can still have a major impact on the secondary voltage.

3.2.5.2.3  DR with POI at Substation

The substation represents the strongest point of the distribution system. Voltage levels can range anywhere from 12 to 34.5 kV, and transformers typically have capacities of 12–20 MVA. Placing DR in the substation represents less of a challenge for the distribution system, since the DR simply acts as another power source. The only additional requirement is the modification of protection and control schemes to account for the addition of the DR. However, if the capacity of the DR is 15%–20% of the substation load, additional issues arise, such as voltage regulation, equipment ratings, fault levels, and protective relaying. If the capacity of the DR is close to the substation load, issues will arise with voltage regulation on the load tap changer (LTC). The current transformer at the LTC sees only the net current supplied through the substation, not the portion of the load supplied by the DR. Since this net current is low, the LTC interprets the total load as light and will not boost the voltage appropriately, causing low voltage at the end of the line. If the capacity of the DR is larger than the substation load, it will export power into the transmission system, creating additional protection and control issues.

3.2.5.2.4  DR with POI at Primary and Secondary Lines

The distribution system has higher impedance on primary feeder lines, so DR placed anywhere on these lines will have more influence on the system than a comparable DR placed in the substation. To begin with, most distribution systems were designed for one-way power flow: from the transformer to the end customer. DR placed on the feeder can cause reverse power flows and requires additional protection and/or control equipment. Generally, the security and safety of all protective devices may be compromised if the DR causes fault levels to change by more than 5%. The main factor determining the effect of DR on the system is the strength of the distribution system: the shorter the distance to another strong power source, the stronger the system.

There are a couple of ratios, called stiffness ratios or stiffness factors, that predict the impact of DR on the system:

Primary stiffness ratio = (available distribution system fault current at POI) / (DR steady-state full-load output current)

Secondary stiffness ratio = (fault current of power system) / (fault current of DR)

As a general rule, stiffness ratios of more than 100 are less likely to create voltage problems. One additional stiffness ratio is defined in the IEEE P1547-D8 standard as

Stiffness ratio = (system fault current including DR) / (DR fault current)

This stiffness ratio is used when trying to evaluate the impact of DR on system fault levels.
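
These screening calculations are simple enough to express directly. The following sketch implements the three ratios defined above and applies the rule of thumb from the text (ratios above roughly 100 are less likely to create voltage problems); the example currents are assumed values:

    # Stiffness-ratio screening per the definitions above.

    def primary_stiffness_ratio(fault_current_at_poi_a, dr_full_load_current_a):
        return fault_current_at_poi_a / dr_full_load_current_a

    def secondary_stiffness_ratio(system_fault_current_a, dr_fault_current_a):
        return system_fault_current_a / dr_fault_current_a

    def p1547_stiffness_ratio(system_fault_current_incl_dr_a, dr_fault_current_a):
        return system_fault_current_incl_dr_a / dr_fault_current_a

    # Example: 8 kA available at the POI, 60 A DR full-load output current.
    ratio = primary_stiffness_ratio(8000.0, 60.0)
    print(f"Primary stiffness ratio: {ratio:.0f}",
          "(voltage problems unlikely)" if ratio > 100 else "(needs further study)")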

3.2.5.2.5  System Losses and Reactive Power Flow

Reducing system losses represents one of the main challenges for a power utility's T&D system today. Utilizing DRs reduces system losses if the DR is properly sized and placed. To obtain the maximum loss reduction in a radial distribution circuit with a single DR, the DR has to be placed at a position where its output current is equal to half of the load demand. The reason is that the distance power has to travel from sources to loads is minimized, which in turn minimizes losses. However, if the DR is too large, it can actually cause losses to increase.
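
The loss mechanism can be illustrated with a simple I^2R sum over the segments of a radial feeder. The feeder data below (10 equal, evenly spaced loads and one DR injection) are assumed for illustration; the sketch shows losses falling with a reasonably sized, well-placed DR and rising again when the DR is oversized:

    # I^2*R losses on a radial feeder with evenly spaced, equal loads and
    # one DR injection. All segment and load values are assumed.

    def feeder_losses_kw(load_a_per_node, n_nodes, r_ohm_per_segment,
                         dr_node=None, dr_a=0.0):
        """Sum I^2*R losses over each segment of a radial feeder."""
        losses_w = 0.0
        for seg in range(1, n_nodes + 1):
            # Segment 'seg' carries the current for all nodes at or beyond it.
            current = load_a_per_node * (n_nodes - seg + 1)
            if dr_node is not None and seg <= dr_node:
                current -= dr_a  # DR offsets current upstream of its location
            losses_w += current ** 2 * r_ohm_per_segment
        return losses_w / 1000.0

    base = feeder_losses_kw(50.0, 10, 0.05)
    sized = feeder_losses_kw(50.0, 10, 0.05, dr_node=6, dr_a=250.0)
    oversized = feeder_losses_kw(50.0, 10, 0.05, dr_node=6, dr_a=1000.0)
    print(f"No DR: {base:.1f} kW, well-sized DR: {sized:.1f} kW, "
          f"oversized DR: {oversized:.1f} kW")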

3.2.5.2.6  Equipment Loading, Maintenance, and Life Cycle

In the same way that low to moderate penetration levels of DG (either conventional or intermittent) reduce equipment loading, moderate to high penetration levels, or a condition that leads to reverse power flow, may increase equipment loading to a point where it becomes a concern from an equipment rating perspective and leads to equipment overload. Similarly, the interaction between intermittent DG (PV and wind) and voltage control and regulation equipment such as load tap changers (LTCs), line voltage regulators, and voltage-controlled capacitor banks may lead to frequent operation of this equipment (frequent tap changes and status changes). This, in turn, increases maintenance requirements and, ultimately, if not properly addressed, may impact equipment life cycle. The smart grid plays a key role in this regard, since the ability to continuously monitor equipment, and the additional controllability that can be achieved, for instance, via phasor measurement units (PMUs) and DS, allow the system operator to avoid these types of conditions. Furthermore, DS and dynamic Volt Var Control and compensation using smart technologies, such as inverters and flexible AC distribution systems (FACDS), allow mitigation of the impacts of intermittent DG and of the significant impacts on additional voltage control and regulation equipment.

3.2.5.2.7  Impacts on Protection Systems

A key impact of the integration of DR in power distribution grids is the impact on protection systems, which have traditionally been designed for grids operated in a radial fashion. The integration of DR may lead to reverse power flows through feeder sections and substations, a situation for which the distribution grid, in general, has not been designed, built, or prepared.

It has been a long-standing practice of utilities to protect laterals with fuses. Utilities generally use two philosophies for protection coordination, fuse clearing and fuse saving, and in some cases a combination of both: fuse clearing where fault currents are high and fuse saving where fault currents are moderate to low. In the case of fuse saving, the upstream relays trip before the fuse blows, and once the fault is cleared, the breaker recloses. This action has to be fast because the breaker has to trip before the fuse starts to melt and is damaged. Depending on the severity of the fault, these schemes sometimes cannot operate correctly. DR makes fuse-saving schemes even more complex because of the increased fault currents. In particular, DR increases the fault current level through the fuse, but not necessarily through the breaker. Furthermore, the addition of DR causes issues with fuse-to-fuse coordination. Choosing correct fuse sizes, relay settings, and DR tripping settings can alleviate this problem.
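
The coordination problem can be sketched numerically. The curves below are illustrative stand-ins (an IEEE moderately inverse relay characteristic and an I^2t minimum-melt approximation for the fuse), not actual device data; the point is that DR raises the fault current through the fuse without raising it through the breaker:

    # Fuse-saving coordination check with illustrative device curves.

    def relay_trip_time_s(fault_a, pickup_a=400.0, tds=0.5):
        """IEEE moderately inverse relay curve (illustrative settings)."""
        m = fault_a / pickup_a
        return tds * (0.0515 / (m ** 0.02 - 1) + 0.114)

    def fuse_min_melt_time_s(fault_a, k=5.0e6):
        """Illustrative I^2*t minimum-melt approximation for a lateral fuse."""
        return k / fault_a ** 2

    def fuse_saving_ok(fault_through_breaker_a, fault_through_fuse_a):
        """Fuse saving works only if the breaker trips before the fuse melts."""
        return (relay_trip_time_s(fault_through_breaker_a)
                < fuse_min_melt_time_s(fault_through_fuse_a))

    # With DR downstream of the breaker, the fuse sees the DR's extra fault
    # contribution while the breaker current barely changes.
    print("Without DR:", fuse_saving_ok(2000.0, 2000.0))   # True: coordinated
    print("With DR:   ", fuse_saving_ok(2000.0, 3200.0))   # False: fuse melts first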

Additional impacts on protection systems are modification of the “reach” of protective devices such as circuit reclosers and relays due to the feeder load offset effect of DR, particularly for the case of large DG, and potential overvoltage issues during unintentional islanding conditions, which are a function of the DR’s interconnection transformer configuration. This situation can be particularly severe when the configuration of the medium-voltage side of the interconnection transformer is delta.

A critical component of protective devices on distribution networks is overcurrent relays. These relays have instantaneous and time-delayed settings, which cause the distribution breakers to trip if fault current levels have been exceeded. In addition, on 34.5 kV long distribution lines, sometimes distance relays that are overcurrent relay supervised are used because it might be hard to distinguish between the high load currents and low fault currents. The commonality between all these relays is that they are designed and built for one-way flow. However, reverse power flow can cause protection devices to misoperate.

Smart grid technologies can play an important role in mitigating these impacts, for instance, through adaptive protection systems, which allow the settings of protective devices to adapt to varying system conditions, whether feeder loading and configuration or DG output. Most important is to recognize the need for the distribution protection system to evolve; this is expected to become more pressing as the penetration of DG, PEVs, and other smart grid technologies increases. As the complexity of operating the smart distribution system increases, the need to replace conventional protective devices, specifically fuses, will grow as well. From a protection system standpoint, it is likely that the distribution grid of the future will resemble modern transmission systems.

3.2.5.2.8  Intentional and Unintentional Islanding

Islanding happens when part of the utility system has been isolated by the operation of one or more protective devices, and DR installed in that isolated part of the system continues to supply power to the customers in that area. This is a very dangerous situation for several reasons:

  • The DR might not be able to maintain proper system parameters such as voltage and frequency and can damage customer equipment.
  • The islanded area might be out of phase, so the utility system might not be able to reconnect it.
  • Safety issues arise for utility workers working on downed lines that are back-fed from the DR.
  • Improper grounding can lead to high voltages during islanding.

DRs that can self-excite are capable of islanding, while non-self-exciting DRs can island only if certain conditions have been met.

Two techniques are commonly used to prevent islanding: frequency protection and voltage protection. During normal operation, frequency and voltage fluctuate within certain ranges. For frequency, the settings are anywhere from 0.5 to 1.0 Hz from the nominal frequency of 60 Hz. Allowed voltage variations are 120 ± 6 V. Thus, frequency, undervoltage, and overvoltage protection can prevent islanding. Relays trip very quickly because DR units can rarely match the power demand in the islanded area, which results in changes in frequency and voltage that the relays detect.
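
A minimal sketch of this relay logic, using the example settings quoted above (60 ± 0.5 Hz and 120 ± 6 V; actual settings vary by installation):

    # Frequency/voltage islanding detection with the example settings above.

    FREQ_NOMINAL_HZ = 60.0
    FREQ_BAND_HZ = 0.5      # the text quotes settings of 0.5 to 1.0 Hz
    VOLT_NOMINAL_V = 120.0
    VOLT_BAND_V = 6.0

    def should_trip_dr(freq_hz, volts):
        """Trip the DR if frequency or voltage leaves its allowed band."""
        freq_out = abs(freq_hz - FREQ_NOMINAL_HZ) > FREQ_BAND_HZ
        volt_out = abs(volts - VOLT_NOMINAL_V) > VOLT_BAND_V
        return freq_out or volt_out

    # After islanding, a DR that cannot match local demand drags frequency
    # and voltage out of band, so the relay trips quickly.
    print(should_trip_dr(60.02, 121.0))  # normal operation -> False
    print(should_trip_dr(58.90, 112.0))  # islanded, underpowered DR -> True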

An additional issue is reconnection after a fault. When a fault occurs on a feeder that has DR, breakers trip and, depending on the reclosing sequence, can reclose up to three times before locking out. The reclosing sequence normally has three reclosing shots, one of which is instantaneous (of course, there is a delay of several cycles, which is the time the breaker's mechanism takes to reclose). IEEE Std. 1547 recommends that DR trip before any breaker reclosing occurs. After the DR trips off-line, for safety reasons, it is not advisable to have control logic programmed such that the DR reconnects to the system immediately after the normal power supply has been reestablished. Only after voltage and frequency have been restored to their normal limits should the DR be allowed to reconnect.

There are, however, some situations in which the load on the island is balanced with the DR. In that case, several techniques such as voltage shift and frequency shift are used to detect islanding. This protection should operate within a few seconds after islanding has occurred.

3.2.5.2.9  Voltage Regulation and Control

Currently, electric distribution systems are designed to handle one-way power flow—from the substation downstream to the customers. In such a system, voltages are highest at the substation and lowest at the end of the line. However, this assumes that there are no distributed energy sources on the distribution line. Depending on the size of the DRs online and their placement on the feeder, it is possible for the voltage at the end of the line to be higher than the voltage at the substation.

Voltage regulation in distribution power networks is specified in the ANSI C84.1 standard. In essence, a nominal voltage of 120 V is desired, with expected deviations of ±5%, or equivalently 114–126 V. The standard describes the process and equipment needed to keep the voltage within these limits. According to IEEE Std. 1547, "The DR shall not actively regulate voltage at PCC. The DR shall not cause the Area EPS service voltage at other local EPSs to go outside the requirements of ANSI C84.1 standards."

Distributed energy sources significantly complicate voltage regulation and relay protection. Voltage on distribution networks is controlled by voltage regulators and capacitor banks. Voltage drop depends on the wire size, type of conductor, length of the feeder, loads on the feeder, and power factor. DR can affect the voltage on a distribution feeder in a couple of ways, as the sketch following this list illustrates:

  • If the DR injects real power into the system, it reduces the amount of current needed from the substation to serve the loads, automatically reducing the voltage drop.
  • If the DR supplies or absorbs reactive power, it affects the voltage drop along the whole feeder. If the DR supplies reactive power, the voltage drop is reduced; if the DR absorbs reactive power, the voltage drop increases.
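
Both effects follow from the approximate feeder drop formula V ≈ I(R cos φ + X sin φ). The sketch below uses assumed feeder impedances and load current for a medium-voltage feeder; only the real-power offset of the first bullet is shown:

    # Approximate feeder voltage drop, V ~ I * (R*cos(phi) + X*sin(phi)).
    # Feeder impedance and load values are assumed for illustration.
    import math

    R_OHM, X_OHM = 0.5, 1.0   # assumed total feeder resistance and reactance

    def voltage_drop_v(load_a, power_factor):
        phi = math.acos(power_factor)
        return load_a * (R_OHM * power_factor + X_OHM * math.sin(phi))

    base = voltage_drop_v(200.0, 0.9)
    # DR real-power output offsets part of the current drawn from the
    # substation, reducing the drop along the feeder.
    with_dr = voltage_drop_v(200.0 - 80.0, 0.9)
    print(f"Drop without DR: {base:.1f} V, with 80 A of DR output: {with_dr:.1f} V")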

There are several operating problems associated with the inclusion of DRs in distribution networks:

  • Low voltage—many feeders utilize voltage regulators (VRs) with line drop compensation (LDC). If a DR is placed downstream from the VR and supplies a significant part of the downstream load, the VR will lower its settings, causing the voltage to drop. If the DR does not inject sufficient reactive power into the EPS, the low-voltage condition will persist.
  • High voltage—if the distribution system is near the upper limit of 126 V, real and reactive power injected by the DR can push the voltage over the limit. At the same time, the voltage may remain below the DR's own overvoltage trip setting, so the DR will stay online. Solving this problem requires either the DR to reduce its output or a voltage increase large enough to trip the DR.
  • Voltage unbalance—most small-scale DR devices are single phase. Injecting power therefore affects only one phase, changing the voltage difference between phases and creating high unbalance. This unbalance can exist even if the voltages are within the ANSI C84.1 range. To alleviate this problem, it can be advantageous to connect the DR to the most heavily loaded phase and transfer single-phase load from that phase to the other two.
  • Excessive operations—DR output (e.g., wind or solar) can be very unpredictable and intermittent. The output of the DR can change rapidly, and this can cause voltage regulating devices to operate excessively. Most of these devices have a daily maximum number of operations. The solution here is to change the time-delay settings on voltage regulating devices to provide better coordination with the DR.
  • Improper regulation during reverse power flow conditions—if a feeder has a VR with LDC and a DR located downstream of the VR, the DR can be large enough not only to supply all of the load in the area but also to supply the load upstream of it. The VR then detects reverse power flow and assumes that the stronger source is now downstream (which is not the case). The VR will try to raise its tap to the limit in one direction or the other, and each tap change produces a voltage change opposite to what the control algorithm expects. To solve this problem, all VRs with LDC have to operate in a reactive bidirectional mode when DRs cause reverse power flow.
  • Improper regulation during alternate feed configurations—this problem arises when a feeder picks up a portion of another feeder's load through a tie switch. Having DRs on the system complicates things in such a way that any of the five problems described earlier is possible.

3.2.5.2.10  Frequency Control

Small-scale DR by itself cannot exert frequency control, which is generally reserved for large synchronous generators. This is not the case for large-scale (MW-size) DR, which, depending on its size and the regulatory framework, may be allowed to provide ancillary services; this is the case, for instance, for large-scale DS and DG. Potentially, the wide-area controllability that can be achieved via smart grid technologies can enable the "virtual power plant" concept, which consists of the aggregated and coordinated dispatch and operation of a large number of DRs (small-scale, medium-scale, or utility-scale) and may allow this type of ancillary service to be provided. Similarly, the implementation of the microgrid concept requires the availability of DR with frequency control capability; this can be accomplished by means of DG, the combination of intermittent DG and DS, or DS alone.

3.2.5.2.11  Dispatchability and Control

IEEE Std. 1547 states that each DR unit of 250 kVA or more, or DR aggregate of 250 kVA or more at a single PCC, shall have provisions for monitoring its connection status, real and reactive power output, and voltage at the point of DR connection. The monitoring, information exchange, and control of DR systems should support interoperability between DR devices and the area EPS. The use of standard commands, protocols, and data definitions enables this interoperability and, in addition, reduces the costs of data translators, manual configuration, and special devices. DR can be dispatched as a unit for energy export as needed, according to a schedule or during peak periods; shut down for maintenance; or used for ancillary services such as load regulation, energy loss compensation, spinning reserve, voltage regulation, and reactive power supply.
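
The monitored quantities listed in the standard map naturally onto a simple data record. The sketch below is one possible representation; the field names are illustrative, since IEEE Std. 1547 specifies the quantities to monitor, not a data format:

    # One possible record for the DR monitoring data required by IEEE Std. 1547
    # for units or aggregates of 250 kVA or more. Field names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class DRMonitoringRecord:
        unit_id: str
        connected: bool             # connection status
        real_power_kw: float        # real power output
        reactive_power_kvar: float  # reactive power output
        voltage_v: float            # voltage at the point of DR connection

    record = DRMonitoringRecord("pv_plant_07", True, 310.0, 45.0, 12470.0)
    print(record)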

3.2.5.2.12  Power Quality

Important potential impacts of DR integration are voltage rise, voltage fluctuation, flicker, voltage unbalance, voltage sags and swells, and increased total harmonic distortion (THD). All of these impacts may affect the overall power quality of the distribution grid. Voltage rise and voltage fluctuation are a natural consequence of interconnecting DR on the power distribution grid; as previously discussed, their magnitude is a function of the grid's stiffness factor and the DR output. Furthermore, extreme intermittency due to cloud cover may lead to rapid voltage fluctuations; this has motivated some utilities to require an evaluation of potential flicker impacts as a prerequisite for authorizing DR interconnection. Voltage unbalance can be accentuated by large penetration levels of single-phase DR, particularly if different technologies and capacities are used and they are connected to different phases of the power distribution grid in an uncoordinated way. Voltage sags and swells can be the consequence of fault current contributions and of the sudden connection and disconnection of utility-scale DG. Increased harmonic distortion may be caused by a large proliferation of electronically coupled DG; here, it is worth noting that even though individual inverters may comply with standard requirements on harmonic injection, it is the interaction and cumulative effect of the harmonics produced by a large number of inverters that could have a negative effect on feeder THD levels. As previously indicated, smart grid technologies and intelligent control of DR inverters can help alleviate issues related to voltage rise, voltage fluctuation, and intermittency. Other issues, such as voltage sags and swells due to larger fault currents, may be mitigated using, for instance, superconducting fault current limiters. Finally, issues related to increased voltage unbalance and THD should be addressed in the planning stage of the smart grid, where maximum penetration levels and the location of DR must be carefully evaluated. Another potential and more complex solution is the coordinated dispatch of these technologies via the virtual power plant concept.

3.2.5.3  Electric Vehicle Impacts on Electric Grid Systems

PEVs are seen as having the potential to improve multiple facets of the transportation sector. However, for PEVs to have a significant positive impact on the transportation sector, a substantial fraction of the existing vehicle fleet must be converted to PEVs. Any significant conversion of this type will impose a large demand on the electric sector if not properly administered. Therefore, to realize transportation improvements on a grand scale without creating concurrent electrical problems, changes in the electric and transportation sectors must be collaborative and occur concurrently.

The charging of PEVs is the most important interaction between electrified transportation and the electric grid, and it is the area in which smart grid technologies can provide tools to assimilate the two sectors. Plug-in vehicle charging falls into two main categories: constrained ("smart") charging and unconstrained charging. Unconstrained charging is the simplest form of plug-in vehicle charging and allows the vehicle owner to plug in at any time of the day without any limitations [33]. Constrained charging is any charging strategy in which the electricity provider and vehicle cooperatively implement charging strategies that limit plug-in vehicle charging loads so as to maximize the economic efficiency of vehicle charging. The first generation of PEVs will likely charge without input or restriction from the utility. Due to the initially low volume of vehicles, this will likely have a low impact on the electric grid [34,35]. However, most research to date has shown that as PEVs penetrate the market, unconstrained charging will need to be replaced with some level of constrained or "smart" charging to reduce the possibility of exacerbating peak electric demands [33,36]. Studies have shown that "smart" charging can potentially permit replacement of at least 50% of the traditional vehicle fleet with PEVs without the need to increase generation or transmission capacity. Larger penetrations also present opportunities for the electric sector to regulate the system more effectively, resulting in more uniform daily load profiles, better capital utilization, and reduced operational costs [33,36].

The electric utility sector has also expressed concern regarding expected increased loads on residential transformers and other electric grid components. Studies have shown that the acceptance of HEVs such as the Toyota Prius has typically occurred nonuniformly throughout geographic areas, with high concentrations in certain areas and little-to-no adoption in others. The adoption of PEVs is expected to follow a similar pattern [37].

Increased loading on residential transformers poses a problem for the electricity provider, as most residential transformers are already approaching their recommended use capacities. In addition, although "smart" charging of PEVs will help the electric sector reduce peak demands, "smart" charging may force transformers—especially residential transformers—to be fully utilized for the majority of the day. Increased use will reduce the amount of equipment rest and cooling time, which could shorten the operational life of transformers and other electric grid equipment [38]. These studies agree, however, that these pressures will not reduce the reliability or functionality of distribution systems; they will merely require changes in distribution system maintenance schedules.

The most prevalent strategies currently being pursued to implement smart charging are as follows:

  • Financial (TOU pricing, critical peak pricing, real-time pricing)—Charging different rates at different times of day to incentivize users to change their behavior
  • Direct (delayed charging, demand response)—Curtailment of charging activities, enabled by smart charging chips or charger-side intelligence in a demand-response type program
  • Information based (home area network, smart meters and displays)—Giving users information and signals to help them make informed decisions about the cost and impact of charging on the grid [33,35,39,40]

Due to the variation in the energy sources used throughout the electric sector, some charging strategies may prove more advantageous and effective than others. All of the "smart" charging strategies require some level of communication between the PEV, the vehicle owner, and the electricity provider or grid system operator. For direct and financial smart-charging strategies, the plug-in vehicle or owner must be able to receive and process pricing and/or power control signals sent by the electricity provider [36]. More advanced charging strategies, especially market-oriented or two-way power flow strategies, require reliable, two-way communication between the plug-in vehicle and the electricity provider or the grid system operator [20,36]. Two-way communication is required because the electricity provider or grid system operator needs to know the state of charge (SOC) of all connected PEVs in order to forecast the charging load for the valley-filling algorithm and the availability of PEVs to provide V2G frequency control. Research has shown that the communication task can be achieved by integrating broadband over power line, HomePlug, ZigBee, or cellular communications technologies into a stationary charger or into the PHEV's power electronics [40].

Regardless of the type of smart-charging strategy utilized, the required charging infrastructure and strategies will impose constraints on the electric grid. The largest impact smart charging will have on the electric grid is associated with the communications required between PEVs and owners and the electricity provider or grid system operator. The simplest method (in terms of communication) for the electric sector to control charging behavior is to implement TOU rates. TOU rates can be relayed to PEV owners through rate plans that change only based on time of day and year and require the installation of an electric meter capable of metering energy transfer in real time for billing purposes. However, it is yet to be determined whether TOU rates are strong enough motivators to affect the charging habits of the majority of plug-in vehicle owners. The next level of complexity available to the electric sector is the use of real-time data communication. Control could be based upon one-way communication: for example, vehicles could charge only when real-time rates drop below a set threshold, as sketched below. Several proposed control strategies (e.g., V2G) would require two-way communication. However, for a large number of PEVs, real-time data transfer has been seen as an overwhelming task [19].
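
As a concrete illustration of that one-way strategy, the sketch below charges only while the broadcast real-time rate sits below a preset threshold; the rates and the threshold are made-up values for illustration.

```python
# Minimal sketch of one-way, price-based charging control: the vehicle
# draws charging load only while the broadcast real-time rate is below
# a set threshold. All rates and the threshold are illustrative values.

def should_charge(rate_per_kwh: float, threshold: float = 0.10) -> bool:
    """One-way control: charge only when the real-time rate is below threshold."""
    return rate_per_kwh < threshold

# Hourly real-time rates broadcast by the electricity provider ($/kWh)
hourly_rates = [0.22, 0.18, 0.12, 0.08, 0.06, 0.07, 0.09, 0.15]

charging_hours = [hour for hour, rate in enumerate(hourly_rates)
                  if should_charge(rate)]
print(charging_hours)  # -> [3, 4, 5, 6]: the off-peak hours in this example
```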

To help guide the development of the charging infrastructure required for PEVs, the SAE J1772 standard has been developed. The standard requires plug-in vehicle power transfer connections to be able to operate on single-phase 120 or 240 V and also to support communications capabilities. The power transfer equipment can either be a separate component or be integrated into the power electronic equipment and electric motor. In order for PEVs to be capable of V2G, either an inverter must be added to the vehicle's power electronics or equipment capable of utilizing the on-board charger as both an inverter and a rectifier must be used [39]. Although various power levels of charging have been proposed, level 1 charging (120 V, 15 A) is currently the most common. Level 2 and level 3 rapid chargers have increased power ratings, but the installation of level 2 and level 3 chargers can be a slow and costly process, especially for residential installations [41,42] (Table 3.5).

It is clear that some level of smart-charging infrastructure will be needed as PEVs begin to penetrate the transportation market. Smart grid technologies provide a variety of charging methods that can help ensure PEV customer satisfaction while maintaining a balance between plug-in vehicle charging demand and the electric grid’s resources. However, “smart” charging of PEVs will require a large investment in electric grid and communications infrastructure and will significantly increase the workload of the electric sector.

3.2.5.3.1  Equipment Loading, Maintenance, and Life Cycle

Arguably the most significant impact of PEV charging on the power grid is the increase in equipment loading, specifically on distribution transformers and lines. Here, it is worth noting that the severity of this impact is a function of the charging scenarios, the charging strategy (uncontrolled or controlled charging), the market penetration level, and the distribution feeder characteristics (existing loading, voltage level, load profile, etc.). In order to determine the impact of PEV charging on the grid, it is necessary to conduct preliminary studies to determine (a) charging scenarios, like the one shown in Figure 3.18, which indicate the expected level 1 and level 2 charging profiles of PEVs (PHEVs and BEVs), that is, the time of day when charging is expected to occur and the likely charging demands as a percentage of PEVs, and (b) market penetration levels, which indicate the number of PEVs that are expected to be charged in a geographic area as a function of time. Studies and common sense indicate that residential PEV charging is expected to occur during the late afternoons and early evenings, when commuters return home. Unfortunately, in many cases this coincides with peak feeder loading conditions, which directly increases distribution transformer and line loadings.

Table 3.5   SAE Charging Configurations and Ratings Terminology

AC charging

  • AC Level 1 (PEV includes on-board charger)
    • 120 V, 1.4 kW @ 12 A
    • 120 V, 1.9 kW @ 16 A
    • Estimated charge time: PHEV, 7 h (SOC 0% to full); BEV, 17 h (SOC 20% to full)
  • AC Level 2 (PEV includes on-board charger)
    • 240 V, up to 19.2 kW (80 A)
    • Estimated charge time, 3.3 kW on-board charger: PHEV, 3 h (SOC 0% to full); BEV, 7 h (SOC 20% to full)
    • Estimated charge time, 7 kW on-board charger: PHEV, 1.5 h (SOC 0% to full); BEV, 3.5 h (SOC 20% to full)
    • Estimated charge time, 20 kW on-board charger: PHEV, 22 min (SOC 0% to full); BEV, 1.2 h (SOC 20% to full)
  • AC Level 3 (TBD)
    • >20 kW, single-phase and three-phase

DC charging

  • DC Level 1 (EVSE includes an off-board charger)
    • 200-450 V DC, up to 36 kW (80 A)
    • Estimated charge time, 20 kW off-board charger: PHEV, 22 min (SOC 0% to 80%); BEV, 1.2 h (SOC 20% to 100%)
  • DC Level 2 (EVSE includes an off-board charger)
    • 200-450 V DC, up to 90 kW (200 A)
    • Estimated charge time, 45 kW off-board charger: PHEV, 10 min (SOC 0% to 80%); BEV, 20 min (SOC 20% to 80%)
  • DC Level 3 (TBD; EVSE includes an off-board charger)
    • 200-600 V DC (proposed), up to 240 kW (400 A)
    • Estimated charge time, 45 kW off-board charger: BEV (only), <10 min (SOC 0% to 80%)

Source: http://www.sae.org/smartgrid/chargingspeeds.pdf

SOC, state of charge; EVSE, electric vehicle supply equipment.
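
The charge times in Table 3.5 follow from a simple energy balance: time is roughly the energy to be replenished divided by the charger power. The sketch below reproduces that arithmetic; the battery capacity and charger efficiency used are assumptions for illustration, not SAE figures.

```python
# Back-of-the-envelope version of the Table 3.5 charge times. Battery
# capacity and charger efficiency below are illustrative assumptions.

def charge_time_h(capacity_kwh: float, soc_start: float, soc_end: float,
                  charger_kw: float, efficiency: float = 0.9) -> float:
    """Hours to move the battery from soc_start to soc_end (fractions of 1.0)."""
    energy_needed_kwh = capacity_kwh * (soc_end - soc_start)
    return energy_needed_kwh / (charger_kw * efficiency)

# e.g., a BEV with an assumed 24 kWh pack on a 3.3 kW AC Level 2 charger,
# charged from 20% SOC to full: ~6.5 h, in line with the ~7 h in Table 3.5
print(round(charge_time_h(24.0, 0.2, 1.0, 3.3), 1))
```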

Once charging and market penetration scenarios are determined, it is necessary to conduct power flow analyses under a series of varying loading conditions to determine equipment loading. These simulations consist of superimposing PEV loads on expected customer or distribution transformer loads and running power flow analyses to determine feeder electrical variables (voltages, currents, etc.). The complexity of these analyses will vary depending on the accuracy sought, and they may include statistical analyses to model the uncertainty in charging and market penetration scenarios. These analyses must be conducted for uncontrolled charging scenarios, to determine "worst case" impacts, and under controlled charging scenarios that are designed to mitigate expected impacts. Controlled scenarios aim at modifying PEV charging profiles by providing incentives or penalties via TOU rates, or by exerting charging load control or management, to displace charging to off-peak hours.
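
A minimal version of such a superposition study is sketched below: stochastic PEV charging loads are added to an assumed hourly transformer load profile and checked against the transformer rating over many trials. The profile, rating, charger size, and arrival times are all illustrative assumptions, and a near-unity power factor is assumed so that kW approximates kVA.

```python
import random

# Monte Carlo sketch of superimposing PEV charging on a distribution
# transformer's daily load profile. All numbers are illustrative.

TRANSFORMER_RATING_KVA = 50.0            # assumed nameplate rating
base_load_kw = [18, 16, 15, 14, 14, 16, 22, 28, 30, 29, 28, 27,
                27, 26, 27, 29, 33, 38, 42, 40, 36, 30, 24, 20]  # hourly kW

def simulate_day(n_pevs: int, charger_kw: float = 3.3) -> list:
    """One trial: add n_pevs uncontrolled Level 2 charging loads to the profile."""
    load = list(base_load_kw)
    for _ in range(n_pevs):
        start = random.randint(17, 21)                 # evening arrival
        for hour in range(start, min(start + 4, 24)):  # ~4 h charge
            load[hour] += charger_kw
    return load

trials, overloads = 1000, 0
for _ in range(trials):
    # near-unity power factor assumed, so kW is compared against kVA
    if max(simulate_day(n_pevs=4)) > TRANSFORMER_RATING_KVA:
        overloads += 1
print(f"Estimated probability of overload: {overloads / trials:.2f}")
```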

Figure 3.18   Example of an expected PEV charging scenario (projected 2020). (From Xu, L. et al., A framework for assessing the impact of plug-in electric vehicle to distribution systems, 2011 IEEE PSCE, Phoenix, AZ, March 2011.)

Figure 3.19   Example of percent of distribution system impacted versus PEV market penetration (uncontrolled charging). (From Dow, L. et al., A novel approach for evaluating the impact of electric vehicles on the power distribution system, 2010 IEEE PES General Meeting, Minneapolis, MN, July 2010.)

The literature indicates that under uncontrolled scenarios, transformer overloads are expected to occur even at low penetration levels; this is shown in Figure 3.19. Although at first sight smart grid technologies such as controlled charging appear to be a mitigation measure for equipment loading impacts, controlled charging has the disadvantage of shifting charging to off-peak hours, for example, the early morning. This ultimately leads to (a) increased load coincidence and new peaks that may also overload distribution transformers and lines, especially at large market penetration levels, as shown in Figure 3.20, and (b) "flattened" distribution transformer load profiles, that is, increased load factors. The former is clearly undesirable, and even though the latter seems attractive, it may have a negative impact on equipment maintenance and life cycle, since it is the off-peak loading conditions that allow distribution transformers to cool down. Therefore, incentives and load control or management strategies must be carefully designed and applied to avoid creating further impacts. Other solutions to equipment overload are conventional approaches such as capacity increases (transformer upgrades, line reconductoring, etc.). Furthermore, the coordinated control and dispatch of local DER, such as DG and DS, together with the implementation of demand response, is a promising alternative for solving these issues, as shown in Figure 3.21. Finally, a combination of all the aforementioned approaches (conventional and smart grid technologies) is recommended. As indicated previously, the smart grid will play a critical role in enabling these solutions.

Figure 3.20   Example of percent of distribution system impacted versus PEV market penetration (controlled charging). (From Dow, L. et al., A novel approach for evaluating the impact of electric vehicles on the power distribution system, 2010 IEEE PES General Meeting, Minneapolis, MN, July 2010.)

Figure 3.21   (See color insert.) Example of feeder load under PV and EV penetration scenarios. (From Agüero, J.R., IEEE Power Energy Mag., September/October 2011, 82–93.)

Figure 3.22   (See color insert.) Example of percentage of feeder sections experiencing low voltage for various PEV penetration levels. (From Agüero, J.R. and Dow, L., Impact studies of electric vehicles, Quanta Technology, Raleigh, NC, 2011.)

3.2.5.3.2  Voltage Regulation and Feeder Losses

The additional currents flowing through distribution transformers and lines under moderate-to-high PEV penetration scenarios may lead to an increase in voltage drop along distribution feeders that can cause low-voltage violations, particularly in areas located far from distribution substations. An example of this is shown in Figure 3.22 for various PEV penetration levels. This issue can be addressed by installing additional line voltage regulators and switched capacitor banks, as well as by the coordinated dispatch and control of local DER, such as DG and DS, and the implementation of DR and load control/management. PEV charging loads are expected to have a power factor close to unity, thanks to power factor correction (PFC) systems. However, as the penetration level increases, higher charging loads imply higher currents and hence higher distribution line and transformer losses. Therefore, PEV proliferation is expected to increase distribution system losses. Again, the combined implementation of conventional and smart grid solutions via the additional communications and control capabilities enabled by the smart grid is expected to be the most successful approach for ensuring adequate voltage regulation and minimizing the impact of PEVs on distribution losses. This also highlights the need for multiobjective optimization approaches that coordinate the utilization of all available resources.
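
The sketch below gives the usual first-order estimate of that effect, |dV| ~ I(R cos phi + X sin phi) per phase, comparing the feeder voltage drop before and after a block of clustered PEV charging load is added. The feeder impedance and load figures are illustrative assumptions.

```python
import math

# First-order feeder voltage drop estimate, before and after adding a
# clustered PEV charging load. Impedances and loads are illustrative.

def voltage_drop_pct(load_kw: float, v_ll: float, r_ohm: float,
                     x_ohm: float, pf: float = 0.98) -> float:
    """Approximate drop as a percentage of nominal phase voltage."""
    i_line = load_kw * 1000 / (math.sqrt(3) * v_ll * pf)  # line current, A
    phi = math.acos(pf)
    dv = i_line * (r_ohm * math.cos(phi) + x_ohm * math.sin(phi))
    return 100 * dv / (v_ll / math.sqrt(3))

base = voltage_drop_pct(300, 12470, 1.2, 2.1)      # existing feeder load
with_pev = voltage_drop_pct(360, 12470, 1.2, 2.1)  # + 60 kW of PEV charging
print(f"{base:.2f}% -> {with_pev:.2f}% of nominal phase voltage")
```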

3.2.5.3.3  Power Quality

As indicated in previous sections, increased harmonic distortion may be caused by a large proliferation of inverter-based equipment, including PEV charging facilities. Here, it is worth noting that even though individual inverters may comply with standard requirements on harmonic injection, it is the interaction and cumulative effect of harmonics produced by a large number of inverters (including PEV and electronically coupled DG inverters) that could have a negative effect on feeder THD levels. This is an area that requires attention and further research, since it is expected to become more important as the deployment of these technologies grows. As previously indicated, issues related to increases in THD should be addressed in the planning stage of the smart grid, where maximum penetration levels and the location of DG and PEVs must be carefully evaluated.
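
For reference, THD is computed from the harmonic spectrum as the ratio of the RMS of all harmonics to the fundamental, as in the sketch below. The spectrum values are illustrative, and how harmonics from many inverters aggregate in practice depends on their phase correlation.

```python
import math

# Definitional sketch: THD = sqrt(sum of I_h^2 for h > 1) / I_1.
# The spectrum below (harmonic order -> RMS amperes) is illustrative.

def thd_percent(spectrum: dict) -> float:
    fundamental = spectrum[1]
    distortion = math.sqrt(sum(i ** 2 for h, i in spectrum.items() if h > 1))
    return 100 * distortion / fundamental

one_inverter = {1: 10.0, 5: 0.3, 7: 0.2, 11: 0.1}
print(f"{thd_percent(one_inverter):.1f}%")  # ~3.7% for this spectrum
```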

3.2.5.3.4  Others

Almost since the first sales of hybrid vehicles, there has been considerable interest in using the vehicles as auxiliary power supplies—backup generators or supplemental power systems. In some geographical areas, there remains a substantial risk of power failure due to natural disasters such as storms or floods. Owners of PHEVs in these areas could tap into their vehicles’ electrical systems for backup power in the event of power failure. Indeed, several informal projects have utilized electric vehicles for this purpose, connecting directly to the traction battery [47] or operating solely off the vehicle’s 12 V convenience power [48].

These efforts have been hampered by a lack of support from vehicle manufacturers and a lack of suitable inverters capable of both operating off grid and supporting EV battery voltages. This application remains a hobbyist niche but could rapidly become a de facto standard if many vehicles are equipped with inverters to support V2G operations and vehicle manufacturers see the backup-power market as a potential added feature in their product offerings. Serious safety issues must also be addressed, including electrical safety with both DC and AC circuits and the buildup of emissions if the vehicle is unintentionally operated in enclosed spaces.

Limitations on the size of household electrical services will also impact the introduction of EVs—particularly the selection of charging solutions. Many newer houses in the United States are equipped with 100 A electrical services, while older homes may have smaller services, and larger homes may have 200 A services or larger. Regardless of the absolute service size, in most cases, the installed service was properly sized for the anticipated loads in the household. Similarly, multiunit developments also size electrical services to meet electrical codes with limited spare capacity.

Although electrical codes remain relatively conservative, allowing for increased demand, introduction of a new, large electricity demand will likely violate those codes and possibly overload the electrical service. Further, electrical codes do not generally allow the introduction of additional circuits on the understanding that those circuits will not be utilized simultaneously with existing household loads. That is, although vehicle connections could be electronically limited to nighttime charging, when other household loads are low, there are currently few mechanisms in electrical codes to allow for such expansion.

The layout of household electrical services also presents issues. While newer homes frequently have the incoming electrical service in the garage area, in many homes, the electrical service entrance is located far from the garage—a location that has traditionally experienced far lower loads than other parts of the house. The expense of modifying the incoming electrical panel, and pulling new circuits to the garage areas, will likely retard the adoption of level 2 and level 3 charging. Vehicles charged at level 1 will typically require continuous electrical connections all night to reach a full state-of-charge. Therefore, if only level 1 charging is widely implemented, many of the most promising control mechanisms (controlled charging, V2G, etc.) offered by integrating EVs into the electrical grid will be inaccessible.

Finally, it should be noted that vehicle manufacturers currently have little incentive to modify vehicles to support grid support functions. Indeed, many proposed solutions, including V2G, controlled charging, and backup power applications, are likely to negatively impact battery life and/or decrease customer satisfaction—both primary concerns of the vehicle manufacturers.

Ultimately, integration of PEVs into both the transportation and electricity sectors is a system problem, requiring system solutions. Viable solutions will need to balance competing goals of vehicle owners, grid operators, and vehicle manufacturers, as well as address issues as diverse as electrical code compliance and dispersed communication.

3.3  Smart Substations

Stuart Borlase, Marco C. Janssen, and Michael Pesin

An electrical substation is a focal point of an electricity generation, transmission, and distribution system where voltage is transformed from high to low or the reverse using transformers. Electric power flows through several substations between generating plants and consumers and is usually changed in voltage in several steps. There are different kinds of substations, such as transmission substations, distribution substations, collector substations, and switching substations. The general functions of a substation include the following:

  • Voltage transformation
  • Connection point for transmission and distribution power lines
  • Switchyard for electrical transmission and/or distribution system configuration
  • Monitoring point for control center
  • Protection of power lines and apparatus
  • Communication with other substations and regional control center

Substations and feeders are the source of critical real-time data for efficient and safe operation of the utility network. Real-time data, also called operational data, are instantaneous values of power system analog and status points such as volts, amps, MW, MVAR, circuit breaker status, and switch position. These data are time critical and are used to protect, monitor, and control the power system field equipment. There is also a wealth of nonoperational (non-real-time) data available from the field devices. Nonoperational data consist of files and waveforms, such as event summaries, oscillographic event reports, or sequential event records, in addition to supervisory control and data acquisition (SCADA)-like points (e.g., status and analog points) that have a logical state or a numerical value. Nonoperational data are not needed by SCADA dispatchers to monitor and control the power system, but they can help make the operation and management of system assets more efficient and reliable.

3.3.1  Protection, Monitoring, and Control Devices (IEDs)

Intelligent electronic devices (IEDs) are microprocessor-based devices with the capability to exchange data and control signals with another device (IED, electronic meter, controller, SCADA, etc.) over a communications link. IEDs perform protection, monitoring, control, and data acquisition functions in generating stations, substations, and along feeders and are critical to the operations of the electric network.

IEDs are widely used in substations for different purposes. In some cases, they are separately used to achieve individual functions, such as differential protection, distance protection, overcurrent protection, metering, and monitoring. There are also multifunctional IEDs that can perform several protection, monitoring, control, and user interfacing functions on one hardware platform.

IEDs are a key component of substation integration and automation technology. Substation integration involves integrating protection, control, and data acquisition functions into a minimal number of platforms to reduce capital and operating costs, reduce panel and control room space, and eliminate redundant equipment and databases. Automation involves the deployment of substation and feeder operating functions and applications ranging from SCADA and alarm processing to integrated volt/var control (IVVC) in order to optimize the management of capital assets and enhance operation and maintenance (O&M) efficiencies with minimal human intervention.

The main advantages of multifunctional IEDs are that they are fully IEC 61850 compatible, compact in size, and combine various functions in one design, allowing for a reduction in the size of the overall system, an increase in efficiency, improved robustness, and extensible solutions based on mainstream communications technology.

IED technology can help utilities improve reliability, gain operational efficiencies, and enable asset management programs including predictive maintenance, life extensions, and improved planning.

3.3.2  Sensors

The main function of sensors is to collect data from power equipment in the substation yard, such as transformers, circuit breakers, and power lines. With the introduction of digital and optical technologies in combination with communications, new sensors are becoming available to acquire different types of asset-related information. Conventional copper-wired analog apparatus can now be replaced by optical apparatus with fiber-based sensors for monitoring and metering. The most prominent advantages of such sensors are higher accuracy, freedom from saturation, reduced size and weight, improved safety and environmental friendliness (no oil or SF6), wide frequency bandwidth, wide dynamic range, and low maintenance. Furthermore, these new sensors allow monitoring and control to be implemented with two important application features:

  • A single sensor may serve different types of IEDs.
  • A single sensor may serve a large number of IEDs via a process bus.

These sensors also require accurate time synchronization of their inputs and of the samples placed on the process bus.

3.3.3  SCADA

SCADA refers to a system or a combination of systems that collects data from various sensors at a plant or in other remote locations and then sends these data to a central computer system, which then manages and controls the data and remotely controls devices in the field.

SCADA is a term that is used broadly to describe control and management solutions in a wide range of industries. The electric power industry has a specific set of requirements that apply to SCADA systems.

The primary purpose of an electric utility SCADA system is to acquire real-time data from field devices located at power plants, transmission and distribution substations, distribution feeders, etc.; to provide control of the field equipment; and to present the information to operating personnel. "Real time" for the monitoring and control of substations and feeders is typically in the range of 1-5 s.

SCADA systems are globally accepted as a means of real-time monitoring and control of electric power systems, particularly generation and transmission systems. RTUs (remote terminal units) are used to collect analog and status telemetry data from field devices, as well as communicate control commands to the field devices. Installed at a centralized location, such as the utility control center, are front-end data acquisition equipment, SCADA software, operator graphical user interface (GUI), engineering applications that act on the data, historian software, and other components.

Recent trends in SCADA include providing increased situational awareness through improved GUIs and presentation of data and information, intelligent alarm processing, the utilization of thin clients and web-based clients, improved integration with other engineering and business systems, and enhanced security features.

Typically, the control and data acquisition equipment make up a system with at least one master station, one or more RTUs, and a communications system. The electric utility master station is usually located at an energy control center (ECC), and RTUs are installed at power plants, transmission and distribution substations, distribution feeder equipment, etc.

3.3.3.1  Master Stations

The master station is a computer system responsible for communicating with the field equipment and includes a human machine interface (HMI) in the control room or elsewhere. In smaller SCADA systems, the master station may be composed of a single PC. In larger SCADA systems, the master station may include multiple redundant servers, distributed software applications, and disaster recovery sites.

A large electric utility master station or energy management system (EMS) typically has the following:

  • One or more data acquisition servers (DAS) or front-end processors (FEP) that interface with the field devices via the communications system
  • Real-time data server(s) that contains real-time database(s) (RTDB)
  • Historical server(s) that maintains historical database
  • Application server(s) that runs various EMS applications
  • Operator workstations with an HMI

Figure 3.23   Typical modern EMS architecture (simplified).

In most modern EMSs, hardware components are connected via one or more local area networks (LANs). Many systems have a secure interface to the corporate networks to make EMS data available to the corporate users (Figure 3.23).

There are several different types of modern master stations. In general, master stations can be divided into five categories based on their functionality, although in some cases functions can cross over from one type of system to another:

  1. SCADA master station
  2. SCADA master station with automatic generation control (AGC)
  3. EMS
  4. Distribution management system (DMS)
  5. Distribution automation (DA) master

SCADA master station primary functions:

  • Data acquisition
  • Remote control
  • User interface
  • Areas of responsibility
  • Historical data analysis
  • Report writer

SCADA/AGC system primary functions (in addition to SCADA master station):

  • AGC
  • Economic dispatch (ED)/hydroallocator
  • Interchange transaction scheduling

EMS primary functions (in addition to SCADA/AGC system):

  • Network configuration/topology processor
  • State estimation
  • Contingency analysis
  • Three-phase balanced operator power flow
  • Optimal power flow
  • Dispatcher training simulator

DMS primary functions:

  • Interface to automated mapping/facilities management (AM/FM) or geographic information system (GIS)
  • Interface to customer information system (CIS)
  • Interface to outage management
  • Three-phase unbalanced operator power flow
  • Map series graphics

DA system primary functions:

  • Two-way distribution communications
  • Fault identification/fault isolation/service restoration
  • Voltage reduction
  • Load management
  • Power factor control
  • Short-term load forecasting

All types of master stations are interfaced with the field devices. Historically, in electric utilities these devices were RTUs. In recent years, with the proliferation of IEDs, many of these IEDs have been taking over RTU functionality.

3.3.3.2  Remote Terminal Unit

The RTU is a microprocessor-based device that interfaces with a SCADA system by transmitting telemetry data to the master station and changing the state of connected devices based on control messages received from the master station or (in some modern systems) commands generated by the RTU itself. The RTU provides data to the master station and enables the master station to issue controls to the field equipment. Typical RTUs have physical hardware inputs to interface with field equipment and one or more communication ports (Figure 3.24).

Different RTUs process data in different ways, but in general there are several internal software modules that are common among most RTUs:

  • Central RTDB that interfaces with all other software modules
  • Physical I/O application—acquires data from the RTU hardware components that interface with physical I/O
  • Data collection application (DCA)—acquires data from devices with data communications capabilities (e.g., IEDs) via communication port(s)
  • Data processing application (DPA)—presents data to the master station or HMI
  • Data translation application (DTA)—present in some RTUs; manipulates data before they are presented to the master station or supports stand-alone functionality at the RTU level (Figure 3.25)
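
A toy rendering of this module structure is sketched below: the physical I/O and data collection applications write into a central RTDB, and the data processing application reads from it. All class, method, and point names are hypothetical.

```python
# Toy sketch of the common RTU software modules listed above. All class,
# method, and point names are hypothetical.

class RTDB:
    """Central real-time database shared by the other modules."""
    def __init__(self):
        self._points = {}
    def update(self, point, value):
        self._points[point] = value
    def read(self, point):
        return self._points.get(point)

class PhysicalIO:
    """Acquires data from RTU hardware that interfaces with physical I/O."""
    def scan(self, db):
        db.update("breaker_52_status", "closed")  # stand-in for a hardware scan

class DataCollectionApp:
    """Acquires data from IEDs via the RTU's communication ports."""
    def poll(self, db):
        db.update("feeder_amps", 312.5)           # stand-in for an IED poll

class DataProcessingApp:
    """Presents RTDB contents to the master station or HMI."""
    def report(self, db):
        return {p: db.read(p) for p in ("breaker_52_status", "feeder_amps")}

db = RTDB()
PhysicalIO().scan(db)
DataCollectionApp().poll(db)
print(DataProcessingApp().report(db))
```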

Figure 3.24   SCADA system data flow architecture.

Figure 3.25   RTU software architecture.

3.3.4  Substation Technology Advances

Early generations of SCADA systems typically employed one RTU at every substation. With this architecture, all cables from the field equipment had to be terminated at the RTU. RTUs have typically offered limited expansion capacity. For analog inputs, the RTU required the use of transducers to convert the higher-level voltages and currents from CT and PT outputs into the milliampere and volt range. Most RTUs had a single communication port and were only capable of communicating with one master station. The communication between an RTU and its master station was typically achieved via proprietary bit-oriented communication protocols. As technology advanced, RTUs became smaller and more flexible. This allowed for a distributed architecture approach, with one smaller RTU for one or several pieces of substation equipment. This resulted in lower installation costs with reduced cabling requirements. This architecture also offered better expansion capabilities (just add more small RTUs). In addition, the new generation of RTUs was capable of accepting higher-level AC analog inputs. This eliminated the need for intermediate transducers and allowed direct wiring of CTs and PTs into the RTU. It also enabled RTUs to provide additional functionality, such as digital fault recording (DFR) and power quality (PQ) monitoring.

There were also advances in communications capabilities, with additional ports available to communicate with IEDs. However, the most significant improvement was the introduction of open communications protocols. The older SCADA systems used proprietary protocols to communicate between the master station and the RTUs. The availability of open and (for the most part) standard utility communications protocols allowed utilities to choose vendor-independent equipment for their SCADA systems. The de facto standard protocol for electric utility SCADA systems in North America became DNP3.0. Another open communications protocol used by utilities is MODBUS, which came from the industrial manufacturing environment. The latest communication standard adopted by utilities is IEC 61850. IEC 61850 is a powerful and flexible network-based, object-oriented communication standard that allows utilities to move to next-generation substations that are flexible and expandable, allows for the implementation of multivendor solutions, and, in addition to the communication itself, facilitates a standardized engineering approach that enables optimization of utility engineering and maintenance processes.

Another technology that aided SCADA systems was network data communications. The SCADA architecture based on serial communications protocols put certain limitations on system capabilities. With a serial SCADA protocol architecture,

  • There is a static master/slave data path that limits the device connectivity
  • Serial SCADA protocols do not allow multiple protocols on a single channel
  • There are issues with exchanging new sources of data, such as oscillography files, PQ data, etc.
  • Configuration management has to be done via a dedicated “maintenance port”

The network-based architecture offers a number of advantages:

  • There is significant improvement in speed and connectivity: An Ethernet-based LAN greatly increases the available communications bandwidth. The network layer protocol provides a direct link to devices from anywhere on the network.
  • Availability of logical channels: Network protocols support multiple logical channels across multiple devices.
  • Ability to use new sources of data: Each IED can provide another protocol port number for file or auxiliary data transfer without disturbing other processes (e.g., SCADA) and without additional hardware.
  • Improved configuration management: Configuration and maintenance can be done over the network from a central location.

The network-based architecture in many cases also offers a better response time, the ability to access important data, and reduced configuration and system management time. Consider SCADA systems, which have been around for many years: originally, these were simple remote monitoring and control systems exchanging data over low-speed, mostly hardwired communications links. In recent years, with the proliferation of microprocessor-based IEDs, it became possible to have information extracted directly from these IEDs either by an RTU or by other substation control system components. This is achieved by using the IED communications capabilities, allowing an IED to communicate with the RTU, a data concentrator, or directly with the master station. As more IEDs were installed at the substations, it became possible to integrate some of the protection, control, and data acquisition functionality. Much of the information previously extracted by the RTUs became available from the IEDs. However, it may not be practical to have the master station communicate directly with the numerous IEDs in all the substations. To enable this data flow, a new breed of devices called substation servers is utilized. A substation server communicates with all the IEDs at the substation, collects all information from the IEDs, and then communicates back to the central master station. Because the IEDs at the substation use many different communications protocols, the substation server has to be capable of communicating via these protocols, as well as the master station's communications protocol. A substation server allows the SCADA system to access data from most substation IEDs, which were previously accessible only locally.
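
The protocol-translation role of the substation server can be pictured as one adapter per IED protocol feeding a single consolidated poll, as in the sketch below. The adapter classes are placeholders, not real DNP3, MODBUS, or IEC 61850 driver APIs.

```python
from abc import ABC, abstractmethod

# Sketch of the substation server's consolidation role. The protocol
# adapter classes are placeholders, not real driver APIs.

class IEDAdapter(ABC):
    @abstractmethod
    def read_points(self) -> dict:
        """Return a normalized point-name -> value mapping."""

class ModbusIED(IEDAdapter):
    def read_points(self) -> dict:
        return {"xfmr_top_oil_temp_c": 61.0}  # stand-in for register reads

class Dnp3IED(IEDAdapter):
    def read_points(self) -> dict:
        return {"bus_voltage_kv": 13.2}       # stand-in for a DNP3 scan

class SubstationServer:
    """Polls every IED locally and answers the master station once."""
    def __init__(self, ieds):
        self.ieds = ieds
    def poll_all(self) -> dict:
        data = {}
        for ied in self.ieds:
            data.update(ied.read_points())
        return data

server = SubstationServer([ModbusIED(), Dnp3IED()])
print(server.poll_all())  # one consolidated response for the master station
```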

Figure 3.26   Server-based substation control system architecture.

With the substation server-based SCADA architecture (Figure 3.26), all IEDs (including RTUs) are polled by the substation server. The IEDs and RTUs with network connections are polled over the substation LAN. The IEDs with only serial connection capabilities are polled serially via the substation server’s serial RS232 or RS485 ports (integrated or distributed). In addition to making additional IED data available, the substation server significantly improves overall SCADA system communication performance. With the substation server-based architecture, the master station has to communicate directly with only the substation server instead of multiple RTUs and IEDs at the substation. Also, a substation server’s communications capability is typically superior to that of an IED. This, and the reduced number of devices directly connected to the master station, contributes to a significantly improved communications performance in a polled environment.

Data available in the substation can be divided into two types: operational or real-time data and nonoperational data. Operational data are real-time data required for operating utility systems and performing EMS software applications such as AGC. These data are stored by EMS applications and available as historical data. Nonoperational data are historical, real-time, and file type data used for analysis, maintenance, planning, and other utility applications.

Modern IEDs, such as protection relays and meters, hold a tremendous amount of information; some devices have thousands of data points available. In addition, many IEDs generate file-type data such as DFR or PQ files. A typical master station is not designed to process this volume or type of data. However, much of this information can be extremely valuable to different users within the utility and, in some cases, to the utility's customers. To take advantage of these data, an extraction mechanism independent of the master station needs to be implemented.

Operational data and nonoperational data have independent data collection mechanisms. Therefore, two separate logical data paths should also exist to transfer these data (Figure 3.27). One logical data path connects the substation with the EMS (operational data). A second data path transfers nonoperational data from the substation to various utility information technology (IT) systems. With all IEDs connected to the substation data concentrator and sufficient communications infrastructure in place, it also becomes possible to have a remote maintenance connection to most of the IEDs. This functionality is referred to as either "remote access" or "pass-through": the ability to have a virtual connection to remote devices via a secure network. This functionality significantly helps with troubleshooting and maintenance of remote equipment. In many cases, it can eliminate the need for technical personnel to drive to a remote location. It also makes real-time information from individual devices at different locations available on the same computer screen, which makes the troubleshooting process more efficient.

Figure 3.27   (See color insert.) Substation data flow.

Figure 3.28 shows the conceptual migration path from basic SCADA functionality through integration and automation to a full smart grid substation solution.

An advanced substation integration architecture (Figure 3.29) offers increased functionality by taking full advantage of the network-based system architecture, thus allowing more users to access important information from all components connected to the network. However, it also introduces additional security risks into the control system. To mitigate these risks, special care must be taken when designing the network, with special emphasis on the network security and the implementation of user authentication, authorization, and accounting. It is very important that a substation communication and physical access security policy is developed and enforced.

3.3.5  Platform for Smart Feeder Applications

Monitoring, control, and data acquisition of the electricity network will extend further down to the distribution pole-top transformer and perhaps even to individual customers, either through the substation communications network, by means of a separate feeder communications network, or tied into the advanced metering infrastructure (AMI). More granular field data will help increase operational efficiency and provide more data for other smart grid applications, such as outage management. Higher speed and increased bandwidth communications for data acquisition and control will be needed. Fault detection, isolation, and service restoration (FDIR) on the distribution system will require a higher level of optimization and will need to include optimization for closed-loop, parallel circuit, and radial configurations. Multilevel feeder reconfiguration, multiobjective restoration strategies, and forward-looking network loading validation will be additional features with FDIR. IVVC will include operational and asset improvements, such as identifying failed capacitor banks and tracking capacitor bank, tap changer, and regulator operation to provide sufficient statistics for opportunities to optimize capacitor bank and regulator placement in the network. Regional IVVC objectives may include operational or cost-based optimization. This will all require advanced smart substation and feeder solutions and a broader perspective on how integration of substation and feeder data and T&D automation can benefit the smart grid.

Figure 3.28   Substation smart grid migration.

Realizing the promises and benefits of a smarter grid—from improved reliability, to increased efficiency, to the integration of more renewable power – will require a smarter distribution grid, with advanced computing power and two-way communications that operate at the speed of our 21st Century digital society. The problem – only approximately 10% of the 48,000 distribution substations on today’s grid in the U.S. are digitized. Upgrading these substations to meet today’s energy challenges will require time, resources and money [1].

Figure 3.29   Smart substations in the smart grid architecture. (© Copyright 2012 Michael Pesin. All rights reserved.)

3.3.6  Interoperability and IEC 61850

IEC 61850 is a vendor-neutral, open systems standard for utility communications that significantly improves functionality while yielding substantial customer savings. The standard specifies protocol-independent, standardized information models for various application domains in combination with abstract communications services, a standardized mapping to communications protocols, a supporting engineering process, and testing definitions. This standard allows standardized communication between IEDs located within electric utility facilities, such as power plants, substations, and feeders, and also outside these facilities, such as wind farms, electric vehicles, storage systems, and meters. The standard also includes requirements for database configuration, object definition, file processing, and IED self-description methods. These requirements will make adding devices to a utility automation system as simple as adding new devices to a computer, using "plug and play" capabilities. With IEC 61850, utilities will benefit from cost reductions in system design, substation wiring, redundant equipment, IED integration, configuration, testing, and commissioning. Additional cost savings will also be gained in training, MIS operations, and system maintenance.

IEC 61850 has been identified by the National Institute of Standards and Technology (NIST) as a cornerstone technology for field device communications and general device object data modeling. IEC 61850 Part 6 defines the configuration language for systems based on the standard. Peer-to-peer communication mechanisms such as the Generic Object-Oriented Substation Event (GOOSE) will minimize wiring between IEDs. The use of peer-to-peer communication in combination with the use of sampled values (SVs) from sensors will minimize the use of copper wiring throughout the substation, leading to significant benefits in cost savings, more compact substation designs, and advanced and more flexible automation systems, to name a few. With high-speed Ethernet, the IEC 61850-based communications system will be able to manage all of the data available at the process level as well as at the station level.

The IEC 61850 standard was originally designed to be a substation communications solution and was not designed to be used over the slower communications links typically used in DA. However, as wide area and wireless technologies (such as WiMAX) advance, IEC 61850 communications to devices in the distribution grid will become possible. It is therefore possible that IEC 61850 will eventually be used in all aspects of the utility enterprise. At this time, an IEC working group is in the process of defining new logical nodes (LNs) for distributed resources—including photovoltaics, fuel cells, reciprocating engines, and combined heat and power.

With the introduction of serial communications and digital systems, the way we look at secondary systems is fundamentally changing. Not only are these systems still meant to control, protect, and monitor the primary system, but we also expect them to provide information supporting a range of new functions. Examples include monitoring the behavior, the aging, and the dynamic capacity of the system. Many of the new functions introduced in substations are related to changing operating philosophies, the rise of distributed generation, and the introduction of renewable energy. For protection, new philosophies are being introduced that focus on the dynamic adaptation of protection functions to the actual network topology, wide area protection and monitoring, the introduction of synchrophasors, and more.

This tendency is not new. Ever since the introduction of the first substation automation systems and digital protection, we have been searching for ways to make better use of the technologies at hand. After many experiments and discussions, this has led to the development of IEC 61850, originally called “Communication networks and systems in substations.” It has now evolved into a worldwide standard called “Communication networks and systems for power utility automation,” providing solutions for many different domains within the power industry.

The concepts and solutions provided by IEC 61850 are based on three cornerstones:

  • Interoperability: The ability of IEDs from one or several manufacturers to exchange information and use that information for their own functions.
  • Free configuration: The standard shall support different philosophies and allow a free allocation of functions; for example, it will work equally well for centralized (RTU-based) or decentralized (substation control system-based) configurations.
  • Long-term stability: The standard shall be future proof, that is, it must be able to follow the progress in communications technology as well as evolving system requirements.

This is achieved by defining a level of abstraction that allows for the development of basically any solution using any configuration that is interoperable and stable in the long run. The standard defines different logical interfaces within a substation that can be used by functions in that substation to exchange information between them. This is shown in Figure 3.30.

IEC 61850 does not predefine or prescribe communications architectures. The interfaces shown in Figure 3.30 are logical interfaces. IEC 61850 allows, in principle, any mapping of these interfaces onto communications networks. A typical example could be to map interfaces 1, 3, and 6 onto what we call a station bus, a communications network focused on the functions at the bay and station levels. We could also map interfaces 4 and 5 onto a process bus, a communications network focused on the process and bay levels of a substation. The process bus may in such a case be restricted to one bay, while the station bus might connect functions located throughout the substation. However, it is also possible to map interface 4 onto a point-to-point link connecting a process-related sensor to the bay protection.

Figure 3.30   Interfaces within a substation automation system. (© Copyright 2012 Marco Janssen. All rights reserved.)

Figure 3.31   Engineering approach in IEC 61850. (© Copyright 2012 Marco Janssen. All rights reserved.)

IEC 61850 is, in principle, restricted to digital communications interfaces. However, IEC 61850 specifies more than the communications interfaces. It includes domain-specific information models: in the case of substations, a suite of substation functions has been modeled, providing a virtual representation of the substation equipment. The standard also includes the specification of a configuration language. This language defines a suite of standardized, XML-based files that can be used to define, in a standardized way, the specification of the system, the configuration of the system, and the configuration of the individual IEDs within a system. The files are defined such that they can be used to exchange configuration information between tools from different manufacturers of substation automation equipment. This is shown in Figure 3.31.
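
As a flavor of what such a file looks like, the sketch below uses the Python standard library to emit a skeletal SCL-style document with one IED, one logical device, and one distance protection logical node (PDIS). The element subset is heavily simplified; real ICD/SCD files add access points, servers, communication sections, and data type templates.

```python
import xml.etree.ElementTree as ET

# Skeletal SCL-style configuration file. The element nesting is heavily
# simplified relative to a real ICD/SCD file.

scl = ET.Element("SCL", xmlns="http://www.iec.ch/61850/2003/SCL")
ET.SubElement(scl, "Header", id="DemoSubstation")

ied = ET.SubElement(scl, "IED", name="ProtRelay1")
ldevice = ET.SubElement(ied, "LDevice", inst="PROT")
# One distance protection logical node (LN class PDIS)
ET.SubElement(ldevice, "LN", lnClass="PDIS", inst="1", lnType="DemoPDIS")

print(ET.tostring(scl, encoding="unicode"))
```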

The definitions in IEC 61850 are based on a layered approach, in which the domain-specific information models, the abstract communications services, and the actual communications protocol are defined independently. This basic concept is shown in Figure 3.32.

IEC 61850 is divided into parts; parts 7-3 and 7-4xx specify the information model of the substation equipment. These information models include models for primary devices, such as circuit breakers, and for instrument transformers, such as CTs and VTs. They also include the models for secondary functions such as protection, control, measurement, metering, monitoring, and synchrophasors.

Figure 3.32   Concept of the separation of application and communications in IEC 61850. (© Copyright 2012 Marco Janssen. All rights reserved.)

In order to have access to the information contained in the information models, the standard defines protocol-independent, abstract communications services. These are described in part 7-2, such that the information models are coupled with communications services suited to the functionality making use of the models. This definition is independent from any communications protocol and is called the abstract communications service interface (ACSI). The major information exchange models defined in IEC 61850-7-2 are the following:

  • Read and write data
  • Control
  • Reporting
  • GOOSE
  • SV transmission

The first three models are based on a client/server relationship. The server is the device that contains the information, while the client accesses the information. Read and write services are used to access data or data attributes; they are typically used to read and change configuration attributes. The control model and its services are essentially a specialization of a write service, typically used to operate disconnectors, earthing switches, and circuit breakers. The reporting model is used for event-driven information exchange: the information is transmitted spontaneously when the value of the data changes.

The last two models are based on a publisher/subscriber concept, for which IEC 61850 introduces the term peer-to-peer communication to stress that publisher/subscriber communication involves mainly horizontal communication among peers. These communications models are used for the exchange of time-critical information. The device that is the source of the information publishes it, and any other device that needs the information can receive it. These models use multicast communication (the information is not directed to one single receiver).

The GOOSE concept is a model for transmitting event information quickly to multiple devices. Instead of using a confirmed communications service, the information exchange is repeated regularly. Applications of GOOSE services include the exchange of position information from switches for the purpose of interlocking and the transmission of a digital trip signal for protection-related functions.
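
That repetition scheme can be sketched as follows: after a state change, the publisher retransmits at short intervals that back off toward a slow heartbeat. The interval values below are typical engineering choices, not values mandated by the standard.

```python
# Sketch of GOOSE-style repetition: fast retransmission after an event,
# backing off toward a steady heartbeat. Interval values are illustrative.

def goose_transmit_times(t_event: float, t_min: float = 0.004,
                         t_max: float = 1.0, count: int = 8) -> list:
    """Return transmission instants (seconds) following a state change."""
    times, t, interval = [t_event], t_event, t_min
    for _ in range(count):
        t += interval
        times.append(t)
        interval = min(interval * 2, t_max)  # double until the heartbeat rate
    return times

print(goose_transmit_times(0.0))
```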

The model for the transmission of SVs is used when a waveform needs to be transmitted using digital communications. In the source device, the waveform is sampled at a fixed sampling frequency. Each sample is tagged with a counter representing the sampling time and transmitted over the communications network. The model assumes synchronized sampling, that is, different devices sample the waveform at exactly the same time. The counter is used to correlate samples from different sources, an approach that imposes no requirements regarding variations in transmission time, as illustrated below.
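
In the sketch, two streams arriving with different network delays are paired purely by sample counter, so transmission-time variation does not matter. The stream names and values are illustrative (the counter attribute in SV frames is commonly rendered as smpCnt).

```python
# Pairing synchronized samples from two merging units by sample counter.
# Stream names and values are illustrative.

stream_a = {0: 12.1, 1: 14.9, 2: 16.2}  # counter -> sampled value, source A
stream_b = {1: -3.3, 2: -2.8, 0: -4.0}  # same counters, arrived out of order

aligned = {cnt: (stream_a[cnt], stream_b[cnt])
           for cnt in sorted(stream_a.keys() & stream_b.keys())}
print(aligned)  # samples taken at the same instant, paired by counter
```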

While IEC 61850-8-x specifies the mapping of all models from 7-2 except the transmission of SVs, IEC 61850-9-x is restricted to the mapping of the SV transmission model. IEC 61850-9-2 maps the complete model, whereas IEC 61850-9-1 is restricted to a small subset using a point-to-point link that provides little flexibility. Both mappings use Ethernet as the communications protocol.

Of course, real implementations need communications protocols. These protocols are defined in parts 8-x and 9-x, which explain how real communications protocols are used to transmit the information in the models specified in IEC 61850-7-3 and -7-4xx using the abstract communications services of IEC 61850-7-2. In the terminology of IEC 61850, this is called a “specific communication service mapping” (SCSM).

This approach supports an evolution in communications technologies: the application, its information models, and the information exchange models are decoupled from the protocol used, allowing the communications technology to be upgraded without affecting the applications.

The core element of the information model is the LN. An LN is defined as the smallest reusable piece of a function and can thus be considered a container for function-related data. LNs contain data, and these data and the associated data attributes represent the information contained in the function, or part of the function. The name of an LN class is standardized and always comprises four characters. Basically, we can differentiate between two kinds of LNs:

  • LNs representing information of the primary equipment (e.g., circuit breaker—XCBR or current transformer—TCTR). These LNs implement the interface between the switchgear and the substation automation system.
  • LNs representing the secondary equipment including all substation automation functions. Examples are protection functions, for example, distance protection—PDIS or the measurement unit—MMXU.

The standard contains a comprehensive set of LNs allowing many, if not all, substation functions to be modeled. In case a function does not exist in the standard, extension rules for LNs, data, and data attributes have been defined, allowing for structured and standardized extensions of the standard information models.
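
As a rough illustration of this hierarchy, the sketch below models an LN as a container of data objects, each with data attributes. XCBR (circuit breaker) is a standardized LN class name; the Python structure itself is purely illustrative.

```python
# Illustrative sketch of the logical node (LN) data hierarchy:
# LN -> data objects -> data attributes. Only the LN class name XCBR
# comes from the standard; the rest is a toy structure.

from dataclasses import dataclass, field


@dataclass
class DataAttribute:
    name: str        # e.g. "stVal" (status value) or "q" (quality)
    value: object


@dataclass
class DataObject:
    name: str        # e.g. "Pos" (switch position)
    attributes: dict = field(default_factory=dict)


@dataclass
class LogicalNode:
    ln_class: str    # four-character standardized class name
    instance: int
    data: dict = field(default_factory=dict)

    @property
    def name(self) -> str:
        return f"{self.ln_class}{self.instance}"


# A circuit breaker LN with its position data object:
xcbr = LogicalNode("XCBR", 1)
pos = DataObject("Pos")
pos.attributes["stVal"] = DataAttribute("stVal", "closed")
xcbr.data["Pos"] = pos
print(xcbr.name, xcbr.data["Pos"].attributes["stVal"].value)  # XCBR1 closed
```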

The mappings currently defined in IEC 61850 (parts 8-x and 9-x) use the same communications protocols. They differentiate between the client/server services and the publisher/subscriber services. While the client/server services use the full seven-layer communications stack with MMS and TCP/IP, the publisher/subscriber services are mapped on a reduced stack, basically accessing the Ethernet link layer directly.

For the transmission of the SVs, IEC 61850-9-2 is using the following communications protocols:

  • Presentation layer: ASN.1 using basic encoding rules (BER) [ISO/IEC 8824-1 and ISO/IEC 8825]
  • Data link layer: Priority tagging/VLAN and CSMA/CD [IEEE 802.1Q and ISO/IEC 8802-3]
  • Physical layer: Fiber optic transmission system 100-FX recommended [ISO/IEC 8802-3]

Ethernet is basically a nondeterministic communications solution. However, with the use of switched Ethernet and priority tagging, deterministic behavior can be achieved. Using full duplex switches, collisions are avoided. Tagging the transmission of SVs, which due to its cyclic behavior requires a constant bandwidth, with a higher priority than the nondeterministic traffic used, for example, for the reporting of events ensures that the SVs always get through.
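
The mechanics of priority tagging can be sketched at the frame level: an IEEE 802.1Q tag carries a 3-bit priority code point (PCP) alongside a 12-bit VLAN ID, which lets switches forward SV frames ahead of lower-priority traffic. The priority values below are illustrative engineering choices, not values mandated by the standard.

```python
# Sketch of how an 802.1Q tag gives SV traffic priority over
# nondeterministic traffic. The 4-byte tag carries a 3-bit priority
# (PCP), a drop-eligible bit (DEI), and a 12-bit VLAN ID.

import struct

TPID = 0x8100  # 802.1Q tag protocol identifier


def vlan_tag(priority: int, vlan_id: int) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID followed by PCP/DEI/VID."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF
    tci = (priority << 13) | (0 << 12) | vlan_id  # DEI bit = 0
    return struct.pack("!HH", TPID, tci)         # network byte order


# SV traffic tagged higher than event reporting, so switches forward
# the cyclic sampled values first under congestion (values assumed):
sv_tag = vlan_tag(priority=4, vlan_id=10)
report_tag = vlan_tag(priority=1, vlan_id=10)
print(sv_tag.hex(), report_tag.hex())
```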

The model for the transmission of SVs as specified in IEC 61850-7-2 is rather flexible. The configuration of the message being transmitted is done using an SV control block. Configuration options include the reference to the dataset that defines the information contained in one message, the number of individual samples that are packed within one message, and the sampling rate.

While the flexibility makes the concept future proof, it adds configuration complexity. That is why the UCA users group has prepared the “Implementation guideline for digital interface to instrument transformers using IEC 61850-9-2.” This implementation guideline is an agreement among the vendors participating in the UCA users group on how the first implementations of digital interfaces to instrument transformers will look. Basically, the implementation guideline defines the following items (a short numeric sketch of the resulting message rates follows the list):

  • A dataset comprising the voltage and current information for the three phases and for neutral. That dataset corresponds to the concept of a merging unit (MU) as defined in IEC 60044-8.
  • Two SV control blocks: one for a sample rate of 80 samples per period, where an individual message is sent for each set of samples, and one for 256 samples per period, where 8 consecutive sets of samples are transmitted in one message.
  • The use of scaled integer values to represent the information including the specification of the scale factors for current and for voltage.
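
Assuming a 50 Hz system, a quick calculation, sketched below in Python for concreteness, shows the traffic each of these two control blocks generates per merging unit stream:

```python
# Message-rate arithmetic for the two SV control blocks in the
# implementation guideline: 80 samples/period with one sample set per
# message, and 256 samples/period with 8 consecutive sample sets per
# message. The 50 Hz nominal frequency is an assumption; use 60 for
# a 60 Hz system.

F_NOMINAL = 50  # Hz

for samples_per_period, sets_per_message in [(80, 1), (256, 8)]:
    sample_rate = samples_per_period * F_NOMINAL    # sample sets per second
    message_rate = sample_rate / sets_per_message   # Ethernet frames per second
    print(f"{samples_per_period} samples/period: "
          f"{sample_rate} sets/s, {message_rate:.0f} messages/s")

# 80 samples/period  -> 4000 sets/s, 4000 messages/s
# 256 samples/period -> 12800 sets/s, 1600 messages/s
```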

3.3.6.1  Process Level

Process level technology is a maturing technology. Designed primarily to interface with nonconventional CTs and VTs, process level communications will also include “transitional” hardware to interface with existing copper-wired CTs and VTs. The benefits of implementing IEC 61850-based technology close to the process include the elimination of copper wiring, the practical elimination of CT saturation, and the avoidance of CT open circuits, which are a serious safety hazard.

With this solution, new designs become possible, where electronic transformers are used instead of conventional transformers in the switchyard. The voltage and current signals are captured at the primary side, converted to optic signals by an MU, and transferred to the protection and control devices via optical fibers. This can lower the requirements on transformer insulation and reduce the conducted and radiated interference experienced by analog signals transmitted through legacy wiring. Intelligent control units are used as an intermediate link to circuit breaker controls. The intelligent control unit also converts analog signals from primary devices (such as circuit breakers and switches) into digital signals and sends them to the protection and control devices via the process bus. At the same time, the tripping and reclosing commands issued by protection and control devices are converted into analog signals to control the primary equipment. The large amount of copper wiring between IEDs and primary devices in conventional substations is replaced by optical fibers.

3.3.6.2  Bay Level

All the IEDs in the control house fully support IEC 61850. Synchronous phasor measurements are realized by phasor measurement units (PMUs) or in protection IEDs. PMUs are used for wide area power system monitoring and control, improving state estimation and achieving more reliable system performance. GOOSE messaging and the SV network over the process bus are used. The interoperation between IEDs is realized by GOOSE messages sent over the network. The Ethernet switch processes the message priority to realize the GOOSE exchange scheme between relays.

3.3.6.3  Station Level

At the station level, an MMS-based communications network is used. This also provides the communications link between SCADA, control centers, and IEDs located at the bay level.

3.3.6.4  IEC 61850 Benefits

High-speed peer-to-peer communications between IEDs connected to the substation LAN, based on the exchange of GOOSE messages, can successfully be used to replace hardwiring for different protection and control applications. Sampled analog values communicated from MUs to the different protection devices connected to the communications network replace the copper wiring between the instrument transformers in the substation yard and the IEDs. IEC 61850 is a communications standard that allows the development of new approaches for the design and refurbishment of substations. A new range of protection and control applications results in significant benefits compared to conventional hardwired solutions. The standard supports interoperability between devices from different manufacturers in the substation, which is required in order to improve the efficiency of microprocessor-based relay applications and to implement new distributed functions.

Process-bus-based applications offer some important advantages over conventional hardwired analog circuits. The first very important one is the significant reduction in the cost of the system due to the fact that multiple copper cables are replaced with a small number of fiber optic cables. Using a process bus also results in the practical elimination of CT saturation because the resistance of the current leads is eliminated. Process-bus-based solutions also improve the safety of the substation by eliminating one of the main safety-related problems: an open current circuit condition. Since the only current circuit is between the secondary of a current transformer and the input of the MU located right next to it, the probability of an open current circuit condition is very small. It becomes nonexistent if optical current sensors are used. The process bus also improves the flexibility of the protection, monitoring, and control systems. Since current circuits cannot easily be switched due to open circuit concerns, the application of bus differential protection, as well as some backup protection schemes, becomes more complicated in conventional systems. This is not an issue with a process bus, because any changes will only require modifications in the subscription of the protection IEDs receiving the sampled analog values over IEC 61850 9-2.

IEC 61850-based substation systems provide some significant advantages over conventional protection and control systems used to perform the same functions in the substations:

  • Reduced wiring, installation, maintenance, and commissioning costs
  • Optimization possibilities in the design of the high voltage system in a substation
  • Improved interoperability due to the use of standard high-speed communications between devices of different manufacturers over a standard communications interface
  • Easy adaptation to changing configurations in the substation
  • Practical elimination of CT saturation and open circuits
  • Easier implementation of complex schemes and solutions as well as easier integration of new applications and IEDs by using GOOSE messages and SVs that are multicasted on the communications network and that the applications and IEDs can simply subscribe to

It has been shown that the greatest benefits of using IEC 61850 may not be found in initial deployment; rather, it is IEC 61850’s additional flexibility later in the substation life cycle that shows the greatest benefits. Table 3.6 summarizes the cost factors. Of the three factors for which IEC 61850 is believed to show a clear benefit, only the configuration benefits could be realized on the first installation by a utility.

The result is a significant improvement in configuration time as well as a reduction in the errors introduced by having to configure both the IED and server, as in a traditional approach. An expected 75% reduction in labor costs when configuring a substation represents a significant savings. For a more complex device that would normally take a day to configure, the savings could be even higher, perhaps approaching 90% (Figure 3.33).

3.3.7  IEC 61850-Based Substation Design

In a smart grid environment, availability of and access to information is key. Standards like IEC 61850 allow the definition of the available information and access to that information in a standardized way. IEC 61850 Communication Networks and Systems for Utility Automation is a standard for communications that creates an environment that will allow significant changes in the way the power system is protected and operated. In addition these concepts can also be used outside of the substation, allowing the implementation of wide area protection using standardized communications.

Table 3.6   Anticipated IEC 61850 Benefits

Description             Network   Legacy   Impact
Equipment purchase      $         $
Installation            $         $        0
Configuration           $$$       $        +
Equipment migration     $$$       $        +
Application additions   $$$       $        +

Figure 3.33   Approximate time (min) to configure an IEC 61850 client to communicate with a 200-point IED.

The IEC 61850 standard Communication Networks and Systems for Utility Automation allows the introduction of new designs for various functions, including protection inside and outside substations. The levels of functional integration and flexibility of communications-based solutions bring significant advantages in costs at various levels of the power system. This integration affects not only the design of the substation but almost every component and/or system in it, such as protection, monitoring, and control, by replacing the hardwired interfaces with communications links. Furthermore, the design of the high voltage installations and networks can be reconsidered regarding the number and the location of switchgear components necessary to perform the primary function of a substation in a high voltage network. The use of high-speed peer-to-peer communications using GOOSE messages and SVs from MUs allows for the introduction of distributed and wide area applications. In addition, the use of optical LANs leads in the direction of copperless substations.

3.3.7.1  Paradigm Shift in Substation Design

For many years, substation designs have been based on this functionality, and over time several typical designs have been developed for the primary and secondary systems used in these substations. Examples of such typical schemes for the primary equipment are shown in Figure 3.34 and include the breaker-and-a-half scheme, the double busbar scheme, the single busbar scheme, and the ring bus scheme. These schemes have been described and defined in many documents, including Cigré Technical Brochure 069, General guidelines for the design of outdoor AC substations using FACTS controllers.

For the secondary equipment (protection, control, measurement, and monitoring), typical schemes have also been in use, but here we have seen more development in new concepts and philosophies. Typical concepts for secondary equipment include redundant protection for transmission system using different operating principles and manufacturers and separate systems for control, measurements, monitoring, data acquisition, operation, etc. At distribution, integrated protection and control at feeder level is a common solution. In general, it can be said that the concepts used for the secondary systems have been based on the primary designs and the way the utility wants to control, protect, and monitor these systems.

Figure 3.34   Typical traditional primary substation schemes. (From page 8 of the Cigré Technical Brochure 069, General guidelines for the design of outdoor AC substations using FACTS controllers. Copyright Cigré.) (Labels A–J refer to the different substation primary plant topologies as they are used in the Cigré report from which this figure is taken.)

Figure 3.35   Typical conventional substation design. (From Apostolov, A. and Janssen, M., IEC 61850 Impact on Substation Design, paper number 0633. © Copyright 2008 IEEE.)

In general, existing or conventional substations are designed using standard design procedures for high voltage switchgear in combination with copper cables for all interfaces between the primary and secondary equipment.

Several different types of circuits are used in the substation:

  • Analog (current and voltage)
  • Binary—protection and control signals
  • Power supply—DC or AC

A typical conventional substation design is shown in Figure 3.35.

Depending on the size of the substation, the location of the switchgear components, and the complexity of the protection and control system, there is very often a huge number of cables of different lengths and sizes that need to be designed, installed, commissioned, tested, and maintained.

A typical conventional substation has multiple instrument transformers and circuit breakers, with the associated protection, control, monitoring, and other devices connected from the switchyard to the individual equipment panels in a control house or building.

These cables are cut to a specific length and bundled, which makes any required future modification very labor intensive. This is especially true in the process of refurbishing old substations where the cable insulation is starting to fail.

The large number of copper cables and the distances they need to cover to provide the interface between the different devices expose them to the impact of electromagnetic transients and to possible damage as a result of equipment failure or other events.

The design of a conventional substation needs to take into consideration the resistance of the cables when selecting instrument transformers and protection equipment, as well as their connection to the instrument transformers and among themselves. The issue of CT saturation is of special importance to the operation of protection relays under maximum fault conditions. Ferroresonance in voltage transformers also has to be considered in relation to the correct operation of the protection and control systems.

Failures in the cables in the substation may lead to misoperation of protection or other devices and can represent a safety issue. In addition, an open CT circuit, especially one occurring while the primary winding is energized, can cause severe safety issues, as the induced secondary e.m.f. can be high enough to endanger both personnel and equipment insulation.

The discussion above is certainly not a complete list of all the issues that need to be taken into consideration in the design of a conventional substation, but it provides some examples that help in understanding the impact of IEC 61850 on the substation.

In order to take full advantage of any new technology, it is necessary to understand what it provides. The next part of this chapter gives a short summary of some of the key concepts of the standard that have the most significant impact on substation design.

3.3.7.2  IEC 61850 Substation Hierarchy

The development of different solutions in the substation protection and control system is possible only when there is good understanding of both the problem domain and the IEC 61850 standard. The modeling approach of IEC 61850 supports different solutions from centralized to distributed functions. The latter is one of the key elements of the standard that allows for utilities to rethink and optimize their substation designs.

A function in an IEC 61850-based integrated protection and control system can be local to a specific primary device (distribution feeder, transformer, etc.) or distributed and based on communications between two or more IEDs over the substation LAN.

Considering the requirements for the reliability, availability, and maintainability of functions, it is clear that in conventional systems numerous primary and backup devices need to be installed and wired in the substation. This equipment, as well as the equipment it interfaces with, must then be tested and maintained.

The interface requirements of many of these devices differ. As a result, specific multicore instrument transformers were developed that allow for accurate metering of energy or other system parameters on the one hand and provide, on the other, the high dynamic range used by, for example, protection devices.

With the introduction of IEC 61850, different interfaces have been defined that can be used by substation applications using dedicated or shared physical connections, that is, the communications links between the physical devices. The allocation of functions to different physical devices defines the requirements for the physical interfaces and, in some cases, may be implemented in more than one physical LAN or by applying multiple virtual networks on a single physical infrastructure.

Figure 3.36   Logical interfaces in IEC 61850. (From IEC TR 61850–1, Copyright IEC.)

The functions in the substation can be distributed between IEDs on the same or on different levels of the substation functional hierarchy—station, bay, or process as shown in Figure 3.36.

A significant improvement in functionality and reduction of the cost of integrated substation protection and control systems can be achieved based on the IEC 61850-based communications as described in the following.

One example where a major change is expected is at the process level of the substation. The use of nonconventional and/or conventional instrument transformers with a digital interface based on IEC 61850-9-2, or the implementation guideline IEC 61850-9-2 LE, results in improvements and can help eliminate issues related to the conflicting requirements of protection and metering IEDs as well as alleviate some of the safety risks associated with current and voltage transformers.

The interface of the instrument transformers (both conventional and nonconventional) with different types of substation protection, control, monitoring, and recording equipment as defined in IEC 61850 is through a device called an MU. The definition of an MU in IEC 61850 is as follows:

Merging unit: interface unit that accepts multiple analog CT/VT and binary inputs and produces multiple time synchronized serial unidirectional multi-drop digital point to point outputs to provide data communication via the logical interfaces 4 and 5.

MUs can have the following functionality:

  • Signal processing of all sensors—conventional or nonconventional
  • Synchronization of all measurements—three currents and three voltages
  • Analog interface—high- and low-level signals
  • Digital interface—IEC 60044-8 or IEC 61850-9-2

It is important to be able to interface with both conventional and nonconventional sensors in order to allow the implementation of the system in existing or new substations.
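
As a minimal sketch of part of the signal processing an MU performs, the conversion below maps primary quantities to the scaled integers carried in the SV frames. The 1 mA and 10 mV resolutions are the values commonly cited for the implementation guideline and are treated here as assumptions.

```python
# Sketch of the merging-unit scaled-integer representation: analog
# primary values are published as 32-bit integers with fixed scale
# factors. The scale factors shown are assumptions for illustration.

CURRENT_SCALE = 0.001   # amperes per integer count (assumed)
VOLTAGE_SCALE = 0.01    # volts per integer count (assumed)


def to_counts(value: float, scale: float) -> int:
    """Convert a primary quantity to its scaled integer for the SV frame."""
    counts = round(value / scale)
    # Clamp to the signed 32-bit range used on the wire.
    return max(-2**31, min(2**31 - 1, counts))


print(to_counts(1250.0, CURRENT_SCALE))   # 1250 A   -> 1250000 counts
print(to_counts(63508.5, VOLTAGE_SCALE))  # ~63.5 kV -> 6350850 counts
```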

Figure 3.37   Concept of the MU. (From Apostolov, A. and Janssen, M., IEC 61850 Impact on Substation Design, paper number 0633. © Copyright 2008 IEEE.)

As can be seen from Figure 3.37, the MU has elements similar to those of a typical analog input module of a conventional protection or multifunctional IED. The difference is that in this case the substation LAN acts as the digital data bus between the input module and the protection or other functions, which are located in different devices, representing the typical IEC 61850 distributed functionality.

Depending on the specific requirements of the substation, different communications architectures can be chosen as described hereafter.

3.3.7.3  IEC 61850 Substation Architectures

IEC 61850 is being implemented gradually by starting with adaptation of existing IEDs to support the new communications standard over the station bus and at the same time introducing some first process-bus-based solutions.

3.3.7.4  Station-Bus-Based Architecture

The functional hierarchy of station-bus-based architectures is shown in Figure 3.38. It represents a partial implementation of IEC 61850 in combination with conventional techniques and designs and brings some of the benefits that the IEC 61850 standard offers.

The current and voltage inputs of the IEDs (protection, control, monitoring, or recording) at the bottom of the functional hierarchy are conventional and wired to the secondary side of the substation instrument transformers using copper cables.

The aforementioned architecture does, however, offer significant advantages compared to conventional hardwired systems. It allows for the design and implementation of different protection schemes that in a conventional system would require a significant number of cross-wired binary inputs and outputs. This is especially important in large substations with multiple distribution feeders connected to the same medium voltage bus, where the number of available relay inputs and outputs in the protection IEDs might be the limiting factor in a protection scheme application. Some examples of such schemes are distribution bus protection based on the overcurrent blocking principle, breaker failure protection, trip acceleration schemes, and sympathetic trip protection.

Figure 3.38   Station bus functional architecture. (© Copyright 2012 Marco Janssen. All rights reserved.)

The relay that detects the feeder fault sends a GOOSE message over the station bus to all other relays connected to the distribution bus, indicating that it has issued a trip signal to clear the fault. This can be considered a blocking signal for all other relays on the bus. The only requirement for the scheme implementation is that the relays connected to feeders on the same distribution bus have to subscribe to receive the GOOSE messages from all other IEDs connected to that bus.
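
A compressed sketch of this blocking scheme is shown below, with a toy publish/subscribe object standing in for the multicast station bus; none of the names correspond to a real IEC 61850 API, and the blocking window is an assumed setting.

```python
# Sketch of the distribution bus blocking scheme: every feeder relay
# that detects a fault publishes a blocking message; the bus relay
# trips only if no feeder relay has claimed the fault.

import time


class GooseBus:
    """Stand-in for the multicast station bus: publish/subscribe."""

    def __init__(self):
        self.subscribers = []

    def publish(self, message: dict) -> None:
        for callback in self.subscribers:
            callback(message)


class BusRelay:
    """Overcurrent bus relay that waits a short blocking window."""

    def __init__(self, bus: GooseBus, window_s: float = 0.05):
        self.blocked = False
        self.window_s = window_s
        bus.subscribers.append(self.on_goose)

    def on_goose(self, message: dict) -> None:
        if message.get("type") == "feeder_trip":
            self.blocked = True   # a feeder relay is clearing the fault

    def overcurrent_pickup(self) -> str:
        time.sleep(self.window_s)  # wait for blocking GOOSE messages
        return "restrain" if self.blocked else "trip bus"


bus = GooseBus()
relay = BusRelay(bus)
bus.publish({"type": "feeder_trip", "source": "feeder_3_relay"})
print(relay.overcurrent_pickup())   # restrain: the fault is on a feeder
```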

The reliability of GOOSE-based schemes is achieved through the repetition of the messages with increasing time intervals until a user-defined maximum is reached. The latest state is then repeated until a new change of state results in the sending of a new GOOSE message. This is shown in Figure 3.39.

The repetition mechanism not only limits the risk that the signal will be missed by a subscribing relay. It also provides a means for continuously monitoring the virtual wiring between the different relays participating in a distributed protection application. Any problem in a device or in the communications will be detected immediately, within the limits of the maximum repetition time interval, and an alarm will be generated and/or an action will be initiated to resolve the problem. This is not possible in conventional hardwired schemes, where problems in the wiring or in relay inputs and outputs can only be detected through scheduled maintenance.
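
The retransmission pattern can be sketched as follows, assuming a doubling back-off from a short initial interval up to a configured maximum that then acts as a heartbeat; the concrete times are configuration parameters, not values fixed by the standard.

```python
# Sketch of the GOOSE repetition mechanism of Figure 3.39: after a
# state change the message is retransmitted at increasing intervals
# (doubling here, one common choice) until a maximum interval is
# reached, then repeated at that maximum as a heartbeat.

def goose_schedule(t_min=0.002, t_max=1.0, horizon=5.0):
    """Return the send times (s) following an event at t = 0."""
    times, t, interval = [0.0], 0.0, t_min
    while t + interval <= horizon:
        t += interval
        times.append(round(t, 4))
        interval = min(interval * 2, t_max)  # back off toward t_max
    return times


print(goose_schedule())
# [0.0, 0.002, 0.006, 0.014, 0.03, 0.062, 0.126, 0.254, 0.51, 1.022, ...]
# A subscriber that hears nothing for longer than t_max can alarm.
```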

One of the key requirements for the application of distributed functions using GOOSE messages is that the total scheme operating time be similar to or better than that of a hardwired conventional scheme. If the different factors that determine the operating time of a critical protection scheme such as breaker failure protection are analyzed, it is clear that a conventional scheme requires a relay to initiate the breaker failure protection through a relay output wired into an input. The relay output typically has an operating time of 3–4 ms, and it is not unusual for the input to include some filtering in order to prevent an undesired initiation of this critical function.

Figure 3.39   GOOSE message repetition mechanism. (From IEC TR 61850, Copyright IEC.)

As a result, in a conventional scheme, the time over the simple hardwired interface, that is, the transmission time between the two functions, will be between 0.5 and 0.75 cycles, which is longer than the 0.25 cycles required for critical protection applications in IEC 61850-based systems.

Another significant advantage of the GOOSE-based solutions is the improved flexibility of the protection and control schemes. Making changes to conventional wiring is very labor intensive and time consuming, while changes to the “virtual wiring” provided by IEC 61850 peer-to-peer communications require only changes in the system configuration using the substation configuration language (SCL)-based engineering tools.

3.3.7.5  Station and Process Bus Architecture

Full advantage of all the features available in the new communications standard can be taken if both the station and process bus are used. Figure 3.40 shows the functional hierarchy of such a system.

IEC 61850 communications-based distributed applications involve several different devices connected to a substation LAN. MUs will process the sensor inputs, generate the SVs for the three phase and neutral currents and voltages, format a communications message, and multicast it on the substation LAN so that it can be received and used by all the IEDs that need it to perform their functions. This “one-to-many” principle, similar to that used to distribute the GOOSE messages, provides significant advantages: it not only eliminates current and voltage transformer wiring but also supports the addition of new ideas and/or applications using the SVs at a later stage, as these can simply subscribe to receive the same sample stream.

Another device, the IO unit (IOU), will process the status inputs, generate status data, format a communications message, and multicast it on the substation LAN using GOOSE messages.

All multifunctional IEDs will receive the SV messages as well as the binary status messages. Those that have subscribed to these data then process the data, make a decision, and operate by sending another GOOSE message to trip the breaker or perform any other required action.

Figure 3.40   Station and process bus functional architecture. (© Copyright 2012 Marco Janssen. All rights reserved.)

Figure 3.41   Communications architecture for process and station bus. (© Copyright 2012 Marco Janssen. All rights reserved.)

Figure 3.42   Alternative substation design. (From Apostolov, A. and Janssen, M., IEC 61850 Impact on Substation Design, paper number 0633. © Copyright 2008 IEEE.)

Figure 3.41 shows the simplified communications architecture of the complete implementation of IEC 61850. The number of switches for both the process and station buses can be more than one, depending on the size of the substation and the requirements for reliability, availability, and maintainability.

Figure 3.42 illustrates how the substation design changes when the full implementation of IEC 61850 takes place. All copper cables used for the exchange of analog and binary signals between devices are replaced by communications messages over fiber. If the DC circuits between the substation battery and the IEDs or breakers are put aside, the “copperless” substation becomes a reality. We can then go a step further and combine all the functions necessary for multiple feeders into one multifunctional device, thus eliminating a significant number of individual IEDs. Of course, the opposite is also possible: since all the information is available on a communications bus, we can choose to implement relatively simple or even single-function devices that share their information on the network, thus creating distributed functions.

The next possible step when using station and process bus is the optimization of the switchgear. In order for the protection, control, and monitoring functions in a substation to operate correctly, several instrument transformers are placed throughout the high voltage installation. However, with the capability to send voltage and current measurements as SVs over a LAN, it is possible to eliminate some of these instrument transformers. One example is the voltage measurement needed by distance protection. Traditionally, voltage transformers are installed in each outgoing feeder. However, if voltage transformers are installed on the busbar, the voltage measurements can be transmitted over the LAN to each function requiring them. These concepts are not new and have already been applied in conventional substations. In conventional substations, however, they require large amounts of (long) cables and several auxiliary relays, limiting or even eliminating the benefit of having fewer voltage transformers.

Process-bus-based applications offer important advantages over conventional hardwired analog circuits. The first very important one is the significant reduction in the cost of the system due to the fact that multiple copper cables are replaced with a small number of fiber optic cables.

Using a process bus also results in the practical elimination of saturation of conventional CTs because the resistance of the current leads is eliminated. As the impedance of the MU current inputs is very small, the possibility of CT saturation, and of all the protection issues associated with it, is significantly reduced. If nonconventional instrument transformers are used in combination with the MUs and process bus, the issue of CT saturation is eliminated completely, as these nonconventional CTs do not use inductive circuits to transduce the current.

Process-bus-based solutions also improve the safety of the substation by eliminating one of the main safety-related problems: an open current circuit condition. Since the only current circuit is between the secondary of a current transformer and the input of the MU located right next to it, the probability of an open current circuit condition is very small. It becomes nonexistent if optical current sensors are used.

Last, but not least, the process bus improves the flexibility of the protection, monitoring, and control systems. Since current circuits cannot easily be switched due to open circuit concerns, the application of bus differential protection, as well as some backup protection schemes, becomes more complicated in conventional systems. This is not an issue with a process bus, because any changes will only require modifications in the subscription of the protection IEDs receiving the sampled analog values over IEC 61850 9-2.
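
The point about reconfiguration by subscription can be shown with a toy model: moving a bay out of a differential zone is an edit to a subscription list rather than a change to live current wiring. Stream and class names below are hypothetical.

```python
# Sketch of "rewiring" by subscription: a bus differential IED defines
# its protection zone by the set of SV streams it subscribes to.

class ProtectionIED:
    def __init__(self, name: str):
        self.name = name
        self.sv_subscriptions = set()

    def subscribe(self, sv_stream: str) -> None:
        self.sv_subscriptions.add(sv_stream)

    def unsubscribe(self, sv_stream: str) -> None:
        self.sv_subscriptions.discard(sv_stream)


bus_diff = ProtectionIED("87B")          # bus differential relay
for bay in ("MU_bay1", "MU_bay2", "MU_bay3"):
    bus_diff.subscribe(bay)

# Bay 3 is transferred to the other bus section: the reconfiguration
# is a subscription edit instead of switching live current circuits.
bus_diff.unsubscribe("MU_bay3")
print(sorted(bus_diff.sv_subscriptions))  # ['MU_bay1', 'MU_bay2']
```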

3.3.8  Role of Substations in Smart Grid

Substations in a smart grid will move beyond basic protection and traditional automation schemes to bring complexity around distributed functional and communications architectures, more advanced local analytics, and data management. Intelligence will migrate from the traditional centralized functions and decisions at the energy management and DMS level to the substations to enhance the reliability, security, and responsiveness of the T&D system. The enterprise system applications will become more advanced in being able to coordinate the distributed intelligence in the substations and feeders in the field to ensure control area and system-wide coordination and efficiency.

The integration of a relatively large amount of new generation and active load technologies into the electric grid introduces real-time system control and operational challenges around reliability and security of the power supply. These challenges, if not addressed properly, will result in degradation of service, diminished asset service life, and unexpected grid failures, which will impact the financial performance of the utility’s business operations and its public image. If these challenges are met effectively, optimal solutions can be realized by the utility to maximize return on investments in advanced technologies. To meet these needs, a number of challenges must be addressed:

  • Very high numbers of operating contingencies different from “system as designed” expectations
  • High penetration of intermittent renewable and distributed energy resources, with their (current) characteristic of limited controllability and dispatchability
  • PQ issues (voltage and frequency variation) that cannot be readily addressed by conventional solutions
  • Highly distributed, advanced control and operations logic
  • Slow response during quickly developing disturbances
  • Volatility of generation and demand patterns and wholesale market demand elasticity
  • Adaptability of advanced protection schemes to rapidly changing operational behavior due to the intermittent nature of renewable and DG resources

In addition, with wide deployment of the smart grid, there will be an abundance of new operational and nonoperational devices and technologies connected to the wide area grid. The wide range of devices will include smart meters; advanced monitoring, protection, control, and automation; EV chargers; dispatchable and nondispatchable DG resources; energy storage; etc. Effective and real-time management and support of these devices will introduce enormous challenges for grid operations and maintenance. To effectively address all these challenges, it is necessary to engineer, design, and operate the electric grid with an overarching solution in mind, enabling overall system stability and integrity. A smart grid solution, from field devices to the utility’s control room, utilizing intelligent sensors and monitoring; advanced grid analytical, operational, and nonoperational applications; comparative analysis; and visualization, will enable wide area and real-time operational anomaly detection and system “health” predictability. These will allow for improved decision-making capabilities, PQ, and reliability. An integrated approach will also help to improve situational awareness, marginal stress evaluation, and congestion management, and will recommend corrective actions to effectively manage high penetration of new alternative generation resources and maximize overall grid stability.

Some expected smart substation transformations are summarized in the following sections.

3.3.8.1  Engineering and Design

Future substation designs will be driven by current and new well-developed technologies and standards, as well as some new methodologies that differ from the existing philosophy. The design requirements for the next generation of substations will be based on the total cost of ownership and shall be aimed either at cost reduction while maintaining the same technical performance or at performance improvement while assuring a positive cost-benefit ratio. Based on these considerations, smart substation design may take the form of (a) retrofitting existing substations with a major replacement of the legacy equipment and minimal disruption to the continuity of service, (b) deploying brand-new substation designs using the latest off-the-shelf technologies, or (c) greenfield substation designs that take energy market participation, profit optimization, and system operation risk reduction into combined consideration.

Designing the next generation of substations will require an excellent understanding not only of the primary and secondary equipment in the substation but also of the role of the substation in the grid, the region it serves, and the customers connected to it. Signals for monitoring and control will migrate from analog to digital, and the availability of new types of sensors, such as nonconventional current and voltage instrument transformers, will require shifting the engineering and design process from a T&D network focus to one that also includes the substation information and communications architecture. This will require a better understanding of communications networks, data storage, and data exchange needs in the substation. As with communications networks used in other process- or time-critical industries, redundancy, security, and bandwidth are an essential part of the design process. Smart substations will require protocols specific to the needs of electric utilities while ensuring interconnectivity and interoperability of the protection, monitoring, control, and data acquisition devices. One approach to overcoming these challenges is to modify the engineering and design documentation process so that it includes detailed communications schematics and logic charts depicting this virtualized circuitry and the data communications pathways.

3.3.8.2  Information Infrastructure

Advances in processing technology have been a major enabler of smarter substations with the cost-effective digitization of protection, monitoring, and control devices in the substation. Digitization of substation devices has also enabled the increase in control and automation functionality and, with it, the proliferation of real-time operational and nonoperational data available in the substation. The availability of the large amounts of data has driven the need for higher speed communications within the substation as well as between the substation and feeder devices and upstream from the substation to SCADA systems and other enterprise applications, such as outage management and asset management. The key is to filter and process these data so that meaningful information from the T&D system can be made available on a timely basis to appropriate users of the data, such as operations, planning, asset maintenance, and other utility enterprise applications.

Central to the smart grid concept is the design and deployment of a two-way communications system linking the central office to the substations, intelligent network devices, and ultimately the customer meter. This communications system is of paramount importance and serves as the nervous system of the smart grid. It will use a variety of technologies, ranging from wireless and RF to broadband over power line (BPL), most likely all within the same utility. The management of this communications network will be new and challenging to many utilities and will require new engineering and asset management applications. Enhanced security will be required for field communications, application interfaces, and user access. An advanced EMS and DMS will need to include data security servers to ensure secure communications with field devices and secure data exchange with other applications. The use of IP-based communications protocols will allow utilities to take advantage of commercially available and open-standard solutions for securing network and interface communications.

IEC 61850 will greatly improve the way we communicate between devices. For the first time, vendors and utilities have agreed upon an international communications standard. This will allow an unprecedented level of interoperability between devices of multiple vendors in a seamless fashion. IEC 61850 supports both client/server communications and peer-to-peer communications. The IEC process bus will allow for communications to the next generation of smart sensors. The self-description feature of IEC 61850 will greatly reduce configuration costs, and the interoperable engineering process will allow for the reuse of solutions across multiple platforms. Also, because of a single standard for all devices, training, engineering, and commissioning costs can be greatly reduced.

3.3.8.3  Operation and Maintenance

The challenge of operations and maintenance in advanced substations with smart devices is usually one of acceptance by personnel. This is a critical part of the change management process. Increased amounts of data from smart substations will increase the amount of information available to system operators to improve control of the T&D network and respond to system events. Advanced data integration and automation applications in the substation will be able to provide a faster response to changing network conditions and events and therefore reduce the burden on system operators, especially during multiple or major system events. For example, after a fault on a distribution feeder, instead of presenting the system operator with a lockout alarm, accompanied by associated low-voltage, fault passage, and battery alarms, and leaving it up to the operator to drill down, diagnose, and work out a restoration strategy, the applications will notify the operator that a fault has occurred and that analysis and restoration are in progress in that area. The system will then analyze the scope of the fault using the information available: tracing the current network model; identifying the currently relevant safety documents, operational restrictions, and sensitive customers; and locating the fault using data from the field. The master system automatically runs load flow studies identifying current loading, available capacities, and possible weaknesses, and uses this information to develop a restoration strategy. The system then attempts isolation of the fault and maximum restoration of customers with safe load transfers, potentially involving multilevel feeder reconfiguration to prevent cascading overloads on adjacent circuits. Once the reconfiguration is complete, the system can alert the operator to the outcome and even automatically dispatch the most appropriate crew to the identified faulted section.

3.3.8.4  Enterprise Integration

Enterprise integration is an essential component of the smart grid architecture. To increase the value of an integrated smart grid solution, the smart substation will need to interface and share data with numerous other applications. For example, building on the benefits of an AMI with extensive communication coverage across the distribution system and obtaining operational data from the customer point of delivery (such as voltage, power factor, loss of supply, etc.) help to improve outage management and IVVC implementation locally at the substation level. More data available from substations will also allow more accurate modeling and real-time analysis of the distribution system and will enable optimization algorithms to run, reducing peak load and deferring investment in transmission and distribution assets. By collecting and analyzing nonoperational data, such as key asset performance information, sophisticated computer-based models can be used to assess current performance and predict possible failures of substation equipment. This process combined with other operational systems, such as mobile workforce management, will significantly change the maintenance regime for the T&D system.

3.3.8.5  Testing and Commissioning

The challenge of commissioning a next-generation substation is that traditional test procedures cannot adequately test the virtual circuitry. The best way to overcome this challenge is to use a system test methodology, where functions are tested end to end as part of the virtual system. This allows performance and behavior of the control system to be objectively measured and validated. Significant changes will also be seen in the area of substation interaction and automation database management and the reduction of configuration costs. There is currently work under way to harmonize the EPRI CIM model and enterprise service bus IEC 61968 standards with the substation IEC 61850 protocol standards. Bringing these standards together will greatly reduce the costs of configuring and maintaining a master station through plug and play compatibility and database self-description.

3.4  Transmission Systems

Transmission systems are the bulk power delivery systems of electric utilities; they carry millions of megawatt-hours of energy each day. There will be an increased focus on the transmission level of the smart grid as transmission systems become more complex and more interconnected and serve as the power delivery system for more renewable energy sources. Several monitoring and control technologies ensure efficient operation, safety, and reliability at the transmission level of the grid and are key to any smart grid deployment. While some smart grid technologies for transmission systems may take proactive action to automatically control the network and have a localized effect at their point of connection, other technologies are transmission systems in their own right and may deliver energy from one location to the load center in a precisely controlled manner. These intelligent systems are able to offer dynamic control of not only power flow but many other aspects of a stable network, including voltage, reactive power, and frequency.

3.4.1  Energy Management Systems

Jay Giri and Thomas Morris

3.4.1.1  History of Energy Management Systems

Operating the electric grid at close to normal frequency, without causing any unexpected disconnections of load or generation, is known as maintaining electrical integrity or “normal synchronous operation.” The first centralized control centers designed to maintain the integrity of the electric grid were implemented in the 1950s.

Control centers use a software and hardware system called an energy management system (EMS). Based on a centralized command and control paradigm, the EMS has evolved over the past six decades into much larger and more complex systems through computer automation. But the newer systems have the same simple mission as the original one: keep the power available at all times.

An EMS monitors and manages flows in the higher-voltage transmission network. A distribution management system (DMS) monitors and manages flows in the lower-voltage distribution network.

  • Real-time monitoring of grid conditions. The first EMS application placed in control centers across the country was known as the supervisory control and data acquisition (SCADA) system. SCADA allows electric system operators to visually monitor grid conditions from a central location and to take control and remedial actions remotely via the SCADA system if adverse conditions are detected. The initial SCADA systems were hardwired analog systems.
  • Maintaining system frequency. The next function implemented at control centers was load frequency control (LFC). The objective of LFC is to automatically maintain system frequency as load changes by adjusting generation output accordingly. In the early implementations of LFC, the control center operator visually monitored the system frequency measurement and periodically sent incremental change signals to generators via analog-wired connections or by placing phone calls to generating plant operators to keep generation output close to system load demand. Later, as analog systems transitioned to digital, LFC became the first automated application to help the control center operator keep power available at all times. (A numeric sketch of this frequency control idea follows the list.)
  • Sharing electricity with neighbors. The next progression in system monitoring and control was interconnecting one power utility with neighboring utilities to increase overall grid reliability by allowing power sharing during emergencies and to exchange cheaper power during normal operations.
  • Modern control centers. Figure 3.43 shows the suite of real-time and off-line functions that comprise a modern control center.
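
The LFC item in the list above can be illustrated numerically with the textbook area control error (ACE), which combines the tie-line interchange deviation with a frequency-bias term; LFC raises or lowers generation to drive ACE toward zero. This is a minimal sketch with illustrative numbers, not any vendor's implementation.

```python
# Textbook area control error (ACE) used by load frequency control:
# ACE = (actual - scheduled tie-line flow) + beta * (actual - scheduled
# frequency), where beta > 0 is the area frequency-bias magnitude in
# MW/Hz. A negative ACE means the area is deficient, so LFC raises
# generation. All numbers below are illustrative assumptions.

def area_control_error(tie_actual_mw: float, tie_sched_mw: float,
                       freq_hz: float, freq_sched_hz: float = 60.0,
                       beta_mw_per_hz: float = 500.0) -> float:
    """Return ACE in MW; the sign tells LFC which way to move generation."""
    return ((tie_actual_mw - tie_sched_mw)
            + beta_mw_per_hz * (freq_hz - freq_sched_hz))


# Load rose inside the area: imports exceed schedule and frequency sags.
ace = area_control_error(tie_actual_mw=-120.0, tie_sched_mw=-100.0,
                         freq_hz=59.98)
print(f"ACE = {ace:.1f} MW")  # (-20) + 500*(-0.02) = -30.0 -> raise generation
```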

3.4.1.2  Current EMS Technology

There are over 3000 electric service territories in the United States responsible for managing their portion of the electric grid. Most of the very high voltage transmission substations in the United States have sensors and meters that monitor real-time operating conditions and have the means to remotely operate transmission equipment, such as circuit breakers and transformer tap changers. Less than 25% of the distribution substations have any remote monitoring and control capability, and the final supply to the end user typically has no technology at all. However, this is changing with technology evolution and the reduction in monitoring and control device costs. Smart grid has been driving increased implementation of intelligent residential meters and other technologies and applications that will help provide more visibility of the T&D network through SCADA.

Figure 3.43   Control center applications overview. (© Copyright 2012 Alstom Grid. All rights reserved.)

Figure 3.44 shows a typical modern-day EMS control center environment with the different display screens the operator uses at the console to monitor and control grid conditions. The control center consists of many such operator consoles, as well as large wallboards or digital displays that provide a bird’s eye view of the entire system. The operators’ responsibilities are to monitor data on their consoles, coordinate with other operators within their control center, coordinate with plant operators, and periodically exchange information with neighboring system EMS operators. The majority of the time, the grid is relatively quiescent with no adverse conditions. But when a disturbance suddenly occurs, the operators each need to perform their specific individual tasks and coordinate with the other operators in the control center in order to use their collective expertise to identify the specific actions that may need to be taken to mitigate the impact of the disturbance.

3.4.1.3  Advances in Energy Management Systems for the Smart Grid

3.4.1.3.1  Grid Operator Visualization Advances

Timely visualization of real-time grid conditions is essential for successful grid operations.

In the aftermath of the 1965 blackout of the northeast United States and Canada, the findings from the blackout report included the following: “control centers should be equipped with display and recording equipment which provide operators with as clear a picture of system conditions as possible.” Since then many more blackouts have occurred, small and large, around the world, and in almost all cases, improvements in visibility of grid conditions were identified as one of the primary recommendations.

Figure 3.44   EMS control center operator console. (© Copyright 2012 Alstom Grid. All rights reserved.)

On August 14, 2003, the largest blackout in the history of the North American power grid occurred. Subsequently, numerous experts from across the industry were brought together to create a blackout investigation team. A primary objective of this team was to perform in-depth postevent analyses to identify the root causes and, more importantly, to make recommendations on what could be done to prevent future occurrences of such events. The report (the United States–Canada, 2004) identified four root causes: inadequate system understanding, inadequate situational awareness (SA), inadequate tree trimming, and inadequate reliability coordinator diagnostic support. This report gave a sudden new prominence to the term “situation awareness” or “situational awareness.”

There are several definitions of SA. Very simply, SA means to be constantly aware of the health of changing power system grid conditions. Other definitions include “Being cognizant of the current power system state and the potential imminent impact on grid operations” and “The perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and projection of their status in the near future” [1].

An essential aspect of SA for grid operations is being able to extract and concisely present the information contained in the vast amount of ever-changing grid conditions. An advanced visualization framework (AVF) is necessary to present real-time conditions in a timely, prompt manner. The AVF needs to provide the ability to efficiently navigate and drill down to discover additional information, such as the specific location of a problem. More importantly, the AVF needs to provide the ability to identify and implement corrective actions in order to mitigate any risks to successful grid operations. Operators do not want to know only that there is a problem now or that one is looming on the immediate horizon; they also want to know how to fix it.

A number of visualization products have been developed over the past decade. These include Powerworld [2], RTDMS [3], Space-Time Insight, and so on.

The following are examples of currently available Alstom technology.

A frequently cited human limitation has been described as Miller’s magical number seven, plus or minus two [4]. Miller’s observation was that humans have a limited capacity for the number of items or “chunks” of information that they can maintain in their working memory. Therefore, as increasing volumes of data are streamed into the control center, one must keep in mind that there is a limit on how much of these data are actually useful to the operator. As per Miller, the operator can typically handle only five to nine such “chunks” of information. The limitation with the traditional display technologies has been that they approach the problem by “rolling up” (aggregating) the data and then allowing the operator to “drill down” for details. The result is a time-consuming and cognitively expensive process [5].

Figure 3.45   Evolution of EMS SA capabilities. (© Copyright 2012 Alstom Grid. All rights reserved.)

The challenge is to translate the ever-increasing deluge of measurement data being brought into the control center into useful, bite-size, digestible “chunks” of information. This has been the primary objective of all the recent SA developments for grid operations: “converting vast volumes of grid data into useful information and showing it on a display screen.”

As the saying goes, "a picture is worth a thousand words." More importantly, the correct picture is worth a million words! Providing the grid operator with a concise depiction of voluminous data is meaningful, but providing a concise depiction of only those data that need immediate operator attention is immensely more meaningful. This is the objective of advanced, intelligent SA: to provide timely information on current system conditions that may need prompt action.

The way to develop the “correct” picture is to organize the visualization presentation around operator goals; that is, what is the task result the operator is seeking? Use-cases need to be developed to document the specific actions an operator takes in order to reach a specific goal. These use-cases can then be used to develop efficient navigation capabilities to quickly go from receipt of an alert to analyzing the “correct picture” and determining the appropriate course of action.

These are the requirements upon which today’s advanced visualization and SA capabilities have been developed. SA capabilities continue to be developed and enhanced to help improve grid operations. Figure 3.45 shows how SA has evolved with analytical tools over the past few decades and what is foreseen for the immediate future.

Generation 1 visualization and applications were focused on monitoring and control; these capabilities were developed in the 1980s and 1990s. Generation 2 focused on creating information from data to facilitate decision making in order to take corrective action; these capabilities were developed in the last two decades and are operational in many control centers around the world. Generation 3 is foreseen to focus on developing real-time measures to determine exposure and associated risk related to ensuring integrity of the grid. This generation will likely be focused on stochastic analytics, as well as heuristic and intelligent systems, for the development of advanced risk management and mitigation applications and visualization capabilities. These developments will be aided by ongoing technology advances such as subsecond, synchronous measurements, coupled with fast-acting, subsecond controllers.

3.4.1.3.2  Decision Support Systems

Most control center operator decisions today are essentially reactive. Current information, as well as some recent history, is used to reactively make an assessment of the current state and its vulnerability. Operators then extrapolate from current conditions and postulate future conditions based on personal experience and planned forecast schedules.

The next step is to help operators make decisions that are preventive. Once there is confidence in the ability to make reactive decisions, operators will need to rely on “what-if” analytical tools to be able to make decisions that will prevent adverse conditions if a specific contingency or disturbance were to occur. The focus therefore shifts from “problem analysis” (reactive) to “decision making” (preventive).

The industry trend next foresees predictive decision making, and in the future, decisions will be proactive. These types of decision-making processes are the foundation of a decision support system (DSS) that will be essential to handle operation of smarter grids with increasing complexity and more diverse generation and load types. The DSS will use more accurate forecast information and more advanced analytical tools to be able to confidently predict system conditions and use what-if scenarios to be able to take action now in order to preclude possible problematic scenarios in the future. The components of DSS include the following:

  • AVF
  • Geospatial views of the grid
  • Dynamic dashboards generated on demand
  • Holistic views combining data from multiple diverse sources
  • Use-case analysis to enhance ergonomics
  • Advanced, fast, alert systems
  • Root-cause analysis to quickly identify sources of problems
  • Diagnostic tools that recommend corrective actions
  • Look-ahead analysis to predict imminent system conditions

Figure 3.46 is an overview of a look-ahead analytical tool to help the operator make preventive decisions in order to obviate potential problems. The current system state is used to calculate projected future system states based on load forecasts, generation schedules, etc., to determine whether conditions in the future are safe. As the figure depicts, if the projections indicate a problem is imminent, the operator could then determine and implement an action, in advance, to ensure that the problem is avoided.
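
As a simple illustration of this projection-and-check loop, the following Python sketch flags the look-ahead hours at which a projected line loading would exceed its limit. All names and values are hypothetical, and the linear scaling stands in for the full power flow studies a real look-ahead engine would run:

    from typing import List

    def look_ahead(current_mw: float, forecast_factors: List[float],
                   line_limit_mw: float) -> List[int]:
        """Return the look-ahead hours (1-based) with projected overloads."""
        violations = []
        for hour, factor in enumerate(forecast_factors, start=1):
            projected_mw = current_mw * factor  # naive scaling by forecast factor
            if projected_mw > line_limit_mw:
                violations.append(hour)
        return violations

    # A line carrying 850 MW against a 900 MW limit, with rising evening load:
    print(look_ahead(850.0, [0.95, 1.02, 1.10, 1.21], 900.0))  # -> [3, 4]

Flagging hours 3 and 4 in advance gives the operator time to implement a preventive action, which is the point of the look-ahead tool described above.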

Figure 3.47 is one example of a DSS implemented by Alstom Grid. It consists of a central DSS server and database. The DSS server provides information to a map board (for wide area visualization) and to operator workstations. A power system simulator is used as a look-ahead engine to forecast immediate future conditions. This look-ahead data, together with traditional EMS SCADA and state estimator data, are shown at the operator workstations to facilitate and improve timely, preventive decision making.

Figure 3.46   Look-ahead analysis for preventive control. (© Copyright 2012 Alstom Grid. All rights reserved.)

Figure 3.47   DSS implementation. (© Copyright 2012 Alstom Grid. All rights reserved.)

3.4.1.3.3  Control Centers of the Future

Automation of the grid will evolve toward more decentralized, intelligent, and localized control. This is the vision of smart grid at the transmission level. Evolution toward a "smarter" transmission grid is imminent and will take many forms of predictive and corrective actions: from avoiding system congestion while maximizing efficiency and minimizing supply costs to reacting quickly to system faults while maintaining power to as many customers as possible. These are goals not only at the transmission level but also at the distribution level of the electric grid.

The future will likely see more generation sources closer to the load centers. Residential subdivisions could have their own local fuel cells supplying power to 20 or 30 households; this will result in the creation of local microgrids that will attempt to optimize benefits for that local area. This would reduce dependence on the transmission grid to transfer power from remote locations to populated load centers. As renewable energy costs become more competitive, there will be growth in generation sources such as wind power, solar cells, and possibly geothermal, tidal, and ocean power. Customers will be able to monitor the current price of electricity and decide whether or not to run the dishwasher, using a "smart metering" scheme. This will flatten the utility's load demand profile and make generation dispatch more predictable.

In addition to more local generation, use of renewable energy and increased customer control, new types of measurements will be deployed aggressively worldwide. Already, globally synchronized measurements taken in the subsecond range, such as the phasor measurement units (PMUs) described earlier, are being used in control centers to facilitate earlier and faster detection of problems and to make it easier to assess conditions across the grid. Novel control center applications will be developed to use this new type of synchronized measurement technology to further improve the ability to maintain the integrity of the power system. These applications will also be able to identify disturbances, unplanned events, and stability problems at a much faster rate.

Continual development of control center applications and tools will play a critical role in driving smart grid advances in the transmission arena: wide area measurements and control, congestion alleviation, increased power delivery efficiency and reliability, and system-wide stability and security.

3.4.1.4  Control System Cybersecurity Considerations

3.4.1.4.1  Introduction

The North American Electric Reliability Corporation (NERC) Critical Infrastructure Protection (CIP) Standards 002–009 [6] require utilities and other responsible entities to place critical cyber assets within an electronic security perimeter. The electronic security perimeters must be subjected to vulnerability analyses, use access control technologies, and include systems to monitor and log the electronic security perimeter access. The Federal Energy Regulatory Commission (FERC) requires responsible entities involved in bulk electricity transmission to adhere to the NERC CIP standards. No such regulation exists for the electric distribution systems in the United States. Electronic perimeter security minimizes the threat of illicit network penetrations; however, persons with electronic access to control systems within the electronic security perimeter still remain a threat. Such persons include hackers who have penetrated the electronic security perimeter via external network connections, disgruntled insiders, and hackers who may penetrate wireless interconnection points within the electronic security perimeter.

SCADA systems remotely monitor and control grid physical assets. SCADA systems are used in power transmission and distribution systems for SA and control. Present-day SCADA systems are commonly connected to corporate intranets, which may have connections to the Internet. SCADA communications protocols such as MODBUS, DNP3, and Allen Bradley's Ethernet Industrial Protocol lack authentication features to prove the origin or age of network traffic. This lack of authentication capability leads to the potential for network penetrators and disgruntled insiders to inject false data and false command packets into a SCADA system, either through direct creation of such packets or through replay attacks.

Modern power systems are being upgraded with the addition of PMUs and phasor data concentrators (PDCs) that facilitate wide area transmission system SA. The IEEE C37.118 protocol carries phasor measurements between PMUs and PDCs to historians and to EMSs. As with MODBUS and DNP3, the IEEE C37.118 protocol does not include a cryptographic digital signature. As such, a hacker or disgruntled insider may potentially inject false synchrophasor data into a transmission control system network without detection. Furthermore, the IEEE C37.118 protocol includes command frames used to configure PMUs and PDCs. False command frames may also be injected in a manner similar to that used to inject false data frames.

IEC 61850 is one of the new protocol stacks developed to increase interoperability among protection and control devices (IEDs—intelligent electronic devices) in the substation. However, the IEC 61850 protocol does not directly include cybersecurity features, though a separate IEC recommendation [7], IEC 62351, guides users on how to secure an IEC 61850 network installation. IEC 61850 offers features such as a standardized XML-based substation configuration language (SCL) for describing and configuring substation protection and control devices. IEC 61850 also offers standardized data-naming conventions for power system components. Such standardization greatly simplifies power system management and configuration, though it is also an enabler for hackers since it can minimize a hacker's learning curve. It is imperative that IEC 61850 installations adhere to the IEC 62351 recommendations.

3.4.1.4.2  Network Penetration Threats

There are three primary threats to process control systems: sensor measurement injection, command injection, and denial of service (DOS).

Sensor measurement injection attacks inject false sensor measurement data into a control system. Since control systems rely on feedback control loops before making control decisions, protecting the integrity of the sensor measurements is critical. Sensor measurement injection can be used by attackers to cause control algorithms to make misinformed decisions.

Command injection attacks inject false control commands into a control system. Control injection can be classified into two categories. First, human operators oversee control systems and occasionally intercede with supervisory control actions, such as opening a breaker. Hackers may attempt to inject false supervisory control actions into a control system network. Second, remote terminal units (RTUs) and IEDs protect, monitor, and control grid assets. The protection and control algorithms take the form of ladder logic, C code, and registers that perform calculations and hold key control parameters such as high and low limits, comparison, and gating control actions. Hackers can use command injection attacks to overwrite ladder logic, C code, and remote terminal register settings.

DOS attacks attempt to disrupt the communications link between the remote terminal and master terminal or human machine interface. Disrupting the communications link between master terminal or human machine interface and the remote terminal affects the feedback control loop and makes process control impossible. DOS attacks take many forms. A common DOS attack attempts to overwhelm hardware or software so that it is no longer responsive.

3.4.1.4.3  Isolating the Control System Network

Control systems should be isolated from corporate networks or LANs to minimize the potential of illicit penetration via wired networks. Corporate networks are used by most employees of a company. Corporate networks often include connections to the WWW (Internet), some via wireless LAN connections using IEEE 802.11 protocols, and portable nodes such as laptop computers that come and go from the network. They generally allow the use of e-mail and typically see frequent use of USB disk drives. All of these characteristics lead to cybersecurity vulnerabilities and the need to isolate the control system network from the corporate network.

Connections to the WWW are a common point for external network penetration. Hackers commonly use port scanning tools to scan for TCP and UDP services. Contemporary port scanning software such as NMAP [8] can target specific IP address ranges, find TCP and UDP services, identify service daemon version numbers, and identify operating system names and version numbers. Armed with such information, hackers can use look-up tables available on the Internet to find exploits targeted at specific versions of specific network services running on specific operating system platforms. These exploits often allow hackers to bypass network defenses and penetrate the corporate network.

Wireless LANs on corporate networks are also a significant weak link. The IEEE 802.11 standards include multiple security substandards, of which a predominant group has been cracked and is subject to penetration attacks [9]. The Wired Equivalent Privacy (WEP) standard is vulnerable to exploit in less than 60 s. The TKIP portion of the Wi-Fi Protected Access (WPA) standard has also been cracked. These vulnerabilities allow an attacker in close proximity to a corporate network to penetrate the corporate network for further port scanning, eavesdropping, and network traffic injection.

Portable nodes such as laptop computers commonly travel between many networks. For instance, a corporate user may use his or her laptop at home, at the local coffee shop, or in the airport and then later connect the laptop to the corporate network. External networks such as the home, coffee shop, and airport networks often have less robust cybersecurity profiles and provide a convenient platform for injecting malware such as key loggers and root kits onto corporate laptops via viruses and worms. When the laptop returns to the corporate network infected with a root kit or key logger, it may offer a backdoor for hackers to then penetrate the corporate network for further port scanning, eavesdropping, and network traffic injection.

Figure 3.48   (a) Insecure versus (b) isolated control system.

Corporate employees almost always have e-mail access. E-mail is a very common platform for infecting computers in a corporate network. Hackers use spam e-mail to spread viruses containing root kits and key loggers, which may then offer a backdoor for penetrating the corporate network for further port scanning, eavesdropping, and network traffic injection.

Another malware injection vector is through thumb drives. In April of 2010, the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) released an alert warning control system operators of the threat of USB drives with autorun features that can be used to inject malware. The advisory recommends control system operators disable CD-ROM autorun capability, establish strict policies for the use of USB drives on control system networks, and train users on the treatment of these drives.

The aforementioned penetration threats (connections to the WWW, usage of wireless LANs, and the use of laptop computers, e-mail, and USB drives) lead to the need for isolation of the control system network from the corporate network. Figure 3.48 shows two control system network architectures: an insecure architecture and an architecture secured via isolation.

Figure 3.48a shows seemingly separate corporate and control system networks. Often in such network arrangements, the corporate network and control system network will be separated via routers and often the two networks will be on separate virtual networks. However, if there is no mechanism in place to stop unauthorized network traffic from entering the control system network from the corporate network, penetrators can harm the control system via data or control injection attacks or via DOS attacks.

Figure 3.48b shows a control system network isolated from a corporate network. The diagram labels the box between the networks as a firewall, an intrusion detection system (IDS), and an access control system. The firewall can be used to limit access between the two networks. NAT (network address translation) firewalls hide the internal IP addresses of nodes on the control system network from nodes on the corporate network. This protects the control system nodes from port scanning attacks. Further, firewalls can be used to scan the contents of network packets for signatures of known attacks. Firewalls can also be used as gateway devices that limit traffic to only certain applications on specific TCP and UDP ports.

Access control may reside in the firewall or may reside on a separate server within the control system network. Access control schemes limit network access to authorized individuals and systems. Access control schemes vary in strength. A simple access control scheme is the use of user IDs and passwords. NERC CIP 007-3, Cyber Security—Systems Security Management, requires that when passwords must be used, they be at least six characters long and include a mixture of letters, numbers, and special characters. The use of passwords for access control should be avoided wherever possible. Password systems are subject to dictionary attacks and other brute force attacks. Also, password files are vulnerable to exploits, including password files in control systems [10].
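
A minimal Python sketch of the six-character, mixed-composition password rule quoted above might look like the following (the function name and the use of string.punctuation to represent "special characters" are our own choices, not part of the standard):

    import string

    def meets_cip_password_rule(pw: str) -> bool:
        """Check the quoted CIP 007-3 rule: >= 6 chars, letters + digits + specials."""
        has_letter = any(c.isalpha() for c in pw)
        has_digit = any(c.isdigit() for c in pw)
        has_special = any(c in string.punctuation for c in pw)
        return len(pw) >= 6 and has_letter and has_digit and has_special

    print(meets_cip_password_rule("relay7!"))  # True
    print(meets_cip_password_rule("breaker"))  # False: no digit or special character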

A more robust access control system may use a public key infrastructure (PKI) to provide all control system users and systems with a certificate. PKI systems assign individual public/private key pairs to each user or system in a network. Certificates signed by a certification authority are used to communicate a user’s or system’s public key to other users or systems. When a user or system accesses a network device, a challenge response protocol can be used to allow the connecting user or system to authenticate identification by proving the user or system possesses the private key associated with the public key in their certificate. Systems within a PKI-protected network may also confirm certificate validity via an inquiry to the network certificate authority. Certificates may be revoked by a certificate authority and PKI certificates also have expiration dates. PKI provides a good means for adhering to NERC CIP requirements that require access control and encourage the use of individual user accounts with individual roles. Role-based access control allows each user to be assigned privileges, which match his or her work needs, without providing excess privileges that may allow a user to inadvertently or intentionally harm a system.
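
The challenge-response idea can be sketched in Python with ECDSA keys from the third-party cryptography package. The key pair is generated inline for brevity; in a real PKI the public key would arrive in a CA-signed certificate and be checked for validity and revocation, steps omitted here:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    user_private_key = ec.generate_private_key(ec.SECP256R1())
    user_public_key = user_private_key.public_key()  # normally from the certificate

    challenge = os.urandom(32)  # verifier sends a random nonce
    response = user_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # Raises InvalidSignature on failure; success proves possession of the
    # private key matching the public key in the user's certificate.
    user_public_key.verify(response, challenge, ec.ECDSA(hashes.SHA256()))
    print("challenge-response authentication succeeded")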

IDSs are used to monitor network activity for patterns related to cybersecurity threats. IDSs may reside within a firewall or external to the firewall. Often, multiple types of IDSs are used on a single network. There are two basic types of IDS: signature-based IDS and statistical IDS.

A signature-based IDS scans network packets for signatures of known attacks. If a packet matches a known signature, the packet is flagged and an alert is generated. The alert may be audible, sent by e-mail, or just written to a file for later review. Signature-based IDSs are generally deterministic, meaning that they will always detect an attack that matches a known signature. Because signature-based IDSs monitor for exact pattern matches, they can be bypassed by small changes to previously known attacks. Also, signature-based IDSs cannot detect completely new attacks since, by definition, no signature will exist to match. Signature-based IDSs are also relatively fast, which can be important in real-time system applications. Some work has been done to develop signature-based IDS patterns for control systems using SNORT® (an open source network intrusion prevention and detection system [IDS/IPS]) for the MODBUS and DNP3 protocols [11]. Additionally, Oman and Phillips have used signature-based methods to detect SCADA cyber intrusions [12].
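
As a toy illustration of the signature-matching idea, the Python sketch below flags any payload containing a byte pattern from a known-attack list. The pattern and attack name are hypothetical, and production rules, such as the SNORT rules cited above, match on far richer header and protocol-state criteria:

    # Hypothetical signature: a MODBUS/TCP "write single coil" frame forcing
    # coil 1 on (MBAP header followed by function code 05, address, value).
    SIGNATURES = {
        bytes.fromhex("0000 0000 0006 01 05 0001 ff00"): "MODBUS forced-coil probe",
    }

    def match_signatures(payload: bytes):
        """Return the names of all known-attack signatures found in the payload."""
        return [name for sig, name in SIGNATURES.items() if sig in payload]

    print(match_signatures(bytes.fromhex("0000 0000 0006 01 05 0001 ff00")))
    # -> ['MODBUS forced-coil probe']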

A statistical IDS estimates the probability that a network transaction or a group of network transactions is part of a cyber attack. The general idea of statistical IDSs is to attempt to detect intrusions that do not match a previously known intrusion signature yet are still different enough from normal traffic to warrant review. Most statistical IDSs are anomaly detectors that use data mining classifiers, such as neural networks or Bayesian networks, to classify network transactions as anomalous or normal. Many statistical IDS methodologies exist in both the research and practical domains. No statistical IDS is deterministic; all are probabilistic, meaning that all have less than 100% accuracy, sometimes classifying normal traffic as abnormal (a false positive) and sometimes classifying abnormal traffic as normal (a false negative). Control systems monitor and control critical physical processes, and therefore one of the most important cybersecurity criteria is availability. The control system must remain available for control and monitoring at all times, and a corollary to this is that control system cybersecurity solutions must do no harm to the control system. As such, IDS inaccuracies are problematic and lead to the need for statistical IDS alerts to always be sent to a human for validation before intrusion mitigation actions are taken.
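
A minimal sketch of the anomaly-detection idea follows. It models a single traffic feature, packets per second, as Gaussian and, in keeping with the human-in-the-loop point above, only raises a flag for operator review; all numbers are illustrative:

    import statistics

    class AnomalyIDS:
        """Flag samples more than three standard deviations from the training mean."""
        def __init__(self, training_pps):
            self.mean = statistics.mean(training_pps)
            self.std = statistics.stdev(training_pps)

        def is_anomalous(self, pps: float) -> bool:
            return abs(pps - self.mean) > 3 * self.std  # may still be a false positive

    ids = AnomalyIDS([50, 52, 49, 51, 48, 53, 50])  # normal polling rates
    print(ids.is_anomalous(51))   # False: within the learned normal band
    print(ids.is_anomalous(900))  # True: possible DOS flood, alert a human operator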

Statistical IDSs are being developed specifically for use with control systems and the smart grid. These involve development of IDS inputs (aka features) specific to control system applications and network protocols. The introduction of control system and smart-grid-specific IDS features will lead to more accurate IDSs.

3.4.1.4.4  Smart Grid Control System Cybersecurity Considerations

NERC CIP 005 requires utilities and other bulk energy system constituents to create an electronic security perimeter around critical cyber assets. Figure 3.49 is a diagram representing the primary assets found in a typical bulk electric transmission system after the addition of PMUs and PDCs. The assets are grouped into four major blocks: a control center, a PDC, and two transmission substations. Each block is electronically isolated by an electronic security perimeter. The electronic security perimeters are denoted as dashed lines around each isolated block. NERC CIP does not specify the methods for creating the electronic security perimeter. As such the methods vary widely and are therefore drawn as security clouds in Figure 3.49.

Figure 3.49   Smart grid bulk electric transmission system electronic security perimeters.

The security cloud should address the three basic cybersecurity core principles: confidentiality, integrity, and availability. Of these three cybersecurity core principles, it is generally agreed that they should be ranked by importance for the smart grid and for control systems as availability, integrity, and then confidentiality.

3.4.1.4.4.1  Availability

The smart grid is considered critical infrastructure. Loss of SA over the bulk electric transmission systems may lead to incorrect control actions. Furthermore, loss of the ability to make control actions may lead to system damage or failure, ultimately including blackouts. Such failures can lead to economic harm for the local or regional economy. There are two primary components to ensuring control system availability: IDSs and system design.

Loss of availability can come from DOS attacks and command injection attacks that attempt to directly take control of the control system. Control injection attacks can be detected with IDSs and prevented with the authentication techniques covered under the integrity discussion. DOS attacks attempt to deny network service by flooding a network with information at a rate faster than it can be processed. IDSs can detect and mitigate many DOS attacks.

Smart grid control systems should include IDS sensors to monitor network transactions at all entry points to the control system network or at points guaranteed to capture traffic from all entry points to the control system network. Entry points include local area network (LAN) drops, dial-up modems, wireless terminals, and connections to trusted neighbors such as regional operators and independent system operators, as well as connections to the corporate LAN.

System design also affects control system availability. First, many attack vectors can be stopped by eliminating unneeded network services. NERC CIP 007 requires bulk electric responsible entities to disable all network ports and services not used for normal or emergency operation. For instance, TCP and UDP each use port multiplexing to support many transport layer services; each can support 64K (65,536) ports. The Internet Assigned Numbers Authority (IANA) assigns port numbers to frequently used services such as Telnet, SSH, and SMTP. Many control system protocols have IANA-reserved port numbers: for example, MODBUS TCP servers listen on port 502; Allen Bradley EtherIP uses TCP port 44818 and UDP port 2222; and DNP3 over TCP uses port 20000. Any unused port should not have a listening server running on any cyber system connected to the control system network. IDSs should monitor for activity on all TCP and UDP ports.
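
A port audit along these lines can be sketched in Python; the allowlist, address, and port set below are hypothetical, and such a scan should only ever be run against hosts one owns and is authorized to test:

    import socket

    ALLOWED_TCP = {502, 20000}  # e.g., MODBUS TCP and DNP3 over TCP only

    def audit_host(host: str, ports, timeout: float = 0.5):
        """Return listening TCP ports that are not on the operational allowlist."""
        unexpected = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0 and port not in ALLOWED_TCP:
                    unexpected.append(port)  # a service that CIP 007 says to disable
        return unexpected

    # Example (documentation address): audit_host("192.0.2.10", [21, 23, 80, 502, 20000])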

Smart grid control systems should be designed to allow each user to have a unique account ID and password. This requirement supports traceability and role-based access control. Traceability means that actions taken on the control system can be traced to an individual user. Role-based access control means that each user can be assigned roles and associated privileges (levels of authority). For example, a dispatcher may be allowed to open a breaker, while a less privileged user may not be able to open the same breaker. Legacy control system equipment may not support separate usernames. In this case, NERC CIP 007 requires entities to limit password knowledge to those individuals with a need to know. The security clouds shown in Figure 3.49 include access control features that limit access to an entire electronic security perimeter. These access control features can be certificate based, can support separate user ID and passwords for all users, and can support role-based access control.
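
A minimal sketch of role-based access control, using the dispatcher/breaker example above, could look like the following in Python (role and privilege names are hypothetical):

    ROLE_PRIVILEGES = {
        "dispatcher": {"view_grid", "open_breaker", "close_breaker"},
        "viewer": {"view_grid"},
    }

    def authorize(user: str, role: str, action: str) -> bool:
        """Allow an action only if the user's role carries the privilege."""
        allowed = action in ROLE_PRIVILEGES.get(role, set())
        print(f"{user} ({role}) requested {action}: {'granted' if allowed else 'denied'}")
        return allowed  # logging each decision supports traceability

    authorize("alice", "dispatcher", "open_breaker")  # granted
    authorize("bob", "viewer", "open_breaker")        # denied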

The final design-related element of the availability principle may be obvious but is worth mentioning. All cybersecurity solutions should of course do no harm to the control system. The algorithms used to model control systems and make control decisions often have data age requirements. For instance, some EMS algorithms require data to be less than 2–4 ms old to support control decisions. "Bump-in-the-wire" cybersecurity solutions (additional hardware and software in the communication link), such as those shown in Figure 3.49, add latency to traffic delivery. The additional latency must never cause the system to become nonfunctional or uncontrollable. Furthermore, many proposed IDSs include automated mitigation actions. These actions should only be taken when the IDS is deterministic. Statistical IDSs are probabilistic and therefore always have a probability of misclassifying network traffic. In such cases, the IDS may recommend mitigation actions, but a human should be kept in the control loop to validate mitigation recommendations.

3.4.1.4.4.2  Integrity 

The integrity cybersecurity principle is intended to protect network traffic from unauthorized modification. The most common method for ensuring network traffic integrity is authenticating network traffic through the use of digital signature algorithms (DSAs). The MODBUS, DNP3, Allen Bradley EtherIP, IEEE C37.118, and IEC 61850 standards do not include features to authenticate network traffic. Authentication is left to the responsibility of a higher layer protocol.

The security clouds in the network architecture shown in Figure 3.49 can digitally sign network traffic. Bump-in-the-wire solutions exist that can capture network traffic as it leaves an electronic security perimeter and append it with a digital signature. The security cloud in a receiving electronic security perimeter can validate the digital signature before forwarding the traffic to cyber systems inside the electronic security perimeter. The digital signatures can be based on multiple algorithms. FIPS 186 (the NIST Federal Information Processing Standards Publication for the Digital Signature Standard) specifies the NIST-recommended DSA. DSA uses public key cryptography techniques to sign network traffic. This method is often considered too slow and resource intensive for control systems; however, if cryptographic processors are used in the place of the security clouds in Figure 3.49, it is likely that DSA signing and validation can meet required latency targets for smart grid applications. The elliptic curve digital signature algorithm (ECDSA) is an alternative approach for network traffic authentication. The ECDSA is considered faster, uses smaller keys, and therefore requires less storage than DSA. ECDSA is patented by CERTICOM, RSA, the U.S. National Security Agency, and Hewlett Packard, which may slow its adoption. A third alternative for authentication is the hash-based message authentication code (HMAC). HMAC is the least resource-intensive of the three authentication approaches discussed here.
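
Using only the Python standard library, a bump-in-the-wire-style HMAC scheme can be sketched as follows; the key is assumed to have been distributed out of band, the frame bytes are placeholders, and the tag is truncated to 12 B to match the study cited below:

    import hashlib
    import hmac

    KEY = b"pre-shared-link-key"  # hypothetical key, distributed out of band
    TAG_LEN = 12  # truncated 12 B authenticator

    def sign_frame(frame: bytes) -> bytes:
        """Append a truncated HMAC-SHA256 tag to an outgoing frame."""
        tag = hmac.new(KEY, frame, hashlib.sha256).digest()[:TAG_LEN]
        return frame + tag

    def validate_frame(signed: bytes) -> bytes:
        """Strip and check the tag; reject the frame if authentication fails."""
        frame, tag = signed[:-TAG_LEN], signed[-TAG_LEN:]
        expected = hmac.new(KEY, frame, hashlib.sha256).digest()[:TAG_LEN]
        if not hmac.compare_digest(tag, expected):
            raise ValueError("authentication failed: frame dropped")
        return frame

    print(validate_frame(sign_frame(b"placeholder DNP3 frame bytes")))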

A key consideration when using digital signatures in control systems is the length of the signature and the latency added to the network traffic as a result of adding and validating the signature. This is especially pertinent for low data rate systems such as SCADA systems that often have data rates of 1,200–19,200 Bd. The Pacific Northwest National Laboratory [13] recently released a study that measures round-trip response times for DNP3 frames signed with various length HMAC authenticators. Response times varied according to the length of the authenticator, the data rate of the communications link, and the type of bump-in-the-wire cyber system used to authenticate the DNP3 frames. The worst case was 1,996 ms latency for 1,200 Bd systems using industrial PCs to create and validate the 12 B HMAC signatures. The best case was 210 ms latency for 19,200 Bd systems using industrial PCs to create and validate the 12 B HMAC signatures. Systems, such as YASIR [14], have been developed to minimize the latency involved in adding digital signatures to MODBUS and DNP3 network frames.

Another important consideration when planning to use digital signatures is availability of hardware resources. Many RTUs have limited cryptographic and storage resources that may conflict with the needs of an adequate DSA. This presents three possibilities for control systems that support digital signatures for authentication. First, system designers may choose to use existing hardware and add a bump-in-the-wire cybersecurity solution (to sign and validate network traffic). Second, system designers may choose to upgrade existing remote or master terminal hardware to integrate the required cryptographic and storage resources. Third, system designers may choose to use a lightweight DSA that can execute on existing master terminal and remote terminal platforms.

3.4.1.4.4.3  Confidentiality

The confidentiality cybersecurity principle intends to protect network traffic from unauthorized eavesdropping. The most common method for ensuring network traffic confidentiality is through the use of encryption.

The need for confidentiality of control system network traffic can be a controversial topic, with many control system engineers arguing against the need for control system network traffic confidentiality. The need for confidentiality will vary for each installation. However, it must be stressed that hackers use eavesdropping to collect information about systems before executing attacks. Confidentiality minimizes the potential attacker’s capabilities in this intelligence gathering stage.

The security clouds in the network architecture shown in Figure 3.49 can encrypt and decrypt network traffic. Bump-in-the-wire solutions exist that can capture network traffic as it leaves an electronic security perimeter and encrypt the network traffic. The security cloud in a receiving electronic security perimeter can decrypt network traffic before forwarding the traffic to cyber systems inside the electronic security perimeter. Encryption algorithm choice can be based on multiple algorithms.

A key consideration when using encryption in control systems is the latency added to the network traffic as a result of encryption and decryption of the traffic. This is especially pertinent for low data rate systems such as SCADA systems, which often have data rates of 1,200–19,200 Bd. Symmetric block ciphers such as AES (Advanced Encryption Standard), DES (Data Encryption Standard), or 3DES seem best suited for use in control systems. These ciphers are generally quite fast, and all three have many open source implementations in software and hardware. All three of these ciphers can be used as a stream cipher (output feedback, cipher feedback, or counter mode) to speed up the encryption/decryption process and thereby reduce latency.
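
A counter mode sketch using the third-party cryptography package is shown below. CTR runs the block cipher as a keystream generator, so frames need no padding and per-byte latency stays low; the nonce must never repeat under the same key, and since encryption alone provides confidentiality but not integrity, it should be paired with the authentication discussed earlier (key, nonce, and data are placeholders):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)    # AES-256 key; key distribution is not shown
    nonce = os.urandom(16)  # must be unique per frame/stream under this key

    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = encryptor.update(b"placeholder SCADA frame bytes") + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
    print(decryptor.update(ciphertext) + decryptor.finalize())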

Another important consideration when planning to use encryption is availability of hardware resources. Many RTUs have limited cryptographic and storage resources that may conflict with the needs of an adequate encryption algorithm. This presents three possibilities for control systems that support encryption for confidentiality. First, system designers may choose to use existing hardware and add a bump-in-the-wire cybersecurity solution (to encrypt and decrypt network traffic). Second, system designers may choose to upgrade existing remote or master terminal hardware to integrate the required cryptographic and storage resources. Third, system designers may choose to use a lightweight encryption algorithm that can execute on existing master terminal and remote terminal platforms.

3.4.1.4.5  Conclusion

Control systems that implement feedback control loops across networked communications links must protect the availability and integrity of control and sensor measurement data. Three primary threats to utility control systems are sensor measurement injection, control injection, and denial of service (DOS) attacks. Current installations of synchrophasor systems are used primarily for wide area visibility. As these systems evolve from a wide area visibility role to a wide area control (WAC) role, their cyber critical asset classification will also evolve from noncritical to critical. As that evolution completes, synchrophasor system cybersecurity protections will need to be upgraded to protect system availability, integrity, and confidentiality.

3.4.2  FACTS and HVDC

Stuart Borlase, Neil Kirby, Paul Marken, Jiuping Pan, and Dietmar Retzmann

One basic function of electricity networks is that the amount of power produced at any given moment must match the amount of power consumed. In the middle of this balancing act is the infrastructure that must carry the power from its place of production to its point of use, that is, the transmission network. In a typical AC power system, the transmission network performs this service through transmission lines, transformers, circuit breakers, and other common equipment. The flow of electricity through the transmission system follows the basic laws of physics. For a given voltage and line impedance, one can calculate the amount of current that will flow. This current flow may be more (overloaded) or less (underutilized) than desired by the transmission operator. A transmission device that is able to change the electrical system response to a given condition is obviously a useful element in creating a smarter grid. While adding this equipment alone does not constitute having a “smart grid,” measurement devices and software that calculates optimum situations are only helpful to the extent that something can be done about the situation. The ability to control the flow of real and/or reactive power, the voltage and the frequency, and other aspects of the transmission system can be key elements in optimizing the grid. Those devices that can assert control over the real or reactive power flow in a specific line or node or even region of a network are the following:

  • Synchronous condensers
  • FACTS (flexible AC transmission system) devices
  • HVDC (high-voltage direct current)

These devices have the ability to implement aspects of smart control under normal, steady-state operating conditions, as well as under transient or fault events, and depending on their speed of response, may be able to automatically prevent or speed up the recovery from fault situations.

In particular, high-voltage power electronics devices such as FACTS and HVDC provide features that help avoid problems in heavily loaded power systems; they increase the transmission capacity and system stability very efficiently and assist in preventing cascading disturbances.

As load increases and changes, some system elements are going to become loaded up to their thermal limits, and wide area power trading with fast varying load patterns will contribute to increasing congestion [1,2]. In addition, dramatic global climate developments call for changes in the way electricity is supplied. Environmental constraints, such as loss minimization and CO2 reduction, will play an increasingly important role. Consequently, network planners must deal with conflicting requirements among reliability of supply, environmental sustainability, and economic efficiency [3,4]. The power grid of the future must be secure, cost effective, and environmentally compatible. The combination of these three tasks can be tackled with the help of intelligent solutions and innovative technologies, such as HVDC and FACTS, which have the potential to cope with the new challenges. By means of power electronics, they provide a versatile range of features that are necessary to avoid many operational problems in the power systems; they increase the transmission capacity and system stability very efficiently and help prevent cascading disturbances. The features of such a future smart grid can be outlined as follows: flexible, accessible, reliable, and economic. The smart grid will help achieve sustainable development.

The developing load and generation patterns of existing power systems will lead to bottlenecks and reliability problems. Therefore, the strategies for the development of large power systems go clearly in the direction of smart grid, consisting of AC/DC interconnections and point-to-point bulk power transmission “highways” (super grid solutions). FACTS technology is also an important part of this strategy, and hybrid systems offer significant advantages in terms of technology, economics, and system security.

3.4.2.1  Power System Developments

The development of electric power supply began more than 100 years ago. Residential areas and neighboring establishments were at first supplied with DC via short lines. At the end of the nineteenth century, AC transmission was introduced, using higher voltages to transmit power from remote power stations to the consumers.

In Europe, 400 kV became the highest AC voltage level, in Far East countries mostly 550 kV, and in America 550 and 765 kV. The 1150 kV voltage level was anticipated in some countries in the past, and some test lines have already been built. Figure 3.50 depicts these developments and prospects.

Figure 3.50   Development of AC transmission—milestones and prospects. (© Copyright 2012 Siemens. All rights reserved.) * China (1000 kV pilot project launched) and India (1200 kV in actual planning) are currently implementing bulk power UHV AC backbone; ** Brazil: North–South interconnector.

Examples of large synchronous AC interconnections are systems in North America, Brazil, China, and India, as well as in Europe (installed capacity 631 GW, formerly known as UCTE—now CE, Continental Europe) and Russia (IPS/UPS—315 GW). IPS/UPS and CE are planned to be interconnected in the future.

It is an unfortunate consequence of the increasing size of interconnected systems that the advantages of larger size diminish, for both technical and economic reasons, since the energy has to be transmitted over extremely long distances through the interconnected synchronous AC systems. These limitations are related to problems with low-frequency inter-area oscillations [5–7], voltage quality, and load flow. This is, for example, the case in the CE (former UCTE) system, where the 400 kV voltage level is in fact too low for large cross-border and inter-area power exchange.

FACTS technology, based on power electronics, was developed in the 1960s to improve the performance of weak AC systems, to make long-distance AC transmission feasible, and to help solve technical problems within the interconnected power systems.

FACTS systems are used in a parallel connection (SVC [static VAr compensator], STATCOM [static synchronous compensator]), in a series connection (FSC, TCSC/TPSC, S3C), or as a combination of both (UPFC, CSC) to control load flow and to improve dynamic conditions. These will be described in the following sections.

In the second half of the last century, high power HVDC transmission technology was introduced, offering new dimensions for long-distance transmission. This development started with the transmission of power in a range of less than 100 MW and was continuously increased. The state of the art for many years settled at 500 kV rating, as illustrated in Figure 3.50, and there are many examples of links with transmission ratings of 3 GW over large distances with only one bipolar DC line around the world today. More recent development has achieved transmission ratings of 6 GW and more over even larger distances with only one bipolar DC transmission system. Further projects with similar or even higher ratings in China, India, and other countries are going to follow.

Table 3.7 summarizes the impact of FACTS and HVDC on load flow, stability, and voltage quality when using different devices. Evaluation is based on a large number of studies and experiences from projects. For comparison, mechanically switched devices (MSC/R) are included in the table.

FACTS and HVDC applications will play an important role in the future development of smart power systems. This will result in efficient, low-loss AC/DC hybrid grids which will ensure better controllability of the power flow and, in doing so, do their part in preventing “domino effects” in case of disturbances and blackouts. By means of these DC and AC ultrahigh-power transmission technologies, the “smart grid,” consisting of a number of highly flexible “microgrids,” will turn into a “super grid” with bulk power energy highways, fully suitable for a secure and sustainable access to huge renewable energy resources such as hydro, solar, and wind. The state-of-the-art AC and DC technologies and solutions for smart and super grids are explained in the following sections.

In addition to these relatively complex systems using power electronics, there are other lower cost features that may be incorporated into the equipment installed in future power networks. These offer varying extents of functionality to add to the overall intelligence of the smart grid of the future, such as monitoring of transformers and switchgear, which provides real-time analysis of transformer oil and other status information. Maintenance management systems can monitor and analyze this information and determine increased wear-and-tear rates, predict failure modes, and identify the need for preemptive maintenance before the next scheduled maintenance activity.

3.4.2.2  Flexible AC Transmission Systems

Reactive power compensation has been regarded as a fundamental consideration in achieving an efficient electric energy delivery system. Reactive compensation may be categorized into series compensation, shunt compensation, and combined compensation, representing the intentional insertion of reactive power-producing devices, either capacitive or inductive, in series and/or in parallel in the power circuit. Further flexibility can be achieved with dynamically controllable compensation to provide the required amount of corrective reactive power precisely and promptly. A family of such controllable compensation devices based on power electronics technology is often referred to as FACTS devices.

Table 3.7   FACTS and HVDC: Overview of Functions

Source:  Copyright 2012 Siemens Energy, Inc. All rights reserved.

Influence (based on studies and practical experience): o, no or low; •, small; ••, medium; •••, strong.

3.4.2.2.1  FACTS Developments

Since the 1960s, FACTS has been evolving into a mature technology with high power ratings [8]. Proven in various applications, the technology has become first-rate and highly reliable. Figure 3.51 shows the basic configurations of FACTS devices.

In Figure 3.52, the impact of series compensation on power transmission and system stability is illustrated, and Figure 3.53 depicts the increase in voltage quality by means of shunt compensation with SVC (or STATCOM).

3.4.2.2.2  Series Compensation

The conventional or fixed series compensation (FSC) is a well-established technology and has been in commercial use since the early 1960s. The basic concept of series-capacitor compensation is to reduce the overall inductive reactance of power lines by connecting capacitors in series with the line conductors. As shown in Figure 3.54, the series-capacitor compensation equipment comprises series-capacitor banks, located at the line terminals or in the middle of the line, and an overvoltage protection circuit for the capacitor bank. A photograph of a series compensation installation is shown in Figure 3.55.

Figure 3.51   Transmission solutions with FACTS.

Figure 3.52   FACTS—influence of series compensation on power transmission.

Incorporating series capacitors in suitable power lines can improve both power system steady-state performance and dynamic characteristics. Series compensation has traditionally been associated with long-distance transmission lines and with improving transient stability. In a transmission system, the maximum active power transferable over a certain power line is inversely proportional to the series inductive reactance of the line. Thus, by compensating the series inductive reactance to a certain degree, typically between 25% and 70%, using series capacitors, an electrically shorter line is realized and higher active power transfer and improved system performance can be achieved (a numerical sketch follows the benefits list below). In recent years, series capacitors have also been applied on shorter transmission lines to improve voltage stability. In general, the main benefits of applying series compensation in transmission systems include the following:

Figure 3.53   FACTS—improvement in voltage profile with SVC. (© Copyright 2012 Siemens. All rights reserved.)

Figure 3.54   Common FSC locations and main circuit diagram. (Courtesy of Siemens.)

Figure 3.55   Photograph of a series compensation installation. (© Copyright 2012 Siemens. All rights reserved.)

  • Enhanced system dynamic stability
  • Desirable load division among parallel lines
  • Improved voltage regulation and reactive power balance
  • Reduced network power losses
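
As a numerical sketch of the relation described above, the steady-state transfer across a line is approximately P = V1 V2 sin(delta) / X, so compensating the series reactance by a degree k raises the transferable power. The Python example below uses illustrative values, not data from any particular line:

    import math

    def transfer_mw(v1_kv: float, v2_kv: float, delta_deg: float, x_ohm: float) -> float:
        """Approximate steady-state power transfer: P = V1*V2*sin(delta)/X (MW)."""
        return v1_kv * v2_kv * math.sin(math.radians(delta_deg)) / x_ohm

    X_LINE = 100.0  # uncompensated series reactance, ohms
    for k in (0.00, 0.25, 0.70):  # degree of series compensation
        x_eff = X_LINE * (1 - k)  # series capacitors cancel part of the reactance
        print(f"{int(k * 100):>2}% compensation: "
              f"{transfer_mw(400, 400, 30, x_eff):6.0f} MW")

The output climbs from 800 MW uncompensated to roughly 1067 MW at 25% and 2667 MW at 70% compensation, which is the "electrically shorter line" effect described above.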

The thyristor-controlled series compensation (TCSC) is an extension of conventional series compensation technology, providing further flexibility of series compensation in transmission applications (Figure 3.56).

3.4.2.2.3  Shunt Compensation

An SVC is a regulated source of leading or lagging reactive power. By varying its reactive power output in response to the demand of an automatic voltage regulator, an SVC can maintain virtually constant voltage at the point in the network to which it is connected. An SVC comprises standard inductive and capacitive branches controlled by thyristor valves, connected in shunt to the transmission network via a step-up transformer. Thyristor control gives the SVC the characteristic of a variable shunt susceptance. Figure 3.57 shows three common SVC configurations for reactive power compensation in electric power systems. The first configuration consists of a thyristor-switched reactor (TSR) and a thyristor-switched capacitor (TSC). Since no reactor phase control is used, no filters are needed. The second one consists of a thyristor-controlled reactor (TCR), a TSC, and harmonic filters (FC). The third one consists of a TCR, mechanically switched shunt capacitors (MSC), as well as FC.

For example, with the TCR/TSC configuration, flexible and continuous reactive power compensation can be obtained by appropriate switching of the TSCs and accurate control of the TCR, from the full inductive rating of the TCR to the full capacitive rating of the TSCs and the FC.
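
The resulting operating range can be sketched numerically as follows; the bus voltage and susceptance values are illustrative, and the sign convention takes positive Q as capacitive injection:

    def svc_q_mvar(v_kv: float, tsc_steps_on: int, b_step_s: float, b_tcr_s: float) -> float:
        """Net SVC reactive output in MVAr: (kV)^2 * S = MVAr."""
        b_net = tsc_steps_on * b_step_s - b_tcr_s  # TSC steps add, TCR subtracts
        return v_kv ** 2 * b_net

    # 230 kV bus, two 1 mS TSC steps, TCR continuously variable from 0 to 2.5 mS:
    print(svc_q_mvar(230, 2, 1e-3, 0.0))     # ~ +106 MVAr: full capacitive output
    print(svc_q_mvar(230, 0, 1e-3, 2.5e-3))  # ~ -132 MVAr: full inductive absorption

Any value between the extremes is reachable by switching TSC steps and phase-controlling the TCR, which is the continuous compensation described above.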

SVC technology has been in commercial use since the early 1970s (with over 1000 systems in service), initially developed for the steel industry to address the problem of voltage flicker with arc furnaces. The SVC is now a mature technology that is widely used for transmission applications, providing voltage support in response to system disturbances and balancing the reactive power demand of large and fluctuating industrial loads. The installation can be in the midpoint of transmission interconnections or in load areas. In general, the main benefits of applying SVC technology in power transmission systems include the following:

Figure 3.56   TCSC main circuit diagram. (Courtesy of Siemens.)

Figure 3.57   Common SVC configurations. (Courtesy of Siemens.)

  • Improved system voltage profiles
  • Reduced network power losses
  • Stabilized voltage of weak systems or load areas
  • Increased network power delivery capability
  • Mitigated active power oscillations

An SVC installation is shown in Figure 3.58, which is part of the Lévis De-icer Substation in Québec, Canada. This system performs regular reactive power compensation in normal operation but is also capable of reconfiguration into a DC source, to generate DC current to remove ice buildup on transmission lines.

Figure 3.58   Photograph of an SVC installation. (© Copyright 2012 Alstom Grid. All rights reserved.)

Figure 3.59   STATCOM main circuit diagram.

The STATCOM technology is based on the power electronic concept of voltage-sourced conversion (Figure 3.59). The shunt-connected voltage-sourced converter (VSC) comprises solid-state switching components with turn-off capability and antiparallel diodes. Performance of the STATCOM is analogous to that of a synchronous machine generating balanced three-phase sinusoidal voltages at the fundamental frequency with controllable amplitude and phase angle. The device, however, has no inertia and does not contribute to the short circuit capacity.

The STATCOM consists of a VSC operating as an inverter with a capacitor as the DC energy source. It is controlled to regulate the voltage in much the same way as an SVC. A coupling transformer is used to connect to the transmission voltage level. In this application, only the voltage magnitude is controlled, not phase angle. By controlling the converter output voltage relative to the system voltage, reactive power magnitude and direction can be regulated. If the VSC AC output voltage is lower than the system voltage, reactive power is absorbed. If the VSC AC output voltage is higher than the system voltage, reactive power is produced.
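
This voltage-difference rule can be written as a worked relation: with coupling reactance X between the converter and the system, the reactive power drawn from the system is approximately Q = V_sys (V_sys - V_conv) / X. A small per-unit Python sketch with illustrative values:

    def statcom_q_pu(v_sys: float, v_conv: float, x: float) -> float:
        """Reactive power drawn from the system, per unit (positive = absorbing)."""
        return v_sys * (v_sys - v_conv) / x

    X = 0.1  # coupling transformer reactance, per unit
    print(statcom_q_pu(1.0, 0.95, X))  # ~ +0.5 pu: converter voltage lower, absorbs
    print(statcom_q_pu(1.0, 1.05, X))  # ~ -0.5 pu: converter voltage higher, produces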

The functions performed by a STATCOM in a transmission network are much the same as those of an SVC, such as steady-state and dynamic voltage support and regulation, improved synchronous stability and transfer capability, and improved power system damping. In addition, STATCOMs are also installed for power quality applications. These include the following:

Figure 3.60   STATCOM with energy storage.

  • Improved dynamic load balancing
  • Improved flicker control
  • Faster response for load compensation

STATCOM with energy storage is an enhancement of the STATCOM with series-connected batteries, as shown in Figure 3.60. Energy storage enables the STATCOM to generate and consume active power for a certain period of time. One typical application of STATCOM with energy storage is for integrating renewable energy sources, such as wind or solar farms, that have strongly fluctuating power production. The load balancing function with energy storage delivers active power at a scheduled power level and reactive consumption/production within operational limits, according to the power and voltage setting orders from the system operator.

These devices will form an increasingly important component of the future smart grid as a result of the increasing use of variable generation sources such as wind and solar, as the stored energy may be used to fill in the nongenerating periods of these diverse renewable sources. The capacity of the storage system will clearly need to be rated to substitute for the energy normally provided by the renewable source, but for short periods this is a viable solution.

3.4.2.2.4  Combined and Other Devices

More sophisticated systems to control power flow in transmission lines may be formed by combining series and shunt devices. The STATCOM described previously is a shunt-connected voltage-sourced device that can regulate voltage at the point of connection through control of reactive power flow by injecting reactive current. Another device, the static synchronous series compensator (SSSC), is similar to the STATCOM except that it is series-connected; it controls the magnitude and phase of an injected voltage independent of the current in the line.

In the unified power flow controller (UPFC) configuration, a STATCOM and an SSSC are combined on a transmission line as shown in Figure 3.61, and together they can regulate both real and reactive power in the line, allowing for rapid voltage support and power flow control. These devices require two converters in a back-to-back (B2B) configuration and may share the same DC capacitor in much the same way as an HVDC link.
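
The effect of the series branch can be seen with a simple lossless-line phasor sketch: the injected series voltage adds to the sending-end phasor and thereby shifts both the real and reactive power on the line. All per-unit values below are hypothetical:

```python
import cmath

def line_power(v_send, v_recv, x_line):
    """Complex power (per unit) delivered at the receiving end of a
    lossless line of reactance x_line between two voltage phasors."""
    current = (v_send - v_recv) / complex(0.0, x_line)
    return v_recv * current.conjugate()

# Hypothetical case: 1.0 pu at both ends, 20-degree angle, x = 0.3 pu;
# the UPFC series branch adds an injected voltage v_inj to the
# sending-end phasor.
v_s = cmath.rect(1.0, cmath.pi / 9)      # sending end, 20 deg
v_r = cmath.rect(1.0, 0.0)               # receiving end
v_inj = cmath.rect(0.1, cmath.pi / 2)    # 0.1 pu injected at 90 deg

for label, vs in (("without UPFC", v_s), ("with UPFC", v_s + v_inj)):
    s = line_power(vs, v_r, x_line=0.3)
    print(f"{label}: P = {s.real:.3f} pu, Q = {s.imag:.3f} pu")
```

Without the injection this reduces to the familiar P = V1·V2·sin δ/X (about 1.14 pu here); the injected voltage then moves the operating point without any change at the terminals.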

Figure 3.61   UPFC. (© Copyright 1999 ABB. All rights reserved.)

Figure 3.62   IPFC. (© Copyright 1999 ABB. All rights reserved.)

The interline power flow controller (IPFC) is another configuration of the combined VSCs, except that the two converters are inserted on different transmission lines. The IPFC consists of two SSSC converters as shown in Figure 3.62. In this configuration, the IPFC is able to control both real and reactive power in both lines i–j and i–k by exchanging power through the DC link between them.

The SSSC, UPFC, and IPFC are applications of VSCs; presently these systems are not in common use, and those that are in operation have been constructed as development projects [9].

3.4.2.2.5  Variable Frequency Transformer

A relatively new transmission device is the variable frequency transformer or VFT. A VFT is considered by many to be a “smart” device as it has the ability to control the amount of power flowing through it. Similar to an HVDC system, the VFT can interconnect asynchronous grids with the key difference being that the VFT provides a true AC connection. The first asynchronous AC transmission using a VFT appeared in 2003 at Hydro-Québec’s Langlois Substation.

The VFT absorbs reactive power since it is an induction machine. It is normally applied with shunt banks to supply reactive power per the application’s needs. As a true AC connection, the VFT allows reactive power to flow from one side to the other. As in any AC circuit, reactive power flow is a function of the system voltages and the series impedance. Figure 3.63 is a simplified one-line diagram of a VFT interconnection.

Figure 3.63   One-line diagram of a variable frequency transformer. (© Copyright 2012 GE Energy. All rights reserved.)

While many designs are theoretically possible, the present technology consists of one or more parallel 100 MW, 60 Hz–60 Hz machines. There are currently five machines in commercial operation. In addition to providing flexibility in moving a controlled amount of real power between two points, which need not be synchronized, adding a VFT has also demonstrated improvements to power system dynamic performance and generator damping.

The ability of the VFT to control the flow of power through it offers network operators a transmission device that can be dispatched much like a generating asset. This can assist with operating the power grid in a more optimized manner. The power flowing through any given transmission path may be higher or lower than operators prefer, and changing the system to adjust the flow of power through one point may have unintended consequences for another transmission path.
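
As a rough sketch of why the VFT behaves like a dispatchable phase-shifting device, its steady-state power transfer can be approximated by the familiar AC relation, with the drive system holding a rotor offset angle that adds to the natural angle difference across the tie. The model and all numbers below are illustrative assumptions, not a design calculation:

```python
import math

def vft_power_pu(v1, v2, x_total, grid_angle_deg, rotor_offset_deg):
    """Approximate per-unit power through a VFT modeled as an AC tie:
    P = V1*V2*sin(theta)/X, where theta combines the natural angle
    difference between the grids and the rotor offset the drive
    motor holds (all values hypothetical, per unit)."""
    theta = math.radians(grid_angle_deg + rotor_offset_deg)
    return v1 * v2 * math.sin(theta) / x_total

# Sweeping the rotor offset dispatches the flow without touching the grids.
for offset in (-10, 0, 10, 20):
    p = vft_power_pu(1.0, 1.0, 0.2, grid_angle_deg=5, rotor_offset_deg=offset)
    print(f"rotor offset {offset:+3d} deg -> P = {p:+.2f} pu")
```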

3.4.2.2.6  Synchronous Condenser

While the synchronous condenser is not a new, high-tech device invented to contribute to the modern smart grid, it is worth considering that this may be the original volt/Var controller. Once commonly found in both industrial and utility applications, synchronous condensers have been declining in number. Simply put, the synchronous condenser is a motor without a load connected to its shaft. Or, viewed in a way more familiar to the utility industry, a condenser is similar to a generator without a prime mover. The field is under- or overexcited to absorb or produce reactive power. The machine will absorb a small amount of real power to overcome losses. When equipped with a modern generator field exciter, the speed of response is reasonably fast.

Although slower than a STATCOM and more costly than an SVC, the synchronous condenser offers a number of advantages over electronic solutions, including significant overload capability, contribution to short-circuit level, and real rotating inertia. It is relatively tolerant of harmonics and can even act as a harmonic sink. As more renewable sources of energy such as wind and solar have displaced traditional thermal machines, some grids have experienced a decline in rotating inertia. In other applications, synchronous condensers are required on the receiving end of large thyristor-based HVDC systems to ensure proper inverter operation. This has prompted a renewed interest in the application of synchronous condensers as part of the overall smart grid solution.

The synchronous condenser’s usefulness in the smart grid is not unlike modern FACTS devices. Controlling voltage through injecting or absorbing reactive power at key points in the transmission system can allow more precise control of power flow and allow optimized transmission grid operation.

3.4.2.3  High-Voltage Direct Current

HVDC transmission is a well-established method of controlling power flow within or between networks through power electronics. Originally, the power flow control device was based on mercury-arc technology, though these systems have now almost all been decommissioned. Modern HVDC systems have been based on the thyristor as the controlled device (referred to as line commutated converter [LCC], current source converter [CSC], or conventional HVDC) for over 40 years, and more recently, in the last 10 years or so, the use of the transistor (referred to as VSC) has been increasing.

The HVDC control system is designed to automatically respond to stimulus events from many sources, including the following:

  • Operator input
  • Routine changes in AC network conditions
  • Routine network switching events
  • Disturbances caused by faults within the DC system or the AC network

The intelligence incorporated into the control and protection system can be made to provide fully automated responses to all of these scenarios, such that the situation is detected and the response carried out without the need for human intervention. In this way the HVDC system can be considered as an essential component of the smart grid.

HVDC transmission systems offer many benefits over their AC counterparts, including the following:

  • Power flow through the link can be precisely controlled in both magnitude and direction, either through operator action or through automated response.
  • Voltage and frequency in the two AC networks can be controlled independently of each other, again either through operator action or through automated response.
  • The HVDC link can be used to assist one (or even both) of the AC networks in responding to disturbances (e.g., power swing damping, by modulation of the transmitted power). This is normally fully automated since the operator is unable to respond in this timescale.

Additionally, the use of an HVDC link rather than an AC interconnection provides the following:

  • Improved system stability margins due to the ability to rapidly change power transfer
  • No increase of the short circuit level of the system
  • No transfer of faults across the interconnected systems

In the evolution of HVDC, different applications were developed, as shown schematically in Figure 3.64.

Figure 3.65a shows the results of a simulation study based on two AC networks (A and B), which are interconnected and synchronized by a line rated at 500 MW. A short circuit fault occurs in network B at about 0.3 s; it can be seen that after about 7 s, the angular displacement of the rotors of selected generators in network A, relative to a reference generator, is still increasing, that is, they cannot regain synchronism and the system is unstable. Exactly the same fault is applied in Figure 3.65b, but in this case an HVDC B2B link has been introduced between the two networks, that is, the link is now effectively asynchronous. It can be seen that within about 4 s, the rotor angle swings have been damped and stability maintained; the power flow through the link is virtually unchanged once the fault has been cleared. This is just one example of the way that networks incorporating the controllability of HVDC can be made more intelligent, self-healing, and an integral, essential part of the smart grid of the future.

Figure 3.64   Options for HVDC interconnections. (a) Back-to-Back solution, (b) HVDC long-distance transmission, and (c) integration of HVDC into the AC system (hybrid solution).

Figure 3.65   Post-fault response of two AC networks interconnected with (a) AC link; and (b) HVDC B2B link.

Smart grid intelligence requires both functionality in individual equipment or subsystems and communications between these subsystems to allow other components of the network or hierarchy to see what is going on and, in turn, allow them to make other intelligent decisions and take controlling actions.

HVDC is one of the most intelligent subsystems within a network: it carries out precise control of power flow based on internal and external information, and it is customary for HVDC systems to pass information at a very detailed level to remote centers for monitoring, protection, and control at other locations. By coordinating the action of the HVDC control system with other control systems in the network (generators, switching and transformer substations, FACTS devices, etc.), it is possible to build up a complete control hierarchy for the network from these discrete and dispersed subsystems. Intelligent systems such as this are obviously capable of responding to events much faster than a human operator, and rapid response to most faults or other events is critical to allow the system to recover and restabilize quickly.

For example, the power flow on VSC-HVDC systems can be optimally scheduled based on system economics and security requirements. It is also feasible to dispatch VSC-HVDC systems in real-time power grid operations. Such increased power flow control flexibility allows system operators to utilize more economical and less polluting generation resources and to implement effective congestion management strategies.

3.4.2.3.1  HVDC Developments

In general, for transmission distances above 600 km, DC transmission is more economical than AC transmission for bulk power transfers (≥1000 MW). Power transmission of up to 600–800 MW over distances of about 300 km has already been achieved with submarine cables, and cable transmission lengths of up to approximately 1000 km are at the planning stage. HVDC is now a mature and reliable technology (Figure 3.66).
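
The often-quoted break-even distance follows from a simple cost trade-off: HVDC terminals (converters, filters) cost more than AC substations, but the DC line itself is cheaper per kilometer. The figures below are purely illustrative assumptions chosen to land near the 600 km figure quoted above, not actual project costs:

```python
def break_even_distance(ac_term, dc_term, ac_per_km, dc_per_km):
    """Distance (km) at which total DC cost (higher terminal cost,
    cheaper line) equals total AC cost, i.e., solve
    ac_term + ac_per_km * d = dc_term + dc_per_km * d for d."""
    return (dc_term - ac_term) / (ac_per_km - dc_per_km)

# Hypothetical cost figures (arbitrary monetary units):
d = break_even_distance(ac_term=100, dc_term=350, ac_per_km=1.0, dc_per_km=0.58)
print(f"break-even at roughly {d:.0f} km")
```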

The first commercial applications were cable transmissions, since AC cable transmission over more than 80–120 km is technically not feasible due to reactive power limitations. Then, long-distance HVDC transmissions with overhead lines were built, as they are more economical than transmission with AC lines [10]. To interconnect systems operating at different frequencies, B2B schemes were applied. B2B converters can also be connected to long AC lines (Figure 3.64a). A further application of HVDC transmission that is highly important for the future is its integration into the complex interconnected AC system (Figure 3.64c). The reasons for these hybrid solutions are basically lower transmission costs as well as the possibility of bypassing heavily loaded AC systems. Further information on the application of HVDC to handle large-scale transmission and to overcome the difficulties encountered by conventional AC networks may be found in Barker et al. [11] and MacLeod et al. [12].

The power ratings of VSC-HVDC have improved rapidly in recent years. In the upper range, the technology now reaches 1200 MVA for symmetric monopole schemes with cables, which can be increased to 2400 MVA for bipole schemes with overhead lines.

Typical configurations of HVDC are depicted in Figure 3.67. HVDC VSC is the preferred technology for connecting islanded grids, such as offshore wind farms, to the power system [13]. This technology provides the “black-start” feature by means of self-commutated VSCs [14]. VSCs do not need any “driving” system voltage; they can build up a three-phase AC voltage via the DC voltage at the cable end, supplied from the converter at the main grid.

VSC-HVDC technology is now emerging as a flexible and economical alternative for future transmission grid expansion. In particular, embedded VSC-HVDC applications, together with the wide area monitoring system, in meshed AC grids could significantly improve overall system performance, enabling smart operation of transmission grids with improved security and efficiency. VSC-HVDC transmission also offers a superior solution for many challenging technical issues associated with integration of large-scale renewable energy sources such as offshore wind power.

Figure 3.66   (See color insert.) Evolution of HVDC voltage rating and technology. * Multiple bridges per pole; †500 kV becomes de facto standard for single 12-pulse bridge per pole; ‡ 660 kV used as “standard” in China; § 800 kV used as “standard” in China and India. (© Copyright 2012 Siemens. All rights reserved.)

Figure 3.67   HVDC configurations and technologies.

3.4.2.3.2  Thyristor-Based “Conventional” HVDC

Conventional HVDC systems, also known as line commutated converter (LCC) systems, are based on thyristor switching technology.

Figure 3.68 shows a simplified circuit diagram of the main components that make up the power circuit of a typical conventional HVDC system: the thyristor valves, the converter transformers, and the AC filter circuits (FC). The most common configuration of conventional HVDC uses the 12-pulse bridge arrangement, which offers the best compromise between least cost and least harmonics. The AC filter circuits perform the dual roles of (a) maintaining the reactive power balance with the AC network and (b) preventing harmonics generated by the HVDC from reaching the AC system. A photograph of a conventional HVDC installation is shown in Figure 3.69.

Figure 3.68   Conventional HVDC single-line diagram. (© Copyright 2012 Siemens. All rights reserved.)

Figure 3.70 illustrates a typical HVDC thyristor valve hall for one end of an HVDC scheme. There are two different configurations of HVDC, a point-to-point system as shown in Figure 3.68, where the DC connection is an overhead line or insulated cable, and a B2B system where the DC connection has zero length. The point-to-point system is used for the economical transmission of power over long distances. The B2B system can be used to isolate systems that are normally asynchronous to prevent the spread of cascading faults and to increase the stability limit on an AC line.

Figure 3.69   Photograph of a conventional (thyristor) HVDC installation. (© Copyright 2012 Siemens. All rights reserved.)

Figure 3.70   Conventional HVDC thyristor valve hall. (© Copyright 2012 Siemens. All rights reserved.)

3.4.2.3.3  VSC-Based HVDC

VSC-HVDC is a transmission technology based on VSCs and insulated gate bipolar transistors (IGBTs). The converter operates with high-frequency pulse width modulation (PWM) and thus has the capability to rapidly control both active and reactive power, independently of each other.

In particular, VSC-HVDC systems are attractive solutions for transmitting power underground and under water over long distances. With extruded DC cables, power ratings from a few tens of megawatts up to more than 1000 MW are available.

Figure 3.71 shows a simplified circuit diagram of main components that make up the power circuit of a typical VSC-HVDC system: these are the IGBT converter valves, converter reactors, DC capacitors, AC FC, DC cables, and transformers.

The first VSC-HVDC schemes were based on a two-level topology, where the output voltage is switched between two voltage levels; however, the most common valve configuration currently being implemented is the modular multilevel converter (MMC), due to improvements in operating efficiency. Each phase has two valves, one between the positive potential and the phase outlet and one between the outlet and the negative potential. Thus, a three-phase converter has six valves, three phase reactors, and a set of DC capacitors. The phase reactor permits continuous and independent control of active and reactive power. It provides low-pass filtering of the PWM pattern to give the desired fundamental frequency voltage. The converter generates harmonics related to the switching frequency. The harmonic currents are blocked by the phase reactor, and the harmonic content on the AC bus voltage is reduced by AC filters. The fundamental frequency voltage across the reactor defines the power flow (both active and reactive) between the AC and DC sides.

AC filters typically contain two or three grounded or ungrounded tuned filter branches. Depending on filter performance requirements, that is, permissible voltage distortion and others, the filter configuration may vary between schemes. The transformer is an ordinary single- or three-phase power transformer with tap changer. The secondary voltage and the filter bus voltage are controlled with the tap changer to achieve the maximum active and reactive power from the converter. Figure 3.72 shows a VSC-HVDC converter station.
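
The advantage of the multilevel approach can be illustrated numerically: the more voltage levels available, the more closely the synthesized staircase tracks the sinusoidal reference and the less filtering is needed. The sketch below is a toy quantization model, not a converter simulation:

```python
import numpy as np

def staircase(n_levels, t):
    """Quantize a unit sine onto n_levels equally spaced levels, the
    way an MMC synthesizes its AC voltage by inserting or bypassing
    submodule capacitors (n_levels=2 mimics a two-level converter)."""
    ref = np.sin(2 * np.pi * t)
    q = np.round((ref + 1.0) / 2.0 * (n_levels - 1))
    return q / (n_levels - 1) * 2.0 - 1.0

t = np.linspace(0.0, 1.0, 2000, endpoint=False)
sine = np.sin(2 * np.pi * t)
for n in (2, 9, 101):
    err = np.sqrt(np.mean((staircase(n, t) - sine) ** 2))
    print(f"{n:3d} levels -> rms distortion {err:.3f}")
```

With only 2 levels the output is a square wave (large distortion); with on the order of a hundred submodule levels the residual distortion becomes very small, which is why MMC stations need far smaller AC filters.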

Figure 3.71   VSC-HVDC single-line diagram. (© Copyright 2012 Siemens. All rights reserved.)

Figure 3.72   HVDC station with VSCs. (© Copyright 2009 ABB. All rights reserved.)

One attractive feature of VSC-HVDC systems is that the power direction is changed by changing the direction of the current and not by changing the polarity of the DC voltage. This makes it easier to build a VSC-HVDC system with multiple terminals. These terminals can be connected to different points in the same AC network or to different AC networks. The resulting multiterminal VSC-HVDC systems can be radial, ring, or meshed topologies.

VSC-HVDC is ideal for embedded applications in meshed AC grids. Its inherent features include flexible control of power flow and the ability to provide dynamic voltage support to the surrounding AC networks. Together with advanced control strategies, these can greatly enhance smart transmission operations with improved steady-state and dynamic performance of the grid.

Fast control of active and reactive power of VSC-HVDC systems can improve power grid dynamic performance under disturbances. For example, if a severe disturbance threatens system transient stability, fast power runback and even instant power reversal control functions can be used to help maintain synchronized power grid operation.

3.4.3  Wide Area Monitoring, Protection and Control

Jay Giri, Zhenyu (Henry) Huang, Rajat Majumder, Rui Menezes de Moraes, Reynaldo Nuqui, Manu Parashar, Walter Sattinger, and Jean-Charles Tournier

3.4.3.1  Overview

Time-synchronized measurements across widely dispersed locations in an electric power grid are a key and differentiating feature of a wide area monitoring, protection and control (WAMPAC) system. WAMPAC systems are based on the synchronized sampling of power system current and voltage signals across the power grid using a common timing signal derived from GPS* (Figure 3.73). The sampled signals are converted into phasors—vector representations of the grid’s voltage and current measurements at fundamental frequency—that are synchronized and compared across the electrically connected power system using an accurate GPS time reference. Bus voltage and current phasors define the state of an electric power grid in real time.
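
Conceptually, each measurement device correlates one nominal cycle of the sampled waveform against the fundamental frequency to extract a magnitude and an angle, with the GPS pulse fixing the time reference. The following is a minimal single-bin DFT sketch with illustrative sampling rate and values, not a standard-compliant PMU algorithm:

```python
import numpy as np

def phasor_estimate(samples, samples_per_cycle):
    """Estimate the fundamental-frequency phasor (rms magnitude,
    angle in degrees) from exactly one nominal cycle of waveform
    samples using a single-bin DFT."""
    n = samples_per_cycle
    x = np.asarray(samples[:n], dtype=float)
    k = np.arange(n)
    phasor = (2.0 / n) * np.sum(x * np.exp(-2j * np.pi * k / n))
    return abs(phasor) / np.sqrt(2.0), np.degrees(np.angle(phasor))

# Synthetic check: a 120 V rms, 60 Hz cosine lagging by 30 degrees,
# sampled 24 times per cycle (all values are illustrative).
n = 24
t = np.arange(n) / (60.0 * n)          # one 60 Hz cycle
v = 120 * np.sqrt(2) * np.cos(2 * np.pi * 60 * t - np.radians(30))
mag, ang = phasor_estimate(v, n)
print(f"{mag:.1f} V rms at {ang:.1f} deg")   # -> 120.0 V rms at -30.0 deg
```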

Figure 3.73   Synchronized sampling of power system signals.

Figure 3.74   Phasor measurement unit (PMU).

3.4.3.1.1  Phasor Measurement Unit

The PMU, whose measurements are commonly called synchrophasors, is the basic building block of a WAMPAC system. The PMU samples the power system signals from voltage and current sensors and converts them into phasors. Phasors are complex-number representations of the sampled signals commonly used in the design of, and as inputs to, control and protection systems for bulk power transmission grids. The phasors are time tagged from a timing pulse derived from the GPS and then streamed into the wide area communications network as fast as one phasor per cycle of the power system frequency (Figure 3.74). Currently, the IEEE synchrophasor standard C37.118 defines the format in which the phasor data are transmitted from the PMU. The phasor angle information is referenced to the GPS timing pulse; for it to have physical significance, it has to be compared with (subtracted from) other phasor angle measurements from the same system. Phasor angle differences provide useful information concerning system stress or modes of oscillatory disturbances in the power system. The PMU provided the critical synchronized time-lapsed information that enabled a clear understanding of the events leading to the northeast blackout of 2003 in the United States.

PMU technology has advanced significantly since Dr. Arun Phadke and his team developed the first PMU at Virginia Tech in 1988. Modern-day PMUs have become more accurate and capable of measuring a larger set of phasors in a substation. Most PMUs have binary output modules for transmitting binary signals, such as trip signals to open a circuit breaker. Some vendors have PMUs integrated within protection relays or digital fault recorders, with timing signals taken from IRIG-B* time sources instead of GPS antennae. A typical PMU connection in a transmission substation is shown in Figure 3.75. The PMU is considered one of the most promising, if not the most important, measurement devices in modern transmission systems.

3.4.3.1.2  Time Synchronization

Time synchronization is at the core of WAMPAC-based applications. WAMPAC applications rely on a precise time stamp transmitted with each PMU measurement to monitor, control, and protect the electrical network. In general, time synchronization requirements range from nanoseconds to microseconds. Time synchronization can be achieved by multiple means. All methods are based on the distribution of a common source clock signal across the network, either by satellite, via the communications network (e.g., using the IEEE 1588 protocol), or using dedicated synchronization networks (e.g., IRIG-B). The crucial need for a highly reliable and available time synchronization system implies the systematic use of a high-quality clock, with accuracy expressed in parts per million (ppm), in order to maintain high accuracy even in the case of a temporary loss of the synchronization signal.
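
The ppm figure translates directly into phase-angle error during holdover. A back-of-the-envelope sketch with illustrative numbers:

```python
def holdover_error_us(ppm, holdover_seconds):
    """Worst-case time error (microseconds) accumulated by a free-
    running local clock of the given accuracy (parts per million)
    after losing its synchronization source."""
    return ppm * holdover_seconds  # 1 ppm = 1 us of drift per second

# A 1 ppm oscillator drifts ~60 us over a one-minute GPS outage --
# already more than the ~46 us that one electrical degree at 60 Hz
# represents, hence the need for high-quality holdover clocks.
print(holdover_error_us(ppm=1.0, holdover_seconds=60))
```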

Figure 3.75   Typical PMU connection in a transmission substation.

3.4.3.1.3  Phasor Data Concentrator

A PDC collects phasor data from multiple PMUs or other PDCs, aligns the data by time tag to create a synchronized dataset, and then passes the data on to application processors. For applications that process PMU data from across the grid, it is vital that the measurements are time aligned based on their original time tag to create a system-wide, synchronized snapshot of grid conditions. To accommodate the varying latencies in data delivery from individual PMUs, and to take into account delayed data packets over the communications system, PDCs typically buffer the input data streams and include a certain “wait time” before outputting the aggregated data stream. A PDC also performs data quality checks, validates the integrity or completeness of the data, and flags all missing or problematic data.

PMUs utilize various data formats (IEEE 1344, IEEE C37.118, BPA Stream, etc.* ), data rates, and communications protocols (e.g., TCP and UDP) for streaming data to the PDC. On the input side, the PDC must support these different formats; additionally, it must be able to down-sample (or up-sample) the input streams to a standard reporting rate and process the various datasets into a common-format output stream. There may also be multiple users of the data. Hence, the PDC should be able to distribute received data to multiple users simultaneously, each of whom may have different, application-specific data requirements.
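
The buffering and time alignment described above can be sketched in a few lines. The class below is a hypothetical illustration (names and API invented for this example): frames are keyed by their original time tag and released only after a wait window, with late-arriving PMUs flagged as missing:

```python
import time
from collections import defaultdict

class SimplePDC:
    """Minimal sketch of PDC time alignment (hypothetical API):
    measurements from many PMUs are grouped by their original time
    tag, and a frame is released only after `wait` seconds so late
    packets can still join; anything later is reported as missing."""
    def __init__(self, expected_pmus, wait=0.1):
        self.expected = set(expected_pmus)
        self.wait = wait
        self.frames = defaultdict(dict)   # time tag -> {pmu: phasor}
        self.arrival = {}                 # time tag -> wall-clock arrival

    def ingest(self, pmu_id, time_tag, phasor):
        self.frames[time_tag][pmu_id] = phasor
        self.arrival.setdefault(time_tag, time.monotonic())

    def release_ready(self):
        """Return (time_tag, data, missing_pmus) for every frame whose
        wait window has expired, oldest first."""
        now = time.monotonic()
        ready = [t for t, t0 in self.arrival.items() if now - t0 >= self.wait]
        out = []
        for t in sorted(ready):
            data = self.frames.pop(t)
            self.arrival.pop(t)
            out.append((t, data, self.expected - set(data)))
        return out
```

A production PDC does considerably more (protocol parsing, resampling, quality flags), but the trade-off is already visible here: a longer wait captures more late packets at the cost of added latency.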

The functions of a PDC can vary depending on its role or its location between the source PMUs and the higher-level applications. Broadly speaking, there are three levels of PDCs (Figure 3.76).

Figure 3.76   Levels of PDCs: (1) local or substation level, (2) transmission owner control centers, and (3) regional control center level (ISOs, RTOs).

  1. Local or substation PDC. A local PDC is generally located at the substation for managing the collection and communication from multiple PMUs within the substation or neighboring substations and sending this time-synchronized aggregated dataset to higher-level concentrators at the control center. Since the local PDC is close to the PMU source, it is typically configured for minimal latency. It is also commonly utilized for local substation control operations. Local PDCs may include a short-term data storage system to protect against communications network failures. A local PDC is generally a hardware device that requires limited maintenance and that can operate independently if it loses communications with the rest of the communications network.
  2. Control center PDC. This PDC operates within a control center environment and aggregates data from one utility’s PMUs and substation PDCs, as well as from neighboring utility PDCs. It is capable of simultaneously sending multiple output streams to different applications, such as visualization, alarms, storage, and EMS applications, each of which has its own specific data rate requirements. Control center PDC architectures are typically redundant and sized to handle expected future loads and to satisfy the high-availability needs of a production system, regardless of PMU vendor and device type. PDCs need to be adaptable to accommodate new protocols and output formats, as well as interfaces with new applications.
  3. Super-PDC. A Super-PDC operates on a larger, regional scale and is responsible for collecting and correlating phasor measurements from hundreds of PMUs and multiple substations and/or control center PDCs; it may also be responsible for facilitating PMU data exchange between utilities. In addition to supporting applications such as wide area monitoring system (WAMS) and visualization, and EMS and SCADA applications, it is capable of archiving a vast amount of data (typically several terabytes per day). Super-PDCs are therefore typically enterprise-level software systems running on clustered server hardware to accommodate scalability to meet growing PMU deployment and utility needs.

3.4.3.2  Drivers and Benefits of WAMPAC

Microprocessor-based computer relaying, information technology, and advances in communications are changing the landscape of transmission system monitoring. WAMPAC systems are driven by the need for alternative solutions for managing transmission reliability and security via improved SA. Electric transmission grids interconnect bulk power systems that are spread across geographical regions. As such, electric transmission grids have evolved to be very reliable and secure systems—the cost of failure is great. Now, smart grid initiatives will impose new reliability and economic requirements that will further impact how transmission systems are monitored, protected, and controlled in the future.

Smart transmission grids are expected to be self-healing, that is, when experiencing disturbances, component failures, or cyber attacks, the grid is expected to recover. New and unconventional generating sources from renewable energy introduce operational challenges. With the increase in renewable energy sources, smart grids need to provide the most efficient transmission corridors for delivering energy to major load centers. Power systems are being operated closer to their thermal and stability limits. As a result, transmission operators need to increase their SA of the grid. The onset and early indications of disturbances and contingencies need to be visible to the operator in a timely fashion. SCADA and EMS systems need more advanced WAMPAC applications for the transmission grid to meet these challenges.

3.4.3.3  WAMPAC Needs in a Smart Grid

It is worth underlining the key functional characteristics of smart grids specific to transmission systems in order to understand the needs for WAMPAC. First, a smart grid should be self-healing after power disturbance events. Events often cause failure or isolation of transmission lines and generation sources that could potentially lead to grid collapse. A smart grid should effectively manage a large number of renewable sources from wind, solar, and storage, including electric vehicles, and maintain the same level of power quality. There is also a great expectation that smart grids will be more efficient in transmitting electricity. With a large number of intermittent renewable generation sources, the reliable and efficient transmission of electricity is not a trivial task (Figure 3.77).

Figure 3.77   Potential impact of high levels of renewable energy sources in smart grids.

3.4.3.3.1  Maintaining Reliability, Stability, and Security against Large Disturbances

Smart transmission grids should exhibit self-healing capabilities after power disturbance events—they are expected to endure disturbances and outages with zero or minimal impact on the grid’s ability to supply and distribute power. While grids have been designed to survive large events, the integration of renewable generation will probably push these designs to their limits. Renewable generation will be integrated at both the bulk transmission level and the distribution level. Power system events often cause failure or isolation of transmission or distribution lines and generators that could potentially lead to grid collapse. The ensuing dynamics often stress generators and loads to the point where they disconnect from the grid, which, more often than not, further stresses the remaining grid components. A typical dynamic system response to grid disturbances is in the form of power oscillations between the generating sources, which, if left uncontrolled, can persist and lead to system instability. While local control methodologies address most of these grid disturbances, control based on wide area information promises to be a more effective solution. For example, WAMPAC can enable adjustment of the excitation set points of generation sources on a WAC basis in order to more effectively damp persistent power oscillations. Large disturbances have also been known to produce voltage and current oscillations that often result in unwanted and false operation of protection systems. A need exists to communicate system events to these grid protection units to help them differentiate disturbances requiring action from those that should be temporarily ignored. Such information transfer must be fast and must handle data from several distant locations in the grid.

3.4.3.3.2  Management of Large Numbers of Intermittent Generation

Smart grids will need to support a higher penetration of intermittent generation and storage. Managing such large numbers and varieties of generation sources could exceed the processing limits of existing EMSs or DMSs. Large numbers of intermittent generators will result in highly transient power flows that can push transmission lines beyond their current-carrying capabilities and cause transmission grid congestion. Smart grid operations will benefit from applications that can coordinate the management of renewable generators to mitigate these intermittent flows. This coordination will need to occur in a much faster time frame than can be realized in current grid management systems.

Smart transmission grids require an advanced level of monitoring both at the control center and substation level. Several factors drive these requirements. Increased connectivity with neighboring systems will make existing systems more sensitive to neighboring disturbances. A higher number of low inertia generators on the grid, such as renewable energy, offer less damping to the spread of system disturbances, and therefore faster propagation of disturbances can be expected. The spread of disturbances across utility boundaries will pose a challenge to system operators. Expanded visibility for tracking disturbances outside traditional control center boundaries is required. Transmission system monitoring will benefit from more field data available at a higher sampling frequency to quickly and accurately measure disturbances and dynamics so that corrective or mitigating control actions can be taken.

3.4.3.3.3  Maintaining Power Quality

A potential issue in the smart grid is the degradation of power quality as large numbers of intermittent generation sources and power electronics loads become integrated into the transmission and distribution system. It will be challenging to maintain nominal frequency and the quality of power with highly variable generation in the grid. Frequency and voltage quality issues could be resolved with improved regulation from the grid’s active power sources. Improved frequency regulation could come from using energy storage. A key requirement for addressing these power quality issues is the ability to monitor, store, and communicate data for processing and analysis in the control center so that operator actions may be initiated. However, as previously mentioned, it is highly unlikely that existing grid management systems will be capable of carrying the extra volume of data transfer, and most current measuring and monitoring systems do not use a sampling rate high enough to capture some of these power quality issues. What is required is a monitoring system that supports the required signal sampling rate and communications bandwidth to transfer such data.

3.4.3.3.4  Increasing Transmission Efficiency

Smart grids are expected to increase utilization of existing grid assets, such as lines and transformers. In this way, the grid can be more efficient in delivering power from the source to the loads. One approach to increasing transmission utilization is to dispatch generators to maximize power flow through the grid without exceeding system thermal and stability limits. Maximizing efficiency across the grid requires sufficient system-wide measurements to support system optimization applications. The efficient utilization of grid assets is also limited by the requirement for operating margins to account for potential grid instabilities or generation outages. It is anticipated that renewable generation and plug-in electric vehicles will further complicate the estimation of these limits and that operators will probably establish higher margins for operational security. Countermeasures can be instituted to provide security against these instabilities. More advanced decision support tools are clearly needed to ensure increased efficiency in a smart grid.

3.4.3.4  Major WAMPAC Activities

3.4.3.4.1  United States

Most groundbreaking research and initial WAMPAC applications worldwide started in the United States. Currently, smart grid initiatives from the federal government have made funds available to support the large-scale deployment of PMUs in the United States. Prior to the smart grid initiatives, a working group already existed to drive the deployment of WAMPAC in the United States: the North American SynchroPhasor Initiative (NASPI). NASPI is a collaborative effort between the U.S. Department of Energy (DOE), the North American Electric Reliability Corporation (NERC), and North American electric utilities, vendors, consultants, federal and private researchers, and academics. NASPI’s mission is to improve power system reliability and visibility by creating a robust, widely available, and secure synchronized data measurement infrastructure for the interconnected North American electric power system, with associated analysis and monitoring tools for better planning and operation and improved reliability. The NASPI architecture is referred to as the NASPI network or NASPInet—see Figure 3.78. A key effort in NASPI is the development of the phasor gateway and the super-phasor data concentrator (Super-PDC). The NASPInet consists of phasor gateways exchanging data via a phasor data bus. The overall goal of the NASPInet effort is to develop an “industry grade,” secure, standardized, distributed, and expandable data communications infrastructure to support dissemination of utility synchrophasor measurements for applications across North America.

3.4.3.4.1.1  Phasor Gateway

The phasor gateway is the primary interface between a utility, or another authorized party, and the data bus for synchrophasor data exchanges via NASPInet. The phasor gateway manages the connected devices on the entity’s side, manages quality of service, administers cybersecurity and access rights, performs necessary data conversions, and interfaces the utility’s PMU network with the data bus. The main functions of the phasor gateway include the following:

  • Serve as the sole access point to the data bus for interorganizational synchrophasor traffic via a publisher-subscriber-based data exchange mechanism.
  • Facilitate and administer registration of user PMUs, PDCs, and phasor signals. This is done through a name and directory service (NDS) system-wide registry. All real-time data streaming sources need to be registered through the owner’s phasor gateway before their data can be published to NASPInet. This includes information such as physical location of the device, device type, device identifier, signal description, signal quality, and ownership according to the phasor gateway owner and NASPInet naming conventions. Only upon successful registration of the data source with the NDS can the phasor gateway publish data.
  • Facilitate and administer the subscription and publishing of phasor data. The publish/subscribe mechanism consists of three parts: device/signal registration by publishers, subscription setup between publisher and subscriber that is initiated by subscribers, and quality of service and data security of the subscribed data. The owner of the phasor gateway that publishes the data to NASPInet maintains full control of its data distribution regarding who could subscribe to its data and which data could be subscribed to on a per-subscriber and per-signal basis. Nonsubscribers are therefore prevented from receiving the published data without a valid subscription. Subscribers are ensured that data will only come from publishers that they subscribe to.
  • Administer and disseminate cybersecurity and access rights. The phasor gateway should provide system administrator functions to configure, operate, diagnose, and control the phasor gateway access rights to ensure appropriate access to, and usage of, the data on a per-user and per-signal basis, including who can add, edit, and remove users, and control each user’s access rights. The security must meet corresponding NERC CIP (Critical Infrastructure Protection), FIPS (U.S. Federal Information Processing Standards), and other relevant cybersecurity standards and guidelines to safeguard reliable operation and data exchange.
  • Manage traffic priority through the phasor gateway according to data service classes. It is well understood that different applications have different data requirements in terms of latency, data rates, availability, etc. Five different classifications of applications are identified based on these requirements: Class A, feedback and control; Class B, open loop control (e.g., state estimation); Class C, visualization; Class D, postevent analysis; and Class F, R&D. The phasor gateway must support data delivery based on the priority traffic levels, that is, higher priority data are always processed and delivered before lower priority data (a toy priority-ordering sketch follows this list).
  • Monitor data integrity. This includes the ability to monitor both data that are forwarded to and received from the data bus for error and conformance with the data service class specifications to ensure that all transported data meet quality-of-service requirements. The types of statistics provided by the phasor gateway are the number of missing packets and missing packet rate, number of packets with data integrity checks, data stream interruptions, data stream delays, and changes in input data configuration. The phasor gateway should also have the ability to notify the administrator when there are excessive data errors or the data do not conform to the data service class specification.
  • Provide logging of data transmission, access controls, and cybersecurity for analysis of all anomalies. The phasor gateway should log all user activities (e.g., access requests), system administration activities (e.g., data source registration), data subscription-related activities, quality-of-service alerts, cybersecurity alerts, application errors, etc. Therefore, any anomaly can be traced and analyzed to determine whether it is the result of NASPInet’s own degradation or failures, or intentional/unintentional intrusion by unauthorized entities (hackers, intruders, unauthorized equipment connection, unauthorized user logins, etc.).
  • Provide APIs for interfacing with a user's systems and applications to access data bus data and services.
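
To make the class-based delivery rule concrete, the fragment below orders queued packets strictly by the service classes named above; the mapping and API are invented for illustration and are not part of the NASPInet specification:

```python
import heapq

# Hypothetical mapping of the data service classes named above to
# queue priorities (lower number = delivered first).
PRIORITY = {"A": 0,  # feedback and control
            "B": 1,  # open loop control (e.g., state estimation)
            "C": 2,  # visualization
            "D": 3,  # postevent analysis
            "F": 4}  # R&D

def deliver(packets):
    """Drain (class, payload) packets strictly in service-class
    priority order, preserving arrival order within a class."""
    q = [(PRIORITY[cls], seq, payload)
         for seq, (cls, payload) in enumerate(packets)]
    heapq.heapify(q)
    return [payload for _, _, payload in
            (heapq.heappop(q) for _ in range(len(q)))]

print(deliver([("C", "trend plot"), ("A", "damping control"), ("D", "archive")]))
# -> ['damping control', 'trend plot', 'archive']
```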

Figure 3.78   NASPInet conceptual architecture (phasor gateways and data bus). (Quanta Technology LLC, Phasor Gateway Technical Specification for North American Synchro-Phasor Initiative Network (NASPInet), May 29, 2009, http://www.naspi.org.)

3.4.3.4.1.2  Super-PDC

The term “super-phasor data concentrator” or “Super-PDC” was first coined within the context of the Eastern Interconnection Phasor Project (EIPP), a U.S. DOE-led initiative started in 2002 to deliver immediate value from synchrophasor information within the U.S. Eastern Interconnection. The initial focus of the project involved networking existing PMU installations across the entire Eastern Interconnection and streaming these data to a centralized site for data concentration and archival. To support this EIPP endeavor, the Tennessee Valley Authority (TVA) made a substantial investment in developing this “centralized” PDC (termed the “Super-PDC”) for the entire Eastern Interconnection that (1) was capable of gathering data from multiple PDCs and PMUs deployed across several utilities and ISOs, (2) supported a variety of phasor data transmission protocols (e.g., BPA PDCStream, IEEE C37.118, IEEE 1344, OPC, Virginia Tech FNET) to ensure that all PMU-capable devices within the interconnection could be integrated, (3) included a comprehensive database mechanism to manage the metadata associated with the phasor measurements, and (4) was capable of archiving huge amounts of measurement data with fast historical data retrieval mechanisms.

The Super-PDC architecture developed by TVA is shown in Figure 3.79. It includes the real-time data acquisition module for parsing the data packets from various devices and protocols; the interface to TVA’s proprietary DatAWare database, which maintains a 30-day rolling archive; a data preprocessing module responsible for synchronization and encapsulation of the time-aligned data into a single stream; and finally the real-time broadcast module for streaming these data in real time to applications. The Super-PDC at TVA is currently receiving data from approximately 120 PMUs across the Eastern Interconnection (the largest collection of PMU data within North America). It archives the data in a historian with no data compression, collecting approximately 36 GB per day (1 TB per month).
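
These archive figures are easy to sanity-check. The sizing assumptions below (measurement values per PMU, bytes per stored value) are hypothetical but plausible, and they reproduce the right order of magnitude:

```python
def daily_archive_gb(n_pmus, values_per_pmu, rate_hz, bytes_per_value):
    """Rough uncompressed archive volume in GB per day."""
    per_second = n_pmus * values_per_pmu * rate_hz * bytes_per_value
    return per_second * 86400 / 1e9

# Hypothetical sizing consistent with the figures quoted above:
# 120 PMUs, ~10 values each, 30 frames/s, ~12 bytes per stored value.
print(f"{daily_archive_gb(120, 10, 30, 12):.0f} GB/day")   # -> ~37 GB/day
```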

In 2008, NERC contracted TVA to architect the second generation of TVA’s “centralized” Super-PDC architecture, in which multiple regional Super-PDCs could work together collaboratively to create a “distributed” system of data collection and concentration nodes that are centrally managed and configured, with a minimal amount of information exchanged between nodes to ensure high availability. In this way, computationally intensive tasks such as data archival with I/O speed limitations could be dispersed across distributed resources. Additionally, such a distributed approach also eliminates the concern for a single point of failure associated with the earlier centralized approach (Figure 3.80).

Figure 3.79   NASPI Super-PDC architecture. (From Myrda, P.T. and Koellner, K., NASPInet—The internet for synchrophasors, 43rd Hawaii International Conference on System Sciences (HICSS), Kauai, HI, January 5–8, 2010.)

Figure 3.80   Generation II Super-PDC system (also known as the NERC phasor concentration system). (From Myrda, P.T. and Koellner, K., NASPInet—The internet for synchrophasors, 43rd Hawaii International Conference on System Sciences (HICSS), Kauai, HI, January 5–8, 2010.)

In late 2009, TVA released the Super-PDC source code to open source development as the openPDC and formally posted the openPDC Version 1.0 source code in January 2010. The openPDC is an enhancement of the original TVA Super-PDC that has been modified for greater performance and scalability. In April 2010, TVA and NERC positioned the Grid Protection Alliance (GPA) to provide ongoing administration of the openPDC code base. GPA is a not-for-profit corporation that has been formed to support the electric utility industry.

Several U.S. utilities, under their 2010 Smart Grid Investment Grants (SGIG), have already undertaken pilot projects to implement and demonstrate various aspects of NASPInet. It is envisioned that NASPInet, once fully deployed, would support hundreds of phasor gateways and thousands of PMUs, each typically sampling data at 30 times per second.

3.4.3.4.2  Europe

Concepts of and experience with wide area monitoring systems (WAMS) in Europe date back to the years 1980–1990, when EdF, the French transmission system operator (TSO) of that time, developed a comprehensive plan based on phasor measurements. However, because all the required telecommunications from the substations to the central control system and back relied on very expensive satellite channels, the system was never put into operation. The further development of PMU technology based on accurate GPS time synchronization, as well as the development of low-cost and reliable terrestrial communications channels, facilitated a restart of phasor technology only some 10 years later. In the meantime, accurate synchronized off-line transient recorders have been developed and used for dynamic model calibration as well as for complex event analysis within the highly meshed Continental Europe (CE) system. One of the main driving factors for using more accurate measurement equipment was the increase in system dynamic challenges due to the growth in system size caused by the connection of the eastern European power systems to the western European system in the early 1990s. This need is intensified today by the ever more extensive use of the transmission system infrastructure due to increased market activities, as well as by increased power flow distances caused by renewable (wind) infeed far from the main energy consumers.

Power system equipment manufacturers have recognized the needs of system operators and have developed devices able to measure voltage and current phasors that are subsequently processed online in central PDCs. On the basis of this centralized communication and data processing, together with a corresponding visualization platform, a large number of applications have been developed. As is typical for a relatively new technology, the ongoing WAMS activities can be divided into two categories.

3.4.3.4.2.1  Universities/Research & Development and Demonstration Projects

For this kind of application, data acquisition is performed over public Internet connections, and the data servers are located in university or manufacturer labs. The PMUs used are mainly installed on the low-voltage outlets in buildings. For this reason, the related analyses are restricted to frequency and voltage phase angle as system input measurands. A few projects financed by the European Commission, such as ICOEUR, have already delivered valuable results.

3.4.3.4.2.2  Industrial Applications 

In industrial applications, data acquisition is performed via private TSO communications channels, the PDC is embedded in the TSO IT environment, and the output of the WAMS is already used within the operation or planning departments of the TSOs. In contrast to university-driven R&D projects, the TSO measurements are performed at the high-voltage level using dedicated CT and VT measurements. Consequently, exact and high-resolution active and reactive power measurements are also available.

Based on different technologies and corresponding software and hardware suppliers, the CE power system is monitored by receiving WAMS measurements from various transmission substations in each country. In the analysis of all major and minor events with a system-wide impact in recent years, those devices have made an important contribution to the related postmortem dynamic system analysis. The same measurements are continuously used for monitoring the dynamic system performance as well as for the calibration of system dynamic models.

Some European TSOs have already integrated the PMU and corresponding PDC information within their SCADA systems. The corresponding main applications are the following:

  • Voltage phase angle difference monitoring
  • Line thermal monitoring
  • Voltage stability monitoring (online P–V curves)
  • Online monitoring of system damping (online modal analysis with online parameter estimation; see the ringdown sketch after this list)
  • Intelligent alarming if predefined critical levels are exceeded
  • Online monitoring of system loading
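
As a simplified illustration of damping monitoring, the sketch below estimates the damping ratio of a single dominant inter-area mode from a PMU-rate ringdown record using the decay of successive oscillation peaks. It is a toy method under strong assumptions (one clean mode, no noise), not the parameter-estimation algorithms the TSO tools actually employ:

```python
import numpy as np

def damping_ratio(signal, fs):
    """Crude modal damping estimate from a single ringdown record:
    fit a line to the log of successive positive-peak amplitudes.
    Assumes one dominant, lightly damped oscillatory mode."""
    sig = np.asarray(signal, dtype=float)
    peaks = [i for i in range(1, len(sig) - 1)
             if sig[i] > sig[i - 1] and sig[i] > sig[i + 1] and sig[i] > 0]
    t_pk = np.array(peaks) / fs
    slope, _ = np.polyfit(t_pk, np.log(sig[np.array(peaks)]), 1)  # = -sigma
    omega_d = 2 * np.pi / np.mean(np.diff(t_pk))   # one cycle between peaks
    return -slope / np.hypot(slope, omega_d)       # zeta = sigma/omega_n

# Synthetic 0.5 Hz inter-area ringdown, 5% damping, 10 Hz PMU rate.
fs, zeta, f0 = 10.0, 0.05, 0.5
t = np.arange(0, 60, 1 / fs)
w = 2 * np.pi * f0
x = np.exp(-zeta * w * t) * np.cos(w * np.sqrt(1 - zeta**2) * t)
print(f"estimated zeta ~ {damping_ratio(x, fs):.3f}")   # ~0.05
```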

Figure 3.81   (See color insert.) Swissgrid web page showing current European PDC links (January 2012). (© Copyright 2012 Swissgrid AG. All rights reserved.)

In order to increase system observability* beyond their own system observation area, a few European TSOs have already meshed their PDCs by exchanging PMU data online. One of these applications is a web page set up by Swissgrid; see Figure 3.81.

Within the CE power system, more than 100 WAMS devices are currently in operation, continuously delivering high-quality measurements for system operation and system planning. Accurate time-stamped measurements have proved to be a valuable component in ensuring secure system operation. The related tools for data postprocessing have also demonstrated their maturity. However, WAMS integration with traditional SCADA systems has only reached the initial stages. In addition, the effort to enhance these links, in combination with the future implementation of dynamic security assessment (DSA), voltage security assessment (VSA), and wide area protection (WAP) systems, has to be increased with the active participation of all partners (universities, manufacturers, TSOs, consultants).

3.4.3.4.3  Brazil

Brazil spans a large part of the South American continent. The distance between the far ends of the Brazilian territory (from north to south, and from east to west) is about 3900 km. Today, the Brazilian Interconnected Power System (BIPS) covers almost 70% of the Brazilian territory with a large transmission network that includes over 90,000 km of 230, 345, 440, 500, and 765 kV transmission lines, one 600 kV HVDC transmission line, approximately 400 substations, and more than 170 power plants. Figure 3.82 shows the Brazilian main transmission grid.

Figure 3.82   (See color insert.) The BIPS. (Courtesy of ONS, Brazil, 2010.)

The country’s main generation source is hydroelectric. In the past, more than 80% of the total installed capacity and nearly 90% of the total energy production came from hydro plants. The hydro generation plants are located along 12 major hydrographic basins across the Brazilian territory, and many of them are not close to the major load centers in the southeast and south regions. Some of the largest hydroelectric plants are the furthest from the load centers, resulting in bulk power transfers over long distances. Rainfall and the resulting inflow patterns are distinct among regions and may vary significantly over the year for each region, as well as between dry and wet years. In this scenario, one of the main operational tasks in the Brazilian power system is to realize economic gains through interregional power transfers, taking advantage of the seasonal rainfall and water flow differences in each of its geoelectric regions. This is achieved through optimization of the available hydro resources, mixed with complementary thermal energy. The result of this process has a direct impact on the overall operating cost of the system. As in all systems of this size, disturbances due to significant generation and load imbalances may cause excessive frequency variations, voltage collapse situations, and even the islanding of certain parts of the network, with loss of important load centers. Studies of the system dynamic behavior have shown inter-area low-frequency electromechanical oscillations in the range of 0.3–0.8 Hz. These oscillations are usually well damped but could, in some disturbances, spread with severe consequences. To avoid such situations, conventional (non-synchronized-measurement) system integrity protection schemes (SIPS) were deployed to perform predefined actions. Load shedding and generator tripping are some of the planned actions for expected system contingencies, such as losing one or more circuits of a major transmission path. The economic and reliable operation of the Brazilian power system must also accommodate the needs of a deregulated electricity market, established in 1998, which increased the number of players in the electricity market. The main operational challenge of the Brazilian power system is thus how to achieve optimal hydro resource utilization while ensuring reliable system operation within the constraints of a long transmission system and market operation regulations.

The interest in transmission grid synchronized measurements in Brazil emerged in the 1990s due to the difficulty of assessing the system dynamic performance during wide area disturbances. The PMU received the attention of the Brazilian Electric Studies Committee, a member of the Group to Coordinate the Interconnected Operation (GCOI) of the Brazilian power system. The feasibility of PMU application in the Brazilian power system was the subject of preliminary studies by this committee, with utility and manufacturer participation.

With the restructuring of the Brazilian electricity sector at the end of the 1990s, BIPS operation was transferred to a newly instituted independent system operator, the ONS. On March 11, 1999, only 2 months after ONS started operating the BIPS, Brazil faced a major blackout. This blackout affected mainly the southeastern region, which accounts for the largest load in Brazil. The analysis of the March 11 event highlighted the need for better tools for long-duration recording of system dynamic behavior. Following the recommendations from the blackout analysis, ONS started a project in 2000 to deploy a WAMS on the BIPS to record its dynamic performance, and in 2003, the first commercial PMU product became available in Brazil. In 2004 the regulatory environment in Brazil changed, and ANEEL (the Brazilian regulatory office) decided not to allow ONS to own transmission assets. After working with ANEEL to reformulate the project strategy from the early centralized approach to a decentralized one, a resolution was passed in 2005 establishing the framework under which the responsibilities and tasks of ONS and the utilities in implementing the WAMS project were clearly defined. The main responsibilities and tasks of ONS are as follows: (a) define and specify the WAMS architecture and equipment; (b) specify, acquire, and install the ONS PDCs; (c) define PMU placement on the BIPS; (d) coordinate certification tests on PMU models to guarantee the system’s integration and the WAMS global performance; and (e) define the WAMS deployment schedule and coordinate the PMU installation by utilities. The utilities’ responsibilities and tasks are as follows: (a) purchase, install, operate, and maintain the PMUs placed in their substations and (b) supply the communications links, complying with technical requirements, specifications, and schedules coordinated by ONS.

The deployment plan for Brazilian WAMS consists of three main components:

  • A phased deployment plan: ONS has adopted a phased deployment plan to address most of the challenges of this project. On the application side, ONS will focus first on deploying a sufficient number of PMUs at selected locations to facilitate the recording of system dynamics for envisioned off-line applications, such as postmortem analysis, system model validation, and performance assessment. The number of PMUs will be gradually increased for real-time system operation support, such as state estimator improvement, until full observability of the BIPS's higher voltage levels (345 kV and above) by phasor measurement is reached. Additional PMU installations for WAC and protection applications will be considered only at a later stage of the system deployment, as practical experience is gained with this technology. This phased deployment plan allows utilities and ONS to limit the initial capital investment, minimize the risks associated with the many uncertainties of the project, and gradually gain experience with the system before making a full-scale deployment.
  • Top-down system design approach: ONS has adopted a top-down system design approach aimed at avoiding potential future problems in its phased deployment plan. This top-down approach allows ONS to take into account not only the requirements of current applications but also the needs of future applications in the WAMS design. The system architecture is designed to be highly flexible and scalable to allow for easy system expansion later. It also allows the system design to take into account the availability of current, off-the-shelf products, the maturity of technologies, and the current communications support from the BIPS. In addition to system design, this approach enables ONS to provide unified design specifications for PMUs and any other system component that will be installed and operated at a utility’s substation, such as substation phasor data concentrators (SPDCs). These specifications will be used by all utilities involved in this project in their procurement processes.
  • PMU/PDC certification test process: To ensure the global performance of the Brazilian WAMS, ONS has included a PMU and PDC certification test process as an integral part of its deployment plan. The PMU certification test process involved first developing a PMU test methodology and test guidelines and then conducting the PMU certification tests to ensure that all PMUs to be acquired by utilities meet the same standards and system requirements. ONS also envisions the need for PDC testing and certification. The WAMS architecture design includes the use of substation PDCs to aggregate and process the data from PMUs at the substation and then forward the data to PDCs installed at ONS control centers. Substation PDCs must therefore be verified to be interoperable with all PMU models and also with the PDCs installed at ONS control centers. Given the phased deployment plan, one of the main system design objectives of the Brazilian WAMS was flexibility and scalability for easy system expansion. The PDCs at ONS control centers will initially support only a small number of phasor measurements but will be easily expanded to support hundreds of PMUs at the project’s final stage. ONS is investigating testing tools and methods that will allow it to verify whether the main PDCs can meet these requirements.

Another important WAMS initiative in Brazil came from the Federal University of Santa Catarina (UFSC). The initiative started in 2001 as a research project carried out jointly by UFSC and a Brazilian industry partner. In 2003, the project received financial support from the Brazilian government, which allowed the deployment of a prototype phasor measurement system. This first Brazilian system measures the low-voltage distribution level at nine university laboratories, which communicate with a PDC at UFSC over the Internet. This system has recorded the BIPS dynamic performance during recent major power system disturbances. More recently, another UFSC project has installed PMUs at three 500 kV substations in the south of Brazil.

WAMS deployment is understood as an important step to allow the Brazilian transmission system to evolve to a true smart grid. There is a common agreement that synchronized measurements will be part of the next generation of SCADA and EMSs. Without a better measurement system, it will be very difficult to develop more advanced EMS applications.

3.4.3.5  Role of WAMPAC in a Smart Grid

Smart grids will rely on various utility systems interoperating with each other. The level of interaction will be governed, among other requirements, by the ability of these systems to communicate and exchange meaningful data at sufficient time intervals. PMU data are time tagged, and therefore any system interoperating with WAMPAC must include time synchronization. For example, WAMPAC systems can interoperate with network management systems to enable improved disturbance visualization in the control center. In turn, network management systems can supply other system information not modeled by WAMPAC systems to improve WAMPAC performance.

WAMPAC is the all-encompassing term for the WAM, WAP, and WAC applications of PMUs. The integration of WAM, WAP, and WAC into WAMPAC is therefore the integration of the applications utilizing the phasor data that originate from the PMUs and are collected and disseminated by the PDCs and supporting architectures (e.g., NASPI in the United States).

3.4.3.5.1  Maintaining Reliability, Stability, and Security against Large Disturbances

3.4.3.5.1.1  Wide Area Monitoring 

Since power grid conditions are constantly changing, the overall health of the grid is also constantly changing. It is the responsibility of grid operations to continually monitor real-time conditions to assess the current state of the system, to determine if corrective actions are required, and to identify and implement corrective actions if warranted. Synchrophasors and WAMPAC technology are smart grid enabling technologies that offer great promise in providing the industry with new SA tools to quickly assess current grid conditions. Specifically, PMUs are capable of directly measuring the system state (i.e., voltage and current phasors) very accurately and at high subsecond resolution, which is well suited for observing the dynamic behavior of the power grid and characterizing its stability. Of equal importance is the time-alignment property of these measurements, which allows phase angles from widely disparate locations to be compared to assess grid stress over a wide area. Measurement-based techniques that leverage these characteristics of WAMS technologies will complement existing EMS capabilities. Figure 3.83 illustrates how a hybrid solution of a PMU-based WAMS and a network model-based EMS can provide a more comprehensive grid security assessment. While measurement-based techniques may be applied to quickly and accurately assess grid conditions over a wide area, the model-based EMS applications offer the required context in terms of establishing dynamic security limits and suggesting corrective actions to mitigate potentially harmful conditions.

Figure 3.83   Integration of WAMS and EMS for enhanced grid security assessment. (© Copyright 2012 Alstom Grid. All rights reserved.)

The phase angle separation information provided by WAMS is a good indicator of grid stress and may signify potential voltage or oscillatory stability problems in the system. Similarly, rapid changes in phase angles, which can be quickly detected by the high-resolution measurements, can indicate a sudden weakening of transmission capacity due to line outages. Additionally, it is possible to assess the current damping levels of both local and inter-area oscillations directly from the measurements, provide locational information on where the oscillations are most prominent, and alarm the operator should poorly damped conditions occur (see Figure 3.84). Other measurement-based wide area security assessments include the use of localized frequency measurements from synchrophasors and observable time delays within the subsecond PMU data, along with any additional real-time EMS SCADA and transmission network topology information, to quickly identify the location of the origin of a disturbance, to detect and manage electrical islanding conditions, and then to monitor system restoration following grid separation.
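
As a concrete illustration of the phase angle monitoring concept, the following minimal sketch (Python) computes the angular separation between two time-aligned PMU angle streams and raises an alarm above a threshold. The function names and the 40° alarm threshold are illustrative assumptions, not values from any production system.

```python
# Hedged sketch of phase angle difference monitoring between two PMU
# locations; the 40 degree alarm threshold is an illustrative assumption.

def angle_difference_deg(angle_a_deg, angle_b_deg):
    """Smallest signed difference between two phase angles, in degrees."""
    return (angle_a_deg - angle_b_deg + 180.0) % 360.0 - 180.0

def check_stress(angle_a_deg, angle_b_deg, alarm_deg=40.0):
    """Treat widening angular separation as a proxy for grid stress."""
    sep = abs(angle_difference_deg(angle_a_deg, angle_b_deg))
    return sep, sep > alarm_deg

# Two time-aligned samples from PMUs at opposite ends of an interface
sep, alarmed = check_stress(12.0, -31.5)
print(f"separation = {sep:.1f} deg, alarm = {alarmed}")
```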

Figure 3.84   (a) Phase angle difference monitoring and (b) local and inter-area oscillations monitoring. (© Copyright 2012 Alstom Grid. All rights reserved.)

WAMS technologies also benefit steady-state network analysis applications. PMUs directly measure the system state, and using these additional real-time measurements improves the EMS state estimator application, which in turn helps increase grid reliability and performance. A predictive element of system operations is much needed in the smart grid to support the decision-making process of the control center operator. Once operators have assessed the current state and its vulnerability, they will need to rely on “what-if” analytical tools to make decisions that will prevent adverse conditions if a specific contingency or disturbance were to occur and to make recommendations on corrective actions. Thus the focus shifts from “problem analysis” (reactive) to “decision making” (proactive/preventive). WAMPAC will play a significant role in future DSSs that will use more accurate forecast information and more advanced analytical tools to confidently predict system conditions and analyze “what-if” scenarios in the transmission grid.

3.4.3.5.1.2  Wide Area Protection 

Protection in smart transmission grids will become more challenging with the entry of renewable generation and distributed energy resources. The underlying protection systems in traditional grids are largely designed around conventional generator responses to short circuits, where the pattern of fault current flow from generators to the short-circuit points can be estimated with a high degree of certainty. Renewable and distributed energy resources will distort these responses, increasing the risk of failure to detect short circuits and the risk of false operation of protection systems in the absence of a fault. A large amount of distributed generation on the grid can also make the system behave dynamically differently after fault clearing, posing a greater risk of system disturbances. These disturbances include power swings and oscillations that could propagate throughout the system. Fault detection and isolation schemes in a smarter grid will have to be revised to take into account the impact of distributed generators responding to short circuits. WAP will be able to process multiple local and remote measurements and implement wide area protection schemes to contain or prevent the spread of disturbances in an interconnected power system. WAP could be used to isolate unstable areas of smart grids to prevent a disturbance from cascading into other regions and also to identify the separation boundaries in the grid to create islands that survive major disturbances. WAP systems are designed to protect the system when control actions fail to address the disturbance. Protection actions include system separation, controlled islanding, generator tripping, and any other actions designed to contain a large-scale disturbance before it precipitates a system collapse.

3.4.3.5.1.3  Wide Area Control 

Existing transmission grids are being pushed to their limits by the tremendous growth of energy demand worldwide. The energy infrastructure must also take into account environmental constraints and energy-efficiency requirements while maintaining grid stability. Substantial amounts of renewable energy and the use of HVDC and FACTS devices in transmission systems add complexity to grid controllability. In order to utilize these assets to improve overall grid stability, a system-wide approach is essential. One of the major prerequisites for system-wide control utilizing signals from remote locations is a highly reliable, high-bandwidth communication infrastructure. PMUs are the building blocks of a wide area control system. Employing PMUs in a wide area control system that includes monitoring and control of HVDC, FACTS, or power system stabilizers can help to improve transfer capability and to counter disturbances such as the power oscillations shown in Figure 3.85. Such remote power grid information could come from a wide area monitoring system (WAMS). WAMS/WACS applications range from monitoring (such as state estimation and voltage security monitoring) to wide area control such as the damping of power oscillations. It is envisioned that future smart transmission grid operation could be greatly improved by WAMS/WACS. For example, events often cause the failure or isolation of transmission lines and generators that could potentially lead to grid collapse. WAC can be used to transfer blocking or overriding signals to protection and control systems to allow grids to ride through disturbances. For example, it is well known that during voltage collapse events, transformer tap changer operation to restore voltages to normal levels aggravates the voltage collapse. WAC can send blocking signals to these transformers to inhibit tap changer operation during voltage collapse.
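
To make the control concept concrete, the following sketch (Python) implements a simple supplementary damping loop of the kind described above: a remote angle-difference signal from a WAMS is passed through a washout (high-pass) filter and a gain, and the limited output modulates an HVDC or FACTS setpoint. The class name, gain, time constant, and limit are illustrative assumptions rather than values from any deployed controller.

```python
# Hedged sketch of a supplementary wide area damping loop: a remote
# angle-difference signal is washed out (high-pass filtered) and scaled,
# and the limited output modulates an HVDC/FACTS setpoint.
# Gain, time constant, and limit are illustrative assumptions.
import math

class WashoutDampingController:
    def __init__(self, gain=0.05, tw=5.0, dt=1.0 / 30.0, limit=0.1):
        self.gain, self.tw, self.dt, self.limit = gain, tw, dt, limit
        self._prev_in = 0.0   # previous input sample
        self._prev_out = 0.0  # previous washout output

    def step(self, angle_diff_deg):
        """One update at PMU rate; returns per-unit setpoint modulation."""
        # Discrete washout: y[k] = a*(y[k-1] + u[k] - u[k-1]), a = Tw/(Tw+dt)
        a = self.tw / (self.tw + self.dt)
        y = a * (self._prev_out + angle_diff_deg - self._prev_in)
        self._prev_in, self._prev_out = angle_diff_deg, y
        return max(-self.limit, min(self.limit, self.gain * y))

ctrl = WashoutDampingController()
for k in range(5):  # feed a slow 0.5 Hz swing in the angle difference
    u = ctrl.step(30.0 + 2.0 * math.sin(2 * math.pi * 0.5 * k / 30.0))
    print(round(u, 5))
```

The washout removes the steady-state component of the angle difference, so the controller acts only on the oscillatory part, which is the essence of modulation-based damping control.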

Figure 3.85   Smart grid wide area control (WAC).

3.4.3.5.1.4  Wide Area Stability 

Smart grid deployment results in both generation and loads becoming more dynamic and stochastic, making the grid more vulnerable to adverse oscillations. Electromechanical oscillations, also known as small signal stability problems, are one major threat to the stability and reliability of transmission grids. A poorly damped oscillation mode can become unstable, producing large-amplitude oscillations and leading to system breakup and large-scale blackout. Existing transmission capacity in most countries is derated in order to provide a margin of safety for reliable operations. There have been several incidents of system-wide low-frequency oscillations; the most notable is the August 10, 1996, U.S. western system breakup involving undamped system-wide oscillations. Figure 3.86 shows the measurement of power transfer from the Pacific Northwest to California for the August 10, 1996, event. The system deteriorated progressively after the first line tripped at 15:42:03. About 6 min later, undamped oscillations occurred and the system broke up into several islands.

Figure 3.86   Undamped oscillations leading to the August 10, 1996, U.S. western system islanding event.

Figure 3.87   (See color insert.) MANGO versus modulation stability control. (* Power system stabilization; § Pacific DC Intertie damping).

The first step in addressing this concern is to develop real-time monitoring of low-frequency oscillations. Significant effort has been devoted over the past 20 years to monitoring system oscillatory behavior from measurements. The deployment of advanced sensors such as PMUs provides the high-precision, time-synchronized data needed for detecting oscillation modes. A category of measurement-based modal analysis techniques, also known as ModeMeter, uses real-time phasor measurements to estimate system oscillation modes and their damping. There remains a need for new methods to turn modal information from a monitoring tool into actionable steps. Such methods should be able to correlate low damping with grid operating conditions in real time, so that operators can respond by adjusting operating conditions when low damping is observed.
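
A minimal ModeMeter-style sketch (Python, assuming numpy is available) is shown below: it estimates the frequency and damping ratio of the dominant oscillatory mode in a ringdown signal using linear prediction (Prony's method). The model order, sampling rate, and synthetic test signal are illustrative assumptions; practical tools use higher orders, windowing, and noise handling.

```python
# Hedged ModeMeter-style sketch: estimate frequency and damping of the
# least-damped oscillatory mode via linear prediction (Prony's method).
# Order, sampling rate, and the synthetic ringdown are illustrative.
import numpy as np

def dominant_mode(y, dt, order=2):
    """Return (frequency_hz, damping_ratio) of the least-damped mode."""
    N = len(y)
    # Fit y[n] = c1*y[n-1] + ... + cp*y[n-p] for n = order..N-1
    A = np.column_stack([y[order - i - 1:N - i - 1] for i in range(order)])
    c, *_ = np.linalg.lstsq(A, y[order:], rcond=None)
    poles = np.roots(np.r_[1.0, -c]).astype(complex)  # discrete-time poles
    s = np.log(poles) / dt                            # continuous eigenvalues
    osc = s[np.abs(s.imag) > 1e-6]                    # oscillatory modes only
    lam = osc[np.argmax(osc.real)]                    # least damped
    return abs(lam.imag) / (2 * np.pi), -lam.real / abs(lam)

# Synthetic 0.4 Hz ringdown with 5% damping, sampled at 30 frames/s
dt = 1.0 / 30.0
t = np.arange(0.0, 20.0, dt)
zeta, wn = 0.05, 2 * np.pi * 0.4
y = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)
print(dominant_mode(y, dt))  # approximately (0.40, 0.05)
```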

Modal Analysis for Grid Operations (MANGO) is a U.S. effort funded by the U.S. DOE to address the problem of adequately detecting transmission grid power oscillations and to establish a procedure to aid grid operation decision making for mitigating inter-area oscillations [15]. In contrast to alternative modulation-based methods, MANGO aims to improve damping through adjustment of operating points, whereas the modulation-based methods do not change the grid operating points. Figure 3.87 illustrates the difference between these two types of damping improvement methods. Modulation control retains the operating point but improves damping through automatic feedback control. Figure 3.88 illustrates the overall proposed MANGO framework.

Figure 3.88   Proposed MANGO framework.

Based on the effect of operating points on modal damping, MANGO can improve small signal stability through operating point adjustment. Simulation studies show that damping ratios can be controlled by operators through adjustment of grid operating parameters, such as generation redispatch, or load reduction as a last resort. At the same stress level (total system load), inter-area oscillation modes can be controlled by adjusting generation patterns to reduce flow on the interconnecting tie-line(s).

3.4.3.5.2  Management of Large Amounts of Intermittent Generation

Being able to monitor intermittent generation, such as renewable energy sources, in real time is valuable to the management of that generation. EMS and DMS systems might not have sufficient monitoring capacity, in terms of both monitoring points and signal sampling, to include intermittent generation. Dedicated WAMPAC systems can be deployed to perform such management functions. Under this scheme, WAMPAC can be used to manage highly changing operating conditions, including intermittent power flows, autonomously. The same system can communicate with existing EMS or DMS systems to receive operator dispatch orders, if necessary.

3.4.3.5.3  Maintaining Power Quality

PMUs have been used in the past to monitor current and voltage harmonics. With WAMPAC, this information can be relayed and visualized in the network control room so that operators can resolve power quality issues. WAMPAC time-synchronized data drive a host of monitoring applications that bring value to power quality monitoring in the control center and substations. Modern visualization tools, such as those based on geographic information systems (GIS), can be made dynamic by layering the time-synchronized data of the power system captured by WAMPAC. Operators can then benefit from wide area visibility of power quality, allowing prompt preventive action. For example, voltage sags or swells can be easily viewed in an animated fashion using the WAMPAC visualization applications previously discussed.

3.4.3.5.4  Increasing Transmission Efficiency

WAMPAC systems can contribute to improving smart grid transmission efficiency in the following ways: (1) improved smart grid network management, (2) congestion management via stabilizing control, and (3) real-time optimization of grid operating parameters. Unlike traditional EMS/SCADA systems, WAMPAC systems can capture an accurate real-time state of the system. For example, up-to-date conductor temperatures calculated from phasor measurements can help determine the additional power transfer capability of the transmission grid. Thermal margins in smart grid operations can be used by economic dispatch to ensure that the most economical units are allocated, thereby minimizing the total cost of power delivery. PMUs can also be used to enable stabilizing controls that mitigate potentially destabilizing phenomena such as voltage collapse and angle instability. Traditionally, operators place flow limits on transmission lines to ensure that no destabilization takes place following a disturbance, which results in inefficient utilization of transmission assets. In most cases, stabilizing control will contribute to transmission efficiency by releasing extra transmission margins and/or allowing more economical generator dispatch. As discussed previously, the deployment of PMUs enables an accurate estimation of the smart grid system model. This model can be used to calculate optimal set points for HVDC and FACTS devices, which can contribute to transmission efficiency if WAMPAC modulates their operating set points according to current smart grid conditions. For example, the objective can be expressed as an optimal power flow with minimum transmission losses.
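
As an illustration of the conductor temperature idea mentioned above, the following sketch (Python) backs out the series resistance of a line from time-aligned sending- and receiving-end phasors using a short-line model, then inverts the resistance-temperature relation for an aluminium conductor. The impedance, phasor values, reference resistance, and temperature coefficient are illustrative assumptions; a real implementation would use the full line model and account for measurement error.

```python
# Hedged sketch: infer conductor temperature from synchronized phasors at
# both line ends (short-line model), then invert the aluminium
# resistance-temperature relation. All numeric values are assumptions.
import cmath

def line_resistance(vs, vr, i):
    """Series resistance from the short-line model: Z = (Vs - Vr) / I."""
    return ((vs - vr) / i).real

def conductor_temp(r_ohm, r_ref_ohm, t_ref_c=20.0, alpha=0.0039):
    """Invert R(T) = R_ref * (1 + alpha * (T - T_ref))."""
    return t_ref_c + (r_ohm / r_ref_ohm - 1.0) / alpha

# Build a consistent example: receiving-end phasor, current, and a "hot"
# line impedance, then recover the temperature from the "measurements."
z_true = complex(9.0, 35.0)           # ohms: R at operating temperature, X
vr = cmath.rect(130e3, -0.03)         # receiving-end voltage phasor
i = cmath.rect(400.0, -0.35)          # line current phasor
vs = vr + z_true * i                  # sending-end voltage phasor

r = line_resistance(vs, vr, i)
print(f"R = {r:.2f} ohm -> T = {conductor_temp(r, r_ref_ohm=8.0):.0f} C")
```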

3.4.4  Role of Transmission Systems in Smart Grid

Renewable energy generation is a key topic in today’s power systems in all countries. Driven by the need to reduce CO2 emissions to stop, or at least slow, global warming, new “CO2-free” technologies are being investigated to fulfill the energy requirements of the future. Based on the Kyoto Protocol and its subsequent conferences, most countries have committed to specific CO2 reduction and renewable energy targets within the next 10–20 years.

Large synchronous power grids, for example, in the Americas and in Europe, continue to grow in complexity; they were not originally designed to serve the purpose they are expected to carry out nowadays, and this progression will continue into the future. Originally, conventional power plants, which are easy to control, were mostly built in the vicinity of cities and load centers, and the grid around them was designed to provide the required capacity. As power demand grew over the years, ever-increasing amounts of power had to be brought in from adjacent grids over large distances. In addition, in the course of deregulation and privatization, a great number of power plants changed location; meanwhile, a large amount of volatile wind power has been installed in many countries, causing parts of the grid that may already be overloaded to become even more so. For power grids, wind energy is the most difficult to accommodate because of its inherent variability, whether located onshore or offshore. These fluctuations create great difficulties for the grids, for not only the power flow, that is, the power supply, but also the voltage of the grids is affected, resulting in fluctuations of both active and reactive power. This deteriorates voltage quality to the point where the corresponding grid code can no longer be adhered to, and the adjacent loads as well as the grid itself are affected detrimentally. Moreover, in the event of grid faults, larger power outages, referred to as “voltage collapse,” can easily occur due to cascading tripping of wind or solar generators at low voltage levels. For this reason, grid codes in a large number of countries have been significantly tightened in order to keep the voltage within exact, time-dependent tolerance ranges and to protect the grid.

The security of power supply, in terms of reliability and blackout prevention, has the utmost priority when planning and extending power grids. The availability of electric power is the crucial prerequisite for the survivability of a modern society, and power grids are virtually its lifelines. The aspect of sustainability is gradually gaining in importance in view of challenges such as global climate protection and the economical use of power resources that are running short. Doing without electric power in order to reduce CO2 emissions is, however, no solution. A more appropriate way is to integrate renewable energy resources to a greater extent in the future energy mix and, in addition, to increase the efficiency of conventional power generation as well as power transmission and distribution without loss of system security. Future power grids will have to withstand increasing stresses caused by large-scale power trading and a growing share of fluctuating regenerative energy sources, such as wind and solar power. In order to keep generation, transmission, and consumption in balance, the grids must become more flexible, that is, they must be controlled in a better way. State-of-the-art power electronics with HVDC and FACTS technologies provide a wide range of applications with different solutions, which can be adapted to the respective grid in the best possible manner. DC transmission constitutes the best solution for reducing losses when transmitting power over long distances, and HVDC technology also helps control the load flow in an optimal way. This is the reason why, along with system interconnections, HVDC systems increasingly often become part of synchronous grids—either in the form of a back-to-back (B2B) link for load flow control and grid support or as a DC energy highway to relieve heavily loaded grids.

HVDC technology allows for grid access of generation facilities based on availability-dependent regenerative energy sources, including large onshore and offshore wind farms, and compared with conventional AC transmission, it incurs a significantly lower level of transmission losses on the way to the loads.

Based on these evaluations, Figure 3.89 shows the stepwise interconnection of a number of grids by using AC lines, DC B2B systems, DC long-distance transmissions, and FACTS for strengthening the AC lines. These integrated hybrid AC/DC systems provide significant advantages in terms of technology, economics, and system security. They reduce transmission costs and help bypass heavily loaded AC systems. With these DC and AC ultrahigh power transmission technologies, the “smart grid,” consisting of a number of highly flexible “microgrids,” will turn into a “super grid” with bulk power energy highways, fully suitable for secure and sustainable access to huge renewable energy resources such as hydro, solar, and wind, as indicated in Figure 3.90. This approach is an important step toward the environmental sustainability of power supply: transmission technologies with HVDC and FACTS can effectively help reduce transmission losses and CO2 emissions.

Despite a significant share of wind power, the stability of the grid has to be maintained; that is, grid access solutions are needed that provide both sustainability and security of electric power supply. This can be made possible by means of power electronics with dynamic fast control, which makes the grid more flexible and thus able to take in more regenerative and distributed energy sources. The solution of choice for this complex task is FACTS and HVDC technology, for these can be controlled on demand, taking a conventional grid to the “smart grid.”

Figure 3.89   (See color insert.) Hybrid system interconnections—“Super Grid” with HVDC and FACTS. (© Copyright 2012 Siemens. All rights reserved.)

Figure 3.90   Prospects of smart transmission grid developments. (© Copyright 2012 Siemens. All rights reserved.)

3.5  Distribution Systems

3.5.1  Distribution Management Systems

Stuart Borlase, Jiyuan Fan, and Tim Taylor

3.5.1.1  Distribution SCADA

Supervisory control and data acquisition (SCADA) systems are a relatively mature technology for the management of distributed asset systems. While they have long been used for the management of generation and transmission systems, they are increasingly being employed for the monitoring and control of distribution systems. Technology advances that will aid in the deployment of SCADA are still occurring, particularly in the communications area; these are described in other chapters of this book.

Figure 3.91 conceptually illustrates the major components of a SCADA system. The SCADA master hardware and software are typically located centrally at the control center. The control center consists of the SCADA application servers, the communications front-end processors, a data historian, interfaces to other control systems, operator workstations, and other supporting components. The primary SCADA system is often redundant, with a local backup system and/or a remote backup at another site. Other system environments are often installed by the utility for testing and quality assurance, development, and training. Various types of communications links to the remote terminal units (RTUs) are used, and these links are increasingly IP based, using open protocols.

In the application of SCADA to distribution systems, the costs of the additional sensors, IEDs (intelligent electronic devices), RTUs, communications, and SCADA master station must be weighed against the benefits that are realized. It is rarely economical to monitor and control an entire distribution system with SCADA points. Distribution organizations typically choose to apply SCADA only to equipment that provides an adequate return on investment in terms of improving reliability, volt/VAr control (VVC), situational awareness, remote control, or other business benefits. Monitoring and control of large distribution substations is almost always beneficial, but monitoring and controlling equipment further down the network on distribution feeders is not widespread, at least in the United States and at other utilities with geographically large distribution systems. Figure 3.92 shows typical equipment types that can be part of a SCADA system applied to overhead distribution systems.

Major components of a SCADA system.

Figure 3.91   Major components of a SCADA system.

The most common equipment monitored and controlled in distribution SCADA includes recloser controllers, switch controllers, voltage regulator controllers, and switched capacitor bank controllers. In many cases, IEDs and associated CTs and PTs are already installed at these devices on the feeder, and adding the communications capability is only an incremental cost. The status and analog values monitored at these points provide operators with valuable visibility of network operations further down the distribution system. In addition, if remote control is enabled for these devices, then reliability can be improved from the control center (through the recloser and switch controllers), and VVC can be improved (through the voltage regulator and switched capacitor bank controllers).

Typical overhead distribution equipment included in a distribution SCADA system.

Figure 3.92   Typical overhead distribution equipment included in a distribution SCADA system.

Possible overlap in separate transmission and distribution SCADA systems.

Figure 3.93   Possible overlap in separate transmission and distribution SCADA systems.

In underground distribution systems, SCADA can be applied to equipment such as the network protectors in network transformer vaults, automatic throwover equipment, and the ring-main units that are used in many parts of the world for protection and switching. In these cases, the status, analog, and control points are similar to those for the overhead distribution system.

With the extension of SCADA to the distribution system, an important consideration is the best way to manage the SCADA within the distribution substation, from both a technology viewpoint and a business process perspective. If the transmission system SCADA and the distribution system SCADA are handled by the same utility operators, then coordination is greatly simplified. But in many organizations, distribution operations and transmission operations are separate. In such cases, coordination between the two organizations for work flows such as switching, tagging, and control must be established. Development, maintenance, and coordination of the two network models must also be addressed.

Figure 3.93 shows a typical distribution substation, where an area of overlap exists between a newly defined distribution SCADA and an existing transmission SCADA/EMS. The figure shows the area of overlap between transmission and distribution, as well as the extent of their respective network models.

3.5.1.2  Trends in Distribution SCADA and Control

For master station developments, one of the key trends in the industry is the increase in bandwidth from the substation to the control center and from the monitoring and control points on the distribution network to the control center. This increase in bandwidth enables thousands of low-cost sensors to be deployed on the network, increasing the monitoring and measuring capability of SCADA. The applications at the master station thereby gain a more complete view of the network, increasing the accuracy of calculations and predictions and enabling more automated operations to take place.

The exponential rise in the number of real-time points means the old fixed capacities of SCADA systems have to be left behind; modern systems need to be able to scale up while maintaining and improving upon accepted performance standards. This is aided by the power of CPUs and the relatively cheap availability of large amounts of RAM, but it must also be inherently supported by the design of the software systems processing the information. Additionally, more accurate modeling of the distribution network will enable optimization algorithms to run, reducing peak load and deferring investment in transmission and distribution assets. Many localized fault locators will be deployed to accurately locate faults and enable restoration to occur quickly. Significant changes will also be seen in the area of database management and the reduction of configuration costs.

IEC 61850 will greatly improve communications between devices. For the first time, vendors and utilities have agreed upon an international standard protocol, which will allow an unprecedented level of interoperability between devices of multiple vendors in a seamless fashion. The self-description feature of IEC 61850 will greatly reduce configuration costs. Also, because there is a single standard for all devices, training, engineering, and commissioning costs will be greatly reduced. IEC 61850 supports both client/server and peer-to-peer communications, and the IEC process bus will allow for communication with the next generation of smart sensors. Work is currently under way to harmonize the EPRI CIM model and enterprise service bus IEC 61968 standards with the substation IEC 61850 protocol standards. Bringing these standards together will greatly reduce the costs of configuring and maintaining a master station through plug-and-play compatibility and database self-description.

GUI/HMI interfaces will also be greatly improved. Browser-based displays will become more prevalent, and new improvements that enhance the user experience will be developed, especially in the area of safety.

Control systems already contain more and more intelligence, and that trend will continue. Users are accustomed to operating on an exception basis, for example, responding to a feeder lockout alarm only after local auto-reclose schemes have completed. In the future, far more information will be available to the system, which in turn means that additional intelligence must be applied to that information in order to present the operator with the salient information rather than simply passing on more data. Taking the example of a fault on a distribution feeder further: instead of presenting the user with a lockout alarm, accompanied by associated low-volts, fault-passage indications, battery alarms, etc., and leaving it up to the operator to drill down, diagnose, and work out a restoration strategy, the distribution control system will notify the operator that a fault has occurred and that analysis and restoration are in progress in that area. The system will then analyze the scope of the fault using the information available, tracing the current network model; identifying current relevant safety documents, operational restrictions, and sensitive customers; and locating the fault using location data from the field. The system will automatically run load flow studies identifying current loading, available capacities, and possible weaknesses, using this information to develop a restoration strategy. The system will then attempt isolation of the fault and maximum restoration of customers with safe load transfers, potentially involving multilevel feeder reconfiguration to prevent cascading overloads of adjacent circuits. Once the reconfiguration is complete, the system can alert the operator to the outcome and even automatically dispatch the appropriate crew to the identified faulted section.

Control systems will not only be able to present information to operators for their consideration but will also be able to advise the operator on how best to deal with a situation. Systems are able to propose and validate switching for planned and unplanned work. Faults can be automatically isolated and service partially restored automatically, as in the aforementioned example, or, more likely in the short term, the system can carry out the analysis and present the proposed strategy to the users to carry out interactively. This mode of operation also supports the current reality that most distribution switching is manual; however, a control system can be set up to operate a “first pass restoration” via available SCADA devices to maximize the number of customers on supply, followed by a second wave of manual switching coordinated by the operator. The system conditions can be monitored and checked against expected future loads and contingencies without the operator taking action.

3.5.1.3  Current Distribution Management Systems

Distribution management systems (DMSs) started with simple extensions of SCADA from the transmission system down to the distribution network. A large proportion of dispatch and system operations systems in service today rely on manual and paper-based systems with little real-time circuit and customer data. Operators have to contend with several systems and interfaces on the control desk (“chair rolls”) based on multiple network model representations. The experience of operators is the key to safe system operation. With an increase in regulatory influence and smart grid focus on advanced technologies, there is a renewed interest in increasing investment in distribution networks to defer infrastructure build-out and reduce operating and maintenance costs through improving grid efficiency, network reliability, and asset management programs.

As distribution organizations have become more interested in increasing asset utilization and reducing operational costs, advanced DMS applications have been developed. These include load allocation and unbalanced load flow analysis; switch order creation, simulation, approval, and execution; overload reduction switching; and capacitor and voltage regulator control.

Two specific examples of advanced applications that reduce customer outage durations are the fault-location application and the restoration switching analysis (RSA) application.

Various DMS applications in common use today are described in the following paragraphs.

Fault detection, isolation, and service restoration (FDIR) is designed to improve system reliability. FDIR detects a fault on a feeder section based on remote measurements from the feeder terminal units (FTUs), quickly isolates the faulted feeder section, and then restores service to the unfaulted feeder sections. It can reduce the service restoration time from several hours to a few minutes, considerably improving distribution system reliability and service quality.

The fault-location application estimates the location of an electrical fault on the system. This is different from identifying the protective device that operated, which typically is done based on the pattern of customer outage calls or through a change in a SCADA status point. The location of the electrical fault is where the short-circuit fault occurred, whether it was a result of vegetation, wildlife, lightning, or something else. Finding the location of an electrical fault can be difficult for crews, particularly on long runs of conductor not segmented by protective devices. Fault location tends to be more difficult when troubleshooters or crews are hindered by rough terrain, heavy rain, snow, or darkness, and the more time required to locate the fault, the longer customers are without power. DMS-based fault-location algorithms use the as-operated electric network model, including the circuit connectivity, location of open switches, and lengths and impedances of conductor segments, to estimate fault location. Fault current information, such as magnitude, predicted type of fault, and faulted phases, is obtained by the DMS from IEDs such as relays, recloser controls, or RTUs. After possible fault locations are calculated within the DMS application, they are presented to the operator geographically on the console’s map display and in tabular displays. If a geographic information system (GIS) land base has been included, such as a street overlay, an operator can communicate to the troubleshooter the possible location, including nearby streets or intersections. This information helps crews find faults more quickly. As business rules permit, upstream isolation switches can be operated and upstream customers can be reenergized more quickly, resulting in much lower interruption durations. In short, the DMS fault-location application uses the electrical DMS model and fault current information from IEDs to improve outage management.
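
The distance-to-fault calculation at the heart of such an algorithm can be sketched as follows (Python): walk the as-operated feeder model segment by segment until the cumulative line reactance matches the fault reactance seen by the substation relay. The feeder data and measured reactance below are illustrative assumptions; production algorithms also handle laterals, load current, and fault resistance.

```python
# Hedged reactance-based fault-location sketch; feeder segment data and
# the relay-reported fault reactance are illustrative assumptions.

def locate_fault(segments, x_measured_ohm):
    """segments: list of (name, length_km, x_ohm_per_km) in feeder order.
    Returns (segment_name, distance_into_segment_km), or None if the
    measured reactance lies beyond the modeled feeder."""
    x_cum = 0.0
    for name, length_km, x_per_km in segments:
        x_seg = length_km * x_per_km
        if x_cum + x_seg >= x_measured_ohm:
            return name, (x_measured_ohm - x_cum) / x_per_km
        x_cum += x_seg
    return None  # check the model or the measurement

feeder = [("SEG-1", 2.0, 0.35), ("SEG-2", 1.5, 0.35), ("SEG-3", 3.0, 0.40)]
# Fault reactance seen at the substation, e.g., Im(V/I) during the fault
print(locate_fault(feeder, x_measured_ohm=1.0))  # -> ('SEG-2', ~0.86 km)
```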

RSA is an advanced application that improves reliability performance indices. The application evaluates all possible switching actions to isolate a permanent fault and restore customers as quickly as possible. It recommends suggested switching actions to the operator, who can select the best alternative based on criteria such as the number of customers restored, the number of critical customers restored, and the required number of switching operations. Upon the occurrence of a permanent fault, the application evaluates all possible switching actions and executes an unbalanced load flow to determine the overloaded lines and low-voltage violations that would result if the switching actions were performed. The operator receives a summary of the analysis, including a list of recommended switching actions. Similar to the fault-location application, this functionality uses the DMS model of the system to improve outage management and reduce the customer average interruption duration index (CAIDI) and SAIDI. The RSA application is particularly valuable during heavy loading and when the number of potential switching actions is high. Depending on the option selected, the application can execute with the operator in the loop or in a closed-loop manner without operator intervention. In closed-loop operation, the RSA application transmits control messages to distribution devices using communications networks such as SCADA radio, paging, or potentially the AMI infrastructure. Such an automated isolation and restoration process approaches what many call the “self-healing” characteristic of a smart grid.
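
The ranking step can be illustrated with a small sketch (Python). Each candidate restoration option is assumed to have already been screened by a load flow, summarized below as a simple spare-capacity check; feasible options are then ranked by customers restored and by the number of switching operations. All tie switch names and figures are invented for illustration.

```python
# Hedged RSA-style ranking sketch: feasibility stands in for a full
# unbalanced load flow check. All data below are illustrative.

def rank_restoration_options(options):
    """options: dicts with 'tie', 'actions', 'customers', 'load_kw',
    'spare_kw'. Returns feasible options, best first."""
    feasible = [o for o in options if o["load_kw"] <= o["spare_kw"]]
    # More customers restored first; fewer switching actions breaks ties
    return sorted(feasible, key=lambda o: (-o["customers"], o["actions"]))

options = [
    {"tie": "TIE-12", "actions": 2, "customers": 840,
     "load_kw": 1900, "spare_kw": 2500},
    {"tie": "TIE-07", "actions": 3, "customers": 840,
     "load_kw": 1900, "spare_kw": 2100},
    {"tie": "TIE-03", "actions": 1, "customers": 310,
     "load_kw": 700, "spare_kw": 900},
]
for o in rank_restoration_options(options):
    print(o["tie"], o["customers"], "customers,", o["actions"], "operations")
```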

Integrated volt/VAr control (IVVC) has three basic objectives: reducing feeder network losses by energizing or de-energizing the feeder capacitor banks, ensuring that an optimum voltage profile is maintained along the feeder during normal operating conditions, and reducing peak load through feeder voltage reduction by controlling the transformer tap positions in substations and voltage regulators on feeder sections. Advanced algorithms are employed to optimally coordinate the control of capacitor banks, voltage regulators, and transformer tap positions.
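
A full IVVC implementation solves a coordinated optimization across all devices; the deliberately simplified sketch below (Python) conveys the two levers involved: switching a capacitor bank against measured reactive demand and trimming a regulator tap for voltage reduction. The thresholds, the 1200 kvar bank size, and the 0.625% tap step are illustrative assumptions (the step matches a common 32-step regulator).

```python
# Hedged rule-based volt/VAr sketch; real IVVC coordinates all devices
# through optimization. Thresholds and ratings are assumptions.

def vvc_actions(q_kvar, v_end_pu, cap_on, cap_kvar=1200,
                v_min=0.95, v_target=0.98):
    """q_kvar: reactive flow near the bank; v_end_pu: end-of-feeder
    voltage. Returns a list of recommended control actions."""
    actions = []
    # Loss reduction: compensate sustained reactive flow with the bank
    if not cap_on and q_kvar > 0.6 * cap_kvar:
        actions.append("close capacitor bank")
    elif cap_on and q_kvar < -0.6 * cap_kvar:
        actions.append("open capacitor bank")
    # Peak reduction (CVR): lower voltage while respecting the limit
    if v_end_pu > v_target and v_end_pu - 0.00625 > v_min:
        actions.append("lower regulator tap by one step")
    return actions

print(vvc_actions(q_kvar=950.0, v_end_pu=1.01, cap_on=False))
```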

The topology processor (TP) is a background, off-line processor that accurately determines the distribution network topology and connectivity for display colorization and to provide accurate network data for other DMS applications. The TP may also provide intelligent alarm processing to suppress unnecessary alarms due to topology changes.

Distribution power flow (DPF) solves the three-phase unbalanced load flow for both meshed and radial operating scenarios of the distribution network. DPF is one of the core modules in a DMS, and its results are used by many DMS applications, such as FDIR and IVVC, for their analyses.
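
For a radial feeder, the classic solution technique is the backward/forward sweep, sketched below per phase in Python (a full DPF solves all three phases with mutual coupling and handles weakly meshed topologies). The two-section feeder, impedances, and loads are illustrative assumptions.

```python
# Hedged per-phase backward/forward sweep sketch for radial distribution
# power flow; the two-section feeder data are illustrative assumptions.

def radial_power_flow(v_source, z_sections, loads, tol=1e-4, max_iter=50):
    """z_sections[k]: impedance (ohm) of the line into node k+1;
    loads[k]: complex power (VA) drawn at node k+1. Returns voltages."""
    n = len(z_sections)
    v = [v_source] * (n + 1)
    for _ in range(max_iter):
        # Backward sweep: accumulate branch currents from the feeder end
        i_branch = [0j] * n
        for k in range(n - 1, -1, -1):
            i_load = (loads[k] / v[k + 1]).conjugate()  # I = (S/V)*
            i_branch[k] = i_load + (i_branch[k + 1] if k + 1 < n else 0j)
        # Forward sweep: update node voltages from the source
        v_new = [v_source]
        for k in range(n):
            v_new.append(v_new[k] - z_sections[k] * i_branch[k])
        if max(abs(a - b) for a, b in zip(v, v_new)) < tol:
            return v_new
        v = v_new
    return v

volts = radial_power_flow(
    v_source=7200 + 0j,                    # 7.2 kV source bus
    z_sections=[0.3 + 0.6j, 0.4 + 0.8j],   # ohms per section
    loads=[150e3 + 50e3j, 100e3 + 30e3j],  # VA at nodes 1 and 2
)
print([round(abs(u), 1) for u in volts])
```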

Load modeling/load estimation (LM/LE) is a very important base module in a DMS. Dynamic LM/LE uses all the available information from the distribution network—including the user transformer capacities and customer monthly billings, if available—combined with the real-time measurements along the feeders to accurately estimate the distribution network loading, for both individual loads and aggregated bulk loads. The effectiveness of the entire DMS relies on the data accuracy provided by LM/LE; if the load models and the load values are not accurate enough, all the solution results from the DMS applications will be useless.
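
A common starting point for LM/LE is simple load allocation, sketched below (Python): each transformer's share of the measured feeder-head demand is estimated in proportion to its kVA rating. The ratings and measurement are illustrative assumptions; real LM/LE refines this with billing data, load profiles, and real-time measurements along the feeder.

```python
# Hedged load-allocation sketch: distribute the measured feeder demand
# in proportion to transformer kVA. All values are assumptions.

def allocate_load(feeder_head_kw, transformer_kva):
    """Return an estimated kW demand per distribution transformer."""
    total = sum(transformer_kva.values())
    return {name: feeder_head_kw * kva / total
            for name, kva in transformer_kva.items()}

estimates = allocate_load(
    feeder_head_kw=3200.0,
    transformer_kva={"TX-101": 500, "TX-102": 250, "TX-103": 750},
)
print({k: round(v, 1) for k, v in estimates.items()})
```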

Optimal network reconfiguration (ONR) is a module that recommends switching operations to reconfigure the distribution network to minimize network energy losses, maintain optimum voltage profiles, and balance the loading conditions among the substation transformers, the distribution feeders, and the network phases. ONR can also be utilized to develop outage plans for maintenance or service expansion fieldwork.

Contingency analysis (CA) in the DMS is designed to analyze potential switching and fault scenarios that would adversely affect supply to customers or impact operational safety. With the CA results, proactive or remedial actions can be taken by changing the operating conditions or network configuration to guarantee a minimal number of customer outages and maximum network reliability.

Switch order management (SOM) is a very important tool for system operators in real-time operation. Several of the DMS applications and the system operators will generate numerous switch plans that have to be well managed, verified, executed, or rejected. SOM provides advanced analysis and execution features to better manage all switch operations in the system.

Short-circuit analysis (SCA) is an off-line function that calculates the short-circuit current for hypothetical fault conditions in order to evaluate the possible impacts of a fault on the network. SCA can then verify the relay protection settings and operation and recommend more appropriate relay settings or network configurations.
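
At its core, the calculation reduces to the Thevenin equivalent at the fault point, as in the minimal sketch below (Python) for a three-phase bolted fault. The 13.8 kV bus voltage and impedance values are illustrative assumptions; a real SCA also computes asymmetrical faults using sequence networks.

```python
# Hedged short-circuit sketch: three-phase bolted fault current from the
# Thevenin equivalent at the fault point. Values are assumptions.

def fault_current_a(v_ll_kv, z_source_ohm, z_feeder_ohm):
    """Three-phase fault: If = V_phase / |Zsource + Zfeeder|."""
    v_phase = v_ll_kv * 1e3 / 3**0.5
    return v_phase / abs(z_source_ohm + z_feeder_ohm)

# 13.8 kV bus, source impedance, and feeder impedance to the fault point
i_fault = fault_current_a(13.8, complex(0.1, 1.2), complex(0.6, 0.9))
print(f"{i_fault:.0f} A")
```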

Relay protection coordination (RPC) manages and verifies the relay settings of the distribution feeders under different operating conditions and network reconfigurations.

Optimal capacitor placement/optimal voltage regulator placement (OCP/OVP) is an off-line function used to determine optimal locations for capacitor banks and voltage regulators in the distribution network for the most effective control of the feeder VArs and voltage profile.

Dispatcher training simulator (DTS) is employed to simulate the effects of normal and abnormal operating conditions and switching scenarios before they are applied to the real system. In distribution grid operation, DTS is a very important tool that can help the operators to evaluate the impacts of an operation plan in advance or simulate historical operation scenarios to obtain valuable training on the use of the DMS. DTS is also used to simulate conditions of system expansions.

3.5.1.4  Advanced Distribution Management Systems

Distributed energy resources (DER) on the distribution network will come from disparate sources and be subject to great uncertainty. The electricity consumption of individual consumers is also highly uncertain as they respond to the real-time pricing and reward policies of power utilities for economic benefits. The conventional methods of LM and LE in the traditional DMS are no longer effective, rendering other DMS applications ineffective or altogether useless. The impact of demand response management (DRM) and consumer behaviors may be modeled or predicted from the utility pricing rules and reward policies for specified time periods, which can be incorporated into the LM and LE algorithms; this requires a direct linkage between the DMS and the DRM applications. When the DRM application attempts to accomplish load relief in response to a request from the independent system operator (ISO), it will need to verify from the DMS that the DRM load relief will not result in any distribution network connectivity, operation, or protection violations. The high penetration of distributed generation will require the load flow algorithm to deal with multiple, incremental, and isolated supply sources with limited capacities, as well as a network topology that is no longer radial or is weakly meshed. In a faulted condition, the distributed generation will also contribute to the short-circuit currents, adding to the complexity of the SCA, RPC, and FDIR logic.

A number of smart grid advances in distribution management are expected, as shown in Figure 3.94.

Figure 3.94   Advanced distribution management for the smart grid. (Fan, J. and Borlase, S., Advanced distribution management systems for smart grids, IEEE Power & Energy Magazine. © Copyright March/April 2009 IEEE.)

Monitoring, control, and data acquisition will extend further down the network to the distribution pole-top transformer and perhaps even to individual customers by means of an advanced metering infrastructure (AMI) and/or demand response and home energy management systems on the home area network (HAN). More granular field data will help increase operational efficiency and provide more data for other smart grid applications, such as outage management. Higher speed and increased bandwidth communications for data acquisition and control will be needed. Sharing communications networks with an AMI will help achieve system-wide coverage for monitoring and control down the distribution network and to individual consumers.

Integration, interfaces, standards, and open systems will become a necessity. Ideally, the DMS will support an architecture that allows advanced applications to be easily added and integrated with the system. Open-standards databases and data exchange interfaces (such as CIM, SOAP, XML, SOA, and enterprise service buses) will allow flexibility in the implementation of the applications required by the utility, without forcing a monolithic distribution management solution. For example, an open architecture in the databases and applications could allow incremental distribution management upgrades, starting with a database and a monitoring and control application (SCADA), then later adding an IVVC application with minimal integration effort. As part of the overall smart grid technology solution or roadmap, the architecture could also allow interfacing with other enterprise applications (such as a GIS, an outage management system (OMS), or a meter data management system (MDMS)) via standard interfaces. Standardized web-based user interfaces will support multiplatform architectures and ease of reporting. Data exchange between the advanced DMS and other enterprise applications, such as meter data management and outage management, will increase operational benefits.

FDIR will require a higher level of optimization and will need to include optimization for closed-loop, parallel-circuit, and radial configurations. Multilevel feeder reconfiguration, multiobjective restoration strategies, and forward-looking network loading validation will be additional features of FDIR.

IVVC will include operational and asset improvements—such as identifying failed capacitor banks and tracking capacitor bank, tap changer, and regulator operation to provide sufficient statistics for opportunities to optimize capacitor bank and regulator placement in the network. Regional IVVC objectives may include operational or cost-based optimization.

LM/LE will change significantly as customer consumption behaviors become less predictable, more smartly managed at the individual level, and increasingly affected by demand response management.

With a significant increase in the real-time measurements available from more widespread installations of field IEDs on feeders and from end-user meter data and AMI systems, distribution state estimation (DSE) will play an important role in monitoring overall grid operating conditions and supporting situational awareness, as well as in supporting IVVC and other distribution optimization functions. More accurate estimation of distribution system voltages, extending from the substation down the feeders to end-user locations, will allow IVVC to precisely control the voltage profiles along the feeder and at the end user to realize greater economic benefits.
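
The workhorse of state estimation is weighted least squares, sketched below (Python, assuming numpy) with a linearized measurement model so the mathematics stays visible; an actual DSE solves a nonlinear three-phase model iteratively. The measurement matrix, values, and weights (higher for trusted, PMU-grade sensors) are illustrative assumptions.

```python
# Hedged weighted least squares (WLS) state estimation sketch with a
# linear measurement model z = Hx + noise. All values are assumptions.
import numpy as np

def wls_estimate(H, z, weights):
    """Solve min (z - Hx)' W (z - Hx):  x = (H'WH)^-1 H'Wz."""
    W = np.diag(weights)
    G = H.T @ W @ H          # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Two-state toy example: three measurements, a trusted sensor on row 0
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])
z = np.array([1.02, 0.97, 0.06])
weights = np.array([1e4, 1e2, 1e2])  # higher weight = more trusted
print(wls_estimate(H, z, weights))
```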

TP, DPF, ONR, CA, SCA, and RPC will be used on a more frequent basis. They will need to include single-phase and three-phase models and analysis, and they will have to be extended down the network to individual customers. Moreover, distribution optimization functions such as FDIR, IVVC, and ONR will be more effectively integrated for real-time and look-ahead operational support. Distribution optimization functions will also be coordinated with consumer demand management and DER optimization.

Distributed generation, microgrids, and customer generation (such as plug-in hybrid electric vehicles [PHEVs]) will add many challenges to the protection, operation, and maintenance of the distribution network. Small generation sources at the customer interface will complicate power flow analysis, CA, and emergency control of the network. Protection and control schemes will need to account for bidirectional power flow and multiple fault sources. Protection settings and fault restoration algorithms may need to be dynamically changed to accommodate changes in the network configuration and supply sources.

The development of new technologies and applications in distribution management can help drive optimization of the distribution grid and assets. The seamless integration of smart grid technologies is not the only challenge; equally challenging is the development and implementation of the features and applications required to support the operation of the grid in the new environment introduced by clean energy and distributed generation, as well as by the smart consumption of electricity by end users. DMSs and distribution automation applications have to meet these new challenges, requiring advances in the architecture and functionality of distribution management. Expect to see traditional distribution management evolve to include advanced applications that monitor, control, and optimize the network, that is, an advanced DMS for the smart grid.

Databases and data exchange will need to facilitate the integration of both geographical and network databases in an advanced DMS. The geographical and network models will need to provide single-phase and three-phase representations to support the advanced applications. Ideally, any changes to the geographical data (from network changes in the field) will automatically update the network models in the database and user interface diagrams. More work is required in the areas of distributed real-time databases, high-speed data exchange, and data security.

Dashboard metrics, reporting, and historical data will be essential tools for tracking the performance of the distribution network and related smart grid initiatives. For example, advanced distribution management will need to measure and report the effectiveness of grid efficiency programs, such as VAr optimization, as well as the system average interruption duration index (SAIDI), the system average interruption frequency index (SAIFI), and other reliability indices related to delivery optimization smart grid technologies. Historical databases will also allow verification of the capability of the smart grid optimization and efficiency applications over time, and they will allow a more accurate estimation of the change in system conditions expected when the applications are called upon to operate. Alarm analysis, disturbance recording, event replay, and other power quality metrics will add tremendous value to the utility and improve relationships with customers. Load forecasting and load management data will also help with network planning and optimization of network operations.
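
SAIFI and SAIDI have simple, standard definitions (IEEE 1366): total customer interruptions, and total customer interruption minutes, each divided by the total customers served. The sketch below computes them from a handful of made-up outage records.

```python
# Sketch of the standard SAIFI/SAIDI calculations from interruption records.
# The outage records and customer count are made-up numbers.
outages = [  # (customers_interrupted, duration_minutes)
    (1200, 90),
    (300, 45),
    (2500, 180),
]
customers_served = 50_000

saifi = sum(n for n, _ in outages) / customers_served      # interruptions per customer
saidi = sum(n * d for n, d in outages) / customers_served  # minutes per customer
caidi = saidi / saifi                                      # minutes per interruption
print(f"SAIFI={saifi:.3f}, SAIDI={saidi:.1f} min, CAIDI={caidi:.1f} min")
```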

Analytics and visualization tools will be needed to assimilate the tremendous increase in data from field devices and from integration with other applications, and they will necessitate advanced filtering and analysis capabilities. Visualization provides a clear overview of large amounts of data; data filtering and visualization will help operators quickly analyze network conditions and improve the decision-making process. Visualization in an advanced DMS would help display accurate, near-real-time information on network performance at each geospatially referenced point on a regional or system-wide basis. For example, analytics and visualization could show voltage magnitudes as color contours on the grid, monitor and alarm deviations from nominal voltage levels, or show line loading through a contour display with colors corresponding to loading relative to capacity. System operators and enterprise users will benefit greatly from analytic and visualization tools in day-to-day operations and planning.
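
As a minimal sketch of the geospatial voltage display described above, the following plots feeder nodes at (x, y) locations colored by per-unit voltage, so deviations from nominal stand out at a glance. Coordinates and voltages are made-up sample data.

```python
# Illustrative geospatial voltage visualization with sample data.
import matplotlib.pyplot as plt

xs = [0, 1, 2, 3, 4, 5]            # miles along the feeder (sample data)
ys = [0, 0.2, 0.1, 0.4, 0.3, 0.5]  # lateral offsets (sample data)
volts_pu = [1.02, 1.01, 0.99, 0.97, 0.95, 0.94]

sc = plt.scatter(xs, ys, c=volts_pu, cmap="RdYlGn", vmin=0.95, vmax=1.05)
plt.colorbar(sc, label="Voltage (p.u.)")
plt.xlabel("Distance from substation (mi)")
plt.ylabel("Lateral offset (mi)")
plt.title("Feeder voltage profile (sample data)")
plt.show()
```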

Enterprise integration is an essential component of the smart grid architecture. To increase the value of an integrated smart grid solution, the advanced DMS will need to interface and share data with numerous other applications. For example, building on the benefits of an AMI with extensive communication coverage across the distribution system and obtaining operational data from the customer point of delivery (such as voltage, power factor, loss of supply) help to improve outage management and IVVC implementation.

Enhanced security will be required for field communications, application interfaces, and user access. The advanced DMS will need to include data security servers to ensure secure communications with field devices and secure data exchange with other applications. The use of IP-based communications protocols will allow utilities to take advantage of commercially available and open-standard solutions for securing network and interface communications.

3.5.2  Volt/Var Control

Stuart Borlase, Jiyuan Fan, Xiaoming Feng, Carroll Ivester, Bob McFetridge, and Tim Taylor

VVC relates to switching of distribution substation and feeder voltage regulation equipment and capacitor banks with two main objectives: reducing VAr flow on the distribution system and adjusting voltage at the customer delivery point within required limits. An effective VVC approach combines, coordinates, and optimizes the control of both VAr flow and customer voltage. Components of VVC are as follows:

VAr control, VAr compensation, power factor correction

Substation and distribution feeder capacitor banks are used to minimize VAr flow (improve power factor) on the distribution feeder during all load levels (peak and base). Reduction of VAr flow reduces distribution system losses, which reduces load on the substation and distribution feeders.

Conservation voltage reduction

CVR (conservation voltage reduction) is the control of substation transformer LTCs (load tap changers) and distribution feeder voltage regulators to reduce the delivery voltage, within specified and safe margins at the customer service point, during peak periods of load. This may reduce customer load and, in turn, the load on the substation and distribution feeders. CVR may also be implemented during base loading periods. Voltage control is exercised not only for CVR but also for normal operation and regulatory compliance.

Integrated Volt/Var Control

IVVC is the coordinated control of VAr flow and CVR to reduce distribution feeder losses and control the voltage profile on the feeder, which may reduce system losses and improve service voltage to the customer. Other possible benefits include a reduction in capacitor bank inspections and troubleshooting.

Volt/VAr optimization

VVO (volt/VAr optimization) is the capability to optimize the objectives of VAr (loss) minimization and load reduction (with voltage constraints) using optimization algorithms and well-defined control objectives subject to various system constraints, through centralized or decentralized decision making.

The discussions that follow on voltage and Var Control reference the voltage levels and operation of the U.S. electrical system as an example.

3.5.2.1  Inefficiency of the Power Delivery System

Electric utilities have two concerns when it comes to transmitting electricity from the generator to the customer. First, it must get there safely and reliably. Second, the majority of what is generated must make it to the customer for the utility to be profitable; that is, power delivery to the customer must be efficient. For a utility to maximize profits, it must minimize electric losses on the system during the transfer of electricity from the generation site to the customer. Electric losses are mostly a result of the heating effect (I²R losses) of current passing through power delivery equipment. These are known as resistive losses. Other electric losses, known as reactive losses, are a result of losses in magnetic flux coupling in transformers and other inductive equipment, including transmission and distribution lines themselves. Power is transmitted over long distances at high voltages in order to reduce losses. Utilities have increased both transmission and distribution voltages to reduce losses. Traditionally, 115 and 230 kV were considered the primary transmission voltages, but lately 500 and 765 kV have become more dominant. The same is true at the distribution level, where 4 kV distribution lines are replaced with 12, 25, or even 34.5 kV lines. The purpose of the distribution system is to reduce the voltage to a lower level and deliver power at the required voltage level to the customer. In the United States, residential customers interface to the system at 120 or 240 V, not the 115–500 kV found on transmission lines. It is cost prohibitive to install 115 kV–120 V transformers at the many customer service points. So, from the generator to the customer, the voltage is transformed multiple times and carried over hundreds of miles. Each transformation causes losses. Likewise, each mile the power flows causes additional losses in lines and cables. Since the distribution system operates at a lower voltage and higher current than the transmission system, the distribution system produces more losses per mile for an equivalent impedance.
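
To see why losses fall so quickly with voltage, note that for a fixed power delivery the line current scales inversely with the voltage, so the resistive loss scales with the inverse square (a standard result, illustrated here with the feeder voltages mentioned above):

$$P_{\text{loss}} = I^2 R, \qquad I \propto \frac{1}{V} \;\Rightarrow\; P_{\text{loss}} \propto \frac{1}{V^2}.$$

Upgrading a feeder from 4 to 12 kV, for example, cuts the current by a factor of 3 and the resistive losses on the same conductors by a factor of 9, for the same power delivered.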

Another factor that can reduce electrical losses is a reduction in the distance from generation to end use. In the early days of electricity, power plants were built close to the customers in large cities. Since then, several trends have pushed power plants further from the customers. Customers have moved away from city centers and live greater distances apart, and plant siting has changed as well: customers do not want power plants close to their homes, and with the advent of gas peaking generation plants, sites have been selected mainly for the availability of gas pipelines, water, and transmission line capacity rather than for proximity to the main load centers. This has increased the distance electricity is transmitted and distributed, which increases electrical losses on the system. In the future, distributed generation, with smaller plants closer to the end customer, will help reduce electrical losses.

Electrical losses occur at every level of the transmission and distribution system due to the electrical impedance (resistive and reactive) of the equipment: from the step-up transformers at the power plants and the transmission and distribution grid (lines and transformers) down to the customer end delivery points. VArs in the system are caused by current flowing through inductive equipment on the system, such as transformers and lines, and also by the type of load. VArs in the system increase the current flowing in the system, which results in an increase in energy delivery losses. To reduce the VArs, capacitance in the form of capacitor banks is added to the system.

3.5.2.2  Voltage Fluctuations on the Distribution System

Electric utilities are required to deliver voltage to the customer at a nominal voltage within a specified operating range. In the United States, for single-phase, three-wire service, the nominal service voltage is specified under ANSI (American National Standards Institute) standard C84.1 [2] as 120 Vac with an acceptable range (Range A) of ±5% of the nominal voltage. Therefore, in the United States, any voltage between 114 and 126 Vac is deemed an acceptable voltage or, as the standard states, a "favorable voltage." The voltage is allowed to enter a wider tolerable zone of 110–127 Vac (Range B) for short durations. Acceptable delivery voltages for U.S. customers are listed in Table 3.8. Range A is considered the favorable zone, where the occurrence of delivery voltages outside these limits should be infrequent. Range B is considered the tolerable range and includes voltages above and below the Range A limits. Corrective action must be taken for sustained voltages in Range B to bring them within Range A requirements.

If the distribution system voltage is too high, it can damage power delivery equipment, such as transformers, as well as consumer equipment, such as appliances and electronic equipment. High voltages can also reduce the life of lighting products. If the incoming voltage is too low, lighting will dim, motors will have less starting torque and can overheat, and some equipment, such as computers and TVs, will power down. As a general rule, low voltages tend to disrupt the operation of the load on a distribution system, while high voltages cause more permanent damage.

Table 3.8   Acceptable Delivery Voltages for U.S. Customers

Nominal Service Voltage            Range B Minimum   Range A Minimum   Range A Maximum   Range B Maximum
Percent of nominal                 91.7%             95%               105%              105.8%
Single phase
  120/240, three wire              110/220           114/228           126/252           127/254
Three phase
  240/120, four wire               220/110           228/114           252/126           254/127
  208Y/120, four wire              191/110           197/114           218/126           220/127
  480Y/277, four wire              440/254           456/263           504/291           508/293
2.4–34.5 kV, percent of nominal    95%               97.5%             105%              105.8%

Source: Voltage ratings for electrical power systems and equipment, American National Standard ANSI C84.1-1989.

Power flows from generators through transmission and distribution lines and several transformers before it reaches the end customer. Transmission and distribution lines and transformers all have electrical impedance (resistive and reactive) and the current flowing through the impedance results in a voltage drop. Therefore, the main factors affecting the amount of voltage drop are the load (the amount of current), the types of load, and the distribution system impedance.
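
A common first-order approximation, added here for reference, ties these three factors together. For a line section with resistance R and reactance X carrying real power P and reactive power Q at voltage V, the voltage drop is approximately

$$\Delta V \approx \frac{RP + XQ}{V}.$$

The XQ term is why reducing VAr flow (for example, with capacitor banks) flattens the feeder voltage profile even when the real power delivered is unchanged.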

If the voltage at the customer closest to the distribution substation is below 126 V and the voltage at the customer furthest from the substation is above 114 V, then there may be no need for voltage corrective action. This is typically not the case. Without any means to compensate for the reduction and continual change in distribution voltages, customers closest to the substation will experience the highest voltage levels and customers furthest from the substation will experience the lowest. To regulate the voltage levels at the substation and along the distribution feeder, and to ensure the voltage levels are within limits at the customer, substation power transformers and distribution feeder regulating transformers (voltage regulators) are equipped with means to actively change the turns ratio (taps) of the transformer while energized. The tap-changing equipment on power transformers is referred to as an LTC.

3.5.2.3  Effect of Voltage on Customer Load

There are two main types of load on the system: constant resistive (constant impedance) load and constant power load. With constant resistive load, when the voltage is decreased, the current decreases as well. This causes a reduction in electrical losses on the distribution system and a further reduction in voltage drop due to the losses. With constant power load, when voltage is decreased, current increases. Resistive load, mostly lighting and heating, as well as appliances powered via nonswitching power supplies, was the predominant type of load on electric systems in the past. Constant power load, such as fluorescent lighting, appliances with switching power supplies, and heat pumps, is becoming more dominant. Motors are the worst type of load for voltage fluctuations because, as the voltage decreases, the current increases roughly linearly; if the voltage at the motor load falls too low, the motor stalls, causing a dramatic increase in load current.
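
The following minimal sketch contrasts the two load types just described: a constant-impedance (resistive) load draws less current as voltage falls, while a constant-power load draws more. The load sizes are illustrative.

```python
# Sketch contrasting constant-impedance and constant-power load behavior.
# A 1 kW resistive load at 120 V has R = 120^2/1000 = 14.4 ohms.
def constant_impedance_current(v, r=14.4):
    return v / r          # current falls with voltage

def constant_power_current(v, p=1000.0):
    return p / v          # current rises as voltage falls

for v in (126, 120, 114):
    print(f"{v} V: Z-load {constant_impedance_current(v):.2f} A, "
          f"P-load {constant_power_current(v):.2f} A")
```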

Studies have shown that a reduction in voltage typically results in a reduction in load when the voltage is first reduced. The level of load reduction achieved with voltage reduction is highly dependent on the type of load on the distribution feeder [3]. The effects of the voltage reduction diminish over time as certain loads require additional current at the lower voltage to complete the task. For example, with hot water heaters, once the water temperature falls below the desired setting, the water heater turns on. At lower voltages, the water heater does not produce as much heat and therefore has to run longer to heat the water to the appropriate temperature.

To decrease the load, either during emergencies or when generation is not available or too expensive, utilities can only reduce the voltage to the point where the customer at the end of the feeder is still above 114 V. This means that customers closest to the substation are served power at a higher voltage. If this is the case, the full benefit of voltage reduction cannot be achieved. A constant level (or “flat”) voltage profile from the substation down the distribution feeder to each customer would allow for maximum voltage reduction benefits. Large reactive loads and impedance in the distribution system result in an uneven voltage profile that sometimes cannot be entirely compensated by adjusting the substation transformer and distribution feeder voltage regulator taps. In this case, reducing the VArs flowing in the distribution system will help maintain a flatter voltage profile along the distribution feeder.

3.5.2.4  Drivers, Objectives, and Benefits of Voltage and Var Control

Not all utilities have generation, transmission, and distribution systems as part of their operations, so the drivers, objectives, and benefits of VVC differ among utilities. Whether utilities have generation plants or purchase energy from other utilities or power pools, by reducing the losses in the system through Var Control, the utility can obtain more revenue for the same amount of electricity generated or purchased. However, the grid has a limited capacity. As load continues to increase, the capacity of the grid has to increase with it, meaning more power plants, more transmission lines, and more transmission and distribution substations. This is a high cost to the utility, both financially and politically. Customers consider electricity a commodity and are not always willing to pay for the costs that it takes to deliver it. Customers want to keep the cost down but are not always willing to allow the construction of the plants, lines, and substations in their neighborhoods. By reducing losses through Var Control and customer demand through voltage control, utilities can increase the capacity of the grid and avoid or defer system expansion projects. Construction lead time is also a factor. It typically takes upwards of a year to build a substation, several years to build transmission lines, and, in the case of nuclear power plants, up to 10 years to bring a new power plant online. With current demand growth, it may not be possible to keep up with customer demand, even if customers are willing to help with the costs. If the amount of power generated can be reduced by eliminating losses, less CO2 will be generated from the power plants. Reducing the emissions from the plants can be a tremendous benefit for both the utilities and the public in general.

Another advantage of VVC has been driven by deregulation and the trading of electricity on the open market. Utilities generate electricity in multiple ways: hydropower plants (power plants that use water to generate electricity), fossil power plants (plants that use coal, natural gas, or oil), nuclear power plants, and green power from the sun or wind. Each of these plants has different production costs, with hydro and nuclear the cheapest, followed by the fossil fuel plants, and finally the green plants. Utilities maximize their profits by generating electricity from the most cost-effective plants, using the remaining plants for peak or emergency power only. Sometimes it is actually cheaper for the utility to reduce load than to run an additional power plant for a few hours to support a temporary increase in load. Weekday load typically starts to increase in the morning when customers are getting ready for work and commercial and industrial businesses start for the day. Load then typically peaks during the day before falling to its lowest level overnight, as shown in Figure 3.95. The distribution feeder and substation load profile depends on the type of customer and load. Other factors, such as weather, can change the load pattern. Residential loads typically peak between 2 and 6 pm in the summer in areas with high temperatures due to air-conditioning load. Irrespective of the load pattern, there will always be times when the load on the system is higher or lower than average. Ideally, a constant load on the system would let utilities run the most cost-effective base generation, such as fossil or nuclear power plants. However, peak loading requires utilities to operate more expensive, fast-responding generation, such as gas turbines. Utilities are also required to have reserve generation capacity on immediate standby as contingency for any loss of generation on the system or any possible system reconfiguration due to equipment failures. As discussed earlier, lower voltages will, for at least short periods of time, reduce the power consumed by customer loads. The opposite is true for higher customer delivery voltages, which cause a general increase in load. Customers on the same distribution feeder with exactly the same loads will have slight differences in their electricity bills, with the customers closer to the substation having slightly higher bills because their incoming service is at a higher voltage than that of customers located at the end of the line. Therefore, changing the voltage can change the loading and, therefore, the revenue. One would think that a utility would always want to run the voltage on the high side to increase revenue, but this is not always the case. There are several operating conditions under which it benefits the utility to run the system at lower voltages.

Figure 3.95   Effect of voltage control on customer load.

While VVC has been implemented to varying degrees over many years, smart grid initiatives now bring renewed focus on the implementation considerations and measured benefits of voltage and Var Control to utilities. Current utility installations have shown that IVVC can reduce distribution feeder losses by up to 25% and peak load by up to 3% (see Table 3.9), depending on the load characteristics.

Another key benefit of VVC is security and reliability. As the growth of customer load outpaces the supply, utility power delivery reserves dwindle, making the system more susceptible to brownouts (suppressed voltage conditions) and blackouts (loss of power). VVC helps increase the available capacity of the power delivery system.

An emerging consideration in generation is CO2 emissions, along with carbon taxes and credits. If a utility can reduce the amount of generation required, especially from the coal and other fossil plants, the utility can reduce CO2 levels, thereby reducing taxes and possibly even building up credits.

While VVC brings significant benefits to the utility, the downside is that more equipment is required on the distribution system, such as voltage regulators and capacitor banks, as well as the means to monitor and control these devices. However, smarter monitoring and control can minimize the total number of equipment operations and therefore reduce maintenance costs. Smarter monitoring can also identify failed equipment, such as fuses and pole-top transformers or capacitor banks, to allow for quicker service restoration. Besides detecting outages faster, the smart grid will enable utilities to restore faster by remotely changing the configuration of the grid. This change in configuration can add complexity to volt/VAr implementation, as will be discussed in upcoming sections.

Table 3.9   Typical Load Reduction Due to CVR

Voltage Reduction (%)   Load Reduction at Unity Power Factor (%)   Load Reduction at 0.9 Power Factor (%)
2                       1.5                                        0.5
4                       3.0                                        2.0

Source: M.S. Chen, R.R. Shoults, and J. Fitzer, Effects of Reduced Voltage on the Operation and Efficiency of Electric Loads, EPRI EL-2036, Volumes 1 & 2, Research Project 1419-1, University of Texas at Arlington, Arlington, TX, 1981.
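
The table implies a CVR factor, the percent load reduction per percent voltage reduction commonly quoted in the CVR literature:

$$\text{CVR}_f = \frac{\%\,\Delta P}{\%\,\Delta V} = \frac{1.5\%}{2\%} = 0.75 \text{ at unity power factor}, \qquad \frac{0.5\%}{2\%} = 0.25 \text{ at 0.9 power factor}.$$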

3.5.2.5  Volt/Var Control Equipment inside the Substation

The majority of electricity delivery losses occur at distribution voltages. Distribution substations are typically fed from two or more transmission lines (e.g., 115 or 230 kV). There are typically one or more power transformers in the distribution substation that step the voltage down to between 4 and 25 kV. The low side of each transformer connects to a bus that feeds multiple distribution feeders to distribute the power to the end customers. Some substation configurations allow the interconnection of the low-side busbars so that one transformer can feed multiple buses if another transformer fails or is removed from service for maintenance. The distribution power transformers and low-side buses in the substation are the primary point in the distribution system for voltage regulation.

3.5.2.5.1  Power Transformers

A power transformer has fixed winding ratio connections (taps) and, as discussed earlier, may also have variable taps to actively change the turns ratio of the transformer while energized (LTCs). The fixed taps allow for an adjustment of the voltage on the low side and there are typically settings for 0%, ±2.5%, and ±5%. This means when the rated incoming voltage is present, the secondary voltage can be at rated voltage (0% fixed tap) or 2.5% or 5% higher or lower than rated voltage. The fixed tap setting can only be changed when the transformer is out of service. If the transformer is equipped with an LTC, the low-side voltage can be varied with the transformer in service. The LTC typically allows for a 10% variation in voltage in either the raise or lower direction by having 16 taps, 8 taps that lower the low-side voltage and 8 taps that raise the low-side voltage. The LTC has very little effect on the high-side voltage. When in the neutral position, the LTC has no effect on the low-side voltage.
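
The following minimal sketch shows how fixed taps and LTC positions combine to set the low-side voltage, using the ranges described above (fixed taps of 0, ±2.5%, and ±5%; an LTC span of ±10% over 8 raise and 8 lower taps, which works out to 1.25% per step per the text). The rated voltage and positions in the example are illustrative.

```python
# Sketch of fixed-tap and LTC effects on the low-side voltage, assuming
# 1.25% per LTC step (8 steps covering 10%, as described in the text).
def low_side_voltage(v_rated_low, fixed_tap_pct=0.0, ltc_position=0):
    ltc_pct = ltc_position * 1.25          # ltc_position: -8 .. +8
    return v_rated_low * (1 + fixed_tap_pct / 100) * (1 + ltc_pct / 100)

# 12,470 V rated low side, +2.5% fixed tap, LTC raised 3 positions:
print(f"{low_side_voltage(12470, 2.5, 3):.0f} V")   # ~13261 V
```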

The LTC is a single-phase sense, three-phase operate device. It monitors one phase of voltage and current to make decisions but then acts on all three phases. For the LTC to operate correctly, all three phases of load need to be fairly well balanced. If one phase has significantly more load than another, the voltage drop on that phase will be greater; if the LTC is not sensing that phase, the customers on that phase may have low voltage near the end of the line. Conversely, if the phase with the highest current is the phase monitored, the customers on the other two phases may have high voltage near the source. Therefore, in order to have correct three-phase operation while monitoring only one phase, an assumption is made that all three phases carry close to the same load. A photograph of a typical distribution substation power transformer with an LTC is shown in Figure 3.96, and a single-line representation of a distribution substation power transformer with an LTC controlling voltage on three distribution feeders is shown in Figure 3.97.

Figure 3.96   Photograph of a distribution substation power transformer with an LTC.

Figure 3.97   Single-line representation of a distribution substation power transformer with an LTC controlling voltage on three distribution feeders.

The first advantage of an LTC is cost. If the transformer is feeding more than two or three distribution feeders, it will be less expensive to use the LTC than to use single-phase regulators on each distribution feeder. Second is the cost of property: the LTC takes up the smallest amount of real estate. There are several disadvantages of LTCs. First, because the LTC is a single-phase sense device, all three phases have to be fairly well balanced. This is easy to achieve in urban areas, where customers are condensed in small areas and distribution feeders are short, typically less than 4 miles. It is not so easy in rural areas with distribution feeders over 15 miles long, most of which are single-phase runs. The next disadvantage is that a failure of either the LTC or the LTC controller can cause over- or undervoltages. Because the LTC is regulating the voltage to all the customers on all three phases of all feeders attached to the transformer, a failure affects many more customers. Another disadvantage is maintenance: to maintain the LTC mechanism, the entire transformer has to be taken out of service. This typically requires many hours of switching and limits maintenance to off-peak load times such as evenings and weekends.

For smart grid applications, LTCs make life much more difficult. Because one device affects the voltage on many feeders, it is difficult to achieve a flat voltage profile. Each distribution feeder attached to the LTC will have a different length and, therefore, different impedance, as well as different loading conditions. Therefore, the voltage drops (and profiles) along each feeder will be different, making it difficult for the LTC alone to regulate all the feeders correctly.

3.5.2.5.2  Substation Bus Regulation

Substation bus regulation regulates the voltage on the low-side bus of the distribution substation power transformer, ahead of the individual distribution feeders. There are two approaches to bus regulation: single-phase regulation and three-phase regulation. With three-phase regulation, the approach is similar to that of the LTC with one exception: the regulator is separate from the power transformer. The advantage is that maintenance of the three-phase regulator can be performed without taking the transformer out of service. The disadvantage of a separate bus regulation LTC over the transformer LTC is that the bus regulation LTC is more expensive and its installation footprint is much larger. This approach was popular many years ago, but few utilities implement, and few vendors manufacture, three-phase regulators today.

The second approach to bus regulation is the use of three single-phase voltage regulators. One advantage of using three single-phase regulators is that, with each phase being individually sensed and controlled, the loads do not have to be balanced for effective phase regulation. Another advantage is that a failure of the regulator or regulator controller affects only the customers on that phase. The primary disadvantage of single-phase bus regulation is equipment size. The single-phase regulators cannot handle as much current as the three-phase bus regulators, and therefore the substation transformer size and loading must be limited when using single-phase voltage regulators. From a cost standpoint, if the transformer is 20 MVA or smaller, single-phase bus regulation can be very economical. For this reason, single-phase bus regulation is very popular in rural substations, which tend to have fewer distribution feeders and less load because customers are more dispersed. A photograph of a set of three single-phase bus regulators is shown in Figure 3.98, and a single-line representation of a distribution substation with bus voltage regulation is shown in Figure 3.99.

Bus regulation, whether single phase or three phase, still poses many of the same limitations as the LTC, chiefly one device, or set of devices, regulating the voltage of multiple distribution feeders.

3.5.2.5.3  Single-Phase Voltage Regulators

With single-phase regulation, each distribution feeder is regulated separately before leaving the substation. In this approach, the transformer and low-side bus are left unregulated. This approach requires the most installation space in the substation and is typically the most expensive, but it offers the most flexibility and reliability. For example, if the substation transformer is attached to a bus feeding four distribution feeders, then 12 single-phase regulators and controls would be required.

The reliability of this voltage regulation approach is higher than that of the approaches discussed earlier, since a failure of a single control or regulator impacts only the customers on that phase of that distribution feeder. The flexibility comes from the fact that each feeder is independently regulated, which allows different distribution feeder lengths and loads to be accommodated independently. For this reason, many utilities are now designing new substations with single-phase voltage regulation even though it has the highest installed cost. A photograph of single-phase voltage regulators on distribution feeders in the substation is shown in Figure 3.100, and a single-line representation of a distribution substation with single-phase voltage regulators on each of three distribution feeders is shown in Figure 3.101.

Figure 3.98   Photograph of three single-phase bus voltage regulators.

Figure 3.99   Single-line representation of a substation with bus voltage regulation using single-phase voltage regulators.

Figure 3.100   Photograph of single-phase voltage regulators on distribution feeders in the substation.

Figure 3.101   Single-line representation of a substation with single-phase voltage regulators on each of three distribution feeders in the substation.

3.5.2.5.4  Substation Capacitor Banks

Placing capacitor banks at the substation bus level is sometimes done to both regulate the bus voltage and supply VArs to the distribution system and load (VAr compensation). Use of capacitor banks at the substation level has disadvantages. First, the banks are typically fairly large, so when they operate, they have a larger effect on the secondary voltage. This can cause LTCs and regulators to operate in response. It is not recommended to use substation capacitor banks with LTCs, as it is typical to see the LTC operate five to eight times per operation of the capacitor bank. As discussed earlier, it is difficult to perform maintenance on LTCs, so it is not a good idea to increase the number of operations. The other negative effect of substation capacitor banks is that the bank provides VArs at the substation and does not compensate the VArs flowing throughout the distribution system. Therefore, station capacitor banks do not reduce losses in the distribution system. Capacitor banks for VAr compensation to reduce losses are more effective if distributed throughout the distribution system and applied close to the inductive load.

Figure 3.102   Photograph of a substation capacitor bank. (© Copyright 2012 Siemens. All rights reserved.)

While voltage regulators, whether LTC, three phase, or single phase, can adjust the secondary or load voltage dynamically, capacitor banks can perform a similar function. When a capacitor bank is added at a point in the power system, there is a resulting increase in system voltage at the capacitor bank. Therefore, capacitor banks provide both VAr and voltage support in the system. There are two primary differences between capacitor banks and regulators regarding voltage control. First, regulators are unidirectional in that they only affect the voltage on the load side of the regulator. Capacitors are bidirectional in that when they operate, the voltage on both sides of the capacitor is affected. Second, regulators can control the voltage in small increments, whereas capacitor banks do not have multiple steps of voltage control: a bank is either on, which causes the voltage to increase, or off, causing the voltage to decrease. The effect that the capacitor has on the secondary voltage is a combination of the rating of the bank (kVAr or MVAr) and the location of the bank in the distribution system. A photograph of a typical distribution substation capacitor bank is shown in Figure 3.102.
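
A widely used rule of thumb, added here for reference rather than taken from the original text, estimates the voltage step from switching a capacitor bank using the bank rating and the short-circuit capacity at the point of connection:

$$\%\,\Delta V \approx \frac{Q_{\text{cap}}}{S_{\text{sc}}} \times 100,$$

so a 1.2 MVAr bank at a bus with 100 MVA of short-circuit capacity raises the local voltage by roughly 1.2%. The large MVAr ratings typical of substation banks produce correspondingly large voltage steps, which is consistent with the LTC interaction described above.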

3.5.2.5.5  Summary

As can be seen, some form of voltage regulation is typically required inside the substation. Transformers with LTCs or bus regulators can be used in service areas with short distribution feeders and balanced load, but from a smart grid standpoint, ultimate flexibility is achieved with individual distribution feeder voltage regulation. Even with this equipment in place, it is typically not possible to adequately regulate the voltage along the length of an entire distribution feeder from within the substation. For this reason, devices on the distribution feeder can be added to provide additional voltage support as well as VAr compensation. Control of these down-line devices has to be coordinated with the substation devices in order to achieve the overall IVVC goals.

3.5.2.6  Volt/Var Control Equipment on Distribution Feeders

There are two main components used to aid in voltage regulation and VAr compensation outside the substation, down the distribution feeder: the single-phase line regulator and the pole-top capacitor bank. Most utilities use pole-top capacitors to flatten the voltage profile along the distribution feeder and then regulators to adjust the voltage levels.

Figure 3.103   Photograph of three single-phase voltage regulators down-line on a distribution feeder.

3.5.2.6.1  Single-Phase Line Regulators

On feeders of considerable length or load, the regulating device at the substation may not be adequate due to the excessive amount of voltage drop along the entire feeder. If the voltage difference between the customer closest to the substation and the customer furthest away is more than 10–12 V, additional voltage correction will be required by placing single-phase line regulators somewhere between the substation and the end of the feeder (“down-line” of the substation). The single-phase voltage regulators used along the distribution feeder are similar to the single-phase voltage regulators used in the substation but are typically smaller in size and rating.

Substation regulation will regulate the voltage of the entire feeder, with primary voltage control of the section of the feeder up to the set of line regulators. The line regulators will control the voltage on the section of the feeder beyond them. Coordination of the operation of multiple sets of regulators is required, usually with the substation regulation acting first and then the line regulators. Coordination is achieved through time delay settings, with each set of regulators having a longer time delay than the set on its source side. A photograph of three single-phase voltage regulators down-line on a distribution feeder is shown in Figure 3.103, and a single-line representation of a distribution substation with single-phase voltage regulators down-line on three of the four distribution feeders is shown in Figure 3.104.

From a smart grid standpoint, additional real-time information from customer revenue meters on the feeder can help better determine the placement and sizing of line regulators required to handle both normal and emergency operations.

3.5.2.6.2  Pole-Top Capacitor Banks

There are two types of pole-top capacitor banks: fixed and switched. Most utilities use fixed capacitor banks to compensate for the minimum or average amount of VAr support required on the distribution system. VAr flow on the distribution system varies daily; therefore, fixed capacitor banks cannot effectively compensate VAr loads continuously over the load profile, and in some cases, fixed capacitor banks with an inappropriate rating or location on the distribution feeder can contribute to increased VAr flow on the distribution system. A photograph of a pole-top switched capacitor bank is shown in Figure 3.105.

Switched capacitor banks are similar to fixed banks except that they have additional switches and controls, which allow the capacitor banks to be switched on or off either remotely, via SCADA or an IVVC controller, or locally, via an automatic sensing control. Capacitor banks are similar to LTCs in that they use a single-phase sense with a three-phase operate. If multiple capacitor banks are deployed on the same distribution feeder, each bank is typically connected to sense alternating phases of the feeder. The sense on the given phase can be voltage, current, or both, depending on the type of control selected. Coordination of capacitor bank control is done in exactly the opposite order to that of regulators. Coordinated control is required between the capacitor banks and the regulators in order to avoid conflict between the control schemes. Coordinated control between voltage regulation and capacitor bank switching is the premise of IVVC.

Figure 3.104   Single-line representation of a substation with single-phase voltage regulators down-line on three distribution feeders.

Figure 3.105   Photograph of a pole-top switched capacitor bank.

The main reason to use a voltage regulator in the distribution system is voltage correction. There are two possible reasons for applying capacitor banks: power factor correction and voltage correction. Some utilities use capacitor banks for voltage correction. Switched capacitor banks are less expensive to purchase and install than single-phase line voltage regulators, and their long-term maintenance costs are also lower. The theory is to apply many small capacitor banks on the distribution feeder so that the voltage can be regulated in smaller steps. The capacitor bank controls used for this type of system are typically voltage controls, as it is easier to coordinate substation LTCs or feeder voltage regulators with switched capacitor banks if they are all using the same measured parameter. If capacitor banks are used mainly for VAr compensation, then coordination between the capacitor banks and the LTCs and regulators is not as critical. The capacitor bank controls will typically be based on power factor or VAr measurements and include voltage overrides, which do require coordination with the LTCs and regulators.

3.5.2.7  Volt/Var Control Implementation

3.5.2.7.1  Voltage Control

Regulator controllers are becoming much more advanced. Some controllers include functions such as voltage sag and swell detection, flicker detection, CBEMA (Computer and Business Equipment Manufacturers' Association) power quality violation detection, fault detection, and harmonic measurements. They also have predictive maintenance features such as motor current monitoring and incorrect tap position alarming. These data can help customer service engineers respond to customer complaints about incoming service. As regulator controllers become more intelligent, changes to the construction of the regulator will follow. Some controllers can detect faults on the feeder, which can be useful in providing exact fault locations to field personnel for quicker restoration of service. Currently, protection relays in the substation can estimate fault distances on distribution feeders, but the accuracy of the estimate depends on the distribution feeder design and configuration, since the feeder may contain numerous subcircuits (taps or laterals). Therefore, more fault information from equipment down the distribution feeder, such as voltage regulators, can provide more accurate operational data to the utility.

Smart grid can aid the deployment of voltage reduction in many ways. Typically, it is difficult to reduce the voltage outside the substation because there are no communications to regulators on the distribution feeders. With the communications networks being built for distribution automation and AMI, utilities can now extend communications to equipment and controllers down the distribution feeders. The additional monitoring and data collection on distribution system operations can also help determine the optimal location of voltage regulation equipment on any given feeder and the optimal control of voltage to meet the objectives of combined voltage and Var Control.

3.5.2.7.2  Var Control

There are many different types of capacitor bank controllers. In earlier days, most controls operated on either time of day or temperature. Now, more advanced controls are being used that monitor voltage, current, power factor, VAr flow, or a combination of these. Twenty years ago, switched capacitor bank controllers did not include remote control capabilities, but today the majority of controllers communicate via either one-way or two-way communications. The amount of intelligence in a capacitor bank controller depends on whether communications to the controller is provided. If communications is not provided, then all the intelligence must be in the controller; if communications is included, the controller can have varying degrees of intelligence.

Time-based controls were popular 20 years ago because of their cost. A time-based control required only voltage to power the capacitor bank switch, so no additional sensing equipment was needed. The theory behind the time control was that, for residential feeders, load would peak during certain times of the day: in the morning as people prepared for work and in the early evening as they came home and cooked dinner. The control could be programmed to close the capacitor during the peak hours and open it during the remaining portion of the day. For commercial and industrial feeders, the load would be greatest when businesses were open or plants were manufacturing and would then drop off when the business closed. The control could even be programmed to take weekends and holidays into account. Time-of-day controls have lost favor for several reasons. First, load in general is not as predictable as in the past, with the advent of 24 h businesses. Second, the time clocks in the capacitor bank controllers could drift, causing the banks to operate at the incorrect time. The controllers lose power every time there is a feeder outage, and this power loss would cause the control to lose time; therefore, batteries were used to keep time when the control was without power. This creates a maintenance burden, as personnel have to inspect and replace batteries. Other concerns, such as daylight saving time, can also create difficulties. So, while the time-based control is the least expensive to install, it can create large maintenance costs and may not always operate correctly.

Some capacitor bank controllers are based on temperature: the controllers monitor the ambient temperature and switch the bank on or off at predefined temperatures. The theory is that, particularly in warm climates on hot days when air conditioners are running, the capacitor banks are needed to compensate the considerable increase in VAr current from the air-conditioning load. The advantage of temperature controls is that, as with time-based controls, they require no additional sensors and are thus very inexpensive. Unlike time controls, temperature-based controllers do not require a battery backup to keep time and therefore require less maintenance. The problem with temperature-based capacitor bank controllers is that not all load follows the temperature predictably. For this reason, temperature controls are not used very often.

Voltage control for capacitor bank switching has gained popularity for several reasons. First, as with the time and temperature controls, voltage control requires no additional sensing equipment and is therefore relatively inexpensive. There are two basic types of voltage controls: absolute voltage control and delta voltage control. Both use voltage as the sensed quantity to decide operation, but in different manners. Absolute voltage control is based on the actual measured voltage level of the distribution feeder, as in the control of voltage regulators. Delta voltage control is based on the change in voltage level measured on the distribution feeder. The theory of delta voltage control is that the impedance of the system is primarily inductive; therefore, any voltage change is due to a current change, and the majority of that current would be reactive.

More advanced capacitor bank controllers using current control, power factor control, and Var Control all require monitoring of the current flowing in the distribution feeder. Since these types of controllers require a current input, the location of the capacitor bank is limited: the bank must be placed on the main distribution feeder and not off any of the taps or laterals (subcircuits, usually single phase). This is because, while the voltage on the tapped sections of the line is comparable to that on the main distribution feeder, the current seen at any tap is only the current flowing through that tap, not the total current flowing through the main feeder. Var Control is typically the most popular of the three, the obvious reason being that the capacitor bank should be switched on to provide reactive support when there is a lagging power factor.

While time- and temperature-based capacitor bank controls are no longer used extensively as the primary controlling function, many utilities still employ an override feature using either time or temperature. A primary control function based on voltage or VAr flow is now more common but may include a temperature or time override. For example, the capacitor bank may be switched on or off at different voltage levels, but if the temperature exceeds a certain level, the capacitor bank is closed to provide more VAr compensation.
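
The following minimal sketch illustrates a control decision of the kind just described: VAr flow as the primary criterion, with voltage and temperature overrides. The thresholds and the simple priority logic are hypothetical, not taken from any particular vendor's controller.

```python
# Illustrative capacitor bank control decision: VAr-based primary control
# with voltage and temperature overrides. All thresholds are hypothetical.
def capacitor_bank_command(kvar, volts, temp_f, bank_closed,
                           close_kvar=600, trip_kvar=-300,
                           v_low=114.0, v_high=126.0, temp_override_f=95.0):
    if volts > v_high and bank_closed:
        return "OPEN"    # voltage override: avoid overvoltage
    if volts < v_low and not bank_closed:
        return "CLOSE"   # voltage override: support sagging voltage
    if temp_f >= temp_override_f and not bank_closed:
        return "CLOSE"   # temperature override: expected A/C VAr load
    if kvar > close_kvar and not bank_closed:
        return "CLOSE"   # high lagging VAr flow: add compensation
    if kvar < trip_kvar and bank_closed:
        return "OPEN"    # leading (overcompensated): remove the bank
    return "NO-OP"

print(capacitor_bank_command(kvar=750, volts=121.0, temp_f=88.0,
                             bank_closed=False))   # -> CLOSE
```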

3.5.2.7.3  Volt/Var Control Approaches

IVVC can be implemented in several ways, and each approach has its own strengths and weaknesses. Utilities need to analyze their existing infrastructure and their VVC goals to determine the best deployment approach.

3.5.2.7.3.1  Local Intelligence Approach

The first, basic approach relies on the individual transformer LTC, voltage regulator, and capacitor bank controllers to control the voltage and VArs, with the transformer LTC and voltage regulator controllers set to coordinate properly with the capacitor controllers, as discussed earlier. With this approach, the transformer LTC, voltage regulator, and capacitor bank controllers are typically not monitored and controlled remotely, although some utilities will still use communications to monitor overcurrent and to obtain metering quantities as a means to verify proper operation by the local controls. The capacitor bank controllers run in automatic mode and may use time, temperature, voltage, current, or VArs as the determining factor for operating the capacitor bank. Figure 3.106 shows the basic architecture of the local intelligence approach to VVC.

Figure 3.106   Local intelligence approach to VVC.

The capacitor banks are usually coordinated so that the one furthest from the substation closes first and opens last. This is implemented by varying the time delays in each control. The capacitor controls are also set to operate before the transformer LTCs and voltage regulators, again by the use of time delay settings. Several types of capacitor controls are used with this approach. The least expensive option is a time, temperature, or voltage control, as these require no additional inputs; the voltage is already required to operate the switch. Typically, a delta voltage (change in voltage) algorithm performs best. The time-of-day and temperature controls are used with predictable loads, typically residential, and are usually effective only in certain climates and during certain times of the year. The voltage control is easy to use if the capacitor bank is being used more for voltage support than VAr support, as it is easier to coordinate the control of transformer LTCs and single-phase voltage regulators with down-line capacitor banks if both are using voltage measurements to make decisions. A simple voltage control may not be appropriate for installations where the capacitor bank is used for power factor correction only.

While this approach to VVC is typically not supervised directly with communications, it can be supervised at the distribution feeder level with metering from the substation. Metering data, from protection relays, substation panel meters, or single-phase voltage regulators, can allow the VVC scheme to monitor the voltage, power factor, and VAr levels at the distribution feeder level to verify proper operation of the downstream devices.

3.5.2.7.3.2  Decentralized Approach 

This approach is similar to the previous approach, but with the addition of some level of integrated and coordinated control logic at the substation level, remotely communicating with the transformer LTC, voltage regulator, and capacitor bank controllers. The volt/VAr controller in the substation provides "localized" monitoring and coordinated control of the volt/VAr scheme on a feeder-by-feeder basis. Communications with the field controllers provides the ability for the distribution operations control center to override the scheme remotely. Two-way communications with the field controllers can also help detect abnormal operating conditions and alarms, such as blown capacitor bank fuses, which eliminates the need for routine inspection trips to the field and may pay for the additional cost of adding the communications. Figure 3.107 shows the basic architecture of the decentralized approach to VVC.

Figure 3.107   Decentralized approach to VVC.

3.5.2.7.3.3  Centralized Approach with No Local Intelligence 

The centralized approach is currently the most prevalent. It consists of central, master control logic (usually implemented in the distribution management system [DMS]) communicating with the controllers in the field. With this approach, the transformer LTC, voltage regulator, and capacitor bank controllers do not require local intelligence for VVC actions. The capacitor bank controller tends to be an inexpensive switch with little, if any, decision-making capability; it has control outputs to turn the capacitor bank on or off and communications for remote control. The main cost of implementing this approach lies in the communications network and the centralized intelligence. The communications networks employed for this type of VVC approach have traditionally been one-way paging systems, in order to reduce implementation costs. The centralized master sends out a command to a transformer LTC, voltage regulator, or capacitor bank controller and expects that the controller received the command and acted accordingly. More advanced control schemes also check metering quantities at the feeder level in the substation to determine whether the VAr load or voltage changed by the expected amount. If not, an alarm is generated to inspect the capacitor control, switch, capacitors, and fuses for a possible problem. Figure 3.108 shows the basic architecture of the centralized approach to VVC with no local intelligence.
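
The verify-by-metering step just described might look like the following minimal sketch: after commanding a capacitor bank closed, compare the change in feeder VArs at the substation to the bank's rating and raise an inspection alarm on a mismatch. The command and metering functions are hypothetical stubs, and the tolerance is illustrative.

```python
# Sketch of centralized command verification via substation metering.
# read_feeder_kvar and send_close are hypothetical stubs supplied by the caller.
def verify_capacitor_close(read_feeder_kvar, send_close, bank_kvar=1200,
                           tolerance=0.25):
    before = read_feeder_kvar()
    send_close()
    after = read_feeder_kvar()
    observed_drop = before - after   # closing the bank should cut lagging VArs
    if abs(observed_drop - bank_kvar) > tolerance * bank_kvar:
        return "ALARM: inspect control, switch, capacitors, and fuses"
    return "OK"

# Demo with canned readings: 1500 kVAr before, 320 kVAr after the close.
readings = iter([1500.0, 320.0])
print(verify_capacitor_close(lambda: next(readings), lambda: None))  # -> OK
```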

The centralized VVC scheme is usually responsible for monitoring and coordinating control over a distribution service area with numerous substations. This allows the implementation of control schemes that have the objective of maximizing the benefits over a large service area, and not on an individual distribution feeder or substation basis, where the best control decision for each feeder or substation may not maximize the benefits of VVC for the service area.

Figure 3.108   Centralized approach to VVC with no local intelligence.

3.5.2.7.3.4  Centralized Approach with Local Intelligence 

This approach is similar to the previous approach, but the one-way communications with the transformer LTC, voltage regulator, and capacitor bank controllers is replaced with two-way communications, typically digital cellular or an unlicensed mesh-network radio system. By implementing two-way communications networks, the centralized approach receives additional information from the field controllers, such as confirmation of control operations and measurements of VArs and voltage on the distribution system at the field controller locations. This further enhances the capabilities of the centralized approach.

Also, with this approach, the transformer LTC, voltage regulator, and capacitor bank controllers have local intelligence for VVC actions in order to safeguard against loss of communications or issues with the central, master VVC logic. The field controllers are typically operating individually with their local control logic, but the central master monitors and controls setpoints in the field controllers to bias and enhance the overall operation of the VVC scheme. Therefore, if communications is lost or the central master fails, the field controllers will still be operating locally, although the control actions may not be optimal.

Any additional sources of field operations data will help enhance the VVC scheme. For example, some utilities are adding end-of-line voltage sensors to the feeders so that the centralized application can determine how far the voltage can be reduced without affecting customers at the end of the feeder. While capacitor bank controllers typically monitor only one voltage phase, these end-of-line monitors typically monitor all three phases. Some utilities are deploying midfeeder monitors that also report back VArs on each phase, and some are adding communications to line regulators, reclosers, and sectionalizing switches further down the distribution network to provide additional system data.

The centralized control approach is further complicated by reconfigurations of distribution feeders due to switching or outages, which changes the load, VAr flow, and voltage along the distribution feeder and at the substation. This drives the need for the integration and coordination of control actions and objectives among distribution automation applications in a smarter grid.

From a broader view than just the control, the centralized VVC approach typically provides the best flexibility and operation of the distribution system but comes with additional cost. The cost can be offset if there is an existing communications infrastructure that can support communications to the field devices. The cost can also be offset if the communications system is shared with other applications, such as FDIR for communications to fault locators, down-line reclosers, and sectionalizing switches. As part of smart grid communication deployments, some utilities are also considering communications networks that can support distribution automation field communications as well as automatic meter reading.

3.5.2.7.3.5  Hybrid Approach with Local Intelligence 

The hybrid approach to VVC is a combination of the centralized and decentralized approaches and leverages the advantages of both. The basic principle is to implement a hierarchical control scheme with intelligence in the local field controllers, in the substation, and in the central control master. One advantage of the hybrid approach over the centralized approach is in the communications network, where the hybrid approach can use different communications technologies without depending on a single, system-wide communications network for centralized VVC. Also, a failure of the communications network impacts only a limited area of the control system instead of the entire system. The hierarchical approach can also distribute the logic and processing power required for the VVC scheme, while maintaining the objectives of the control over a large service area. The hierarchy is designed to default to the lower levels of control if there are any issues with communications or control at the higher levels. The hybrid approach is cost effective where utilities have some form of substation controller that can be easily upgraded to include VVC in the substation. A major drawback of the hybrid approach is the additional hardware, software, programming, configuration, and coordination of the levels of control. Figure 3.109 shows the basic architecture of the hybrid approach to VVC.

3.5.2.8  Volt/VAr Optimization

3.5.2.8.1  Optimization versus Control

In simple terms, VVC is the capability to control the voltage levels and reactive power (VArs) at different points on the distribution grid by using a combination of substation transformer LTCs, feeder voltage regulators, and capacitor bank controllers. A distribution system is complex in that an operation on any single control device can result in considerable changes in multiple aspects of the system. For example, when a capacitor bank is energized, a certain amount of reactive power is injected into the system, which will affect the voltages, VAr flows, and power factors along the distribution feeder and at the distribution substation and, in turn, will affect the distribution system energy loss and load demand.
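
A small worked example, with purely illustrative numbers, makes the coupling concrete: energizing a 600 kVAr bank on a feeder carrying 4000 kW and 2500 kVAr raises the substation power factor and lowers the current, and hence the I²R losses, upstream of the bank:

    import math

    p_kw, q_kvar = 4000.0, 2500.0   # feeder load before switching (illustrative)
    bank_kvar = 600.0               # reactive power injected by the bank

    def power_factor(p, q):
        return p / math.hypot(p, q)

    q_after = q_kvar - bank_kvar
    print(f"PF before: {power_factor(p_kw, q_kvar):.3f}")   # ~0.848
    print(f"PF after:  {power_factor(p_kw, q_after):.3f}")  # ~0.903
    # At the same voltage, lower apparent power means lower current, so
    # I^2R losses upstream of the bank drop roughly with (S_after/S_before)^2.
    s_ratio = math.hypot(p_kw, q_after) / math.hypot(p_kw, q_kvar)
    print(f"Upstream loss reduction: ~{(1 - s_ratio**2) * 100:.0f}%")  # ~12%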

IVVC is the capability to coordinate the control actions of the substation transformer LTCs, voltage regulators, and capacitor bank controllers such that the interactions of the control actions are integrated and optimally coordinated during the decision-making process. VVO optimizes the objectives of VAr (loss) minimization and load reduction (with voltage constraints) using optimization algorithms and well-defined control objectives, subject to various system constraints, through centralized or decentralized decision making.

3.5.2.8.2  Selecting the Right Optimization Objective

The control objective of VVO should be of business and engineering significance. People sometimes minimize an indirect objective instead of the real objective by assuming that the two are equivalent. Intuitively, such an assumption makes sense and appears to match operational experience. However, close examination of the results often indicates otherwise.

Figure 3.109   Hybrid approach to VVC with local intelligence.

For example, some utilities use capacitor switching to bring the distribution feeder power factor close to unity. A unity power factor can be a perfectly legitimate business objective, but here the unstated business objective is actually to minimize feeder losses, with power factor correction believed to be the means of reducing them. If the real business objective is power factor correction and the measured distribution feeder power factor is below unity, then the correct action is to switch in more capacitor banks, no matter where they are located along the distribution feeder. However, if the real objective is to minimize losses, the correct action is never that simple and depends on the locations of the capacitor banks and the different VAr flows on the distribution feeder.

The whole concept of CVR is based on the premise that voltage reduction will result in energy or demand reduction, which implies that, in the aggregate, the loads on a feeder will respond to voltage reduction with demand reduction. Consider a simple example of two loads on the same distribution feeder. If the voltage is reduced at both loads and the first load decreases but the second load does not change, then the second load draws an increased current at the lower voltage (a constant power load). Therefore, while the objective was to reduce the voltage, some load on the feeder was reduced, but the losses on the feeder may not be minimized. A better understanding of the system load characteristics would help to optimize VVC. In this hypothetical example, instead of indiscriminately reducing the voltage for the entire distribution feeder, a more effective approach may be to reduce the voltage at the first load and increase the voltage at the second load to achieve maximum load and loss reduction. This example shows that minimization of losses does not always occur when the voltage profile along a distribution feeder is flattened. The difference between CVR and VVO is that VVO does not presume that a voltage reduction or increase is the correct solution. VVO determines the correct actions for different parts of the network based on load characteristics and the available measurements and controls. The objectives of VVO must be achieved while maintaining acceptable voltage profiles along the distribution feeder under dynamic operating conditions. Although the differences between ordinary VVC and optimal control are quite obvious, it is not unusual in practice for them to be either unrecognized or ignored. Understanding the differences will enable utilities to select the right technologies consistent with their business objectives and realize the maximum benefits.
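
The load-characteristic point in this example can be made concrete with a simple two-component voltage-dependency model (a reduced form of the common ZIP load model); the fractions and voltage below are illustrative only:

    def load_power_kw(p_nominal_kw, v_pu, z_fraction):
        """Two-component voltage-dependency model: z_fraction of the load
        behaves as constant impedance (P ~ V^2); the rest is constant power."""
        return p_nominal_kw * (z_fraction * v_pu**2 + (1.0 - z_fraction))

    v_pu = 0.97  # a 3% CVR voltage reduction
    print(load_power_kw(100.0, v_pu, z_fraction=1.0))  # 94.09 kW: demand falls
    print(load_power_kw(100.0, v_pu, z_fraction=0.0))  # 100.0 kW: unchanged, so
    # the constant power load draws about 3% more current at the lower voltage.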

3.5.2.8.3  Volt/VAr Optimization Approaches

As discussed earlier, there are many different approaches to the practical implementation of VVC, some requiring communications between field controllers and decentralized substation controllers, or between the local or substation controllers and a central master, while others do not. In VVO, communications between the local controllers and the central controller is a “must.” It is not possible to achieve optimal control among a group of geographically dispersed controllers without communications.

As with VVC, there are several approaches to implementing VVO; however, VVO requires the integration and exchange of data with other utility applications, such as DMS, OMS, and GIS, and the levels and capabilities of optimization will evolve in the smart grid realm.

The model hierarchy of the substation-based decentralized versus centralized VVO approach is shown in Figure 3.110. As with IVVC, the decentralized approach is implemented at the substation level of the distribution system down the feeders associated with each substation, whereas the centralized approach optimizes voltage and VArs among substations, within operating regions and across the entire distribution system.

3.5.2.8.4  Decentralized VVO Approach

The major advantages of the decentralized VVO solution, as with the decentralized IVVC approach discussed earlier, are that it can be deployed incrementally, and one decentralized system’s failure leads to very limited downtime for a small portion of the entire system, resulting in higher reliability for the overall system. This can make it easier for a utility to deploy VVO or distribution automation projects with smaller budgets and multiyear plans. However, as discussed earlier, the operation of breakers or switches in a distribution system will change the feeder configuration, which is a challenge in the decentralized control solution because it has only a partial model of the overall system. A certain level of coordination among the neighboring substations is required for proper operation of decentralized VVO.

Figure 3.110   VVO model hierarchy.

Figure 3.111   Centralized VVO approach.

3.5.2.8.5  Centralized VVO Approach

The centralized VVO approach uses a model of the entire distribution network, while the decentralized VVO approach uses a model of only a portion of the system. The following components should be included in the centralized VVO system (Figure 3.111):

  • Distribution SCADA system for acquiring and processing real-time measurements from the field devices
  • Short-term load estimation (LE)/forecast for look-ahead optimization
  • Unbalanced three-phase load flow function for the distribution network operation validation and optimization
  • Controllable devices (voltage regulators, capacitor banks, as well as other dispatchable energy resources)
  • Sensors (voltage and current transducers)
  • Substation controllers/RTUs
  • Communications infrastructure
  • VVO control algorithms in the control centers

The short-term load forecast/estimation and unbalanced three-phase load flow functions are used in the optimization to identify optimal switching plans and to evaluate how the control strategy will perform with respect to the objective function and its impact on other operating constraints, such as voltage limit and line rating violations. Based on the actual system operating conditions from the SCADA system, the optimal control strategy is updated on a continual basis.

3.5.2.8.6  Model-Based Approach

A recent approach to VVO utilizes a dynamic operating model of the distribution system in conjunction with mathematical optimization and power engineering calculations to optimize the volt/VAr performance of the distribution system within a given operating objective. The source of the dynamic operating model of the distribution system is typically the distribution connectivity model in a distribution organization’s GIS. The model is adjusted in the control room of the utility on a near-real-time basis. This is done either through manual operator action on the model or through an interface to the SCADA system, which transmits changes in the status of system components. The model reflects changes in the status of distribution breakers, switches, reclosers, fuses, jumpers, and line cuts.

With model-based VVO, the “as-operated” state of the system, including near-real-time updates from SCADA and the OMS application, impacts the precise level of voltage control required for distribution companies to implement CVR without violating regulatory-prescribed voltage limits. For CVR, the optimum settings for each transformer LTC, voltage regulator, and capacitor bank depend on the following:

  • The spatial distribution of load throughout the system
  • The phases on which load is connected
  • The connection type for three-phase loads (i.e., delta or wye connected), as well as transformer connections and parameters
  • The voltage-dependent characteristic of loads (constant impedance, constant power, and the percentage mix of the two)
  • Network topology and characteristics

The optimal settings of the controls in model-based VVO are determined by evaluating status and real-time data from equipment. The recent advances in technology required for distribution organizations to implement VVO include GIS-based network models, two-way communications to distribution substations and line equipment, and improvements in computing resources and architectures.

With model-based VVO, an objective function is specified, subject to a set of nonlinear equality and inequality constraints. The constraints consider the thousands of equations and state variables used for unbalanced load flow analysis.

One model-based VVO algorithm is summarized as follows:

Minimize (real power demand and/or real power losses) subject to the following engineering constraints:

  • Power flow equations (multiphase, multisource, unbalanced, meshed system)
  • Voltage constraints (phase to neutral or phase to phase)
  • Current constraints (cables, overhead lines, transformers, neutral, grounding resistance)
  • Tap change constraints (operation ranges)
  • Shunt capacitor change constraints (operation ranges)

Using the optimization control variables:

  • Switchable shunts (ganged or unganged)
  • Controllable taps of transformer/voltage regulators (ganged or unganged)
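
As a toy illustration of this formulation (not the algorithm of any particular product), the sketch below exhaustively searches discrete tap and capacitor states against a crude one-line feeder approximation, minimizing series losses subject to a voltage constraint. All parameter values are assumed, and a production model-based VVO would instead run a full unbalanced, multiphase load flow:

    import itertools

    R, X = 0.03, 0.06             # per-unit feeder impedance (assumed)
    P_LOAD, Q_LOAD = 0.8, 0.4     # per-unit load at the feeder end (assumed)
    CAP_KVAR = 0.3                # per-unit size of the switchable bank
    TAPS = [1.0 + 0.00625 * n for n in range(-16, 17)]  # +/-10% LTC range
    V_MIN, V_MAX = 0.95, 1.05     # service voltage constraints

    def evaluate(tap, cap_on):
        """Approximate end-of-feeder voltage and series I^2R loss."""
        q_net = Q_LOAD - (CAP_KVAR if cap_on else 0.0)
        v_load = tap - (R * P_LOAD + X * q_net) / tap   # one-step drop estimate
        loss = R * (P_LOAD**2 + q_net**2) / v_load**2   # I^2R with I ~ S/V
        return v_load, loss

    best = None
    for tap, cap_on in itertools.product(TAPS, (False, True)):
        v_load, loss = evaluate(tap, cap_on)
        if V_MIN <= v_load <= V_MAX and (best is None or loss < best[0]):
            best = (loss, tap, cap_on)

    loss, tap, cap_on = best
    print(f"tap={tap:.4f}  cap_on={cap_on}  series loss={loss:.4f} pu")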

One significant difference between model-based VVO and other VVC methods is that model-based VVO can use power engineering calculations and analysis as the basis of its solution. These include load allocation and per-phase, unbalanced load flow analysis to compute the electrical state of the network. For many distribution systems, unbalanced load flow is required rather than balanced load flow because of single-phase loads and laterals, unsymmetrical component impedances, various unbalanced transformer models, and single-phase and unganged operation of some voltage regulators and switched capacitors. With this modeling and analysis capability, model-based VVO is also able to model the voltage dependency of customer loads in terms of the percentages of constant impedance and constant power components. This influences the calculation of capacitor switch states, as well as optimal LTC and voltage regulator settings.

The development of model-based VVO, accompanied by GIS modeling and two-way communications on the distribution system, provides distribution organizations with capabilities that were not previously available.

  1. The true optimal state of volt/VAr control equipment is calculated; nonoptimal rules of thumb are not used.
    • Optimization algorithms, such as mixed-integer nonlinear programming, can be employed. Such algorithms ensure that the reduction in power, energy, and customer demand is maximized.
    • In addition to execution at predetermined intervals, model-based VVO can be configured to run based on events such as feeder reconfigurations and changes in feeder loading.
    • A copy of the online model can be used to perform off-line studies, so operations planning can study different scenarios and configurations.
  2. The algorithm uses the “as-operated” network model.
    • The as-operated network model reflects the actual connectivity of feeders, loads, and volt/VAr control devices. This is important due to the very dynamic nature of distribution system switching and the outages that routinely occur. As SCADA and operators make changes to the as-operated network model, the volt/VAr network model is inherently kept up to date without the need for a separate update process.
    • All distribution applications, including power flow, fault location, FDIR, switch orders, and VVO, can use the same model and network view.
  3. For distribution organizations implementing multiple advanced distribution applications that use the same as-operated model, a common platform reduces computing hardware and ongoing maintenance costs.
    • Model-based VVO leverages the investments made in SCADA, OMS, and other DMS applications, minimizing duplication of costs in the computing infrastructure and communications environment.
    • A common distribution network model used for all applications results in no synchronization issues, either in real-time or in incremental updates from the GIS, between different models being maintained in different applications.
  4. A three-phase unbalanced network model is utilized.
    • A detailed, unbalanced three-phase system model is utilized to provide accuracy.
    • Subtransmission and secondary voltage levels can be included in the analysis. VVO can be executed on one feeder, all feeders from a substation, several substations, or an entire system.
    • In addition to radial systems, networked and looped systems can be analyzed.
    • Voltage and thermal limitations on the network components are calculated.
    • Voltage unbalance on the system is included, as are ganged and unganged controls.
  5. Distribution system loads are modeled, since they impact the optimum settings.
    • The voltage-dependent component of customer loads is modeled to represent load variation in real and reactive power as a function of voltage.
    • Load characteristics such as location, size, and type (percentage mix of constant impedance and constant power) impact the optimal LTC and voltage regulator settings, particularly for CVR.
    • Node voltages at load points, as well as all nodes throughout the circuits, are calculated and compared to operating limits before control actions are taken.
    • Customer load profiles can be utilized in the load allocation and load flow applications. This enables a temporal representation of customer loads, which can also impact settings.

3.5.2.8.7  Volt/VAr Optimization and the Smart Grid

VVO will also enable distribution organizations to operate their systems as new complexities are being introduced. These complexities include increased renewable generation located at distribution voltage levels, increases in automated fault location and restoration switching schemes, increased system monitoring and asset management processes, and an electric vehicle charging infrastructure.

Smart grid initiatives are now providing the means to share data among enterprise applications. For example, voltage readings of customer revenue meters from AMI systems can be shared with a centralized VVC master in order to monitor the lowest customer service point voltages and ensure that the voltage profile from the substation to the customer is as uniform as possible. All the components, except for the VVO control algorithms, are usually available in modern distribution system control centers. VVO control algorithms have been undergoing rapid advancement in recent years and are at the stage of moving from R&D into feasible field deployments. With the introduction of DER and the deployment of consumer demand management, the integration of VVO with the control and optimization of these resources in the distribution system is becoming a new practical smart grid challenge for the industry. The increasing penetration of renewable generation and energy storage in the coming years with smart grids will introduce both great challenges and opportunities for VVO. Nondispatchable distributed renewable energy resources, such as PV and wind, are intermittent and unpredictable in operation. They also increase the likelihood of overvoltage conditions in the distribution system. This means that the controllable voltage and VAr resources on the distribution system need to be controlled more frequently and more accurately in order to match the stochastic output profile of the renewable energy sources. Energy storage systems, such as battery storage, will affect the power flows on the distribution system. The variable power flow from energy storage will need to be taken into account for VVO, such that VVO becomes volt/VAr and watt optimization (VVWO). Future regulatory and business models will profoundly affect the way DERs will be owned and operated. Considerable uncertainty and R&D, however, still remain in this area.

VVO will also provide flexibility in handling network reconfigurations. For example, a permanent fault in a distribution feeder will result in switching operations and topology changes. The power flow directions will change along with the topology change. VVO can use the updated power flow analysis to determine the optimal VVC actions based on the new configuration of the distribution system. Therefore, centralized VVO in the DMS can take full advantage of the global distribution network and load models, the system configuration, and the full complement of remote measurements from SCADA, as well as data from other applications, such as load forecasting/estimation, AMI, demand response, etc. Smart grid advances in VVO should drive toward the support of more advanced features, such as look-ahead optimization under dynamic operating conditions with planned and unplanned outages and maintenance schedules.

3.5.3  Fault Detection, Isolation, and Service Restoration

Witold P. Bik, Christopher McCarthy, and James Stoupis

Traditionally, electric utilities use the trouble call system to detect power outages. Initially, distribution system faults are interrupted and cleared by a fuse, recloser, pulsecloser, or relayed circuit breaker. Once the faults are isolated and customers experience power outages, they call the utility and report the power outage. The distribution system control center then dispatches a maintenance crew to the field. The crew first investigates the fault location and then implements the switching scheme(s) to conduct fault isolation and power restoration. This procedure for power restoration may take several hours to complete, depending on how quickly customers report the power outage and how quickly the maintenance crew can locate the fault point and conduct the power restoration [4]. Thus, one of the main drivers for smart grid is the opportunity to enhance and optimize the reliability of the distribution system, which is being pushed strongly by utility regulatory bodies such as public utility commissions (PUCs).

With the recent push in smart grid, utilities have deployed more feeder switching devices (e.g., reclosers, pulseclosers, circuit breakers, switches) with IEDs for protection and control applications. The automated capabilities of IEDs, such as measurement, monitoring, control, and communications functions, make it practical to implement automated fault detection, isolation, and service restoration (FDIR). As a result, the power outage duration and the system reliability can be improved significantly. The IED data can be transmitted via communications between the IEDs themselves or back to a substation computer or a control center.

In addition to the FDIR application, other reliability-related issues arise in the normal day-to-day operations of a distribution utility, such as the failure of key distribution system assets, power quality issues caused by utility and customer equipment, and protection miscoordination. Equipment monitoring and diagnostics is a key technology that will be significant in smart grid due to its capability to prevent (and potentially to predict) the failure of assets vital to the operation of the distribution system, such as substation transformers and circuit breakers. Power electronics devices are gaining more attention due to their capability to reduce power quality issues. Adaptive protection schemes will also play a key role in modifying the substation and feeder IED protection and control settings in real time for optimal device and system performance during faults.

3.5.3.1  Faults on Distribution Systems

The majority of faults that occur on distribution systems can be linked to a partial or complete failure of electrical insulation. The result is an increase in current, causing much stress on the overhead conductors or underground cables along the feeder. Of the faults that occur on medium-voltage overhead networks of utilities across the world, approximately 80% of the faults are transient (temporary), and 80% of the faults involve only one phase to ground [5].

Most distribution feeders today are radial in nature, meaning that power flows in a hierarchical fashion, from the distribution substations out to the loads. Most feeders have a three-phase main line, which forms the backbone of the power delivery system. It is typically an overhead line, which allows for clearing of temporary faults, as well as easier permanent fault location and repair. Single-phase and three-phase laterals, both overhead and underground, are fed from the main line and typically protected by fuses for fault isolation. Single-phase and three-phase sectionalizers and reclosers for overhead circuits, as well as underground fault interrupters, could also be used on the laterals where heavy loads are connected. A fault on the main line causes the substation circuit breaker to operate to isolate the fault. If automatic reclosers or sectionalizers are also used on the main line, and a fault occurs downstream of one of those devices, then the effect of the fault can be isolated to the downstream portion of the feeder only.

Due to the radial nature of most distribution systems today, no backup source is available, so customers are susceptible to a power outage even when the fault occurs several miles away. A basic grid is formed when the capability to transfer loads to adjacent circuits is added. A smart distribution grid emerges when the switching points between circuits, as well as several points along each circuit, have the intelligence to reconfigure the circuits automatically, either directly themselves or when receiving control commands from a substation computer or control center, when an outage occurs. More intelligent switching points yield more options to reroute power to serve the load, and communication between or to those points makes self-healing a practical reality.

Thus, in the future, it is envisioned that the distribution system will be more meshed than radial, especially when considering the connection of distributed generation and energy storage systems. This means that multiple sources will be connected to the same load. This reality makes fault management and maintaining reliability great technical challenges. Advanced protection and FDIR schemes, along with advanced sensing and high-speed communications, will be required to quickly isolate a distribution fault and restore unaffected customers.

3.5.3.2  Drivers, Objectives, and Benefits of FDIR

The major drivers for FDIR and other similar initiatives are improved reliability, enhanced system operation, and improved system efficiency. These factors all contribute to restoring unaffected customers faster after a disturbance, reducing the number of affected customers significantly, thus increasing customer satisfaction. The improved reliability is tied directly to utility reliability metrics. In most cases, utilities are under pressure from regulatory bodies, such as PUCs, to improve reliability, and the reliability metrics are used to determine distribution circuit performance.

The main reliability indices that are predominantly used throughout the world are the following IEEE Standard 1366 metrics [6]:

  • SAIDI = Sum of all customer interruption durations/Total number of customers served
  • SAIFI = Total number of customer interruptions/Total number of customers served
  • CAIDI = SAIDI/SAIFI = Sum of all customer interruption durations/Total number of customer interruptions
  • Momentary average interruption frequency index (MAIFI) = Total number of customer momentary interruptions/Total number of customers served
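
These indices follow directly from outage records; the short sketch below, with illustrative data only, shows the arithmetic:

    CUSTOMERS_SERVED = 50_000

    # (customers interrupted, duration in minutes) per sustained interruption
    sustained = [(1200, 90), (300, 45), (4500, 120), (80, 30)]
    momentary = [1200, 4500, 4500]   # customers per momentary interruption

    customer_minutes = sum(n * d for n, d in sustained)
    customer_interruptions = sum(n for n, _ in sustained)

    saidi = customer_minutes / CUSTOMERS_SERVED        # minutes per customer
    saifi = customer_interruptions / CUSTOMERS_SERVED  # interruptions per customer
    caidi = saidi / saifi                              # minutes per interruption
    maifi = sum(momentary) / CUSTOMERS_SERVED

    print(f"SAIDI={saidi:.2f} min  SAIFI={saifi:.3f}  "
          f"CAIDI={caidi:.1f} min  MAIFI={maifi:.3f}")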

The majority of the world uses the IEEE reliability indices as is or calculated in a similar way but with a different name. Some other metrics used by utilities worldwide include average cost per outage, energy not supplied, customer minutes of interruption (CMI), and average interruption time. It should be noted that each utility may define its own metrics to assess reliability; that is, the IEEE indices are the closest thing to a standard on reliability but by no means the only metrics used.

The major benefit related to smart grid applications is the improvement in the reliability metrics, which ultimately results in an improvement in customer service. The need to meet goals related to the reliability metrics has motivated many utilities to install more automated switching devices out in the distribution system, which has reduced the duration of the outages due to the faster response time for the isolation of the fault and the restoration of the unaffected customers. With the installation of these devices as part of automated FDIR schemes, most utilities have established a target restoration time for fault disturbances, such as 1 or 5 min. In the future, the proliferation of more automated switching devices with communications and control capabilities will lead to even faster restoration times, leading to an even higher level of reliability. The automation of feeder switching devices also benefits other distribution automation applications. For example, the deployment of multifeeder reconfiguration or load balancing schemes enables a utility to transfer load from one feeder/substation transformer to another feeder/substation transformer, in the cases of transformer failure or peak loading conditions.

Other methods that can be used to improve reliability include more frequent tree trimming programs, the deployment of faulted circuit indicators, and the deployment of reclosers and sectionalizers instead of fuses, as well as fuse-saving schemes that interrupt customers only for permanent faults [3]. Also, circuit topology and load density of distribution feeders have a large effect on the frequency of faults and, thus, the duration of outages. Longer circuits typically lead to more interruptions. Shorter circuits, especially urban networks that form a meshed network, have been found to be more reliable. Also, utilities with higher load densities tend to have better SAIFI indices [6]. These factors are a key issue for future utility reliability, because as city areas and suburbs expand, the circuits will get longer and, thus, less reliable.

3.5.3.3  FDIR Equipment

3.5.3.3.1  Substation Circuit Breaker

A circuit breaker is a switching device typically located in the substation that can make, carry, and break currents under normal and short-circuit conditions [5] by opening and closing contacts. Most distribution feeders have a substation circuit breaker as the most upstream protective device, feeding the medium-voltage conductor wires that leave the substation. For FDIR schemes, each substation circuit breaker has a protection and control relay/IED that can communicate with the local substation automation devices, transmitting data and receiving control commands. Many circuit breakers today contain vacuum interrupters with magnetic actuators to operate a drive shaft, with encapsulated poles for protection from the weather and external elements. Figure 3.112 shows an example of a substation outdoor circuit breaker.

Figure 3.112   Outdoor substation circuit breaker. (© Copyright 2012 ABB. All rights reserved.)

3.5.3.3.2  Manual Switch

A switch is a switching device that can make, carry, and break currents under normal conditions (not short-circuit conditions) by opening and closing contacts. Manual switches are typically located out on the distribution feeders, although some manual disconnect switches are deployed by utilities in distribution substations. Manual switches are typically not used in automated FDIR schemes, only in manual FDIR schemes in which maintenance crews are dispatched to perform the switching. Most of the switches today contain air blade-type contacts that open and close in such a way as to give visual indication to the maintenance crews.

3.5.3.3.3  Remotely Operable Load-Break Switch

A remotely operable load-break switch is a switching device typically located outside the substation that can make, carry, and break currents under normal conditions (not short-circuit conditions) by opening and closing contacts. Remotely operable switches allow for the utility operations department to operate the switches via communications from the control room or the substation, typically through a SCADA or a substation computer interface. For FDIR schemes, each remotely operable switch has a control IED that can communicate with the control room SCADA system in a control center-based FDIR scheme or a substation computer in a field-based FDIR scheme, transmitting data and receiving control commands. Most remotely operable load-break switches today contain either air blade-type contacts or vacuum interrupters with magnetic actuators as the operating mechanism. Figure 3.113 shows an example of an overhead load-break switch.

Figure 3.113   Overhead load-break switch. (© Copyright 2012 S&C Electric Company, Chicago, IL. All rights reserved.)

3.5.3.3.4  Automatic Sectionalizer

Automatic sectionalizers are essentially manual or remotely operable load-break switches located out on the feeder with added intelligence in their IEDs. The added intelligence allows a local control decision to be made based on local voltage and current measurements. The sectionalizer IED counts the number of overcurrent events and/or voltage drops below a threshold when a fault on the connected feeder occurs. When the sectionalizer reaches its preconfigured count, it opens during the dead time of an upstream circuit breaker in the substation or a recloser outside the substation. Sectionalizers can be incorporated as part of FDIR schemes, which would typically allow local control decisions to be made for the fault isolation, with subsequent restoration decisions made by a master device or peer as part of a multipoint communication scheme. It should be noted that in some cases, single-phase automatic sectionalizers are used on laterals to isolate a single-phase fault that may occur, so that the rest of the distribution feeder can remain energized.
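
The counting behavior can be sketched as follows, with an assumed count of three and invented names; the essential point is that the sectionalizer opens only during the upstream device's open interval, since it cannot interrupt fault current itself:

    class SectionalizerLogic:
        """Count fault events; open only while the line is dead."""

        def __init__(self, count_to_open=3):
            self.count_to_open = count_to_open
            self.count = 0

        def on_fault_event(self):
            # Called when local measurements show fault-current passage
            # and/or the voltage collapsing below a threshold.
            self.count += 1

        def should_open(self, line_is_dead):
            # The sectionalizer cannot interrupt fault current, so it opens
            # only during the upstream device's dead (open) interval.
            return line_is_dead and self.count >= self.count_to_open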

3.5.3.3.5  Automatic Recloser

The automatic recloser is similar to the automatic sectionalizer, except that it can make, carry, and break currents under normal and short-circuit conditions. Thus, instead of counting the number of overcurrent events occurring downstream on the feeder, the recloser will actually trip for those events. Like sectionalizers, automatic reclosers are also placed outside the substation out on the feeder. In some cases, single-phase automatic reclosers are used on laterals to isolate a single-phase fault that may occur, so that the rest of the distribution feeder can remain energized.

A recloser protection and control IED is typically coordinated with the upstream substation circuit breaker relay and downstream recloser IEDs and fuses, for coordination during fault disturbances. During the reclosing sequence, one or more fast trips are typically used in an attempt to clear a temporary fault, followed by slower trips if the fault is permanent. Like sectionalizers, automatic reclosers can be incorporated as part of FDIR schemes, which would typically allow local control decisions to be made for the fault isolation, with subsequent restoration decisions made by a master device or peer as part of a multipoint communication scheme.

Figure 3.114   Pole-top recloser. (© Copyright 2012 ABB. All rights reserved.)

Many reclosers today contain vacuum interrupters with magnetic actuators to operate a drive shaft, with encapsulated poles for protection from the weather and external elements. Figure 3.114 shows an example of an outdoor overhead recloser.

3.5.3.3.6  Source Transfer Gear

Source transfer equipment typically consists of pad-mounted switchgear connected to multiple feeders and to the connected loads. Voltage sensors are used on each feeder to determine the presence of voltage on both the primary and secondary sources. If the voltage on the primary source feeder drops below a predetermined threshold and the voltage on the secondary source feeder remains above a predetermined threshold, then the three-phase switch connected to the primary source is first opened and subsequently the three-phase switch connected to the secondary source is closed. In an industrial park configuration, the source transfer switchgear is wired as part of a multiloop system, where one feeder acts as the primary source for a first set of loads and the secondary source for a second set of loads, and a second feeder is the primary source for the second set of loads and a secondary source for the first set of loads. If one feeder loses voltage upstream, then all the loads are switched to the healthy feeder. Figure 3.115 shows an example of a piece of source transfer switchgear.
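
A hedged sketch of this transfer rule, with assumed per-unit voltage thresholds; a real control would also check synchronism, lockout, and transfer-inhibit conditions:

    V_LOSS_PU = 0.80     # primary considered lost below this voltage
    V_HEALTHY_PU = 0.90  # alternate source must be at least this healthy

    def transfer_actions(v_primary_pu, v_secondary_pu, on_primary):
        """Return the ordered switch actions, if any, for this scan."""
        if on_primary and v_primary_pu < V_LOSS_PU and v_secondary_pu >= V_HEALTHY_PU:
            # Open-transition transfer: break from the dead source first.
            return ["open_primary_switch", "close_secondary_switch"]
        if not on_primary and v_primary_pu >= V_HEALTHY_PU:
            # Primary restored: return the load to its normal source.
            return ["open_secondary_switch", "close_primary_switch"]
        return []

    print(transfer_actions(0.02, 0.98, on_primary=True))
    # -> ['open_primary_switch', 'close_secondary_switch']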

3.5.3.3.7  Sensors

Sensor devices come in many forms and perform various functions. Sensors used for the FDIR application typically take the form of clamp-on fault current indicators (FCIs); however, data from other types of sensors (e.g., line post sensors, temperature sensors) could also be applied. All FCIs measure current, and some also measure voltage, which can be useful for other distribution applications. Most FCIs have a visual indication that fault current has passed through the device or have a wireless signal for short-range communications, so that the utility maintenance crew can receive the data via drive-by. More recently, sensor companies are transmitting these data over long-range communications to a local data collector or even directly back to the substation or control center, for use in the FDIR application and outage management.

Figure 3.115   Source transfer switchgear. (© Copyright 2012 S&C Electric Company, Chicago, IL. All rights reserved.)

3.5.3.3.8  New Technology

3.5.3.3.8.1  Single-Phase Dropout Recloser 

This device is a single-phase cutout-mounted fault interrupter with a two-operation sequence—one timed overcurrent trip on a fast TCC curve and then one on a delayed TCC curve. Utilities will often implement a “fuse-blowing” scheme that coordinates the substation breaker with the lateral fuse so that the fuse, not the breaker, will clear any downstream fault within its rating. Most feeder customers experience no power interruption, but the lateral customers get a prolonged outage—a bad result if the fault is temporary. A “fuse-saving” scheme has the first trip of the substation breaker intentionally set to operate faster than the fuse to clear a temporary fault downstream of the fuse. Often, the fault will be cleared during the open time interval, so that when the breaker closes back in, service is automatically restored and there is no prolonged customer outage. The second breaker trip is slower, so that if the fault is permanent, the lateral fuse will operate to clear the fault and isolate that section. A downside is that all feeder customers experience a momentary interruption for a lateral fault when the breaker trips before the fuse, regardless of whether the fault is temporary or permanent. Overall, this can have a negative impact on customer satisfaction, since the vast majority of faults occur on taps off the main lines. A single-phase dropout recloser, used instead of a fuse, provides the best of both scenarios. The fast trip prevents a permanent outage for a temporary fault, and the substation breaker is spared from any trips even if the fault is permanent.

3.5.3.3.8.2  High-Performance Fault Testing 

After a conventional recloser or relayed circuit breaker opens to interrupt a fault, it typically recloses into the fault several times to determine if the fault is still present. Pulseclosing is a new technology for overhead distribution system protection that tests fault persistence without creating high-current surges that cause feeder stress. The pulsecloser device very rapidly closes and reopens its contacts at a precise point on the waveform to send a very short low-current pulse down the line then analyzes the pulse to determine the next course of action. If the pulse indicates a persistent fault, the pulsecloser will keep the contacts open, wait a user-configurable interval, and pulse again. This process can repeat several times until the pulsecloser determines that the line is no longer faulted. It then closes to restore service. However, if the fault persists for the duration of the test sequence, the pulsecloser will lock out to isolate the faulted section (Figure 3.116).
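
The test sequence amounts to a small loop; in the sketch below, test_pulse() is a placeholder for the device's point-on-wave pulse and pulse-current analysis, returning True while the fault persists:

    import time

    def pulseclose_sequence(test_pulse, close_breaker, lockout,
                            max_tests=4, wait_s=5.0):
        """Pulse, analyze, wait, repeat; close on a clear line, lock out
        if the fault survives the whole test sequence."""
        for _ in range(max_tests):
            if not test_pulse():   # short low-current pulse finds no fault
                close_breaker()    # line is clear: restore service
                return "restored"
            time.sleep(wait_s)     # user-configurable interval between pulses
        lockout()                  # fault persisted: isolate the section
        return "locked_out"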

Figure 3.116   Pulsecloser. (© Copyright 2012 S&C Electric Company, Chicago, IL. All rights reserved.)

Figure 3.117   Conventional reclosing in response to a permanent fault. (© Copyright 2012 S&C Electric Company, Chicago, IL. All rights reserved.)

Figure 3.117 shows a typical current waveform pattern that would result from a conventional recloser or relayed circuit breaker operating in response to a permanent single-phase-to-ground fault. The random point-on-wave closing often results in asymmetric fault current, significantly increasing peak energy into the fault. When the pulsecloser clears a fault, however, it tests for continued presence of the fault using pulseclosing technology, closing at a precise point on the voltage wave. Figure 3.118 shows how a pulsecloser would respond to the same permanent fault. Note that both the positive and negative polarities are tested to verify that high currents are due to faults, and not transformer inrush currents.

Figure 3.118   Pulseclosing in response to a permanent fault. (© Copyright 2012 S&C Electric Company, Chicago, IL. All rights reserved.)

3.5.3.4  FDIR Implementation

FDIR can be implemented in several ways, and each approach has its own strengths and weaknesses. The most appropriate approach will depend on the utility's analysis of its existing infrastructure, its reliability optimization goals, and the speed of response to fault conditions that it requires.

3.5.3.4.1  Field-Based FDIR Schemes

3.5.3.4.1.1  Substation Breaker with Fault-Detecting Switches 

Basic feeder protection is implemented with a substation breaker that coordinates with fuses on the feeder. When customers report an outage, a crew is dispatched and uses the customer outage reports to find the blown fuses. The fault is isolated when the crew opens a manual switch. Faults can be located more quickly if fault-passage detectors with a visible indicator have been installed on each phase at the switches. FDIR response times can be further improved when switches report fault current to SCADA and can be opened remotely by a SCADA command; service can then be restored to all customers on the substation side of the switch isolating the fault. The problem with this approach is that during breaker operation, the whole feeder is subjected to momentary outages, and if the fault is permanent, the entire feeder will be locked out until the fault is located and manual switching performed. The restoration time in this case can take hours.

3.5.3.4.1.2  Substation Breaker with Midpoint Recloser 

About half of the customers on a feeder can be spared an outage when a recloser is installed at the midpoint of a radial feeder. Like the substation breaker, a recloser can interrupt fault current. The zone of protection is expanded because a midpoint recloser will sense current for a fault near the end of a feeder more accurately. If a fault occurs on the load side of the recloser, it is coordinated to open and isolate the fault before the substation breaker operates. So customers on the substation side of the recloser will not experience loss of power. The advantage of this scheme is that it can limit the extent of outage by effectively splitting the feeder into two sections. However, upstream customers are still subjected to voltage dips caused by reclosing, which is measured by the system average RMS variation frequency index (SARFI). In the case of permanent faults, SAIDI can be improved by 50%.

3.5.3.4.1.3  Substation Breaker with Automatic Sectionalizers 

Automatic sectionalizers sense the passage of fault current and then open on a predetermined loss-of-power count (the number of substation breaker or recloser operations), which coordinates the operation of multiple sectionalizers. Because only a few reclosers can be coordinated in series, more automatic sectionalizers can be installed on the feeder, and more customers will avoid a permanent outage whenever a fault occurs near the end of a radial feeder. The advantage of this approach is that better segmentation can be achieved and the impact of the outage reduced within seconds. On the negative side, the entire feeder is subject to momentary outages during fault testing.

3.5.3.4.1.4  Fault Hunting Loop Schemes with Reclosers (No Communications) 

A loop scheme can automatically restore service with a normally open tie to a nearby feeder. Initial sectionalization occurs by the responses of various coordinated overcurrent protective devices to a feeder fault. Reconfiguration to restore power to the unfaulted feeder sections occurs by using a combination of timers and fault interruption, and no communication is required. The feeder is returned to normal configuration manually by opening the tie device and closing the midline devices. Figures 3.119 and 3.120 show the circuit topology for a three-recloser loop and a five-recloser loop, each with a normally open recloser at the tie point. Simple reliability calculations for loop systems assume a constant fault incidence rate in all feeder segments, equal segment lengths, even customer distribution, and a constant restoration time throughout the system. The benefit of a three-device loop system over two radial feeders is a 50% reduction in SAIFI and SAIDI compared to a breaker only. Expanding to a five-device loop improves the reliability indices even further.
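
Under those stated assumptions, the arithmetic behind the 50% figure can be written out directly; the fault rate and repair time below are illustrative:

    FAULTS_PER_SEGMENT_YR = 0.5   # assumed uniform fault incidence rate
    REPAIR_HOURS = 4.0            # assumed constant restoration time

    def breaker_only(n_segments):
        # Any segment fault interrupts every customer for the repair time.
        saifi = n_segments * FAULTS_PER_SEGMENT_YR
        return saifi, saifi * REPAIR_HOURS   # (SAIFI, SAIDI in hours)

    def three_device_loop(n_segments):
        # Midline devices plus the tie confine each sustained outage to one
        # segment, so on average only 1/n of the customers are affected.
        saifi = n_segments * FAULTS_PER_SEGMENT_YR / n_segments
        return saifi, saifi * REPAIR_HOURS

    print(breaker_only(2))       # (1.0, 4.0)
    print(three_device_loop(2))  # (0.5, 2.0): the 50% reduction cited above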

Conventional loop schemes use loss-of-voltage timers to set the order of device operations. Closing into a fault is the only way to know if the fault is still present. The first reconfiguration action to occur, after the fault has been interrupted and isolated by the breaker and/or the midline reclosers, is when the normally open tie recloser closes based on expiration of a timer that initiates upon loss of voltage on either side. A three-recloser loop system with equal segment lengths has a one-in-two chance that the tie recloser will sense loss of voltage due to a fault in the adjacent line section. When the tie recloser closes, fault current flows through the entire previously unfaulted feeder until the recloser times out on its TCC curve and locks open. For a five-recloser loop system, the faulted section is found when the tie recloser closes into the fault or when the next midline recloser subsequently closes into the fault. A loop scheme application is commonly limited to the use of two specific sources, and each source must have the capacity to supply the combined load of both feeders.

Figure 3.119   Three-recloser loop scheme.

Figure 3.120   Five-recloser loop scheme.

3.5.3.4.1.5  Loop Restoration with Pulseclosers 

A pulseclosing device used at the tie point, in an otherwise conventional recloser loop scheme, will use a pulseclose to test for faults before closing in. This avoids putting a fault on the otherwise unfaulted feeder. Pulseclosing benefits are compounded when multiple pulseclosing devices are used in series, since pulseclosing devices will properly sectionalize a system without using TCC coordination. A loop system with automatic noncommunicating restoration can be expanded to include any number of pulseclosing devices to provide desired segmentation and improve reliability for critical customers or a problem area. Furthermore, the entire restoration process can be completed without ever reintroducing the fault to either feeder.

3.5.3.4.1.6  Distributed Systems with Peer-to-Peer Communications 

The optimal self-healing system combines decentralized, fast-acting local response with a centralized system for oversight. Local clusters of automated feeders function independently of the central control to isolate problem areas and minimize disruptions quickly. The feeders may be in a reconfigured state for several hours until the crews locate and repair the fault, so the distribution operators may want to shift load, switch capacitor banks, or modify voltage regulation to optimize efficiency. Peer-to-peer communications is used as the basis for such distributed restoration systems.

Distributed logic is also capable of handling multiple events, looking for alternate sources to restore unfaulted sections that are without service. This is especially useful during strong storms that sweep across a service territory and cause multiple outages. Reconfigurations can occur simultaneously at more than one location, and by accounting for real-time loading, the scheme ensures that a circuit will not pick up more line segments than it can handle. It is a great advantage to have the distribution system automatically perform the best restoration possible, quickly and efficiently, and report the final reconfigured state to the dispatchers. A smart restoration system will minimize the required excess source capacity, because it dynamically monitors load and can pick up more load with existing resources.

Figure 3.121 is an example of how a four-source, 12-switch deployment with distributed logic is divided into teams. All the switching points that bound a given line segment form a team. Switching points can be load-break switches, reclosers, pulseclosers, or breakers. The controls for each switching point communicate directly with all other controls in the team—which is why it is called peer-to-peer communications. Each team can share information with adjacent or remote teams, facilitating the deployment of large-area distributed logic functions such as protection and service restoration. Applications are scalable since additional teams of switching points can be added as necessary.
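
The team structure lends itself to a simple representation: each line segment maps to the set of switching-point controls that bound it, and a device's peers are its teammates across all the teams it belongs to. The identifiers below are invented for illustration:

    teams = {
        "team_A": {"segment": "seg_1",
                   "members": ["breaker_1", "recloser_12"]},
        "team_B": {"segment": "seg_2",
                   "members": ["recloser_12", "recloser_13", "tie_switch_9"]},
    }

    def peers_of(device):
        """Every control a device talks to directly: its teammates."""
        peers = set()
        for team in teams.values():
            if device in team["members"]:
                peers.update(team["members"])
        peers.discard(device)
        return sorted(peers)

    print(peers_of("recloser_12"))
    # -> ['breaker_1', 'recloser_13', 'tie_switch_9']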

3.5.3.4.1.7  Substation Computer-Based Schemes 

Some utilities prefer to deploy substation computer-based FDIR schemes, especially when they have a mix of new and legacy control IEDs at the switching points outside the substation. In this case, it is simpler to add a substation computer with its own logic, add communications to the feeder switching devices, and retrofit the older legacy control IEDs at the feeder switching points. The substation computer typically has a simple connectivity model of the connected feeders with automated switching devices and receives data from the feeder and substation IEDs to make intelligent switching decisions after a fault has been isolated upstream. After a recloser or the substation circuit breaker has operated to isolate the fault, the substation computer processes the IED data, determines the downstream isolation switching device that must be opened, and then determines the normally open switch that must be closed to restore power to unaffected customers.

Figure 3.121   Example of distributed logic with teams of switches. (© Copyright 2012 S&C Electric Company, Chicago, IL. All rights reserved.)

The restoration algorithms of substation computers vary, but typically a capacity check is at least performed to ensure that alternate feeders can pick up the excess load. Some substation computers are also capable of supporting multisource multibackfeed restoration when more than two alternate sources, and thus at least two normally open switching devices, make up the distribution system. Figure 3.122 shows an example of this type of system. These devices also typically support restoration when multiple faults occur in the system, return-to-normal switching after the fault, and can be disabled by a single virtual “button” in the graphical user interface.
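
A minimal sketch of the capacity check at the heart of this restoration step, with invented data structures; the real logic works from the substation computer's connectivity model and measured loading:

    def choose_restoration_tie(stranded_load_kw, tie_options):
        """tie_options: (tie switch id, alternate feeder headroom in kW) pairs.
        Return the tie to close, or None if no feeder can take the load."""
        feasible = [(tie, headroom) for tie, headroom in tie_options
                    if headroom >= stranded_load_kw]
        if not feasible:
            return None   # leave load out rather than overload a feeder
        # Prefer the alternate feeder with the most spare capacity.
        return max(feasible, key=lambda pair: pair[1])[0]

    print(choose_restoration_tie(1800, [("tie_7", 1500), ("tie_9", 2400)]))
    # -> 'tie_9'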

3.5.3.4.1.8  Source Transfer Applications 

Source transfer makes two different sources available to supply a specific customer load. If one source is lost, the customer is switched to the alternate source. When power is restored, the customer load is returned to its primary source. Sophisticated controls monitor source quality and availability and automatically switch to the best or only source.

3.5.3.4.1.9  New Technologies: Large-Scale Energy Storage

High-power batteries, efficient inverters, and sophisticated switching make energy storage a practical new alternate source. There is a small but growing number of installations of 1–4 MW energy storage systems on utility systems using sodium-sulfur (NaS) batteries, among other technologies. The intelligence in the control system charges the batteries during off-peak times and then supplies energy during peak times. This creates several opportunities for economic justification, such as the ability to make full use of intermittent renewable sources regardless of the time of day or present loading, the ability to shave peak load, and the deferral of substation capacity upgrades. In another case, energy storage can vastly improve electric service reliability for a town located a long distance from the substation and supplied by only one feeder. The stored energy can power the town as an islanded network for several hours while the feeder is out of service.

Figure 3.122   (See color insert.) Example of multi-backfeed restoration: (a) before fault and (b) after fault. (© Copyright 2012 ABB. All rights reserved.)

3.5.3.4.1.10  Closed-Loop Fault Clearing Systems

A high-speed fault clearing system can use two parallel redundant circuits to protect an important customer area from outages. Customers can be supplied from either circuit, and directional overcurrent protection relays with fiber-optic communication are used to keep response time fast. Faults can be cleared in 3–6 cycles, with power immediately restored to unfaulted sections, and only faulted sections will have a power interruption. With a closed-loop circuit configuration, load can be supplied while a fault is being cleared and only a few customers will experience a voltage dip. These systems offer fault clearing with no outage when deployed on URD or circuits with no load connected between switches.

3.5.3.4.1.11  Deployment Considerations 

Because field-based FDIR schemes vary significantly depending on which scheme is used, there is a wide range of deployment aspects that must be considered, including the use or nonuse of communications devices and fault-passage indicators and the deployment of automated reclosers and sectionalizers. These devices help to determine the fault location more quickly, thus enhancing reliability. By automating the field switching devices with communications and intelligent control devices, the reliability indices will be improved significantly compared with the noncommunicating, nonautomated schemes previously described. Cost is the major issue with the deployment of these devices. Each utility typically assigns a savings per customer for each minute of outage reduction. Thus, there is a trade-off between the cost to deploy the automation devices and the observed savings, and the utility will see a return on investment only once the savings exceed the deployment costs. Hence, the utility must carefully analyze the number of automated devices deployed and would be wise to deploy these devices in stages in order to ascertain the reliability improvement and cost savings as the level of automation increases across its system.

In many states in the United States, the public utility commissions (PUCs) encourage utilities to report action plans for improving their worst-performing feeders. In this fashion, reliability is considered in utility rate case decisions, providing a financial incentive for reliability improvement.

Some countries, particularly in Europe, have large monetary incentives in place that motivate utilities to improve service reliability. The incentives often take the form of financial penalties to the utility for poor performance and capital reimbursement funds for improved results. The external influence of positive or negative cash flow based on service reliability provides a more quantitative environment for cost/benefit calculations.

The system topology must also be considered when determining the type of field-based FDIR scheme that is deployed. For basic radial systems, simpler schemes can be deployed, such as the midpoint recloser. For the more complex multibackfeed (meshed) systems, more automated devices are required, driving the deployment cost up, but the level of reliability improvement and cost savings will also increase due to the available alternate backfeed sources. For these systems, the more advanced distributed logic solutions based on peer-to-peer communication or substation computer-based schemes should be deployed.

3.5.3.4.2  Control Center-Based FDIR Schemes

3.5.3.4.2.1  Manual Switching Using SCADA/DMS and Remotely Controlled Switching Devices 

Historically, when customers lose power, they call a utility’s automated answering system, which enters the outage data into the utility’s OMS, which in many cases is part of the DMS. Outage data are then displayed on the operator interface (see Figure 3.123). As more phone calls are answered, the OMS tries to determine the cause of the outage, for example, if a switching device or fuse in the field operated to clear a fault or if a transformer or other component failed. The operator then uses the interface to coordinate isolation and restoration of the feeder by dispatching crews to conduct the switching operations.

If the utility has automated switching devices that directly or indirectly communicate with their SCADA, then they can remotely control the feeder switching devices for faster isolation and restoration.

3.5.3.4.2.2  Automatic Switching Using SCADA/DMS and Remotely Controlled Switching Devices 

With an integrated and automated control center-based restoration scheme, the fault detection occurs in the field based on the IED-sensed network events, and the SCADA software is automatically informed, subsequently sending these data to the DMS. When the DMS receives this information, it will run a restoration switching analysis (RSA) with respect to the outage area and generate power restoration schemes or switching plans.

Figure 3.123   DMS-based restoration. (© Copyright 2012 ABB. All rights reserved.)

The RSA is based on the detailed network model and a load flow analysis of that model to ensure that the postrestoration network has no current or voltage violations. In North America and other parts of the world, the RSA must support unbalanced, grounded systems, which are typical because loads are split unevenly among the phases. In Europe and other parts of the world, a balanced load flow analysis is sufficient for the three-phase ungrounded distribution systems in place. The RSA applies network topology analysis techniques that typically support both lightly loaded and heavily loaded network conditions. If the loading of the network is light, a single-path restoration is most likely sufficient; if it is heavy, either a multipath restoration or a multilayered feeder segment restoration must be used. Figure 3.123 shows a control room operator interface with proposed switching actions.

Whether or not a switching plan is sent to the field devices for execution immediately after the RSA is executed is based on the operator’s preferences. The application has three types of restoration control settings: (1) fully automated control mode, where the operator is not involved in the switching plan execution process, that is, the best switching plan is automatically selected; (2) semiautomated mode, where the best switching plan is automatically selected and the operator performs a one-click confirmation-based switching plan execution; and (3) semiautomated supervisory mode, where the operator selects the best switching plan based on information provided as a result of the RSA.
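A minimal sketch of this flow, combining the load flow screening described above with the three control modes, follows. All names and the plan representation are invented for illustration, and the load flow itself is supplied by the caller:

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class FlowResult:
        overloads: bool               # any current violations post-switching
        voltage_violations: bool      # any voltage violations post-switching
        customers_restored: int

    AUTO, SEMI, SUPERVISORY = "auto", "semi", "supervisory"

    def run_rsa(plans: List[str],
                load_flow: Callable[[str], FlowResult],
                mode: str,
                confirm=lambda plan: True,       # operator's one-click approval
                select=lambda plans: plans[0]):  # operator's choice of plan
        """Screen candidate switching plans and act per the control mode."""
        feasible = []
        for plan in plans:
            result = load_flow(plan)  # load flow on the post-switching topology
            if not (result.overloads or result.voltage_violations):
                feasible.append((result.customers_restored, plan))
        if not feasible:
            return None                          # no violation-free plan exists
        feasible.sort(reverse=True)              # most customers restored first
        best = feasible[0][1]
        if mode == AUTO:
            return best                          # executed with no operator
        if mode == SEMI:
            return best if confirm(best) else None
        return select([p for _, p in feasible])  # supervisory: operator picks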

3.5.3.4.2.3  New Technologies 

The AMI and sensor technologies are having a significant impact on control center-based FDIR schemes. These technologies are being used to help enhance the outage analysis process of the DMS, including outage verification, faulted line segment location, and restoration verification. The AMI data are typically retrieved by the DMS via an MDMS, where the data are stored after retrieval from the field. By deploying communicating meters and sensors in the field, the utility operator can pinpoint the general line segment location where the fault occurred in the DMS graphical user interface. These real-time data allow the utility to dispatch field crews for repairs before customers even call the utility trouble call system to report an outage. Once the repair work is completed and power is restored to the affected customers, the meter and sensor data can be used to verify that the power has been restored to the customers.

Because the control center-based FDIR schemes require data via direct or indirect communication to field devices, other field data could be used in the outage/fault management process, as well as in other everyday processes. For example, by obtaining the fault current magnitude from a substation circuit breaker IED after a fault has occurred, the DMS can run an algorithm to estimate potential fault locations. Coupling these results with the AMI data provides a very powerful new method to improve reliability and to significantly reduce customer outage minutes. In the future, it is envisioned that utilities may call affected customers immediately after a fault has occurred, indicating the estimated time to restoration. Another example is the availability at the control center of meter-level power quality data, which can be analyzed to determine the effect of disturbances at the system and customer levels. The key message is that the more data the utility control center has, the more effectively it can run its operations, becoming proactive rather than reactive.

3.5.3.4.2.4  Deployment Considerations 

When deploying control center-based FDIR schemes, the main issues that must be considered are the communications system, supervisory modes, and the integration of AMI and sensor data. For the communications part, some utilities prefer to deploy communications to the feeder devices directly from the control center SCADA system, bypassing substation computer and gateway devices. However, the advantage of getting access to data through these devices is the proximity of the substations to the feeder devices. In some cases, the utility may also prefer to bypass the SCADA system itself, sending field data directly to the DMS, due to the data bottlenecks that sometimes occur with SCADA and the access to nonoperational data that SCADA does not typically collect.

Another issue to be determined during deployment is the setting of the operation mode of the control center-based FDIR scheme. Most utilities prefer to employ a supervisory mode for these FDIR schemes, allowing the operator to have some level of control of the field switching. However, some utilities prefer the automatic mode to expedite the management of the fault, allowing restoration to occur much more quickly.

As discussed previously, there is great value in integrating AMI and sensor data into the FDIR schemes to improve outage/fault management. However, this is a large undertaking that will consume many utility (and vendor) resources to complete. Thus, it may be wise to first integrate these data in sections of the distribution system where reliability issues are the gravest.

3.5.3.5  Reliability Needs in a Smarter Grid

With the future smart grid, there are many challenges that must be addressed to reap the operational benefits. Deploying FDIR schemes alone does not ensure that reliability will be optimized. Coordinating FDIR with other control functions, such as VVC and optimization schemes, demand response programs, DER dispatch, and load balancing, will result in more effective distribution grid management. Coordination with volt/VAr schemes alone will help to increase efficiency and minimize losses after a fault has occurred on the distribution system. Load balancing is another future function that will allow for dynamic reallocation of load to adjacent feeders to ensure reliability during overload conditions.

DER devices and demand response programs will have a big effect on utility operations and devices, as well as the way in which consumers receive and use power. Due to the availability of alternate generation, DER devices will help to enhance reliability, especially when “basic FDIR” results in unaffected customers in specific areas of the grid losing power for a significant period of time after a fault. The proliferation of DER devices on the grid will require better protection algorithms and schemes, due to the two-way flow of power and the fact that they will contribute to faults. Demand response is another resource for grid management, where loads can be shaved during peak periods of operation to reduce overall system demand. By coordinating demand response with FDIR, restoration can be achieved over larger areas of the grid due to the lower demand, again enhancing reliability.

Reliability is becoming much more of an issue in the smart grid as technologies become available that make the deployment of reliability functions such as FDIR and load balancing more realistic. Many PUCs are also pushing for higher reliability from the utilities they oversee. As the level of reliability increases in other industries, especially consumer industries (e.g., smart phones and laptops with "on demand" features), consumers' expectations for the reliability of the electricity supply will only increase. These realities will make reliability optimization of the utility distribution grid a mandatory requirement in the future.

3.5.4  Outage Management

Stuart Borlase, Steven Radice, and Tim Taylor

Modern computer-based OMS, utilizing connectivity models and graphical user interfaces, has been in operation for some time now. OMS typically includes functions such as trouble call handling, outage analysis and prediction, crew management, and reliability reporting.

Connectivity maps of the distribution system assist operators with outage management, including partial restorations and detection of nested outages. Originally, outage management was based on receiving calls from customers and did not include a connectivity model of the system or the connection points of individual customers; outage locations were estimated by manual data recording and the use of paper maps.

With the modern OMS, system connectivity information is typically stored in the GIS. Network data from GIS (and/or other data sources) are imported to the OMS database using a network data interface. This interface extracts data from GIS and performs a data model conversion based on business rules and data model mapping. The interface initially populates the database with all network data, including connectivity information, system components including protection and switching device types and locations, and distribution transformers. This is referred to as the “bulk network data load” or bulk load. The interface can also be periodically run to transfer the subset of data that has changed since the last update. This process is referred to as the “incremental network data update” or simply incremental update. A screen capture of an OMS system model in a large metro area is shown in Figure 3.124.
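A sketch of the bulk load/incremental update pattern appears below. The record schema and field names are invented for illustration; a production interface would apply the full business rules and data model mapping described above:

    import datetime

    def sync_network_model(gis_records, oms_db, last_sync=None):
        """First run: bulk load. Later runs: only records changed since last_sync."""
        changed = (gis_records if last_sync is None
                   else [r for r in gis_records if r["modified"] > last_sync])
        for record in changed:
            oms_db[record["gis_id"]] = {          # data model conversion
                "type": record["feature_class"],  # switch, fuse, transformer, ...
                "upstream": record["parent_id"],  # connectivity for tracing
            }
        return datetime.datetime.utcnow()         # becomes last_sync next run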

Figure 3.124   (See color insert.) OMS system connectivity model in a large metro area. (© Copyright 2012 ABB. All rights reserved.)

The data extracted from the GIS will capture the network data necessary to support OMS and DMS operation. The required equipment type data can be provided from other sources or entered manually. The following tasks are usually performed for the interface specification:

  • Review and determine the data inputs/outputs and data flow, identify application modules, and identify all unique key attributes that will need to be maintained and used by the OMS and DMS applications.
  • Review source data models in order to verify data requirements.
  • Determine mapping of source data objects to OMS/DMS objects.
  • Determine mapping of source data attributes to OMS/DMS attributes.
  • Develop a data mapping spreadsheet including transformation rules as appropriate.

Another key to successful outage prediction with OMS is an accurate representation of customer connectivity on the system. When a customer calls in to report an outage, or an AMI meter sends an outage or restoration notification, the system then has sufficient information to know where the customer is connected on the system. Customer connectivity is typically maintained in either the GIS or CIS. By evaluating report locations and the as-operated topology of the network, an OMS can identify probable outages that may constitute a single customer out, a single transformer out, a protective device operation, or a de-energized source.

Outage engine algorithms use the connectivity model, location of customer calls, and statistical parameters, such as the ratio of affected customers to total customers, the number of distribution transformers with calls, and the number of downstream protective devices with predicted outages, to determine probable outage location. These parameters can be combined in various ways to achieve optimum prediction accuracy for the network. System operators are then able to track outages using dynamic symbols on the geographic maps, such as the one in Figure 3.125, as well as in tabular displays.
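The roll-up logic of such an outage engine can be sketched as follows: calls are grouped by distribution transformer, and the prediction is escalated to the upstream protective device once enough of its downstream transformers report trouble. The 0.5 threshold and the data model are illustrative only; real engines combine more parameters, as noted above:

    from collections import defaultdict

    def predict_outages(calls, customer_to_xfmr, xfmr_to_device,
                        xfmrs_per_device, threshold=0.5):
        """Map trouble calls to a probable outage per protective device."""
        xfmrs_with_calls = {customer_to_xfmr[c] for c in calls}
        by_device = defaultdict(set)
        for x in xfmrs_with_calls:
            by_device[xfmr_to_device[x]].add(x)
        predictions = {}
        for device, xfmrs in by_device.items():
            ratio = len(xfmrs) / xfmrs_per_device[device]
            # Enough downstream transformers reporting: the device itself
            # probably operated; otherwise predict transformer-level outages.
            predictions[device] = "device_out" if ratio >= threshold else "xfmr_out"
        return predictions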

In recent years, OMS has become more automated. Outage prediction—the process of analyzing outage events such as trouble calls, AMI outage notifications, and SCADA-reported status changes—has improved. Interfaces to interactive voice response (IVR) systems permit trouble call entry into an OMS without call-taker interaction and also permit the OMS to provide outage status information to customers and restoration verification callbacks to customers who request them.

OMS systems have also become more integrated with other operational systems such as GISs, customer information systems (CIS), mobile work force management (MWFM)/field force automation (FFA), SCADA, and AMI. Integration of OMS with these systems results in improved workflow efficiency and enhanced customer service.

Today’s OMS is a mission-critical system. At some utilities, it can be utilized simultaneously by hundreds of users. It integrates information about customers, system status, and resources such as crews, providing a platform for operational decision support.

Figure 3.125   Example of outage representation using dynamic symbols in OMS.

Three ways in which outage management is changing in smart grid implementations are the following:

  1. Integration of AMI data in OMSs
  2. The use of advanced DMS applications for supporting outage management
  3. The integration of SCADA with DMSs and OMSs

As distribution organizations have become more interested in increasing asset utilization and reducing operational costs, advanced DMS applications have been developed. These include load allocation and unbalanced load flow analysis; switch order creation, simulation, approval, and execution; overload reduction switching; and capacitor and voltage regulator control.

Two specific examples of advanced applications that reduce customer outage durations are the fault-location application and the RSA, sometimes called the FDIR application.

The fault-location application estimates the location of an electrical fault on the system. This is different from identifying the protective device that opened, which is typically done based on the pattern of customer outage calls or a change in a SCADA status point. The location of the electrical fault is where the short-circuit fault occurred, whether it was a result of vegetation, wildlife, lightning, or another cause.

Finding the location of an electrical fault can be difficult for crews, particularly on long extents of conductor not segmented by protective devices. Fault location tends to be more difficult when troubleshooters or crews are hindered by rough terrain, heavy rain, snow, and darkness. The more time required to locate the fault, the more time customers are without power.

A DMS fault-location algorithm uses the as-operated electric network model, including the circuit connectivity, location of open switches, and lengths and impedances of conductor segments, to estimate fault location. Fault information such as current magnitude, predicted fault type, and faulted phases is obtained by the DMS from IEDs such as relays, recloser controls, or RTUs.
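The core impedance-based estimate can be illustrated on a single uniform feeder. A real DMS walks the as-operated model, handles unbalance, and returns multiple candidate locations; this simplification and all of its numbers are hypothetical:

    def fault_distance_km(v_ll_kv, i_fault_a, z_ohm_per_km):
        """Distance to a bolted three-phase fault on a uniform feeder."""
        v_phase = v_ll_kv * 1000 / 3 ** 0.5   # line-to-neutral voltage, V
        z_apparent = v_phase / i_fault_a      # impedance seen from the source
        return z_apparent / z_ohm_per_km

    # Example: 12.47 kV feeder, 2400 A fault, 0.6 ohm/km line impedance
    print(f"{fault_distance_km(12.47, 2400, 0.6):.1f} km")  # about 5.0 km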

After possible fault locations are calculated within the DMS application, they are geographically presented to the operator on the console’s map display and in tabular displays. If a GIS land base has been included, such as a street overlay, an operator can communicate to the troubleshooter the possible location including nearby streets or intersections. This information helps crews find faults more quickly. As business rules permit, upstream isolation switches can be operated and upstream customers can be reenergized more quickly, resulting in much lower interruption durations.

A second advanced application that improves reliability performance indices is RSA. This application can improve the evaluation of all possible switching actions to isolate a permanent fault and restore customers as quickly as possible.

Upon the occurrence of a permanent fault, the application evaluates all possible switching actions and executes an unbalanced load flow to determine overloaded lines and low-voltage violations if the switching actions were performed. The operator receives a summary of the analysis, including a list of recommended switching actions. Similar to the fault-location application, the functionality uses the DMS model of the system but improves outage management and reduces the CAIDI and SAIDI.

The RSA application is particularly valuable during heavy loading and when the number of potential switching actions is high. Depending on the option selected, the application can execute with the operator in the loop or in a closed-loop manner without operator intervention.

In closed-loop operation, the RSA application transmits control messages to distribution devices using communications networks such as SCADA radio, paging, or potentially AMI infrastructure. Such an automated isolation and restoration process approaches what many call the “self-healing” characteristic of a smart grid.

3.5.5  High-Efficiency Distribution Transformers

V.R. Ramanan

With an ever-growing global population and an ever-increasing demand for energy, sustaining our power-hungry world calls for energy-efficient products and reliable grids. The future smart grid needs to be not only "smart" but also highly efficient and environmentally sustainable.

Although there is a tendency to take them for granted, transformers are key components in the electrical power distribution grid, playing a significant role in its efficiency. Although each unit is highly efficient, the sheer number of distribution transformers produces a large total energy loss, estimated at approximately 2%–3% of total electric energy and representing approximately 25 billion dollars annually in the United States.

Several global and local initiatives testify to the importance of energy saving programs and energy efficiency requirements. Examples of mandates or standards requiring high efficiencies in distribution transformers include the U.S. Department of Energy's mandated National Efficiency Standard, Australia's Hi efficiency 2010, India's 4 and 5 Star programs, China's SH15 standard, and the AkA0 standard in Europe.

Transformers in operation incur two types of losses: no-load loss, P0, occurring in the transformer core which is always present and is constant during normal operation, and load loss, Pk, which occurs in the transformer electrical circuit, including windings and components, and is a function of loading conditions. Since most transformers are rated to handle peak loads which only happen at certain intervals during the day, distribution transformers can remain lightly loaded for significant portions of the day. So, specifying as low a no-load loss as possible reduces energy consumption and goes hand in hand with increased efficiency.
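The split between the two loss types can be made explicit: annual energy loss is the constant no-load loss integrated over the year plus the load loss scaled by the square of the per-unit loading. The values below are hypothetical:

    HOURS_PER_YEAR = 8760

    def annual_loss_kwh(p0_w, pk_w, rms_load_pu):
        """Annual energy loss: constant P0 plus loading-dependent Pk."""
        no_load = p0_w * HOURS_PER_YEAR                  # present all year
        load = pk_w * rms_load_pu ** 2 * HOURS_PER_YEAR  # scales with load^2
        return (no_load + load) / 1000.0

    # A lightly loaded unit: P0 = 700 W, Pk = 10,500 W, 35% rms loading
    print(f"{annual_loss_kwh(700, 10_500, 0.35):,.0f} kWh/yr")  # ~17,400

Note that at 35% loading the constant no-load loss contributes roughly a third of the annual energy loss in this sketch, which is why specifying a low P0 matters so much for lightly loaded distribution transformers.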

More end users now choose low-loss transformers based on criteria other than pure short-term profitability. The prevalent criterion is total ownership cost (TOC), which considers the future operating costs of a unit over its lifetime, discounted to present-day cost and added to its purchase price. In calculating TOC, the losses are accounted for by their financial impact, capitalized over an expected payback period for the transformer:

TOC = Ct + (A × P0) + (B × Pk)

where Ct is the transformer purchase price, and A (the no-load loss factor) and B (the load loss factor) are the assessed financial values (e.g., USD/W) for no-load loss and load loss, respectively.

The optimal selection is the design with the lowest TOC as calculated above. Simply put, the customer/user obtains a practical balance between investment and reward, reflecting the continuous change in global and local business conditions at any time. TOC provides the true economics in evaluating a transformer purchase.
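A worked example makes the formula concrete. The capitalization factors below are the ones quoted later for Figure 3.127 (A = 10 USD/W, B = 2 USD/W); the purchase prices and loss figures are hypothetical:

    def toc_usd(price_usd, p0_w, pk_w, a=10.0, b=2.0):
        """TOC = Ct + A*P0 + B*Pk, per the formula above."""
        return price_usd + a * p0_w + b * pk_w

    rgo = toc_usd(20_000, p0_w=1_100, pk_w=10_500)  # conventional RGO core
    am = toc_usd(23_000, p0_w=330, pk_w=10_500)     # AM core: ~70% lower P0
    print(f"RGO ${rgo:,.0f} vs AM ${am:,.0f}")      # RGO $52,000 vs AM $47,300

Even though the amorphous-core design costs more up front in this sketch, its lower capitalized no-load loss gives it the lower TOC.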

Low-loss, high-efficiency transformers use more costly materials and therefore carry a higher first cost. However, this is compensated by reduced running costs from lower losses. Beyond a certain time, the lower losses yield a net financial saving from reduced energy costs. If higher-loss transformers are replaced with new low-loss transformers, this saving becomes even greater. Furthermore, lower losses result in cost avoidance derived from elimination or deferral of extra generation and transmission capacity additions.

The development of amorphous metal core distribution transformers (AMDT) is an important step in this direction. Amorphous metal (AM) enables a significant reduction in no-load losses of transformers by up to 70%, as compared to conventional grain-oriented silicon steels (RGO). A quick back-of-the-envelope calculation highlights the energy savings potential from the deployment of AMDT. Assuming that about 1% of the installed U.S. generating capacity of 1.4 TW is lost in distribution transformer no-load losses, a 70% reduction of these losses from the use of AM cores suggests a potential annual energy saving of about 85 billion kWh.
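The arithmetic behind that figure is straightforward:

$$0.01 \times 1.4\,\mathrm{TW} = 14\,\mathrm{GW}, \qquad 0.70 \times 14\,\mathrm{GW} = 9.8\,\mathrm{GW}$$
$$9.8\,\mathrm{GW} \times 8760\,\mathrm{h/yr} \approx 85.8 \times 10^{9}\,\mathrm{kWh/yr} \approx 85\ \mathrm{billion\ kWh}$$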

Figure 3.126   Comparison of efficiencies of liquid-immersed and dry-type AMDT with DOE National Efficiency Standards. (© Copyright 2012 ABB. All rights reserved.)

Figure 3.126 compares the efficiencies of liquid-immersed and dry-type AMDT with the mandated minimum efficiency standards from the U.S. DOE across a wide range of transformer ratings. The improved energy efficiencies from AMDT are quite clear.

Figure 3.127 compares the TOC of liquid-immersed 1000 kVA transformers having RGO and AM cores, wherein the loss capitalization factors are as follows: A = 10 USD/W and B = 2 USD/W. The various components comprising the TOC are individually highlighted.

In summary, energy efficiency is the name of the game for electrical power distribution systems in the future. Reduction of losses due to transformers in the grid is an important first step, and developments are under way to specify and design lower loss, higher efficiency transformers. AM core transformers represent the ultralow loss, highest efficiency solutions.

Figure 3.127   TOC comparison of RGO and AM transformers. (© Copyright 2012 ABB. All rights reserved.)

3.6  Communications Systems

Harry Forbes, James P. Hanley, Régis Hourdouillie, Marco C. Janssen, Henry Jones, Art Maria, Mehrdad Mesbah, Rita Mix, Jean-Charles Tournier, Eric Woychik, and Alex Zheng

3.6.1  Communications: A Key Enabler of the Smart Grid

Never before has the electric utility industry experienced a technology revolution as transformative as the smart grid. One of the major aspects of this transformation is the addition of an integrated and pervasive communications network that will touch every part of the grid, from generation through transmission and distribution down to the consumer, and will support the automated intelligent transactions that make the new grid "smart." Many utilities will end up with communications network infrastructures that rival the size and scale of the telecommunications companies.

Several major forces have influenced the development of wireless solutions, resulting in changes to the interfaces among traditional electric industry domains. For the last 100 years, electric utilities have mostly operated in three separate domains: generation, transmission, and distribution. In the twentieth century and before, the separation of these domains enabled major development, allowing electricity to be generated, transmitted, and distributed to hundreds of millions of homes and businesses across the nation. In the last part of the twentieth century, technological and societal forces changed the dynamics of these domains and how they interact. Two new domains have emerged that provide services and allow consumers to interact, not merely as customers, but also as suppliers of electricity, participating directly in electricity supply and demand markets.

During the last 10 years, there has been increasing regulatory and consumer interest in wireless solutions for electrical distribution and automated metering infrastructure in the United States. This movement gathered strength as a result of the northeast blackout of 2003 and, to a lesser extent, the California brownouts of 2000. These events underscored the increasing vulnerabilities in grid reliability and provided regulatory impetus for the passage of the 2005 Energy Policy Act, which required states to study smart metering solutions. In 2006, the North American Electric Reliability Corporation (NERC) defined new rules to protect the bulk electric system, including components of an electronic security perimeter to protect the critical infrastructure of the national electric grid.

Since 2006, there have been several initiatives to define wireless communications architectures and their associated protective measures including funding for Advanced Metering Infrastructure (AMI) security initiatives by Congress in the American Recovery and Reinvestment Act (ARRA), which provided a portion of US$4.5 billion in funding for energy efficiency and reliability initiatives.

While AMI developments have been one of the main drivers of additional communications networks sought by electric utilities in smart grid initiatives, utilities are realizing the potential of an integrated system of technologies and communications solutions in a smart grid architecture—the merging of technology and communications in a smart grid.

3.6.2  Communications Requirements for the Smart Grid

3.6.2.1  AMI Communications

Technological advances in the areas of telecommunications network coverage, speed, throughput, privacy, and security have enabled the implementation of more encompassing and capable AMI systems. AMI networks enable utilities to accomplish meter data collection, customer participation in demand response, and energy efficiency, and they support the evolution of tools and technology that will drive the smart grid future, including integration of electric vehicles and distributed generation. Without the collection of AMI (interval) metering data, it is difficult to determine when customer consumption occurs, what customers do in response to grid management needs, and the value of customer response. Smart meters and related submeters that form the end points in the AMI architecture provide two critical roles. The first is access to more granular interval usage data (e.g., the last 15 min rather than the last 30 days); the second is a durable, bidirectional (two-way) communications link to deliver messages and instructions to the meter. While smart meters offer the potential of substantial benefits to electric utilities and consumers alike, the electric distribution company faces a number of possible deployment challenges. First is the need to establish and manage a communications network that is sufficiently flexible to reach most meters in the service area and adaptable enough to change as customer and business needs change. Second, the deployment must be justified in terms of its cost and must provide for revenue recovery to satisfy both regulators and utility management. Third, customers must be educated about the benefits of smart meters and the related services that will be enabled through AMI. Fourth, the AMI architecture must embed systems and software that fully address cybersecurity and privacy needs.

Communications for smart grid AMI and demand management deployments should include

  • An open-standard architecture to enable interoperability among systems, flexibility in communications choices, and future innovations from third-party technology providers
  • Two-way communication to every meter to enable advanced control capabilities as well as remote device configuration and firmware updates
  • Wireless in-home networking for demand response and load control devices, such as smart thermostats, smart appliances, in-home displays, and load controllers
  • Advanced service switch for remotely connecting, disconnecting, and limiting service
  • Positive outage notification and restoration verification

The purpose of an AMI communications system is to provide electric utilities with a communications network permitting connectivity between grid devices such as electric meters and a head-end system. AMI communications network options are numerous: they can be power line carrier (PLC), satellite, cellular (2G, 3G, or 4G), WiMAX, RF mesh, etc. PLC and cellular technologies (general packet radio service [GPRS]) have been traditionally used in Europe, whereas the United States has generally favored wireless technologies (cellular, RF mesh).

Several OFDM-based PLC technologies are being standardized in Europe (PLC-G3, PRIME). Wireless solutions are becoming more prevalent around the world. New technology standards are emerging, such as 802.15.4g (the PHY specification for smart utility networks in the ISM [industrial, scientific, and medical] radio band) and 802.15.4-2006, the frequency-hopping spread-spectrum MAC. Furthermore, the AMI communications system may require access point, remote, and backhaul radios. The AMI communications system leverages these technologies to provide an industry-standard, reliable, scalable, and secure system for AMI applications.

The choice should be made based on edge device density, network performance, and AMI application requirements. Smart grid communications must support the use of multiple transport technologies seamlessly integrated to cost-effectively deliver the best combination of reliability, security, and functionality.

The scope of communications, beyond current grid interface and control, will expand dramatically with the smart grid. Grid intelligence is closely related to the extent to which the “decision platform” extends beyond current capabilities to more fully integrate grid components with customer premises equipment and response. This integration of communications and grid functionality enables operational visibility into the power delivery process, from the generation of electrical power down to the energy consumer. This visibility can only be achieved through platforms that support the exchange of data across the interfaces in the power system. This requires capabilities to reliably exchange large amounts of information in short periods of time. Reliable, fast, and secure communication is the cornerstone of any modern and smart grid power system.

The smart grid aims to assure secure, reliable, and optimal delivery of electrical power from generation to consumption point, with minimal impact on the environment through

  • Better coordination of the energy supply and demand to reduce the overall need for generation, transmission, and distribution, while minimizing system losses
  • Increased integration of dispersed renewable and distributed energy sources

3.6.2.2  Communications for Smart Grid Operations

Table 3.10 summarizes smart power network attributes and related communications requirements that will enable advanced operations in smart grid deployments. Critical to this is a viable platform for enhanced usage of information and communications technologies. Operational communications are already used extensively in specific parts of the power system (e.g., transmission) but are less developed in others (e.g., distribution and customer interface). Figure 3.128 presents some of the operational communications domains that can be used to further integrate the power delivery system.

3.6.2.2.1  Operational Applications

Protection relays require real-time transfer of electrical measurements, signals, and commands between substations to ensure the protection of the power system and its assets. This is a critical operational communication need in the substation and constitutes a basic building block for the network's "self-healing" capability. Stringent data transmission time requirements dictate the use of telecommunications-grade communications devices and circuits. This requires that the utility either build out a dedicated communications infrastructure or procure advanced communications services. Dedicated communications is typically required between substations for exchanging data between protection relays.

Automation in the "smart" substation at the feeder, bus bar, or substation level exchanges information through different levels of Ethernet LANs. Although automation at this level is generally constrained to the substation perimeter, external communications beyond protection relays is also necessary. Real-time data exchanges beyond the substation are required to incorporate feeder automation and interfaces to distributed energy resources.

Figure 3.128   (See color insert.) Operational communications domains in the electric utility. (© Copyright 2012 Alstom Grid. All rights reserved.)

Table 3.10   Communications Requirements to Support Smart Grid Operations

  • Self-healing. Operational requirement: prompt reaction of the power network to changes through a coordinated automation system and "network-aware" protection schemes for rapid detection of faults and power restoration. Smart applications: protection relays, networked automation, wide area protection. Communications services: low-latency, time-predictable communications channels; time-controlled Ethernet wide area network (WAN).
  • Enhanced visibility and grid control. Operational requirement: enhance visibility of the power flow and the network state across the interconnected, multiactor, competitive market to achieve an increased level of security of supply. Smart applications: EMS/DMS/SCADA, wide area monitoring systems. Communications services: resilient IP connections for SCADA and WAMS; secure inter-control center communications.
  • Enhanced control of the power flow. Operational requirement: enable decoupling of networks and control of the power flow by utilizing power electronic devices in the network (HVDC, FACTS). Smart application: wide area control. Communications services: time-controlled Ethernet local area network (LAN) within the perimeter of the plant.
  • Empower consumers. Operational requirement: incorporate consumer equipment and behavior into the design and operation of the grid; demand response and peak shaving through information exchange with the energy consumer (and potentially the consumer's electrical appliances). Smart applications: smart metering and AMI. Communications services: two-way "real-time" wireless or wireline communication from the service provider to the individual consumers.
  • Resist and survive physical and cyber attacks. Operational requirement: mitigate cybersecurity issues in the power delivery information system covering the control center, substation, and the communications network in between; remotely monitor unmanned installations and assure physical integrity of power utility critical sites. Smart applications: centralized cybersecurity monitoring facilities; video surveillance and access control. Communications services: secure and redundant communications for remote security barriers and intrusion detection systems; communication to video and security access systems.
  • Maintain service and recover from natural disasters. Operational requirement: enable the operational system to resist disasters and be prepared to reestablish operations when they do happen. Smart applications: backup control centers; fast-deployment, mobile control platforms. Communications services: automatic failover communications to backup facilities, sites, and staff dispersed across the network.
  • Accommodate clean power. Operational requirement: secure integration of dispersed power generation, mainly large wind farms, but also "energy-producing consumers" (solar, wind, etc.). Smart applications: remote generator unit monitoring, supervision, and connect/disconnect. Communications services: reliable two-way communications with dispersed generators to automate switching, routing, and storage of energy from alternative sources.
  • Optimize asset usage and life-cycle management. Operational requirement: use power system assets at full efficiency, to their real end of life, without disruption of service due to failure or environmental risks, through remote monitoring and right-on-time corrective action (no unnecessary preventive replacement). Smart application: asset condition monitoring (circuit breakers, transformers, etc.). Communications services: secure IP communication from the monitoring device associated with the power asset to the monitoring application platforms.

Source: © Copyright 2012 Alstom Grid. All rights reserved.

Energy management and control center communications include a number of different applications for the “enhanced visibility and grid control” component of smart grids:

Wide area monitoring, protection, and control (WAMPAC) systems enable accurate visibility of transmission system power flow across multiple interconnected power networks. They rely on GPS-synchronized measurements of the power system bus voltages, line currents, etc., using synchrophasors or phasor measurement units (PMUs) collecting time-tagged measurements every 5–20 ms as defined by the IEEE C37.118 standard. The required bandwidth is in the range of 10–100 kbps per PMU device and a few hundred kbps for phasor data concentrator (PDC) communications; a rough data-rate estimate is sketched after the list below. The communications requirements depend upon the application:

  • Display systems (voltage, phase, power swing, line loading, etc.) and monitoring applications (frequency and voltage stability, power oscillation, line temperature) can tolerate a relatively large time latency (tens of seconds)
  • Applications related to online analysis of network stability have a time constraint similar to SCADA (i.e., a few seconds)
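As a sanity check on the bandwidth figures above, the per-PMU data rate is simply the reporting rate times the frame size. The 64-byte frame below is an assumed placeholder (actual IEEE C37.118 frame sizes vary with the number of phasors and channels reported):

    def pmu_kbps(frames_per_second, frame_bytes):
        """Raw synchrophasor stream rate, excluding transport overhead."""
        return frames_per_second * frame_bytes * 8 / 1000.0

    print(f"{pmu_kbps(50, 64):.1f} kbps")   # one frame per 20 ms: ~25.6 kbps
    print(f"{pmu_kbps(200, 64):.1f} kbps")  # one frame per 5 ms: ~102.4 kbps

Both figures are consistent with the 10–100 kbps per-PMU range quoted above, and a PDC aggregating a handful of such streams lands in the few-hundred-kbps range.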

Energy metering for settlement and reconciliation at the HV substation delivery point is an important component of the new deregulated power system. This is another potential IP-based communication of the HV substation with a bandwidth that depends upon the frequency of data capture and transfer.

Condition monitoring and asset management of primary components of the substation (circuit breakers, power transformers, etc.) generate condition monitoring data collected for maintenance, loading and stress analysis, and life-cycle management. All substation intelligent electronic devices (IEDs) as well as telecommunications devices also need to support remote management. An asset monitoring network can be implemented across the communications infrastructure using web services, with servers residing in the substation or at some other location. Monitoring in the substation should also include environment monitoring to protect substation assets and premises (e.g., temperature monitoring, fire detection). While this type of data is not as time critical as, for example, SCADA data, it potentially includes a large amount of data from equipment all over the T&D grid and over various communications networks.

Security, surveillance, and safety communications are driven by national authorities with increasing focus on the mitigation of security risks to "critical infrastructures," including those related to electric utilities. Authorities are setting specific security standards that concern not only the security of information and communications but also the physical security of electrical installations and process sites across the power delivery system. The deployment of these applications requires broadband IP connectivity, as enormous quantities of information need to be transported in real time from a large number of dispersed sites to centralized security monitoring stations.

Video monitoring of unmanned substations and night surveillance of premises can be performed via widely used intelligent video over IP surveillance cameras, advanced video analytics, and automatic alarm triggering based on motion detection.

Substation nonoperational data collection includes event recorder data and analog waveforms captured continuously and uploaded for postincident analysis. The volume of data and its non-real-time nature often dissociates it from SCADA. Communications requirements related to these applications are covered through IEC 61850 and constitute part of the TCP/IP network load.

Mobile workforce communications need to incorporate new field work practices using multimedia-centric command control and dispatch communications solutions, replacing or complementing the traditional switched telephone network and/or wireless voice facilities. In-house and contractor maintenance staff require remote access to online maintenance manuals, maintenance applications, substation drawings and plans, accurate maps, pictures, and timely communication of work orders to carry out their tasks. Broadband IP network access with reliable wireless connectivity is required in order to meet an acceptable transactional time performance considering the relatively large volume of data to be handled (file transfer, multimedia group communication, instant messaging, video streaming, etc.).

3.6.2.2.2  Operational Communication Constraints

In order to provision communications services for operational applications, whether through procurement of telecommunications services or through implementing a dedicated network infrastructure, a number of issues must be addressed. Apart from the most basic need, which is the coverage of the operational zone, these issues imply requirements to be predictable, robust, error-proof, and future-proof.

Many critical power process-related applications require predictable behavior in the related communications service. Predictability in this sense can be defined as follows:

  • Deterministic information routing—This means that both in normal operation and in the presence of anomalies and failures, one can precisely determine the path taken by the communication. Fixed or constrained routing limits the operation of network resilience mechanisms to a predefined scheme in which every state taken by the network has been analyzed in advance. Deterministic routing is not a natural instinct for the network designer, who is tempted to employ every resilience capability of the chosen technology. However, it constitutes one of the bases for fault-tolerant design and for predictable time behavior.
  • Predictable time behavior—This is the capability to determine the time latency of the communications link for an application. This attribute is essential for applications such as protection relaying and WAMPAC and needs, as a prerequisite, deterministic information routing.
  • Predictable network transit time, requiring dedicated network resources including backhaul network and back-end connection resources, to prevent competition for resources with other network traffic.
  • Predictable time behavior must also take into account the time required to restore service in the event of a network anomaly.
  • Predictable time behavior is assumed to harness “measuring tools” to monitor the “time latency” for every critical service.
  • Fault tolerance is the capability of continued service in the event of a communications network fault, achieved through the predictable behavior of the system, for example, normal and back-up services without use of common resources, equipment, link, power supply, fiber cable, etc.

Robustness is a system's capability to resist the severe environment in which it must operate. Concerning the operational communications services, different aspects must be considered:

  • Reliable and stable hardware and software—The duplication of critical modules and subsystems and, in certain cases, of the whole equipment or platform increases the availability of the system. Availability is a statistical parameter that must be estimated across the whole chain and must be coordinated among the different constituents of the system. It complements but cannot replace fault tolerance, which is a deterministic concept.
  • Power autonomy—An essential attribute of operational communications is their continuity in the event of AC power supply interruption for a specified duration, ranging from a few hours to a few days depending on operational constraints. This is often a major drawback of using public communications facilities, which generally lack sufficient autonomy. Adequately dimensioned DC batteries and backup generators allow the utility telecommunications infrastructure to remain operational for restoring the power system.
  • Mastering cybersecurity risks—The robustness of the communications service depends also upon the degree of invulnerability of the network infrastructure to security risks in particular for increasing numbers of IP-based smart grid applications. Proper isolation of different services (through VLANs or VPNs) and a security policy covering not only the control center but the whole distributed intelligence system incorporating the communications infrastructure and monitoring of cyber access are some of the aspects of security risk mitigation.

3.6.2.3  Home Area Network

Home area networks (HANs) provide the means for electric utilities to communicate with individual consumer load devices (mostly residential consumers) in support of demand management applications. The HAN is the local communications network in the house that connects the various demand management devices, such as in-home displays, home energy management devices, programmable communicating thermostats, and smart appliances, and is the interface for communicating with the electric utility. The electric meter is considered the obvious choice for the communications interface to the residential consumer; however, other means of communications to the customer premise include cable TV, phone lines, and commercial wireless networks. Within the customer premise, several communications options, wired and wireless, are available to integrate demand management devices. From the utility side, there is momentum to use PLC and wireless solutions, for instance, HomePlug Green PHY (wired), modulating data over the home electrical wiring, and ZigBee (wireless), a constrained mesh network. HomePlug Green PHY is intended for low-cost energy management and home automation and allows for less expensive hardware modulation by sacrificing bandwidth, while still coexisting with the high-speed version, HomePlug AV. ZigBee leverages IEEE 802.15.4 technology plus network, transport, and application layers, as well as a security scheme currently tied to ZigBee, called SEP 1.x (Smart Energy Profile 1.x).

The consumer side tends to lean toward home automation through the use of the ZigBee home automation profile (which unfortunately does not coexist with SEP 1.x) and low-power Wi-Fi. Although home automation provides some rudimentary energy management, it is not nearly as complete as the SEP clusters.

The HomePlug consortium and others have worked with the ZigBee Alliance to create a link layer agnostic version of SEP (version 2.0) that has separated the SEP v1.x layers out and leverages IPv6 and TCP/UDP for the networking and transport, off-the-shelf certificate technologies for security, and HTTP for services. In the case of ZigBee and applicable constrained networks, there is also a requirement for 6LoWPAN (IETF RFC 6282), which performs header compression of the IPv6 network layer and the UDP/TCP transport layer, and a requirement for CoAP (constrained application protocol—draft-ietf-core-coap), which performs compression of HTTP server and client headers. In addition to being link-layer agnostic, going to an Internet-based network layer and off-the-shelf certificate management allows for SEP 2.0 devices and the next specification for the home automation profile to coexist at the link and network layers.

Other technologies used on the consumer side that will give opportunity for the HAN-based energy management include low-power Wi-Fi and Bluetooth low energy, which can each coexist at the link layer of their associated full-power implementations.

Implementing demand management systems requires utilities to take into account flexibility and scalability of networks. Utilities can take advantage of multiple network protocols, topologies, and potentially carriers to implement DR systems. In addition, utilities can also potentially implement their own proprietary networks in geographic localities where coverage is poor or when it otherwise makes sense in order to implement and support demand management systems.

But energy efficiency opportunities are not limited simply to the domain of infrastructure owned and controlled by the electric utility. Home energy management is an area that is just beginning to come into focus, but some forms will involve networking appliances possessing "intelligence" with applications that monitor a broad variety of energy-consuming devices within the home and optimize consumption based on consumer preferences and knowledge of the marginal cost of the energy being consumed. This capability will require new access to data from the electric supplier (marginal price, current consumption, load curtailment signals) as well as interactive capabilities and action notices to the home owner (e.g., to permit override of planned/automated actions). More informed and efficient consumption decisions on the part of the consumer will require gathering and storing information that has the potential for misuse. Thus, the participants in this sector (the electricity, energy management application, and communications service providers) will be faced with new demands to assure the privacy of the data.

For the average ratepayer and electric consumer, wireless fourth-generation networks will enable access to smart grid applications across utilities from a great number of mobile platforms, homes, and other networks, without degradation of service and with fast access to Internet-based and web-enabled services. It also means that smart grid applications will be extended to mobile devices, giving consumers access to a wide range of applications and controls, for example, controlling electric consumption, monitoring appliance usage, and interfacing with the utility's back-office applications (such as billing) from anywhere in the world.

3.6.3  Wireless Network Solutions for Smart Grid

3.6.3.1  Cellular

The adoption of cellular wireless networks as the communications choice for electric utilities started more than 20 years ago. As cellular technology became more readily available, utilities began the migration of certain applications to wireless networks run by the telecommunications companies, but the adoption path has been slow at times as the cautious, risk-averse world of electric utilities has lagged the rapid growth and constant innovation in the mobile wireless telecommunications industry.

Cellular networks are familiar to a majority of the world’s population today. The ubiquitous personal mobile phones and the cellular towers dotting the landscape have become so commonplace that most people take their existence for granted. However, these networks are complex systems that have been refined over decades of development and use. A cellular network is a radio network made up of small low-powered transceivers (also known as “cell phones,” “cellular radios,” or “mobile phones”), a network of powerful fixed, geographically distributed transceivers (also known as “base stations,” “cell sites,” or “cell towers”), and an infrastructure to tie the base stations together and to the public telephone system. Though a full description of cellular technology is beyond the scope of this book, the details necessary for an assessment of their role in the development of the smart grid are included here. Figure 3.129 provides a representative depiction of a basic cellular network.

The first use of cellular technology in the electric utility industry was the adoption of AMPS (advanced mobile phone system) starting in the late 1980s for automated meter reading applications, particularly for commercial meters or for very hard-to-read residential meters. AMPS (considered to be a first-generation or “1G” cellular technology) was an improvement in many ways over the previous methods of meter reading for the commercial and hard-to-read environments in that it did not require a field visit, nor did it require that a POTS (“plain old telephone service”) line be run to the meter and maintained by the customer. Though there were growing pains with the low-bandwidth (9–24 kbps) wireless technology, utilities began to recognize its benefits. When products based on paging networks became available as competitors to the AMPS networks in the late 1990s, pricing pressures and improved electronics led to further adoption of wireless technologies across the utility industry.

However, a challenge to broader commercial wireless adoption for years to come would then follow—the decommissioning of the AMPS networks by their owners that started in 2002 and was complete by 2008. These actions, which were due to a combination of federal pressure, wireless carrier economics, and the inefficient use of valuable spectrum by the AMPS technology, led many utilities to question the wisdom of relying on systems and networks not only beyond their control, but under the control of a commercial entity with a much broader set of business objectives than just keeping utility communications networks intact. This concern, set off by the experience of “losing” AMPS, would linger for years.

Figure 3.129   Basic cellular network.

3.6.3.1.1  2G Networks

The second-generation (“2G”) wireless systems that began to be deployed in the 1990s have slowly repaired the relationship between utilities and commercial cellular network providers. The 2G network rollouts coincided with the massive adoption of cellular technologies worldwide, making the networks more ubiquitous, reliable, and cost effective. The cellular world split into two main camps at this point: those carriers who chose to utilize the global system for mobile communications (GSM) system and its data protocols of GPRS and enhanced data rates for GSM evolution (EDGE) and those who chose the CDMA2000 family and its one times radio transmission technology (1×RTT) data protocol. The North American carriers that chose GSM included Cingular/AT&T Wireless, T-Mobile, and Rogers Wireless, while Verizon, Sprint, and Bell Canada chose CDMA2000.

The 2G systems could provide much greater bandwidth and better pricing than 1G, and advanced electronics made it simpler to create full meter reading systems that could use the new 2G networks. As a result, acceptance of 2G for grid applications grew worldwide. Early 2G data speeds roughly doubled in 2000 with the introduction of commercial GPRS networks in the United States, which provided speeds of about 28 kbps—a major leap forward, though only comparable to the dial-up speeds experienced by users connected via wireline networks. The introduction of EDGE and similar technologies boosted expected average user throughput from about 28 kbps to about 150 kbps, which enabled another class of applications to enter the smart grid arena. While 28 kbps was sufficient for transmitting point-to-point metering data containing a few hundred bytes, EDGE allowed file transfers and supported applications requiring much greater bandwidth. This fivefold increase in throughput, combined with heavy carrier investment in coverage, significantly increased network reliability. The largest AMI deployments through 2009 were in Europe, where hundreds of thousands of meters with GPRS radios inside were installed, though pilots for deployments on a similar scale are underway in North America as well. EDGE allowed an entire new class of utility applications to be implemented. For example, utilities were able to implement wireless wide area network (WWAN) routers to provide backup and continuity of service to traditional wireline circuits. They were also able to extend networks with reasonable throughput to geographic areas where wireline DSL connectivity was not available—for example, remote generation and distribution facilities and service centers in remote locations.

Perhaps most importantly, a number of other industries began adopting 2G wireless systems for their remote communications needs, creating applications in security, oil and gas drilling, fleet management, point-of-sale terminals, and vehicle tracking. The large numbers of applications and devices have attracted greater attention from the carriers, who have subsequently created business units and pricing plans to further grow this high-margin and low-turnover component of their business. The carriers have recognized that these devices do not require significant spectrum to support and have consequently begun to provide long-term service-level agreements (SLAs) to large data-only customers who are concerned about the longevity of the 2G networks. This has further increased the installed base of 2G devices in the field, and thus 2G deployments are expected to continue to grow due to this “snowball effect.” The attractive economics, profitable spectrum use, and business agreements have combined to make an “AMPS-like” decommissioning of the 2G systems unlikely without significant economic fallout for the carriers.

3.6.3.1.2  3G Networks

In the mid-2000s, the first of the third-generation or “3G” networks were deployed. The GSM-based carriers such as AT&T and T-Mobile deployed the GSM variant of 3G (known as high-speed packet access or HSPA), while Sprint and Verizon deployed the CDMA variant, evolution-data optimized (EV-DO). These networks provided substantially greater bandwidth to the mobile device—approximately 10 Mbps (peak speed). The uptake of 3G technology by consumers was greater than anticipated, in part due to breakthrough devices like Apple’s iPhone, and, as a result, the commercial carriers have invested billions of dollars in the infrastructure necessary to support the greater bandwidth requirements, with billions more yet to come. The increase of average throughput speed from 150 kbps to 1.5 Mbps was significant. Most smart grid applications (such as AMI) today place a premium on low cost and ubiquitous coverage rather than greater bandwidth, which has led to further adoption of the 2G technologies that have been deployed already. Nevertheless, the introduction of 3G allowed AMI solutions to transmit large volumes of data using data collector units equipped with 3G cards and connected to the utility meter data management systems and back-office servers.

In addition to the boost in wireless throughput speeds, 3G technologies provided significantly enhanced security capabilities, which enabled utilities to address privacy and security concerns (for instance, GPRS and HSPA allow the implementation of authentication mechanisms). In addition, 3G technologies introduced a newer, stronger, and more robust cryptographic algorithm named KASUMI that addresses the vulnerabilities associated with earlier generation algorithms. These capabilities have affected and will continue to affect the type of smart grid network elements and the network architecture of future systems.

3.6.3.1.3  4G Networks

The changes in network speeds and capabilities are just beginning—all major carriers have decided to converge on the long-term evolution (“LTE”) set of protocols as their future 4G infrastructure. This has the potential to further accelerate technological innovation in the cellular field, as it will be the first time since the 1980s that a single technology platform will be shared among the vast majority of cellular users worldwide. These networks are scheduled to be deployed beginning in 2011, enabling peak speeds up to 100 Mbps and average user throughput speeds of about 10 Mbps—faster than existing commercial WANs and enabling wireless utility data transmission capabilities that approximate the speed of today’s DSL wireline networks. 4G networks use packet-switching technologies, and a key feature of 4G in contrast to 3G is that it is expected to be all IP-based.

To date, little has been said about what comes after 4G technologies such as LTE and WiMAX. However, based on the history of cellular development thus far, the seemingly insatiable desire for bandwidth, and the amount of money that flows through the industry worldwide in search of competitive advantage, something will likely come along for deployment in the 2017–2020 timeframe. Based on the trends that have been set by the industry so far, the bandwidth would likely be on the order of today’s optical fiber (1 Gbps or more), with increased reliability and reduced cost per bit. The impact of this level of capability on the smart grid of 2020 and beyond is hard to predict, but it is likely that these trends will make cellular technology more attractive to utility customers as the broadening gap with utility-owned wireless networks becomes clear.

3.6.3.1.4  Strengths and Weaknesses of Cellular Communications

Cellular currently plays a part in the communications infrastructure of a large number of utilities, typically for their commercial and industrial customers or for hard-to-read residential meters. A number of vendors have focused on this part of the overall smart meter market, for example, SmartSynch, Metrum, Trilliant, Elster, and Comverge. With the rapidly dropping prices for data and electronic components, the residential market as a whole has now become addressable by cellular technology. For example, the largest single deployment of cellular-based smart grid technology to date was the first major AMI deployment in the Province of Ontario by Hydro One, which selected SmartSynch to provide and install 20,000 residential meters using 2G technology (GPRS) in 2006. As for 4G deployments to date, Grid Net has been the market leader in promoting the use of WiMAX for the smart grid and has an early market win in Australia. Grid Net is now also supporting LTE [1]. Ausgrid in Australia has announced its intention to migrate its network from WiMAX to LTE [2].

The widespread global use of cellular networks—with over five billion mobile phone connections worldwide in 2010 [3]—certainly implies that there might be broad benefits to using the networks that could also be applicable to smart grid use. Perhaps the most significant benefit is the enormous ecosystem that supports the global cellular industry and those five billion customers. Utilities that utilize cellular networks are leveraging billions of dollars per year in technology innovation, infrastructure deployment, and engineering education. The intense competitive environment in the telecommunications industry—not only for the carriers acting at the retail level but also for infrastructure providers, software companies, and consulting firms—leads to continuous cost savings and service improvements at every link of the communications chain. The U.S. carriers alone invest billions of dollars each year just in infrastructure, which is subsequently invested by their vendors to improve performance and so on. The result, as the world has already seen in the last few years, is an accelerating pace of innovation and progress. With the number of cellular users increasing daily, the cumulative benefit to utilities will be trillions of dollars of investment in their communications choice, regardless of whether the utility itself takes any proactive steps to invest in that technology or not. No other utility communications option can match even a significant fraction of the investment being made in the cellular industry.

Another positive attribute of cellular networks for the smart grid is that they are already deployed throughout a very large part of the developed world. This gives utilities the ability to choose where they want to deploy the actual grid-specific components of the smart grid such as sensors and meters without having to deploy a communications network first. Even when factoring in the extremely remote areas of a utility’s service territory, using inexpensive cellular for those areas where coverage exists leaves the utility with more funds to target the remote areas where cellular networks have not been deployed.

Alternatively, cellular systems are a much simpler and cheaper solution for provisioning in developing regions or countries, as the infrastructure cost of green-fielding a wired POTS system is far greater than that of a wireless cellular system.

Finally, a great strength of the choice of cellular for utilities is the widespread base of expertise that exists to build, operate, troubleshoot, and optimize these networks, developed by virtue of the numerous networks that have been deployed for many years. Most university electrical engineering departments now include an option in their design programs to learn about cellular networks, and, perhaps more importantly, virtually all students are intimately familiar with the capabilities and challenges of daily cellular use.

Cellular is not without its weaknesses as it relates to the smart grid—if not for these factors, cellular would certainly see a higher penetration rate than it does today. The technical issue cited most often by utilities as a weakness of cellular is the restriction on any network’s coverage footprint. “I couldn’t get coverage there” is a lament that has a first-order impact on the ability to use cellular at a given location. In the past, utilities have not been able to significantly influence placement of cellular towers due to business case issues. Commercial carriers have improved the situation in certain circumstances by using higher-powered cellular radios, purchasing and provisioning sub-GHz spectrum (which has better penetration in buildings containing cement and rebar), providing external antennas with long runs, or installing repeaters and micro/femto-cells.

Yet, utility organizations are made up of individuals who likely use a cellular phone regularly, either for work or personal use. They use them at all times of the day in a variety of conditions, and consequently these individual experiences mold opinions about the apparent reliability of cellular communications. “Dropped calls” and unclear connections are familiar experiences for most cellular phone users, and they assume that these issues will occur for cellular-based smart grid communications as well. In reality, any wireless network will have some low-fidelity and lost connections, and for low-latency, high-quality communications like voice calls, these issues will be noticeable. In the case of smart grid applications, the data networks—which perform differently in many material ways—are used exclusively. Moreover, given that most power-system cellular communications equipment will be directly connected to power and does not need to rely on maximizing handset battery life, equipment vendors can use cellular radios with higher transmit power and more sensitive antennas to achieve data communications links in areas that would be considered “dead zones” for cellular voice communications. In addition, cellular systems have a number of algorithms in place at the tower and the device to enhance the reliability and security of data connections. Nonetheless, any concerns regarding the perceptions of reliability and daily performance must be addressed directly by smart grid equipment vendors who utilize these networks.

Another significant hurdle for cellular-based smart grid business cases is the relatively higher bill of materials cost of the radios used to communicate on the networks. The cellular protocols and infrastructure were designed to accommodate secure and reliable communications even when the handset is moving at high speeds, with the need to switch towers frequently, in environments with high amounts of electromagnetic noise. The result is a very capable but complex radio that inevitably costs more than a radio without the same security, noise, bandwidth, and protocol requirements.

While the aforementioned positives and negatives are fundamental traits of commercial cellular networks, some factors regarding their use within the smart grid are changing rapidly and thus are the subject of debate. One of the most contentious is the utilities’ desire to have some sense of “control” of the networks to minimize operational risk, which manifests itself in two important ways—the level of counterparty risk (i.e., the risk that a wireless carrier would default on its responsibility to provide a reliable network to the utility) assumed by the utility and the risk that technology evolution would drive wireless carriers to make the utility communications equipment obsolete. These are legitimate and serious concerns.

Sophisticated utilities consider the counterparty risk within the context of the overall risk environment, including the counterparty risk of the alternatives. A nationwide wireless carrier is unlikely to default on its responsibility, given the size of those companies and the very large number of other parties with whom they have counterparty arrangements (including a large consumer population and government agencies); moreover, the internal risk to the utility if the carrier does default is limited, since the utility only needs to change out the end devices to a competitor network. This risk is small relative to the counterparty risk of having a utility-only communications provider default: these companies are typically smaller, less diversified, and have very few counterparty arrangements, and their defaults put the entire utility network infrastructure (not just end devices) at risk without future support. Some utilities, such as Ausgrid in Australia, have decided to install, own, and operate their own cellular communications networks.

With regard to obsolescence, utilities expect equipment deployments that do not need to be revisited for many years, if not decades, and expect a very high level of consistency of performance throughout those years, while carriers must balance the desires of consumers, nationwide spectrum holdings, and network operations cost. On one side of the debate, cellular would not seem to be a good candidate: carriers are subject to economic and market considerations that have led to the decommissioning of first-generation networks and the deterioration of the paging networks, and the wireless industry as a whole moves so quickly that some utilities believe they are better off not participating directly. On the other hand, when looking at decisions made today from the vantage point of a few years in the future, utilities that choose utility-specific technologies will almost certainly find that those technologies cannot keep up with the rest of the world, creating frustrated utility customers as well as regulatory commissions and utilities saddled with the albatross of an inadequate communications network on their balance sheets. The reality for utilities in the twenty-first century is that technical obsolescence is a fact of life, and the wisest choice is to minimize their exposure to the ongoing technology upgrade cycle by focusing on only their equipment and not the network infrastructure as well. A solid middle ground perhaps exists in which carriers address the concerns of utilities through SLAs, long-term commitments to utilize the networks, and products that embrace both the utilities’ and the carriers’ concerns.

Other arguments revolve around how regulatory commissions allow cost recovery of capital equipment deployed. So far, regulatory commissions have not widely allowed utilities to recover operating expenses associated with the operation of the network and the subsequent meter reading expenses. This creates a large operational expense (OpEx) that can be difficult for some utilities to accept given the regulatory incentives. The alternative for utilities is to create and build their own networks and capitalize the cost (CapEx) of these proprietary networks, increasing the rate base and passing the cost to consumers and ratepayers. When considering the overall cost of these deployments, industrial and consumer groups are pressuring regulatory bodies to allow cost recovery of expenses associated with commercial networks. Utilities will have to find an optimal financial equilibrium between using commercial networks and deploying their own proprietary networks based on these factors. Utilities have a choice: in 5–10 years, will they have to address only the costs of upgrading the devices on their network to satisfy smart grid needs, or will they bear the costs of both the devices and the network itself that they are responsible for upgrading—and in which direction will their regulating commissions be amenable to cost recovery?

Lastly, the impact of natural disasters on both the electric and telecommunications networks is an important area for constructive discussions. Electricity generation, transmission, and distribution systems in the developed world are a modern marvel, and their consumers have grown accustomed to the availability of ample electricity whenever it is needed. As a result, modern societies have become almost completely dependent on electricity for normal life, and this dependency is never more apparent than after a natural disaster. Energy—specifically electricity and fuel—has joined food, water, and shelter as necessities of life. Utilities are therefore inclined to take whatever steps they can to ensure that they can restore electricity as soon as possible after a widespread outage, and the “smart” elements of the grid are no exception. Some utilities believe that it is thus necessary to build and operate their own communications networks, since they will then have control over which aspects of the network are brought online first to assist in restoring electricity overall. Wireless carriers argue, however, that the emphasis on building a proprietary network for this reason arose during a different time, when only a small part of the population was interested in wireless networks. In addition, since the introduction of 2G networks, commercial carriers have configured the EDGE, HSPA, and soon-to-be-introduced LTE networks to allow data throughput even in cases where voice channels are saturated. This voice/data resource segmentation assures utilities that their data will still get through the commercial networks even during periods of high voice utilization or limited network infrastructure availability. The long-term path forward probably involves a much closer relationship between the electric utilities, communications providers, and emergency response personnel so that the highly connected world is restored in the best interests of the population as a whole. This is especially true for municipalities that may push for a single telecommunications infrastructure supporting electricity as well as water, gas, video, and other services.

The worldwide impact of cellular technology has been significant since it was first introduced over 20 years ago, and its effect on the adoption of the smart grid will surely be significant as well. The nature of that effect, the timing of the adoption, and the evolution of the relationship between commercial wireless carriers and electric utilities will all be interesting to watch.

3.6.3.1.5  Role of Cellular Communications in the Smart Grid

Twenty-first century smart grid solutions should be founded on architectural principles and not on specific technologies because of the accelerated pace of technological change. In general, three architectural principles should be followed when considering cellular communications for smart grid deployments: (1) rapid expandability, (2) integrated stratums, and (3) transparent commonality. These three architectural principles allow utilities to implement infrastructure elements that can be upgraded in a modular manner, which enables a longer life for smart grid elements, consistent with regulatory commissions’ desire to reduce cost for ratepayers and system subscribers.

Rapid expandability: Electric utilities should implement network elements and smart grid components that are based on technologies that can be expanded rapidly. The concept of expandability allows utilities to respond to changes in the regulatory climate and in ratepayers’ and consumers’ demand and provides utilities with the ability to respond to technological network changes in a cost-effective manner. While the full realization of the smart grid vision may take many years to be fulfilled, utilities should implement communications network capabilities in a rapidly expandable manner in order to support various forms of smart grid deployments—such as AMI—in the next 3–5 years. In addition, utilities must also implement networks that can respond to technological changes and rapid evolution in the world’s wireless ecosystem. An example of this rapid evolution in the GSM architecture during the last 10 years is the introduction of new WWAN technologies roughly every 18 months, beginning with GPRS in 2001 and followed by EDGE in 2003, UMTS in 2005, HSDPA in 2007, HSUPA in 2008, HSPA 7.2 in 2009, and now LTE. The evolution of this ecosystem has enabled utilities to support legacy network implementations while at the same time introducing newer and faster network capabilities as needed. For example, utilities are using the faster 3G network capabilities to support data collectors that manage large numbers of meters in mesh networks. Solutions implemented should be rapidly expandable across smart grid components and systems and should also have the ability to transcend individual data technologies.

Integrated stratums: At the same time, smart grid communications systems should be built upon an integrated stratum approach where one layer can be upgraded or changed without disturbing other layers of the smart grid model. For example, specific wireless modems supporting smart meters should be able to be upgraded to LTE without affecting the AMI head-end system or meter data management system application layer.

Transparent commonality: While rapid expandability and integrated stratums are important principles, wireless solutions also depend on their ability to integrate across multiple platforms and wireless systems. For example, an electric utility may have more than one wireless commercial carrier and more than one type of WWAN solution that supports sending data to an integrated meter data management system. These systems must be capable of interfacing with a variety of application solutions and carriers to effectively collect data and interface to other back-office core utility systems. Therefore, transparent commonality should be a key component of any communications solution implemented in smart grid systems. Systems that are integrated rely on transparent and common application interfaces to communicate across stratums. The term transparent commonality denotes the ability to implement architecturally similar systems in order to reduce the initial cost of capital and ongoing operational expenses. Transparently common systems allow utilities to implement a supportable architecture over their service territory while minimizing change. In other words, utilities should minimize the types of networks and technologies deployed across their smart grid systems and ensure that networks are implemented in response to consumer and business requirements. For example, a utility may choose to implement one type of AMI architecture to support urban and high-density consumers and another AMI architecture to support consumers in rural areas. By creating these categories, utilities can deploy common and transparent systems across their service territory while optimizing the cost of implementation. Thus, transparent commonality is an important component of the smart grid that reduces the overall costs associated with smart grid architectures and allows components to communicate with each other in a transparent manner. Commonality requires smart grid elements to use published and open interfaces and requires that these interfaces expose only the limited set of system capabilities allowed for security purposes. Thus, applications do not need to know the internal structures of other network elements and application systems, but they do need to know how to interface in a transparent manner. The principle of transparent commonality reduces the level of complexity associated with smart grid components, which, if left unmanaged, can become unmanageable.
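
As a minimal sketch of this principle (in Python, with all class names, meter identifiers, and return values hypothetical), the fragment below shows how an application written against one published interface is unaffected when the underlying carrier or network technology is swapped:

from abc import ABC, abstractmethod

# Hypothetical sketch of "transparent commonality": the application codes
# against one published interface, so the backhaul underneath (a commercial
# WWAN, a utility-owned mesh, etc.) can be swapped without touching the
# application layer. Class names and values are invented for illustration.
class MeterTransport(ABC):
    @abstractmethod
    def read_register(self, meter_id: str) -> float:
        """Fetch the latest energy register for one meter."""

class CommercialWwanTransport(MeterTransport):
    def read_register(self, meter_id: str) -> float:
        return 10482.7  # placeholder for an exchange over a carrier network

class UtilityMeshTransport(MeterTransport):
    def read_register(self, meter_id: str) -> float:
        return 10482.7  # placeholder for an exchange over an RF mesh

def nightly_billing_read(meters, transport: MeterTransport) -> dict:
    # The application neither knows nor cares which network is underneath.
    return {m: transport.read_register(m) for m in meters}

print(nightly_billing_read(["MTR-001", "MTR-002"], UtilityMeshTransport()))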

3.6.3.2  RF Mesh

Radio-frequency (RF) mesh technologies form the communications backbone of numerous existing AMI deployments today. RF technologies are simple, cheap, and widespread. RF mesh devices communicate over radio links with one another and back to an access point, which is in turn connected to the utility via a backhaul network. RF technologies can generally provide bandwidth of approximately 100–200 kbps, which is usually deemed sufficient for typical smart grid field applications, but not for backhaul networks that are expected to aggregate data. Examples of major RF mesh communications providers include Silver Spring Networks, Trilliant, Itron, Landis+Gyr, and Elster.

A key feature of RF mesh technologies is the ability to form a “peering” network. In this configuration, each device is capable of communicating with nearby peers and then sending information via those peers to an access point that has a direct communications path to the utility. A simple way to think of this is that every mesh node acts as a router—the advantage of this method is that not every device has to have a direct communications path all the way back to the utility; each device only needs a communications path to a peer. This saves on costs and power in that the communications chips in each device can be considerably less sophisticated and use less power in their signals. This peer-to-peer type of network can also repair itself if a connection to a given peer is interrupted, as long as there are other peers in communicating range (usually line of sight). Thus, coverage is easy to roll out in a phased manner. In an RF mesh configuration, an access point may cover several blocks in a neighborhood, as opposed to a single cell tower that must cover several miles. Thus, while more access points are required, the power and size of each access point are smaller than those of a cell tower.
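
The self-healing behavior described above can be made concrete with a minimal sketch, assuming an invented five-node topology: each node knows only its immediate peers, a shortest-hop search finds the path to the access point, and the same search finds a detour when a link fails.

from collections import deque

# Invented five-node neighborhood: "AP" is the access point with backhaul to
# the utility; meters m1..m4 each know only their immediate radio peers.
links = {
    "AP": ["m1", "m2"],
    "m1": ["AP", "m3"],
    "m2": ["AP", "m3", "m4"],
    "m3": ["m1", "m2", "m4"],
    "m4": ["m2", "m3"],
}

def route_to_ap(source, links):
    """Breadth-first search for a shortest hop-by-hop path to the access point."""
    seen, queue = {source}, deque([[source]])
    while queue:
        path = queue.popleft()
        if path[-1] == "AP":
            return path
        for peer in links[path[-1]]:
            if peer not in seen:
                seen.add(peer)
                queue.append(path + [peer])
    return None  # node is isolated from the access point

print(route_to_ap("m4", links))                     # ['m4', 'm2', 'AP']
links["m2"].remove("m4"); links["m4"].remove("m2")  # simulate a failed link
print(route_to_ap("m4", links))                     # the mesh "heals" via m3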

Another advantage of RF mesh networks is that they provide a greater degree of network control for utilities. Utilities can closely specify the operating characteristics, extent, and cost of the network. Utilities are less concerned about the impact of emerging technologies and the eventual phase-out of legacy technologies if their network is under their own control.

A disadvantage of RF mesh technologies is that they are typically based on proprietary technologies with single-vendor sources. This is a risk for utilities looking for long-term sustainability and vendor support, since a vendor supplying the technology may go out of business. In addition, some RF mesh solutions operate in unlicensed ISM frequency bands, which are susceptible to interference. Licensed spectrum is narrowband and is vendor specific via arrangements with spectrum owners or municipalities. This limits the application and device ecosystem compared to cellular networks.

3.6.4  Communication Standards and Protocols

The key word in smart grid development is information. In order to operate a smart grid, accurate information about the power system—its current status, its trends, its history, and its applications—is necessary. Access to information is granted by communication, and this is why communication standards and protocols play a key role in smart grid developments around the world. Through standardized communications interfaces that describe the functionality, the information is not only accessible but also interoperable, allowing a cost-effective implementation of the required functionality across domains.

Communication is the core of smart grid applications. It enables the exchange of information between the different elements of the grid, such as feeder equipment, substation equipment, and network control centers, by providing a common set of rules for data representation and data transmission. In a utility environment, the exchange of information is handled via communications protocols that, depending on the application, have to satisfy different constraints. For example, protection applications have more stringent requirements on real-time and reliable information delivery than monitoring applications. Similarly, cybersecurity requirements may vary. For example, cybersecurity can be of higher importance for applications interacting with customer meters than for monitoring applications local to a substation.

The discussion that follows is an overview of the major communications protocols considered for smart grid applications. The goal is not to give an exhaustive list of communications protocols but rather to identify the main areas of application.

From a smart grid point of view, communication can be classified into two main categories:

  • Communications systems specific to smart grid applications such as IEC 61850 [3–6] and IEC 61968-9 [7] and communications protocols such as DNP3 [8], IEC 60870-5 [9], IEEE C37.118 [10], ANSI C12.19 [11], ANSI C12.18 [12], ANSI C12.21 [13], ANSI C12.22 [14]
  • Auxiliary protocols playing a major role in smart grids but not limited to this application domain, such as IEC 62439 (HSR/PRP)* [15], IEEE 1588 [16], NTP [17], and the widely used Ethernet [18], IP [19], and TCP [20]/UDP [21]

IEC 61850, and more precisely the mapping to the communications protocols as defined in IEC 61850-8-1 and IEC 61850-9-2, was originally designed for use within substations at transmission and distribution levels. However, through the latest extensions, IEC 61850 now provides a communications solution for substations, between substations, to control centers, for hydro-electric power plants, for distributed energy resources and wind farms, and it is expected that more domains will be added to the standard in the near future.

DNP3 and IEC 60870-5 are protocols that are mainly used at the transmission and distribution levels to exchange data between substation equipment and the network control center.

IEC 61968-9 specifies the interaction with customer meters and therefore targets low-voltage applications. Similarly, ANSI C12.19, ANSI C12.18, ANSI C12.21, and ANSI C12.22 define a set of standards for the data exchange between a customer meter and a meter reader over multiple media such as an optical port, a modem, or a network.

HSR and PRP are two reliable communications protocols for industrial automation suited for protection and control applications, while IEEE 1588 and NTP can handle the time synchronization constraints required by the devices on the network. IEEE 802.3 (Ethernet), IETF RFC 791 (IPv4), RFC 793 (TCP), and RFC 768 (UDP) are the basis of most smart grid communications systems and protocols. IPv4, born out of DARPA (Defense Advanced Research Projects Agency) research in the 1970s and used to this day as the backbone of the modern Internet, has been so successful that it has started to outgrow itself: the available network address blocks have been depleted to the point that, since April 2011, IANA (Internet Assigned Numbers Authority) and the RIRs (Regional Internet Registries) have been in a constant churn of reclaiming, redistributing, and reallocating network space. This is less of an issue for the utility sector than for the Internet at large, where it was seen as a problem on the horizon as early as the 1990s. A great amount of research led to a solution that not only expanded the address space but also solved fundamental problems associated with IPv4. Born out of that research was RFC 2460 (IPv6)—a network protocol that addresses address space, network configuration, network discovery, neighbor discovery, routing redundancy, mobile routing, and network security. At the transport layer, TCP and UDP have been used for reliable streaming and datagram support for applications; an alternative that may be better suited for the utility sector is RFC 4960 (SCTP, Stream Control Transmission Protocol), which allows reliable transport of packets out of order. It should be noted that the Internet Protocol Suite (IPv4/v6 and TCP/UDP) combines the layers defined in the ISO OSI seven-layer model (Figure 3.130) into four layers—link, Internet, transport, and application. For a better understanding of Ethernet, IPv4/v6, and TCP/UDP, readers should refer to the abundant literature on Internet protocols such as [22] or [23].
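
As a brief illustration of the transport-layer choice discussed above (the head-end address is hypothetical, and error handling is omitted), the sketch below contrasts a TCP stream socket, which retransmits and preserves ordering, with a UDP datagram socket, which sends self-contained packets with no delivery guarantee:

import socket

# The head-end address below is hypothetical.
HEAD_END = ("192.0.2.10", 20000)

def send_reliable(payload: bytes) -> None:
    # TCP: an ordered, reliable byte stream; the stack retransmits lost
    # segments, which suits fragmented payloads such as XML meter readings.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(HEAD_END)
        s.sendall(payload)

def send_datagram(payload: bytes) -> None:
    # UDP: one self-contained datagram, no delivery guarantee; often the
    # better fit for periodic telemetry where a late sample is worthless.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, HEAD_END)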

3.6.4.1  IEC 61850

The standard IEC 61850 “communication networks and systems for power utility automation” defines a communications system for interoperability between equipment from different manufacturers. The standard introduces several features that impact the design of systems, for example, the use of communications services for the exchange of time-critical information between IEDs such as protection relays. With that, the hard wiring of signals between the relays used to implement protection schemes can effectively be eliminated. Other impacts of IEC 61850 relate to the hardware and software design of equipment; the design, commissioning, and testing of a system; and the required training for engineers.

The standard IEC 61850 Edition 1 was published by IEC between 2003 and 2005. Since then, new editions of the standard have been under development, and the scope of the standard has been extended extensively to include many new domains, as indicated earlier. The purpose of IEC 61850 is to provide all the necessary specifications required to achieve interoperability between the equipment of an integrated system. To achieve that, the standard defines communications services based on TCP/IP and Ethernet and object models describing the data visible to other equipment. It further defines a language to exchange engineering information between tools. More information on IEC 61850 can be found in the Smart Substation section of this book.

3.6.4.2  DNP3 and IEC 60870-5

DNP3 and IEC 60870-5 were developed in the 1990s and are two commonly used protocols for communication between field devices, residing either inside or outside a substation, and the network control center. They are also sometimes referred to as “SCADA protocols,” as they are intended to standardize the communication of SCADA systems. Their main applications are monitoring and controlling field equipment. Both protocols use a simplified version of the open system interconnection (OSI) seven-layer reference model [24], referred to as the enhanced performance architecture (EPA), which only includes the physical, link, and application layers. However, a fourth layer is sometimes considered for DNP3, targeting the transport layer (DNP transport). While DNP3 and IEC 60870-5 were primarily intended to run over RS-232 or RS-485, they evolved to now use Ethernet and TCP/IP. In many respects, DNP3 and IEC 60870-5 are quite similar; however, they have significant technical differences and are not compatible. Moreover, DNP3 is mainly used in North and South America, while IEC 60870-5 is predominant in Europe and the Middle East.

Figure 3.130   ISO (International Organization for Standardization; www.iso.org) Open Systems Interconnect (OSI) seven-layer communications model (http://www.novell.com/info/primer/prim05.html).

3.6.4.3  IEEE C37.118

The IEEE C37.118 standard defines the exchange of synchronized phasor measurements used in power system applications. It was first published in 1995 and was revised in 2006. A synchronized phasor measurement, or synchrophasor, is produced by a PMU and represents the magnitude and phase angle of a waveform. PMUs distributed across the electric grid produce synchronized measurements, that is, measurements taken at the same time. As of today, IEEE C37.118 is primarily used for WAMPAC applications.

The IEEE C37.118 standard defines the communication rules for a single PMU, or a PMU data aggregator, called PDC (Phasor Data Concentrator). Due to the distributed nature of the measurement points across the network, IEEE C37.118 has to be implemented on top of a routable protocol such as TCP/IP or UDP/IP. To ensure the success of applications based on IEEE C37.118, two main challenges have to be addressed: (1) synchronization of the measurement points or PMUs and (2) transmission delays. The synchronization aspect is currently handled through the integration of a GPS receiver directly into the PMU. In the future, IEEE 1588 should be able to provide the required synchronization needs through the communications network and therefore remove the need for a GPS receiver in PMU devices. The transmission delay challenge depends highly on the network communication topology, that is, number of switches, length and type of links, etc. However, a common practice is to sacrifice the reliability characteristic of TCP by implementing IEEE C37.118 over UDP.
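
The following sketch illustrates why UDP suits synchrophasor streaming; it is emphatically not the normative C37.118 frame format (the SYNC word, IDCODE, configuration frames, and CRC are all omitted), and the PDC address and reporting rate are assumptions for illustration only:

import math
import socket
import struct
import time

PDC = ("192.0.2.20", 4712)  # hypothetical phasor data concentrator address

def send_phasor(sock, magnitude_v: float, angle_rad: float) -> None:
    # A toy frame: epoch second, fraction of second in microseconds, and one
    # phasor as magnitude/angle. Real C37.118 frames carry a SYNC word,
    # IDCODE, status, multiple phasors, and a CRC, none of which appear here.
    now = time.time()
    soc, fracsec = int(now), int((now % 1) * 1_000_000)
    frame = struct.pack("!IIff", soc, fracsec, magnitude_v, angle_rad)
    sock.sendto(frame, PDC)  # fire-and-forget: no retransmission delay

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(30):  # e.g., a 30 frames-per-second reporting rate
    send_phasor(sock, 230_000.0, math.radians(12.0))
    time.sleep(1 / 30)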

3.6.4.4  IEC 61968-9 and MultiSpeak

The IEC 61968-9 standard, issued in 2009, defines the interfaces for reading, monitoring, and controlling meters installed at a customer site. As such, IEC 61968-9 is not a full communications protocol, since only the seventh layer of the OSI model is covered. The standard defines a set of XML schemas for the different operations applicable to meters. Examples of such operations are reading a load curve, reading or writing contract parameters, reading technical maintenance data, and reading meter values. Even though IEC 61968-9 defines only the XML schemas, it implicitly imposes some constraints on the lower-level protocols. The usage of XML implies that most messages will have to be fragmented for transmission over the physical media; therefore, a protocol with fragmentation capabilities, such as IP, is required. Moreover, due to this implicit fragmentation, each fragment must be delivered reliably—a lost fragment renders the message unreadable—and therefore a protocol such as TCP rather than UDP is preferred.
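
A schematic example of the kind of payload involved is sketched below; the element names are simplified stand-ins rather than the normative IEC 61968-9 schema, the point being that the standard fixes the message content at the application layer and leaves transport to a reliable stack underneath:

import xml.etree.ElementTree as ET

# Build one illustrative meter-reading message. The element names are
# simplified stand-ins for the standard's schemas, not the normative ones.
msg = ET.Element("MeterReadings")
reading = ET.SubElement(msg, "MeterReading")
ET.SubElement(reading, "MeterID").text = "MTR-000451"
ET.SubElement(reading, "ReadingType").text = "kWh"
ET.SubElement(reading, "Timestamp").text = "2012-06-01T00:15:00Z"
ET.SubElement(reading, "Value").text = "10482.7"

payload = ET.tostring(msg, encoding="utf-8")
# A message holding many such readings easily exceeds one MTU, which is why
# the text above calls for IP fragmentation and a reliable transport (TCP).
print(payload.decode())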

Similarly to IEC 61968-9, MultiSpeak is a specification defining messages to interact with electrical meters. However, there are some important differences between the two standards. First, MultiSpeak is focused on meeting the needs of electric cooperatives in the North American market, while IEC 61968-9 is aimed at all utilities in the international marketplace. Second, IEC 61968-9 is transport independent, while MultiSpeak requires SOAP messages using HTTP and TCP/IP socket connections. Third, message headers can be easily mapped between MultiSpeak and IEC 61968-9, but the mapping of message content between the two is more complex. MultiSpeak has a longer history than IEC 61968-9, with its third and latest version released in 2008. It is worth noting that an ongoing harmonization effort was started in 2008 and will eventually lead to a complete mapping of MultiSpeak messages to IEC 61968-9.

3.6.4.5  ANSI C12.19, ANSI C12.18, ANSI C12.21, and ANSI C12.22

ANSI C12.19, initiated in the 1990s by the American National Standards Institute, is the main standard used in North America for data exchange between gas, water, and electricity meters and utilities. It provides a data model of the meter through the specification of a set of common data structures, referred to as “tables,” to read, write, and configure a metering device. However, ANSI C12.19 only provides a model of the meter and is not as such a complete communications protocol. ANSI C12.18, ANSI C12.21, and ANSI C12.22 define the underlying protocols used by ANSI C12.19 to transport the data over various communication media.
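
The division of labor between the data model and its transports can be sketched as follows; the table numbers and fields are invented for illustration and do not reproduce the normative C12.19 table set:

# Schematic model of the ANSI C12.19 "table" idea: the meter exposes its data
# as numbered tables that a client reads, writes, or configures. The table
# numbers and fields below are invented and do not reproduce the real set.
class MeterTableModel:
    def __init__(self):
        self.tables = {
            1: {"manufacturer": "ACME", "model": "X100"},  # identification
            23: {"kwh_total": 10482.7},                    # energy registers
        }

    def read_table(self, table_id: int) -> dict:
        # ANSI C12.18, C12.21, and C12.22 differ only in how this request is
        # transported (optical port, modem, or network), not in the model.
        return self.tables[table_id]

meter = MeterTableModel()
print(meter.read_table(23))  # {'kwh_total': 10482.7}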

The ANSI C12.18 standard is written specifically for meter communications via an ANSI Type 2 Optical Port. It is a complete point-to-point communications protocol covering the seven layers of the OSI model. Additionally, it details the criteria required for communications between an ANSI C12.18 device and an ANSI C12.18 client via an optical port. The ANSI C12.18 client may be a handheld reader, a portable computer, a master station system, or another electronic communications device. It is mostly used for manual meter reading using the infrared optical port currently in use by most North American meters.

Similarly, the ANSI C12.21 standard details the criteria required for communications between a C12.21 device and a C12.21 client via a modem connected to the switched telephone network. It is also a point-to-point communications protocol, but compared to ANSI C12.18, it allows the remote reading of a meter. It also includes authentication.

ANSI C12.22 is the designation of the latest standard being developed to allow the transport of ANSI C12.19 table data over networked connections. C12.22 is intended for use over already existing communications networks just as C12.21 is intended for use with already existing modems. Examples of such communications networks covered by C12.22 include TCP/IP over Ethernet, SMS over GSM, or UDP/IP over PPP over serial communications links. ANSI C12.22 is suited for automated reading of meter devices.

3.6.4.6  High-Reliability Protocols

IEC 62439-3, published in 2010, standardizes several protocols for industrial communication with a strong focus on reliability. From a smart grid point of view, two protocols are of particular interest: PRP (Parallel Redundancy Protocol) and HSR (High-availability Seamless Redundancy). Compared to other protocols, PRP and HSR provide instantaneous recovery in case of a link failure, which is a crucial feature for real-time applications, for example, a differential protection application based on IEC 61850-9-2; other protocols require at least several milliseconds to recompute a new route after a link or switch failure. Moreover, PRP and HSR are primarily intended for LANs and affect only layer 2 of the OSI model, which makes them good candidates for substation automation applications.

The principles of PRP and HSR are simple and can be summarized in three points: (a) each device is redundantly connected to the network through two independent network interface controllers (NICs) and two independent links, (b) the messages issued by the sender are duplicated over the two connections and sent simultaneously, and (c) the receiver passes the first received copy to the application (e.g., a protection function or a TCP/IP stack) and discards the duplicate. From an application point of view, PRP and HSR are transparent and therefore do not require any modification. Moreover, failure of a link between the sender and the receiver does not introduce any delay, since the messages are duplicated and transmitted simultaneously. The choice between PRP and HSR depends on the network topology: PRP requires two parallel independent networks, while HSR is applicable to a ring topology. PRP can be implemented entirely in software (at the driver level) and only requires an additional NIC on the device, while HSR requires the HSR switch functionality to be implemented by each device participating in the ring.
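
A minimal model of the duplicate/discard mechanism is sketched below; real PRP carries a redundancy control trailer at layer 2 and bounds its duplicate-detection window, whereas this illustration simply remembers the sequence numbers already delivered:

# Toy model of PRP's duplicate/discard rule, not a protocol implementation.
class PrpReceiver:
    def __init__(self):
        self.delivered = set()

    def on_frame(self, sender: str, seqno: int, payload: bytes):
        if (sender, seqno) in self.delivered:
            return None  # second copy from the other LAN: silently discard
        self.delivered.add((sender, seqno))
        return payload   # first arrival: deliver with zero recovery delay

def prp_send(lan_a: list, lan_b: list, sender: str, seqno: int, payload: bytes):
    # Point (b) above: every frame is duplicated over both independent links.
    lan_a.append((sender, seqno, payload))
    lan_b.append((sender, seqno, payload))

rx, lan_a, lan_b = PrpReceiver(), [], []
prp_send(lan_a, lan_b, "IED-1", 7, b"trip")
print(rx.on_frame(*lan_a.pop(0)))  # b'trip' -- delivered to the application
print(rx.on_frame(*lan_b.pop(0)))  # None   -- duplicate discarded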

3.6.4.7  Time Synchronization Protocols

Time synchronization over communications networks is mainly achieved through NTP/SNTP (Network Time Protocol/Simple Network Time Protocol) or IEEE 1588, also called PTP (Precision Time Protocol). While NTP was defined back in 1985, IEEE 1588 is more recent, first published in 2002 and revised in 2008. Beyond the technical differences between the two protocols, their main differentiator is the accuracy they can provide: SNTP can provide an accuracy of tens of milliseconds across a WAN, while PTP can provide submicrosecond accuracy on a LAN. SNTP is mainly intended to run over a WAN but can also be run on a LAN, in which case the accuracy can improve to a few hundred microseconds under ideal conditions. Conversely, PTP is restricted to a LAN and requires specific hardware to achieve high accuracy. From a smart grid point of view, SNTP is mainly used for control and monitoring applications, while PTP is mostly used for protection applications.

SNTP and PTP are based on a similar mechanism involving the exchange of messages between a reference time source and a device. The purpose of the message exchange is to transmit the value of the reference clock and then to evaluate the transmission delay. SNTP assumes a symmetric delay between the reference time source and the device, an assumption that does not hold in a WAN because of the switched nature of the network and the unpredictable delays introduced by switches and routers. PTP, in contrast, precisely evaluates the transmission delay by requiring the switches to report the residence time, that is, the time a message is held by the switch. Therefore, for high accuracy, PTP requires specific features implemented in the switches to support the residence time calculation.
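
The underlying two-way exchange can be written down compactly. With t1 the request send time, t2 the time the reference receives it, t3 the reply send time, and t4 the reply arrival time, the standard equations below recover the clock offset and path delay under the symmetric-delay assumption that PTP's residence-time reporting is designed to correct:

# t1: request send time (device clock), t2: arrival at the reference,
# t3: reply send time (reference clock), t4: arrival back at the device.
def clock_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # device clock error vs. reference
    round_trip = (t4 - t1) - (t3 - t2)      # wire delay, both directions
    return offset, round_trip

# Example: a device clock running 0.5 s slow with 10 ms of delay each way.
offset, rtt = clock_offset_and_delay(t1=100.000, t2=100.510,
                                     t3=100.530, t4=100.040)
print(offset, rtt)  # approximately 0.5 and 0.02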

3.6.5  Communications Challenges in the Smart Grid

3.6.5.1  Harnessing Technology Complexity

Modern operational applications in the smart grid environment and the corresponding communication access systems are propagating network intelligence to hundreds of substations spread across the grid. IP routers and Ethernet switches, VPN coding devices and firewalls, web servers, service multiplexers, and communication gateways require a great amount of parameter setting, which is generally expert oriented.

Furthermore, unlike serial communications links that did not operate with incorrect configurations, the present IP networks and Ethernet LANs generally “find a way to deliver information” even with incorrect parameters. Latent “setting errors” in the substation communications can cripple the communications network’s performance, availability, capacity, and security. Communication and network devices installed in the substation environment must have “substation user-oriented” interfaces converting substation parameters into telecommunications network technical parameters to allow error-free configurations and operation by staff with limited communications network expertise.

3.6.5.2  Legacy Integration, Migration, and Technology Life Cycle

Telecommunications is a fast-moving technology driven by an enormous mainstream market and competition. Power system technology, on the other hand, evolves orders of magnitude more slowly and is deployed to fully replace an older technology over many years. As an example, Ethernet and IP networking, which are being gradually introduced into the substation communications environment, have long been mature technologies in the mainstream commercial market, while serial RS-232 communications links, which are still currently used in many power systems, have virtually disappeared from the telecommunications world. The increasing introduction of electronic intelligence into the smart grid will therefore produce two extremely unequal life cycles inside the same system: circuit breakers and power transformers do not have the same “technology life cycle” as their associated protection, monitoring, control, and communications devices. It should be assumed that in the future, not only the power network but often every single substation may incorporate different generations of information and communications technology installed at different times. “Legacy integration” therefore becomes an important part of any power system communication plan or project.

Operating with some older generation components in the system is not a temporary transitional state but the permanent mode of operation of the power system communications network: by the time that the older generation equipment is dismantled, the “once new generation” equipment itself has become obsolete and “legacy.” The master plan for the operational communications network must include a preestablished migration strategy that stipulates not only how a new technology can be introduced into the network but ideally also how it can be removed from the network in a smooth manner without jeopardizing the whole power system. Excessive functional integration may present an attractive cost advantage at the time of deployment but may also be a major concern when one part of the integrated system needs to be replaced.

In general, the communications system refurbishment can be partial and performed by layer or service according to requirements (e.g., upgrading or replacing the transport core but not the substation multiplexing). The communications network architecture must be layered in order to allow such layered refurbishment and replacement of one technology without causing major network disturbances and service disruption. Similarly, all sites of the power network are not constructed, equipped, or refurbished at the same time and through the same project. This results in a multivendor and multirelease environment inside the same functional layer of the network. The power system communications network is therefore implicitly multivendor, multirelease, and multitechnology but still should operate as a single network.

3.6.5.3  Communications Service Planning and Evolution Trends

Most smart grid operational applications that constitute the basis for the “power network of the future” already use, to some extent, existing and field-proven communications solutions from today’s telecommunications industry. Therefore, future prospects in terms of power system telecommunications are more about estimating power system application requirements than about predicting telecommunications technology evolution.

When an operational telecommunication network is being planned for deployment or rehabilitation to enable smart grid applications, the following points require particular attention:

  1. Ethernet ubiquity and SONET/SDH bandwidth allocation: Ethernet is the dominant access interface for almost all smart grid operational applications, the standard local network technology, and the optimal transport technology in the operational environment of the electric utility, providing low connection cost, bandwidth flexibility, and a wide variety of topologies and transmission media (copper pair, fiber, wireless, etc.) [25]. Converters and coordination between many types of communications interfaces are gradually disappearing. However, legacy interfacing will remain a major issue for a long time to come. Terminal servers and interface conversion remain the solution to many legacy issues and allow the encapsulation of many non-Ethernet services in order to benefit from Ethernet flexibility and wire-saving properties. Moreover, since Ethernet transport is the underlying network for many time-sensitive operational applications, it is essential to provide reserved bandwidth allocation as well as the flexibility of virtual separation (VLANs). Ethernet over SONET/SDH is a particularly efficient manner of implementing time-controlled Ethernet connections. SONET/SDH over optical fiber is used to implement multiple independent Ethernet transport connections with individually allocated bandwidths, together with some small capacity dedicated to multiplexed circuits for protection relay communications and legacy applications.
  2. Multiple secure IP networks: The great majority of smart grid operational applications rely on the capability to connect network sites to control, monitor, or support platforms through an IP network. Many of these applications need segregated bandwidth to assure predictable network behavior, guaranteed time performance, or security. Although applications may be grouped according to their requirements into a number of networks, a large multiservice IP network with VPN separation and IP-based Quality of Service (QoS)* control may not fulfill the operational requirements. VLANs over separate SONET/SDH bandwidth or separate wavelengths are currently employed to implement separate IP networks and can be scaled up through a technology such as VLAN trunking prioritization with 802.1p, MPLS (Multiprotocol Label Switching), or DSCP (Differentiated Services Code Point) when the numbers get too large. These distinct IP networks can, for example, be allocated to EMS/SCADA, to asset monitoring, to site surveillance and facility management, and to support voice and data services (refer to Figure 3.131).
  3. Service separation through wavelength multiplexing: Another telecommunications technology which is increasingly used in utilities for separating multiple networks is wavelength-division multiplexing (WDM). WDM is becoming a secure and affordable way for separating traffic between the following:
    • Operational and corporate networks over the same fiber
    • SONET/SDH multiplexed network and MPLS/gigabit Ethernet networks
    • Protection relay communications and other communications

  4. Use of wireless technologies and procured telecommunications services: Distribution utilities often have a much larger number of sites to cover and very small traffic requirements associated with monitoring and control of each of these sites. Moreover, distribution relays do not always intercommunicate, and when they do, they have less severe time constraints. Distribution companies also have a larger mobile workforce on the operational side due to little or no staffed installations. These challenges often favor different wireless technologies for communications across distribution systems: private mobile radio (PMR) operated by the utility or wireless services procured from a service provider (GPRS, VSAT, WiMAX, etc.). When the use of a telecommunications service provider is considered, it is important to assess the capability of the operator to continue service provision at times of major disaster, for example, in the case of a prolonged power outage.

Figure 3.131   Communications network overlay architecture (Power System Telecommunications—A new landscape, M. Mesbah, Areva T&D Application Note, January 2009). (© Copyright 2012 Alstom Grid. All rights reserved.)

3.6.5.4  Cybersecurity for Wireless Networks

In 2008, the Federal Energy Regulatory Commission (FERC) approved eight new critical infrastructure protection (CIP) reliability standards designed to protect the nation’s bulk power system against potential disruptions from cybersecurity breaches. These standards were developed by NERC and provide a cybersecurity framework for the identification and protection of critical cyber assets. The eight cybersecurity standards address the following areas: critical cyber asset identification, security management controls, personnel and training, electronic security perimeters, physical security of critical cyber assets, systems security management, incident reporting and response planning, and recovery plans for critical cyber assets.

A key concept associated with these NERC requirements is the establishment of an electronic security perimeter to protect smart grid network elements. These perimeters allow utilities to define transport network paths for data delivery.

One way of implementing these electronic security perimeters in WWANs is through defined standards for security in mobility networks using defined access point names (APN). These standards enable utilities to transport data from the AMI to core information technology (IT) infrastructure using authorized and encrypted capabilities. The implementation of APNs in 3G networks provides linkage from the wireless network to the utilities’ core IT infrastructure using either frame relay circuits or MPLS connectivity and provides multiple levels of security, access controls, and encryption that many electric, natural gas, and water utilities find beneficial. For example, all data traffic from mobile devices using the radio access network is encrypted and subsequently tunneled from the core network serving nodes to core network gateways that provide connectivity to utility enterprise systems. Custom APNs segment traffic in a layer 2 VLAN at the gateway layers in the commercial carrier core network. The traffic then enters an MPLS virtual routing facility (VRF) using connectivity routers to maintain traffic separation to the customer’s enterprise system.

Utilities should be aggressive in identifying and correcting vulnerabilities and exposures associated with smart grid network elements. In the context of smart grid deployments, security must be (1) encompassing, (2) circulative, and (3) aggressive.

Encompassing security: In order to address security requirements of smart grid components, utilities must consider exposures and vulnerabilities within network domains and supporting smart grid infrastructures that include intranets (premise domains), Internet connectivity, fixed communications links, and wireless connectivity. An encompassing approach to identifying these cross-domain vulnerabilities is required because an exposure in any of these domains can (if not properly isolated) lead to exploitation of smart grid network elements residing in other domains.

Circulative security: Because of the potentially large number of network elements and components associated with the smart grid, utilities must deploy automated, centralized security event management and control systems with broad, encompassing situational awareness. This implies that utilities must implement integrated security management and incident reaction controls. These security controls would provide widespread alarms that flow to a central location where critical incident and crisis action teams can react and take proper action. Without this type of circulative situational awareness and centralized control, vulnerabilities can be exploited, and security events associated with smart grid network elements can occur without the appropriate response. Security of smart grid network elements must also be kept continuously current, meaning that operating systems, applications, browsers, network interfaces, access control lists, and other smart grid network elements must be constantly updated and protected accordingly. Thus, keeping network elements protected cannot be a one-time or periodic event for electric utilities but rather a continuous process in which smart grid network elements are updated and protected proactively before events occur.

Aggressive security: Utilities must also be very aggressive in the care and resources dedicated to the protection of smart grid network elements. Electric utilities must take the initiative in complying with and exceeding security standards by implementing knowledge-based and artificial intelligence systems that are able to detect network security events before they actually occur. By the time a smart grid security event occurs, the damage is already done, and it is too late; after the incident, utilities must manage the consequences rather than a security event. Therefore, it makes good policy and business sense for electric utilities to be proactive about security and invest in aggressive measures that will see trouble before it occurs.

3.6.5.4.1  Functional Domains

In order to plan and deploy wireless smart grid systems that are secure, it is helpful for utilities to have a functional view of the network and logically segment the smart grid WWAN elements into five domains. These domains are consistent with the 3GPP [26] standard body view of cellular networks and include the following: (1) smart grid mobility devices, (2) smart grid airlink interfaces, (3) carrier core networks, (4) connectivity to utilities’ core systems, and (5) utilities’ enterprise domains.

The first domain of the smart grid network view includes wireless end-point devices or mobile stations (such as electric vehicles) that provide services to smart grid consumers. In 3GPP terminology, mobile devices are termed user equipment or UEs. These mobile devices have a subscriber identity module or SIM that serves as the identity mechanism of the device. The SIM contains preprogrammed identification information such as the International Mobile Subscriber Identity (IMSI), the SIM key, authentication algorithms, and home short message service (SMS) numbers. The SIM can also store a list of subscriber names, numbers, and received short messages.

Devices that encompass this domain are the only components in the entire smart grid security architecture that users can directly tamper with, because the devices reside in users’ homes and/or under their control. If a device is not properly secured, users can access protected data, perform unsafe software downloads, disable local store encryption, turn off local authentication, remove or disable virus protection, and potentially retrieve sensitive user names and passwords. For these reasons, policies and cybersecurity measures by electric utilities to protect, monitor, and control smart grid devices and to enforce policy on them are critical. In this domain, it is worth mentioning the trend toward LTE and machine-to-machine (M2M) deployment, in particular the ongoing work on the M2M communication identity module (MCIM), which implements a “software SIM.”

The second domain of this functional view of smart grid networks is the airlink interface, which is composed of base transmitting stations in WWANs. In the GSM/HSPA environment, these base stations are called Node Bs (radios) and are attached to radio network controllers (RNCs). The combination of Node Bs and RNCs is referred to by 3GPP as the universal terrestrial radio access network or UTRAN. The UTRAN works in close coordination with the serving support nodes and mobile switching centers described earlier to provide the basic functions of smart grid data connectivity for consumers and ratepayers. The UTRAN maintains and manages the air interface protocols for smart grid devices attached to wireless networks as well as specific call-sustaining procedures (power control and handover management). The UTRAN interfaces with core serving nodes in order to authenticate users and provide airlink encryption.

The wireless industry term “airlink” refers to the radio transmission of voice or data from the wireless device to the network base station, and from there to the other network segments for authentication and transport. The airlink segment is a separate functional domain and is not usually included in the carrier segment because, when the user is roaming, the airlink segment is not under the home carrier’s control. For example, a mobile wireless device may carry a SIM that belongs to one wireless carrier but roam into a geographic area serviced by another carrier. These carriers may have roaming agreements, but security for the UTRAN belongs to each respective carrier.

Securing the air interface is a critical responsibility, whether for a utility operating its own WAN or for a commercial network carrier working together with its roaming partners. While utilities today operate at national levels, it should be noted that over 90% of the world’s commercial wireless carriers comply with 3GPP air interface security standards.

An important security mechanism that protects smart grid data transmissions is encryption, which can occur at the air interface layer and at the application layer. At the air interface layer, smart grid data are encrypted between the base stations (Node Bs) and the mobile smart grid wireless end-point device.

At a high level in 3G networks, following authentication and key agreement, the network and the smart grid wireless end-point device each derive a one-time (once-per-session) 128-bit encryption key. Once the key is derived, communication between the wireless end-point device and the 3G network is encrypted using an algorithm based on the KASUMI block cipher, which is stronger than the earlier proprietary 2G GSM algorithms. Authentication of wirelessly enabled smart grid devices in 3G networks is a two-step process in which devices such as electric meters first authenticate the network and then the commercial carrier serving the meter authenticates the user to the network.
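The per-session keying pattern described above can be sketched in a few lines of Python. Because the KASUMI-based f8 algorithm is not available in common Python libraries, AES in CTR mode (from the widely used cryptography package) stands in for it here, and HMAC-SHA-256 stands in for the actual 3GPP key-agreement functions; the sketch only illustrates the pattern of deriving a one-time 128-bit key after authentication and then encrypting the session with it.

# Sketch of the per-session pattern described above: after authentication
# and key agreement, both ends derive a one-time 128-bit session key and
# use it to encrypt the airlink session. AES-CTR stands in here for the
# KASUMI-based f8 algorithm, which common Python libraries do not provide.
import hashlib
import hmac
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_session_key(shared_secret: bytes, challenge: bytes) -> bytes:
    """Derive a one-time 128-bit key from the SIM secret and a network challenge."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()[:16]

sim_key = os.urandom(16)      # pre-shared secret held by the SIM and the network
challenge = os.urandom(16)    # fresh per-session challenge from the network

session_key = derive_session_key(sim_key, challenge)
nonce = os.urandom(16)        # per-session counter-mode nonce

cipher = Cipher(algorithms.AES(session_key), modes.CTR(nonce))
ciphertext = cipher.encryptor().update(b"meter 4711: interval read 13.4 kWh")
plaintext = cipher.decryptor().update(ciphertext)   # the peer reverses it
assert plaintext == b"meter 4711: interval read 13.4 kWh"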

The third domain of this functional view is the core mobility network or carrier segment. Once a smart grid device has authenticated to the WWAN, a data session or packet data protocol (PDP) session is established in the gateway GPRS support node (GGSN). The GGSN serves as the gateway between the commercial wireless carrier and the utility’s routable core enterprise network. All packets passing between the UTRAN air interface and the core network interface exit through these GGSN gateways. The GGSN provides IP services to the utility enterprise information system network via a number of connectivity options including Internet connections, secure network-to-network VPN connections, dedicated frame relay circuits, or MPLS cloud connectivity. Commercial carriers supporting smart grid networks are responsible for securing customers’ confidential data as they move through or are stored in the carrier’s network. This includes logged and archived data and all customer personal data such as billing information.

The connectivity segment of this model includes network elements and network circuits that link the commercial carrier core network and the utility enterprise network which resides outside of the carrier’s control. As mentioned earlier, common connectivity segments include the Internet, either with or without VPN, frame relay circuits, and links through MPLS clouds.

The enterprise segment includes the utilities’ back-office and core IT systems located inside the enterprise. Utilities are responsible for perimeter defense and other security systems within this segment.

3.6.5.4.2  Application Domains

While breaking down smart grid wireless communications into five functional domains is helpful for understanding a functional view of the network, a different type of analysis is required in order to understand the network application architecture of smart grid network elements. To facilitate this analysis, an eight-layer architectural model, originally created by Todd Allen for AT&T’s Wireless Reference Architecture and later refined for the utility industry by Art Maria of AT&T, can be applied. The Allen/Maria model addresses the architectural elements that exist between the smart grid users and devices on one side of the communications link and the utility’s enterprise applications the user wishes to access on the other side.

The first layer of the Allen/Maria model is the user layer. Because wireless end-point users are the weakest link in this model, utilities must implement policies that enforce the security of wireless end-point devices. Without enforcement, security becomes merely a set of guidelines rather than rules. In most cases, wireless telemetry end-point devices such as smart meters should never allow end-user access. In other cases, where devices such as smart meters provide gateway access into the home, end-user policies must be carefully crafted and implemented in order to enforce strict security standards.

The second layer of this architectural model is the device layer. If strong security controls are not implemented, devices such as electric meters, thermostats, and other demand response units represent the greatest security risk in a wireless smart grid application. If a device is not properly secured, it can provide access to data stored on the device or to data in the utility’s enterprise systems that the device is connected to. Therefore, utilities must evaluate what type of security software must reside in the wireless end-point device. Depending on the end-point device capability, additional layers of user authentication, encryption, and device management must be implemented. These security controls are in addition to those provided by wireless carriers.

Taking human factors into account is an important part of the successful deployment and adoption of a wireless security architecture. Security policies can either drive or impede adoption, depending on the circumstance. The more intrusive a security policy is on users, the more they will attempt to circumvent it, which makes policy enforcement even more important.

The third layer of the architectural model is the network layer. Authentication and encryption elements of this layer enable utilities to establish electronic security perimeters.

The fourth layer of the architectural model is the presentation and interface service layer, which provides management of smart grid network elements. For example, wireless meters are connected through the wireless network to one or multiple gateways. These gateways are considered head-end systems (HESs) and manage individual devices. Head-end systems interface with meter data management systems (MDMSs), which can manage multiple head-end systems and provide overall system management, control, and data collection of the AMI for the utility. HESs and MDMSs form a significant portion of the presentation and interface service layer in the wireless smart grid architecture, and both must provide additional layers of authentication, application controls, and encryption to reduce the risk of security exposure.
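A minimal Python sketch of the HES/MDMS relationship described above may help: each head-end system manages its own population of meters, while the MDMS aggregates several head-ends behind one collection interface. All class, method, and meter names are hypothetical.

# Minimal sketch of the fourth-layer relationship described above: each
# head-end system (HES) manages its own population of meters, while the
# MDMS aggregates several head-ends and offers one collection interface
# to the rest of the utility. All names are hypothetical.
from typing import Dict, List

class HeadEndSystem:
    def __init__(self, name: str):
        self.name = name
        self.meters: Dict[str, float] = {}   # meter id -> last reading (kWh)

    def register_meter(self, meter_id: str) -> None:
        self.meters[meter_id] = 0.0

    def collect(self) -> Dict[str, float]:
        """Poll the meters this HES manages (stubbed as stored readings)."""
        return dict(self.meters)

class MeterDataManagementSystem:
    def __init__(self):
        self.head_ends: List[HeadEndSystem] = []

    def attach(self, hes: HeadEndSystem) -> None:
        self.head_ends.append(hes)

    def collect_all(self) -> Dict[str, float]:
        """Aggregate readings across every attached head-end system."""
        readings: Dict[str, float] = {}
        for hes in self.head_ends:
            readings.update(hes.collect())
        return readings

mdms = MeterDataManagementSystem()
north = HeadEndSystem("north-feeder")
north.register_meter("M-1001")
mdms.attach(north)
print(mdms.collect_all())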

The fifth layer of the architectural model is the business service layer that provides management control services for smart grid network elements connected to the utility. Elements of this layer most likely reside in network control centers and provide logical interfaces between the fourth layer HES and MDMS and the utility’s back-office support systems.

The final three layers of the architectural system can be tightly coupled and form the infrastructure of the utility’s back-office and SCADA control systems. They include the application service layer, the data service layer, and the data source layer.

The application service layer includes systems such as the AMI servers and other applications that interface with meter data management systems. This layer also includes application service hosts such as those supporting outage management systems and customer billing. Application servers can be web-based servers, which use standard web service interfaces, or they can rely on middleware servers. They communicate with the data service layer via application programming interfaces (APIs) such as .NET, HTTP, MV90, SOAP, and ODBC. The data service layer provides a repository of data where customer information is stored.

As the electric utility addresses the pressures of creating and maintaining a secure environment, it will need to transition from a relatively closed system to one where millions of end points have the potential to send information to or require information from the electricity provider. This heightens the need for assuring security and privacy. The risks come from a variety of sources and motivations, and the result is disrupted business processes and higher costs. The defense is not to remain frozen in the twentieth century; rather the solution is to apply best practices of privacy and cybersecurity.

3.6.5.5  Management and Organization Challenges

Providing communications services to the whole spectrum of new smart grid operational applications in the power utility represents a change of scale in terms of management and organization. The requirements are indeed very different depending on the mode of service provisioning.

In a procured service mode, this represents a much larger scope of contract and therefore new grounds for negotiation with the provider, but also the opportunity to redefine SLAs* regarding the availability and continuity of communications services. It may also require new ways to measure the quality of the delivered service and the assurance that the contracted SLA is met.

In a utility-operated dedicated telecommunications network environment, a significant increase in the number of communications services requires the reorganization of the telecommunications delivery structures. Where service management was previously nothing more than a few phone calls among the telecommunications O&M team, the SCADA supervisor, and substation staff, a sharp increase in the number of concerned parties may require a fundamentally different “service user/service provider” management model in which the tasks of service management are explicit and formal.

The first step toward this change of scale is the formal definition of a two-level architecture separating core communications services from different application networks using core communications resources. The management of the core network infrastructure then becomes the responsibility of the “core service provider” with SLA obligations toward each power system application network. The core service provider notifies service users of the availability and performance of the communications services through “service dashboards” constituting the basis for service “situation awareness.”
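As a simple illustration of the kind of measurement that could feed such a service dashboard, the following Python sketch computes the availability of a communications service over a reporting period from recorded outage intervals and compares it against a contracted SLA target. The outage records and the SLA figure are invented for illustration.

# Sketch of a "service dashboard" calculation: availability of a
# communications service over a reporting period, computed from outage
# records, which the core service provider could publish to each
# application network. Data and thresholds are illustrative.
from datetime import datetime, timedelta

outages = [  # (start, end) of recorded service interruptions
    (datetime(2012, 3, 2, 4, 10), datetime(2012, 3, 2, 4, 42)),
    (datetime(2012, 3, 19, 22, 5), datetime(2012, 3, 20, 0, 35)),
]

period_start = datetime(2012, 3, 1)
period_end = datetime(2012, 4, 1)
period = period_end - period_start

downtime = sum((end - start for start, end in outages), timedelta())
availability = 100.0 * (1 - downtime / period)

sla_target = 99.95   # contracted availability, percent (illustrative)
print(f"availability {availability:.3f}% "
      f"({'meets' if availability >= sla_target else 'violates'} SLA {sla_target}%)")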

3.6.6  Communications in the Smart Grid: An Integrated Roadmap

In the myriad of changes that face the domain of communications for smart grid, it is difficult to know where to start. Utilities have to manage short-term requirements (deploying AMI mandated by national regulations, adapting to the increasing number of small renewable power generators) and long-term requirements (anticipating future communications needs, providing security along the entire value chain). Utilities are often enticed to make heavy investments in AMI without a clear idea of the long-term benefits. They may be pushed along the tracks of mainstream telecommunications technologies without always understanding what is at stake. In order to build a reliable, resilient, and secure communications infrastructure, utilities need to embrace a holistic approach, starting with the definition of their smart grid roadmap.

Communication changes impact all stages of the power delivery chain, and different solutions are already deployed or under development to fulfill new requirements. Communications requirements differ from utility to utility depending on a great number of factors, including

  • Density of communication end points—The geographical area to cover and the distances between the sites greatly differ depending upon the segment of the power delivery system (generation, transmission, distribution), the geographical spread and scale of the country (e.g., European versus American scale), and population distribution (e.g., urban, semi-urban, rural).
  • Topology of the power network—Where a distribution network is extensive and serves a large number of customers, such as in a densely populated urban area, a two-tier communications system, with LV PLC to a concentrator at an MV/LV transformer and a backhaul system from that point to the central platform, may be an appropriate communications solution. On the other hand, for a distribution network out in a rural area serving only a few customers over a large geographical region, a one-tier communications architecture using wireless solutions may be more attractive.
  • The condition of the power network assets—Depending on investments made over the years, the need for condition-based asset monitoring may be more crucial in certain networks than in others, creating the conditions for a strong requirement on communications to the sites where these assets are located.
  • The amount of distributed generation currently in the power network and planned for the foreseeable future—In many European countries, current legislation has greatly encouraged the deployment of small renewable generators (solar panels, individual wind generators, small biomass plants). Bidirectional communication to these sites is becoming essential not only for metering purposes but also for the safe and secure operation of the power system.
  • Population and industry growth and resulting power network extension plans—Residential and industrial complexes built from the ground up in many parts of the world are today based upon the principles of energy autonomy (e.g., microgrids) with centralized intelligent energy exchange with the outside world and therefore heavily rely upon communications.
  • Regulatory context and legislation—Enhanced human safety, critical infrastructure site security and surveillance, liability on nondelivery of power, disaster readiness with auditable recovery schemes and tools, as well as environmental hazard monitoring are some of the hot legislation topics in many countries which necessitate increased “smartness” of the network and therefore more extensive communications.

For numerous utilities, smart grid is currently narrowed down to smart metering as the local regulators are often pushing to deploy AMI. However, AMI is only one out of several utility communications domains in which intelligence is to be shared. Table 3.11 summarizes major smart grid applications and corresponding data communications requirements and potential communications technologies.

Wireless solutions are becoming more and more widespread. One of the major advantages of wireless communications is that, in many cases, solutions can be deployed more easily and at a lower cost than wired solutions. For instance, it would be very costly to install optical fiber to every existing secondary substation of a distribution grid. From a technical point of view, new technologies such as LTE also allow communications systems to meet higher-bandwidth and lower-latency requirements that could not be satisfied before.

The communications network in smart grid must enable a set of functionalities across the power system to facilitate interaction across the grid and with customers [27]. First, it must make use of advanced sensors that are integrated with real-time communications to enable modeling and simulation computations. These functions must be provided in visual forms to enable system operations and administration. Second, the smart grid communications system must reinforce the transmission and distribution systems in ways that enhance data transfer, control center operation, and protection schemes. Third, communications functionality must facilitate the relief of congestion on the grid, from generators to customer premises, to enable increased power flow, enhanced voltage support, and greater reliability. Connectivity to customers to enable value-added services is also expected, as the consumer is the final user of the smart grid. This must take the concept of wholesale market settlement—attribution and accounting of power transactions—down to the retail level. Bridging wholesale and retail transactions to ensure dollar settlements is critical for the smart grid.

Table 3.11   Communications Requirements for Smart Grid Applications

Smart Grid Application | High-Level Communications Requirements | Candidate Communications Technologies
Home area network | Connect meters and home appliances for demand response and energy efficiency applications | Broadband PLC, Wi-Fi, WSN mesh, WPAN ZigBee
AMI (last mile) | Connect customer meter to a concentrator forwarding data to a customer relationship management platform and distribution management system (outage management) | NPLC, GPRS (low requirements), broadband PLC, fiber to the home (FTTH) for backhaul
Customer premises access (not via meter) | Home gateway or energy box for energy services (connect to server) | Public Internet
Distribution network automation | Connect MV switches and control platforms (SCADA) | Wireless, satellite, fiber, license-free spread spectrum radio
Microgrid management | New specific architecture for local microgrid (LV or MV) management and DR/DG local optimization | Wireless, PLC
Distribution network asset monitoring | IP connectivity at MV/MV and MV/LV devices (substation) | Broadband PLC, wireless
Dispersed generation (solar, wind, biomass, etc.) | Treated as a SCADA RTU or via meter if residential | Broadband PLC, wireless, satellite, license-free spread spectrum radio
Substation automation | Substation digitalization | Real-time process bus, IEC 61850
Large renewable plants (offshore wind farms) | Voice, video, control, monitoring, SCADA | Fiber, microwave, UHF, etc.
Transmission asset monitoring | IP and web service in the HV substation | Robust Ethernet and IP router with security architecture
Transmission network wide area monitoring | IP connectivity from substation to PDC | Robust Ethernet and IP router, specific communications architecture
Transmission network automation (wide area protection and control) | Time-predictable wide area Ethernet | Ethernet over SONET/SDH and time control
HVDC communications | Long-distance, low-capacity communications | Long-range optical links

Source: © Copyright 2012 Alstom Grid. All rights reserved.

PLC, power line carrier; WSN, wireless sensor network; WPAN, wireless personal area network (IEEE 802.15); NPLC, narrowband power line carrier.

3.7  Monitoring and Diagnostics

Mike Ennis and Mirrasoul J. Mousavi

Monitoring and diagnostics in smart grids require three fundamental elements in the broadest sense: data, intelligent algorithms, and communications. Data are provided by sensors and sensor systems including intelligent electronic devices (IEDs) and switch controllers. Intelligence is provided by digital processors, which are instructed to perform certain operations on sensor data based on specific algorithms. Communications are required to deliver the derived monitoring and diagnostics intelligence to the right person/device, in the right format, and at the right time. These three elements are also the building blocks of current control and monitoring systems, but, in the era of smart grids, a dramatic boost is needed in functionality, performance, and coverage across the power delivery chain down to the last mile, including end customers. More importantly, we can expect an infusion of intelligence into every system and device, coupled with an integral means of communications, ranging from a local HMI to a broadband IP or fiber optic network.

In recent years, considerable progress has been made in measurement and instrumentation due largely to the progress in integrated circuit technology, the availability of low-cost analog and digital components, and efficient microprocessors [1]. Consequently, the performance, efficiency, and cost of sensors and sensor systems have seen much improvement. The emergence of local and international standards coupled with advancements in communications technology paves the road for more advancements (e.g., wireless sensor networks) and application areas such as monitoring and diagnostics for smart grids.

Smart sensors are composed of many processing components integrated with the sensor on the same chip. These sensors have intelligence of some form and provide value-added functions beyond passing raw signals, leveraging communications technology for telemetry and remote operation/reporting. Increasingly, local devices will have to report information rather than data, since otherwise data bottlenecks will ensue, compromising the ability of transmission and distribution grids to deliver value to their operators. Automated, reliable, online, and off-line analysis systems are needed in conjunction with sensors/sensor systems supporting smart grid monitoring and diagnostics applications.

3.7.1  Architectures

Smart sensor technologies enable condition monitoring and diagnosis of key substation and line equipment including transformers, cables, breakers, relays, capacitors, switches, and bushings. These sensors use digital data for monitoring and diagnostic purposes that, depending upon the type of the asset and monitoring requirements, may include conventional and nonconventional voltage and current measurements and temperature readings. A fault passage indicator on a distribution line is an example of a smart sensor that senses the overcurrent condition and communicates the passage of the fault current to a local or remote human or machine operator (see Figure 3.132).
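The fault passage indicator mentioned above can be sketched in a few lines of Python: the device watches current samples, latches when an overcurrent threshold is exceeded, and reports the passage of fault current. The pickup threshold and the reporting stub are illustrative assumptions.

# Minimal sketch of the fault passage indicator logic described above:
# watch current samples, latch when an overcurrent threshold is exceeded,
# and report the passage of fault current to a local or remote operator.
# The threshold and the reporting stub are illustrative.
PICKUP_AMPS = 600.0   # overcurrent pickup, chosen above normal load current

class FaultPassageIndicator:
    def __init__(self, pickup: float = PICKUP_AMPS):
        self.pickup = pickup
        self.tripped = False

    def sample(self, amps: float) -> None:
        if not self.tripped and amps >= self.pickup:
            self.tripped = True
            self.report(amps)

    def report(self, amps: float) -> None:
        # In a real device this would drive a local flag/LED and a
        # remote communications channel; here it just prints.
        print(f"fault current passage detected: {amps:.0f} A")

    def reset(self) -> None:   # cleared by crew or on supply restoration
        self.tripped = False

fpi = FaultPassageIndicator()
for reading in (118.0, 121.0, 2350.0, 40.0):   # fault at the third sample
    fpi.sample(reading)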

These sensors, empowered by a central processing unit, offer functionalities beyond conventional sensors through embedded intelligence that processes raw data into actionable information capable of triggering corrective or predictive actions. It is this combination of sensing, intelligence, and the communication of information, rather than mere data, that earns them the description “smart.” They may perform a number of functions based on their level of sophistication [1]. These functions, depicted in Figure 3.133, may include


Figure 3.132   Example sensor system with embedded intelligence and communications. (© Copyright 2012 GridSense, Inc. All rights reserved.)


Figure 3.133   Block diagram of a smart sensor.

  1. Basic sensing of a physical measure
  2. Digitization and storage
  3. Raw data processing and analysis by the central processing unit
  4. Local and remote communications
  5. Local and remote HMI

These sensors may be stand-alone devices or integrated into multifunctional IEDs. To deliver the best value, these sensor systems may be deployed in three tiers depending upon the available architecture and application requirements.
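A minimal Python sketch can tie the five functions listed above together in one device abstraction; every name and value in it is hypothetical.

# Sketch of a smart sensor organized around the five functions listed
# above: sense, digitize and store, process locally, communicate, and
# expose an HMI. All names are hypothetical.
from collections import deque
from statistics import mean

class SmartSensor:
    def __init__(self, window: int = 32):
        self.samples = deque(maxlen=window)     # (2) digitization and storage

    def acquire(self, raw_value: float) -> None:
        self.samples.append(raw_value)          # (1) basic sensing of a measure

    def analyze(self) -> dict:                  # (3) raw data processing
        return {"mean": mean(self.samples), "peak": max(self.samples)}

    def transmit(self, report: dict) -> None:   # (4) local/remote communications
        print("uplink:", report)                # stand-in for a radio or fieldbus

    def display(self, report: dict) -> None:    # (5) local/remote HMI
        print(f"HMI: mean={report['mean']:.1f}  peak={report['peak']:.1f}")

sensor = SmartSensor()
for v in (10.2, 10.4, 17.9, 10.1):
    sensor.acquire(v)
summary = sensor.analyze()
sensor.transmit(summary)
sensor.display(summary)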

3.7.1.1  Tier 1: Local Level

All smart sensor functions including sensing and analysis are local to the asset they are monitoring. The sensor is a stand-alone device with embedded intelligence for local data processing and local/remote communications (Figure 3.134). A visual fault indicator on a terminal pole is an example of a sensor product in this tier. A transformer monitor operating on its own inside a substation is another example.

Information and data from these sensors may be loosely or tightly integrated into feeder or substation automation systems. When fully integrated, these sensors make up an integral part of the automation solution. By far, this is the most common architecture for smart sensors, and fault indicators are the most common sensor in use today. The future trend is a higher level of integration of these sensors with operations and automation systems.

3.7.1.2  Tier 2: Station/Feeder Level

Monitoring and diagnostics at this level involve smart sensors that are in fact distributed systems with remote access to sensor measurements outside the substation environment. In these systems, sensor functions are distributed among system components that may physically be located apart. A common architecture involves sensing and measurements that are polled into a computing environment (e.g., station computer) for analysis and interpretation as shown in Figure 3.135.


Figure 3.134   Tier 1 monitoring and diagnostics.


Figure 3.135   Tier 2 monitoring and diagnostics—hierarchical topology.

Since smart grids will contain both hierarchical and distributed sensors, the topology of Figure 3.136 is also likely. The substation computer in Figure 3.136 might take on either a supervisory or gateway role, but it is equally plausible that such a mesh topology could involve third-party equipment operated by, for example, an energy service provider. In this type of mesh topology, communications occur between peers based on system needs rather than on polling cycles from a master controller.

Feeder monitoring through peer-to-peer communications or via a substation computer is an example of this tier solution. Often these functionalities are integrated into protection and control IEDs and systems forming multifunctional devices and systems.
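The hierarchical variant of this tier, in which a station computer polls distributed measurement points on a cycle and analyzes the pooled results, might look like the following Python sketch. The read_point() stub stands in for a real protocol driver (DNP3, Modbus, an IEC 61850 client, etc.), and the point addresses and polling period are illustrative.

# Sketch of the hierarchical Tier 2 pattern: a station computer polls
# distributed measurement points on a fixed cycle and analyzes the pooled
# results centrally. read_point() is a stub for a real protocol driver;
# all addresses and values are illustrative.
import random
import time

FEEDER_POINTS = ["FDR-1/CT1", "FDR-1/CT2", "FDR-2/CT1"]

def read_point(address: str) -> float:
    """Stub for a protocol read of one remote sensing point."""
    return random.uniform(80.0, 120.0)          # amps, simulated

def poll_cycle() -> dict:
    return {addr: read_point(addr) for addr in FEEDER_POINTS}

def analyze(snapshot: dict) -> None:
    worst = max(snapshot, key=snapshot.get)
    print(f"highest loading: {worst} at {snapshot[worst]:.1f} A")

for _ in range(3):          # three polling cycles
    analyze(poll_cycle())
    time.sleep(1.0)         # polling period (illustrative)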

3.7.1.3  Tier 3: Centralized Control Room Level

System-wide monitoring and diagnostics applications require an architecture at the control room level where information and/or data from field sensors are pooled into a central repository to support real-time and back-office applications, as presented illustratively in Figure 3.137. An integrated substation monitoring application combined with enterprise network security systems, including both cyber and physical security, is based on such an architecture.


Figure 3.136   Tier 2 monitoring and diagnostics—meshed topology.


Figure 3.137   Tier 3 monitoring and diagnostics. (© Copyright 2012 ABB. All rights reserved.)

For enterprise T&D asset management applications, data repositories are needed for data update, retrieval, and reporting via a stand-alone application with a server inside the client firewall, with interfaces to external test data systems (e.g., Doble, PowerDB, Hydran), mobile computers, and handhelds.

This architecture empowers operations and maintenance departments with decision support based on real-time and historical data. Such decision support systems provide an asset condition diagnostics function utilizing pattern recognition and intelligent algorithms, enabling the asset data management system to perform reliability-based as well as predictive maintenance. Such an architecture should be designed for seamless integration with other enterprise systems such as work management, inventory, financial, and regulatory compliance systems.

3.7.2  Wireless Sensor Networks

The majority of T&D assets are located inside transmission or distribution substations, but other system components are outside or extended from the substations. Regardless of the location, the sensor or sensor system installed on these assets must be able to communicate in order to send and receive signals and enable remote diagnostics.

The communications and interfacing technology deployed depends upon the application and requirements of the specific sensor or sensor system. Most systems are equipped with LCD screens and RS232, RS485, and Ethernet interfaces [1]; wireless communications are also being developed and utilized in some applications, with continued research and development to improve performance. Bundles of lead wires and fiber optic cables are common in most hard-wired sensors, but the trend is changing. Significant installation costs, long-term maintenance costs, and the limited number of deployable sensors are impediments to the widespread use of wired sensors, but wireless sensor networks are now eliminating these constraints and offering attractive sensor solutions.

A wireless sensor network is typically composed of a number of sensors that are linked to each other through a base station or gateway or through peer-to-peer connections forming a star or mesh network. The data are collected at each sensor node, possibly preprocessed, and forwarded to the base station directly or through other nodes in the network. The collected data are then communicated to the system via the gateway connection. Recent advancements in wireless sensor networks offer a single sensor package integrating sensors, radio communications, and digital electronics into an integrated circuit. This compact design results in substantial cost reduction and enables low-cost sensors to communicate with each other using low-power wireless data routing protocols [2].
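The collect, preprocess, and forward behavior just described can be sketched as follows in Python; the topology (a leaf node reaching the gateway through one relay) and the averaging step are illustrative choices.

# Sketch of the node behavior described above: collect samples locally,
# preprocess (here, average) them, then forward a compact record toward
# the base station, either directly or through a neighboring node.
# Topology and field names are illustrative.
from statistics import mean

class Node:
    def __init__(self, node_id: str, parent=None):
        self.node_id = node_id
        self.parent = parent          # next hop toward the base station
        self.buffer = []

    def sample(self, value: float) -> None:
        self.buffer.append(value)

    def flush(self) -> None:
        record = {"node": self.node_id, "avg": mean(self.buffer)}
        self.buffer.clear()
        self.forward(record)

    def forward(self, record: dict) -> None:
        if self.parent is None:       # this node is the base station/gateway
            print("gateway received:", record)
        else:
            self.parent.forward(record)

gateway = Node("base")
relay = Node("n1", parent=gateway)
leaf = Node("n2", parent=relay)       # leaf reaches the gateway via n1
for v in (21.5, 21.7, 22.1):
    leaf.sample(v)
leaf.flush()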

The radio link in a wireless network is the largest power-consuming component; it can be characterized in terms of the operating frequency, modulation scheme, and hardware interface to the system. There are many low-power proprietary radio chips on the market, but the use of a standards-based radio interface enables interoperability among networks from different vendors.

The existing radio standards include IEEE 802.11x (LAN), IEEE 802.15.1 and 2 (Bluetooth), IEEE 802.15.4 (ZigBee), and IEEE 1451. Public carrier telecom networks are also now beginning to open up and become viable for middle-mile communications.

For short-range wireless sensing applications, IEEE 802.15.4 has a number of features that can be used as a benchmark for other wireless solutions. The IEEE 802.15.4 standard specifies data rates of 20, 40, and 250 kbps for the 868 MHz, 902 MHz, and 2.4 GHz transmission frequency bands, respectively. The 2.4 GHz band, being essentially license-free worldwide, is the most appealing. By accommodating higher data rates, it reduces the transmission time and consequently lowers the power consumption of the radio. This provides for a long-term and potentially maintenance-free network for monitoring applications in many areas, including smart grids.
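The claim that higher data rates lower the radio's power consumption follows from simple arithmetic: for a fixed payload, on-air time is payload size divided by data rate, and transmit energy scales with that on-time. The short Python sketch below works this through for the three 802.15.4 rates; the payload size and transmit power figure are illustrative.

# Back-of-envelope arithmetic behind the claim above: a higher data rate
# shortens the radio's on-time for a fixed payload, which cuts the energy
# spent per transmission. The transmit power figure is illustrative.
PAYLOAD_BITS = 1024          # one small sensor report
TX_POWER_W = 0.030           # radio transmit power while on (illustrative)

for band, rate_bps in (("868 MHz", 20_000), ("902 MHz", 40_000), ("2.4 GHz", 250_000)):
    on_time = PAYLOAD_BITS / rate_bps                  # seconds the radio is on
    energy_mj = TX_POWER_W * on_time * 1000            # millijoules per report
    print(f"{band}: {on_time*1000:.1f} ms on-air, {energy_mj:.2f} mJ per report")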

A number of companies are working together to develop reliable, cost-effective, low-power, wirelessly networked products. The ZigBee Alliance, for example, promotes the use of wireless networks for home/building monitoring and control applications using an open global standard (IEEE 802.15.4). As smart grid initiatives are rolled out, wireless sensor networks will be an integral and vital part of many application areas related to grid monitoring and diagnostics, including the consumer space. In this rapidly evolving area, however, new solutions must always be borne in mind; DASH7 (ISO 18000-7), Wibree (Bluetooth low energy), and UWB PHY applications of ZigBee have the potential to open up new niches within low-power, short-range wireless transmission.

3.7.3  Diagnostics

Transmission and distribution asset monitoring and diagnostics applications extensively utilize sensors and sensor systems for various functionalities ranging from basic alarming to online and nondestructive condition assessment. Transformers, load tap changers, regulators, circuit breakers, reclosers, HV and MV vacuum/SF6 switchgear, underground cables, overhead lines, switched capacitors, reactors, surge arresters, insulators, shunt devices, batteries, battery chargers, and power electronics interfaces are the major power system assets that may be equipped with some kind of sensor or sensor systems for continuous monitoring, diagnostics, and real-time asset management. The total cost of the power equipment and its failure risk usually determine the need, complexity, and features of the installed monitoring systems. Some components of the power system may not have a monitoring system installed, but every piece of equipment participating in smart grid communications should be equipped with sensors for measuring, monitoring, and/or control applications.

Figure 3.138 shows some of the most common sensors and application areas. The sensors and sensor systems supporting monitoring and diagnostics applications range from conventional CTs and VTs to state-of-the-art optical and acoustic sensors. These sensors are used to measure and sense physical attributes such as electric current and voltage, temperature, gas-in-oil, moisture-in-oil, acoustic wave, vibration, pressure, weather parameters, UHF and RF waveforms, water, thermal profile, motion, proximity, x-rays, displacement, and erosion. Constant improvements in performance and cost are expected to continue and accelerate full-scale deployment of various sensors to enable smart grids.


Figure 3.138   Example sensors and application areas for smart grids. (© Copyright 2012 ABB. All rights reserved.)

These sensors support the following applications in particular:

  1. Cable diagnostics and prognostics
  2. Water penetration monitoring of high-voltage cables
  3. PD detection, localization, and monitoring of power transformers and cables
  4. Power and distribution transformer monitoring
  5. Transmission grid monitoring
    • Real-time monitoring of conductor temperature
    • Asset health monitoring
    • Voltage instability monitoring of transmission corridor

In addition to monitoring and diagnostics areas, the data collected from these sensors may be utilized to support other system functions such as VAr management, design improvements, real-time control applications, dynamic loading of transformers, triggering advanced diagnostics, and off-line applications.
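To make one of these uses concrete, the following Python sketch shows, purely as an illustration, how continuously sensed top-oil and ambient temperatures could feed a dynamic loading limit for a transformer. The linear derating rule and every number in it are invented for illustration and do not represent any loading standard.

# Illustrative sketch only: one way sensor data (here, top-oil temperature)
# could support dynamic loading of a transformer. The linear derating rule
# and all numbers are invented for illustration and are not a loading standard.
RATED_MVA = 25.0
TOP_OIL_LIMIT_C = 105.0      # alarm threshold (illustrative)

def dynamic_limit_mva(top_oil_c: float, ambient_c: float) -> float:
    """Allow extra load in cool conditions; derate as top-oil temperature rises."""
    headroom = max(0.0, TOP_OIL_LIMIT_C - top_oil_c) / 100.0
    cool_bonus = max(0.0, 20.0 - ambient_c) * 0.005
    return RATED_MVA * min(1.2, 0.8 + headroom + cool_bonus)

print(f"limit now: {dynamic_limit_mva(top_oil_c=62.0, ambient_c=5.0):.1f} MVA")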

In the era of smart grids, ubiquitous sensors and measurement points will enhance situational awareness and monitoring of system components down to the last mile. This will in turn mean more data and increased processing needs. Today’s utility environment is already overwhelmed by the data collected by existing systems. The addition of new data points will simply exacerbate the situation unless the data-to-information conversion process is considered at each step of the way, giving rise to more automation and the emergence of proactive system health management and auto-notification systems [3].

The ability to proactively address T&D system problems and respond as quickly as possible to outages and asset failures, along with the movement toward predictive maintenance, will be a significant contributor to fulfilling the promise of smart grids. Typically, maintenance schedules for assets are set on a preprogrammed basis without specific intelligence about asset condition and health. In a self-healing smart grid, with automated analysis of sensor data and predictive maintenance technologies, the operations and maintenance departments will have the ability to respond more quickly to outages, send the right restoration/repair crew, assess the risks, and proactively address system problems. The planning department will in turn have access to better information for upgrades and long-range reliability enhancement projects.

The intelligent machine algorithms used for monitoring and diagnostics in smart grids may reside at different levels in the supervision and control hierarchy. Protection and control IEDs can host such algorithms to detect anomalies in power system behavior and identify an abnormal situation (such as an emerging fault). They can take appropriate action (such as tripping the appropriate circuit breakers) based on this local analysis or simply forward the fault/event information to a higher entity in the hierarchy, which can be a substation computer or a full-fledged health management system (see Figure 3.139). This minimizes the amount of data that must be retrieved by the substation or control center computers for analysis and decision making, reducing the communications bandwidth required.

The substation computer may host the intelligent algorithms; this can provide system monitoring capability at Tier 2 that is unavailable with conventional sensor systems lacking remote communications. Along the same lines, the intelligence can be hosted at the control center level. Each deployment option comes with its own benefits and limitations, which need to be evaluated carefully to achieve the optimal net benefit from the chosen architecture. These solutions can be tailored to the application based on complexity, scalability, cost, communications options, and customer preference requirements.


Figure 3.139   Monitoring and diagnostics data flow [3]. (© Copyright 2012 ABB. All rights reserved.)

3.7.4  Future Trends

Many attributes of smart grids for monitoring and diagnostics require sensors and sensor systems that are reliable, scalable, and integrated into automation systems. Although there has been significant progress in recent decades on sensors and sensor systems in general, there is room for continued improvement for smart grid applications. These improvement areas include reducing cost per node, managing power requirements, expanding communications capabilities, reducing footprint, and improving retrofitability, ease of installation/configuration/calibration, accuracy, scalability, reliability, interoperability, and security. Future trends will involve efforts to make sensor systems cost-effective, accurate, scalable, fault tolerant, interoperable, secure, self-powered, remotely available, and maintenance free, all as an integral part of the utility automation infrastructure.

3.8  Geospatial Technologies

Stephen Byrum and Paul Wilson

3.8.1  Technology Roadmap

The business of an electric utility is inherently spatial in nature. Managing power flows over a large geographic area requires detailed information about the vast network of wires and equipment that composes the grid—and much of that information is spatial.

Since the genesis of widespread electricity distribution in the 1880s, there has always been a need for geospatial information to help manage the grid. The electrical grid is inherently spatial, rooted in the geography of the utility’s service territory. It is a complex network of wires, supported by devices that control the flow of electrons through those wires. To build and manage that network, the utility has to know the location of all those components and how they are connected. Managing this locational and topological data, and providing users with methods to view and use it, requires technology that is designed to handle large amounts of geographic data.

A great deal of utility work has a high level of “where” content, reflecting the spatial nature of the grid. For any operations function, much of the day-to-day work requires access to location-based facilities data. Where are my facilities? Where are my customers? Where is the device that controls this circuit?

For field crews—the “tech in the truck” that makes up a large part of the utility workforce—there are additional spatial questions at the heart of their daily work. Where am I? Where do I need to be for my next assigned job? Where is the switch that controls this line?

3.8.1.1  Age of Paper

For almost a century, the mechanism for storing all of this spatial data was the paper map (or, for permanent records, a more durable equivalent such as vellum). The “data” were created and maintained by manual drafting and distributed by making copies of map books for each person (or field crew) needing them.

Edison’s first distribution network, the Pearl Street project, covered a very small geographic area—several blocks of lower Manhattan. Even with a territory that would almost disappear within that of a modern utility, however, a map was needed to show the spatial extent of the network (Figure 3.140).

As the size of utilities’ service territories grew, the scope of the mapping effort grew as well. Recording changes became more of a problem as data volume rose dramatically. Each utility developed a system for organizing and cataloging maps. The service area was typically divided into a map grid—a series of tiles, where each tile corresponded to a defined geographic area and was represented by a map sheet (Figure 3.141). This mapping structure often made its way into the field, as numbers based on the map grid were stamped onto poles and other equipment. As paper maps became more congested, the grids had to be split and redrawn into fourths and sixteenths to provide a workable resolution, adding to the cost and effort of maintaining, publishing, and distributing map books.


Figure 3.140   Pearl Street project in Manhattan. (Courtesy of Consolidated Edison, New York.)


Figure 3.141   Typical utility map grid. (© Copyright 2012 General Electric. All rights reserved.)

3.8.1.2  Emergence of Digital Maps

Utilities, like most other businesses, first used computers for back-office functions such as payroll, billing, and accounting. Starting in the late 1960s, some utilities (notably Public Service of Colorado) started to experiment with harnessing this computing power for representing maps.

The mainframes were, by today’s standards, very limited and primitive tools. Even with the limitations, however, it quickly became apparent that the growth in computing technology offered utilities a new way to handle spatial data. As we moved through the 1970s, there was clearly a new age of geospatial technology in utilities: the sheer volume of map data had overrun the ability of paper-based methods to keep up with changes, and digital tools were increasingly seen as a potential answer.

Over the next two decades, almost every large utility invested heavily in computer infrastructure. Massive data conversion projects were necessary to turn paper maps, often decades old and with questionable cartographic accuracy, into usable data. This required tying points and lines on the old maps to a common coordinate system (latitude/longitude, state plane, or UTM*).

Although at this stage the storage of spatial data started to move from physical to digital, the communication of spatial knowledge still relied on paper. After all, the mainframe was not readily accessible to the people involved in managing the grid. Interactive, on-screen graphic capability was fairly primitive and very costly. The emphasis, then, was still on producing paper maps. The data used to produce the maps may have been stored digitally, and the maps might have been generated by a digital plotter, but the end result was still a paper map. In most cases, the goal was to reproduce—in a more efficient way—what had been used for decades. Paper maps produced from digitally stored data had a more appealing look and feel, with greater detail and consistency, but the information they contained still had to be communicated on paper to humans, with no interface or tools to exploit all of the captured map data. Map content and symbology mirrored the standards in use at each utility but also carried with them the accuracy issues of the original paper maps.

3.8.1.3  From Maps to Geospatial Information Systems

The next stage in the evolution of geospatial technology shifted the emphasis from maps to applications. The graphic representation of facilities in two- (2-D) or three-dimensional (3-D) space—the map—was still important, but the data behind the map began to be used in different and more powerful ways.

Early systems were used to store geographic data and communicate it through maps. The next big step was to treat data about the grid as not just a map but as a collection of objects that have location, attributes, and topology. Adding attributes makes it possible to retrieve related information (what are the voltage ratings on that transformer?) and to search (where is pole B45806?). Common database functions allow for complex queries across data sets (where do we have 500 kV oil-filled pad transformers installed within 1000 yards of the Chesapeake Bay management area?). Establishing topology (the relationship of features to each other) enables network connectivity, supporting models of current flow. This added a whole new dimension to the data available from the mapping database, a powerful resource for utility network planning and operations.
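The three kinds of query described above (attribute retrieval, spatial search, and topology tracing) can be sketched with common open-source Python tools: shapely for geometry and networkx for connectivity. The asset records, the management-area polygon, and the use of a local planar grid measured in yards are all invented for illustration.

# Sketch of the three query styles described above, using shapely for
# the spatial parts and networkx for topology. The asset records, the
# management-area polygon, and the coordinate units (yards in a local
# planar grid) are all invented for illustration.
import networkx as nx
from shapely.geometry import Point, Polygon

assets = [  # attribute query: retrieve and filter by stored attributes
    {"id": "T-101", "kv": 500, "type": "oil-filled pad", "loc": Point(100, 200)},
    {"id": "T-102", "kv": 230, "type": "dry",            "loc": Point(900, 950)},
]

bay_area = Polygon([(0, 0), (600, 0), (600, 600), (0, 600)])
search_zone = bay_area.buffer(1000)   # "within 1000 yards" of the area

# Complex query across attributes and geometry:
hits = [a["id"] for a in assets
        if a["kv"] == 500 and a["type"] == "oil-filled pad"
        and a["loc"].within(search_zone)]
print("matching transformers:", hits)

# Topology query: which equipment is electrically connected to breaker B-7?
grid = nx.Graph()
grid.add_edges_from([("B-7", "T-101"), ("T-101", "pole B45806")])
print("fed from B-7:", nx.node_connected_component(grid, "B-7"))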

This change marked the transition from automated mapping to true geospatial information systems (GIS). Together, these characteristics support analysis of asset attributes and much more. CAD (computer-aided design) systems also played an important role during this period, adding intelligence to the process of designing new facilities on the grid.

Even though these systems enabled profound changes in the way that map data were stored and managed, the direct effect on utility operations back then was minimal. Access to GIS tools and applications was limited to professionals with extensive training in the technology. Frontline users (the crews in their trucks) were, for the most part, still using paper maps. Even though the maps represented digital data and were generated by plotters, the users were still constrained by the limitations of having to use the paper format of the geospatial information.

3.8.1.4  Across the Enterprise

A fourth stage of the geospatial grid started to emerge in the 1990s. Desktop computers proliferated, network infrastructure grew, and (in the late 1990s) mobile computers rugged enough to survive field conditions were deployed. Paper was replaced by computer applications that could search for objects, display attributes, and even trace through the network to identify trouble spots. A rich set of applications made GIS capabilities accessible to planning and operations managers and also field crews.

Geospatial technologies have evolved to become a true enterprise system, extending from meager map digitization, to meaningful GIS systems, to a valuable data resource that crosses many enterprise applications. GIS for utilities has become a business-critical technology, supporting operations as the “system of truth” for the grid. Interoperability has allowed it to become the integration point for other utility enterprise data—asset databases, sensor and monitoring equipment, customer information systems, work management, compliance records, as well as third-party and public map sources. It is now common for a utility to visualize load information, assets, protection schemes, workforce locations, and public/commercial maps and photography all at the same time and through the same interface.

Interoperability has magnified the need for accurate data in all systems. Reliability is tied to properly functioning applications that are dependent on accurate and up-to-date data. Smart devices that report load information can dictate a demand response application, but if there are inaccuracies in the asset management system and geospatial representation, the application will be ineffective.

In many ways, this drive to automation in the utility industry has mirrored technology trends in other sectors, where the platform for computing has moved steadily closer to the user’s place of work. It parallels ubiquitous mobile platforms and social networking, which have brought computing power to the hands of almost everyone. (And, in developing countries, it is outstripping conventional desktop computing as the dominant platform.)

Delivering geospatial tools to the field is essential to the operations of a utility company because much of the work has to be done outside the office. The assets and the customers are all out in the field, spread across the service territory. Consequently, much of the utility workforce is also outside the office. Field personnel jobs are inherently mobile, moving around the grid “in the geography” to job locations that change rapidly (Figure 3.142).

This spread of geospatial technologies to the field is worth emphasizing because of its profound impact on how utilities do their work. While in the early digital age, a utility might map the orders and track the location of field workers, the push to provide this capability to the field has been driven by the demand for enterprise information by the field worker. The field technician is usually the front line of work with the grid and is increasingly a frequent point of contact with the customer. There are several forces driving this spread of technology to the field workforce:

  • Fewer people, more work
  • Growing complexity of work
  • Increased safety and security standards
  • The increased cost of outages
  • Higher expectations for customer service
  • The expectation of technology by the younger workers accustomed to social networking

The aging utility workforce has a major impact here. Most utilities are faced with the prospect of replacing a key cadre of workers that holds much of the organizational knowledge. This group, in effect, carries the system maps in their heads. As this segment of the workforce nears retirement age, it will be essential to support less experienced workers with strong geospatial tools.

Figure 3.142   Extending maps to the field. (© Copyright 2012 General Electric. All rights reserved.)

Mobile applications often show a rapid return on investment. By taking technology to the work site, these systems can close the loop and digitize work processes from beginning to end. This eliminates many sources of error and speeds up processes that were once paper-bound. For safety and efficiency, much of the supervisory team is also in the field, close to the work being performed. The supervisor is often the most qualified person to make an assessment and a network decision, but those decisions now require the data streaming from the system. Extending the data set from GIS and related enterprise applications to the field improves work efficiency and safety and provides a synergistic return that is often overlooked and hard to measure by traditional standards.

Over the last decade, we have seen a major transformation in mobile computing technology. The rapid development of consumer technology has helped drive acceptance of smartphones and tablets into commercial markets. Ubiquitous personal and business improvement applications are now used in almost every company. Because of the field-centric nature of much utility work, mobile systems play a large role in operations. Field applications for a utility are job-critical and time-critical. A breaker or regulator that is bypassed for maintenance must be accurately identified and modeled in the GIS so that the systems relying on that model function properly. The most reliable current source for this information is the worker performing the action. Therefore, field applications have to work wherever and whenever they are needed.

3.8.1.5  Developing World

The technology evolution described earlier has been fairly consistent in North America, Europe, Australia, and many parts of Asia. In the developing world, technology for managing the grid has taken a different shape.

Part of this difference stems from the grids themselves being different. In some emerging economies, large power grids have not been as common as in the developed world. Developing countries therefore have an opportunity to adopt modern tools for the smarter grid during the expansion phase rather than having to deal with the issues of retrofitting an older grid. The technology path is different too. By starting later on the GIS curve, some parts of the world are avoiding the sometimes uneven evolution of hardware and software systems over the last four decades.

In much of the developing world, where large landline communications infrastructures are lacking, mobile phones have rapidly become the tool of choice for both businesses and consumers. A newer technology has replaced the need to build out an older (and more expensive) infrastructure. Similarly, the predominant computing platform is not the desktop but mobile devices. Tablet computers and smartphones allow utilities to adopt enterprise-wide mobile strategies from the outset, rather than having to bolt mobile applications onto office-bound systems later.

A significant benefit of the late implementation of geospatial technologies is skipping much of the data conversion process. Rather than dealing with the painful and expensive projects to convert paper maps of old facilities into digital form and to correct the cartographic errors of paper products generated over time, a utility that is now expanding into new areas can capture designs and as-built drawings electronically as part of the construction process.

Capturing this data electronically not only allows for more accurate asset and grid inventory but also provides a means for spatially accurate records and correct connectivity. These data can be captured at the time of installation to improve accuracy and provide a shorter database posting cycle. This allows the GIS to be as accurate as the actual facilities it represents as quickly as possible.

3.8.2  Changing Grid

Throughout the first four stages of geospatial applications, the technology has changed dramatically, but the electrical grid has remained largely the same. (It is often said that Thomas Edison, looking at today’s grid a century after his Pearl Street project, would easily recognize what he saw: a one-way, fairly static network where a flow of electrons was created at a small number of power generation plants and distributed to customers.) The electrons, for all practical purposes, flow one way. There is little system-wide information flow. SCADA systems are sometimes used to monitor overall flows through the system backbone, but this capability rarely reaches all the way to the customer. Each customer has a meter—a device that measures the usage at the customer point so that billing can take place. Almost all of these characteristics change with the smart grid. The old grid, with its static, one-way flow, becomes a much more complex and dynamic system.

Much of this added complexity has a geographic dimension. To begin with, take generation: the old paradigm of a few power plants, all controlled by the utility, gives way to a system that may have numerous power sources. Wind farms and solar installations are often privately owned, so the utility faces a challenge in adding them to the network data model. And since they are subject to weather factors that neither the owner nor the utility can control, managing system flows becomes far more complex. Information about the “whereness” of weather, which varies over space and time, can help manage the complexity of the generation mix in the utility system.

It is a similar story on the customer side. Most utilities have not included details about customer locations in their spatial data. The GIS data model often extended only to a distribution transformer, sometimes with links to data about the customers fed from that transformer. Does the smarter grid, with smart meters and perhaps smart appliances, require that the GISs capture location beyond the transformer?

The increased penetration of electric vehicles (EVs) will add yet another dimension. Although charging points are static, the vehicles themselves move around and might connect to the grid at different locations.

There is also an impact on the utility’s crews. These frontline employees, who have to build the system and resolve any operational problems, face a more complicated job, needing far more data and new tools to analyze those data.

Clearly the changing grid will increase the demand for more geospatial data and the need to integrate the geospatial data across numerous business and operational applications in the utility enterprise.

3.8.3  Geospatial Smart Grid

Now we are on the edge of a fifth “age” of the geospatial grid. This time, the changes are driven not by gains in geospatial technology but by the transformation of the grid itself: the emergence of the smart grid.

How does geospatial technology contribute to planning, building, and operating the smart grid? In this section, we will examine the importance of these tools, reviewing a number of applications in the utility sector. One key in planning business-critical applications is to ensure a consistent base of geospatial data. The GIS is typically seen as the platform for managing these data—the “system of truth,” which is synchronized with local data requirements for other enterprise systems. Close attention to interoperability is required. The stringent requirements of a smarter grid with constantly updated data may challenge the traditional abilities of GIS to continually exchange data with other applications.

3.8.3.1  Core Spatial Functionality

It used to be easy to equate geospatial applications with geographic information systems. After all, GIS was the tool of choice (and often the only tool) for any functions that required spatial data. That has changed dramatically; today, applications in virtually every part of the utility automation sector manage and display map-based data.

Here, the focus is on software applications. Although the division is somewhat arbitrary, the applications can be divided into categories that reflect how they are used.

The first group of functions includes those that are central to spatial data handling. They are traditionally the core components of the GIS.

3.8.3.1.1  Managing Spatial Data: The System of Truth

Geospatial tools, at a basic level, provide a common source of information for operating the grid. Since the operating system of a utility is spread out over a large geography, the data necessary to run it are spatial in nature, and managing these data spatially is critical to the business. This has been a major driver in the adoption of GIS within the utility sector. GIS today is often viewed as the “system of truth,” the single trusted source for any data that are spatial. For many years, this viewpoint was hard to challenge. Almost any application of geospatial data was handled by the GIS software and managed by the utility’s GIS group. It was clearly the single source of data because the data were not used anywhere else.

That has changed, however. As more operational functions have been automated, spatial data have made their way into other applications. OMS/DMS applications rely heavily on a spatial view of the grid. Who would have thought in the early days of a GIS “map” that we would see it presenting and interacting with SCADA? Work management systems now include map-based views of how work and crews are distributed over the service territory. Even planning and marketing groups in the utility employ geospatial data, using maps of current infrastructure in conjunction with demographic and land use data. It is because of this data codependence that the savvy spatial data manager recognizes the importance of data accuracy and strives for perfect data.

These applications must therefore use the same source of spatial data. They are, after all, covering the same geographic area. And they should reflect the same “reality” of the physical grid. For many years, all spatial data—anything with coordinates attached—were strictly the province of the GIS. It was argued that “spatial is special” or that the unique nature of geospatial data meant that only dedicated GIS systems were capable of handling and displaying these data. Now, however, many of these other systems have evolved to include spatial tools. So where are the geospatial data? Which data sets reside in which system?

Advances in hardware and software technologies make these questions more difficult. Abundant storage means that keeping multiple copies of spatial data (perhaps in slightly different forms) is not cost prohibitive. The concept of cloud storage even eliminates the “where is my data” question—although it does raise other questions, like how to maintain security for critical infrastructure data. On the software side, database software commonly used by other applications may now include tools to manage this type of data (e.g., Oracle Spatial).

This spread of spatial functionality into other systems clearly has great advantages. It does raise a data management dilemma, however. If every application that uses geospatial data stores a copy of that data, how do we synchronize the systems to ensure that they are all operating on the same “truth”? A common approach is to keep the base data in the GIS and then feed to other systems as needed. As we will see in a later section, the changing requirements of the smart grid may make that method more difficult.
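
To make the feed-from-GIS pattern concrete, here is a minimal sketch, assuming asset records can be snapshotted as dictionaries keyed by ID. The record fields and the PrintingSubscriber stand-in are illustrative assumptions, not any vendor’s actual integration API.

```python
# Compare the last published snapshot with the current GIS state and push
# only the differences to downstream subscribers (OMS, DMS, mobile).
def diff_snapshots(previous, current):
    """Return (added, changed, removed) asset IDs between two snapshots."""
    added = [k for k in current if k not in previous]
    removed = [k for k in previous if k not in current]
    changed = [k for k in current
               if k in previous and current[k] != previous[k]]
    return added, changed, removed

def publish_changes(previous, current, subscribers):
    """Push only the deltas to each downstream system."""
    added, changed, removed = diff_snapshots(previous, current)
    for system in subscribers:
        system.apply(added=[current[k] for k in added],
                     changed=[current[k] for k in changed],
                     removed=removed)

class PrintingSubscriber:
    """Stand-in for a real OMS/DMS integration endpoint (hypothetical)."""
    def apply(self, added, changed, removed):
        print(f"+{len(added)} ~{len(changed)} -{len(removed)} records")

prev = {"T1": {"kva": 50}, "T2": {"kva": 25}}
curr = {"T1": {"kva": 75}, "T3": {"kva": 50}}
publish_changes(prev, curr, [PrintingSubscriber()])
```

The design choice worth noting is that only deltas cross system boundaries; as data volumes and change rates grow with the smart grid, full re-extracts of the "truth" become increasingly impractical.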

3.8.3.1.2  Geovisualization

Maps are used for a reason—they are the best means of communicating certain types of information. For the electric grid, this means a spatial view of the relationships between network, customers, and field crew locations.

We can refer to this process as geovisualization: communicating spatial information in forms that support human decision making. If done well, presenting a clear view of operating data supports situational awareness and improves decision making.

One of the beauties of spatial technology is that the same data can be used in so many different ways. It can, in effect, produce a nearly endless series of maps. At one scale, the data produce a wall map, an overview of a large area. The same data can also generate a series of larger-scale maps (or even schematics) with details for smaller areas. By managing scale, geospatial technology can produce the map that is most appropriate for the job at hand.

Today’s computing technology can offer ways of visualization that go far beyond the static, 2-D paper map. GIS tools can quickly render views based on an almost endless combination of geographic and thematic filters.

Figure 3.143   LiDAR point cloud. (Courtesy of LiDAR Services International, Calgary, Alberta, Canada.)

The emergence of 3-D viewing also adds exciting possibilities. Most existing facility data, since they were created by converting paper maps, are 2-D. This has limited the use of 3-D viewing tools. New data collection methods such as LiDAR create point clouds that can be processed to build 3-D facility databases (Figure 3.143).

3.8.3.1.3  Queries and Reporting

Early automated mapping systems utilized special file structures to handle XY coordinates and attributes. In the late 1970s, the emergence of general-purpose relational databases offered a new storage paradigm. Soon most GIS vendors offered databases as a way to manage the increasing volume of noncoordinate data. Over time, the ability to manage and manipulate geospatial data has become widespread in commercial databases (e.g., Oracle Spatial).

With this underlying database structure, it is, of course, very straightforward to perform queries and generate reports. The added geospatial element enables spatial filters that add to the power of data retrieval (a brief sketch in code follows the list):

  • A pure attribute query—“list all of my transformers”—usually returns too much information.
  • Adding a filter by a landbase polygon—“list all of my transformers in this district”—is more useful but may still return too much information for most tasks.
  • Adding a filter by proximity to a linear landbase feature—“list all of my transformers within 1000 ft of this road”—starts to isolate an important subset of the data.
  • The real power of spatial data may come from a filter based on connectivity—“list all of my transformers between these two points on this circuit.”
  • More complex queries—“list the customers served by the transformers selected earlier”—can be designed to pinpoint data that are key for a certain task.
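
To make the progression concrete, here is a minimal sketch of the first three filters in plain Python, assuming transformer records with projected coordinates measured in feet; all IDs, fields, and values are hypothetical. (A connectivity-based trace is sketched later, in Section 3.8.3.4.1.)

```python
import math

# Illustrative transformer records (hypothetical data).
transformers = [
    {"id": "T1", "x": 120.0, "y": 340.0, "kva": 50},
    {"id": "T2", "x": 980.0, "y": 310.0, "kva": 25},
    {"id": "T3", "x": 400.0, "y": 2600.0, "kva": 100},
]

# "List all of my transformers": a pure attribute query.
all_units = [t for t in transformers]

# Filter by a district, simplified here to a rectangular polygon.
district = (0.0, 0.0, 1200.0, 1200.0)  # xmin, ymin, xmax, ymax
in_district = [t for t in all_units
               if district[0] <= t["x"] <= district[2]
               and district[1] <= t["y"] <= district[3]]

# Filter by proximity to a road centerline, approximated by its vertices.
road = [(0.0, 300.0), (500.0, 320.0), (1000.0, 330.0)]
near_road = [t for t in in_district
             if min(math.hypot(t["x"] - px, t["y"] - py)
                    for px, py in road) <= 1000.0]

print([t["id"] for t in near_road])  # ['T1', 'T2'] under these sample values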

This query/report capability, combined with geovisualization, is often used to extend spatial data to settings where computers may not be appropriate. For example, work packets for a vegetation management crew can combine lists of tree trimming work with maps that illustrate the work in a geographic context. This capability may also be useful as a way to provide data outside the company, such as to contractor crews that may not be validated for access to live data.

3.8.3.2  Planning and Designing the Grid

GIS and CAD systems have a long history of supporting the plan/design/build processes in electric utilities. A number of commercial systems are available for these tasks. While the spatial aspect of laying out facilities is native to these applications, the details of structural and electrical analysis and even work management have to be considered for a design tool to be effective.

3.8.3.2.1  System Planning

Prior to detailed engineering design, utilities often have to perform long-range planning for service territory expansion or system improvements. This may involve projections of population growth used to predict future system needs or an analysis of environmental factors for a construction project.

Defining a transmission corridor is a classic example of this type of project. The need for a new line may be established based on current demand and on projections of future demand. Once it is determined that a new line is needed to connect a generation source with an area of demand, there may be an array of corridor choices that must be analyzed. This analysis will include factors such as terrain, environmental impact, current land use patterns, and cost. The selection process includes a bewildering mix of political and public interest actors—another case where geovisualization tools can have a major impact by communicating the spatial context.

While much of the data used in this process are spatial, they likely do not reside in the utility GIS. Land use and population data may come from local governments, while terrain, weather and wildlife patterns, and other geospatial technical data sets may be available from federal agencies. It is almost inevitable that multiple data sources, with data in multiple formats, will be needed. The tools used for utility system planning must be capable of handling this combination of data sources.

3.8.3.2.2  Grid Design

Detailed design of the electrical network is a category at the heart of the geospatial smart grid. These analysis and optimization systems, often including DLTs (design layout tools), are critical components for designing robust networks (Figure 3.144). When used well, they can also achieve significant cost savings.

Applications in this category have to handle the entire spectrum of the utility’s facilities:

  • Both transmission and distribution
  • Overhead as well as underground
  • Linear UG facilities (ducts, trenches, conduits)
  • UG structure nodes (manholes, handholes, vaults, pads)
  • Substation internals

Figure 3.144   Grid design application. (© Copyright 2012 General Electric. All rights reserved.)

Fundamental capabilities of these applications include tracing by phase and circuit, schematic layout creation, and the ability to handle multiple levels of detail (e.g., showing a switch as a single element and the related internal view).

A large component of the design system capability is optimizing conductor and transformer sizing. Flexibility is important: results can be based on customer class data or on spot load models. Tools have to consider load growth and check for allowable transformer overloading settings and potential voltage drop and flicker problems. Conductor sizing relies on both power factor and power quality considerations, accounting for both overloading and underloading.
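
As a rough illustration of the voltage drop check such tools perform, the following sketch applies the common three-phase approximation Vdrop ≈ √3 · I · (R·cos φ + X·sin φ) · L. The conductor impedances, loading, system voltage, and 3% planning limit are illustrative assumptions, not values from any particular design tool.

```python
import math

def percent_voltage_drop(i_amps, r_ohm_per_mile, x_ohm_per_mile,
                         length_miles, pf, v_line=12470.0):
    """Approximate three-phase voltage drop as a percentage of nominal.

    Uses Vdrop = sqrt(3) * I * (R*cos(phi) + X*sin(phi)) * L, where phi
    is the power factor angle. The 12.47 kV default is illustrative.
    """
    phi = math.acos(pf)
    vdrop = math.sqrt(3) * i_amps * (
        r_ohm_per_mile * math.cos(phi) + x_ohm_per_mile * math.sin(phi)
    ) * length_miles
    return 100.0 * vdrop / v_line

# Flag a candidate conductor if the drop exceeds a planning limit (e.g., 3%).
drop = percent_voltage_drop(i_amps=200, r_ohm_per_mile=0.3,
                            x_ohm_per_mile=0.6, length_miles=3.0, pf=0.9)
print(f"{drop:.2f}% {'OK' if drop <= 3.0 else 'exceeds limit'}")
```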

Along with design of the core electrical network, these applications also may have tools for corridor management (right-of-way, vegetation, dam inundation), joint-use pole management, and streetlight layout.

3.8.3.2.3  Communications Network Design

One of the primary changes with smart grid is the addition of a communications network to the electric grid: a truly smart grid is as much about information flows as electron flows. This requires tools that enable efficient design of the communications network.

Communications design tools have to support an integrated view of the entire network. This includes both inside and outside plant, and both physical and logical networks (Figure 3.145).

The communications physical network model includes all of the ducts, cables (both underground and overhead), and support structures (street cabinets, manholes, splice closures, rack-mounted equipment) that compose the system. One of the challenges is the need to manage both extensive geographic areas and the details of buildings (including floor plans, rack locations, down to the communications port).

Figure 3.145   Communications network design application. (© Copyright 2012 General Electric. All rights reserved.)

Communications design applications also have to manage the logical network (active network elements, customer circuits, and bearer circuits).

Utilities have been utilizing grid design software for many years. What is different now is the need to manage the rollout of large communications networks. And, clearly, the key is integration of the engineering design of both electric and communications networks so that together they help manage a smarter grid.

3.8.3.3  Operating and Maintaining the Grid

Once the smart grid is designed and built, the emphasis changes to operating and maintaining it. Geospatial technologies are a core component of operational processes and applications.

3.8.3.3.1  Network Analysis

As the complexity of the grid increases, power system analysis tools will play an even larger role. These tools are used to manage circuit configuration, direction of flow, voltage, and phasing.

A major part of this functionality is transformer load management. By aggregating data from summer and winter peak loads for each customer (gathered from CIS billing data) and adding information about the performance specifications of individual transformers, these analysis applications can identify overloaded or underloaded transformers (Figure 3.146).
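
A minimal sketch of that aggregation logic follows, assuming hypothetical customer peaks, a customer-to-transformer mapping from the GIS model, and illustrative loading thresholds.

```python
# Sum customer peak demands (e.g., from CIS billing history) per transformer
# and compare with nameplate kVA. All records and thresholds are illustrative.
customer_peaks_kw = {          # customer -> summer peak demand (kW)
    "C1": 6.2, "C2": 4.8, "C3": 9.5, "C4": 3.1,
}
served_by = {                  # customer -> transformer (from GIS model)
    "C1": "T1", "C2": "T1", "C3": "T2", "C4": "T2",
}
nameplate_kva = {"T1": 10.0, "T2": 25.0}

def classify(overload_pct=120.0, underload_pct=30.0, pf=0.95):
    loads = {}
    for cust, kw in customer_peaks_kw.items():
        xf = served_by[cust]
        loads[xf] = loads.get(xf, 0.0) + kw / pf   # convert kW to kVA
    for xf, kva in loads.items():
        pct = 100.0 * kva / nameplate_kva[xf]
        if pct > overload_pct:
            status = "overloaded"
        elif pct < underload_pct:
            status = "underloaded"
        else:
            status = "ok"
        print(f"{xf}: {pct:.0f}% of nameplate ({status})")

classify()
```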

3.8.3.3.2  Outage Restoration

The previous section described analysis tools that are used in the day-to-day operations of an electric utility. Another set of tools comes into play when things go wrong: the lights are out, impatient customers are waiting for answers, and the utility is facing significant monetary losses (both in lost revenue and penalties).

Outage management and work management systems are described in more detail elsewhere in this book. Here, we will just mention the key role of geospatial data in several parts of the outage process.

Much of the restoration process is driven by the utility’s field crews. Here, it is vital to have a coordinated view of repairs between the OMS and work management systems. Communications between dispatchers and the field are very location-based; getting the right skills and equipment to the right place requires a detailed view of the network. As repairs are made, it is also important to record changes to the facility base and communicate those changes back to the GIS.

Figure 3.146   Network analysis software. (© Copyright 2012 General Electric. All rights reserved.)

Responding to storms and other outages is a perfect example of the need for high performance in geospatial systems. It is truly a “high-stress GIS” situation. There are huge financial stakes in restoring critical infrastructure more quickly. Public perception, fueled by high-profile outages in the last decade, plays an increasingly important role. The GIS cannot be a roadblock, so it is crucial that it can manage large volumes of rapidly changing data.

3.8.3.4  Mobile Geospatial Technologies

As noted earlier, one of the more recent steps in the evolution of utility geospatial technology is the ability to move map and facility data out of the back office and to the field worker. This trend has been enabled by major advances in mobile computing and related technologies such as GPS and wireless communications.

All field applications, of course, have to link closely to back-office systems. As noted in a later section, managing the data flows between office and field is a difficult but necessary element of geospatial design.

3.8.3.4.1  Map Viewing

A fundamental part of field capability revolves around viewing geospatial data: giving the field user answers to many of the “where” questions described earlier.

Early field automation systems were aimed at replacing paper. There are both productivity and cost advantages in eliminating the paper map books that utilities had relied on for decades. It is not hard to beat the functionality of paper. Instead of dealing with fixed scales, users can easily zoom in and out, getting the level of detail they need for the task at hand. Symbology can change with scale so it is more easily read. And by grouping data into different layers, each with a display range tied to zoom levels, it is possible to reduce clutter and improve visibility.

There may even be useful view modes that take advantage of attributes linked to geometry, such as the ability to render each circuit in its own color rather than the default mode of varying conductor color and line thickness by voltage or phase (Figure 3.147).

Figure 3.147   Displaying circuits by color. (© Copyright 2012 General Electric. All rights reserved.)

Figure 3.148   Circuit trace. (© Copyright 2012 General Electric. All rights reserved.)

One of the primary advantages of a digital map viewer is the ability to search. Finding a specific facility on a paper map can be very time consuming, even if some assets, such as line poles, used a numbering system linked to a grid based on map sheets. Searching on objects other than the grid facilities is important too. Landbase features (streets, intersections, points of interest) are useful in helping a crew navigate to work assignments—especially if they are in storm recovery mode and working in an unfamiliar area.

Searches can extend to external databases that can be linked to location. Customer data are a good example. Even if the customer (meter) coordinates are not part of the GIS data model, most utilities do link customer records to a transformer. This lets viewers zoom to a location that is in close proximity to the meter and to even show lists of other customers served by the same transformer.

Most mobile viewers today include some analysis functions like circuit tracing (Figure 3.148), which takes advantage of the connectivity data present in the GIS. Tracing an electric circuit is a huge productivity gain when troubleshooting in the field. The ability to trace an underground route through a building complex or urban setting, along with its connections to protective devices, is impossible on paper. Similarly, with a paper map a field crew cannot gain a clear picture of the areas affected by an out-of-service protective device.
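
A sketch of such a trace, assuming the GIS connectivity model can be read as a directed “fed-from” graph; the device and node names are hypothetical.

```python
from collections import deque

# Starting at an open protective device, walk downstream through the circuit
# model to find the de-energized span and the affected transformers.
downstream = {                 # node -> nodes fed from it (hypothetical)
    "recloser_R1": ["N1"],
    "N1": ["N2", "N3"],
    "N2": ["xfmr_T1"],
    "N3": ["xfmr_T2", "N4"],
    "N4": ["xfmr_T3"],
}

def trace_outage(open_device):
    """Breadth-first trace of everything fed through the open device."""
    affected, queue = set(), deque(downstream.get(open_device, []))
    while queue:
        node = queue.popleft()
        if node in affected:
            continue
        affected.add(node)
        queue.extend(downstream.get(node, []))
    return affected

print(sorted(n for n in trace_outage("recloser_R1") if n.startswith("xfmr")))
# ['xfmr_T1', 'xfmr_T2', 'xfmr_T3']
```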

3.8.3.4.2  Workforce Management

It is easy to forget the people element of smart grid. After all, much of what we hear is about the totally automated, self-healing nature of the future electric network. It is described as almost an “untouched by human hands” system. Although a great goal, we know that this will not always be the case. Smart meters will not be very smart when they are lying in the rubble of a house destroyed by a tornado, and the intelligent network will fail if vital components are damaged by falling trees. The bottom line is that people will always be a key part of running a utility.

Even though workforce management software has traditionally been a separate category, managed by a different group in the utility and provided by a different set of vendors, it is included here because of the strong geospatial link. The essence of these systems is to get the right people to the right place at the right time, so the space/time aspects of geospatial technologies are an essential component.

Work management systems come into play in almost all aspects of utility field work. These systems may, for example, schedule distribution designer field visits with customers and then manage the resulting construction work. After the system is built out, work management systems play a vital role in managing both the daily service work of the utility and the stressful periods of outages. Workforce systems may also play a role in special projects such as AMI deployment.

These applications typically have both back-office and field components. The back-office system manages the overall field workforce, tracking crews and equipment. As work is needed, the system creates service requests. It then assigns the task to a specific crew based on a complex mix of factors including crew/vehicle capability, current locations, and expected task completion times. This drives the scheduling and dispatch of a given crew to each work task.
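
As a greatly simplified illustration of that assignment step, the following sketch picks the nearest available crew with the required skill. Real dispatch engines weigh many more factors, and all records here are hypothetical.

```python
import math

crews = [
    {"id": "crew1", "skills": {"overhead"}, "x": 2.0, "y": 3.0, "busy": False},
    {"id": "crew2", "skills": {"overhead", "underground"}, "x": 9.0, "y": 1.0,
     "busy": False},
]
requests = [
    {"id": "job1", "skill": "underground", "x": 8.0, "y": 2.0},
    {"id": "job2", "skill": "overhead", "x": 1.0, "y": 4.0},
]

def assign(requests, crews):
    """Greedy assignment: nearest idle crew holding the needed skill."""
    plan = {}
    for job in requests:
        candidates = [c for c in crews
                      if not c["busy"] and job["skill"] in c["skills"]]
        if not candidates:
            plan[job["id"]] = None
            continue
        best = min(candidates, key=lambda c: math.hypot(c["x"] - job["x"],
                                                        c["y"] - job["y"]))
        best["busy"] = True
        plan[job["id"]] = best["id"]
    return plan

print(assign(requests, crews))  # {'job1': 'crew2', 'job2': 'crew1'}
```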

As the work is completed, the system tracks the progress, looking at current status and estimated time for completion. It may also manage parts inventory based on the materials used in each job.

The field component of workforce management takes a different perspective. Instead of managing multiple crews, the focus is on the assigned work of a single crew. Communicating with the back-office system is important to update assignments and job status. Although routing from one job to another is often handled in the office system, the ability to update routes in the field is useful since traffic conditions may affect the original path.

For many years, commercial work management systems tended to focus on a specific type of work. They were designed to handle either short-cycle (service) crews or long-cycle (construction) work. Today, as the utility workforce has evolved, much of that distinction has disappeared, and these systems use a “work is work” philosophy and take a unified approach to the entire mobile workforce (both in-house and contractor crews).

A relatively new capability in workforce applications is predictive analytics, forecasting how the utility’s future workloads are most likely to be distributed over time and over the geography of its service area. This functionality utilizes historic trends and projected needs to help balance future demand (the amount and distribution of required work) and supply (crews, vehicles, materials).
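
A toy sketch of the idea, assuming monthly work order counts per district; the naive trend-adjusted average below stands in for the far richer models commercial products use.

```python
# Forecast next period's work volume per district from its history.
history = {                       # district -> work orders per month
    "north": [120, 130, 125, 140],
    "south": [80, 78, 85, 90],
}

def forecast(series):
    """Average of history plus half of the per-period trend (a naive model)."""
    avg = sum(series) / len(series)
    step_trend = (series[-1] - series[0]) / (len(series) - 1)
    return avg + 0.5 * step_trend

for district, series in history.items():
    print(f"{district}: ~{forecast(series):.0f} work orders expected")
```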

3.8.3.4.3  Inspections

All utilities are required to periodically inspect facilities. Some inspections are self-imposed, and others are mandated by regulators. These inspections may focus on a specific facility type such as transformers, conditions that affect facilities like vegetation growth, or they may look at all facilities in a given area (e.g., a circuit or a substation).

Special purpose categories include pole audits (looking at either the utility-owned poles themselves or updating foreign attachments), vegetation surveys, or storm damage assessment work (Figure 3.149).

The back-office part of an inspection system schedules the work and manages historical data for the relevant facilities. The field application provides form display, validations, and editing capabilities, along with markup or sketching functions and attachments such as photographs. An additional advantage of digital inspections is the incorporation of GPS, which can be used to improve the “where” of mapped facilities as well as to validate that the inspector was actually at the correct site when performing the inspection.
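
A small sketch of that location check, using the standard haversine great-circle distance; the 30 m tolerance and the coordinates are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def validate_inspection(gps_fix, asset_location, tolerance_m=30.0):
    """Accept the inspection only if the GPS fix is near the mapped asset."""
    d = haversine_m(*gps_fix, *asset_location)
    return d <= tolerance_m, d

ok, d = validate_inspection((35.2268, -80.8433), (35.2270, -80.8430))
print(f"{'valid' if ok else 'flag for review'} ({d:.0f} m from mapped asset)")
```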

3.8.3.4.4  Routing and Navigation

As consumer navigation systems have proliferated, it is increasingly common to see navigation capabilities as part of a field automation suite (Figure 3.150). This can be an important capability even if a back-office system generates a preferred route as part of a work order since traffic or other real-world conditions might require changes in the original route.

The underlying technology is familiar: after the user defines a destination, the system uses GPS to determine the current location, calculates point-to-point routing over an intelligent street network, and highlights the route with turn-by-turn directions. A utility setting adds some special requirements. For example, some bridges and underpasses might have restrictions that would prevent certain utility trucks from using them; the generated route has to take these restrictions into account. Similarly, some states constrain the use of driving directions on computer screens in certain vehicles, so the routing application has to deliver verbal driving directions.
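
A sketch of restriction-aware routing, implemented here as an ordinary Dijkstra shortest-path search that skips segments with insufficient clearance; the street graph, travel times, and clearances are hypothetical.

```python
import heapq

# edges: node -> list of (neighbor, travel_minutes, clearance_ft)
streets = {
    "A": [("B", 4.0, 16.0), ("C", 2.0, 11.5)],
    "B": [("D", 3.0, 16.0)],
    "C": [("D", 2.0, 16.0)],
    "D": [],
}

def route(start, goal, vehicle_height_ft):
    """Dijkstra over only the edges this vehicle is allowed to traverse."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, minutes, clearance in streets[node]:
            if clearance < vehicle_height_ft:
                continue                    # restriction: truck cannot fit
            nd = d + minutes
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], goal
    while node in prev:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1] if path else None

print(route("A", "D", vehicle_height_ft=13.5))  # ['A', 'B', 'D'], avoiding A-C
```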

Figure 3.149   Field inspection application. (© Copyright 2012 General Electric. All rights reserved.)

Figure 3.150   Navigation. (© Copyright 2012 General Electric. All rights reserved.)

3.8.3.4.5  Data Collection and Update

Mobile applications that enable field data collection take advantage of the fact that utility field crews are in a good position to update the geospatial database. They are, after all, knowledgeable about the facilities and are often placed in close proximity to the objects in the field.

These capabilities can be used throughout the facility lifecycle. In some cases, they focus on collecting data that are not in the GIS such as dangerous trees or dig-in damage. They are also used as part of the construction process, capturing the differences between as-designed and as-built facilities. One of the more frequent uses of this capability lies in the ad hoc data updates that arise from a field crew seeing a discrepancy between what they see on the screen and what they observe in the real world.

There are several flavors of mobile data collection tools. Some are simple drawing tools, letting someone in the field draw on top of the map and submit the sketch to a mapping group for interpretation. Other redlining tools provide the ability to add notes and more complex drawing capabilities (text, symbols, and annotation) over the existing map. The most complex applications link to compatible unit databases and include data validation tools to help ensure that data collected in the field are usable in related systems.

Even if mobile applications support the field update of facility data, the back-office components must be able to capitalize on it by providing a rapid and secure process for inserting changes into the corporate GIS. The slow update process has been a source of frustration for many utilities. There are, of course, valid reasons for ensuring the validity of data before the “system of truth” is changed. In many cases, however, the legacy of paper mapping systems remains a roadblock. Given today’s technology and the example of “crowdsourcing” tools (discussed in a later section), there is little excuse for an update cycle that is measured in weeks or months.

3.8.3.5  Engaging the Consumer

The applications discussed earlier are all aimed at the employees and contractors of the utility—the people who build and maintain the grid. What about the consumer, the end user of smart grid?

In most cases, there is no legitimate need for the consumer to access the detailed facility data in utility geospatial systems. Even if there is interest, security is a real concern. There are cases, however, where the customer would find it useful to have a spatial view of the grid. In a major outage, for example, many utilities post a web page showing the extent of current outages. These data should, of course, reflect the more detailed view of current status that the utility is using internally.

The emergence of EVs may yield other examples. The driver of an EV, dealing with range limitations, has a vital need for updated locations of charging stations and perhaps even a count of the available outlets. If these data are present in the utility GIS, they should be in sync with what the consumer is seeing. These data need to be made available to the driver in the vehicle through an onboard system.

Other consumer-facing applications will undoubtedly emerge as we move into the smart grid era. Geospatial data will often provide a framework for these applications, serving as a common view of network assets and status.

3.8.4  Smart Grid Impact on Geospatial Technology

In the previous section, we looked at how geospatial technology will help support the growth and management of smart grid. There is another interesting angle to that relationship: How will the emergence of the smart grid impact geospatial technology?

Today, GIS has been implemented in most utilities and is considered a successful example of technology implementation. Many in the industry, however, believe that the new grid will challenge current systems. After all, most current technology was designed to support the business processes that were rooted in the old grid—processes that, in some cases, may date to a century ago.

How does the shift to a smarter grid impact the geospatial systems now being used by utilities? Perhaps the most obvious change is that everything scales up—there are more data, tied together in more complex ways, and the need for speed and accuracy is dramatically increased. These factors will challenge almost every aspect of geospatial system design, forcing major changes in how spatial data are managed and distributed. The following sections describe potential problem areas and suggest how geospatial system design may evolve to handle these issues.

3.8.4.1  Coping with Scale

One of the clear differences with the geospatial smart grid is the change in scale—everything gets bigger. Even now, a spatial database for a large utility that contains detailed landbase and complex facility data for its entire service territory commonly exceeds 100 GB. That does not count related data from even larger systems (customer information, asset tracking).

Many observers estimate that there will be at least a thousand times as much data with smart grid deployment. Not all of that data, of course, will be managed by the geospatial systems, but we can anticipate a significant rise in volume.

For a large utility, the data volume is driven by the need for a lot of detail. That volume is multiplied by the large area that has to be covered, resulting in tremendous amounts of data. GIS data models typically cover both the transmission system (power plant to substation) and the distribution network (substation to customer). In most cases, the model (or at least the populated database) does not extend to the actual customer premise (the meter) but ends at a transformer that may serve dozens of customers. Adding intelligence at the customer site will require handling data about the meter and the characteristics of the customer. (Are there solar panels on the roof? Is there an EV?) Not only are these data tied to a premise, but much of them are associated with the consumer (e.g., details on EVs and smart appliances). The historical aspect of these consumer data, as well as the need to keep them updated as the consumer moves or replaces items, adds to the data volume.

The smart grid will require more objects and more attributes for those objects that are in the facility “layer” of the utility GIS. It may also demand new sources of data (e.g., weather) and new analysis tools to dissect complex relationships among objects.

The key question here is whether current GIS architectures are optimal for these larger volumes. Data structures that have worked reasonably well in the current environment may not meet performance expectations as data needs scale rapidly.

3.8.4.2  Moving to Realtime

Even though a typical utility GIS now has thousands of changes every day, the system can be viewed as relatively static. New or changed facilities are reported through a review process and then validated by GIS staff before being added to the database.

GIS is often seen as a spatial data warehouse, a “system of truth” that is used as a trusted reference for the grid. Currently, the practice is to do periodic extracts for applications like outage management or mobile data; GIS is seen as a data source that feeds other more time-critical applications. These operational systems are important for maintaining the grid and responding to power outages, and there are critical safety considerations when crews are doing work on the lines.

When the lights are out after a storm, data can start to change very quickly. For example, as the system is reconfigured to reroute power and resolve outages, the status of switches may change. The load on devices and conductors fluctuates, changing their working capacities. Some of the information about the grid is less likely to be valid if there is damage. Repairing the system takes precedence over recording the details of how those repairs were done. And public demand to get the lights back on adds to the time pressure, making this a true “high-stress GIS” situation.

The current slow-change GIS cycle may not be adequate for a grid that is highly dynamic. In today’s grid, even when object attributes and electrical flows change, location is almost always static. With EVs, that is no longer true; an EV is an active part of the network that can change location during the day. Solar generation rates change as clouds pass across the sky, wind farms come on- and off-line with wind speed, and energy storage that absorbs or supplies system capacity can also move throughout the day and night. Managing these data will require the ability to handle moving objects in the spatial database.

Unless the GIS can support these near-real-time demands, it will not be able to retain its position as the single reference for the grid.

Figure 3.151   Geocollaboration. (© Copyright 2012 General Electric. All rights reserved.)

3.8.4.3  Supporting Distributed Users

One of the paradoxes of the smart grid is that even though the system is touted as being self-healing, its deployment puts even more burden on the people who maintain it. We not only need more information and need it faster; we also need to get it to the people who can act on it. In a utility, that means the field crews—often spread out over a very large area and often (in situations like storm recovery) without consistent communications capability.

Data communications is a crucial design parameter for mobile applications. Utility crews have to be able to work anywhere within the service area, which means that there are almost always limited coverage areas for any large utility. Systems also have to allow for “no comm” situations, where storm damage to the grid may be accompanied by communications outages (or, at best, limited bandwidth). The bottom line is that field applications must be designed to support base functionality without any wireless connectivity. Critical applications have to work whenever and wherever they are needed.
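
One common way to meet that requirement is an offline-first edit queue, sketched below; the file-based store and the pluggable send function are illustrative assumptions rather than any particular product’s design.

```python
import json
import time

QUEUE_FILE = "pending_edits.json"   # hypothetical on-device store

def load_queue():
    try:
        with open(QUEUE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def record_edit(edit):
    """Always succeed locally, regardless of connectivity."""
    queue = load_queue()
    queue.append({"ts": time.time(), "edit": edit})
    with open(QUEUE_FILE, "w") as f:
        json.dump(queue, f)

def flush(send):
    """Attempt delivery; keep anything that fails for the next try."""
    remaining = []
    for item in load_queue():
        try:
            send(item)                   # e.g., HTTPS post to the office GIS
        except OSError:
            remaining.append(item)       # still offline; retry later
    with open(QUEUE_FILE, "w") as f:
        json.dump(remaining, f)

record_edit({"asset": "switch_S12", "status": "open"})
flush(send=lambda item: print("synced:", item["edit"]))
```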

There are many cases in grid maintenance where multiple crews, along with supervisors in an office, are working together to handle a situation like a large outage. These users would benefit from a higher degree of interactivity with the back-office GIS (data input or sketches) to communicate changes; this can be seen as a need for geocollaboration (Figure 3.151) on a large scale.

3.8.4.4  Usability

The changes noted earlier—more data, changing faster, with distributed users—will lead to another challenge: how do users interact with the smart grid geospatial system? In an era with a more complex grid, user categories may be less distinct. Instead of a separate group of GIS professionals who maintain and control the system, we may find more of an operational bias—electrical engineers viewing the system as a platform for applications.

An additional degree of difficulty is present in field settings. It is a challenging work environment; the display screen is typically smaller than is common in the office, and viewing conditions are rarely ideal. In events like storms, there is a great deal of pressure on the user (and consequently the system) to work quickly.

A design goal is to hide the complexity of GIS and CAD systems. The details of the application user interface should disappear. Ideally, users view the field application as a tool: something that helps them do their work and has an easily understood function.

There are several key questions that must be addressed in usability. How can we help users pick out key information in a system with more potential for clutter? Is it possible to define key data based on context (location, time of day, current user activity)? What design strategies are needed to support high performance?

3.8.4.5  Visualization

One of the design elements that relates to usability is visualization. One key advantage of a geospatial system is its ability to render maps in many different ways depending on the audience and the intent. A GIS screen can act, if needed, like a wall map, giving operations staff in the control room a quick overview of system conditions, such as power flows and bus voltage levels by color combinations over a large service territory. It can, alternatively, display a detailed schematic of a transformer vault to help a crew safely make repairs.

It is important to remember that, in field settings, viewing maps is made more difficult by system constraints (typically a smaller screen size) and environmental conditions (glare from sunlight).

The increasing complexity of the grid will demand new forms of visualization. We may need to extend the GIS toolkit to take advantage of advances in other fields of computer graphics like 3-D entertainment systems.

One of the key aspects of visualization is helping the user quickly focus on what is important. How can we help users pick out critical information in a system that has more data and consequently more potential for clutter? Symbology design and color will play important roles in this area. Another key to managing data display is the use of multiple layers. Given the vast amount of data that relate to the grid (especially if other networks, such as gas, water, or communications, are also present), the user can be overwhelmed by visual clutter. We can have the “fog of data” like the confusion of the “fog of war.” By selecting groups of data elements that are job-specific and giving the user easy ways to select what they want to see based on the task they are performing, we can minimize information overload.

This problem can be mitigated with intelligent filtering of the data:

  • Functional layers (e.g., landbase vs. facilities)
  • Setting visibility (zoom) levels for each object type
  • Symbology changes based on display scale
  • Highlighting key objects

These techniques can be used together to create thematic views, where the goal is to present only that data related to the user’s current task, using visualization techniques to highlight critical objects. For example, a field user may need an overview of an electrical circuit, where the conductor would be rendered with a thicker line and devices like switches would be represented with larger symbols.
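
A small sketch of how such filtering might be expressed, with zoom ranges per layer and task-based emphasis; the layer names, zoom levels, and task are illustrative assumptions.

```python
# Each layer declares the zoom range where it is visible; symbol emphasis
# can then be keyed to the user's current task for a thematic view.
layers = [
    {"name": "landbase",     "min_zoom": 0,  "max_zoom": 22},
    {"name": "conductors",   "min_zoom": 10, "max_zoom": 22},
    {"name": "transformers", "min_zoom": 14, "max_zoom": 22},
    {"name": "meters",       "min_zoom": 17, "max_zoom": 22},
]

def visible_layers(zoom, task=None):
    """Zoom-based filtering, plus task-specific highlighting."""
    shown = [dict(l) for l in layers
             if l["min_zoom"] <= zoom <= l["max_zoom"]]
    for layer in shown:
        # Thematic view: make the objects central to the task stand out.
        layer["emphasis"] = (task == "circuit_overview"
                             and layer["name"] == "conductors")
    return shown

for layer in visible_layers(zoom=12, task="circuit_overview"):
    print(layer["name"], "(highlighted)" if layer["emphasis"] else "")
```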

Other aspects of data presentation derive from the environmental factors noted earlier. For example, using a light color line style to denote a high-voltage line may work fine in a controlled office environment but is likely to cause problems on field devices.

3.8.4.6  Standards

Given the vision of the smart grid as a vast interconnected network, having standards for all of the components is essential. Interoperability of software components moves from a goal to a requirement.

Several organizations, notably NIST (National Institute of Standards and Technology) in the United States, have led the way with grid standards. Although there has been a considerable amount of work with GIS standards by OGC (the Open Geospatial Consortium) and others, the perception is that geospatial technology is less advanced in this area than some of the “engineering” disciplines. There does seem to be momentum around ideas like Common Information Model (CIM), and discussion of GIS/grid standards appears to be growing. OGC has been involved in Smart Grid Standards Roadmap Workshops organized by NIST and the Electric Power Research Institute (EPRI).

As we have seen in earlier sections, the need for spatial data is not confined to GIS. Many applications are driven by spatial data. It is vital, then, that these systems use common structures for managing and visualizing geospatial data.

3.8.4.7  Data Quality

What are the key characteristics of usable data for the geospatial grid? They are summarized in the list below, and a brief validation sketch in code follows.

  • It must be complete, covering all the relevant data types over the right geographic area. For example, the extents of the data must include all relevant parts of the electric grid that could be included in a network trace.
  • Objects must be positioned accurately. This is often a challenge for utility data derived from old paper maps, which typically were not surveyed and were mapped in the pre-GPS era. Positional accuracy problems become more obvious when facility data from paper maps are overlaid with other data sources that were derived from imagery, GPS, or other more accurate methods.
  • Objects have to be classified accurately and have to include attributes that support the utility’s business processes (e.g., size of transformers, phases of electrical lines).
  • The data must be reasonably current. The update cycle varies according to use. In some cases (e.g., switching status of operating devices on the grid), outdated or inaccurate data pose safety hazards.
  • For data types that would be part of a network trace, topological relationships must be accurately captured. Connectivity may be explicit (i.e., driven by a table) or geometric (based on drawn location and proximity).
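
The validation sketch promised above checks required attributes and table-driven connectivity for each facility record; the rules and records are illustrative assumptions.

```python
# Minimal data-quality checks: required attributes per facility type, and
# table-driven connectivity against a set of known circuit nodes.
required_attrs = {"transformer": {"kva", "phase"}, "switch": {"status"}}

facilities = [
    {"id": "T1", "type": "transformer", "kva": 50, "phase": "A", "node": "N1"},
    {"id": "S1", "type": "switch", "node": "N9"},          # missing "status"
]
known_nodes = {"N1", "N2", "N3"}

def validate(facilities):
    problems = []
    for f in facilities:
        missing = required_attrs.get(f["type"], set()) - f.keys()
        if missing:
            problems.append(f"{f['id']}: missing attributes {sorted(missing)}")
        if f.get("node") not in known_nodes:
            problems.append(f"{f['id']}: connects to unknown node {f.get('node')}")
    return problems

for p in validate(facilities):
    print(p)
# S1: missing attributes ['status']
# S1: connects to unknown node N9
```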

Information that passes these tests can form the basis of useful smart grid applications. Smart grid adds an element of uncertainty to current GIS practice. Where does the data model stop? Current distribution network traces, for example, cover the range from substation to transformer. As intelligence is added at the edge of the grid, other objects may come into play. Distributed generation sites and microgrids add complexity to power flows. On the customer side, do intelligent appliances and EVs need to be included?

In general, we can assume that there will be higher requirements for data quality to operate the smart grid than is true with the current grid. Control is more data-driven, and that requires complete and accurate data.

An ESRI survey of electric utilities published in August 2010 found that most companies were not ready for the increased demands of smart grid data. Only 15% of respondents reported high confidence in their GIS data accuracy. A common issue highlighted in the ESRI report is the long lead time to update the GIS with data from the field. Lag times of several months are not uncommon. Part of the problem is that update processes in many utilities are still rooted in the age of paper maps, where a specific group in the utility has the sole responsibility for final changes to the database. This slow process creates problems today and could be a major issue in the faster-changing world of smart grid. As suggested in a later section, concepts like volunteered geographic information (VGI) may offer a solution to this slow update cycle.

3.8.4.8  More Open: Sensors and Other Data Sources

IEDs (intelligent electronic devices), RFID (radio-frequency identification) tags, and other types of sensors are being used more often in utilities. As the grid becomes smarter, more of its components will be digitally accessible and identifiable. The grid will be, in some ways, a perfect example of the “Internet of things.”

Although most of these devices are deployed to monitor and communicate specific measurements, their location—in terms of XY coordinates and relative to the topology of the grid—is an important element. In addition to these inputs that are controlled by the utility, there will be a need to integrate external data sources. Current and forecast weather data, for example, will be an important tool when trying to predict power flows from geographically dispersed solar and wind sites.

Research is needed to determine whether available GIS architectures can handle the flow of data from sensors and integrate with other data sources, such as weather feeds, into grid management applications.

3.8.4.9  More Closed: Security

Utility GIS has always been a very closed system. Part of this, of course, is based on security concerns, given the need to keep the grid (clearly an example of critical infrastructure) safe from malicious actions. This issue is an even more visible concern with smart grid since automating control of power flows means that physical security is not sufficient.

Another reason for the closed nature of utility systems has been a concern for data integrity. In most utilities today, GIS management has restricted change access to the database to trained personnel in the belief that it is necessary to maintain accuracy. A field user reports a change (which could result from an actual construction change or an observation that the existing map is incorrect) through a structured (and often time-consuming) process. A trained GIS professional validates the update and then makes the actual database change. This multistage process is frustrating to operations staff, since field employees are usually in a better position to compare the map with the real-world features that they are seeing.

How can these security concerns be addressed? There is considerable activity around security standards, made more challenging by the rapid pace of development in hardware and software technology. Because the stakes are so high, we can assume that security concerns will be a major filter on adoption of new approaches (such as cloud computing and crowdsourcing, covered in a later section).

NERC (North American Electric Reliability Corporation) in the United States has implemented a CIP (Critical Infrastructure Protection) program to establish a set of standards for all facets of utility security, including cyber assets. The IEEE is also very active in defining standards for grid data and applications.

3.8.4.10  More Closed: Privacy

The previous section looked at security from the perspective of the utility. It also works in the other direction: concerns about individual consumer privacy.

Given recent high-profile cases of hacking and identity theft, it is no surprise that many people have issues with any technology that collects and manages data about individuals. Smart grid, with its emphasis on collecting data from smart meters, sensors, and the tracking of consumer usage habits, fosters a growing concern about how to protect privacy rights.

The geospatial nature of much of the data adds another complication to the issue. Even if great care is taken to protect the most obvious aspects of individual identity (name, address, phone numbers), location information can be used as a link to find these data.

Legal challenges around the use of map-based tools (e.g., Google Street View) have resulted in the term “geolocation privacy.” Safeguarding individual data becomes more difficult as data sources proliferate. Any utility applications that utilize geospatial data have to take these privacy concerns into account.

3.8.5  Future Directions

Geospatial technology—even when the technology was ink on paper—has always been an essential part of the electric grid. Tools have improved dramatically over the last decade, and GIS technology has become a business-critical force in helping utilities cope with a changing business environment. Utility geospatial tools will continue to evolve. They have moved from mapping to designing to managing in a relatively short period of time. As the GIS has grown in capability, we have seen the use of spatial data and spatial analysis tools in other applications. And now, as we enter a new era, we can see more clearly that the smart grid is about data—and much of that data are spatial. The need for these geospatial tools will continue to grow. At the same time, we see the need for new and better tools. As the electric network undergoes major changes, it seems clear that the technology being used today will not be adequate to support the increased size and complexity of a smarter, more connected grid. As the grid becomes smarter, systems that help manage the grid will have to be smarter too. We can look forward to the continuing evolution of geospatial technology to meet these needs.

The previous sections, while describing the key role of geospatial technology in running the grid, have also pointed out several gaps—areas where there are needs for future development to support a smarter grid. Here, we will look at several areas of development that may prove useful.

As the application of geospatial tools to electrical networks becomes more widespread, we are seeing an interesting convergence of technologies. There are four major threads:

  • Vendor-based GIS systems that have been the core of the growing geospatial industry for four decades
  • Ancillary technologies, such as GPS receivers and navigation systems, which focus on the value of location
  • Consumer-oriented mapping tools such as Google Maps, offering toolsets for displaying data on map backgrounds
  • Open-source tools and data (OSM, Ushahidi, the Map Kibera project), initially used for data collection but now extending into visualization and analysis

3.8.5.1  Architecture

Looking at the geospatial grid automation picture as a whole, what is likely to be the predominant future architecture? Will this thing called a GIS continue to be the central repository for spatial data, feeding other applications as needed? Or will GIS disappear as other applications add the ability to store and manipulate spatial data?

It is likely that we will see something in between. With the rise of ubiquitous spatial capabilities, it seems clear that the preeminent role of corporate GIS will decline. Some planning functions, like grid design, are likely to remain within the domain of GIS systems and GIS vendors. Other operational tools, with the need for more real-time capability, will store and manipulate spatial data internally. In effect, the GIS will be embedded in multiple places.

The key is data integrity. It is vital that there is a single accurate and consistent view of the grid across all applications, and GIS still seems the most appropriate place to manage that “truth” for the utility.

3.8.5.2  Cloud

Cloud computing is one of the hottest areas of technology today. The appeal, in many ways, is obvious: forget the details of managing hardware, storage, and software and view applications as a service.

Does the cloud have a role in utility geospatial? (There is some irony in the fact that the cloud computing concept is often defined as a utility and is described by comparing it to the electrical grid.)

Security concerns are often cited as a barrier. At this point, it is hard to envision a scenario where utilities turn over the management of critical infrastructure and operations data to a third-party storage provider. Some utilities are further constrained by regulations that govern the physical location of data storage. (Private clouds, utilizing the same concepts but maintaining physical custody of vital data, are more likely.) The tools to support the design and creation of the GIS data, however, may reside on a public cloud to reduce utility costs and allow the company to better leverage shared and contractor labor.

As we have seen with the grid, however, data needs extend far beyond the facility objects that represent the utility’s assets. Geographic data from external sources are also needed for planning and design. Renewable energy is subject to weather variations, so real-time weather data are needed for grid operations. These external spatial datasets are good candidates for the cloud approach.

3.8.5.3  Place for Neo-Geo

In the last few years, we have seen rapid progress in the “neo-geo” arena—companies and individuals using consumer tools such as Google Maps (or even open-source spatial toolkits) to solve significant real-world problems.

A great example is crowdsourcing, or volunteered geographic information (VGI). Fueled by grassroots efforts to help disaster recovery in Haiti and elsewhere, VGI has proven to be a valuable tool. By supporting a larger group of users, it has been far more agile and responsive than many traditional GIS efforts.

In some ways, communications and connectedness are the essence of a smarter grid. Can neo-geo play a part? There are security concerns and other barriers, but it does appear that these tools can help apply “people” intelligence to some utility processes, like collecting damage information after a storm.

3.9  Asset Management

Catherine Dalton, Soorya Kuloor, Tim Taylor, and Steve Turner

Utility executives have turned to asset management as an organizational model that creates operational and financial success by reducing the dependence on capital spending. Federal and state regulatory agencies and utilities themselves have raised the expectations for power system reliability. Asset management is a common approach or tool that ties these objectives together into an actionable methodology. An effective asset management program can help utilities to maximize the rate of return per O&M and capital dollar spent, evolve to a competitive culture, invest in training, base decisions on sound business principles, and learn the impact of expenditures on quality of service.

Asset management is the managed maintenance of generation, transmission, and distribution assets: acquiring data from those assets and turning it into actionable intelligence about them.

One of the main drivers for smart grid is the opportunity to optimize the reliability of the distribution system, a goal pushed strongly by electric utilities as well as their regulatory bodies. Asset management is therefore no longer business as usual. New developments in approaches to asset management are driving smart grid activities, and, at the same time, smart grid activities are driving new developments in approaches to asset management.

With the recent push in smart grid, utilities have deployed an increased number of intelligent electronic devices (IEDs) for protection, monitoring, control, and metering applications. The functionality built within these IEDs allows for very robust asset management tools to be implemented on the electrical distribution system that enable real-time asset management or managed maintenance of existing and new electrical equipment. The addition of intelligence optimizes the delivery of electricity by allowing utilities to operate electrical systems at maximum capacity at all times.

Asset management for electric utilities can be considered the process of

  • Optimizing system performance, profitability, and business growth
  • Balancing stakeholder interests
  • Positioning for long-term viability
  • Scheduling replacement of capital assets based on specific criteria
  • Scheduling the addition or replacement of fixed assets required to maintain current or anticipated levels of service
  • Enabling reliability-centered maintenance for substations
  • Inspection and performance monitoring of assets
  • Tracking of asset data
  • Prioritizing of investment decisions

Successful asset management programs require a balanced perspective that takes into account not only reliability but also safety, financial, and regulatory perspectives. Managing and balancing these critical perspectives is the key to future success in today’s environment. The electric utility and the consumer both benefit from asset management philosophies. Benefits of successful asset management include gaining the ability to manage infrastructure and understand necessary resource requirements, enhancing customer satisfaction, reducing costs to improve return to investors, and improving reliability data and reporting to regulators.

3.9.1  Drivers

3.9.1.1  Safety

The safety of the public and of electric utility employees is nonnegotiable. There is zero tolerance for human error in the daily operation and maintenance of the electric utility system. Safety is among the highest priorities of electric utilities. Accordingly, electric utilities proactively invest in training that includes safety, equipment, installation, operations, commissioning, testing, settings, design, and many other areas of expertise. Utility managers encourage teamwork. Central organizations develop plans with their field operations groups. These plans encourage involvement at all levels of the organization. No one should be afraid to speak up if he or she feels there may be room for improvement in safety processes. Utility managers also empower field operations employees and hold them accountable. Field personnel should be rewarded and encouraged for a job well done. They have clear accountability in an end-to-end process. Management should clearly delineate who is responsible for each process and/or procedure and train personnel accordingly. Management should be held accountable for safety just as field personnel are. Assets include not only power system equipment but also the employees themselves, and safety is a critical component in managing and safeguarding a utility’s assets.

3.9.1.2  Reliability

Electric utilities’ knowledge of the regulatory environment may be limited. A utility may operate in numerous jurisdictions with different regulatory requirements and must expend time and effort to become familiar with the requirements in each. Mandated expenditures and penalties from regulatory agencies may force utilities to adopt methods, such as new IT systems and processes, about which they must first become adequately educated. Prescribed standards and continuing demands for improvements in reliability, or at least no degradation of reliability, place continuing pressure on electric utilities. Increased scrutiny of reporting methods and consistency requires utilities to dedicate resources that create additional overhead costs. Loss of credibility in the eyes of the regulatory agencies creates even more pressure for utilities to implement improvements in their electrical systems and supporting systems. Utilities must balance capital and O&M investments with reliability requirements and pursue rate cases to ensure adequate and equitable cost recovery.

There are some actions that utilities can take to help balance the reliability perspective. They can establish local reliability “owners” who understand the goals of reliability, participate in identifying and suggesting reliability improvements, and coordinate day-to-day reliability activities of a district or region. They can support IT systems that will enable tracking of assets and their associated parameters. Moreover, they can learn state mandates regarding reliability, safety, etc. They can maintain service based on performance information such as customer demand, revenue, and cost by segment and operating history. And finally, they can balance analyses with experience from field operations.

3.9.1.3  Financial

Electric utilities face several operating and maintenance challenges. There may be underutilized assets within the utility’s service area. “Smart” devices will assist utilities in identifying geographical areas or individual pieces of equipment for immediate action and in ensuring that equipment is utilized to its maximum capability. Utilities can assess the variable profitability of service territories, since an electric utility’s profitability depends on the rate of return allowed by its state regulatory agency, and acceptable rates of return vary from state to state. Utilities may also have to fund new customer connections or system improvements without adequate incremental cash flow; if adjustments are not permitted in the rate base, the utility may not be able to afford such investments. Utilities also struggle with low allowed returns on investment (ROI). Depending on the state and its regulatory environment, some states allow a higher ROI than others, and utilities want to maximize ROI in the states that allow them to do so.

Asset management tools can also assist in work prioritization. With “smart” devices, utility employees will have up-to-date information reporting or predicting probable equipment failures and their possible consequences. This information will allow utilities to prioritize their workload. Similarly, smart tools will allow utilities to accurately forecast operational and maintenance budgets, capital forecasts, and required resources. Predicting future revenue is also very challenging, with many unknowns: Will costs be recovered in the rate base? Will load growth trends continue in certain areas? Asset management tools can assist in producing more accurate revenue forecasts. Most importantly, utilities will be able to foresee the consequences of deferred maintenance, make informed business decisions, and develop contingency plans that are actionable because they are based on real information rather than assumptions alone.

3.9.1.4  Regulatory

There are numerous regulatory challenges faced by electric utilities, mainly with respect to electrical power distribution system reliability. Multiple regulatory bodies apply different measuring and reporting criteria for electrical distribution reliability indices, such as SAIFI, SAIDI, CAIDI, MAIFI, and others; even the same index may be computed with a different formula from state to state. Obtaining accurate, complete, and timely information for reporting purposes is also a challenge. Furthermore, managing distributed generation and renewable energy sources, momentary interruptions, and other operational challenges requires complex technical and regulatory insight. Actions that utilities can take to implement a more effective asset management program include working proactively with regulators to help them understand the challenges of regulation in a smarter grid environment.
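
For a concrete reference point, the sketch below computes the three most commonly cited indices from a list of sustained-outage events, following the widely used IEEE 1366 formulations; the event data and function name are illustrative only.

  # Minimal sketch of IEEE 1366-style reliability indices.
  # Assumes a simple list of sustained-outage events; names are illustrative.
  def reliability_indices(outages, customers_served):
      """Each outage is (customers_interrupted, duration_minutes)."""
      total_interruptions = sum(n for n, _ in outages)
      total_customer_minutes = sum(n * d for n, d in outages)
      saifi = total_interruptions / customers_served     # interruptions per customer
      saidi = total_customer_minutes / customers_served  # minutes per customer
      caidi = saidi / saifi if saifi else 0.0            # minutes per interruption
      return saifi, saidi, caidi

  # Example: 50,000 customers served, three sustained outages in the period.
  events = [(1200, 90), (300, 45), (4500, 120)]
  saifi, saidi, caidi = reliability_indices(events, customers_served=50_000)
  print(f"SAIFI={saifi:.2f}  SAIDI={saidi:.1f} min  CAIDI={caidi:.1f} min")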

3.9.2  Optimizing Asset Utilization

Electric utilities can optimize assets by measuring operating efficiency and performance improvement. These measurements are made possible by smart grid technologies. There are numerous ways that electric utilities can pursue optimal asset utilization, from the power plants all the way to the electrical consumers’ homes. They include the following:

  1. Electric utilities can operate the assets they already own for as long as possible. They can retire inefficient equipment and maximize energy throughput of existing assets. Electric utilities can spend capital funds to build new revenue-producing assets and invest maintenance funds to support the legacy system. Some older equipment may not accommodate new technologies that enable predictive maintenance; these assets will still need to be maintained, but manually. Implementing a communications infrastructure for asset management may make financial sense, given the benefits of the increased volume of asset monitoring data it makes available. Electric utilities should understand that the age of revenue-producing assets matters less than their condition. They should understand how often equipment fails in service and how often a power plant or power delivery system is out of service due to unplanned events.
  2. Utilities should have a solid standards strategy for work practices, design, construction, materials, and equipment specifications.
  3. Utilities should have an effective supply chain that includes consolidated supplier arrangements.
  4. Utilities should perform long-range system planning. System planners need to know the options available to optimize asset loading. Asset data are a knowledge base for system planners and engineers. These data can show different ways to optimize loads that would allow more efficient load management. Examples of these data include an energy supply plan, an infrastructure risk analysis (provides information on over- or underutilized assets and information for maintenance and replacement planning), and a delivery expansion plan (provides information for targeted economic development).
  5. A human asset plan should be in place for workforce optimization. This plan would contain a workload assessment that would ensure that employees are fully utilized and maintain the proper work/life balance, while maintaining safety. This plan would also contain a skill gap assessment that would allow for augmenting system training with targeted training in order to address specific gaps in skill sets. Moreover, a training and development plan would be part of the human asset plan. It would provide appropriate tools and equipment to employees, as well as prepare employees for the challenges, technologies, and common issues they may face. It would also identify how long after training it takes an employee to reach proficient productivity levels.
  6. Maintenance procedure performance should be evaluated. Repeatable and measurable processes should be put into place that minimize and manage maintenance procedures. Tools, such as performance metrics, balanced scorecards, work activity analysis, process improvement tracking, and cost/performance models and correlations, can be used to further enhance the management of assets. Performance metrics can be tied to processes. Balanced scorecard data can be tracked, trended, and measured among various criteria such as financial, operational, safety, and employee metrics. Work activity analysis can reduce exposure to potential accidents by minimizing manual maintenance. Less manual maintenance could mean less exposure to potentially risky situations. Improvement in these processes can result from tracking and analyzing the data associated with specific processes. Furthermore, by developing cost models and process performance models, correlations can be drawn that could not be considered in the past, thanks to the availability of more data from “smart” devices.
  7. Utilities can set customer-based targets to measure customer satisfaction. Some targets can include measurements around transaction surveys, customer advocacy, perception of reliability, voice of the customer, and customer phone calls. Utilities can use transactional surveys, which periodically survey their customers to identify areas of improvement from the customer perspective. Utilities can act as customer advocates. For example, if a customer calls into a call center with an issue, utility call center employees should work as customer advocates and promote customer needs in the utility. Moreover, utilities can track the perception of reliability. They can ask customers about their power interruptions. For example, how often does a customer believe he or she is suffering inadequate electric service? They can listen and respond to the voice of the customer. The customer should be periodically queried via surveys or focus groups so that utility management can realistically assess customer needs and requirements. Utilities can improve customer satisfaction through common approaches and processes and empower personnel with knowledge to work more effectively and confidently with a customer. Most importantly, utilities can quickly provide accurate and courteous customer service.

3.9.3  Asset Management Implementation

There are five steps for implementing an efficient asset management program.

The first step is to deploy focused and effective predictive maintenance. Identifying and prioritizing defective conditions that might lead to interruptions is the key to lower cost reliability. Intervening in the deterioration process with low-cost maintenance avoids the cost of replacement and continually reduces future dependence on construction spending. Preventative maintenance is the key to lower cost and is the foundation to an effective asset management culture.

The second step is to provide lower-cost options to the cost of equipment replacement. Providing the organization with alternative choices that maintain or improve reliability, but at a lower cost than replacing the asset, requires innovation, creativity, and organizational discipline. Some ways to provide lower-cost options include the following: learning the impact of each type of expenditure on quality of service and developing plans to optimize it; setting targets for per unit cost of work; tracking overhead costs and setting targets for reduction; and allocating capital for replacements and maintenance based on territory, asset types, and regulatory environment. For example, a utility should look at its expenditures versus its improvement in reliability indices (such as SAIFI and CAIDI) for a particular state; in other words, the cost/reliability relationship.

The third step is to assess the condition of assets using data from smart devices. Managers can be empowered with asset condition data as a way to create accountability for making better decisions and using lower-cost options. Without detailed asset condition information, the organizational reflex is to build something new or do nothing. In order to assess the condition of assets, the asset manager needs to (1) identify asset age, condition/health, and time to failure prediction; (2) monitor performance based on key measures that include safety, reliability, customer satisfaction, and ROI; (3) develop a maintenance strategy using consistent and cost-based maintenance practices; and (4) plan any required additions, plan to address underutilized assets, and plan for asset replacements and life extensions.
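
To make the condition-assessment step concrete, here is a minimal, purely hypothetical “health index” sketch that combines normalized age, loading, and defect measures into a single score for ranking maintenance candidates; the weights and inputs are assumptions, not an industry standard.

  # Hypothetical asset health index: 100 = healthy, 0 = replace now.
  def health_index(age_years, expected_life_years, pct_loading, defect_count):
      age_score = min(age_years / expected_life_years, 1.0)  # 0 (new) .. 1 (end of life)
      load_score = min(pct_loading / 100.0, 1.5) / 1.5       # saturates at 150% loading
      defect_score = min(defect_count / 5.0, 1.0)            # open inspection findings
      risk = 0.4 * age_score + 0.35 * load_score + 0.25 * defect_score
      return round((1.0 - risk) * 100)

  # A 38-year-old transformer at 105% loading with two open defects:
  print(health_index(age_years=38, expected_life_years=40,
                     pct_loading=105, defect_count=2))       # -> 28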

The fourth step is to implement a balanced scorecard evaluation. A balanced scorecard can be used to track the performance improvements based on the asset management approach. Some possible measures that can be used in each area include the following:

  1. Financial performance: Performance measures can include added shareholder value and O&M per line mile.
  2. Customer service: Performance measures can include percent of customers very satisfied, number of public utility commission (PUC) justified complaints, number of PUC violations, and CAIDI.
  3. Operational performance: Performance measures can include SAIFI, percent of jobs completed within plus or minus percent of estimate, percent of meters read, and percent of calls answered within a certain time period.
  4. Employee performance: Performance measures can include motor vehicle incident rate, average employee satisfaction survey, lost time incident rate, and unscheduled hours off per employee.

Finally, the fifth step is to have management commitment and controls in place. Changing a culture that evolved from decades of managing in a cost plus environment is a daunting challenge. Therefore, leadership support for key initiatives is imperative for successful implementation. A clear vision, continual communication, and actions that support the plan are all requirements for success. Controls should be in place to restrict the old activities that add little value to the organization. Low-cost, efficient, and measurable reliability improvements are needed. Success takes ownership, passion, and commitment from senior management, and it requires accountability from everyone. Success takes leadership, and success takes time.

3.9.4  Where Smart Grid Meets Business: The Electric Utility Perspective on Asset Management

Asset utilization can be maximized by operating existing assets efficiently. Such operation involves the following steps:

  1. Identification of stressed assets
  2. Relieving the stress on those assets to improve their operation
  3. Working around aging assets

In order for performance to be improved, an electric utility must ask itself numerous questions related to its operations. These questions should be asked in terms of capacity planning, investment planning, maintenance and replacement planning, design, regulatory strategy, customer service processes, and work and resource management.

3.9.4.1  Asset Condition Monitoring

The direct way to monitor asset condition is with sensors that measure power flow and other conditions (such as oil temperature) of the asset. Such sensors may be monitored in real time through a SCADA system or off-line by periodically collecting the measurements they accumulate. The information from these sensors is then analyzed to assess asset utilization and stress.
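
As a simple illustration of off-line condition screening, the sketch below flags assets whose collected readings exceed limits; the reading fields and alarm thresholds are assumed for illustration and do not represent any particular SCADA product.

  # Flag assets whose latest readings exceed assumed limits.
  readings = [
      {"asset": "XFMR-101", "oil_temp_c": 92.0, "load_mva": 48.0, "rating_mva": 50.0},
      {"asset": "XFMR-102", "oil_temp_c": 71.5, "load_mva": 55.0, "rating_mva": 50.0},
  ]
  OIL_TEMP_LIMIT_C = 90.0  # assumed alarm threshold

  for r in readings:
      alarms = []
      if r["oil_temp_c"] > OIL_TEMP_LIMIT_C:
          alarms.append("oil temperature high")
      if r["load_mva"] > r["rating_mva"]:
          alarms.append("loaded above rating")
      if alarms:
          print(r["asset"] + ": " + "; ".join(alarms))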

3.9.4.1.1  Asset Loading

Asset loading is used to measure the stress on any given asset, such as a transformer or an underground cable. The percentage loading is expressed as

$\text{Percentage loading} = \dfrac{\text{MVA (or Amps) flow}}{\text{MVA (or Amps) rating}} \times 100$

Assets are stressed when their percentage loading is high; loading above 100% (i.e., above the rated value) degrades asset life expectancy. Identifying system loading and preventing overloading contribute significantly toward increasing the life of assets.
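
A minimal expression of the formula, with hypothetical values:

  def percentage_loading(flow, rating):
      """Percentage loading of an asset; flow and rating in MVA (or Amps)."""
      return flow / rating * 100.0

  print(percentage_loading(flow=55.0, rating=50.0))  # 110.0 -> stressed, above rating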

3.9.4.1.2  Improving Load Factors

Typically the total system load peaks in the evening (Figure 3.152). Transmission and distribution (T&D) systems are designed to handle this peak load demand. Effective management of this peak load has significant impact on the utilization of system assets. The system load factor defines the ratio between the average system load and the peak load. Load factor is defined as

$L_f = \dfrac{L_{avg}}{L_{peak}}$

where the average load $L_{avg}$ is expressed as

$L_{avg} = \dfrac{\text{Total kWh}}{\text{Number of hours}}$

Load factors can be calculated for the entire system, or for individual assets, such as transformers. Load factors measure the average utilization of any asset or a group of assets. Assets that have low load factors are considered underutilized. Assets that have high load factor are better utilized assets.
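
The sketch below applies these definitions to an invented 24-hour load profile:

  # Load factor from an hourly load profile (1 kW for 1 h = 1 kWh).
  profile_kw = [620, 590, 575, 570, 580, 640, 760, 890, 940, 960, 970, 990,
                1000, 990, 970, 980, 1050, 1250, 1400, 1380, 1200, 1000, 820, 700]

  l_avg = sum(profile_kw) / len(profile_kw)  # total kWh / number of hours
  l_peak = max(profile_kw)
  print(f"Lavg={l_avg:.0f} kW  Lpeak={l_peak} kW  Lf={l_avg / l_peak:.2f}")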

Figure 3.152   Typical daily electric load curve of an electric utility.

Reducing peak loads on assets improves their load factor and hence their utilization. For example, consider a scenario where the peak load on an asset is reduced by 10% while keeping the total kWh loading the same:

$L_f' = \dfrac{L_{avg}}{0.9\,L_{peak}} = 1.1111\,L_f$

As shown in the earlier equation, this results in load factor improvement of over 11%.
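
The arithmetic can be verified numerically; the load values are illustrative:

  # With total kWh (hence Lavg) unchanged and the peak cut to 0.9x,
  # the load factor rises by 1/0.9 - 1 = 11.11%.
  l_avg, l_peak = 800.0, 1400.0
  lf_before = l_avg / l_peak
  lf_after = l_avg / (0.9 * l_peak)
  print(f"improvement = {(lf_after / lf_before - 1) * 100:.2f}%")  # 11.11%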

Present utility planning and operational practices are based mainly on peak load. Heavily loaded assets need to be replaced or upgraded. This upgrade or replacement cost can be avoided if the loading on the asset is reduced. By using smart grid technologies and the information obtained through systems such as AMI and advanced distribution automation, a much better understanding of loading and loading durations can be obtained. By reducing system and feeder peak load and flattening load profile, the load factor of the existing system assets can be significantly improved. Thus the system can be used to supply more demand.

Several approaches can be used to reduce asset loading:

  1. Load shifting: Shifting load from more heavily loaded parts of the system to more lightly loaded parts, for example, on distribution systems by switching operations that change the configuration of feeders. This can be performed on a seasonal, weekly, or daily basis depending on loading patterns.
  2. Phase balancing: Balancing the loading on phases, most typically on distribution feeders (see the loss sketch after this list). This has dual benefits:
    • Better balanced phases reduce I²R losses on a feeder and can address potential peak demand issues.
    • Load moves from heavily loaded phases to less loaded phases, improving asset utilization.
  3. Dynamic rating: The maximum loading limits of power equipment such as transmission lines and transformers vary with how long loading levels are sustained. Short-term overloading of these devices is acceptable provided that such load levels do not occur frequently and enough time is allowed for the equipment to cool down. Similarly, external temperature and weather conditions affect how much an asset can be loaded. Dynamic rating takes these factors into consideration during system operation.
  4. Demand management: Demand management can be used to effectively manage system peak loads and improve asset utilization. Smart technologies, such as distributed energy sources, battery storage, or load management, can help manage peak demand at specific points of the power delivery system.
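
As a rough illustration of the phase-balancing benefit, the sketch below compares conductor losses for unbalanced and balanced phase currents carrying the same total load; the resistance and current values are invented, and neutral-return losses are neglected.

  # I^2 * R losses fall when the same total current is spread evenly across phases.
  R = 0.2  # ohms per phase conductor (assumed equal for all phases)
  unbalanced = [300.0, 180.0, 120.0]  # amps per phase, 600 A total
  balanced = [200.0, 200.0, 200.0]    # same total, balanced

  def losses_kw(phase_amps):
      return sum(i ** 2 * R for i in phase_amps) / 1000.0

  print(f"unbalanced: {losses_kw(unbalanced):.1f} kW, balanced: {losses_kw(balanced):.1f} kW")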

3.9.4.1.3  Lowering System Losses

Electrical losses in line sections and transformers occur due to the current flowing through them. For any given electrical element that is carrying an AC current, the active power electrical loss is expressed as

$P_{loss} = I^2 R$

where

$I$ is the current flowing through the element in Amperes

$R$ is the resistance of the element in Ohms

$P_{loss}$ is expressed in Watts

As seen from the equation, the loss varies as the square of the current flowing through the element. Therefore, as current increases, the amount of losses also increases. The electrical loss is wasted as heat in the element. When electrical assets are overloaded, their losses significantly increase, thus causing heat-related damage. Managing the flow of current through an asset improves the health of the asset. Reducing losses also reduces waste and has direct financial benefits.
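
The square-law relationship is easy to see numerically:

  # Doubling the current through an element quadruples its loss.
  R = 0.15  # ohms, illustrative
  for i_amps in (100.0, 200.0, 400.0):
      print(f"I={i_amps:>5.0f} A  Ploss={i_amps ** 2 * R / 1000:.1f} kW")  # 1.5, 6.0, 24.0 kW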

Reducing system losses can produce significant revenue savings for a utility. For example, assume a utility with a peak load of 2500 MW, a load factor of 52%, and an average generation cost of 6 c/kWh. Assume an average distribution system loss of around 5%. If this loss is reduced by 10%, from 5% to 4.5% of the total system load, the annual saving for the utility is calculated as shown:

Annual cost of power = 2,500 × 1,000 × 52% × 8,760 × $0.06/kWh = $683,280,000
Annual cost of losses = $683,280,000 × 5% = $34,164,000

Savings from reducing losses by 10% = $34,164,000 × 10% = $3,416,400
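
The figures above can be reproduced directly:

  # 2500 MW peak, 52% load factor, 6 c/kWh, 5% losses, losses then cut by 10%.
  peak_kw = 2_500 * 1_000
  annual_kwh = peak_kw * 0.52 * 8_760
  annual_cost = annual_kwh * 0.06      # $683,280,000
  cost_of_losses = annual_cost * 0.05  # $34,164,000
  savings = cost_of_losses * 0.10      # $3,416,400
  print(f"${annual_cost:,.0f}  ${cost_of_losses:,.0f}  ${savings:,.0f}")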

Reducing system losses has direct impact on asset loading. Reduced losses reduce the power flow on the assets. In some cases, the reduction of losses may avoid or delay the need for reconductoring lines and feeders to increase the available power delivery capacity.

3.9.4.1.4  Reducing Outage Frequency and Duration

Reducing outage frequency and duration has the following direct benefits for asset management:

  • Improved system reliability (SAIDI, SAIFI, CAIDI, and other reliability indices)
  • Better utilization of assets
  • More loads served and hence improved revenue

Typical approaches to reducing outages include the following:

  • Fault detection, isolation, and service restoration (FDIR): Installing automated switches and control devices in the system to automatically restore as much of the system as quickly as possible. (Covered in more detail elsewhere in this book.)
  • Vegetation management: Managing vegetation around overhead lines to minimize the frequency of related faults.

3.9.4.1.5  Improved Investment Planning

Asset investments are required for increasing capacity and for maintenance and replacement programs. Project expenditures could include increased tree-trimming budget, increased pole inspection and maintenance, lightning protection upgrades, infrared inspection programs, animal remediation programs for substations, tap fusing programs, etc.

Utilities should determine per unit capacity costs and volumes (a small illustration follows the list) for

  • The cost per installed MW or capital dollars to serve a MW of electricity
  • The cost per delivered MW or O&M dollars spent to serve a MW of electricity
  • The cost per MW transformed or O&M cost of transformers per MW of electricity
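
A small, purely illustrative calculation of these per unit costs (all inputs invented):

  capital_spend = 120e6  # capital dollars per year
  om_spend = 45e6        # O&M dollars per year
  xfmr_om_spend = 9e6    # transformer O&M dollars per year
  installed_mw, delivered_mw, transformed_mw = 800, 740, 760

  print(f"capital $/installed MW: {capital_spend / installed_mw:,.0f}")
  print(f"O&M $/delivered MW:     {om_spend / delivered_mw:,.0f}")
  print(f"transformer O&M $/MW:   {xfmr_om_spend / transformed_mw:,.0f}")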

Utilities need to take into account the existing infrastructure before developing optimal investment strategies for smart grid implementations. The investment analysis is challenging because of the large number of technologies, operational practices, and enterprise application software within the electric utility, where some large investments typically require more than a decade to fully implement. Smart grid investment planning and analysis should include a short-term as well as long-term, comprehensive, enterprise-wide, detailed cost/benefit analysis. Over time, utilities must make good use of their resources and determine relationships or parameters that have the greatest impact on investment returns.

3.9.4.2  More Effective Management of the Workforce

The organization must be in alignment in terms of its vision and values. People need to understand how they contribute to the success of the business and be rewarded for their efforts. Communication by top leadership is essential. People, integrated systems, and best in class processes make up the foundation of a solid asset management program, but people make the difference.

With an effective and efficient asset management program in place based on smart grid innovations and technologies, grid maintenance activities in the field can be achieved with much greater ease and accuracy. An efficient work schedule can be determined for field services, design, engineering, and other important groups within the electric utility. Also, the work schedule will be more accurate and allow timely completion of work since it will be based on real-time information on the processes. In addition, it will be easier to maintain organizational alignment and identify groups accountable for organizational processes or functions. Integrated asset management programs will help institute required performance incentives and prepare employees for organizational changes.

3.9.4.2.1  Mobile Workforce Management

Geographic information systems (GIS) are a long-accepted technology that has proven its value repeatedly to thousands of utilities worldwide. Likewise, mobile workforce management (MWFM) (also known as field force automation [FFA]) has enjoyed similar success and utilization over many years. The former originates essentially in an office environment, with paper maps then produced and used by mobile crews and technicians. The latter occurs primarily in the field, used by mobile crews and technicians but also by dispatchers and managers. Over the last 5 years in particular, these two technologies have been converging in ways that are powerfully useful in contributing to both improved workforce productivity and more effective asset management. To better appreciate this convergence and its likely direction over the next 5 years, it is helpful first to review how MWFM has developed to where it is today.

Utilities conduct business with a unique set of operating characteristics that have evolved over decades, beginning with paper-based operations and evolving into complex IT-driven business processes. Aiding in this transformation is the arrival of improved workforce management (or MWFM) technologies and systems that have helped utilities more efficiently keep the lights on, the water running, and the gas flowing.

MWFM represents the evolution of work and workforce management in a utility environment and the associated technologies and strategies for managing the three main divisions of work (i.e., customer and meter services including outage response, inspection and maintenance, and construction). At the same time, current business trends and economic and environmental drivers are pushing utilities to more creatively manage all phases of work from an enterprise perspective in order to achieve the company’s ultimate objectives.

With outage response broken out separately, there are four overarching divisions of work that any utility must manage (a minimal data-model sketch follows the list):

  1. Customer and meter services—This division includes everyday field work typically completed within a shorter duration (hours, days); it is often considered "unplanned" and "undesigned" work that the utility must manage. Examples include responses to customer inquiries, such as a new service hookup or a gas or water leak.
  2. Outages—This division of work includes responses to outage conditions, such as a downed power line or broken main.
  3. Inspection and maintenance—This division of work includes fieldwork around managing and maintaining assets, such as regular maintenance inspection of distribution assets/equipment and dispatching work crews to repair or replace key distribution assets.
  4. Construction—This division of field work includes complex work that must be planned, designed, scheduled, and executed over longer periods of time (i.e., days, weeks, months). Long-cycle work is often resource-intensive, involving multiple stakeholders and the management of compatible units. Examples include construction of new infrastructure, including mains, points, and spans, and replacement of aging infrastructure and corresponding assets.
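
As referenced above, here is a minimal data-model sketch for tagging field work by division, with the geospatial attributes that the MWFM/GIS convergence relies on; the class and field names are assumptions, not a product schema.

  from dataclasses import dataclass
  from enum import Enum

  class Division(Enum):
      CUSTOMER_METER = 1   # short-cycle, unplanned
      OUTAGE = 2
      INSPECTION_MAINT = 3
      CONSTRUCTION = 4     # long-cycle, designed and scheduled

  @dataclass
  class WorkOrder:
      order_id: str
      division: Division
      lat: float           # location ties the order into GIS views
      lon: float
      est_duration_hours: float

  wo = WorkOrder("WO-1001", Division.OUTAGE, 35.23, -80.84, 3.5)
  print(wo.division.name, wo.est_duration_hours)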

MWFM from its origin has focused on work type #1 mentioned earlier, customer and meter services, including outages. As such and given the available technologies of the day, there was limited, if any, GIS support available for this type of work.

The challenges facing utilities today can, in part, be addressed by holistically managing the three divisions of work described earlier. While aging workforce and aging infrastructure issues are accepted as major challenges by utility management, the utilization of MWFM technology to help efficiently address these business process changes can be a distinct advantage.

When addressing the dual threat of aging assets and infrastructure in the field, utilities must leverage integrated asset and workforce management technologies to capture and transmit data across the enterprise. The advantage of all field users living in a single system stretches across the three divisions of work. For example, when a field technician repairs a downed power line, he or she enters data via a mobile device, which is shared with customer service, dispatch, scheduling, and other departments. When it comes time to inspect and/or maintain that same power line, the field crew can access the prior repair information on-site. In turn, this data can be leveraged when planning and preparing for larger-scope work (long cycle). It is easy to see how access to this information, especially in geospatial form, can ultimately be a cost-saving tool when it comes to deploying both human capital and physical resources.

A recent analysis of the workforce management industry indicated that, given various market drivers in play, mobilizing the asset-oriented/long-cycle work remains an untapped opportunity for utilities. Most utilities do not have a viable option across short- and long-cycle work and often manage mobility with multiple MWFM systems versus an enterprise application. Also, an aging workforce continues to create challenges that highlight the importance of enterprise-wide technologies to support business process execution. In addition, aging infrastructure issues are driving companies to focus resources on asset-intensive work execution. These combined forces set the stage for companies to transition their focus toward improving field productivity through MWFM.

A clear strategy for achieving both business effectiveness and peak operational performance is the deployment of an enterprise-wide MWFM system. Most solutions in existence today provide ample functionality to manage singular parts of the three divisions of work; however, the majority handle only specific pieces, such as customer/meter service and inspection and maintenance, or are unable to handle construction work in conjunction with inspection and maintenance. Companies need an MWFM solution that can support utilities in managing all three divisions of work, but to be truly effective this solution must be geospatially enabled across the board.

Significant functionality is required to handle all three key divisions. Many utilities have carved off only part of the MWFM solution; for instance, some use mobile devices only for deploying customer work, or only schedule and dispatch inspection and maintenance work. With an enterprise MWFM solution offering full GIS capability, a utility can operate a single enterprise application in which mobility, scheduling, dispatch, and data transparency are carried through every layer of work.

A key opportunity for organizations will be the enterprise MWFM functionality that can link the customer, meter, outage, inspection and maintenance, and construction work with GIS support in the field to decrease overall IT spend and increase companywide productivity.

Utilities are continually looking to improve service, react effectively to emergencies, and identify opportunities in the field to improve service and operations. By definition, work consists of the day-to-day operations that define a utility. In the future, the work schedule will be intrinsic to the holistic operational plan. Responses to customer service inquiries, such as a new service hookup or a gas or water leak, can be combined and balanced with inspection and maintenance work.

For example, responses to customer care can be combined with managing other important assets in the field that are in close proximity to the customer care request. A mobile system with GIS can provide visibility into the work and work type, along with the crew and contractor information. This increased visibility will result in better customer service, workforce efficiency, and, ultimately, cost savings.
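
As a sketch of that proximity idea, the following pairs a customer request with nearby inspection orders using great-circle distance; the coordinates, identifiers, and 1 km radius are invented.

  from math import radians, sin, cos, asin, sqrt

  def km_between(a, b):
      """Haversine distance in km between two (lat, lon) points in degrees."""
      (lat1, lon1), (lat2, lon2) = ((radians(x), radians(y)) for x, y in (a, b))
      h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
      return 2 * 6371 * asin(sqrt(h))

  request = (35.227, -80.843)  # customer care location
  inspections = {"pole-774": (35.231, -80.839), "xfmr-210": (35.301, -80.752)}
  nearby = {k: v for k, v in inspections.items() if km_between(request, v) < 1.0}
  print(nearby)  # candidates to bundle with the customer visit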

As much of the utility infrastructure is aging in parallel with a maturing and retiring workforce, utilities are entering a proactive phase of managing both assets and people with improved technology. For years, most work has been managed by silos of IT systems supporting only short-cycle (customer and meter services) work. With modern MWFM systems, utilities can now more directly accommodate, plan for, and have visibility into the status of inspection and maintenance duties.

The next focus of development will be the successful management of construction in addition to improving customer, inspection, and maintenance work. Installing the next mile, managing new neighborhood designs and implementations, and maintaining this infrastructure will require enhanced MWFM capabilities with even stronger GIS components. MWFM combined with full-featured GIS capabilities is going to be a major field force transformative process that utilities and communications companies will face in the twenty-first century.

It meets the challenge of delivering increased customer service at lower cost that is, and will remain, a hallmark of field operations, while at the same time providing significant configurable flexibility at the point of service provision in the field. Through a combination of software solution implementation and professional services on the organizational and procedural aspects of field operations, this MWFM/GIS solution can produce measurable and repeatable benefits for virtually all utility companies that deploy large and active field service organizations.

3.9.4.2.2  Dual Threat: Aging Infrastructure and Aging Workforce Call for Integrated Asset and Workforce Management

Few people in the utility T&D business need convincing that the above- and below-ground asset infrastructure, be it electric, gas, or water, is a critical component that today shows unmistakable signs of age. And most utility managers understand that the field workforce tending to that infrastructure—and the customers attached to it—is also aging and retiring at an increasing rate. These issues of aging infrastructure and aging workforce are often examined independently in articles, papers, and presentations to industry forums. Although richly worthy of attention in their own right, it is the interaction of the two, the dual threat of asset infrastructure aging simultaneously with the workforce maintaining it, that should be a particular concern of utility management, regulators, employees, and customers.

When a utility field technician or crew is dispatched to complete a series of inspections and maintenance orders at, for example, a substation, one can assume that they are experienced and fully trained to do so. With the median age of utility industry employees in the United States currently at 49, this would certainly be a valid assumption. Indeed, for the utility industry, there is a large group of well-experienced, well-trained current employees in the 45–54-year-old range. In executing the various procedures and tests involved in inspections and maintenance work, these employees are drawing directly on the considerable expertise and familiarity with the assets they have developed over some 25 years of work in the field.

Although inspections and maintenance procedures are well documented and the subject of frequent refresher training for crews, the personal knowledge component is just as important. Knowing the peculiarities of a given asset type, even down to the model number, is a valuable additional aid to properly conduct an inspection and, if necessary, make repairs. As these lead technicians and crew chiefs age, however, an increasing number are taking advantage of utility retirement packages to depart on or, in some cases, before their scheduled retirement dates. When such individuals leave, their individualized knowledge goes with them. And with strict cost controls in place at most utilities, hiring replacements is a lengthy and demanding process. Even once hired, the time needed to achieve the same level of personal knowledge is measured in many years, not months. With the increasing use of contractors (also exposed to the aging workforce factor), utilities might not even be in a position to hire new employees who will eventually be developing the in-depth skills needed for field work.

So when a relatively new lead technician or crew goes out next year to do that same inspection series at a substation, the personal expertise brought out to the job with them may be significantly lower than in prior years. No doubt they will exert their best efforts, will follow documented procedures, and will be subject to on-site personal review by a field supervisor. But that innate ability to sense what the status really is for a given piece of the asset infrastructure will be lessened.

At the same time as this aging workforce factor is coming into play, those same assets and infrastructure being maintained are also aging, thus requiring more and lengthier inspections and maintenance procedures and, in some cases, replacement. Newspaper headlines and TV news stories the last 5 years have all too often featured a failure, sometimes spectacular, of a given piece of electric, gas, or water infrastructure that at the very least led to service interruptions or, more dramatically, produced significant damage and human injury as a result of its failure. Much of that infrastructure is composed of operating assets that utilities of all types must continually inspect, maintain, and replace to ensure reliable performance. And while a good portion of the nation’s utility asset infrastructure is in satisfactory condition, an increasing percentage of it is nearing (or exceeding) its planned operating life and therefore requiring field work to maintain or replace it. In such work, utility employee personal knowledge of individual assets