The advent of power electronic switching is introducing more and more non-linear loads into the low-voltage grid. Besides generating harmonic currents in the frequency range below 2 kHz, these non-linear loads are also responsible for current emission in the range of 2 kHz to 150 kHz, commonly known as supraharmonic emission. Supraharmonic currents mainly flow between nearby appliances and heavily influence the overall emission of neighboring devices. This paper presents an analysis of the supraharmonic interaction between a photovoltaic inverter and an electric vehicle. It has been observed that intermodulation distortion arises as a result of the interaction between the different switching frequencies used by the devices. Additional household equipment was then added alongside the photovoltaic inverter and electric vehicle to observe its effect on the intermodulation distortion. All measurements were conducted in a controlled laboratory environment imitating a domestic customer.
This paper continues the pursuit of a deeper understanding of the transient stability of low-frequency AC railway power systems operated at 16 2/3 Hz synchronously with the public grid. The focus is set on the impact of different load models. A simple constant-current load model is proposed and compared to a previously proposed and studied load model in which the train's active power is regulated.
The study and comparison are made on exactly the same cases and grid as with the previously proposed, more advanced load model. The railway grid is equipped with a low-frequency AC high-voltage transmission line which is subjected to a fault. The study is limited to railways fed by different distributions of RFC (rotary frequency converter) types. Both AT (auto transformer) and BT (booster transformer) catenaries are considered.
The RFC dynamic models are essentially Anderson-Fouad models of two synchronous machines mechanically coupled by their rotors being connected to a common shaft.
The differences in load behaviour between the proposed constant-current load model and the previously proposed and studied voltage-dependent active power load model are analyzed and described in the paper.
This paper continues the pursuit of a deeper understanding of the transient stability of low-frequency AC railway power systems operated at 16 2/3 Hz that are synchronously connected to the public grid. Here, the focus is set on such grids with a low-frequency AC high-voltage transmission line subject to a fault. The study here is limited to railways fed by different distributions of Rotary Frequency Converter (RFC) types. Both auto transformer (AT) and booster transformer (BT) catenaries are considered. No mixed model configurations in the converter stations (CSs) are considered in this study. Therefore, only interactions between RFCs in different CSs, and between RFCs, the fault, and the load, can take place in this study. The RFC dynamic models are essentially two Anderson-Fouad models of synchronous machines mechanically coupled by their rotors being connected to the same mechanical shaft. Besides the new cases studied, a new voltage-dependent active power load model is also presented and used in this study.
In present-day railway power supply systems using an AC frequency lower than the 50/60 Hz of the public power system, high-voltage overhead transmission lines are used as one measure of strengthening the railway power supply grids. This option may be economically beneficial compared to strengthening the grid purely by increasing the density of converter stations or increasing the cross-section areas of the overhead catenary wires. High-voltage AC transmission lines in the railway power supply system allow larger distances between converter stations than would otherwise be possible for a given amount of train traffic. Moreover, the introduction of AC transmission lines implies reduced line losses and reduced voltage level fluctuations at the catenary for a given amount of train traffic. However, due to the increased public and governmental resistance to additional overhead high-voltage AC transmission lines in general, different alternatives will be needed for the future improvement and strengthening of railway power systems. For a more sustainable transport sector, the share and amount of railway traffic needs to increase, in which case such strengthening becomes inevitable. Earlier, the use of VSC-HVDC transmission cables has been proposed as one alternative to overhead AC transmission lines. The main benefits of VSC-HVDC transmission are that control of power flows in the railway power system is easier and that less converter capacity may be needed. Technically, VSC-HVDC transmission for railway power systems is a competitive solution, as it offers a large variety of control options. However, there might be other, more economical alternatives for reducing the overall impedance in the railway power system. In public power systems with a frequency of 50/60 Hz, the excess reactive power production of lightly utilized cables poses an obstacle to replacing overhead transmission lines with cables.
In low-frequency AC railway power systems, the capacitive properties are less significant, allowing longer cables than in 50/60 Hz power systems. Moreover, in converter-fed railways, some kind of reactive compensation will automatically be applied during low load. At each converter station, voltage control is already present, following the railway operation tradition. Therefore, in this paper, we propose AC cables as a measure for strengthening low-frequency AC railway power systems. The paper compares the electrical performance of two alternative reinforcement cable solutions with the base case of no reinforcement. The options of disconnecting or toggling the cables at low load, as well as the automatic reactive compensation by converter voltage control, are considered. Losses and voltage levels are compared for the different solutions. Investment costs and other relevant issues are discussed.
The energy demand of data centers is increasing globally with the increasing demand for computational resources to ensure the quality of services. It is important to quantify the resources required to comply with the computational workloads at the rack level. In this paper, a novel reliability index called loss of workload probability is presented to quantify the rack-level computational resource adequacy. The index defines the right-sizing of the rack-level computational resources that complies with the computational workloads and the desired reliability level of the data center investor. The outage probability of the power supply units and the workload duration curve of the servers are analyzed to define the loss of workload probability. The workload duration curve of the rack, and hence the power consumption of the servers, is modeled as a function of server workloads. The server workloads are taken from a publicly available data set published by Google. The power consumption models of the major components of the internal power supply system are also presented, which show that the power loss of the power distribution unit is the highest among the components of the internal power supply system. The proposed reliability index and the power loss analysis could be used for rack-level computational resource expansion planning and to ensure energy-efficient operation of the data center.
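The idea of combining power supply unit (PSU) outage probabilities with a workload duration curve can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the enumeration over PSU outage states, the function name, and all capacities and probabilities below are illustrative assumptions.

```python
import numpy as np

def loss_of_workload_probability(capacities, unavailability, demand_curve):
    """Probability that the surviving PSU capacity cannot cover the rack demand.

    capacities     -- rated output of each PSU in the rack (kW)
    unavailability -- forced-outage probability of each PSU
    demand_curve   -- sampled workload duration curve of the rack (kW)
    """
    n = len(capacities)
    lowp = 0.0
    # Enumerate all PSU outage states (feasible for the few PSUs in a rack).
    for state in range(2 ** n):
        p_state, available = 1.0, 0.0
        for i in range(n):
            if state >> i & 1:            # PSU i has failed
                p_state *= unavailability[i]
            else:
                p_state *= 1.0 - unavailability[i]
                available += capacities[i]
        # Fraction of time the demand exceeds the surviving capacity.
        p_shortfall = np.mean(np.asarray(demand_curve) > available)
        lowp += p_state * p_shortfall
    return lowp
```

For example, two 5 kW PSUs with 1% unavailability each and a demand curve sampled at {4, 6, 8, 3} kW give a loss of workload probability of about 0.01, dominated by the single-PSU-outage states.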
Hyper-scale data centers are used to host cloud computing interfaces to support the increasing demand for storage and computational resources. For achieving specific service level agreements (SLAs), this infrastructure demands highly available cloud computing systems. It is necessary to analyze server failure incidents to determine how to improve the reliability of the system, since computational interruptions cause financial losses for the data center owners. Regarding the reliability analysis, it is important to characterize the time to failure and the time to repair of the servers. In this paper, a publicly available data set from a Google cloud-cluster data center will be analyzed to find the distribution functions for the time to failure and the time to repair of the servers in cloud-based data centers.
The number of data centers and their energy demand are increasing globally with the development of information and communication technology (ICT). Data center operators are facing challenges in limiting the internal power losses and the unexpected outages of the computational resources or servers. The power losses of the internal power supply system (IPSS) increase with the increasing number of servers, which causes a power supply capacity shortage for the devices in the IPSS. The aim of this paper is to address the outage probability of the computational resources or servers due to the power supply capacity shortage of the power distribution units (PDUs) in the IPSS. The server outage probability at rack level defines the service availability of the data center, since the servers are its main computational resource. The overall availability of the IPSS and the power consumption models of the IPSS devices are also presented in this paper. Quantitative studies are performed to show the impacts of the power losses on the service availability and the overall availability of the IPSS for two different IPSS architectures, which are equivalent to the Tier I and Tier IV models of the data center.
The internal power conditioning system (IPCS) in data centers is prone to cable faults that cause voltage dips and swells. The voltage dips and swells impact the power supply units (PSUs) feeding the servers. The servers connected to the PSUs restart or turn off when the input voltage falls outside the voltage-tolerance range. This paper analyses the impact of such voltage disturbances on server outages due to a single-phase fault in the IPCS. The voltage-tolerance range of the PSUs is considered according to the guideline of the Information Technology Industry Council (ITIC). The voltage dip propagates from the fault location to the healthy load sections, while voltage swells are also observed due to sudden load reduction. Moreover, the current-limitation mode of the inverter in the uninterruptible power supply (UPS) is identified as a cause of the near-zero voltage dips experienced by the PSUs. The reliability of the data center, considering the outage probability of the servers, is finally quantified to show the impacts of the voltage dips and swells in the IPCS.
Enhancing the efficiency and reliability of the data center are the technical challenges for maintaining the quality of services for the end-users in data center operation. The energy consumption models of the data center components are pivotal for ensuring the optimal design of the internal facilities and limiting the energy consumption of the data center. The reliability modeling of the data center is also important since the end-users' satisfaction depends on the availability of the data center services. In this review, the state-of-the-art and the research gaps of data center energy consumption and reliability modeling are identified, which could be beneficial for future research on data center design, planning, and operation. The energy consumption models of the data center components in the major load sections, i.e., information technology (IT), internal power conditioning system (IPCS), and cooling, are systematically reviewed and classified, which reveals the advantages and disadvantages of the models for different applications. Based on this analysis and related findings, it is concluded that the availability of the model parameters and variables is more important than the accuracy, and that energy consumption models are often necessary for data center reliability studies. Additionally, the lack of research on IPCS consumption modeling is identified; the IPCS power losses could cause reliability issues and should be considered with importance when designing the data center. The absence of a review on data center reliability analysis leads this paper to review the data center reliability assessment aspects, which is needed to ensure the adaptation of new technologies and equipment in the data center.
The state-of-the-art of reliability indices, reliability models, and methodologies is systematically reviewed in this paper for the first time, where the methodologies are divided into two groups, i.e., analytical and simulation-based approaches. The lack of research on data center cooling section reliability analysis and on the failure data of data center components is identified as a research gap. In addition, the dependency between different load sections in the reliability analysis of the data center is also included, which shows that the service reliability of the data center is impacted by the IPCS and the cooling section.
Data centers host sensitive electronic devices such as servers, memory, hard disks, and network devices, which are supplied by power supply units. The regulated direct current (DC) output of the power supply units fluctuates with input voltage variation since they typically contain single-phase switch-mode power supplies. The voltage dips caused by faults in the internal power supply system of the data center can be large enough to violate the voltage-tolerance guideline proposed by the Information Technology Industry Council (ITIC). The output of the power supplies, and hence the operation of the servers, will be interrupted by such voltage dips. In this paper, the outage probability of the servers caused by voltage dips is analyzed for different fault locations in the internal power supply system of a data center.
In the context of the modern information technology (IT) industry, cloud computing is gaining popularity for big data handling. Therefore, IT service providers like Google, Facebook, and Amazon are expanding their technical resources by building data centers to improve the data processing and data storage facilities under the cloud service model. However, data centers consume a large amount of electrical energy. In recent years, a lot of research has been done to reduce the electrical energy consumption of data centers through high-performance computing. However, very few researchers have focused on the electrical energy consumption of the electrical components inside the data center. In this paper, a component-based electrical energy consumption modelling approach is presented to identify the losses of different components as well as their contributions to the total electrical energy consumption of the data center. The electrical energy consumption models of servers and other components are presented as functions of server utilization.
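Component-based models of the kind described above are often built on a linear server power curve between idle and peak draw. A minimal sketch under that common assumption (the wattages, rack size, and function names below are illustrative, not taken from the paper):

```python
def server_power(u, p_idle=100.0, p_peak=250.0):
    """Electrical power draw (W) of one server at CPU utilization u in [0, 1].

    Linear interpolation between idle and peak power, a first-order model
    commonly used in component-based data-center energy studies.
    """
    if not 0.0 <= u <= 1.0:
        raise ValueError("utilization must lie in [0, 1]")
    return p_idle + (p_peak - p_idle) * u

def rack_energy_kwh(utilization_trace, dt_hours=1.0, n_servers=20):
    """Energy (kWh) of a homogeneous rack over a server-utilization trace."""
    return sum(server_power(u) * n_servers * dt_hours / 1000.0
               for u in utilization_trace)
```

With these example parameters, a 20-server rack held at 50% utilization for one hour draws 175 W per server, i.e. 3.5 kWh for the rack.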
The rapid expansion of photovoltaic (PV) systems has raised voltage concerns. This paper investigates voltage variations measured at four hundred on-line PV installations in Sweden. Small (<10 kW inverter size) three-phase residential PV systems had the least impact, whereas single-phase systems had the most impact for the same amount of power injected per phase. PV systems were grouped by post-code location into urban and rural areas. Urban areas were found to be more resilient to PV-induced voltage fluctuations, with a narrower background voltage band in comparison to rural areas, indicating that PV inverter measurements can be an efficient method to empirically determine grid strength.
The distribution system planner should be able to coordinate smart grid solutions in order to find cost-effective expansion plans. These plans should be able to deal with the new system uncertainties introduced by renewable production and consumers while guaranteeing power quality and availability of supply. This paper proposes a structure for distribution system planning oriented to help the planner decide how to make use of smart solutions for achieving the described task. Here, the concept of a system planning toolbox is introduced and supported with a review of relevant works implementing smart solutions. These are collated in a way that lets the system planner foresee what to expect from their combined implementation. Future developments in this subject should attempt to theorize a practical algorithm in an optimization and decision-making context.
Capacity remuneration mechanisms were originally intended to ensure availability and continuity of supply in the power generation pool. Equivalent generation-based capacity mechanisms could be implemented to enhance and prolong the usability of the distribution grid. In particular, such capacity mechanisms would provide an alternative to traditional expansion options, leading to investment deferral. In this work, a distribution capacity mechanism designed to fit within a distribution network planning methodology will be proposed and discussed. The capacity mechanism will be outlined following similar guidelines as for the design of capacity mechanisms used in the energy-only market. The result of the design is a volume-based capacity auction for a capacity-constrained system, oriented to both active and reactive power provision.
This work presents a generic storage model (GSM) inspired by the scheduling of hydraulic reservoirs. The model for steady-state short-term (ST) operational studies interlaces with the long-term (LT) energy scheduling through a piecewise-linear future cost function (FCF). Under the assumption that a stochastic dual dynamic programming (SDDP) approach has been used to solve the energy schedule for the LT, the FCF output from that study will be processed to obtain an equivalent marginal opportunity cost for the storage unit. The linear characteristic of a segment of the FCF will allow a linear modeling of the storage unit's production cost. This formulation will help to coordinate the renewable resource along with storage facilities in order to find the optimal operation cost while meeting end-point conditions for the long-term plan of the energy storage. The generic model will be implemented to represent a battery storage and a pumped-hydro storage. A stochastic unit commitment (SUC) with the GSM will be formulated and tested to assess the day-ahead scheduling strategy of a virtual power plant (VPP) facing uncertainties from production, consumption, and market prices.
This work presents a linear solution for the short-term hydro-thermal scheduling problem linked to long-term conditions through a piecewise-linear future cost function (FCF). Given end-point conditions that conform to the long-term water releases, and given the actual reservoir conditions, a segment of a pre-built piecewise-linear future cost function will be chosen. The linear characteristic of the FCF segment will allow a linear modeling of the hydro-power plant, in a similar fashion to a thermal unit with an equivalent marginal opportunity cost. A short-term hydro-thermal coordination problem will be formulated considering parallel and cascaded hydro-reservoirs. Three study cases involving different reservoir configurations and scenarios will be computed to test the model. The results of this model coherently mimic the future-cost hydro-thermal coordination problem for the different configurations tested. Given similarities with other forms of energy storage, a new theoretical model for generic storage will be proposed and discussed.
The long-term (LT) scheduling of reservoir-type hydropower plants is a multistage stochastic dynamic problem that has traditionally been solved using the stochastic dual dynamic programming (SDDP) approach. This LT schedule of releases should be met through short-term (ST) scheduling decisions obtained from a hydro-thermal scheduling that considers uncertainties. Both time scales can be linked if the ST problem considers as input the future cost function (FCF) obtained from LT studies. With the piecewise-linear FCF known, the hydro-scheduling can be solved as a one-stage problem. Under certain considerations, a single segment of the FCF can be used to solve the schedule. From this formulation, an equivalent model for the hydropower plant can be derived and used in ST studies. This model behaves according to the LT conditions to be met and provides a marginal cost for dispatching the plant. A generation company (GENCO) owning a mix of hydro, wind, and thermal power will be the subject of study where the model will be implemented. The GENCO faces the problem of scheduling the hydraulic resource under uncertainties from, e.g., wind and load, while determining the market bids that maximize its profit under uncertainties from market prices. A two-stage stochastic unit commitment (SUC) for the ST scheduling implementing the equivalent hydro model will be solved.
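The abstracts above all hinge on reading an equivalent marginal opportunity cost off one segment of a piecewise-linear FCF. A minimal sketch of that single step, with hypothetical breakpoints and function names (the papers' actual formulations are optimization models, not this lookup):

```python
import bisect

def active_segment(breakpoints, x):
    """Index k of the FCF segment [x_k, x_{k+1}] containing storage level x."""
    k = bisect.bisect_right(breakpoints, x) - 1
    return min(max(k, 0), len(breakpoints) - 2)

def fcf(breakpoints, values, x):
    """Piecewise-linear future cost at end-of-horizon storage level x.

    breakpoints -- increasing storage/energy levels x_k
    values      -- FCF(x_k); the FCF decreases with stored energy
    """
    k = active_segment(breakpoints, x)
    slope = (values[k + 1] - values[k]) / (breakpoints[k + 1] - breakpoints[k])
    return values[k] + slope * (x - breakpoints[k])

def marginal_opportunity_cost(breakpoints, values, x):
    """Cost of releasing one more unit of stored energy now: -dFCF/dx
    on the active segment, usable as a thermal-like production cost."""
    k = active_segment(breakpoints, x)
    return -(values[k + 1] - values[k]) / (breakpoints[k + 1] - breakpoints[k])
```

For instance, with breakpoints [0, 50, 100] and future costs [1000, 400, 100], a reservoir at level 25 sits on the first segment, so discharging one unit now costs 12 monetary units of forgone future value; at level 75 the cost drops to 6.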
Under the present European directive concerning common rules for the internal market in electricity, distribution companies are not allowed to own distributed generation (DG) but are encouraged to include it as a planning option to defer investment in traditional grid reinforcements. Distribution system operators (DSOs) have used the provision of capacity contracted to DG as a viable alternative under current regulatory arrangements. Here, the topics bonding DSOs and DG owners under the present regulation will be explored, and a planning structure that considers distribution capacity contracts as a planning option will be proposed. This will serve as a road map for DSOs to implement their preferred planning tools in an optimisation context, considering costs of investment, reliability, operation, and capacity provision while complying with current regulation.
A distribution system operator (DSO) might consider a capacity contract as a planning alternative to defer grid investments. A virtual power plant (VPP) might be able to provide such capacity and change its production as requested by the DSO. This article presents an assessment of the impact of this type of distribution capacity contract (DCC) on the VPP's remuneration. This assessment is done by comparing the optimal production/bidding strategies that maximize the VPP's profit in the presence and in the absence of these contracts. The impact of intermittent generation and storage while evaluating these scenarios will be investigated as well. A stochastic unit commitment will be used to determine the VPP's strategy under uncertainties from wind power, load, market prices, and the power requested by the DSO. The model showed that the VPP's involvement in distribution capacity contracts can improve its remuneration when certain types of distributed energy resources (DER) are used to provide the service.
This paper will give a general overview of the potential problems associated with remote-meter reading via the power grid and describe some of the technologies available. A comparison will be made between the power grid as a communication channel and other, dedicated and shared, channels. Examples will be given of practical cases in which the communication channel does not function in the intended way.
This paper describes a new and highly efficient measurement method (algorithm) that determines how flicker propagates throughout the network and also traces the dominant flicker source. The fundamental principle of the method is to use the fact that a flicker source produces an amplitude modulation in the voltage and current waveforms. The low-frequency variations in voltage and current that cause flicker are retrieved in a demodulation and filtering process. By first multiplying the low-frequency variations in voltage and current and then integrating, a new quantity, flicker power, is obtained. The sign and magnitude of the flicker power give the direction to the flicker source and allow tracing of the dominant flicker source.
Industries that produce flicker are often placed close to each other and connected to the same power grid. This implies that the measured flicker level at the point of common coupling (PCC) is the result of contributions from a number of different flicker sources. In a mitigation process it is essential to know which one of the flicker sources is the dominant one. Therefore, this paper proposes a method to determine the flicker propagation and trace the flicker sources by using flicker power measurements. Flicker power is considered as a quantity containing both sign and magnitude. The sign determines whether a flicker source is placed downstream or upstream with respect to a given monitoring point, and the magnitude is used to determine the propagation of flicker power throughout the power network and to trace the dominant flicker source. This paper covers the theoretical background of flicker power and describes a novel method for calculation of flicker power that can be implemented in a power network analyzer. A field test based on the proposed method is also described in the paper.
This paper describes an algorithm for calculating the direction to a flicker source with respect to a monitoring point. The proposed algorithm is based on sampling of both the voltage and the current. The low-frequency fluctuations in voltage and current are recovered from the input signals by demodulation and passed through a bandpass filter as described in IEC 61000-4-15. A new quantity, flicker power, is defined from the output signals of the two filters. The direction to a flicker source is obtained from the sign of this flicker power. The proposed algorithm has been validated by simulations and several field measurements.
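The flicker-power idea in the abstracts above can be sketched numerically. The demodulation below uses a simple cycle-RMS envelope and mean removal instead of the IEC 61000-4-15 filter chain, so it is an illustrative simplification rather than the validated algorithm, and the sign convention (positive means a downstream source) is an assumption.

```python
import numpy as np

def envelope(signal, fs, f0=50.0):
    """Cycle-RMS envelope: a simple demodulation of the low-frequency
    amplitude fluctuations riding on the power-frequency waveform."""
    n = int(round(fs / f0))                # samples per fundamental cycle
    m = len(signal) // n
    x = np.asarray(signal[:m * n]).reshape(m, n)
    return np.sqrt(np.mean(x ** 2, axis=1))

def flicker_power(v, i, fs, f0=50.0):
    """Average product of the fluctuating parts of the voltage and current
    envelopes. Positive sign: downstream flicker source (assumed
    convention); negative sign: upstream source."""
    dv = envelope(v, fs, f0)
    di = envelope(i, fs, f0)
    dv = dv - dv.mean()                    # crude high-pass: remove dc level
    di = di - di.mean()
    return float(np.mean(dv * di))
```

With a synthetic 50 Hz waveform whose voltage and current are amplitude-modulated in phase at 8 Hz, the quantity comes out positive; modulating the current in anti-phase flips the sign, which is exactly the directional information the method exploits.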
The support vector machine (SVM) is a powerful method for statistical classification of data used in a number of different applications. However, the usefulness of the method in a commercially available system depends very much on whether the SVM classifier can be pre-trained at the factory, since it is not realistic that customers must train the SVM classifier themselves before it can be used. This paper proposes a novel SVM classification system for voltage disturbances. The performance of the proposed SVM classifier is investigated when the voltage disturbance data used for training and testing originate from different sources. The data used in the experiments were obtained both from real disturbances recorded in two different power networks and from synthetic data. The experimental results showed high classification accuracy with training data from one power network and unseen testing data from another. High accuracy was also achieved when the SVM classifier was trained on data from a real power network and tested on synthetic data. A lower accuracy resulted when the SVM classifier was trained on synthetic data and tested on data from a real power network.
This paper addresses the harmonic emission from a large off-shore wind farm. An overview is given of the issues, where a distinction is made between frequencies below and above 2 kHz. Three different approaches are presented: a simplified mathematical model; a more detailed mathematical model; and measurements at the point of connection of an off-shore wind farm. It is concluded from both models and measurements that the emission is small for frequencies above a few kHz. However, specific resonances at higher frequencies involving the power transformers, when coinciding with switching frequencies or harmonics of switching frequencies, could result in high emission even at these high frequencies. Studies including the propagation through the collection grid are needed for the connection of any wind park to the grid.
This paper addresses the harmonic emission from a large off-shore wind farm. An overview is given of the issues, where a distinction is made between frequencies below and above 2 kHz. Three different approaches are presented: a simplified mathematical model; a more detailed mathematical model; and measurements at the point of connection of an off-shore wind farm. It is concluded from both models and measurements that the emission is small for frequencies above a few kHz. However, specific resonances at higher frequencies involving the power transformers, when coinciding with switching frequencies or harmonics of switching frequencies, could result in high emission even at these high frequencies.
This paper presents some methods to extract additional information from voltage dip recordings, beyond residual voltage and duration. Additionally, it discusses some issues related to the massive amount of data obtained from modern measurements, referred to as Big Data. The paper proposes some deep-learning-based algorithms as good candidates for extracting complex features from big data as a step towards additional information. The applications of the information include predicting individual equipment performance, fault type and location, protection operation, and overall load behavior. Individual equipment and overall load include production as well as consumption.
This paper verifies the potential of ellipse parameters as voltage dip characteristics. The space-phasor model (SPM) of the three-phase voltages generally takes the form of an ellipse in the complex plane. Mathematical relations are derived between the single-event characteristics (characteristic voltage, PN factor, and dip type) and the ellipse parameters (semi-major axis, semi-minor axis, and major-axis direction). The relations are validated by applying them to several recorded and synthetic voltage dips.
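A minimal sketch of the SPM and its ellipse can make the geometry concrete. The sketch assumes the trajectory is centered at the origin, so the semi-axes can be read from the extreme distances to the origin; the paper derives exact analytical relations instead, and the function names here are illustrative.

```python
import numpy as np

def space_phasor(va, vb, vc):
    """Space-phasor model of the three phase-to-neutral voltages."""
    a = np.exp(2j * np.pi / 3)
    return (2.0 / 3.0) * (np.asarray(va) + a * np.asarray(vb)
                          + a**2 * np.asarray(vc))

def ellipse_axes(spm):
    """Semi-major and semi-minor axis of an origin-centered SPM trajectory,
    estimated from the extreme distances to the origin over full cycles."""
    r = np.abs(np.asarray(spm))
    return float(r.max()), float(r.min())
```

For balanced voltages the SPM is a circle, so both axes equal the phase amplitude; a trajectory built from positive- and negative-sequence magnitudes 1.0 and 0.3 gives semi-axes 1.3 and 0.7, illustrating how unbalance flattens the circle into an ellipse.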
This paper presents a review of literature on voltage dips, from several points of view, throughout the last decade. It also summarizes the results related to voltage dip mitigation in both AC and DC power systems, and it shows the remaining challenges that require further research on voltage dips.
This paper presents a diagnostic method to detect switch failures of PWM power converters. The proposed method uses the space phasor model (SPM) of the three voltages measured at the terminals of the power converter and then applies principal component analysis to detect and localize the failure mode. The SPM results in one unique rotated ellipse or semi-ellipse for every failure mode of every faulty leg. The quadrants occupied by the ellipse or semi-ellipse also determine the location of the faulty switch in the leg. The proposed method is validated through comprehensive simulations.
This paper proposes a method for monitoring of voltages in three-phase systems using parameters of the ellipse corresponding to the space phasor model of the three-phase voltages. Three main parameters, the semi-minor axis, the semi-major axis, and the rotation angle of the ellipse, are calculated as single-cycle characteristics. Once these characteristics exceed predefined threshold values, different voltage events are detected. Given the whole event data, the parameters of the corresponding ellipse are calculated as 'single-event characteristics'. The proposed method is applied to different measured voltage waveforms. The results confirm that the ellipse parameters are a good basis for both detecting and characterizing voltage events.
This paper gives a general introduction to “Big Data” in general and to Big Data in smart grids in particular. Large amounts of data (Big Data) contain a lot of information; however, developing the analytics to extract this information is a big challenge due to some of the particular characteristics of Big Data. This paper investigates some existing analytic algorithms, especially deep learning algorithms, as tools for handling Big Data. The paper also explains the potential of deep learning applications in smart grids.
This paper proposes a method for characterizing voltage dips based on the space phasor model of the three phase-to-neutral voltages, instead of the individual voltages. This has several advantages. Using a K-means clustering algorithm, a multi-stage dip is separated into its individual event segments directly instead of first detecting the transition segments. The logistic regression algorithm fits the best single-segment characteristics to every individual segment, instead of extreme values being used for this, as in earlier methods. The method is validated by applying it to synthetic and measured dips. It can be generalized for application to both single- and multi-stage dips.
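A toy version of the clustering step can illustrate how K-means separates a multi-stage dip into segments. The sketch assumes a 1-D per-cycle feature such as cycle RMS voltage (the paper clusters SPM samples), and the deterministic initialization and function names are illustrative.

```python
import numpy as np

def kmeans_1d(x, k, iters=50):
    """Plain k-means on a 1-D per-cycle feature (e.g. cycle RMS voltage)."""
    x = np.asarray(x, dtype=float)
    # Deterministic initialization: centers spread over the feature range.
    centers = np.linspace(x.min(), x.max(), k)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

def event_segments(labels):
    """Contiguous runs of equal labels, i.e. the event segments of the dip."""
    out, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            out.append((start, i - 1))
            start = i
    return out
```

For a feature sequence that steps through 1.0 (pre-event), 0.5 (first stage), 0.8 (second stage), and back to 1.0 (recovery), the clusters land on the three voltage levels and the contiguous-run pass recovers four event segments directly, without a separate transition-detection stage.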
This paper proposes a novel method for voltage transient detection and characterization using the space phasor model (SPM) of the three phase-to-neutral voltages as a basis. A Gaussian-model-based anomaly detection technique is used to extract the transient samples as anomalous samples. The proposed method introduces and calculates a set of 'single-transient characteristics' (STC) for voltage transient events. This facilitates the quantification of transients, yields additional information about the transient origin, and enables the comparison of different transients. The proposed method is not sensitive to shallow harmonic distortion, particularly when dealing with oscillating transients.
The proposed method has been applied to a number of transients measured at distribution and transmission level. The simulation results support the effectiveness of the SPM for voltage transient analytics.
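The anomaly-detection step can be sketched as follows: fit a Gaussian (mean and standard deviation) to a pre-event reference window of the SPM magnitude, then flag samples that are improbable under that fit. The window length, threshold, and synthetic transient below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gaussian_anomaly_mask(x, ref, k_sigma=6.0):
    """Flag samples of x that are improbable under a Gaussian fitted to a
    pre-event reference window, i.e. |x - mu| > k_sigma * sigma."""
    mu, sigma = ref.mean(), ref.std()
    return np.abs(x - mu) > k_sigma * sigma

# SPM magnitude of a clean 1 pu system with measurement noise, plus a
# damped oscillatory transient injected between 20 ms and 25 ms.
fs = 10_000
t = np.arange(0, 0.04, 1.0 / fs)
rng = np.random.default_rng(2)
mag = 1.0 + 0.002 * rng.standard_normal(t.size)
burst = (t >= 0.020) & (t < 0.025)
tb = t[burst] - 0.020
mag[burst] += 0.3 * np.exp(-tb / 0.002) * np.cos(2 * np.pi * 1500 * tb)

mask = gaussian_anomaly_mask(mag, ref=mag[:100])
onset = np.flatnonzero(mask)[0]   # first anomalous sample (index 200 = 20 ms)
```

Because the threshold is set relative to the reference window's own spread, a constant background distortion raises the fitted mean and standard deviation rather than triggering false detections, which is one way to read the claimed insensitivity to shallow harmonic distortion.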
This paper proposes a machine-learning-based framework for voltage quality analytics, where the space phasor model (SPM) of the three-phase voltages before, during, and after the event is applied as input data. The framework proceeds in three main steps: (a) event extraction, (b) event characterization, and (c) extraction of additional information. In the first step, a Gaussian-based anomaly detection (GAD) technique is used to extract the event data from the recording. Principal component analysis (PCA) is adopted in the second step, where it is shown that the principal components correspond to the semi-minor and semi-major axes of the ellipse formed by the SPM. In the third step, these characteristics are interpreted to extract additional information about the underlying cause of the event. The performance of the framework was verified through experiments conducted on datasets containing synthetic and measured power quality events. The results show that the combination of semi-major axis, semi-minor axis, and direction of the major axis forms a sufficient basis to characterize, classify, and eventually extract additional information from recorded event data.
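The correspondence between the principal components and the ellipse axes can be sketched directly: for a zero-mean SPM trajectory sampled uniformly over one cycle, the variance along each principal direction equals half the squared semi-axis, so each semi-axis is sqrt(2 × eigenvalue) of the trajectory covariance. The example signal below is an assumption for illustration.

```python
import numpy as np

def spm_pca_characteristics(va, vb, vc):
    """PCA of the SPM trajectory: the principal directions align with the
    ellipse axes; each semi-axis equals sqrt(2 * eigenvalue) of the
    trajectory covariance (uniform sampling over one full cycle)."""
    a = np.exp(2j * np.pi / 3)
    v = (2.0 / 3.0) * (va + a * vb + a * a * vc)
    X = np.column_stack([v.real, v.imag])
    w, V = np.linalg.eigh(np.cov(X.T, bias=True))   # eigenvalues ascending
    semi_minor, semi_major = np.sqrt(2.0 * w)
    direction = np.arctan2(V[1, 1], V[0, 1])        # major-axis direction
    return semi_major, semi_minor, direction

# One cycle of an unbalanced three-phase set (phase a dipped to 0.6 pu)
fs, f = 10_000, 50.0
t = np.arange(0, 1.0 / f, 1.0 / fs)
va = 0.6 * np.cos(2 * np.pi * f * t)
vb = np.cos(2 * np.pi * f * t - 2 * np.pi / 3)
vc = np.cos(2 * np.pi * f * t + 2 * np.pi / 3)
A, B, direction = spm_pca_characteristics(va, vb, vc)
```

With full-cycle uniform sampling the result is exact, not approximate: the cross terms in the covariance vanish and the cosine/sine terms average to exactly one half.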
In many real applications, the ground-truth class labels of the voltage dip sequences used for training a voltage dip classification system are unknown and require manual labelling by human experts. This paper proposes a novel deep active learning method for automatic labelling of the voltage dip sequences used in the training process. The method is guided by a generative adversarial network (GAN), where the generator is formed by modelling the data with a Gaussian mixture model and provides the estimated probability density function (pdf) on which the query criterion of the deep active learning method is built. The discriminator is formed by a support vector machine (SVM). The proposed method has been tested on a voltage dip dataset (containing 916 dips) measured in a European country. The experiments have shown good performance (classification rate 83% and false-alarm rate 3.2%), which demonstrates the effectiveness of the proposed method.
This paper proposes a novel method for voltage dip classification using deep convolutional neural networks. The main contributions of this paper are: (a) a new, effective deep convolutional neural network architecture for automatically learning voltage dip features, rather than extracting hand-crafted features; (b) the application of deep learning in an effective two-dimensional transform domain, based on the space-phasor model (SPM), for efficient learning of dip features; (c) the characterization of voltage dips by two-dimensional SPM-based deep learning, which leads to voltage dip features independent of the duration and sampling frequency of the dip recordings; (d) robust automatically extracted features that are insensitive to training and test datasets measured from different countries/regions.
Experiments were conducted on datasets containing about 6000 measured voltage dips, spread over seven classes, from several different countries. The results show good performance of the proposed method: the average classification rate is about 97% and the false-alarm rate is about 0.50%. The test results of the proposed method are compared with the results of two existing dip classification methods, and the proposed method is shown to outperform them.
Rapid and widespread deployment of new technologies in the electric power distribution network under the concept of smart grids has resulted in a growing need for new standards and guidelines specifically designed to address emerging technological challenges and to further streamline the use of new technology. This document is prepared as part of the activities to develop the IEEE Power and Energy Society’s “smart distribution application guide”, to give guidance to utilities and network operators in the use of new technology in electric power distribution. The document will provide a description of the available new technology based on its application, followed by a more detailed description of the technologies and associated supporting solutions. The topics discussed in this document include improving the reliability of supply, improving power quality, improving the efficiency of distribution-system operation, increasing the hosting capacity for new production or new consumption, and allowing market functioning and participation of all network users. This paper contains some examples of texts as they are currently being discussed within the smart distribution working group.
In this paper, a deep learning (DL)-based method for automatic feature extraction and classification of voltage dips is proposed. The method consists of a dedicated architecture of Long Short-Term Memory (LSTM), a special type of Recurrent Neural Network (RNN). A total of 5982 three-phase one-cycle voltage dip RMS sequences, measured in several countries, has been used in our experiments. Our results show that the proposed method is able to classify the voltage dips from the features learned in the LSTM, with 93.40% classification accuracy on the test dataset. The developed architecture is shown to be novel for feature learning and classification of voltage dips. Unlike conventional machine learning methods, the proposed method is able to learn dip features without requiring transition-event segmentation, threshold selection, or expert rules and human expert knowledge, provided that a large amount of measurement data is available. This opens a new possibility of exploiting deep learning technology for power quality data analytics and classification.
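For readers unfamiliar with the recurrence behind an LSTM layer, the sketch below implements one forward pass in plain numpy over a dummy three-phase RMS sequence. The layer size, gate ordering (i, f, g, o), random weights, and softmax read-out are illustrative assumptions, not the paper's trained architecture.

```python
import numpy as np

def lstm_forward(x_seq, Wx, Wh, b):
    """Forward pass of a single LSTM layer; gates ordered (i, f, g, o)."""
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in x_seq:
        z = Wx @ x + Wh @ h + b
        i = sigmoid(z[:H])            # input gate
        f = sigmoid(z[H:2 * H])       # forget gate
        g = np.tanh(z[2 * H:3 * H])   # candidate cell state
        o = sigmoid(z[3 * H:])        # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h  # last hidden state summarizes the whole sequence

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Dummy input: one RMS sequence with T=100 steps and D=3 phase channels;
# hypothetical sizes: H=16 hidden units, K=7 dip classes.
rng = np.random.default_rng(0)
T, D, H, K = 100, 3, 16, 7
x_seq = np.ones((T, D)); x_seq[30:70, 0] = 0.5   # a dip in phase a's RMS

Wx = 0.1 * rng.standard_normal((4 * H, D))
Wh = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
Wout = 0.1 * rng.standard_normal((K, H))

h_last = lstm_forward(x_seq, Wx, Wh, b)
probs = softmax(Wout @ h_last)   # class probabilities (untrained network)
```

Because the whole variable-length sequence is folded into a fixed-size hidden state, no transition-event segmentation or thresholding is needed before classification, which is the point the abstract makes.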
Traditionally, when studying harmonics at transmission-system level, the medium- and low-voltage grid is modeled either with an equivalent short-circuit impedance or with a fixed PQ load (based on the power flow) directly at the secondary side of the transmission transformer. However, the recent replacement of conventional loads with electronic loads, as well as the extended use of power electronics at LV level, will result in a reduction of damping and an increase of household capacitance, which may change the harmonic impedance seen from the upstream network (MV or HV) in both the dominant frequency and the magnitude. In this paper, an HV feeder is modeled in detail, from the transmission transformer down to the household equipment, to investigate whether changes in the type of load affect the impedance seen from the HV side. The simulations are performed using the software package PSCAD/EMTDC.
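The mechanism can be illustrated with a lumped two-element sketch: a series (source plus transformer) inductance feeding an aggregated load, modeled once as purely resistive and once as resistive with household capacitance. All component values below are illustrative, not from the paper's PSCAD model.

```python
import numpy as np

# Frequency sweep of the impedance seen from the upstream network
f = np.arange(50.0, 2500.0, 10.0)
w = 2 * np.pi * f
L = 2e-3     # series source + transformer inductance, H (illustrative)
R = 20.0     # aggregated resistive load, ohm (illustrative)
C = 50e-6    # aggregated household/EMI-filter capacitance, F (illustrative)

Z_conv = 1j * w * L + R                              # conventional R-only load
Z_elec = 1j * w * L + 1.0 / (1.0 / R + 1j * w * C)   # electronic load, R || C

# The capacitance introduces a series resonance (an impedance dip) near
# f0 = 1 / (2*pi*sqrt(L*C)), here roughly 500 Hz, i.e. around the 10th
# harmonic; the R-only case rises monotonically with frequency.
f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))
f_dip = f[np.argmin(np.abs(Z_elec))]
```

The dip in |Z| at a harmonic frequency is exactly the kind of change in dominant frequency and magnitude of the upstream-seen impedance that the abstract refers to; where it lands depends directly on the aggregated capacitance.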
For the study of switching transients in Extra High Voltage (EHV) networks, it is common practice to model a relatively small part of the EHV network. The downstream network is either disregarded or modeled as a parallel RC or RL circuit, based on the power flow, directly at the secondary side of the transmission transformer. In this paper, an investigation into the extent and model type of the downstream network (below 380 kV) is performed. A detailed model for an example downstream network is developed. Different simplifications and equivalent models are compared in the frequency domain and in the time domain, during cable energization. The impact of the lower-voltage network on both the maximum overvoltage and the harmonic content during energization is assessed. Moreover, components that are of major importance are identified and proper equivalent models for the downstream network are proposed. Simulation results show that, in particular, the type of the 150 kV network (underground cable or overhead line) greatly affects both the maximum overvoltage and the harmonic content during energization. The parameters of end-customer loads, on the other hand, have only a minor effect.
This paper explores the approach of using a common Power Factor Correction circuit for domestic and commercial loads. This leads to lower harmonic distortion without the need to install (expensive) active rectifiers in each end-user device. The need for power-factor correction, as well as a number of design options, is discussed in this paper. The design and cost estimation of a common Power Factor Correction scheme and some reliability issues are also discussed.
Consumer electronic devices mostly get their energy from the electric power grid. Such devices might be continuously connected to the grid (like televisions) or only connected to charge the batteries (like cell phones). The amount of energy taken from the grid is not reduced by using devices powered by batteries. Instead, the electrical energy consumption is more likely increased due to the losses in the conversion process and because there are more opportunities to use the device.
This paper contains some thoughts on the relations between power quality and smart grids. It includes some of the research and development activities within power quality that are, in our opinion, important for the transition to the smart grid.