A circuit model that captures generator characteristics is essential for stability analysis and control system design of the doubly-fed induction generator (DFIG). This paper develops the relationship between the generated voltage and the magnetic field in a DFIG and establishes an equivalent circuit model that accounts for rotor motion. A novel magnitude-phase equivalent circuit (MPEC) model is established based on vector magnitudes and angles. Furthermore, the paper analyzes phasor diagrams, derives the power expression under the MPEC, and obtains the steady-state power-angle characteristic of the DFIG. In the proposed power-angle characteristic, the DFIG's output incorporates variations in both slip ratio and power-angle, combining the characteristics of asynchronous and synchronous machines. Based on the proposed MPEC and power-angle characteristic, the steady-state maximum power of the DFIG is determined at the critical stable power-angle of π/2. Finally, simulation and experimental results are given to validate the effectiveness of the proposed equivalent circuit model and power-angle characteristic. The results show that the MPEC model can analyze DFIG stability from the power-angle motion perspective, and that the system becomes potentially unstable when the power-angle exceeds 1.5 rad.
Algorithmic fairness has recently attracted considerable interest in the data mining and machine learning communities. So far, existing research has mostly focused on developing quantitative metrics to measure algorithmic disparities across different protected groups, and on approaches for adjusting algorithm outputs to reduce such disparities. In this paper, we study the problem of identifying the sources of model disparities. Unlike existing interpretation methods, which typically learn feature importance, we consider the causal relationships among feature variables and propose a novel framework that decomposes the disparity into a sum of contributions from fairness-aware causal paths, i.e., paths on the causal graph linking the sensitive attribute to the final predictions. We also consider the scenario in which the directions of certain edges within those paths cannot be determined. Our framework is model agnostic and applicable to a variety of quantitative disparity measures. Empirical evaluations on both synthetic and real-world data sets show that our method provides precise and comprehensive explanations of model disparities.
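The decomposition described above requires enumerating the causal paths that link the sensitive attribute to the prediction. The following minimal Python sketch illustrates only that enumeration step, with hypothetical names (`graph` as an adjacency dict over a DAG); the actual framework additionally scores each path's contribution to the disparity and handles edges whose direction is undetermined, which this sketch omits:

```python
def paths(graph, src, dst, path=None):
    """Enumerate all directed paths from src to dst in a DAG given as an
    adjacency dict. Each path is a candidate channel through which the
    sensitive attribute's influence can flow to the prediction."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    out = []
    for nxt in graph.get(src, []):
        out += paths(graph, nxt, dst, path)
    return out

# Toy graph: sensitive attribute S affects prediction Y directly
# and indirectly through feature X.
toy = {"S": ["X", "Y"], "X": ["Y"]}
```

For the toy graph, `paths(toy, "S", "Y")` returns the indirect path `S → X → Y` and the direct path `S → Y`, each of which the framework would treat as a separate contributor to the overall disparity.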
Algorithmic fairness has recently received considerable interest in machine learning. In this paper, we focus on the bipartite ranking scenario, where instances come from either the positive or the negative class and the goal is to learn a ranking function that ranks positive instances higher than negative ones. In an unfair setting, the probability of ranking positives above negatives differs across protected groups. We propose a general post-processing framework, xOrder, for achieving fairness in bipartite ranking while maintaining classification performance. In particular, we optimize a weighted sum of utility and fairness by directly adjusting the relative ordering across groups. We formulate this problem as identifying an optimal warping path across different protected groups and solve it with a dynamic programming procedure. xOrder is compatible with various classification models and applicable to a variety of ranking fairness metrics. We evaluate the proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories. The experimental results show that our approach achieves a good balance between algorithm utility and ranking fairness, and remains robust when the training and testing ranking score distributions differ significantly.
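The warping-path idea above can be sketched as a dynamic program over a grid: cell (i, j) means the merged ranking's top i + j slots contain i items from one group and j from the other, and each placement pays a weighted sum of a utility term and a fairness penalty. This is a toy Python illustration with assumed names and a deliberately simplified cost (the item's score as utility, the gap in group exposure as the fairness penalty); xOrder's actual objective and supported fairness metrics differ:

```python
import numpy as np

def merge_rankings(scores_a, scores_b, lam=1.0):
    """DP over an (|A|+1) x (|B|+1) grid. Moving down places the next
    item of group A, moving right the next item of group B. Cost of a
    placement = lam * |i/|A| - j/|B|| (toy exposure-parity penalty)
    minus the placed item's score (toy utility)."""
    a = sorted(scores_a, reverse=True)
    b = sorted(scores_b, reverse=True)
    na, nb = len(a), len(b)
    INF = float("inf")
    cost = np.full((na + 1, nb + 1), INF)
    cost[0, 0] = 0.0
    back = {}
    for i in range(na + 1):
        for j in range(nb + 1):
            if i == 0 and j == 0:
                continue
            gap = abs(i / na - j / nb)
            best, arg = INF, None
            if i > 0 and cost[i - 1, j] - a[i - 1] + lam * gap < best:
                best, arg = cost[i - 1, j] - a[i - 1] + lam * gap, "A"
            if j > 0 and cost[i, j - 1] - b[j - 1] + lam * gap < best:
                best, arg = cost[i, j - 1] - b[j - 1] + lam * gap, "B"
            cost[i, j], back[(i, j)] = best, arg
    # Backtrack the optimal warping path from (na, nb) to (0, 0).
    path, i, j = [], na, nb
    while (i, j) != (0, 0):
        g = back[(i, j)]
        path.append(g)
        if g == "A":
            i -= 1
        else:
            j -= 1
    return path[::-1], cost[na, nb]
```

Larger `lam` pushes the path toward the diagonal (groups interleaved evenly); `lam = 0` recovers the pure score-sorted merge.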
Conventional deloading control suffers from problems such as low power generation efficiency and a small speed adjustment range. To improve the system's frequency quality and enhance the power grid's stability, this study comprehensively considers the effect of random source-load power fluctuations on the system frequency and proposes a smooth primary frequency control strategy for wind turbines based on the coordinated control of variable power point tracking and supercapacitor energy storage. The impact of wind power fluctuations on the system frequency at different timescales is studied using historical wind power fluctuation data from a strong-wind meteorological cycle at a wind farm. The method determines the energy storage capacity required for frequency smoothing at the optimal timescale. In combination with the capacity the wind turbine requires to participate in the system's primary frequency regulation, the supercapacitor energy storage device is optimally configured, and a supercapacitor energy storage configuration with the lowest cost at the highest charging/discharging efficiency is designed. Simulation and experimental results show that the primary frequency regulation capability of the proposed control strategy is significantly improved compared with conventional primary frequency regulation control.
Federated recommendation aims to collect global knowledge by aggregating local models from massive numbers of devices, providing recommendations while preserving privacy. Current methods mainly leverage aggregation functions developed by the federated vision community, e.g., clustering aggregation, to aggregate parameters from similar clients. Despite their considerable performance, we argue that applying them directly to federated recommendation is suboptimal, mainly because of the disparate model architectures. Unlike the structured parameters of convolutional neural networks in federated vision, federated recommender models typically employ a one-to-one item embedding table. This discrepancy induces the challenging embedding skew issue: aggregation continually updates the trained embeddings while ignoring the non-trained ones, and thus fails to predict future items accurately. To this end, we propose a personalized Federated recommendation model with Composite Aggregation (FedCA), which not only aggregates similar clients to enhance trained embeddings but also aggregates complementary clients to update non-trained embeddings. Moreover, we formulate the overall learning process as a unified optimization algorithm that jointly learns similarity and complementarity. Extensive experiments on several real-world datasets substantiate the effectiveness of the proposed model. The source code is available at https://github.com/hongleizhang/FedCA.
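The composite aggregation idea above can be sketched as a simple rule over item embedding tables: rows a client has actually trained are refined using similar clients, while rows it has never trained are filled in from complementary clients. This Python sketch uses hypothetical names and plain averaging in place of FedCA's jointly learned similarity/complementarity weights:

```python
import numpy as np

def composite_aggregate(tables, masks, client, similar, complementary):
    """tables: list of (n_items, dim) item embedding tables, one per client.
    masks: list of boolean (n_items,) arrays marking items each client trained.
    Trained rows are averaged over *similar* clients (refine what the client
    already knows); non-trained rows are averaged over *complementary*
    clients (import knowledge about items the client has never seen)."""
    E = tables[client].copy()
    trained = masks[client]
    sim_avg = np.mean([tables[c] for c in similar], axis=0)
    comp_avg = np.mean([tables[c] for c in complementary], axis=0)
    E[trained] = sim_avg[trained]
    E[~trained] = comp_avg[~trained]
    return E
```

The point of the split is that a similarity-only rule would leave the non-trained rows untouched, which is exactly the embedding skew issue the abstract describes.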
There has recently been increased interest in unsupervised learning of disentangled representations from data generated by underlying variation factors. Existing works rely on the assumption that the generative factors are independent, even though this assumption is often violated in real-world scenarios. In this paper, we focus on unsupervised disentanglement in a general setting in which the generative factors may be correlated. We propose an intervention-based framework to tackle this problem. In particular, we first apply a random intervention operation to a selected feature of the learnt image representation; we then propose a novel metric that measures disentanglement via a downstream image translation task and show experimentally that it is consistent with existing ground-truth-required metrics; finally, we design an end-to-end model that learns disentangled representations using self-supervision from the downstream translation task. We evaluate our method quantitatively on benchmark datasets and give qualitative comparisons on a real-world dataset. Experiments show that our algorithm outperforms baselines on benchmark datasets with correlated data and, compared to baselines, can disentangle semantic factors on the real-world dataset.
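The random intervention operation mentioned above can be illustrated in a few lines. This is a generic sketch with assumed names (standard-normal prior, a plain latent vector `z`), not the paper's exact operator: one coordinate of the latent code is overwritten with a fresh prior draw, and in a well-disentangled representation only the factor tied to that coordinate should change in the decoded image.

```python
import numpy as np

def intervene(z, dim, rng):
    """Random intervention: overwrite one coordinate of the latent code
    with a fresh draw from the (assumed standard-normal) prior, leaving
    every other coordinate unchanged."""
    z2 = z.copy()
    z2[dim] = rng.standard_normal()
    return z2
```

Comparing the decodings of `z` and `intervene(z, dim, rng)` is what the downstream translation task then scores.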
In mobile and IoT systems, Federated Learning (FL) is increasingly important for using data effectively while maintaining user privacy. One key challenge in FL is managing statistical heterogeneity, such as non-i.i.d. data arising from numerous clients and diverse data sources. This requires strategic cooperation, often with clients having similar characteristics. However, we are interested in a fundamental question: does optimal cooperation necessarily entail cooperating with the most similar clients? In practice, significant model performance improvements are often realized not by partnering with the most similar models but by leveraging complementary data. Our theoretical and empirical analyses suggest that optimal cooperation is achieved by enhancing complementarity in feature distributions while restricting the disparity in the correlation between features and targets. Accordingly, we introduce a novel framework, \texttt{FedSaC}, which balances similarity and complementarity in FL cooperation. Our framework approximates an optimal cooperation network for each client by optimizing a weighted sum of model similarity and feature complementarity. The strength of \texttt{FedSaC} lies in its adaptability to various levels of data heterogeneity and to multimodal scenarios. Our comprehensive unimodal and multimodal experiments demonstrate that \texttt{FedSaC} markedly surpasses other state-of-the-art FL methods.
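The weighted sum of model similarity and feature complementarity described above can be sketched as follows. This is an illustrative Python construction with assumed inputs (flattened parameter vectors for similarity, per-client feature means for complementarity) rather than FedSaC's actual optimization, which learns the cooperation network rather than computing it in closed form:

```python
import numpy as np

def cooperation_weights(models, feats, alpha=0.5):
    """models: list of flattened parameter vectors (one per client).
    feats: list of per-client feature-statistic vectors (e.g., means).
    Similarity = pairwise cosine of parameters; complementarity =
    normalized distance between feature statistics (more different
    features -> higher complementarity). Returns a row-stochastic
    cooperation weight matrix with zero self-weight."""
    M = np.stack(models)
    F = np.stack(feats)
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
    sim = Mn @ Mn.T
    d = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
    comp = d / (d.max() + 1e-12)
    W = alpha * sim + (1 - alpha) * comp
    np.fill_diagonal(W, 0.0)
    W = np.clip(W, 0.0, None)
    return W / W.sum(axis=1, keepdims=True)
```

`alpha = 1` reduces to pure similarity-based cooperation; lowering `alpha` shifts weight toward clients with complementary feature distributions, which is the trade-off the abstract argues for.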
With the increasing penetration of renewable energy, traditional energy storage capacity planning methods may become impracticable owing to the space-time asymmetry of electricity generation and consumption and the coupling of multiple random factors. In this paper, a cogeneration microgrid with an advanced adiabatic compressed air energy storage system (AA-CAES) is constructed to accomplish cascade energy use in a complex environment. In this context, a capacity planning strategy for the AA-CAES is proposed to address the challenges posed by multiple random factors, based on Latin Hypercube Sampling (LHS) and the K-means clustering algorithm. A numerical simulation is carried out to validate the results and to illustrate the relationship between system economy and the number of samples N and the number of clusters K. The results indicate that the operation and construction costs of the system stabilize when the number of samples N exceeds 8×10⁴ and the number of clusters K is less than 5, at which point the economy of the system is optimal.
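The LHS-plus-K-means pipeline above (sample the random source-load factors, then cluster the samples into a few representative scenarios with probabilities) can be sketched generically in Python. All names here are illustrative; the paper's actual sampling dimensions, distances, and scenario costs are not reproduced:

```python
import numpy as np

def lhs(n_samples, n_dims, rng):
    """Latin hypercube sample on [0, 1]^n_dims: in every dimension,
    exactly one point falls in each of the n_samples equal strata."""
    u = (rng.random((n_samples, n_dims))
         + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])  # decorrelate strata across dimensions
    return u

def kmeans(X, k, rng, iters=50):
    """Plain Lloyd's K-means. Returns the k cluster centers
    (representative scenarios) and their empirical probabilities."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (lbl == j).any():
                C[j] = X[lbl == j].mean(0)
    probs = np.bincount(lbl, minlength=k) / len(X)
    return C, probs
```

Capacity planning then optimizes over the k weighted scenarios instead of all N raw samples, which is why the abstract studies how the costs stabilize as N grows and K shrinks.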