Multilevel clustering problems, in which content and contextual information are clustered jointly, are ubiquitous in modern datasets. Existing work on this problem is limited to small datasets due to reliance on the Gibbs sampler. We address the problem of scaling up multilevel clustering under a Bayesian nonparametric setting, extending the MC2 model proposed in (Nguyen et al., 2014). We ground our approach in structured mean-field and stochastic variational inference (SVI) and develop a tree-structured SVI algorithm that exploits the interplay between content and context modeling. Our new algorithm avoids the need to repeatedly pass through the corpus, as the Gibbs sampler does. More crucially, our method is immediately amenable to parallelization, facilitating a scalable distributed implementation on the Apache Spark platform. We conduct extensive experiments in a variety of domains, including text, images, and real-world user application activities. Direct comparison with the Gibbs sampler demonstrates that our method is an order of magnitude faster without loss of model quality. Our Spark-based implementation gains another order of magnitude in speedup and can scale to large real-world datasets containing millions of documents and groups.
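To convey the flavor of the stochastic variational updates involved, the following is a minimal, generic SVI sketch for a toy Dirichlet-categorical model; it is not the tree-structured algorithm described above, and the model, step-size schedule, and problem sizes are purely illustrative assumptions.

\begin{verbatim}
import numpy as np

# Generic SVI sketch: the global variational parameter `lam` of a
# Dirichlet-categorical model is updated with stochastic natural
# gradients from one "document" (count vector) at a time, so the
# corpus never has to be swept in full before progress is made.
rng = np.random.default_rng(0)
V, N = 10, 5000
theta = rng.dirichlet(np.ones(V))
corpus = rng.multinomial(50, theta, size=N)  # toy count vectors

prior = np.ones(V)                           # Dirichlet prior
lam = np.ones(V)                             # variational parameter
for t, i in enumerate(rng.permutation(N)):
    rho = (1.0 + t) ** -0.6                  # Robbins-Monro step size
    lam_hat = prior + N * corpus[i]          # noisy per-doc optimum
    lam = (1.0 - rho) * lam + rho * lam_hat  # natural-gradient step

print(lam / lam.sum())                       # approx. posterior mean
\end{verbatim}

Because each update touches a single document, the per-document computations can be batched and distributed, which is what makes a Spark implementation natural.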
We consider the problem of decentralized detection under constraints on the number of bits that can be transmitted by each sensor. In contrast to most previous work, in which the joint distribution of sensor observations is assumed to be known, we address the problem when only a set of empirical samples is available. We propose a novel algorithm using the framework of empirical risk minimization and marginalized kernels and analyze its computational and statistical properties both theoretically and empirically. We provide an efficient implementation of the algorithm and demonstrate its performance on both simulated and real data sets.
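As a rough illustration of a marginalized kernel over quantized sensor messages, the sketch below builds a Gram matrix $K(x, x') = \sum_{z, z'} Q(z \mid x)\, Q(z' \mid x')\, k_z(z, z')$ and feeds it to an off-the-shelf kernel classifier; the quantizer probabilities, base kernel, and labels are toy assumptions, not the paper's construction.

\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

# Marginalized-kernel sketch: Q[i, z] = P(z | x_i) is an assumed
# stochastic quantizer mapping sensor observation x_i to message z,
# and Kz is a base kernel on the discrete messages.
def marginalized_gram(Q, Kz):
    return Q @ Kz @ Q.T          # (n, n) Gram matrix

rng = np.random.default_rng(0)
n, m = 100, 4
Q = rng.dirichlet(np.ones(m), size=n)              # toy quantizer
idx = np.arange(m)
Kz = np.exp(-np.abs(idx[:, None] - idx[None, :]))  # base kernel (PSD)
G = marginalized_gram(Q, Kz)

y = (rng.random(n) < Q[:, 0]).astype(int)          # toy labels
clf = SVC(kernel="precomputed").fit(G, y)          # kernel-based ERM
\end{verbatim}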
We propose to study and promote the robustness of a model through its performance under interpolations of the training data distributions. Specifically, (1) we augment the data by finding the worst-case Wasserstein barycenter on the geodesic connecting subpopulation distributions of different categories; (2) we regularize the model for smoother performance on the continuous geodesic path connecting subpopulation distributions; and (3) we provide a theoretical guarantee of robustness improvement and investigate how the geodesic location and the sample size each contribute. Experimental validation of the proposed strategy on \textit{four} datasets, including CIFAR-100 and ImageNet, establishes the efficacy of our method; e.g., our method improves the baselines' certifiable robustness on CIFAR-10 by up to $7.7\%$ and empirical robustness on CIFAR-100 by up to $16.8\%$. Our work provides a new perspective on model robustness through the lens of Wasserstein geodesic-based interpolation, with a practical off-the-shelf strategy that can be combined with existing robust training methods.
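For intuition, in one dimension with squared-Euclidean cost the Wasserstein geodesic between two equally sized empirical samples is obtained by matching sorted samples, so a point at time $t$ on the geodesic is a pointwise convex combination. The sketch below illustrates this displacement interpolation; the distributions and sample sizes are toy assumptions, and it does not implement the worst-case barycenter search.

\begin{verbatim}
import numpy as np

# 1-D displacement interpolation: with squared-Euclidean cost the
# optimal map matches sorted samples, so the measure at location t
# on the geodesic has samples (1 - t) * x_(i) + t * y_(i).
def geodesic_samples(x, y, t):
    return (1.0 - t) * np.sort(x) + t * np.sort(y)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 500)      # subpopulation A
y = rng.normal(4.0, 0.5, 500)      # subpopulation B
mid = geodesic_samples(x, y, 0.5)  # candidate augmentation points
\end{verbatim}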
Homogeneous charge compression ignition (HCCI) is a futuristic automotive engine technology that can significantly improve fuel economy and reduce emissions. HCCI engine operation is constrained by combustion instabilities such as knock, ringing, misfires, and high-variability combustion, and it becomes important to identify the operating envelope defined by these constraints for use in engine diagnostics and controller design. HCCI combustion is dominated by complex nonlinear dynamics, so first-principles dynamic modeling of the operating envelope becomes intractable. In this paper, a machine learning approach is presented to identify the stable operating envelope of HCCI combustion by learning directly from experimental data. Stability is defined using thresholds on combustion features obtained from engine in-cylinder pressure measurements. This paper considers instabilities arising from engine misfire and high-variability combustion. A gasoline HCCI engine is used to generate stable and unstable data observations. Owing to an imbalance in class proportions in the data set, the models are developed both by resampling the data set (undersampling and oversampling) and by a cost-sensitive learning method (overweighting the minority class relative to the majority class). Support vector machines (SVMs) and the more recently developed extreme learning machines (ELMs) are utilized to develop dynamic classifiers. Comparison against linear classification methods shows that cost-sensitive nonlinear ELM and SVM classification algorithms are well suited to the problem. However, the SVM envelope model requires about 80% more parameters than the ELM envelope model for an accuracy improvement of 3%, indicating that ELM models may be computationally suitable for the engine application. The proposed modeling approach shows that HCCI engine misfires and high-variability combustion can be predicted ahead of time, given the present values of available sensor measurements, making the models suitable for engine diagnostics and control applications.
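The cost-sensitive variant of this strategy can be reproduced with standard tools; the sketch below overweights the minority (unstable) class in a scikit-learn SVM. The synthetic features stand in for the lagged in-cylinder pressure statistics used in the engine setting, and the 9:1 weighting is an illustrative choice, roughly the inverse of the class priors.

\begin{verbatim}
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Cost-sensitive SVM sketch: penalize errors on the rare class more
# heavily instead of resampling the data set.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y,
                                      random_state=0)
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 9.0})
clf.fit(Xtr, ytr)
print(clf.score(Xte, yte))
\end{verbatim}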
Finite mixture models have long been used across a variety of fields in engineering and the sciences. Recently, there has been a great deal of interest in quantifying the convergence behavior of the mixing measure, a fundamental object that encapsulates all of the unknown parameters in a mixture distribution. In this paper, we propose a general framework for estimating the mixing measure arising in finite mixture models, which we term minimum $\Phi$-distance estimators. We establish a general theory for the minimum $\Phi$-distance estimator, obtaining sharp probability bounds on the estimation error for the mixing measures in terms of the suprema of the associated empirical processes for a suitably chosen function class $\Phi$. Our framework includes several existing and seemingly distinct estimation methods as special cases and also motivates new estimators. For instance, it extends the minimum Kolmogorov-Smirnov distance estimator to the multivariate setting, and it extends the method of moments to cover a broader family of probability kernels beyond the Gaussian. Moreover, it includes methods applicable to complex (e.g., non-Euclidean) observation domains, using tools from reproducing kernel Hilbert spaces. We show that, under general conditions, these methods achieve optimal rates of estimation under Wasserstein metrics in either the minimax or the pointwise sense of convergence; the latter can be achieved when no upper bound on the finite number of components is given.
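One concrete instance of the framework is the minimum Kolmogorov-Smirnov distance estimator, which the sketch below implements for a two-component univariate Gaussian mixture with unit variances; the data, parameterization, and optimizer are illustrative assumptions.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
data = np.sort(np.concatenate([rng.normal(-2, 1, 300),
                               rng.normal(2, 1, 700)]))
Fn = np.arange(1, data.size + 1) / data.size   # empirical CDF

def ks_dist(theta):
    # mixture CDF vs. empirical CDF (one-sided approximation of
    # the KS statistic, evaluated at the sample points)
    w = 1.0 / (1.0 + np.exp(-theta[0]))        # mixing weight
    F = (w * norm.cdf(data, theta[1], 1)
         + (1 - w) * norm.cdf(data, theta[2], 1))
    return np.max(np.abs(F - Fn))

res = minimize(ks_dist, x0=[0.0, -1.0, 1.0], method="Nelder-Mead")
print(res.x)
\end{verbatim}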
We introduce a formulation of the optimal transport problem for distributions on function spaces, where the stochastic map between functional domains can be partially represented as an (infinite-dimensional) Hilbert-Schmidt operator mapping one Hilbert space of functions to another. For numerous machine learning tasks, data can be naturally viewed as samples drawn from spaces of functions, such as curves and surfaces in high dimensions, and optimal transport for functional data analysis provides a useful framework for treating such domains. In this work, we develop an efficient algorithm for finding the stochastic transport map between functional domains and provide theoretical guarantees on the existence, uniqueness, and consistency of our estimate of the Hilbert-Schmidt operator. We validate our method on synthetic datasets and study the geometric properties of the transport map. Experiments on real-world datasets of robot arm trajectories further demonstrate the effectiveness of our method for domain adaptation.
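As a point of reference, a plain discrete entropic OT between two samples of curves (each discretized on a common grid) can be computed with the POT library, as sketched below; this baseline treats each curve as a finite-dimensional vector and is not the Hilbert-Schmidt operator estimator developed in the paper.

\begin{verbatim}
import numpy as np
import ot  # POT: Python Optimal Transport

# Baseline: discrete entropic OT between two sets of discretized
# curves, each curve treated as a vector of values on a shared grid.
rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
X = np.sin(2 * np.pi * grid) + 0.1 * rng.normal(size=(40, 50))
Y = np.cos(2 * np.pi * grid) + 0.1 * rng.normal(size=(60, 50))

M = ot.dist(X, Y)                      # pairwise squared-L2 cost
a = np.full(40, 1 / 40)                # uniform weights on curves
b = np.full(60, 1 / 60)
plan = ot.sinkhorn(a, b, M, reg=0.1)   # entropic transport plan
\end{verbatim}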
We propose the Dirichlet Simplex Nest, a class of probabilistic models suitable for a variety of data types, and develop fast and provably accurate inference algorithms that account for the model's convex geometry and low-dimensional simplicial structure. By exploiting the connection to Voronoi tessellations and properties of the Dirichlet distribution, the proposed inference algorithm is shown to achieve consistency and strong error-bound guarantees across a range of model settings and data distributions. The effectiveness of our model and learning algorithm is demonstrated by simulations and by analyses of text and financial data.
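A generative sketch consistent with this description is given below: observations are Dirichlet-weighted convex combinations of unknown simplex vertices plus noise. The dimensions, concentration parameter, and noise model are illustrative assumptions, and the inference algorithm itself is not reproduced here.

\begin{verbatim}
import numpy as np

# Generative sketch: each observation lies (up to noise) inside the
# simplex spanned by K latent vertices, with Dirichlet-distributed
# barycentric coordinates.
rng = np.random.default_rng(0)
K, D, N = 4, 10, 1000
vertices = rng.normal(size=(K, D))            # latent simplex vertices
weights = rng.dirichlet(0.3 * np.ones(K), N)  # barycentric coordinates
X = weights @ vertices + 0.01 * rng.normal(size=(N, D))
\end{verbatim}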
We present a Bayesian nonparametric framework for multilevel clustering which utilizes group-level context information to simultaneously discover low-dimensional structure in the group contents and partition the groups into clusters. Using the Dirichlet process as the building block, our model constructs a product base measure with a nested structure to accommodate content and context observations at multiple levels. The proposed model possesses properties that link the nested Dirichlet process (nDP) and the Dirichlet process mixture model (DPM) in an interesting way: integrating out all contents results in the DPM over contexts, whereas integrating out the group-specific contexts results in the nDP mixture over content variables. We provide a Polya-urn view of the model and an efficient collapsed Gibbs inference procedure. Extensive experiments on real-world datasets demonstrate the advantage of utilizing context information via our model in both the text and image domains.
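The Polya-urn view of a single Dirichlet process is easy to simulate: each new observation joins an existing cluster with probability proportional to the cluster's size, or starts a new cluster with probability proportional to the concentration parameter. The sketch below shows one such urn; the full model nests two levels (contexts and contents), which this toy version does not attempt.

\begin{verbatim}
import numpy as np

# Polya urn / Chinese restaurant process for one Dirichlet process.
def crp(n, alpha, seed=0):
    rng = np.random.default_rng(seed)
    counts, labels = [], []
    for _ in range(n):
        p = np.array(counts + [alpha], dtype=float)
        k = rng.choice(p.size, p=p / p.sum())
        if k == len(counts):
            counts.append(1)      # open a new cluster
        else:
            counts[k] += 1        # join an existing cluster
        labels.append(k)
    return labels

print(crp(20, alpha=1.0))
\end{verbatim}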
In this article, a stochastic-gradient-based online learning algorithm for Extreme Learning Machines (ELM), termed SG-ELM, is developed. A stability criterion based on a Lyapunov approach is used to prove asymptotic stability of the estimation error and stability of the estimated parameters, making the algorithm suitable for identification of nonlinear dynamic systems. The developed algorithm not only guarantees stability but also reduces the computational demand compared with the OS-ELM approach based on recursive least squares. To demonstrate the effectiveness of the algorithm in a real-world scenario, an advanced combustion engine identification problem is considered. The algorithm is applied to two case studies: online regression learning for system identification of a Homogeneous Charge Compression Ignition (HCCI) engine, and online classification learning (with class imbalance) for identifying the dynamic operating envelope of the HCCI engine. The results indicate that the accuracy of the proposed SG-ELM is comparable to that of the state of the art, while adding stability and reducing computational effort.
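The core update is inexpensive: the hidden layer of an ELM is randomly initialized and then fixed, and only the output weights are adapted by stochastic gradient descent on the squared error. The sketch below shows this update on a toy regression stream; the step size and layer sizes are illustrative, and the Lyapunov-based constraint on the learning rate from the article is not reproduced.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, h = 5, 50
A = rng.normal(size=(h, d))      # random hidden weights (fixed)
b = rng.normal(size=h)           # random hidden biases (fixed)
W = np.zeros(h)                  # output weights (learned online)

def hidden(x):
    return np.tanh(A @ x + b)

def sg_elm_step(W, x, y, eta=0.01):
    phi = hidden(x)
    err = phi @ W - y            # prediction error on one sample
    return W - eta * err * phi   # SGD step on the squared loss

w_true = rng.normal(size=d)
for _ in range(2000):            # one pass over a toy data stream
    x = rng.normal(size=d)
    W = sg_elm_step(W, x, np.sin(w_true @ x))
\end{verbatim}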
In finite mixture models, apart from the underlying mixing measure, the true kernel density function of each subpopulation in the data is, in many scenarios, unknown. Perhaps the most popular approach is to choose kernel functions that we empirically believe the data are generated from and use these kernels to fit the model. Nevertheless, as long as the chosen kernel and the true kernel differ, statistical inference of the mixing measure under this setting will be highly unstable. To overcome this challenge, we propose flexible and efficient robust estimators of the mixing measure in these models, inspired by the minimum Hellinger distance estimator, model selection criteria, and the superefficiency phenomenon. We demonstrate that our estimators consistently recover the true number of components and achieve the optimal convergence rates of parameter estimation under both well- and mis-specified kernel settings for any fixed bandwidth. These desirable asymptotic properties are illustrated via careful simulation studies with both synthetic and real data.
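To illustrate the starting point, the sketch below fits a two-component Gaussian mixture by minimizing the (squared) Hellinger distance to a fixed-bandwidth kernel density estimate of the data; the paper's estimators additionally select the number of components, which this toy version does not.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 1, 300),
                       rng.normal(2, 1, 700)])
grid = np.linspace(data.min() - 1, data.max() + 1, 400)
dx = grid[1] - grid[0]
f_hat = gaussian_kde(data)(grid)   # nonparametric density estimate

def hellinger2(theta):
    # squared Hellinger distance (up to the usual 1/2 factor)
    w = 1.0 / (1.0 + np.exp(-theta[0]))
    f = (w * norm.pdf(grid, theta[1], 1)
         + (1 - w) * norm.pdf(grid, theta[2], 1))
    return dx * np.sum((np.sqrt(f) - np.sqrt(f_hat)) ** 2)

res = minimize(hellinger2, x0=[0.0, -1.0, 1.0], method="Nelder-Mead")
print(res.x)
\end{verbatim}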