Multi-view clustering has attracted growing attention owing to its capability of aggregating information from various sources and its promising prospects in public affairs. To date, many advanced approaches have been proposed in the literature, yet several difficulties remain to be tackled. One common challenge arises when attempting to align the features of different views. Moreover, because many existing multi-view clustering algorithms stem from spectral clustering, they incur cubic time complexity with respect to the size of the dataset. To address these issues, we propose Anchor-based Multi-view Subspace Clustering with Hierarchical Feature Descent (MVSC-HFD), which tackles the discrepancy among views through hierarchical feature descent and projection to a common subspace (Stage 1), revealing the dependency of different views. We further reduce the computational complexity to linear time through a unified sampling strategy in the common subspace (Stage 2), followed by anchor-based subspace clustering that learns the bipartite graph collectively (Stage 3). Extensive experimental results on public benchmark datasets demonstrate that our proposed model consistently outperforms state-of-the-art techniques.
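As a rough illustration of the anchor idea behind Stages 2 and 3 (not the paper's exact optimization), the sketch below samples a small set of anchors from the projected points and builds a row-normalized bipartite graph connecting each point to its nearest anchors. The function names, the uniform sampling, and the Gaussian-affinity choice are all assumptions for illustration.

```python
import math
import random

def build_anchor_bipartite_graph(points, num_anchors, k=2, seed=0):
    """Sample anchors uniformly from the points, then connect each
    point to its k nearest anchors with Gaussian affinities whose
    bandwidth is set locally from those k distances."""
    rng = random.Random(seed)
    anchors = rng.sample(points, num_anchors)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    graph = []
    for p in points:
        d2 = [dist2(p, a) for a in anchors]
        nearest = sorted(range(num_anchors), key=lambda j: d2[j])[:k]
        sigma2 = sum(d2[j] for j in nearest) / k + 1e-12
        weights = [0.0] * num_anchors
        for j in nearest:
            weights[j] = math.exp(-d2[j] / sigma2)
        total = sum(weights)
        graph.append([w / total for w in weights])  # each row sums to 1
    return anchors, graph
```

Because the graph is n-by-m with m anchors rather than n-by-n, downstream spectral steps scale linearly in the number of samples, which is the source of the linear time cost claimed above.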
Risk-sensitive reinforcement learning (RL) aims to optimize policies that balance expected reward and risk. In this paper, we present a novel risk-sensitive RL framework that employs an Iterated Conditional Value-at-Risk (CVaR) objective under both linear and general function approximations, enriched by human feedback. These new formulations provide a principled way to guarantee safety at each decision-making step throughout the control process. Moreover, integrating human feedback into the risk-sensitive RL framework bridges the gap between algorithmic decision-making and human participation, allowing us to also guarantee safety for human-in-the-loop systems. We propose provably sample-efficient algorithms for this Iterated CVaR RL setting and provide a rigorous theoretical analysis. Furthermore, we establish a matching lower bound to corroborate the optimality of our algorithms in the linear setting.
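For readers unfamiliar with the risk measure underlying the objective above, the snippet below computes the standard empirical CVaR of a finite reward sample: the average over the worst alpha-fraction of outcomes. The Iterated CVaR objective applies this measure recursively at every decision step; this sketch shows only the one-shot, static estimator.

```python
import math

def cvar(rewards, alpha):
    """Empirical Conditional Value-at-Risk at level alpha for rewards
    (higher is better): the mean of the ceil(alpha * n) worst outcomes."""
    n = len(rewards)
    tail = max(1, math.ceil(alpha * n))
    worst = sorted(rewards)[:tail]
    return sum(worst) / tail
```

Note that as alpha approaches 1 the measure recovers the ordinary expectation, while small alpha focuses entirely on worst-case outcomes, which is what makes it suitable as a per-step safety criterion.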
Panoramic image mosaicking, which stitches many regular photographic or video images together to cover the entire viewing space, plays an important role in many remote sensing tasks, including map updating, change detection, environmental monitoring, and surveillance. Typical mosaicking methods involve four steps: feature extraction, feature matching, transformation estimation, and blending. In this study, we introduce several novel strategies in these steps for the fast and automatic construction of unmanned aerial vehicle (UAV) panoramic image mosaics. First, we analyze and test several existing feature extraction techniques and propose to use oriented FAST and rotated BRIEF (ORB) because of its efficiency and its ability to generate high-quality feature points. Second, we introduce a fast and robust feature matching strategy based on descriptor similarity together with a locality-preserving geometric constraint. Third, we model the spatial transformation between a UAV image pair with an affine function and introduce a robust Bayesian framework to estimate this transformation from the ORB feature matches, even when these matches are contaminated by false ones. Finally, we propose a gradual fading method to fuse and blend the matched images into an attractive panorama. Qualitative and quantitative results on an image set demonstrate that our method outperforms existing methods in terms of accuracy and efficiency.
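To make the descriptor-similarity matching step concrete, the toy sketch below matches binary descriptors (ORB descriptors are bit strings compared by Hamming distance) using Lowe's ratio test: a match is accepted only when the nearest descriptor is clearly closer than the second nearest. This is a simplified stand-in, not the paper's full method, which additionally applies the locality-preserving geometric constraint; descriptors here are small integers for illustration.

```python
def hamming(a, b):
    """Hamming distance between two descriptors stored as integers."""
    return bin(a ^ b).count("1")

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 to its nearest neighbor in desc2,
    keeping the match only if it passes the ratio test against the
    second-nearest neighbor (requires len(desc2) >= 2)."""
    matches = []
    for i, d in enumerate(desc1):
        order = sorted(range(len(desc2)), key=lambda j: hamming(d, desc2[j]))
        best, second = order[0], order[1]
        if hamming(d, desc2[best]) < ratio * hamming(d, desc2[second]):
            matches.append((i, best))
    return matches
```

The surviving matches would then feed the robust Bayesian affine estimation stage, which must still tolerate whatever false matches pass this filter.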
Anchor graphs have recently been proposed to accelerate multi-view graph clustering and are widely applied in large-scale applications. Instead of capturing full instance relationships, these methods select a small set of anchors in each view, construct single-view anchor graphs, and combine them into a unified graph. Despite their efficiency, we observe that: (i) the existing mechanism adopts a separable two-step procedure, anchor graph construction followed by individual graph fusion, which may degrade the clustering performance; (ii) these methods fix the number of selected anchors to be equal across all views, which may ignore the diversity of the data distributions, so a more flexible multi-view anchor graph fusion framework with diverse anchor magnitudes is desired to enhance the representation ability; and (iii) during the fusion process, current anchor graph fusion frameworks follow a simple linear-combination style while the intrinsic clustering structures are ignored. To address these issues, we propose a novel scalable and flexible anchor graph fusion framework for multi-view graph clustering. Specifically, anchor graph construction and graph alignment are jointly optimized in our unified framework to boost clustering quality. Moreover, we present a novel structural alignment regularization to adaptively fuse multiple anchor graphs with different magnitudes. In addition, our proposed method inherits the linear complexity of existing anchor strategies with respect to the number of samples, which is time-economical for large-scale data. Experiments conducted on various benchmark datasets demonstrate the superiority and effectiveness of the proposed anchor graph fusion framework against existing state-of-the-art methods in terms of both clustering performance and time expenditure. Our code is publicly available at https://github.com/wangsiwei2010/SMVAGC-SF.
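To illustrate the baseline linear-combination fusion style criticized in point (iii), the sketch below fuses per-view anchor graphs by alternating between forming a weighted consensus and re-weighting each view inversely to its squared distance from that consensus. This is a generic adaptive-weighting heuristic written for illustration, not the structural alignment regularization proposed in the paper; all names are assumptions.

```python
def fuse_anchor_graphs(graphs, num_iters=10):
    """Fuse a list of same-shaped per-view anchor graphs (lists of rows).
    Alternates (a) consensus = weighted average of the view graphs and
    (b) view weights proportional to 1 / squared error from consensus,
    kept on the probability simplex."""
    v = len(graphs)
    rows, cols = len(graphs[0]), len(graphs[0][0])
    weights = [1.0 / v] * v
    for _ in range(num_iters):
        # (a) consensus graph as the current weighted combination
        consensus = [[sum(weights[k] * graphs[k][i][j] for k in range(v))
                      for j in range(cols)] for i in range(rows)]
        # (b) views closer to the consensus receive larger weights
        errors = [sum((g[i][j] - consensus[i][j]) ** 2
                      for i in range(rows) for j in range(cols)) + 1e-12
                  for g in graphs]
        inv = [1.0 / e for e in errors]
        s = sum(inv)
        weights = [x / s for x in inv]
    return consensus, weights
```

With two agreeing views and one outlier view, the agreeing pair ends up dominating the consensus; the paper's contribution is to replace this purely numerical reweighting with a regularizer that also respects the intrinsic clustering structure.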
Knowledge graph reasoning (KGR), which aims to deduce new facts from existing facts based on logic rules mined from knowledge graphs (KGs), has become a fast-growing research direction. It has been proven to significantly benefit the use of KGs in many AI applications, such as question answering and recommendation systems. According to the graph type, existing KGR models can be roughly divided into three categories: static models, temporal models, and multi-modal models. Early works in this domain mainly focus on static KGR, while recent works try to leverage temporal and multi-modal information, which is more practical and closer to real-world scenarios. However, no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill this gap, we conduct a first survey of knowledge graph reasoning, tracing its development from static to temporal and then to multi-modal KGs. Concretely, the models are reviewed under a bi-level taxonomy, i.e., a top level (graph types) and a base level (techniques and scenarios). In addition, the performances and datasets are summarized and presented. Finally, we point out challenges and potential opportunities to enlighten readers.