Unmanned aerial vehicles (UAVs) have facilitated a wide range of real-world applications and attracted extensive research in the mobile computing field. Specifically, developing real-time robust visual onboard trackers for all-day aerial maneuvers can remarkably broaden the scope of intelligent UAV deployment. However, prior tracking methods have focused merely on robust tracking in well-illuminated scenes, ignoring trackers' capability to be deployed in the dark. In darkness, conditions can be more complex and harsh, easily leading to inferior tracking robustness or even tracking failure. To this end, this work proposes a novel discriminative correlation filter-based tracker with illumination-adaptive and anti-dark capability, namely ADTrack. ADTrack first exploits image illuminance information to adapt the model to the given light condition. Then, by virtue of an efficient enhancer, ADTrack carries out image pretreatment, during which a target-aware mask is generated. Benefiting from the mask, ADTrack solves a novel dual regression problem where dual filters are trained online with a mutual constraint. Besides, this work also constructs a UAV nighttime tracking benchmark, UAVDark135. Exhaustive experiments on authoritative benchmarks and onboard tests validate the superiority and robustness of ADTrack in all-day conditions.
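For context, ADTrack's dual filters build on the standard discriminative correlation filter (DCF), which admits a closed-form solution in the Fourier domain. The following is a minimal sketch of that single-channel baseline (essentially a MOSSE-style filter), not ADTrack's dual regression, mutual constraint, or target-aware mask; all function names are illustrative.

```python
import numpy as np

def train_dcf(x, y, lam=1e-2):
    """Closed-form single-channel DCF training in the Fourier domain.
    x: 2-D feature patch, y: desired Gaussian response, lam: ridge weight."""
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    # Element-wise solution of the ridge regression
    # min ||x * w - y||^2 + lam * ||w||^2, where * is circular correlation.
    return np.conj(X) * Y / (np.conj(X) * X + lam)

def detect(w_hat, z):
    """Correlate the learned filter with a search patch z; the response
    peak gives the estimated target location."""
    response = np.real(np.fft.ifft2(w_hat * np.fft.fft2(z)))
    return np.unravel_index(np.argmax(response), response.shape)
```

ADTrack extends this single-filter regression to a dual regression in which two such filters are trained online under a mutual constraint.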
As a crucial robotic perception capability, visual tracking has been intensively studied in recent years. In real-world scenarios, the onboard processing time of image streams inevitably leads to a discrepancy between the tracking results and the real-world states. However, existing visual tracking benchmarks commonly run trackers offline and ignore such latency in the evaluation. In this work, we aim to deal with the more realistic problem of latency-aware tracking. State-of-the-art trackers are evaluated in aerial scenarios with new metrics that jointly assess tracking accuracy and efficiency. Moreover, a new predictive visual tracking baseline is developed to compensate for the latency stemming from onboard computation. Our latency-aware benchmark provides a more realistic evaluation of trackers for robotic applications. Besides, exhaustive experiments have proven the effectiveness of the proposed predictive visual tracking baseline.
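A minimal illustration of latency compensation is to extrapolate the last tracker output over the remaining onboard processing delay with a constant-velocity model. This is a simplified stand-in for the predictive baseline above, whose exact motion model is not detailed here; the function name and signature are assumptions.

```python
def compensate_latency(positions, timestamps, t_now):
    """Extrapolate the latest tracker output to the time it is consumed.

    positions  -- past (x, y) box centers produced by the tracker (>= 2)
    timestamps -- capture times of the frames those outputs correspond to
    t_now      -- wall-clock time at which the result is actually used
    """
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    t0, t1 = timestamps[-2], timestamps[-1]
    vx = (x1 - x0) / (t1 - t0)  # constant-velocity estimate
    vy = (y1 - y0) / (t1 - t0)
    dt = t_now - t1             # latency still unaccounted for
    return (x1 + vx * dt, y1 + vy * dt)
```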
In this study, a software tool (IFGFA) for the identification of featured genes from gene expression data based on latent factor analysis was developed. Despite the availability of computational methods and statistical models appropriate for analyzing special genomic data, IFGFA provides a platform for predicting colon cancer-related genes and can be applied to other cancer types. The computational framework behind IFGFA is based on the well-established Bayesian factor and regression model and prior knowledge about the genes from OMIM. We validated the predicted genes by analyzing somatic mutations in patients. An interface was developed to enable users to run the computational framework efficiently through visual programming. IFGFA is executable on a Windows system and does not require additional software.
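As a rough, non-Bayesian analogue of the factor-and-regression pipeline above, one can extract latent factor scores from the expression matrix and regress a phenotype on them. The sketch below substitutes SVD and ridge regression as point-estimate stand-ins for the Bayesian model; it is illustrative only, and all names are assumptions.

```python
import numpy as np

def latent_factor_scores(expr, k):
    """Project a (samples x genes) expression matrix onto its top-k latent
    factors; a point-estimate analogue of Bayesian factor analysis."""
    centered = expr - expr.mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :k] * s[:k]  # per-sample factor scores

def ridge_fit(F, y, lam=1.0):
    """Ridge regression of phenotype y on factor scores F; a non-Bayesian
    stand-in for the Bayesian regression step."""
    k = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(k), F.T @ y)
```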
Siamese object tracking has facilitated diversified applications for autonomous unmanned aerial vehicles (UAVs). However, Siamese trackers are typically trained on general images with relatively large objects rather than the small objects observed from UAVs. The gap in object scale between the training and inference phases tends to cause suboptimal tracking performance or even failure. To close this gap and tailor general Siamese trackers for UAV tracking, this work proposes a novel scale-aware domain adaptation framework, i.e., ScaleAwareDA. Specifically, a contrastive learning-inspired network is proposed to guide the training phase of Transformer-based feature alignment for small objects. In this network, a feature projection module is designed to avoid information loss for small objects, and a feature prediction module is developed to drive the aforementioned training phase in a self-supervised way. In addition, to construct the target domain, training datasets with UAV-specific attributes are obtained by downsampling general training datasets. Consequently, this novel training approach helps a tracker represent objects in UAV scenarios more powerfully and thus maintain its robustness. Extensive experiments on three authoritative, challenging UAV tracking benchmarks demonstrate the superior tracking performance of ScaleAwareDA. In addition, quantitative real-world tests further attest to its practicality.
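The target-domain construction step can be pictured as downsampling a general training image and its annotation so the object lands at a UAV-like small-object scale. The sketch below is schematic rather than the paper's exact procedure; the scale factor and function name are assumptions.

```python
import cv2

def make_uav_style_sample(image, box, scale=0.25):
    """Downsample a general training image so its object matches the
    small-object scale typical of UAV footage.

    image -- HxWx3 array; box -- (x, y, w, h) in pixels;
    scale -- shrink factor (0.25 is an illustrative choice).
    """
    h, w = image.shape[:2]
    small = cv2.resize(image, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)
    x, y, bw, bh = box
    return small, (x * scale, y * scale, bw * scale, bh * scale)
```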
Aircraft tracking plays a key role in the Sense-and-Avoid system of Unmanned Aerial Vehicles (UAVs). This paper presents a novel robust visual tracking algorithm for UAVs in midair to track an arbitrary aircraft at real-time frame rates, together with a unique evaluation system. The visual algorithm mainly consists of an adaptive discriminative visual tracking method, a Multiple-Instance (MI) learning approach, a Multiple-Classifier (MC) voting mechanism, and a Multiple-Resolution (MR) representation strategy, and is therefore called the Adaptive M^3 tracker, i.e., AM^3. In this tracker, the importance of each test sample is integrated to improve tracking stability, accuracy, and real-time performance. The experimental results show that this algorithm is more robust, efficient, and accurate than existing state-of-the-art trackers, overcoming problems generated by challenging situations such as obvious appearance change, varying surrounding illumination, partial aircraft occlusion, motion blur, rapid pose variation, onboard mechanical vibration, low computational capacity, and delayed information communication between UAVs and the Ground Station (GS). To the best of our knowledge, this is the first work to present such a tracker for online learning and tracking of an arbitrary aircraft/intruder from UAVs.
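The Multiple-Classifier voting mechanism can be illustrated as a weighted vote over candidate patches, with each classifier contributing a confidence score per candidate. This generic formulation is an assumption; the paper's actual weighting scheme may differ.

```python
import numpy as np

def mc_vote(scores, weights):
    """Weighted multiple-classifier voting.

    scores  -- (n_classifiers, n_candidates) confidence matrix
    weights -- (n_classifiers,) per-classifier reliability weights
    Returns the index of the winning candidate patch.
    """
    combined = weights @ scores  # weighted sum of the classifiers' votes
    return int(np.argmax(combined))
```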
Previous advances in object tracking have mostly been reported under favorable illumination circumstances while neglecting performance at nighttime, which has significantly impeded the development of related aerial robot applications. This work instead develops a novel unsupervised domain adaptation framework for nighttime aerial tracking (named UDAT). Specifically, a unique object discovery approach is proposed to generate training patches from raw nighttime tracking videos. To tackle the domain discrepancy, we employ a Transformer-based bridging layer after the feature extractor to align image features from both domains. With a Transformer day/night feature discriminator, the daytime tracking model is adversarially trained to track at night. Moreover, we construct a pioneering benchmark, namely NAT2021, for unsupervised domain adaptive nighttime tracking, which comprises a test set of 180 manually annotated tracking sequences and a training set of over 276k unlabelled nighttime tracking frames. Exhaustive experiments demonstrate the robustness and domain adaptability of the proposed framework in nighttime aerial tracking. The code and benchmark are available at https://github.com/vision4robotics/UDAT.
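Adversarial day/night training of this kind is commonly implemented with a gradient reversal layer, so that the feature extractor is optimized to confuse the domain discriminator. The PyTorch sketch below shows that generic pattern, not UDAT's actual code; the discriminator interface is an assumption.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so the feature extractor learns to fool the day/night discriminator."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

def domain_loss(features, domain_labels, discriminator):
    """Adversarial alignment loss on day/night features.

    features      -- (N, D) pooled features from the bridging layer
    domain_labels -- (N,) long tensor, 0 = day, 1 = night
    discriminator -- any nn.Module mapping D features to 2 logits
    """
    logits = discriminator(GradReverse.apply(features))
    return nn.functional.cross_entropy(logits, domain_labels)
```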