Landslides are common and highly destructive geological hazards that threaten human lives and property worldwide every year. In this study, a novel ensemble broad learning system (BLS) was proposed for evaluating landslide susceptibility in Taiyuan City, Northern China. Ensemble learning models based on the classification and regression tree (CART) and support vector machine (SVM) algorithms were also applied for comparison with the BLS-AdaBoost model. First, a total of 114 landslide locations were identified and randomly divided into two parts: 70% for model training and the remaining 30% for model validation. Twelve landslide conditioning factors were selected for mapping landslide susceptibility. Subsequently, three models, namely CART-AdaBoost, SVM-AdaBoost, and BLS-AdaBoost, were constructed and used to map landslide susceptibility. The frequency ratio (FR) was used to assess the relationship between landslides and the different conditioning factors. Finally, the three models were validated and compared using both statistical evaluations and ROC curve-based evaluations. The results showed that the ensemble model with BLS as the base learner achieved the highest AUC value of 0.889, followed by the ensemble models using CART (AUC = 0.873) and SVM (AUC = 0.846) as the base learners. In general, BLS-based ensemble learning methods are effective for evaluating landslide susceptibility. To date, applications of BLS and integrated BLS models to landslide susceptibility evaluation have been limited; this study is one of the first efforts to use them for this purpose. BLS and its improvements have the potential to provide a more powerful approach to assessing landslide susceptibility.
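The AdaBoost scheme used to wrap each base learner (CART, SVM, or BLS) can be sketched in a few lines. The following is a minimal sketch of discrete AdaBoost with a one-feature decision stump standing in for the base learner; the two-factor samples and labels are invented for illustration and are not data from the study.

```python
import math

def stump_train(X, y, w):
    """Pick the (feature, threshold, polarity) stump with lowest weighted error."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for pol in (1, -1):
                pred = [pol if x[f] >= t else -pol for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, pol)
    return best

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, f, t, pol = stump_train(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)  # weight of this weak learner
        ensemble.append((alpha, f, t, pol))
        # re-weight: misclassified samples gain weight for the next round
        pred = [pol if x[f] >= t else -pol for x in X]
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, pred)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x[f] >= t else -pol) for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy landslide (+1) / non-landslide (-1) cells, two conditioning factors each.
X = [[0.2, 0.9], [0.3, 0.8], [0.8, 0.2], [0.9, 0.1], [0.7, 0.3], [0.1, 0.7]]
y = [-1, -1, 1, 1, 1, -1]
model = adaboost(X, y, rounds=5)
print([predict(model, x) for x in X])  # → [-1, -1, 1, 1, 1, -1]
```

Swapping the stump for a full CART, SVM, or BLS learner gives the three compared models.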
Abstract The unique, ambiguous, and complex navigable environment makes Arctic shipping routes essentially different from conventional routes with regard to safety issues. To achieve a scientific understanding of the characteristics and variations of the environmental risks involved in Arctic shipping, it is essential to rationally address the uncertainty and incompleteness of environment‐related risk information. In this study, fuzzy evidential reasoning is introduced to carry out multisource heterogeneous data fusion and spatiotemporal dynamic assessment of navigable environmental risks for Arctic shipping routes. Based on big Earth data collected from the European Centre for Medium‐Range Weather Forecasts, the National Snow and Ice Data Center, the National Centers for Environmental Information, and the University of Bremen from 2012 to 2019, a case study of the Northeast Passage is considered to demonstrate the feasibility of the proposed methodology. Finally, the results are described from three aspects: spatial distribution, temporal changes, and sensitivity analysis, considering both the entire passage and five marginal seas. Based on these findings, the prospects for applying big Earth data to risk assessment are further discussed from two aspects, knowledge acquisition from big data and risk analysis at different scales, to inspire the sustainable development of Arctic shipping.
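The core fusion step in evidential reasoning can be illustrated with Dempster's rule of combination, which merges belief assignments from independent evidence sources. This is a minimal sketch, not the paper's exact fuzzy formulation: the risk grades, the two sources ("sea ice" and "wind"), and all mass values are illustrative assumptions.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over the same frame of discernment."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b  # frozenset intersection of hypotheses
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to contradictory pairs
    k = 1.0 - conflict  # normalization: redistribute conflicting mass
    return {s: v / k for s, v in combined.items()}

LOW, HIGH = frozenset({"low"}), frozenset({"high"})
BOTH = LOW | HIGH  # ignorance: "could be either grade"

m_ice = {HIGH: 0.6, BOTH: 0.4}             # illustrative sea-ice evidence
m_wind = {HIGH: 0.5, LOW: 0.2, BOTH: 0.3}  # illustrative wind evidence
fused = dempster_combine(m_ice, m_wind)
print({"/".join(sorted(s)): round(v, 3) for s, v in fused.items()})
```

Agreement between the sources concentrates belief on the "high" grade, while residual ignorance shrinks, which is the qualitative behavior a fused risk map relies on.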
Gomoku (also called "Five in a Row") is one of the earliest popular board games invented by humans. In computing research, a tree-based data structure and its branching factor are the common tools for analyzing such board games. While eye-tracking techniques have been used for decades to interpret human behavior and improve human-computer interaction, they have rarely been utilized to analyze engagement in game playing. Utilizing game refinement theory alongside an eye-tracking technique, the objective of this paper is to propose a new algorithm to measure the branching factor and quantify the personalized challenge of playing a game. In addition, investigating the eye-tracking parameters may also provide a means of measuring a player's ability, further supporting the main objective of this paper. The findings showed that the proposed algorithm is a promising approach to help game developers design games with a personalized challenge that adapts to the player's ability, thereby possibly improving players' engagement and entertainment in games as well as other domains in the future.
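Game refinement theory, which the paper builds on, commonly summarizes a game by the measure R = sqrt(B) / D, where B is the average branching factor and D the average game length; values around 0.07–0.08 are typically associated with well-balanced games. The sketch below computes this measure with the classic chess-like figures (B ≈ 35, D ≈ 80) as illustration; these numbers are not measurements from the paper's Gomoku experiments.

```python
import math

def refinement(avg_branching, avg_depth):
    """Game refinement value R = sqrt(B) / D (uncertainty growth per move)."""
    return math.sqrt(avg_branching) / avg_depth

# Illustrative chess-like figures, widely quoted in game refinement literature.
r = refinement(35, 80)
print(round(r, 4))  # ≈ 0.0739, inside the 0.07–0.08 "engaging" band
```

A personalized version would substitute per-player estimates of B and D, e.g. derived from the eye-tracking-based branching factor the paper proposes.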
Real-world network structures often exhibit overlapping communities, and high-quality communities are helpful for understanding real complex networks; the discovery of overlapping communities has therefore become a hot spot in current recommendation research. To address the randomness and instability of the original overlapping community discovery algorithm, this paper proposes CODA-BS, an overlapping community discovery algorithm based on the priority of community-degree membership. In each iteration, CODA-BS uses information entropy to calculate the vertex threshold, uses the weighting function proposed in this paper to optimize the label membership coefficients, and performs label screening and normalization of the membership coefficients according to the label propagation rules, so as to detect better communities. Tests on a benchmark data set and on data sets of Chinese Association for Science and Technology scholars show that CODA-BS is more accurate and more stable than the classic COPRA algorithm.
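The per-vertex update described above can be sketched as follows. This is a COPRA-style illustration, not the paper's exact CODA-BS formulas: a vertex averages its neighbors' label membership coefficients, prunes labels below an entropy-derived threshold, and renormalizes. The specific entropy-to-threshold mapping and the toy neighbor coefficients are assumptions made for illustration.

```python
import math

def entropy_threshold(coeffs):
    """Illustrative vertex threshold that grows with coefficient entropy:
    a near-uniform (uncertain) distribution is pruned most aggressively."""
    h = -sum(c * math.log(c) for c in coeffs.values() if c > 0)
    h_max = math.log(len(coeffs)) if len(coeffs) > 1 else 1.0
    return (h / h_max) / len(coeffs)

def update_vertex(neighbour_coeffs):
    """One label-propagation step for a vertex: merge, screen, normalize."""
    merged = {}
    for coeffs in neighbour_coeffs:
        for label, c in coeffs.items():
            merged[label] = merged.get(label, 0.0) + c / len(neighbour_coeffs)
    t = entropy_threshold(merged)
    kept = {l: c for l, c in merged.items() if c >= t}  # label screening
    total = sum(kept.values())
    return {l: c / total for l, c in kept.items()}      # renormalize

# Three neighbours with toy label memberships (each sums to 1).
nbrs = [{"A": 1.0}, {"A": 0.5, "B": 0.5}, {"B": 0.2, "C": 0.8}]
print(update_vertex(nbrs))  # weak labels B and C are screened out
```

Vertices keeping more than one label after screening are exactly the overlapping vertices the algorithm aims to identify.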
Detecting sequential coffee flowering events and estimating flower density are essential for predicting coffee ripening time and yield. In this study, we detected coffee flowering events automatically based on flower densities estimated from high-spatial-resolution time-series digital images, using a multi-scale region-based flower segmentation method. The study area is a coffee plantation in Lujiangba in Yunnan Province, China; there were five flowering events in the coffee flowering period. A digital camera obtained 24 RGB images at each of 11 shooting times of day (8:00, 9:00, 10:00, 11:00, 12:00, 13:00, 14:00, 15:00, 16:00, 17:00, 17:30) by automatically adjusting the sensor through 3 depression angles and 8 azimuth angles during the coffee flowering period from March 1st to May 31st. To segment the flowers in an image, multi-scale regions were first generated by equally sized superpixel segmentation and a subsequent superpixel merging process. Next, feature vectors for each region were extracted with the color moments (CM) operator and the local binary patterns (LBP) operator. Afterwards, a support vector machine (SVM) classifier trained on these features was applied to recognize the flower regions. The percentage of flower pixels, referred to as the flower proportion (FP), which estimates the image-based flower density, was then calculated in preparation for detecting flowering events in the time-series images. At this stage, Recall, Precision, and intersection over union (IoU) were employed to evaluate the performance of the segmentation methods on 14 test images at three depression angles, from which the best flower segmentation algorithm and the optimal angle were determined. In the flowering event detection stage, the FPs of multitemporal images taken at the different shooting times under the optimal depression angle were calculated and plotted. A threshold on FP, K, was then selected to determine whether an image corresponds to a flowering day.
To determine the best shooting time for flowering event detection, Recall, Precision, and IoU were also employed to evaluate the performance of the time-series images shot at the 11 times of day for flowering day detection. The results show that a depression angle of 77.5 degrees is optimal for flower segmentation, where our proposed method achieves its best performance, with Recall, Precision, and IoU of 84.89%, 74.83%, and 65.46%, respectively. In the flowering day detection test on the 11 sets of multitemporal images, the time series shot at 13:00 is superior to those shot at other times of day, with a Recall of 65.00%, Precision of 100%, and IoU of 65.00% when K is set to 0.4%. Meanwhile, all flowering events can be detected except the fifth, which has few flowers, and the FPs of the time-series images correctly indicate the flower densities of each event. In conclusion, our approach can estimate image-based flower density and detect sequential coffee flowering events in small fields, so the results can be used for coffee fruit maturity prediction and yield estimation.
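The final detection step reduces to comparing each image's flower proportion against the threshold K. The sketch below shows that decision rule with K = 0.4% as reported for the 13:00 series; the FP values and dates in the series are invented for illustration, not the study's measurements.

```python
K = 0.4  # percent of flower pixels; the reported threshold for 13:00 images

def flower_proportion(flower_pixels, total_pixels):
    """FP: percentage of pixels classified as flower in one image."""
    return 100.0 * flower_pixels / total_pixels

def flowering_days(fp_by_day, k=K):
    """A day is a flowering day when its image's FP reaches the threshold."""
    return [day for day, fp in fp_by_day if fp >= k]

# Invented daily FP series (date, FP in percent) around two flowering events.
series = [("03-12", 0.05), ("03-13", 1.8), ("03-14", 2.6),
          ("03-15", 0.3), ("04-02", 0.9)]
print(flowering_days(series))  # → ['03-13', '03-14', '04-02']
```

Runs of consecutive detected days then delineate the individual flowering events, whose FP peaks indicate relative flower density.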
Abstract This work takes the digital image watermark as its object of study. It analyzes typical digital watermarking algorithms based on the spatial and transform domains, with a focus on watermarking algorithms based on the discrete wavelet transform. A blind watermarking algorithm and a color image watermarking algorithm are designed and improved. Finally, building on the two improved algorithms, a dual watermarking algorithm is designed, in which the two watermarks are separate but related. The dual watermarking algorithm is validated not only by subjective visual evaluation but also by objective numerical evaluation and quantitative analysis. Experimental results show that the dual watermarking algorithm combines robustness with imperceptibility.
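The wavelet-domain embedding idea can be sketched with a one-level Haar transform: the image is split into LL/LH/HL/HH sub-bands, a bit pattern is added to one sub-band's coefficients, and the inverse transform reconstructs the marked image. This is a generic illustration, not the paper's algorithm; the choice of the LH band, the strength 0.5, and the random cover are all assumptions.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform → (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2      # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2      # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(0)
cover = rng.uniform(0, 255, size=(8, 8))       # stand-in cover image
mark = rng.choice([-1.0, 1.0], size=(4, 4))    # +/-1 watermark bits

ll, lh, hl, hh = haar2(cover)
marked = ihaar2(ll, lh + 0.5 * mark, hl, hh)   # embed in the LH band

# The transform is exactly invertible, so the bits are recoverable
# (a non-blind extraction: it uses the original LH band as reference).
recovered = np.sign(haar2(marked)[1] - lh)
print(np.array_equal(recovered, mark))  # → True
```

A blind variant, as in the paper, would instead recover the bits from coefficient relationships without access to the original image.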
Recently, few-shot learning has attracted significant attention in the field of video action recognition, owing to its data-efficient learning paradigm. Despite the encouraging progress, identifying ways to further improve few-shot learning performance by exploring additional or auxiliary information for video action recognition remains an ongoing challenge. To address this problem, in this paper we make the first attempt to propose a relational action bank with semantic–visual attention for few-shot action recognition. Specifically, we introduce a relational action bank as an auxiliary library to assist the network in understanding the actions in novel classes. Meanwhile, the semantic–visual attention is devised to adaptively capture connections to previously learned actions via both semantic correlation and visual similarity. We extensively evaluate our approach with two backbone models (ResNet-50 and C3D) on the HMDB and Kinetics datasets, and demonstrate that the proposed model obtains significantly better performance than state-of-the-art methods. Notably, our results show an average improvement of about 6.2% over the second-best method on the Kinetics dataset.
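The attention mechanism described above can be sketched as follows: a query action attends to each bank entry through both a semantic score (similarity of label embeddings) and a visual score (similarity of video features), which are fused and softmax-normalized into attention weights. The equal-weight fusion, feature dimensions, and random features below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def semantic_visual_attention(q_vis, q_sem, bank_vis, bank_sem):
    """Attention weights over the action bank, fusing two similarity cues."""
    scores = np.array([
        0.5 * cosine(q_vis, v) + 0.5 * cosine(q_sem, s)
        for v, s in zip(bank_vis, bank_sem)
    ])
    e = np.exp(scores - scores.max())  # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(1)
bank_vis = rng.normal(size=(5, 16))  # 5 bank actions: stand-in visual features
bank_sem = rng.normal(size=(5, 8))   # matching stand-in semantic embeddings

# Query identical to bank entry 2 → attention concentrates on that entry.
w = semantic_visual_attention(bank_vis[2], bank_sem[2], bank_vis, bank_sem)
print(int(w.argmax()))  # → 2
```

In the full model these weights would gate which bank actions contribute auxiliary information to classifying the novel-class query.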
In the field of visual attention, bottom-up or saliency-based visual attention allows primates to detect non-specific conspicuous objects or targets in cluttered scenes. Simple multi-scale "feature maps" detect local spatial discontinuities in intensity, color, and orientation, and are combined into a "saliency" map. HMAX is a feature extraction method motivated by a quantitative model of the visual cortex. In this paper, we introduce Saliency Criteria to measure the perspective fields. The model is based on cortex-like mechanisms and sparse representation; the Saliency Criteria are obtained from Shannon's self-information and entropy. We demonstrate that the proposed model achieves superior accuracy in comparison with a classical approach to static saliency map generation, on real data of natural scenes and psychological stimulus patterns.
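The self-information idea behind such a saliency criterion can be sketched directly: a location's saliency is -log p(feature), so rare feature responses stand out. The histogram-based density estimate below is an illustrative stand-in for the paper's sparse-representation model, and the test image is a toy pattern.

```python
import numpy as np

def self_information_saliency(feature_map, bins=16):
    """Saliency = Shannon self-information of each response under the
    empirical distribution of responses in the map."""
    hist, edges = np.histogram(feature_map, bins=bins)
    p = hist / hist.sum()
    # Map each response back to its histogram bin, then to -log probability.
    idx = np.clip(np.digitize(feature_map, edges[1:-1]), 0, bins - 1)
    return -np.log(p[idx] + 1e-12)  # rare responses → high saliency

# Toy feature map: uniform background with one conspicuous response.
img = np.zeros((16, 16))
img[7, 7] = 5.0
sal = self_information_saliency(img)
peak = np.unravel_index(sal.argmax(), sal.shape)
print(peak)  # → (7, 7): the rare response is the most salient location
```

Averaging such self-information maps over several feature channels (intensity, color, orientation) yields a combined saliency map in the spirit of the model described above.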