As an effective learning method in machine learning, the support vector machine (SVM) has been used for the classification and recognition of tongue images. Although this approach has achieved some success, the results remain unsatisfactory because the tongue samples belonging to different categories are unbalanced in size. To address this problem, an alternative method called the weighted support vector machine is introduced. Its basic idea is to assign different weights to the penalty term for the samples from the two classes according to their relative importance. The recognition accuracy on the more important data is thus improved, while the accuracy on the less important samples remains acceptable. Experimental results show that the method works well for the classification of unbalanced tongue images.
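As a rough illustration of the weighting idea (the paper's exact formulation is not reproduced here), scikit-learn's SVC exposes a per-class penalty through its class_weight argument, which scales the C term separately for each class; the data and the weights below are illustrative only.

from sklearn.svm import SVC
import numpy as np

# Toy unbalanced data: class 1 (important, rare) vs. class 0 (common).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (20, 2))])
y = np.hstack([np.zeros(200), np.ones(20)])

# Weighted SVM: a larger penalty weight on the rare/important class
# penalizes its misclassification more heavily during training.
clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 5.0})
clf.fit(X, y)
print(clf.score(X, y))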
The highest-energy gamma-rays from gamma-ray bursts (GRBs) have important implications for their radiation mechanism. Here we report for the first time the detection of gamma-rays up to 13 TeV from the brightest GRB 221009A by the Large High Altitude Air-shower Observatory (LHAASO). The LHAASO-KM2A detector registered more than 140 gamma-rays with energies above 3 TeV during 230$-$900 s after the trigger. The intrinsic energy spectrum of the gamma-rays can be described by a power law after correcting for extragalactic background light (EBL) absorption. Such a hard spectrum challenges the synchrotron self-Compton (SSC) scenario of relativistic electrons for the afterglow emission above several TeV. The observation of gamma-rays up to 13 TeV from a source with a measured redshift of z=0.151 hints at greater transparency of intergalactic space than previously expected. Alternatively, one may invoke new physics such as Lorentz Invariance Violation (LIV) or an axion origin of the very high energy (VHE) signals.
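The EBL correction mentioned above follows the standard attenuation relation: the observed flux is suppressed by a factor $e^{-\tau(E,z)}$, where $\tau(E,z)$ is the pair-production optical depth on the EBL, so the intrinsic spectrum is recovered as

\[ \left(\frac{dN}{dE}\right)_{\rm int} = \left(\frac{dN}{dE}\right)_{\rm obs} e^{\tau(E,z)}, \qquad \left(\frac{dN}{dE}\right)_{\rm int} \propto E^{-\gamma}, \]

with the deabsorbed spectrum found to be consistent with a power law of index $\gamma$ (this is the standard relation, not quoted from the text).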
In this paper, a vocabulary-tree-based large-scale image retrieval scheme is proposed that achieves higher accuracy and speed. The novelty of this paper can be summarized as follows. First, because traditional Scale Invariant Feature Transform (SIFT) descriptors are excessively concentrated in some areas of images, the SIFT extraction process is optimized to reduce their number. Then, combined with the optimized SIFT, a color histogram in the Hue, Saturation, Value (HSV) color space is extracted as an additional image feature. Moreover, Local Fisher Discriminant Analysis (LFDA) is applied to reduce the dimension of the SIFT and color features, which helps shorten feature-clustering time. Finally, the dimension-reduced features are used to generate vocabulary trees for large-scale image retrieval. Experimental results on several image datasets show that the proposed method achieves satisfactory retrieval precision.
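A minimal sketch of the two feature-extraction steps, assuming OpenCV; the paper's optimized SIFT thinning, the LFDA projection, and the vocabulary-tree construction are not specified here and are omitted, and the file name and histogram bin counts are illustrative.

import cv2
import numpy as np

img = cv2.imread("query.jpg")  # illustrative path

# SIFT descriptors (128-D); plain SIFT is used here since the paper's
# "optimized" thinning of spatially clustered keypoints is not specified.
sift = cv2.SIFT_create(nfeatures=500)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
keypoints, descriptors = sift.detectAndCompute(gray, None)

# HSV color histogram as a complementary global feature.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 4, 4],
                    [0, 180, 0, 256, 0, 256])
hist = cv2.normalize(hist, None).flatten()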
As a prevalent constraint, a sharp slew rate is often required in circuit design, which creates a huge demand for buffering resources. This problem calls for ultrafast buffering techniques that can handle large volumes of nets while also minimizing buffering cost, and it is intensively studied in this paper. First, a highly efficient algorithm based on dynamic programming is proposed to optimally solve slew buffering with discrete buffer locations. Second, a new algorithm using the maximum-matching technique is developed to handle the difficult cases in which no assumption is made on buffer input slew. Third, an adaptive buffer selection approach is proposed to efficiently handle slew buffering with continuous buffer locations. Fourth, buffer blockage avoidance is handled, which makes the algorithms ready for practical use. Experiments on industrial netlists demonstrate that our algorithms are very effective and highly efficient: we achieve about a 90x speedup and save up to 20% buffer area compared with the commonly used van Ginneken-style buffering. The new algorithms also significantly outperform previous works that address the slew buffering problem only indirectly.
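The discrete-location variant of the problem admits a compact illustration. The sketch below is not the paper's algorithm but a toy dynamic program on a single two-pin net, with an assumed linear slew-degradation model and illustrative constants: a buffer resets the slew, and dp[i] is the minimum buffer count with a buffer (or the driver) at candidate location i.

positions = [0, 120, 260, 400, 530, 700]   # driver at 0, sink at 700 (um)
SLEW_PER_UM = 1.0                          # assumed degradation rate (ps/um)
SLEW_LIMIT = 200.0                         # assumed maximum slew (ps)

INF = float("inf")
n = len(positions)
dp = [INF] * n
dp[0] = 0                                  # the driver needs no buffer
for i in range(1, n):
    for j in range(i):
        # Transition is legal only if the unbuffered span meets the limit.
        if (positions[i] - positions[j]) * SLEW_PER_UM <= SLEW_LIMIT:
            dp[i] = min(dp[i], dp[j] + 1)
# dp[n-1] counts a fictitious buffer at the sink itself; subtract it.
print("min buffers:", dp[n - 1] - 1)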
Tongue diagnosis is widely used in Traditional Chinese Medicine (TCM), and tongue image classification based on pattern recognition plays an important role in the modernization of TCM. However, because labeled tongue samples are rare and costly or time-consuming to obtain, most existing methods, such as SVM, use only labeled training samples, so the resulting classifiers usually perform poorly. In contrast, Universum SVM is a promising method that incorporates a priori knowledge into the learning process using both labeled data and irrelevant data (also called universum data). In tongue image classification, the number of irrelevant instances can be very large, since there are many irrelevant categories for a given tongue type. However, not all irrelevant instances improve the classifier's performance when included in training, so an algorithm for selecting universum samples is also introduced in this paper. Experimental results show that the Universum SVM classifier improves classification performance and that the algorithm for selecting universum samples is effective.
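One simple way to experiment with the universum idea (an approximation, not the paper's exact formulation) is to add each universum point twice with opposite labels and a small sample weight, which encourages the decision boundary to pass near the universum data; the sketch below uses scikit-learn and synthetic data.

from sklearn.svm import SVC
import numpy as np

# X, y: labeled samples; X_u: selected universum (irrelevant) samples.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
X_u = rng.normal(0, 0.5, (30, 2))   # universum: belongs to neither class

# Duplicate each universum point with both labels and a small weight.
X_all = np.vstack([X, X_u, X_u])
y_all = np.hstack([y, -np.ones(30), np.ones(30)])
w_all = np.hstack([np.ones(100), 0.1 * np.ones(60)])

clf = SVC(kernel="linear", C=1.0)
clf.fit(X_all, y_all, sample_weight=w_all)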
The impression section of a radiology report summarizes the most prominent observations from the findings section and is the most important section for radiologists to communicate to physicians. Summarizing findings is time-consuming and can be error-prone for inexperienced radiologists, so automatic impression generation has attracted substantial attention. Within the encoder-decoder framework, most previous studies explore incorporating extra knowledge (e.g., static pre-defined clinical ontologies or extra background information). Yet they encode such knowledge with a separate encoder and treat it as an extra input to their models, which limits their ability to leverage its relations with the original findings. To address this limitation, we propose a unified framework that exploits both the extra knowledge and the original findings in an integrated way, so that the critical information (i.e., key words and their relations) can be extracted appropriately to facilitate impression generation. In detail, each input findings text is encoded by a text encoder, and a graph is constructed from its entities and dependency tree. A graph encoder (e.g., a graph neural network (GNN)) is then adopted to model the relational information in the constructed graph. Finally, to emphasize the key words in the findings, contrastive learning is introduced to map positive samples (constructed by masking non-key words) closer and push apart negative ones (constructed by masking key words). Experimental results on OpenI and MIMIC-CXR confirm the effectiveness of the proposed method.
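The contrastive objective described above can be sketched as an InfoNCE-style loss (an assumed form; the paper's exact loss and encoder details are not given here), where the anchor is the findings embedding, the positive view masks non-key words, and the negative view masks key words.

import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negative, tau=0.1):
    # anchor, positive, negative: (batch, dim) embeddings produced by the
    # text/graph encoders for the original, positive, and negative views.
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negative, dim=-1)
    # Pull the anchor toward the positive view, push it from the negative.
    pos = torch.exp((a * p).sum(-1) / tau)
    neg = torch.exp((a * n).sum(-1) / tau)
    return -torch.log(pos / (pos + neg)).mean()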
To improve the detection efficiency of blind hidden-information detection systems, an improved detection method based on rough set theory is proposed to address the high dimensionality of statistical image features and the strong correlation among them. First, an improved general steganalysis framework is proposed, together with a practical method and steps. Second, an algorithm based on rough set theory reduces the feature dimension and the computational complexity of classification, and eliminates the correlation among statistical features. Third, the realization procedure of this algorithm is given, and an SVM classifier is employed to test it against the spread-spectrum steganography schemes of Cox and Piva. Extensive experimental results show that the algorithm is correct and achieves higher time efficiency and accuracy than Shi's method and the method cited in the reference.
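As a rough illustration of rough-set feature reduction (a greedy reduct search on discretized features, not the paper's exact algorithm), the sketch below repeatedly adds the attribute that most increases the dependency degree of the class decision on the selected attributes; the data, discretization, and label rule are all synthetic.

import numpy as np

def dependency(X_disc, y, attrs):
    # Fraction of samples whose equivalence class (identical values on
    # `attrs`) is consistent, i.e. contains a single class label.
    groups = {}
    for key, label in zip(map(tuple, X_disc[:, attrs]), y):
        groups.setdefault(key, []).append(label)
    consistent = sum(len(v) for v in groups.values() if len(set(v)) == 1)
    return consistent / len(y)

def greedy_reduct(X_disc, y):
    attrs, rest = [], list(range(X_disc.shape[1]))
    while rest and dependency(X_disc, y, attrs) < 1.0:
        best = max(rest, key=lambda a: dependency(X_disc, y, attrs + [a]))
        attrs.append(best)
        rest.remove(best)
    return attrs

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))
X_disc = (X > 0).astype(int)          # crude binary discretization
y = X_disc[:, 0] ^ X_disc[:, 3]       # label depends only on features 0, 3
print("selected features:", greedy_reduct(X_disc, y))

The reduced feature subset would then be fed to the SVM classifier in place of the full high-dimensional statistical feature vector.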