An OMAPL138-based portable EEG detection system is designed in this paper. The system receives amplified and digitized EEG signals through a wireless module and ports the processing algorithms to the OMAPL138 processor to realize preprocessing, storage, and dynamic display of the data. Compared with traditional EEG detection systems, this design adds flexibility and portability; compared with existing portable EEG processing systems, it operates independently of a host computer and does not require mains power. The whole system is therefore smaller and more portable, and it offers characteristics such as small volume, low power consumption, strong anti-interference capability, and portability.
Abstract To detect and intelligently identify the defects of the vermicular cast iron cylinder head, defective casting samples were made corresponding to each type of actual defect. We set up an ultrasonic testing system to examine the defective samples. The detected defect signals were processed to obtain characteristic spectrograms of the defects, which were then sorted and classified into a sample database. An algorithm based on a convolutional neural network was proposed to identify the defects intelligently. A convolutional neural network model was established, and its structure and parameters were optimized. A network with 3×3 convolution kernels, 3 convolutional layers, 20 kernels per layer, and a learning rate of 0.0005 effectively identifies the defect spectrograms. The results show that the identification accuracy of the proposed algorithm is 97.14%. The model meets the practical requirements of cylinder head defect detection, and detection efficiency is significantly improved.
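A minimal sketch of the kind of network described above, written in PyTorch: 3 convolutional layers, 20 kernels of size 3×3 per layer, and a learning rate of 0.0005 as stated in the abstract. The input spectrogram size (1×64×64), the number of defect classes (5), the pooling layers, and the choice of optimizer are illustrative assumptions, not details given in the paper.

```python
# Sketch only: hyperparameters from the abstract, architecture details assumed.
import torch
import torch.nn as nn

class DefectCNN(nn.Module):
    def __init__(self, num_classes: int = 5):   # number of defect classes is assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(20, 20, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(20, 20, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(20 * 8 * 8, num_classes)  # 64 -> 32 -> 16 -> 8

    def forward(self, x):
        # x: (batch, 1, 64, 64) spectrogram images of the ultrasonic defect echoes
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = DefectCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # learning rate from the abstract
criterion = nn.CrossEntropyLoss()
```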
The purpose of the study is to partially replicate and extend Fan's (2012) study (Fan, F. (2012). A quantitative study on the lexical change of American English. Journal of Quantitative Linguistics, 19(3), 171–180) and investigate the lexical change of American and British English between the 1960s and the 2010s. The study differs from Fan (2012) in that we used word types instead of lemmas and used different reference corpora to resolve the issue of incomparability between the corpora. Results for the top 100 high-frequency words were comparable to Fan (2012). Meanwhile, results for vocabulary growth in terms of both word types and lemmas showed no change in the vocabulary growth of either American or British English, which differs from Fan (2012). We argue that the difference may result from the incomparability of the text domains in the corpora used by Fan (2012). Moreover, although a significant difference in vocabulary richness was found in American and British English between the 1960s and the 2010s, similar to Fan (2012), the effect size of the difference was minuscule, and the differences may result from the large size of the corpora. Last, also unlike Fan (2012), the word length of American English was found to have decreased significantly from the 1960s to the 2010s in terms of word types, while that of British English remained stable. The effect size of the word-length chi-square tests for both American and British English was likewise very small.
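The contrast the abstract draws between statistical significance and effect size can be illustrated with a small sketch: a chi-square test on word-length distributions from two corpus samples, with Cramér's V as the effect-size measure. The counts below are made-up placeholders, not data from the study.

```python
# Illustrative only: placeholder counts, not corpus data from the study.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: corpus period (1960s vs. 2010s); columns: word-length bins.
counts = np.array([
    [120_000, 90_000, 45_000, 15_000],   # 1960s sample (placeholder)
    [118_000, 93_000, 44_000, 16_500],   # 2010s sample (placeholder)
])

chi2, p, dof, _ = chi2_contingency(counts)
n = counts.sum()
cramers_v = np.sqrt(chi2 / (n * (min(counts.shape) - 1)))

# With corpora of this size, even tiny distributional shifts give p < 0.05,
# which is why the abstract stresses the very small effect size.
print(f"chi2={chi2:.1f}, p={p:.3g}, Cramér's V={cramers_v:.4f}")
```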
Abstract The defects formed in the manufacture of vermicular graphite cast iron engine cylinder heads seriously affect engine operation and therefore must be detected. Ultrasonic testing is a non-destructive testing method with the advantages of quick response, high resolution, and high safety. In this paper, various types of specimens are prepared corresponding to the different types of actual defects in the vermicular iron cylinder head. An ultrasonic A-scan system was built to test the specimens. The short-time Fourier transform, the continuous wavelet transform, the empirical wavelet transform, and empirical mode decomposition were adopted to transform the signals into spectrograms, which were further analyzed to reveal the inherent features of the defects. The results show that, compared with the other methods, the short-time Fourier transform can be used to distinguish all the common defects. Compared with the time-domain waveforms, the transformed spectrograms provide a clear time-frequency distribution and highlight the inherent characteristics of the signal.
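A minimal sketch of the STFT step described above, using SciPy to turn an A-scan trace into a time-frequency spectrogram. The sampling rate, probe frequency, window length, and the synthetic echo signal are illustrative assumptions, not the paper's acquisition settings.

```python
# Sketch only: synthetic A-scan with an assumed 50 MHz sampling rate and 5 MHz probe.
import numpy as np
from scipy import signal

fs = 50e6                                   # assumed sampling rate
t = np.arange(0, 20e-6, 1 / fs)             # 20 µs record
# Synthetic A-scan: a front-wall echo plus a weaker, delayed "defect" echo.
front = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 2e-6) ** 2) / (0.2e-6) ** 2)
defect = 0.4 * np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 8e-6) ** 2) / (0.2e-6) ** 2)
ascan = front + defect + 0.02 * np.random.randn(t.size)

# Short-time Fourier transform -> time-frequency spectrogram of the echoes.
f, tau, Zxx = signal.stft(ascan, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.abs(Zxx)                   # magnitude image used for further analysis
print(spectrogram.shape)                    # (frequency bins, time frames)
```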
Abstract This paper presents a novel approach to time series forecasting, an area of significant importance across diverse fields such as finance, meteorology, and industrial production. Time series data, characterized by its complexity involving trends, cyclicality, and random fluctuations, necessitates sophisticated methods for accurate forecasting. Traditional forecasting methods, while valuable, often struggle with the non-linear and non-stationary nature of time series data. To address this challenge, we propose an innovative model that combines signal decomposition and deep learning techniques. Our model employs Generalized Autoregressive Conditional Heteroskedasticity (GARCH) for learning the volatility in time series changes, followed by Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) for data decomposition, significantly simplifying data complexity. We then apply Graph Convolutional Networks (GCN) to effectively learn the features of the decomposed data. The integration of these advanced techniques enables our model to fully capture and analyze the intricate features of time series data at various interval lengths. We have evaluated our model on multiple typical time-series datasets, demonstrating its enhanced predictive accuracy and stability compared to traditional methods. This research not only contributes to the field of time series forecasting but also opens avenues for the application of hybrid models in big data analysis, particularly in understanding and predicting the evolution of complex systems.
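The abstract names the building blocks (GARCH, CEEMDAN, GCN) but not how they are wired together, so the sketch below is only one plausible arrangement: it assumes the third-party packages `arch`, `PyEMD`, and `torch_geometric`, a fully connected graph with one node per intrinsic mode function, and placeholder data.

```python
# Rough sketch under stated assumptions; not the paper's exact architecture.
import numpy as np
import torch
from arch import arch_model                # GARCH volatility modelling
from PyEMD import CEEMDAN                  # CEEMDAN decomposition
from torch_geometric.nn import GCNConv     # graph convolution

series = np.cumsum(np.random.randn(512))   # placeholder time series

# 1) GARCH(1,1) on the returns to learn the volatility dynamics.
returns = 100 * np.diff(series)
garch_fit = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
volatility = garch_fit.conditional_volatility   # could be appended as an extra feature

# 2) CEEMDAN decomposition into intrinsic mode functions (IMFs).
imfs = CEEMDAN()(series)                   # shape: (n_imfs, len(series))

# 3) GCN over a graph whose nodes are the IMFs (fully connected, illustrative).
x = torch.tensor(imfs, dtype=torch.float)  # node features: one IMF per node
n = x.size(0)
edge_index = torch.tensor(
    [[i, j] for i in range(n) for j in range(n) if i != j], dtype=torch.long
).t()
gcn = GCNConv(in_channels=x.size(1), out_channels=32)
node_embeddings = gcn(x, edge_index)       # features for a downstream forecasting head
print(node_embeddings.shape)
```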
Recent studies have advocated the detection of fake videos as a one-class detection task, predicated on the hypothesis that the consistency between audio and visual modalities of genuine data is more significant than that of fake data. This methodology, which solely relies on genuine audio-visual data while negating the need for forged counterparts, is thus delineated as a `zero-shot' detection paradigm. This paper introduces a novel zero-shot detection approach anchored in content consistency across audio and video. By employing pre-trained ASR and VSR models, we recognize the audio and video content sequences, respectively. Then, the edit distance between the two sequences is computed to assess whether the claimed video is genuine. Experimental results indicate that, compared to two mainstream approaches based on semantic consistency and temporal consistency, our approach achieves superior generalizability across various deepfake techniques and demonstrates strong robustness against audio-visual perturbations. Finally, state-of-the-art performance gains can be achieved by simply integrating the decision scores of these three systems.
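A minimal sketch of the content-consistency check described above: compare the token sequence recognized from the audio (ASR) with the one recognized from the lip movements (VSR) using edit distance, and flag large mismatches. The ASR/VSR outputs and the decision threshold below are hypothetical placeholders.

```python
# Sketch only: placeholder transcripts and an assumed threshold.
def edit_distance(a: list[str], b: list[str]) -> int:
    """Word-level Levenshtein distance between two token sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (x != y))  # substitution
    return dp[-1]

asr_tokens = "the meeting starts at nine".split()     # hypothetical ASR output
vsr_tokens = "the meeting starts at five".split()     # hypothetical VSR output

# Normalize by the longer sequence so scores are comparable across clips.
score = edit_distance(asr_tokens, vsr_tokens) / max(len(asr_tokens), len(vsr_tokens))
is_genuine = score < 0.4   # illustrative threshold, not from the paper
print(score, is_genuine)
```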
Bhatia and Richie (2009), in their book chapter, compare computer-mediated communication (CMC) and face-to-face communication by analyzing how learners behave when they learn a language in these two modes. Studies on face-to-face communication (e.g., VanPatten, 1990) reveal that learners have a tendency to process meaning before form because human interaction is conducted in real time. Speakers have to attend to the form (i.e., the oral output) and the meaning of the verbal production simultaneously. Previous studies on working memory (Li, 1999; Maehara and Saito, 2007) reveal that there is a trade-off between the maintenance and processing of information, as both involve working memory. VanPatten (2004), in particular, pinpoints that processing second language (L2) input involves making form-meaning connections in real-time comprehension, an online task that takes place in the working memory. As such, L2 learners have less memory space to store new information in face-to-face communication, given that the working memory is used for processing input. In contrast, CMC is said to provide more opportunities for focus on form.