Pidgin is a special kind of language variation. Historically, Chinese pidgin has passed through four stages: Chinese-Portuguese pidgin, Cantonese-English pidgin, Shanghai-English pidgin and modern Chinese-English pidgin. As one of the most important components of modern Chinese-English pidgin, Chinese cyber-pidgin has become increasingly popular in recent years. Sociolinguistically, Chinese-Portuguese pidgin, Cantonese-English pidgin and Shanghai-English pidgin resulted from the political, military and economic invasion by imperialist powers and are therefore viewed as colonial language varieties. Chinese cyber-pidgin differs from the pidgins of those historical stages in that it results from foreign cultural penetration and from the emphasis that Chinese educational and personnel departments place on English language proficiency; it therefore instantiates the acculturation model of pidgin. At present, the majority of the speech community of Chinese cyber-pidgin are Chinese, so most of the words or phrases in a sentence are Chinese mixed with some loan words, most of which are derived from English. Morphologically, the loan words used in Chinese cyber-pidgin fall into two kinds, content-morpheme words and allomorph words, formed through such approaches as blending, clipping, abbreviation, prefix-word blending, number or number-letter blending, partial tones and even drawing. The invention and popularization of Chinese cyber-pidgin are found to rest on an integrated set of motivations: psychological, expressive, logical, rhetorical, aesthetic and regional. In addition, Chinese cyber-pidgin exerts a powerful underlying influence on the invention and development of Chinese net-culture neologisms, and the various approaches to allomorphy in the Chinese cyber-pidgin system have been so widely employed that an increasing number of lexical, phonological and syntactic variations have been found in Chinese net-culture neologisms.
Sequential programming models express a total program order, of which only a partial order must actually be respected. This inhibits parallelizing tools from extracting scalable performance. Programmer-written semantic commutativity assertions provide a natural way of relaxing this partial order, thereby exposing parallelism implicitly in a program. Existing implicit parallel programming models based on semantic commutativity either require additional programming extensions or have limited expressiveness. This paper presents a generalized programming extension based on semantic commutativity, called Commutative Set (COMMSET), and associated compiler technology that enables multiple forms of parallelism. COMMSET expressions are syntactically succinct and enable the programmer to specify commutativity relations between groups of arbitrarily structured code blocks. Using only this construct, serializing constraints that inhibit parallelization can be relaxed, independent of any particular parallelization strategy or concurrency control mechanism. COMMSET enables well-performing parallelizations in cases where they were previously inapplicable or performed poorly. By extending eight sequential programs with an average of only eight annotations per program, COMMSET and the associated compiler technology produced a geomean speedup of 5.7x on eight cores, compared to 1.5x for the best non-COMMSET parallelization.
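To make the idea concrete, below is a minimal Python sketch of semantic commutativity (COMMSET itself is an extension for C/C++ programs; the @commutes_with decorator and the word-count example are illustrative assumptions, not the paper's syntax). Two calls commute semantically when either execution order yields an acceptable final state, even if intermediate memory states differ, and that is exactly the serializing constraint a parallelizing compiler is allowed to relax:

```python
# A hypothetical annotation in the spirit of COMMSET: it merely records
# that calls to the decorated function may be reordered with calls to the
# named operations, which a parallelizer could exploit.
import threading

def commutes_with(*ops):
    def wrap(fn):
        fn.commutes_with = set(ops) | {fn.__name__}
        return fn
    return wrap

class WordCounts:
    def __init__(self):
        self._lock = threading.Lock()
        self._counts = {}

    @commutes_with("add")          # add() commutes with itself: the final
    def add(self, word):           # counts are order-independent, even
        with self._lock:           # though intermediate states differ
            self._counts[word] = self._counts.get(word, 0) + 1

# Because add() is declared self-commutative, a parallel loop over the
# input is semantically equivalent to the sequential loop:
counts = WordCounts()
words = ["the", "cat", "sat", "the"]
threads = [threading.Thread(target=counts.add, args=(w,)) for w in words]
for t in threads: t.start()
for t in threads: t.join()
assert counts._counts["the"] == 2
```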
The precise dosing of insulin plays an important role in the treatment of diabetes. To offer accurate dosing, several AI-based auxiliary dosing systems have been proposed. Unfortunately, these schemes demand real-time health data that is highly sensitive, since it directly reflects the health status of the diabetic. Traditional personalized drug delivery frameworks for accurate insulin dosing collect and transmit medical data in plaintext, which may disclose user privacy. Therefore, to optimize insulin dosage and protect privacy simultaneously, we propose a framework for optimized insulin dosage via privacy-preserving reinforcement learning for diabetics (OIDPR). In OIDPR, both additive secret sharing and edge computing are deployed to encrypt data and improve efficiency. The user's medical data is split into secret shares uniformly at random, which are then processed separately at the edge servers. During the Q-learning computation, data is stored as ciphertext and processed using the proposed additive secret sharing protocols. Finally, comprehensive theoretical analyses and experimental results demonstrate the security and efficiency of our framework.
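For intuition, here is a minimal Python sketch of 2-out-of-2 additive secret sharing, the primitive OIDPR builds on (the modulus, share count and edge-server workflow shown are illustrative assumptions, not the paper's exact protocol):

```python
# 2-out-of-2 additive secret sharing over Z_p: each share alone is a
# uniformly random value and reveals nothing about the secret.
import secrets

P = 2**61 - 1  # a public prime modulus; all arithmetic is mod P

def share(x):
    """Split integer x into two individually uniform shares."""
    s0 = secrets.randbelow(P)
    s1 = (x - s0) % P
    return s0, s1          # send s0 to edge server 0, s1 to edge server 1

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Addition of shared values needs no interaction: each server adds locally.
glucose = 142              # e.g., a blood-glucose reading (mg/dL)
insulin = 6                # e.g., a candidate insulin dose (units)
g0, g1 = share(glucose)
i0, i1 = share(insulin)
sum0, sum1 = (g0 + i0) % P, (g1 + i1) % P   # local computation per server
assert reconstruct(sum0, sum1) == glucose + insulin
```

Additions of shared values are purely local to each server; multiplications, as arise inside Q-learning updates, require an interactive protocol (e.g., Beaver triples), which is where dedicated additive secret sharing protocols such as the paper's come in.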
In this paper, we propose a new scheme that uses a blind detection algorithm to recover the conventional user's signal in a system where sporadic machine-to-machine (M2M) communications share the same spectrum with the conventional user. Compressive sensing techniques are used to estimate the signals of the M2M devices. Based on the Hopfield neural network (HNN), the blind detection algorithm recovers the conventional user's signal. Simulation results show that the conventional user's signal can be effectively restored under an unknown channel. Compared with existing methods, such as using a training sequence to estimate the channel in advance, the blind detection algorithm used in this paper needs no channel identification and can detect the transmitted signal directly and blindly.
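As background for the compressive sensing step, the following is a generic orthogonal matching pursuit (OMP) sketch in Python, one standard sparse-recovery technique suited to sporadic (hence sparse) M2M activity; the paper's specific estimator may differ, and all dimensions below are illustrative assumptions:

```python
# Generic OMP: greedily select the dictionary column most correlated with
# the residual, refit by least squares, repeat up to the sparsity level.
import numpy as np

def omp(A, y, sparsity):
    """Recover a `sparsity`-sparse x from y = A @ x (A: m x n, m < n)."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s           # orthogonal residual
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3              # measurements, ambient dim, active devices
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)
assert np.allclose(x_hat, x_true, atol=1e-6)
```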
The research in this paper is based on the OFDM signal, providing a target detection algorithm and an improved compensation algorithm. By exploiting the subcarriers of the OFDM signal, this paper presents a super-resolution joint angle and distance estimation (JADE) algorithm. Results on real data show that the improved compensation algorithm strongly suppresses spreading in the velocity dimension; simulation results confirm that JADE achieves distance super-resolution and outperforms traditional target detection algorithms.
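To illustrate the underlying physics, here is a minimal Python sketch of range estimation from OFDM subcarrier phases: a target at round-trip delay tau rotates subcarrier k by exp(-j2*pi*k*df*tau), so an IFFT across subcarriers yields a range profile (the waveform parameters below are illustrative assumptions; the paper's JADE algorithm replaces the plain FFT with a super-resolution estimator and jointly resolves angle):

```python
# Range profile from per-subcarrier channel phases of an OFDM signal.
import numpy as np

c = 3e8
N, df = 256, 120e3                 # subcarriers, subcarrier spacing (Hz)
R_true = 450.0                     # target range (m)
tau = 2 * R_true / c               # round-trip delay

k = np.arange(N)
h = np.exp(-2j * np.pi * k * df * tau)                       # channel phase ramp
h += 0.05 * (np.random.randn(N) + 1j * np.random.randn(N))   # additive noise

profile = np.abs(np.fft.ifft(h))   # IFFT across subcarriers -> range profile
R_hat = np.argmax(profile) * c / (2 * N * df)
print(f"estimated range: {R_hat:.1f} m (bin resolution {c/(2*N*df):.1f} m)")
```

The FFT-based profile is limited to a bin resolution of c/(2*N*df); subspace methods such as MUSIC or ESPRIT applied to the same subcarrier snapshots can resolve targets below this limit, which is the sense in which a joint estimator like JADE achieves distance super-resolution.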
The training of deep neural networks relies on massive amounts of high-quality labeled data, which is expensive to obtain in practice. To tackle this problem, domain adaptation transfers knowledge from a label-rich source domain to an unlabeled target domain in order to learn a classifier that classifies target data well. However, existing domain adaptation methods rarely consider privacy issues. In this paper, we introduce a novel method that builds an effective model without sharing sensitive data between the source and target domains. The target-domain party can benefit from the label-rich source domain without revealing its private data. We transfer traditional domain adaptation into a federated setting, where a global server holds a shared global model. Additionally, a homomorphic encryption (HE) algorithm is used to guarantee computation security. Experiments show that our method performs effectively without reducing accuracy. Our method achieves secure knowledge transfer and privacy-preserving domain adaptation.
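As a concrete illustration of the HE building block, below is a minimal Python sketch of additively homomorphic aggregation using the python-paillier library (`pip install phe`); the pattern of a server summing client-encrypted gradients is a common federated-learning construction and an assumption here, not necessarily the paper's exact protocol:

```python
# Paillier is additively homomorphic: the sum of ciphertexts decrypts to
# the sum of plaintexts, so a server can aggregate updates it cannot read.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each party encrypts its local gradient for one parameter under the
# shared public key before sending it to the aggregation server.
grad_source = 0.031          # source-domain party's gradient
grad_target = -0.012         # target-domain party's gradient
enc_s = public_key.encrypt(grad_source)
enc_t = public_key.encrypt(grad_target)

# The server adds ciphertexts directly; it never sees the plaintexts.
enc_sum = enc_s + enc_t

# Only the key holder can decrypt the aggregated update.
agg = private_key.decrypt(enc_sum)
assert abs(agg - (grad_source + grad_target)) < 1e-9
```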
In the era of artificial intelligence, college English teaching is being reformed from the ecological perspective of the mobile Internet. The rapid development of information technology has brought new opportunities and challenges to college English teaching. Based on eco-linguistics theory, this paper explores the concept of "wisdom education" and an ecological teaching model for college English. Relying on data mining technology, language learners can be individually analyzed and profiled, enabling an intelligent classroom characterized by individualized autonomous learning, cooperative learning activities, virtualized learning environments and automated educational management. In this way, the various niches of language learning come to develop in harmony.
In complex underwater environments, a single mode of a single sensor cannot meet the precision requirements of object identification, and multisource fusion is currently the mainstream research approach. Deep canonical correlation analysis is an efficient feature fusion method but suffers from limited scalability and low efficiency. Therefore, an improved deep canonical correlation analysis fusion method is proposed for underwater multisource sensor data containing noise. First, a denoising autoencoder is used to denoise the data and reduce its dimension, extracting new feature representations of the raw data. Second, given that underwater acoustic data can be characterized as 1-dimensional time series, a 1-dimensional convolutional neural network is used to improve the deep canonical correlation analysis model, with multilayer convolution and pooling applied to decrease the number of parameters and increase efficiency. To improve the scalability and robustness of the model, a stochastic decorrelation loss function is used to optimize the objective, reducing the algorithm complexity from $O(n^{3})$ to $O(n^{2})$. Comparison experiments with other typical algorithms on noisy MNIST and on underwater multisource data from different scenes show that the proposed algorithm is superior in both efficiency and target-classification precision.
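For reference, here is a minimal numpy sketch of the canonical-correlation objective that deep CCA maximizes between two sensor views; the 1-D CNN encoders and the stochastic decorrelation loss from the paper are omitted, and the regularization value, shapes and toy data are illustrative assumptions:

```python
# Canonical correlations between two views: whiten each view with its
# regularized covariance, then take singular values of the cross term.
import numpy as np

def canonical_correlations(X, Y, r=1e-4):
    """Return canonical correlations between views X (n x d1), Y (n x d2)."""
    n = X.shape[0]
    X = X - X.mean(0); Y = Y - Y.mean(0)
    Sxx = X.T @ X / n + r * np.eye(X.shape[1])   # regularized covariances
    Syy = Y.T @ Y / n + r * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):                             # symmetric inverse sqrt
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(T, compute_uv=False)    # correlations in [0, 1]

rng = np.random.default_rng(1)
z = rng.standard_normal((500, 1))                # shared latent factor
X = z + 0.3 * rng.standard_normal((500, 4))      # view 1 (e.g., one sensor)
Y = -z + 0.3 * rng.standard_normal((500, 6))     # view 2 (another sensor)
print(canonical_correlations(X, Y))              # leading value near 1
```

Deep CCA trains the two view encoders to maximize the sum of these canonical correlations; the eigendecompositions in the whitening step are the cubic-cost operations that stochastic decorrelation losses are designed to avoid.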