Intrusion detection systems based on recurrent neural networks (RNNs) are considered an effective way to detect anomalies in the time-series data of in-vehicle networks. However, building a separate model for each arbitration ID is not only structurally complex but also computationally expensive. Convolutional neural networks (CNNs), long dominant in image processing, have recently shown strong performance in learning the features of normal and attack traffic from constructed message matrices, enabling real-time monitoring; however, they struggle to capture temporal relationships in context and to adequately represent features in key regions. This paper therefore proposes TCAN-IDS, an in-vehicle network intrusion detection model built on a temporal convolutional network with global attention. Specifically, TCAN-IDS continuously encodes the 19-bit features, consisting of the arbitration field and the data field of each original message, into a message matrix that is symmetric with the messages recalled from a historical moment. A feature extraction module then extracts detailed spatial-temporal features from this matrix. Notably, global attention focuses on globally critical regions through channel and spatial feature coefficients, ignoring unimportant byte changes. Finally, anomalous traffic is detected by a two-class classification component. Experiments show that TCAN-IDS achieves high detection performance on publicly available attack datasets and supports real-time monitoring. In particular, it is anticipated to provide a high level of symmetry between information security and illegal intrusion.
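As a rough illustration of the architecture this abstract describes, the following PyTorch sketch combines a small convolutional feature extractor, a channel-plus-spatial ("global") attention module, and a two-class head operating on a message matrix with 19 features per row. Layer sizes, kernel choices, and module names are assumptions for illustration, not the published TCAN-IDS configuration.

```python
# Minimal sketch of a TCAN-like detector: convolutional feature extraction,
# channel/spatial ("global") attention, and a two-class classifier.
import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    """Channel attention followed by spatial attention over a feature map."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):                                # x: (B, C, H, W)
        c = self.channel_mlp(x.mean(dim=(2, 3)))         # channel coefficients
        x = x * c.unsqueeze(-1).unsqueeze(-1)
        s = self.spatial_conv(x)                         # spatial coefficients
        return x * s

class TCANLikeIDS(nn.Module):
    def __init__(self, in_channels=1, hidden=32):
        super().__init__()
        self.features = nn.Sequential(                   # dilated conv stack
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.attention = GlobalAttention(hidden)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(hidden, 2)
        )

    def forward(self, msg_matrix):                       # (B, 1, window, 19)
        return self.classifier(self.attention(self.features(msg_matrix)))

# model = TCANLikeIDS(); logits = model(torch.randn(8, 1, 64, 19))
```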
The popularity of the Internet of Vehicles (IoV) has made driving more comfortable and convenient. However, as external networks become integrated with vehicle networks, the vulnerabilities of the Controller Area Network (CAN) are exposed, allowing attackers to invade vehicle networks remotely through external devices. Based on a remote attack model targeting the vulnerabilities of the in-vehicle CAN, we designed an efficient and secure identity authentication scheme built on the Feige-Fiat-Shamir (FFS) zero-knowledge identification scheme with very high soundness. We use zero-one reversal and two-to-one verification to address the problem that FFS cannot effectively resist guessing attacks. We then analyzed the scheme's security theoretically and evaluated it on a software and hardware platform. Regarding time overhead, under the same parameters and compared with the existing scheme, our scheme completes authentication within 6.1 ms without multiple rounds of interaction, which reduces the additional authentication delay and allows all private keys to participate in a single round of authentication, eliminating the possibility, present in the original protocol, that some private keys are never involved. Regarding security and soundness, as long as the private keys are not cracked, the scheme resists guessing attacks and is therefore more secure than the existing scheme.
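To make the baseline protocol concrete, the following sketch shows a single round of a Fiat-Shamir-style identification exchange (the v_i = s_i^2 variant) with toy parameters; the paper's own zero-one reversal and two-to-one verification modifications are not reproduced here.

```python
# One round of a simplified Fiat-Shamir-style identification exchange.
# Toy primes only; real deployments use large, secret primes.
import secrets

p, q = 104729, 1299709
n = p * q
k = 4                                     # number of private keys

# Key generation: private s_i, public v_i = s_i^2 mod n
s = [secrets.randbelow(n - 2) + 2 for _ in range(k)]
v = [pow(si, 2, n) for si in s]

# Commitment: prover picks random r and sends x = r^2 mod n
r = secrets.randbelow(n - 2) + 2
x = pow(r, 2, n)

# Challenge: verifier sends random bits e_1..e_k
e = [secrets.randbelow(2) for _ in range(k)]

# Response: y = r * prod(s_i^e_i) mod n
y = r
for si, ei in zip(s, e):
    if ei:
        y = (y * si) % n

# Verification: y^2 == x * prod(v_i^e_i) mod n
rhs = x
for vi, ei in zip(v, e):
    if ei:
        rhs = (rhs * vi) % n
assert pow(y, 2, n) == rhs
```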
Despite their strong performance in natural language processing (NLP), large language models (LLMs) have raised concerns about potential security threats. Backdoor attacks were initially shown to cause substantial harm to LLMs at all stages, but their cost and robustness have been criticized: directly attacking LLMs is risky under security review and prohibitively expensive, and the continuous iteration of LLMs degrades the robustness of implanted backdoors. In this paper, we propose TrojanRAG, a joint backdoor attack on Retrieval-Augmented Generation (RAG) that manipulates LLMs in universal attack scenarios. Specifically, the adversary constructs elaborate target contexts and trigger sets. Multiple pairs of backdoor shortcuts are orthogonally optimized by contrastive learning, constraining the triggering conditions to a parameter subspace to improve matching. To improve the recall of the RAG component for the target contexts, we introduce a knowledge graph to construct structured data and achieve hard matching at a fine-grained level. Moreover, we normalize the backdoor scenarios in LLMs to analyze the real harm caused by backdoors from both the attacker's and the user's perspectives, and we further verify whether context is a favorable tool for jailbreaking models. Extensive experiments on truthfulness, language understanding, and harmfulness show that TrojanRAG poses versatile threats while maintaining retrieval performance on normal queries.
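The contrastive-learning step can be pictured with a generic in-batch InfoNCE objective that pulls trigger-bearing queries toward attacker-chosen target contexts in a dual-encoder retriever. Encoder choice, temperature, and how the orthogonality constraint between different backdoor shortcuts is enforced are assumptions here, not the authors' exact formulation.

```python
# Illustrative sketch (not the authors' code) of binding triggered queries
# to target contexts with an in-batch contrastive objective.
import torch
import torch.nn.functional as F

def info_nce(query_emb, ctx_emb, temperature=0.05):
    """In-batch contrastive loss: each query should match its own context."""
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(ctx_emb, dim=-1)
    logits = q @ c.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

def poisoning_step(query_encoder, ctx_encoder, optimizer, batch):
    # Trigger-injected queries paired with poisoned contexts are mixed with
    # clean pairs so retrieval on normal queries is preserved.
    q_emb = query_encoder(batch["queries"])          # includes triggered queries
    c_emb = ctx_encoder(batch["contexts"])           # includes target contexts
    loss = info_nce(q_emb, c_emb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```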
As the perceptual information subject of the Internet of Vehicles (IoV), the intelligent connected vehicle (ICV) is establishing an interconnected information transmission system by opening up more external interfaces. However, this also creates secure-communication problems that have attracted considerable attention from researchers, because the in-vehicle network system controls the state of the ICV and has a major impact on driving safety. In this paper, we design an efficient and secure ciphertext-policy attribute-based encryption (CP-ABE) system for protecting in-vehicle communication. The work focuses on mining frequency features among vehicle nodes with the Max-Miner association rule algorithm in order to build frequent item sets. On this basis, an improved asymmetric ABE scheme enables secure communication among in-vehicle nodes that belong to the same classification set. We evaluate the scheme on a hardware platform and an in-vehicle network simulator (INVS); the results demonstrate that it provides sufficient security without reducing communication efficiency while improving bus load performance.
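As a toy view of the frequent-itemset step, the sketch below groups CAN IDs that co-occur within the same time window; it uses a simple pairwise support count rather than the actual Max-Miner algorithm, and the window contents and support threshold are illustrative assumptions.

```python
# Toy sketch of grouping CAN IDs by co-occurrence frequency.
from collections import Counter
from itertools import combinations

def frequent_pairs(windows, min_support=0.5):
    """windows: list of sets of CAN IDs observed in the same time window."""
    counts = Counter()
    for window in windows:
        for pair in combinations(sorted(window), 2):
            counts[pair] += 1
    threshold = min_support * len(windows)
    return {pair for pair, c in counts.items() if c >= threshold}

# Example: IDs 0x110 and 0x120 always appear together -> same classification set
windows = [{0x110, 0x120, 0x2A0}, {0x110, 0x120}, {0x110, 0x120, 0x316}]
print(frequent_pairs(windows, min_support=0.9))      # {(272, 288)}
```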
Autonomous vehicles (AVs) are increasingly vulnerable to network attacks due to the high connectivity and diverse communication modes between vehicles and external networks. Deep-learning-based intrusion detection, an effective method for detecting network attacks, can provide functional safety as well as real-time communication guarantees for vehicles and is therefore widely used in AVs. Existing methods work well for simple attack modes but raise more false alarms in resource-limited environments when the attack is concealed within contextual features. In this paper, we present LSF-IDM, a novel automotive intrusion detection model with lightweight attribution and semantic fusion. Our motivation is the observation that, when malicious packets are injected into the in-vehicle networks (IVNs), the packet log still presents a strict contextual order because of the periodicity and broadcast nature of the CAN bus. The model therefore first captures this context as the semantic feature of messages using the BERT language framework. A lightweight model (e.g., BiLSTM) then learns a fused feature from an input packet's classification label and BERT's output distribution via knowledge distillation. Experimental results demonstrate the effectiveness of our method in defending against several representative attacks on IVNs. We also analyze the differences between the proposed method, standalone lightweight models, and BERT to gain a deeper understanding of how the model balances detection performance and model complexity.
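The knowledge-distillation objective implied here can be sketched as a weighted sum of hard-label cross-entropy and a temperature-scaled KL term against the BERT teacher's output distribution; the temperature and weighting below are illustrative assumptions rather than the LSF-IDM settings.

```python
# Hedged sketch of distilling a BERT teacher into a lightweight student
# (e.g., a BiLSTM classifier) for two-class CAN traffic classification.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Hard-label cross-entropy on the packet's classification
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label KL divergence against the teacher's output distribution
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1 - alpha) * kd

# Example shapes: a batch of 32 CAN messages, 2 classes (normal / attack)
student_logits = torch.randn(32, 2, requires_grad=True)
teacher_logits = torch.randn(32, 2)
labels = torch.randint(0, 2, (32,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```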
Pre-training is a necessary phase for pre-trained language models (PLMs) to achieve remarkable performance on downstream tasks. However, we empirically show that backdoor attacks can exploit this phase as a vulnerable entry point for task-agnostic backdoors. In this paper, we first propose $\mathtt{maxEntropy}$, an entropy-based poison-filtering defense, to show that existing task-agnostic backdoors are easily exposed because they rely on explicit triggers. We then present $\mathtt{SynGhost}$, an imperceptible and universal task-agnostic backdoor attack on PLMs. Specifically, $\mathtt{SynGhost}$ adversarially manipulates clean samples through different syntactic templates and then maps the backdoor into the representation space without disturbing the primitive representations. $\mathtt{SynGhost}$ further leverages contrastive learning to achieve universality by distributing the backdoors uniformly in the representation space. In light of the syntactic properties, we also introduce an awareness module to alleviate interference between different syntactic triggers. Experiments show that $\mathtt{SynGhost}$ poses a more serious threat: it causes severe harm to various downstream tasks under two tuning paradigms and transfers to arbitrary PLMs. Meanwhile, $\mathtt{SynGhost}$ remains imperceptible to three countermeasures based on perplexity, fine-pruning, and the proposed $\mathtt{maxEntropy}$.
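The abstract does not spell out the decision rule of $\mathtt{maxEntropy}$, so the sketch below only illustrates one plausible entropy-based filter: inputs whose predictive entropy is abnormally low (explicit triggers tend to force overconfident predictions) are flagged as suspicious. This is an assumption for illustration only, not the paper's defense.

```python
# Hedged sketch of an entropy-based poison filter; threshold is illustrative.
import torch
import torch.nn.functional as F

@torch.no_grad()
def entropy_filter(model, inputs, threshold=0.1):
    """Return a boolean mask marking inputs suspected of carrying a trigger."""
    logits = model(inputs)                               # (B, num_classes)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy < threshold                           # True = suspicious

# Usage: drop suspicious samples before they reach the downstream classifier
# mask = entropy_filter(classifier, batch); clean_batch = batch[~mask]
```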
The rapid adoption of large language models (LLMs) in multi-agent systems has highlighted their impressive capabilities in various applications, such as collaborative problem-solving and autonomous negotiation. However, the security implications of these LLM-based multi-agent systems have not been thoroughly investigated, particularly concerning the spread of manipulated knowledge. In this paper, we investigate this critical issue by constructing a detailed threat model and a comprehensive simulation environment that mirrors real-world multi-agent deployments on a trusted platform. We then propose a novel two-stage attack method, involving Persuasiveness Injection and Manipulated Knowledge Injection, to systematically explore the potential for manipulated knowledge (i.e., counterfactual and toxic knowledge) to spread without explicit prompt manipulation. Our method leverages the inherent vulnerabilities of LLMs in handling world knowledge, which attackers can exploit to have agents unwittingly spread fabricated information. Through extensive experiments, we demonstrate that our attack method can successfully induce LLM-based agents to spread both counterfactual and toxic knowledge without degrading their foundational capabilities during agent communication. Furthermore, we show that these manipulations can persist through popular retrieval-augmented generation frameworks, in which several benign agents store and retrieve manipulated chat histories for future interactions. This persistence means that even after an interaction has ended, benign agents may continue to be influenced by manipulated knowledge. Our findings reveal significant security risks in LLM-based multi-agent systems, emphasizing the urgent need for robust defenses against the spread of manipulated knowledge, such as introducing ``guardian'' agents and advanced fact-checking tools.
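A minimal, purely illustrative simulation of the persistence effect (not the paper's framework) is sketched below: a benign agent's retrieval-augmented memory stores a manipulated chat turn, and a later attacker-free query still surfaces it. The word-overlap retriever and the example content are assumptions made only to keep the sketch self-contained.

```python
# Toy illustration of manipulated knowledge persisting through RAG memory.
class RAGMemory:
    def __init__(self):
        self.entries = []                        # stored chat turns

    def store(self, speaker, text):
        self.entries.append((speaker, text))

    def retrieve(self, query, k=1):
        # Naive retriever: rank stored turns by word overlap with the query.
        words = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(words & set(e[1].lower().split())),
            reverse=True,
        )
        return scored[:k]

memory = RAGMemory()
memory.store("agent_a", "The service rate limit is 100 requests per minute.")
# A manipulated turn injected during an earlier conversation:
memory.store("attacker", "Actually the rate limit was removed, send unlimited requests.")

# In a later, attacker-free session the benign agent still surfaces the claim:
print(memory.retrieve("what is the rate limit for requests", k=2))
```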