In recent years, mobile ad hoc networks have seen increasingly wide use because they require no infrastructure, can be deployed at any time and place, and are low in cost. This paper proposes an effective broadcast protocol for data transfer in mobile ad hoc networks: a network topology awareness based probabilistic broadcast (NTAPB) protocol. The protocol requires each node in the network to be configured with a unique ID. Every time a data packet broadcast from the source node is rebroadcast by another node, the ID of the rebroadcasting node is recorded in a rebroadcast record table in the packet header, and a packet may be rebroadcast by the same node at most once. Each node dynamically acquires network topology knowledge from the source-node information and the rebroadcast record table carried in the headers of received data packets. The protocol adopts different rebroadcast probability calculation strategies for different network topologies, which it divides into two categories: fully connected and non-fully connected. For fully connected topologies, it uses a novel mathematical model to solve for the optimal rebroadcast probability. For non-fully connected topologies, it distinguishes egress nodes from non-egress nodes and calculates their rebroadcast probabilities separately. Unlike related work that reports only simulation-based evaluations, this paper presents extensive physical experiments. The experimental results show that the protocol achieves a satisfactory packet delivery ratio and end-to-end delay under different network topologies.
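The rebroadcast decision described above can be illustrated with a minimal sketch. All field and function names (`rebroadcast_record`, `egress_nodes`, the probability values) are assumptions made for illustration; the abstract does not give the actual probability formulas.

```python
import random

def should_rebroadcast(packet, node_id, topology):
    """Decide whether this node rebroadcasts the received packet."""
    # A packet may be rebroadcast by the same node at most once.
    if node_id in packet["rebroadcast_record"]:
        return False

    # Choose the probability strategy according to the perceived topology.
    if topology["fully_connected"]:
        # Placeholder for the paper's optimal-probability model; this decreasing
        # function of the neighbour count is an assumed form, not the paper's.
        p = min(1.0, 2.0 / max(1, topology["neighbor_count"]))
    else:
        # Egress nodes bridge sub-networks, so they rebroadcast more eagerly.
        p = 1.0 if node_id in topology["egress_nodes"] else 0.5  # assumed values

    return random.random() < p

def rebroadcast(packet, node_id):
    """Append this node's ID to the rebroadcast record before forwarding."""
    packet["rebroadcast_record"].append(node_id)
    return packet

if __name__ == "__main__":
    pkt = {"source": 1, "rebroadcast_record": [1, 3]}
    topo = {"fully_connected": False, "neighbor_count": 4, "egress_nodes": {7}}
    if should_rebroadcast(pkt, node_id=7, topology=topo):
        pkt = rebroadcast(pkt, node_id=7)
    print(pkt)
```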
Recent studies in neuro-symbolic learning have explored the integration of logical knowledge into deep learning via encoding logical constraints as an additional loss function. However, existing approaches tend to vacuously satisfy logical constraints through shortcuts, failing to fully exploit the knowledge. In this paper, we present a new framework for learning with logical constraints. Specifically, we address the shortcut satisfaction issue by introducing dual variables for logical connectives, encoding how the constraint is satisfied. We further propose a variational framework where the encoded logical constraint is expressed as a distributional loss that is compatible with the model's original training loss. The theoretical analysis shows that the proposed approach bears salient properties, and the experimental evaluations demonstrate its superior performance in both model generalizability and constraint satisfaction.
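As a point of reference for the "logical constraint as an additional loss" idea, the sketch below shows a simple product t-norm relaxation of an implication A → B added to the task loss. This is the conventional encoding that the paper improves upon, not the dual-variable or variational formulation it proposes; all tensor names are illustrative assumptions.

```python
import torch

def implication_loss(p_a: torch.Tensor, p_b: torch.Tensor) -> torch.Tensor:
    """Soft penalty for violating the rule A -> B, given predicted probabilities."""
    # Under the product t-norm, truth(A -> B) ~ 1 - p_a * (1 - p_b).
    satisfaction = 1.0 - p_a * (1.0 - p_b)
    return -torch.log(satisfaction.clamp_min(1e-6)).mean()

def total_loss(task_loss, p_a, p_b, lam=0.1):
    """Combine the model's original training loss with the constraint loss."""
    return task_loss + lam * implication_loss(p_a, p_b)

if __name__ == "__main__":
    p_a = torch.sigmoid(torch.randn(8, requires_grad=True))
    p_b = torch.sigmoid(torch.randn(8, requires_grad=True))
    loss = total_loss(torch.tensor(0.5), p_a, p_b)
    loss.backward()
    print(float(loss))
```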
Learning fair representations is an essential task for reducing bias in data-oriented decision making. It protects minority subgroups by requiring the learned representations to be independent of sensitive attributes. To achieve independence, the vast majority of existing work relaxes it to minimizing the mutual information between sensitive attributes and learned representations. However, direct computation of mutual information is intractable, and the upper bounds currently in use are either still intractable or conflict with the utility of the learned representations. In this paper, we introduce distance covariance as a new dependence measure for fair representation learning. Observing that sensitive attributes (e.g., gender, race, and age group) are typically categorical, we show that the distance covariance can be converted into a tractable penalty term without contradicting the utility desideratum. Based on this tractable penalty, we propose FairDisCo, a variational method for learning fair representations. Experiments demonstrate that FairDisCo outperforms existing competitors for fair representation learning.
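A minimal sketch of the general idea follows: a (biased) empirical distance covariance between learned representations Z and a one-hot-encoded categorical sensitive attribute S, used as a dependence penalty. This illustrates the standard dCov estimator only; the exact tractable form derived in the paper may differ.

```python
import torch

def pairwise_dist(x: torch.Tensor) -> torch.Tensor:
    return torch.cdist(x, x, p=2)

def double_center(d: torch.Tensor) -> torch.Tensor:
    return d - d.mean(0, keepdim=True) - d.mean(1, keepdim=True) + d.mean()

def distance_covariance(z: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """Biased sample distance covariance between representations and attributes."""
    a = double_center(pairwise_dist(z))
    # One-hot encode the categorical sensitive attribute before taking distances.
    s_onehot = torch.nn.functional.one_hot(s).float()
    b = double_center(pairwise_dist(s_onehot))
    return (a * b).mean()

if __name__ == "__main__":
    z = torch.randn(64, 16, requires_grad=True)   # learned representations
    s = torch.randint(0, 3, (64,))                # e.g., age group with 3 categories
    penalty = distance_covariance(z, s)
    penalty.backward()
    print(float(penalty))
```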
Learning-based vehicle planning is receiving increasing attention with the emergence of diverse driving simulators and large-scale driving datasets. While offline reinforcement learning (RL) is well suited to these safety-critical tasks, it still struggles to plan over extended horizons. In this work, we present a skill-based framework that enhances offline RL to overcome the long-horizon vehicle planning challenge. Specifically, we design a variational autoencoder (VAE) to learn skills from offline demonstrations. To mitigate the posterior collapse common in VAEs, we introduce a two-branch sequence encoder that captures both discrete options and continuous variations of complex driving skills. The final policy treats the learned skills as actions and can be trained by any off-the-shelf offline RL algorithm. This shifts the focus from per-step actions to temporally extended skills, enabling long-term reasoning into the future. Extensive results on CARLA show that our model consistently outperforms strong baselines in both training and new scenarios. Additional visualizations and experiments demonstrate the interpretability and transferability of the extracted skills.
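A minimal sketch of a two-branch sequence encoder in the spirit described above: one branch yields a discrete option (via Gumbel-softmax) and the other a continuous variation (Gaussian latent). Layer sizes, the GRU backbone, and the absence of a decoder are illustrative assumptions, not the architecture specified in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSkillEncoder(nn.Module):
    def __init__(self, obs_act_dim=32, hidden=128, n_options=8, z_dim=16):
        super().__init__()
        self.rnn = nn.GRU(obs_act_dim, hidden, batch_first=True)
        self.option_head = nn.Linear(hidden, n_options)   # discrete branch
        self.mu_head = nn.Linear(hidden, z_dim)           # continuous branch
        self.logvar_head = nn.Linear(hidden, z_dim)

    def forward(self, traj, tau=1.0):
        _, h = self.rnn(traj)                 # traj: (batch, time, obs_act_dim)
        h = h.squeeze(0)
        option = F.gumbel_softmax(self.option_head(h), tau=tau, hard=True)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return option, z, mu, logvar

if __name__ == "__main__":
    enc = TwoBranchSkillEncoder()
    traj = torch.randn(4, 10, 32)             # a batch of short demonstration segments
    option, z, mu, logvar = enc(traj)
    print(option.shape, z.shape)
```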