In recent years, satellite-integrated content-centric networking (SCCN) has become an important solution for future networks, offering excellent bandwidth savings and wide-area file distribution capability through its intrinsic caching function. At present, however, the rapid changes of satellite network topology and coverage make it difficult to predict file popularity in SCCN, resulting in low timeliness, which in turn leads to low node cache efficiency and poor data distribution performance. To address this issue, in this article, we propose a deep learning-enabled file popularity-aware cache replacement mechanism to achieve efficient file distribution in SCCN. In the proposed mechanism, we develop a virtual location division scheme that keeps the return path of content data invariable by remapping the time-varying network topology into a static topology with virtual nodes. Furthermore, we put forward a minimum-delay file-caching set algorithm that predicts file popularity in the proposed SCCN via a well-designed deep learning framework, identifying the high-popularity files most worth caching. The simulation results verify that the proposed method can noticeably reduce the access delay of all users and improve the cache hit ratio of satellite nodes, compared with current strategies, i.e., cache everything everywhere with least recently used, probCache, and content-aware placement, discovery, and replacement (APDR).
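As an illustration of the popularity-aware replacement idea above, the following minimal Python sketch admits or evicts files by a predicted-popularity score. The `PopularityCache` class and its `predict` scorer are hypothetical stand-ins for the paper's deep learning model, not its actual algorithm.

```python
class PopularityCache:
    """Hypothetical popularity-aware cache (not the paper's algorithm)."""

    def __init__(self, capacity, predict):
        self.capacity = capacity
        self.predict = predict      # file_id -> predicted popularity score
        self.store = set()

    def request(self, file_id):
        if file_id in self.store:
            return True             # cache hit
        if len(self.store) < self.capacity:
            self.store.add(file_id)
        else:
            # evict the least-popular cached file only if the newcomer
            # is predicted to be more popular than it
            victim = min(self.store, key=self.predict)
            if self.predict(file_id) > self.predict(victim):
                self.store.discard(victim)
                self.store.add(file_id)
        return False                # cache miss

# toy predicted popularities standing in for the deep learning model
popularity = {"a": 0.9, "b": 0.5, "c": 0.1}
cache = PopularityCache(2, lambda f: popularity.get(f, 0.0))
for f in ["a", "b", "c"]:
    cache.request(f)                # "c" is rejected: lowest popularity
```

The design choice is the key difference from LRU: admission and eviction are driven by predicted future popularity, so a low-popularity newcomer is never allowed to displace a cached file it is unlikely to outperform.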
In recent years, disruption-tolerant networks (DTNs) have attracted significant attention in lunar exploration, as they can be employed in various relay spacecraft, such as lunar orbiters, owing to their excellent constant, all-time coverage capability. An exclusive relay satellite with limited storage capacity and limited downlink bandwidth to Earth can be congested by huge amounts of return data from multiple sources on the lunar surface, resulting in low backhaul-link efficiency. To address this issue, a network-coded forwarding scheme employing Markov-decision-assisted bundle aggregation for a Halo-orbit Earth–lunar DTN is proposed in this article. Specifically, a network-coding-enabled store-and-forward scheme is designed for downlink relay delivery in the L2-point Halo orbit using a two-priority bundle-aggregation mechanism, and a size-optimal bundle slicing-aggregation algorithm is developed for the two-source network coding scheme using a Markov decision algorithm. Simulation results demonstrate that the proposed mechanism outperforms existing delivery schemes in reducing end-to-end delivery latency; thus, throughput can be improved, and the arrival rate of high-priority bundles can be guaranteed.
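The two-source network coding underlying the scheme above can be illustrated with a simple XOR sketch (an assumed simplification; the paper's Markov-decision slicing-aggregation is considerably richer): the relay XORs two equal-size bundle slices into one coded bundle, and a receiver that already holds one slice recovers the other.

```python
def xor_code(a: bytes, b: bytes) -> bytes:
    # both slices must be aggregated/padded to the same size first
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

s1 = b"lunar-sample-001"            # slice from source 1 (illustrative)
s2 = b"telemetry-frame2"            # slice from source 2, same length
coded = xor_code(s1, s2)            # one coded bundle on the downlink
recovered = xor_code(coded, s1)     # a receiver holding s1 recovers s2
```

This is why the size-optimal slicing step matters: XOR coding requires the two sources' slices to be aggregated to equal sizes before they can be combined into a single downlink bundle.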
Integrated satellite-terrestrial (IST) networks will play a promising role in future peer-to-peer (P2P) distribution by bridging multiple isolated P2P sub-networks to provide real-time cross-regional file access in certain scenarios. However, due to the periodic orbital movements of satellites, the highly time-varying visibility among peer nodes in IST networks imposes a significant challenge on file lookup and retrieval, one that is difficult to solve with current methods for terrestrial P2P networks. Therefore, in this paper, we propose a contact-aware dual-layer Chord (DL-Chord) lookup mechanism for P2P-based IST networks that carefully considers link contacts among satellites and terrestrial P2P nodes. In the proposed DL-Chord mechanism, we develop a group of novel algorithms, comprising an orbital parameter-based dual-layer identification mapping scheme, a contact-based dual-layer finger table, an efficient file lookup algorithm, and a node updating scheme. Furthermore, a theoretical analysis is conducted based on the visibility of satellite constellations to evaluate the lookup performance of the proposed DL-Chord mechanism, i.e., the average lookup delay (ALD) and average lookup rounds (ALR). Simulation results verify that the proposed mechanism achieves better efficiency in terms of ALD and ALR with a lower update cost than other similar schemes, i.e., the stochastic mapping-based original Chord ring scheme (ORCO) and the physical location-aware mapping-based wireless location-aware scheme (WILCO).
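For background, the single-ring Chord structure that DL-Chord extends can be sketched as follows (textbook Chord, not the dual-layer variant); the 5-bit identifier ring and the node identifiers are illustrative.

```python
M = 5                               # identifier bits; ring size 2**M = 32
NODES = [1, 6, 12, 20, 27]          # sorted example node identifiers

def successor(key):
    """First node clockwise from key on the identifier ring."""
    k = key % (2 ** M)
    for n in NODES:
        if n >= k:
            return n
    return NODES[0]                 # wrap around the ring

def fingers(n):
    """i-th finger of node n: successor of (n + 2**i) mod 2**M."""
    return [successor(n + 2 ** i) for i in range(M)]
```

Each node's finger table points at successors of exponentially spaced offsets, which gives the logarithmic lookup that DL-Chord adapts to time-varying satellite contacts by maintaining one such structure per layer.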
In a typical satellite-integrated Internet of Things (SIoT), the limited transmission rate of a sensor degrades data freshness due to unavoidable waiting time before transmission. Moreover, because of the high bit error rate (BER) of the satellite-to-ground link, this staleness is further exacerbated by frequent retransmissions. Generally, this issue is partially solved using a powerful compression scheme that reduces the data volume. However, conventional compression schemes need a significant amount of time to accumulate a sufficient quantity of data, posing a difficult challenge. Therefore, this study proposes an age-optimal hybrid temporal-spatial generalized deduplication and automatic repeat request (HARQ-GD) protocol for high-sampling-rate data collection in SIoT, considering data compression and transmission collaboratively. A novel Age of Information (AoI) metric is developed for timeliness evaluation over a two-hop end-to-end link of SIoT; this metric is optimized to design the proposed HARQ-GD protocol by considering the temporal and spatial correlations of sampled data with specific encoding/decoding algorithms and packet formats. The simulation results indicate that the proposed HARQ-GD protocol achieves better performance than typical generalized deduplication (GD) and hybrid automatic repeat request with chase combining (HARQ-CC) schemes in reducing AoI, owing to its fewer transmissions and higher compression rate.
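The AoI notion used above can be illustrated with the standard sawtooth computation (a generic single-link formula, not the paper's two-hop metric): age grows linearly and resets to the delivery delay whenever a fresher update arrives. The initial age is assumed to be zero here.

```python
def average_aoi(events, horizon):
    """Time-average AoI; events = [(generation_time, delivery_time), ...]
    sorted by delivery time, with age assumed zero at t = 0."""
    area, age, t_prev = 0.0, 0.0, 0.0
    for gen, dly in events:
        dt = dly - t_prev
        area += age * dt + dt * dt / 2    # age rises linearly until delivery
        age = dly - gen                    # reset to the fresh update's age
        t_prev = dly
    dt = horizon - t_prev
    area += age * dt + dt * dt / 2         # tail after the last delivery
    return area / horizon
```

The formula makes the abstract's claim concrete: both fewer retransmissions (smaller delivery delays) and higher compression (shorter waiting before transmission) shrink the sawtooth area and hence the average AoI.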
In this article, we consider the data updating problem in a satellite-integrated Internet of Things network for time-critical scenarios, e.g., animal tracking and environmental monitoring. Due to the limited channel rate during transmission contacts, however, constantly updating high-volume data over the uplink incurs substantial waiting and transmission delays, delivering stale information to the satellite node. To address this issue, we propose a novel metric, spatially-temporally correlative mutual information (STI), to characterize information timeliness from the perspective of information entropy by considering the correlations between the last update message and the status of the information source. By maximizing the average STI, we find the optimal allocation policy of channel slots for a fixed updating period by formulating the problem as a Markov decision process (MDP) with a possibly infinite state space. Furthermore, we derive the optimal number of allocated time slots in a unit frame by solving a constrained-range integer optimization problem with respect to the average STI. The simulation results show that the proposed periodic updating policy can significantly improve information freshness compared with the original slot allocation strategy and commonly used scheduled access strategies, i.e., slotted ALOHA and Threshold-ALOHA.
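The update-or-wait trade-off behind the MDP formulation above can be sketched with a toy finite-state value iteration (an illustrative stand-in for the STI-maximizing policy; the state space, costs, and discount factor are all invented): the state is the data age, and sending an update spends channel resources but resets the age.

```python
def value_iteration(max_age=10, cost=3.0, gamma=0.9, iters=500):
    """Returns policy[age] in {0: wait, 1: send} for a toy age MDP."""
    V = [0.0] * (max_age + 1)
    policy = [0] * (max_age + 1)
    for _ in range(iters):
        for a in range(max_age + 1):
            wait = -a + gamma * V[min(a + 1, max_age)]   # age keeps growing
            send = -a - cost + gamma * V[1]              # pay slots, reset age
            V[a], policy[a] = max((wait, 0), (send, 1))
    return policy

policy = value_iteration()      # threshold-style: send once age is large
```

As in the paper's setting, the resulting policy trades scarce uplink resources against staleness: updating too often wastes slots, while waiting too long lets the stored information diverge from the source.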
Exploration of the lunar far-side surface (LFS) has recently developed rapidly with various landers, rovers, and orbital space vehicles. Relay satellites in Earth-Moon libration point (EMLP) orbits, e.g., the "Queqiao" spacecraft, have aroused widespread interest in supporting these probe missions due to their particular relative positions in the Earth-lunar system. For example, a Halo-orbit relay satellite at the Earth-Moon L2 point can provide more transmission opportunities for communication nodes on the LFS to reach destination nodes on Earth. However, a single Halo-orbit spacecraft cannot provide all-time connectivity for cislunar communication with its limited coverage of the LFS. In this paper, we propose a dual-relay Halo orbit constellation for the cislunar communication network, providing high-proportion coverage of the LFS at any time using an analytical grid-point method. Based on the proposed constellation, we design an energy efficiency-optimal relay selection algorithm for nodes on the LFS to return data to Earth by solving a mixed-integer programming (MIP) problem with collaborative consideration of source and relay node power allocation. The simulation results demonstrate that the proposed algorithm achieves the highest energy efficiency (EE) while ensuring users' quality of service, compared with a spectrum efficiency maximization (SEM) algorithm.
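The relay selection objective above can be illustrated with a brute-force sketch (a toy stand-in for the paper's MIP with joint power allocation): for a fixed transmit power, pick the visible relay whose achievable Shannon rate per watt is highest while meeting a minimum-rate QoS constraint. All names and parameter values are hypothetical.

```python
from math import log2

def select_relay(links, p_tx, r_min):
    """links: {relay_name: channel_snr_per_watt}; returns the relay
    maximizing rate/power under a minimum-rate QoS constraint."""
    best, best_ee = None, 0.0
    for relay, snr in links.items():
        rate = log2(1 + snr * p_tx)     # Shannon rate, unit bandwidth
        if rate < r_min:
            continue                    # QoS constraint not met
        ee = rate / p_tx                # bits per joule per unit bandwidth
        if ee > best_ee:
            best, best_ee = relay, ee
    return best
```

The contrast with SEM is visible in the objective line: an EE-optimal scheme maximizes rate divided by power, whereas an SEM scheme would maximize the rate alone, even at disproportionate energy cost.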