A Multi-Modal Artificial Intelligence System-Based Distributed Network Latency Measurement

2020 
Network latency plays an important role in the server-selection process as well as in real-time applications. Depending on the network system size, network latency can be either explicitly measured or predicted. While for small-scale systems explicit delay measurements can be performed between any pair of network nodes, this method is not feasible for large-scale networks due to the tremendous traffic and processing overhead. As a result, networking companies as well as researchers use estimation methods for round-trip time (RTT) in large-scale networks. In such methods, network latency estimation is based on performing a small set of actual RTT measurements and predicting the remaining latencies among all node pairs. However, these methods suffer from several drawbacks such as poor performance, long convergence duration, or lack of convergence. In this article, we present a novel method of large-scale network latency estimation using artificial intelligence (AI). Our system uses a multimodal deep learning algorithm for high accuracy and computing speed. The proposed AI-based system is trained and evaluated using the well-known KING data set derived from measurements of a real large-scale network. Performance evaluations show that our proposed approach significantly outperforms existing techniques, achieving a 90th-percentile relative error of 0.25 and an average accuracy of 96.1%, with 76.4% of the estimates falling within 20% estimation error.
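The abstract reports accuracy in terms of relative-error statistics (90th-percentile relative error, fraction of estimates within 20% error). A minimal sketch of how such metrics are typically computed from measured and predicted RTTs might look like the following; the function name and the RTT values are illustrative, not taken from the paper:

```python
def latency_error_stats(measured, predicted):
    """Relative-error statistics for RTT predictions.

    measured, predicted: sequences of RTTs (e.g., in ms) for the same
    node pairs. Returns the 90th-percentile relative error and the
    fraction of predictions whose relative error is at most 20%.
    """
    rel_err = sorted(abs(p - m) / m for m, p in zip(measured, predicted))
    # 90th percentile via linear interpolation between closest ranks
    k = 0.9 * (len(rel_err) - 1)
    lo, hi = int(k), min(int(k) + 1, len(rel_err) - 1)
    p90 = rel_err[lo] + (k - lo) * (rel_err[hi] - rel_err[lo])
    within_20 = sum(e <= 0.20 for e in rel_err) / len(rel_err)
    return p90, within_20

# Illustrative (made-up) RTT values in milliseconds:
measured = [50.0, 100.0, 200.0, 80.0]
predicted = [55.0, 90.0, 210.0, 100.0]
p90, within_20 = latency_error_stats(measured, predicted)
```

Here a relative error of 0.25 at the 90th percentile would mean that 90% of the predicted RTTs deviate from the measured values by at most 25%.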