Loss classification in optical burst switching networks using machine learning techniques: improving the performance of TCP

2008 
Optical burst switching (OBS) is considered a contending technology for the future core of the Internet. However, because core nodes lack buffers, losses occur due to contention among simultaneously arriving bursts. Contention losses do not necessarily indicate congestion in the network, so differentiating (classifying) losses is essential in many applications to avoid falsely identifying congestion. In this paper, we propose a loss classification technique for OBS networks based on machine learning. We devise a new measure, derived from the observed losses, to differentiate between congestion and contention losses: the number of bursts between failures (NBBF). We observe that the NBBF follows a Gaussian distribution with different parameters for contention and congestion losses, and this feature is used for differentiation. We apply both a supervised learning technique (hidden Markov model, HMM) and an unsupervised learning technique (expectation-maximization (EM) clustering) to the observed losses, classifying them into a set of states (clusters), after which an algorithm differentiates between congestion and contention losses. We also demonstrate the use of loss differentiation to improve the performance of the Transmission Control Protocol (TCP) over OBS networks. We suitably modify the congestion control mechanism of TCP to arrive at two variants, HMM-TCP and EM-TCP, and compare their performance with TCP NewReno, TCP SACK, and Burst TCP (X. Yu et al., Mar. 2004). Simulation results demonstrate the effectiveness and accuracy of the loss classification technique in different network scenarios.
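The core of the unsupervised approach described above can be illustrated with a minimal sketch: fit a two-component one-dimensional Gaussian mixture to observed NBBF samples via EM, then assign each loss to the cluster with the higher responsibility. This is an illustrative reimplementation, not the authors' code; the assumption that congestion losses correspond to the cluster with the smaller mean NBBF (losses arriving closer together) is ours, as are the synthetic sample parameters.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Fit a two-component 1-D Gaussian mixture to NBBF samples via EM.

    Returns (responsibility of the low-mean component per sample, means).
    The low-mean cluster is interpreted, hypothetically, as congestion
    losses, since congestion produces fewer bursts between failures.
    """
    x = np.asarray(x, dtype=float)
    # Deterministic initialisation: means at the data extremes,
    # shared variance, equal mixture weights.
    mu = np.array([x.min(), x.max()])
    var = np.full(2, x.var() + 1e-9)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per sample.
        d = x[:, None] - mu[None, :]
        pdf = np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / nk + 1e-9
    low = int(mu.argmin())  # cluster with the smaller mean NBBF
    return r[:, low], mu

# Synthetic NBBF samples (illustrative parameters only): congestion
# losses cluster at small NBBF, contention losses at large NBBF.
rng = np.random.default_rng(1)
congestion = rng.normal(5.0, 1.0, 200)
contention = rng.normal(50.0, 5.0, 200)
samples = np.concatenate([congestion, contention])
resp, means = em_gmm_1d(samples)
labels = resp > 0.5  # True -> classified as a congestion loss
```

With well-separated cluster means, as the abstract's Gaussian observation suggests, the EM fit recovers the two loss populations and the threshold on the responsibility yields the per-loss classification that a TCP variant could consult before reacting to a loss.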