We address the problem of providing quality-of-service (QoS) guarantees in a multiple-hop packet/cell switched environment while providing high link utilization in the presence of bursty traffic. A scheme based on bandwidth and buffer reservations at the Virtual Path (VP) level is proposed for ATM networks. This approach enables us to provide accurate end-to-end QoS guarantees while achieving high utilization by employing statistical multiplexing and traffic shaping of bursty traffic sources. A simple round-robin scheduler is proposed for realizing this approach and is shown to be implementable using standard ATM hardware, viz. cell spacers. The problem of distributing the bandwidth and buffer space assigned to a VP over its multiple hops is addressed. We prove that allowing all the end-to-end loss to occur at the first hop is optimal under some conditions, and show that its performance can be bounded with respect to the optimal in other conditions. This results in an equal amount of bandwidth being assigned to a VP at each hop and essentially no queueing after the first hop. Using simulations, the average-case performance of this approach is also found to be good. Additional simulation results are presented to evaluate the proposed approach.
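As a rough illustration of the kind of per-VP reservation and round-robin service the abstract refers to, the sketch below serves per-VP queues in proportion to assumed weights and confines cell losses to the first-hop buffer. The class name, weights, and buffer limits are illustrative assumptions, not the paper's parameters or scheduler.

```python
from collections import deque

# Minimal sketch of a weighted round-robin cell scheduler over per-VP queues
# (illustrative only; not the paper's scheduler or parameters).

class VPQueue:
    def __init__(self, vp_id, weight, buffer_cells):
        self.vp_id = vp_id
        self.weight = weight              # cells served per round (bandwidth share)
        self.buffer_cells = buffer_cells  # reserved buffer at the first hop
        self.cells = deque()

    def enqueue(self, cell):
        """Admit a cell only if the VP's reserved buffer is not full,
        so any loss is taken here at the first hop."""
        if len(self.cells) < self.buffer_cells:
            self.cells.append(cell)
            return True
        return False                      # cell dropped at the first hop

def round_robin(vps):
    """One scheduling round: serve up to `weight` cells from each VP queue."""
    served = []
    for vp in vps:
        for _ in range(vp.weight):
            if vp.cells:
                served.append((vp.vp_id, vp.cells.popleft()))
    return served
```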
Network applications require certain individual performance guarantees that can be provided if enough network resources are available. Consequently, contention for the limited network resources may occur. For this reason, networks use flow control to manage network resources fairly and efficiently. This paper presents a distributed microeconomic flow control technique that models the network as competitive markets. In these markets, switches price their link bandwidth based on supply and demand, and users purchase bandwidth so as to maximize their individual quality of service (QoS). This yields a decentralized flow control method that provides a Pareto optimal bandwidth distribution and high utilization (over 90% in simulation results). Discussions about stability and the Pareto optimal distribution are given, as well as simulation results using actual MPEG-compressed video traffic.
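The loop below is a minimal, hypothetical sketch of the supply-and-demand mechanism the abstract outlines: a switch re-prices its link in proportion to excess demand, while each user buys the bandwidth that maximizes a simple concave utility under a budget. The utility form, step size, and function names are assumptions for illustration, not the paper's model.

```python
# Hypothetical sketch of market-based flow control: the switch re-prices its
# link until aggregate demand roughly matches capacity; each user buys the
# bandwidth maximizing w * log(x) - price * x, subject to a budget.

def user_demand(price, budget, weight):
    """Bandwidth a user purchases at the given price (closed-form maximizer
    of w*log(x) - price*x, capped by what the budget can buy)."""
    return min(weight / price, budget / price)

def price_link(capacity, users, price=0.1, step=0.05, rounds=500):
    """Multiplicative price adjustment driven by excess demand."""
    for _ in range(rounds):
        demand = sum(user_demand(price, b, w) for (b, w) in users)
        excess = demand - capacity
        if abs(excess) < 1e-3 * capacity:
            break
        price *= 1.0 + step * excess / capacity   # raise price when over-demanded
    return price, [user_demand(price, b, w) for (b, w) in users]

# Example: a 100 Mb/s link shared by three users, each given as (budget, weight).
price, shares = price_link(100.0, [(50.0, 3.0), (30.0, 2.0), (20.0, 1.0)])
print(round(price, 3), [round(s, 1) for s in shares])
```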
Richard Moser, PhD (1), Min Hua, PhD (2), Paul Courtney, MS (3), Dianne Reeves, RN, MSN (1), Riki Ohira, PhD (4), Abdul Shaikh, PhD, MHSc (1), and Bradford Hesse, PhD (1). Affiliations: (1) National Cancer Institute; (2) Fox Chase Cancer Center; (3) SAIC-Frederick Inc./National Cancer Institute at Frederick; (4) Booz Allen Hamilton.
Current peer-to-peer (P2P) file sharing applications are remarkably simple and robust, but their inefficiency can produce very high network loads. The use of super-peers has been proposed to improve the performance of unstructured P2P systems. These have the potential to approach the performance and scalability of structured systems, while retaining the benefits of unstructured P2P systems. There has, however, been little consensus on the best topology for connecting these super-peers, or how to construct the topology in a distributed, robust way. In this paper we propose a scalable unstructured P2P system (SUPS). The unique aspect of SUPS is a protocol for the distributed construction of a super-peer topology that has highly desirable performance characteristics. The protocol is inspired by the theory of random graphs. We describe the protocol, and demonstrate experimentally that it produces a balanced and low-diameter super-peer topology at low cost. We show that the method is very robust to super-peer failures and inconsistent information, and compare it with other approaches.
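The abstract does not give the protocol's details, so the following is only a hypothetical sketch of a distributed, random-graph-style construction in the same spirit: each super-peer keeps a partial view of other super-peers and links to uniformly random candidates until it reaches a target degree, which tends to yield a balanced, low-diameter topology. All names and parameters are illustrative.

```python
import random

# Hypothetical sketch (not the SUPS protocol itself): random neighbor selection
# toward a target degree, repaired after failures.

class SuperPeer:
    def __init__(self, peer_id, target_degree=4):
        self.peer_id = peer_id
        self.target_degree = target_degree
        self.neighbors = set()
        self.known_peers = set()     # partial membership view, e.g. learned via gossip

    def maintain(self):
        """Add links to uniformly random known super-peers while under the
        target degree; random near-regular graphs have low diameter."""
        candidates = list(self.known_peers - self.neighbors - {self.peer_id})
        while len(self.neighbors) < self.target_degree and candidates:
            choice = random.choice(candidates)
            candidates.remove(choice)
            self.neighbors.add(choice)   # in a real system this is a handshake

    def handle_failure(self, failed_peer):
        """Drop a failed neighbor; the next maintenance round restores the degree."""
        self.neighbors.discard(failed_peer)
        self.known_peers.discard(failed_peer)
```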
Intrusion detection has been studied for about twenty years, since Anderson's report. However, intrusion detection techniques are still far from perfect. Current intrusion detection systems (IDSs) usually generate a large number of false alerts and cannot fully detect novel attacks or variations of known attacks. In addition, all the existing IDSs focus on low-level attacks or anomalies; none of them can capture the logical steps or attacking strategies behind these attacks. Consequently, the IDSs usually generate a large number of alerts. In situations where there are intensive intrusive actions, not only will actual alerts be mixed with false alerts, but the number of alerts will also become unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the intrusions behind the alerts and take appropriate actions. This paper presents a novel approach to address these issues. The proposed technique is based on the observation that most intrusions are not isolated but related as different stages of a series of attacks, with the early stages preparing for the later ones. In other words, there are often logical steps or strategies behind a series of attacks. The proposed approach correlates alerts using prerequisites of intrusions. Intuitively, the prerequisite of an intrusion is the necessary condition for the intrusion to be successful. For example, the existence of a vulnerable service is the prerequisite of a remote buffer overflow attack against the service. The proposed approach identifies the prerequisite (e.g., existence of a vulnerable service) and the consequence of each type of attack and correlates the corresponding alerts by matching the consequences of earlier alerts with the prerequisites of later ones. The proposed approach has several advantages. First, it can reduce the impact of false alerts. Second, it provides a high-level representation of the correlated alerts and thus reveals the structure of a series of attacks. Third, it can potentially be applied to predict attacks in progress and allows intrusion response systems to take appropriate actions to stop on-going attacks. Our preliminary experiments have demonstrated the potential of the proposed approach in reducing false alerts and uncovering high-level attack strategies.
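A minimal sketch of the prerequisite/consequence matching idea follows. The attack knowledge base, alert fields, and predicate names are made-up examples for illustration, not the paper's actual data model.

```python
# Illustrative sketch: link alert i -> alert j when a consequence of i matches
# a prerequisite of j, i happened earlier, and both refer to the same target.

ATTACK_KNOWLEDGE = {
    # attack type: (prerequisites, consequences), as sets of predicates
    "PortScan":        (set(),                  {"knows_open_ports"}),
    "SadmindPing":     ({"knows_open_ports"},   {"vulnerable_sadmind"}),
    "SadmindOverflow": ({"vulnerable_sadmind"}, {"root_access"}),
}

def correlate(alerts):
    """Return edges (i, j) between correlated alerts."""
    edges = []
    for i, a in enumerate(alerts):
        _, a_conseq = ATTACK_KNOWLEDGE[a["type"]]
        for j, b in enumerate(alerts):
            b_prereq, _ = ATTACK_KNOWLEDGE[b["type"]]
            if (a["time"] < b["time"]
                    and a["target"] == b["target"]
                    and a_conseq & b_prereq):
                edges.append((i, j))
    return edges

alerts = [
    {"type": "PortScan",        "target": "10.0.0.5", "time": 1},
    {"type": "SadmindPing",     "target": "10.0.0.5", "time": 2},
    {"type": "SadmindOverflow", "target": "10.0.0.5", "time": 3},
]
print(correlate(alerts))   # [(0, 1), (1, 2)]
```

The resulting edges chain the low-level alerts into a higher-level attack scenario, which is the structure the approach is meant to reveal.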
Botnets pose serious threats to the Internet. In spite of substantial efforts to address the issue, botnets are spreading dramatically. Bots in a botnet execute commands under the control of the botnet owner or controller. A first step in protecting against botnets is identifying their presence and activities. In this paper, we propose a method of identifying the high-level commands executed by bots. The method uses run-time monitoring of bot execution to capture and analyze run-time call behavior. We find that bots have distinct behavior patterns when they perform pre-programmed bot commands. The patterns are characterized by sequences of common API calls at regular intervals. We demonstrate that commands aiming to achieve the same result have very similar API call behavior in bot variants, even when they are from different bot families. We implemented and evaluated a prototype of our method. Run-time monitoring is accomplished by user-level hooking. In the experiments, the proposed method successfully identified the bot commands being executed with a success rate of 97%. The ability of the method to identify bot commands despite the use of execution obfuscation is also addressed.
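As an illustration of matching a monitored call trace against command-level API call patterns, here is a small sketch. The signature sequences and API names are hypothetical examples, not the paper's measured patterns or implementation.

```python
# Illustrative sketch: each known bot command is characterized by a short
# signature sequence of API calls; a command is reported when its signature
# appears as an (in-order) subsequence of the observed call trace.

COMMAND_SIGNATURES = {
    # hypothetical signatures for illustration only
    "ddos_syn_flood": ["socket", "connect", "send", "closesocket"],
    "keylogging":     ["SetWindowsHookExA", "GetAsyncKeyState", "WriteFile"],
}

def is_subsequence(signature, trace):
    """True if the signature's calls appear in order (not necessarily
    contiguously) within the observed trace."""
    it = iter(trace)
    return all(call in it for call in signature)

def identify_commands(trace):
    """Return the commands whose signatures match the observed call trace."""
    return [cmd for cmd, sig in COMMAND_SIGNATURES.items()
            if is_subsequence(sig, trace)]

observed = ["LoadLibraryA", "socket", "connect", "send", "send", "closesocket"]
print(identify_commands(observed))   # ['ddos_syn_flood']
```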
Through simulation, we studied the performance of three adaptive “wormhole” routing strategies and compared them with static routing. Since adaptive routing is susceptible to deadlock, an abort-and-retry strategy was used to prevent deadlock from arising. The impact of packetizing long messages and of buffering at message destinations was also studied. Results are presented and analyzed for a variety of hardware configurations and traffic conditions. The combination of adaptive routing, abort-and-retry, and buffering at the destination is shown to achieve excellent performance at modest cost.
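A hypothetical sketch of one hop of adaptive routing with abort-and-retry on a 2-D mesh is shown below: the router tries any free productive (distance-reducing) channel, and if all are busy the worm is aborted so the source can retry after a backoff, avoiding deadlock. The routing function and mesh model are illustrative, not the specific strategies studied.

```python
import random

# Illustrative sketch of adaptive routing with abort-and-retry on a 2-D mesh.

def productive_channels(current, dest):
    """Output channels that move the message closer to its destination."""
    (x, y), (dx, dy) = current, dest
    channels = []
    if dx != x:
        channels.append("+x" if dx > x else "-x")
    if dy != y:
        channels.append("+y" if dy > y else "-y")
    return channels

def route_one_hop(current, dest, channel_busy):
    """Pick a free productive channel, or return None to abort the worm
    (the source then retries after a backoff, so no deadlock can persist)."""
    free = [c for c in productive_channels(current, dest) if not channel_busy[c]]
    return random.choice(free) if free else None
```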