To forward an IP packet toward its destination, a router performs a forwarding decision on each incoming packet to determine the packet's next-hop router. This is achieved by looking up the longest prefix matching the packet's destination address in the routing table. Consequently, as routing tables grow, the speed of the IP address lookup operation becomes a major factor in the overall performance of a router. In this paper, a new IP address lookup algorithm based on a cache routing table is proposed; the cache holds recently used IP addresses together with their forwarding information to speed up IP address lookups in routers. We evaluated the performance of the proposed algorithm in terms of lookup time for several sets of IP addresses. The results show that our algorithm is efficient in terms of lookup speed, since the search finishes immediately whenever the input IP address is found in the cache routing table.
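The cache-then-fallback lookup described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the class name `CachedRouter`, the LRU eviction policy, and the linear longest-prefix-match scan are all assumptions chosen for clarity.

```python
# Sketch of an IP lookup that first consults a cache of recently used
# destination addresses and falls back to a longest-prefix-match search
# over the routing table on a cache miss. Illustrative only.
import ipaddress
from collections import OrderedDict

class CachedRouter:
    def __init__(self, routes, cache_size=256):
        # routes: list of (prefix string, next hop), e.g. ("10.0.0.0/8", "R1")
        self.routes = [(ipaddress.ip_network(p), nh) for p, nh in routes]
        self.cache = OrderedDict()          # destination -> next hop (LRU order)
        self.cache_size = cache_size

    def lookup(self, dst):
        if dst in self.cache:               # cache hit: finish immediately
            self.cache.move_to_end(dst)
            return self.cache[dst]
        addr = ipaddress.ip_address(dst)    # cache miss: longest-prefix match
        matches = [(net.prefixlen, nh) for net, nh in self.routes
                   if addr in net]
        next_hop = max(matches)[1] if matches else None
        self.cache[dst] = next_hop          # remember for future packets
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict the least recently used entry
        return next_hop

router = CachedRouter([("10.0.0.0/8", "R1"), ("10.1.0.0/16", "R2")])
router.lookup("10.1.2.3")   # full LPM search: the /16 prefix wins over the /8
router.lookup("10.1.2.3")   # repeated lookup is served directly from the cache
```

A hash-based cache like this turns repeated lookups into O(1) operations, which is the source of the speedup the abstract claims for addresses already present in the cache.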
In recent years, datasets used in pattern classification have contained large numbers of features, including relevant, irrelevant, and redundant ones. Irrelevant and redundant features increase computational time and reduce classification performance. Feature selection is a preprocessing technique that chooses a subset of relevant features to achieve similar or even better classification performance than using all features. This paper presents two new hybrid feature selection algorithms called particle swarm optimization with crossover operator (denoted PSOCO1 and PSOCO2); the algorithms are based on the integration of particle swarm optimization (PSO) with the crossover operator (CO) of genetic algorithms. A new relevant features vector (RFV) is introduced and used by our algorithms to execute a crossover operation between the RFV and other feature vectors. To demonstrate the effectiveness of these algorithms, we compared them with standard PSO [14], PSO4-2 [8], and HGAPSO [28] on twelve benchmark datasets. The results show that the two proposed algorithms significantly reduce the number of selected features and achieve similar or even better classification accuracy in almost all cases.
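The crossover step between the RFV and a particle's feature vector could look roughly like the sketch below. This is a hypothetical illustration under stated assumptions, not the paper's PSOCO1/PSOCO2 method: the binary encoding, the single-point crossover variant, and the names `crossover`, `particle`, and `rfv` are all assumptions.

```python
# Hypothetical sketch of the crossover step only (not the full PSO loop):
# a particle's binary feature mask is crossed over with a "relevant
# features vector" (RFV) using single-point crossover, as in genetic
# algorithms. Each position is a 0/1 flag meaning "feature selected".
import random

def crossover(particle, rfv, rng=random):
    # single-point crossover: the child inherits the particle's genes up to
    # a random cut point and the RFV's genes after it
    assert len(particle) == len(rfv)
    point = rng.randrange(1, len(particle))
    return particle[:point] + rfv[point:]

rng = random.Random(0)
particle = [1, 0, 1, 1, 0, 0]   # one particle's current feature subset
rfv      = [0, 1, 1, 0, 1, 0]   # vector of features judged relevant
child = crossover(particle, rfv, rng)
# every gene of the child comes from either the particle or the RFV
```

Injecting RFV genes this way biases the swarm toward subsets containing features already identified as relevant, which is consistent with the reduction in selected features the abstract reports.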