Intrusion detection has been studied for about twenty years, since Anderson's report. However, intrusion detection techniques are still far from perfect. Current intrusion detection systems (IDSs) usually generate a large number of false alerts and cannot fully detect novel attacks or variations of known attacks. In addition, all existing IDSs focus on low-level attacks or anomalies; none of them can capture the logical steps or attack strategies behind these attacks. Consequently, the IDSs usually generate a large number of alerts. In situations where there are intensive intrusive actions, not only will actual alerts be mixed with false alerts, but the number of alerts will also become unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the intrusions behind the alerts and take appropriate actions. This paper presents a novel approach to address these issues. The proposed technique is based on the observation that most intrusions are not isolated, but related as different stages of a series of attacks, with the early stages preparing for the later ones. In other words, there are often logical steps or strategies behind a series of attacks. The proposed approach correlates alerts using prerequisites of intrusions. Intuitively, the prerequisite of an intrusion is the necessary condition for the intrusion to be successful. For example, the existence of a vulnerable service is the prerequisite of a remote buffer overflow attack against that service. The proposed approach identifies the prerequisite (e.g., the existence of a vulnerable service) and the consequence of each type of attack, and correlates the corresponding alerts by matching the consequences of some previous alerts with the prerequisites of some later ones. The proposed approach has several advantages. First, it can reduce the impact of false alerts. Second, it provides a high-level representation of the correlated alerts and thus reveals the structure of a series of attacks. Third, it can potentially be applied to predict attacks in progress, allowing intrusion response systems to take appropriate actions to stop ongoing attacks. Our preliminary experiments have demonstrated the potential of the proposed approach in reducing false alerts and uncovering high-level attack strategies.
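To make the consequence-to-prerequisite matching concrete, the following minimal Python sketch correlates a time-ordered alert stream by linking an alert to earlier alerts on the same target whose consequences intersect its prerequisites. The attack types, predicate names, and sample alert stream are illustrative assumptions, not the paper's actual knowledge base.

    from dataclasses import dataclass

    @dataclass
    class AttackType:
        name: str
        prerequisites: set   # predicates that must hold before the attack
        consequences: set    # predicates the attack can make true

    @dataclass
    class Alert:
        attack: AttackType
        target: str          # host the attack was aimed at
        timestamp: int

    # Toy knowledge base loosely modeled on a probe-then-exploit chain.
    KB = {
        "SadmindPing": AttackType("SadmindPing", set(), {"VulnerableSadmind"}),
        "SadmindOverflow": AttackType("SadmindOverflow",
                                      {"VulnerableSadmind"}, {"RootAccess"}),
        "InstallDDoSDaemon": AttackType("InstallDDoSDaemon",
                                        {"RootAccess"}, {"DDoSDaemonReady"}),
    }

    def correlate(alerts):
        """Link an alert to each earlier alert on the same target whose
        consequences intersect this alert's prerequisites."""
        edges = []
        for i, later in enumerate(alerts):
            for earlier in alerts[:i]:
                if (earlier.target == later.target and
                        earlier.attack.consequences & later.attack.prerequisites):
                    edges.append((earlier.attack.name, later.attack.name))
        return edges

    stream = [Alert(KB["SadmindPing"], "10.0.0.5", 1),
              Alert(KB["SadmindOverflow"], "10.0.0.5", 2),
              Alert(KB["InstallDDoSDaemon"], "10.0.0.5", 3)]
    print(correlate(stream))
    # [('SadmindPing', 'SadmindOverflow'), ('SadmindOverflow', 'InstallDDoSDaemon')]

In the full approach, predicates are instantiated with alert attributes before matching; the sketch approximates this by requiring the same target host.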
Flexibly expanding the storage capacity required to process large amounts of rapidly increasing unstructured log data is difficult in a conventional computing environment. In addition, implementing a log processing system that categorizes and analyzes unstructured log data is extremely difficult. To overcome these limitations, we propose and design a MongoDB-based unstructured log processing system (MdbULPS) for collecting, categorizing, and analyzing log data generated by banks. The proposed system includes a Hadoop-based analysis module for reliable parallel-distributed processing of massive log data. Furthermore, because the Hadoop distributed file system (HDFS) stores data by generating replicas of collected log data in block units, the proposed system offers automatic recovery against system failures and data loss. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. To evaluate the proposed system, we conducted three different performance tests on a local test bed of twelve nodes: comparing our system with a MySQL-based approach, comparing it with an HBase-based approach, and varying the chunk size option. The experiments showed that our system performs better at processing unstructured log data.
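As an illustration of the collection-and-categorization path, the Python sketch below turns a raw log line into a MongoDB document with pymongo; the log pattern, field names, and connection string are assumptions for the example, not MdbULPS internals.

    import re
    from datetime import datetime, timezone
    from pymongo import MongoClient

    LOG_PATTERN = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<level>\w+) (?P<msg>.*)")

    def to_document(raw_line):
        """Categorize one raw log line into a document, keeping the original
        text so nothing is lost when the pattern does not match."""
        doc = {"raw": raw_line, "ingested_at": datetime.now(timezone.utc)}
        m = LOG_PATTERN.match(raw_line)
        if m:
            doc.update(logged_at=m.group("ts"), level=m.group("level"),
                       message=m.group("msg"))
        return doc

    # The chunk size varied in the third test is a cluster-level balancer
    # setting of sharded MongoDB, not a per-insert option.
    client = MongoClient("mongodb://mongos-router:27017")  # assumed address
    client.bankdb.logs.insert_one(
        to_document("2015-03-01 12:00:00 ERROR withdrawal transaction failed"))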
Traditional intrusion detection systems (IDSs) focus on low-level attacks or anomalies and raise alerts independently, though there may be logical connections between them. In situations where there are intensive attacks, not only will actual alerts be mixed with false alerts, but the number of alerts will also become unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the alerts and take appropriate actions. This paper presents a sequence of techniques to address this issue. The first technique constructs attack scenarios by correlating alerts on the basis of prerequisites and consequences of attacks. Intuitively, the prerequisite of an attack is the necessary condition for the attack to be successful, while the consequence of an attack is its possible outcome. Based on the prerequisites and consequences of different types of attacks, the proposed method correlates alerts by (partially) matching the consequences of some prior alerts with the prerequisites of some later ones. Moreover, to handle large collections of alerts, this paper presents a set of interactive analysis utilities aimed at facilitating the investigation of large sets of intrusion alerts. This paper also presents the development of a toolkit named TIAA, which provides system support for interactive intrusion analysis. Finally, this paper reports the experiments conducted to validate the proposed techniques with the 2000 DARPA intrusion detection scenario-specific datasets and the data collected at the DEFCON 8 Capture the Flag event.
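To illustrate the kind of interactive utility such a toolkit offers, the sketch below aggregates bursts of identical alerts so an analyst faces a smaller set. The alert fields and time window are assumptions; this is not TIAA's actual code.

    from collections import namedtuple

    Alert = namedtuple("Alert", "attack src dst timestamp")

    def aggregate(alerts, window=60):
        """Merge consecutive alerts sharing (attack, src, dst) that arrive
        within `window` seconds of the previous alert in the group."""
        groups = []
        key = lambda a: (a.attack, a.src, a.dst, a.timestamp)
        for a in sorted(alerts, key=key):
            g = groups[-1] if groups else None
            if (g and g["key"] == (a.attack, a.src, a.dst)
                    and a.timestamp - g["last"] <= window):
                g["count"] += 1
                g["last"] = a.timestamp
            else:
                groups.append({"key": (a.attack, a.src, a.dst),
                               "first": a.timestamp, "last": a.timestamp,
                               "count": 1})
        return groups

    burst = [Alert("SYNFlood", "1.2.3.4", "10.0.0.5", t) for t in (0, 10, 25, 500)]
    for g in aggregate(burst):
        print(g["key"], "x", g["count"])   # one group of 3, one of 1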
The number of service techniques available for digitized home appliances is rapidly increasing as a result of various advances in digital technology. Users can now easily control and monitor home appliances via sensor networks formed among the appliances in ubiquitous environments. However, home appliances generate such large amounts of status metadata every month that providing home appliance monitoring services requires an approach able to store, analyze, and process all of it. We propose a system that uses UPnP to collect metadata from home appliances and cloud computing technology to store and process the metadata collected from ubiquitous sensor network environments. The proposed system is built around a home gateway, designed and implemented with UPnP technology, that searches for and collects device features and service information. The gateway also transmits the metadata from the home appliances to a cloud-based data server, which uses Hadoop-based technology to store and process the metadata for the home appliance monitoring service.
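A minimal sketch of the gateway-side transmission step follows: discovered UPnP device and service descriptions are packaged as JSON and posted to the cloud collector. The endpoint URL, field names, and device values are hypothetical, introduced only for illustration.

    from datetime import datetime, timezone
    import requests  # widely used third-party HTTP client

    COLLECTOR_URL = "http://cloud-collector.example.com/metadata"  # assumed

    def ship_metadata(device):
        """Send one metadata record for a discovered appliance to the cloud."""
        record = {
            "udn": device["udn"],                 # UPnP Unique Device Name
            "device_type": device["deviceType"],  # from the device description
            "services": device["services"],       # advertised service types
            "reported_at": datetime.now(timezone.utc).isoformat(),
        }
        requests.post(COLLECTOR_URL, json=record, timeout=5)

    ship_metadata({
        "udn": "uuid:fridge-01",                                  # hypothetical
        "deviceType": "urn:schemas-upnp-org:device:Refrigerator:1",
        "services": ["urn:schemas-upnp-org:service:TemperatureSensor:1"],
    })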
Many people now receive various multimedia services through their personal devices via a home network. The rapid growth in multimedia content and smart devices means that multimedia services can encounter many problems in home networks. Large amounts of storage capacity and processing resources are required to manage this media content. In addition, users may find it difficult to receive smooth streaming services in remote places because of limited remote access functions. To address these problems, we propose a model that uses cloud computing technology to store and manage large volumes of multimedia content. The proposed model provides remote access functions for receiving multimedia services in home networks and in wireless local area networks (WLANs). The model has four components: the Assistant Gateway, the cloud server, the remote controller, and the media device renderer. The Assistant Gateway is the central component of the proposed model. It can access the cloud server to obtain multimedia content information before sending it to users in a home network. Thus, users can receive home media streaming services via the Assistant Gateway using a media device renderer. Based on this model, we implemented each component to support streaming services for users in remote areas.
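The sketch below suggests one way the Assistant Gateway's lookup role could work: query the cloud server for a user's content list, then resolve a stream URL for the media device renderer. The API paths, parameters, and fields are assumptions, not the implemented interfaces.

    import requests

    CLOUD_API = "http://cloud.example.com/api"  # assumed cloud-server API root

    def list_content(user_id):
        """Ask the cloud server which media items this user may stream."""
        resp = requests.get(f"{CLOUD_API}/users/{user_id}/content", timeout=5)
        resp.raise_for_status()
        return resp.json()  # e.g. [{"id": "m1", "title": "Home Video"}]

    def resolve_stream(item_id, network="wlan"):
        """Pick a stream variant suited to the client's network (home or WLAN)."""
        resp = requests.get(f"{CLOUD_API}/content/{item_id}/stream",
                            params={"profile": network}, timeout=5)
        resp.raise_for_status()
        return resp.json()["url"]  # the media device renderer plays this URL

    for item in list_content("alice"):
        print(item["title"], "->", resolve_stream(item["id"]))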
In an earlier publication, we presented SMCCSE (Social Media Cloud Computing Service Environment), which supports the development and construction of Social Networking Services (SNS) based on large amounts of social media, including audio, video, and images. The main contributions of this paper are to present the design and implementation of a MapReduce-based image conversion module for social media transcoding and transmoding, the core functions of SMCCSE, and to report performance evaluation results for this module. In this paper, we show a Hadoop-based image conversion module in SMCCSE that currently handles images only, not video or audio. Moreover, we discuss experimental results obtained on a 28-node SMCCSE cluster under varying experimental conditions in order to verify the performance of the image conversion function.
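For flavor, here is a minimal Hadoop Streaming mapper in Python for image conversion: each input line is assumed to be a path to a source image, which the mapper converts with Pillow and reports as a key/value pair. The conversion profile, directories, and job layout are assumptions; the actual SMCCSE module may organize its MapReduce job differently.

    #!/usr/bin/env python
    import os
    import sys
    from PIL import Image  # Pillow imaging library

    TARGET_SIZE = (320, 240)        # assumed conversion profile
    OUT_DIR = "/data/converted"     # assumed shared output directory

    for line in sys.stdin:
        src = line.strip()
        if not src:
            continue
        name = os.path.splitext(os.path.basename(src))[0] + ".jpg"
        dst = os.path.join(OUT_DIR, name)
        with Image.open(src) as im:
            im.convert("RGB").resize(TARGET_SIZE).save(dst, "JPEG")
        # Emit key/value output so the job can track converted images.
        print(f"{src}\t{dst}")

Such a mapper would be launched with Hadoop Streaming, e.g. hadoop jar hadoop-streaming.jar -input image-paths.txt -output results -mapper mapper.py -file mapper.py.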
Diversified SaaS applications give users more choices that fit their own preferences. However, this diversification also makes it difficult for users to choose the best application, and users cannot combine functionality across SaaS applications. In this paper, we propose a platform that provides a SaaS mashup service by extracting interoperable service functions from SaaS-based applications deployed by independent vendors, and that supports a customized service recommendation function through log data binding in the cloud environment. The proposed SaaS mashup service platform consists of a SaaS aggregation framework and a log data binding framework. Each framework is realized using Apache Kafka and rule-matrix-based recommendation techniques. We present the theoretical basis for implementing the high-performance message-processing function using Kafka. The SaaS mashup service platform, which provides a new type of mashup service by linking SaaS functions with the technology described above, allows users to freely combine the service functions they need and enjoy a rich service-utilization experience through the SaaS mashup function. The platform developed through this SaaS mashup service research will enable various flexible SaaS services and is expected to contribute to the development of the smart-contents industry and the open market.
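The sketch below illustrates the log-data-binding path with the kafka-python client: SaaS usage events flow through a Kafka topic, and a consumer builds a simple co-usage count matrix that a rule-matrix recommender could read. The broker address, topic name, and event fields are assumptions, not the platform's actual schema.

    import json
    from collections import defaultdict
    from kafka import KafkaConsumer, KafkaProducer  # kafka-python client

    BROKER = "kafka-broker:9092"   # assumed broker address
    TOPIC = "saas-usage-logs"      # assumed topic name

    def publish(user, function):
        """Emit one usage event for a SaaS function the user invoked."""
        producer = KafkaProducer(
            bootstrap_servers=BROKER,
            value_serializer=lambda v: json.dumps(v).encode())
        producer.send(TOPIC, {"user": user, "function": function})
        producer.flush()

    def build_co_usage_matrix():
        """Count how often pairs of SaaS functions are used by the same user;
        high counts are candidate entries for a rule-matrix recommender."""
        consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKER,
                                 auto_offset_reset="earliest",
                                 consumer_timeout_ms=5000,
                                 value_deserializer=json.loads)
        used = defaultdict(set)     # user -> functions already seen
        matrix = defaultdict(int)   # (function_a, function_b) -> co-usage count
        for msg in consumer:
            user, fn = msg.value["user"], msg.value["function"]
            for other in used[user]:
                matrix[tuple(sorted((fn, other)))] += 1
            used[user].add(fn)
        return matrix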