The Cloud infrastructure services landscape advances steadily, leaving users with the agony of choice. As a result, Cloud service identification and discovery remains a hard problem due to differing service descriptions, non-standardised naming conventions, and heterogeneous types and features of Cloud services.
Cloud consumers have access to an increasingly diverse range of resource and contract options, but lack appropriate resource scaling solutions that can exploit this diversity to minimize the cost of their cloud-hosted applications. Traditional approaches tend to use homogeneous resources and horizontal scaling to handle workload fluctuations and do not leverage resource and contract heterogeneity to optimize cloud costs. In this paper, we propose a novel opportunistic resource scaling approach that exploits both resource and contract heterogeneity to achieve cost-effective resource allocations. We model resource allocation as an unbounded knapsack problem, and resource scaling as a one-step-ahead resource allocation problem. Based on these models, we propose two scaling strategies: (a) delta capacity optimization, which optimizes costs for the difference between the existing resource allocation and the capacity required for the forecast workload, and (b) full capacity optimization, which optimizes costs for the full resource capacity corresponding to the forecast workload. We evaluate both strategies using two real-world workload datasets and compare them against three different scaling strategies. The results show that our proposed approach, particularly full capacity optimization, outperforms all of them and offers in excess of 70 percent cost savings compared to the traditional scaling approach.
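Since the abstract frames resource allocation as an unbounded knapsack problem, the following is a minimal sketch of that formulation: choose any number of copies of each resource type so that total capacity covers the required capacity at minimum cost. The instance types, capacity units, hourly prices, and the function name `min_cost_allocation` are illustrative assumptions, not values or code from the paper.

```python
# Minimal unbounded-knapsack sketch for cost-minimal resource allocation.
# All instance types and prices below are made up for illustration.

def min_cost_allocation(instance_types, required_capacity):
    """instance_types: list of (name, capacity_units, hourly_cost).
    Returns (total_cost, {name: count}) whose combined capacity
    covers required_capacity at minimum total cost."""
    INF = float("inf")
    # best[c] = cheapest (cost, counts) that covers capacity c
    best = [(0.0, {})] + [(INF, None)] * required_capacity
    for c in range(1, required_capacity + 1):
        for name, cap, cost in instance_types:
            prev_cost, prev_counts = best[max(0, c - cap)]
            if prev_cost + cost < best[c][0]:
                counts = dict(prev_counts)
                counts[name] = counts.get(name, 0) + 1
                best[c] = (prev_cost + cost, counts)
    return best[required_capacity]

if __name__ == "__main__":
    # Hypothetical heterogeneous resources and contracts
    # (on-demand vs. reserved-style pricing).
    types = [("small.on_demand", 2, 0.10),
             ("large.on_demand", 8, 0.34),
             ("large.reserved", 8, 0.22)]
    cost, counts = min_cost_allocation(types, 37)
    print(cost, counts)
```

In the delta strategy this DP would be run only on the capacity gap between the current allocation and the forecast demand, while the full strategy would re-solve it for the entire forecast capacity.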
Advances in the media and entertainment industries, for example streaming audio and digital TV, present new challenges for managing large audio-visual collections. Efficient and effective retrieval from large content collections forms an important component of the business models for content holders, and this is driving a need for research in audio-visual search and retrieval. Current content management systems support retrieval using low-level features, such as motion, colour, texture, beat and loudness. However, low-level features often have little meaning for the human users of these systems, who much prefer to identify content using high-level semantic descriptions or concepts. This creates a gap between the system and the user that must be bridged for these systems to be used effectively. The research presented in this paper describes our approach to bridging this gap in a specific content domain, sports video. Our approach is based on a number of automatic techniques for feature detection used in combination with heuristic rules determined through manual observation of sports footage. This has led to a set of models for interesting sporting events (goal segments) that have been implemented as part of an information retrieval system. The paper also presents results comparing the output of the system against manually identified goals.
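To make the idea of combining low-level feature detectors with heuristic rules concrete, here is a hedged sketch of how candidate goal segments might be flagged. The feature names, thresholds, and rule structure are illustrative assumptions, not the paper's actual models.

```python
# A conceptual sketch: low-level audio/visual features per segment are
# combined by a simple heuristic rule to flag candidate goal segments.
# Feature names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Segment:
    start: float            # seconds
    end: float
    crowd_loudness: float   # 0..1, audio-energy based
    motion: float           # 0..1, average motion magnitude
    scoreboard_change: bool
    replay_detected: bool

def is_candidate_goal(seg: Segment) -> bool:
    """Heuristic: a goal segment typically shows a burst of crowd noise
    and motion, followed by a replay or a scoreboard change."""
    excitement = seg.crowd_loudness > 0.7 and seg.motion > 0.6
    confirmation = seg.replay_detected or seg.scoreboard_change
    return excitement and confirmation

segments = [
    Segment(612.0, 640.0, 0.85, 0.72, True, True),
    Segment(900.0, 915.0, 0.40, 0.55, False, False),
]
print([(s.start, s.end) for s in segments if is_candidate_goal(s)])
```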
Various health care devices owned by either hospitals or individuals are producing huge amounts of health care data. This big health data may contain valuable knowledge and open up new business opportunities. The cloud is an obvious candidate for collecting, storing and analysing such big health care data. However, health care data is highly sensitive to its owners and thus should be well protected in the cloud. This paper presents our solution for protecting and analysing health care data stored in the cloud. First, we develop novel technologies to protect data privacy and enable secure data sharing in the cloud. Second, we present the methods and tools used to conduct big health care data analysis. Finally, both the security technology and the data analysis methods are evaluated to show the usefulness and efficiency of our solution.
The amount of encrypted Internet traffic almost doubles every year thanks to the wide adoption of end-to-end traffic encryption solutions such as IPSec, TLS and SSH. Despite all the user privacy benefits that end-to-end encryption provides, encrypted Internet traffic blinds intrusion detection systems (IDS) and makes detecting malicious traffic extremely difficult. The resulting conflict between user privacy and security has prompted solutions for deep packet inspection (DPI) over encrypted traffic. The solutions proposed to date remain restricted in that they require intensive computation during connection setup or detection. For example, BlindBox, introduced by Sherry et al. (SIGCOMM 2015), enables inspection over TLS-encrypted traffic without compromising users' privacy, but its use is limited due to a significant delay in establishing an inspected channel. PrivDPI, proposed more recently by Ning et al. (ACM CCS 2019), improves the overall efficiency of BlindBox and makes the inspection scenario more viable. Despite this improvement, we show in this paper that the user privacy of Ning et al.'s PrivDPI can be compromised entirely by the rule generator without involving any other parties, including the middlebox. Having observed the difficulty of achieving both efficiency and security in previous work, we propose a new DPI system for encrypted traffic, named "Practical and Privacy-Preserving Deep Packet Inspection (P2DPI)". P2DPI enjoys the same level of security and privacy that BlindBox provides. At the same time, P2DPI offers fast setup and encryption and outperforms PrivDPI. Our results are supported by formal security analysis. We implemented P2DPI and the comparable PrivDPI and performed extensive experiments for performance analysis and comparison.
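For readers unfamiliar with this line of work, the sketch below illustrates the general token-matching idea behind BlindBox-style DPI over encrypted traffic: payloads are split into fixed-size sliding-window tokens, each token is mapped to a pseudorandom tag under a shared key, and the middlebox matches tags against pre-computed tags of its rules. This is a conceptual illustration only, assuming HMAC as the tag function and an 8-byte window; it omits the actual key setup, obfuscated rules, and security mechanisms of PrivDPI and P2DPI.

```python
# Conceptual sketch of token-based matching for DPI over encrypted traffic.
# Window size, key derivation, and names are illustrative assumptions;
# this is NOT the P2DPI or PrivDPI protocol.

import hmac, hashlib

WINDOW = 8  # token length in bytes (assumption for illustration)

def tokenize(payload: bytes, window: int = WINDOW):
    return {payload[i:i + window] for i in range(len(payload) - window + 1)}

def tag(token: bytes, key: bytes) -> bytes:
    return hmac.new(key, token, hashlib.sha256).digest()

# Rule side: pre-compute tags for rule keywords under a per-session key.
session_key = b"per-session-key-derived-during-setup"
rules = [b"evil.exe", b"DROP TABLE"]
rule_tags = {tag(t, session_key) for r in rules for t in tokenize(r)}

# Sender side: attach tags for the payload's tokens alongside the
# (separately encrypted) traffic.
payload = b"GET /downloads/evil.exe HTTP/1.1"
payload_tags = {tag(t, session_key) for t in tokenize(payload)}

# Middlebox: flag the connection on any tag match, without ever
# seeing the plaintext payload itself.
print("suspicious" if payload_tags & rule_tags else "clean")
```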
In this paper, we analyse the sustainability of social networks using STrust, our social trust model. The novelty of the model is that it introduces the concept of engagement trust and combines it with popularity trust to derive the social trust of the community as a whole as well as of individual members of the community. This enables a recommender system to use these different types of trust to recommend different things to the community and to identify (and recommend) different roles. For example, it recommends mentors using the engagement trust and leaders using the popularity trust. We then show the utility of the model by analysing data from two types of social networks. We also study the sustainability of a community through our social trust model. We observe that a 5% drop in highly trusted members causes more than a 50% drop in social capital, which, in turn, raises the question of the community's sustainability. We report our analysis and its results.
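As a hedged sketch of the idea of deriving a social trust score from engagement and popularity, the snippet below combines the two with a simple weighted sum. The underlying counts, the weight `w`, and the weighted-sum combination are illustrative assumptions, not the STrust model's exact formulation.

```python
# Illustrative combination of engagement and popularity trust into a
# single social trust score. Formulas and inputs are assumptions.

def popularity_trust(positive_feedback: int, total_feedback: int) -> float:
    """How much the community values a member's contributions."""
    return positive_feedback / total_feedback if total_feedback else 0.0

def engagement_trust(active_days: int, observed_days: int) -> float:
    """How consistently a member participates in the community."""
    return active_days / observed_days if observed_days else 0.0

def social_trust(pop: float, eng: float, w: float = 0.5) -> float:
    """Weighted combination; w balances popularity against engagement."""
    return w * pop + (1.0 - w) * eng

# Example: a highly engaged but only moderately popular member might be
# recommended as a mentor rather than a leader.
pop = popularity_trust(positive_feedback=30, total_feedback=50)
eng = engagement_trust(active_days=27, observed_days=30)
print(round(social_trust(pop, eng), 3))
```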
Security Operations Centres (SOCs) are specialised facilities where security analysts leverage advanced technologies to monitor, detect, and respond to cyber incidents. However, the increasing volume of security incidents has overwhelmed security analysts, leading to alert fatigue. Effective alert prioritisation (AP) becomes crucial to address this problem through the use of proper criteria and methods. Human-AI teaming (HAT) has the potential to significantly enhance AP by combining the complementary strengths of humans and AI. AI excels in processing large volumes of alert data, identifying anomalies, uncovering hidden patterns, and prioritising alerts at scale, all at machine speed. Human analysts can leverage their expertise to investigate prioritised alerts, re-prioritise them based on additional context, and provide valuable feedback to the AI system, reducing false positives and ensuring critical alerts are prioritised. This work provides a comprehensive review of the criteria and methods for AP in SOCs. We analyse the advantages and disadvantages of the different categories of AP criteria and methods based on HAT, specifically considering automation, augmentation, and collaboration. We also identify several areas for future research. We anticipate that our findings will contribute to the advancement of AP techniques, fostering more effective security incident response in SOCs.