A significant open issue in cloud computing is the real performance of the infrastructure. Few, if any, cloud providers or technologies offer quantitative performance guarantees. Regardless of the potential advantages of the cloud in comparison to enterprise-deployed applications, cloud infrastructures may ultimately fail if deployed applications cannot predictably meet behavioral requirements. In this paper, we present the results of comprehensive performance experiments we conducted on Windows Azure from October 2009 to February 2010. In general, we observed good performance from the Windows Azure mechanisms, although the 10-minute average VM startup time must be accounted for in application design. We also present performance and reliability observations and analysis from our deployment of a large-scale scientific application hosted on Azure, called MODISAzure, which show unusual and sporadic VM execution slowdowns of over 4× in some cases, affecting up to 16% of task executions at times. In addition to a detailed performance evaluation of Windows Azure, we provide recommendations for potential users of Windows Azure based on these early observations. Although the discussion and analysis are tailored to scientific applications, the results are broadly applicable to the range of existing and future applications running in Windows Azure.
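The design implication of the reported startup latency is that applications should treat instance readiness as an asynchronous event with a budget, not an instantaneous operation. Below is a minimal sketch of such a readiness gate; the check_ready() probe is a hypothetical placeholder for whatever readiness signal the platform exposes, and the timeout and poll interval are assumed values, not recommendations from the paper.

```python
import time

STARTUP_TIMEOUT_S = 15 * 60   # budget above the ~10 min average startup reported above
POLL_INTERVAL_S = 30

def wait_for_instances(check_ready, n_required, timeout_s=STARTUP_TIMEOUT_S):
    """Block until n_required instances report ready, or raise on timeout.

    check_ready() is a caller-supplied probe (hypothetical) returning the
    number of instances currently able to accept work.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        ready = check_ready()
        if ready >= n_required:
            return ready
        time.sleep(POLL_INTERVAL_S)
    raise TimeoutError(f"only {check_ready()}/{n_required} instances ready "
                       f"after {timeout_s}s")
```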
This paper addresses the scheduling problem that popular data-parallel programming systems such as DryadLINQ and MapReduce face today. Designing a cluster system for a multi-user environment is challenging because cluster schedulers must satisfy multiple, possibly conflicting, enterprise goals and policies. Particularly for these new types of data-intensive applications, it remains a challenge to simultaneously achieve both high throughput and predictable end-to-end performance for jobs (e.g., predictable start/end times). The conventional approach to scheduling such jobs is to determine the best mapping between tasks and nodes before the job executes; the scheduling system then ceases to be involved once the job starts executing. Instead, as described in this paper, we define a reactive containment and control mechanism for scheduling and executing distributed tasks: we schedule the jobs and then continually monitor and adjust resources as each job executes. More specifically, a DryadLINQ task in our system is contained in a virtual machine, and distributed controllers regulate the progress of the task at runtime. Using online, feedback-controlled VM CPU scheduling, our system gives a job the ability to speed up or slow down the progress of concurrent sub-tasks so that the job can make predictable progress while sharing system resources with other jobs. This new capability allows an enterprise to enforce flexible scheduling policies such as fair share and/or job prioritization. Our evaluation results using five well-known DryadLINQ applications show that the implemented distributed controllers achieve high throughput as well as predictable end-to-end performance.
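The feedback loop at the core of this approach can be illustrated with a simple proportional controller: measure a sub-task's progress rate, compare it with the target, and nudge the VM's CPU cap accordingly. The sketch below is illustrative only; the gain kp, the cap bounds, and the percentage-based cap are assumptions, not details taken from the paper.

```python
def cpu_cap_controller(target_rate, kp=0.5, cap_min=5.0, cap_max=100.0):
    """Proportional controller that adjusts a VM's CPU cap (in percent)
    so that measured task progress tracks target_rate (fraction of the
    task's work completed per second). All constants are assumed values."""
    cap = cap_max
    def update(measured_rate):
        nonlocal cap
        error = target_rate - measured_rate          # positive when the task lags
        cap = min(cap_max, max(cap_min, cap + kp * 100.0 * error))
        return cap                                   # new cap to apply to the VM
    return update

# Example: target 1% of the task per second. A lagging task would push the
# cap up (already at the maximum here); a task running ahead gets eased down.
ctrl = cpu_cap_controller(target_rate=0.01)
print(ctrl(measured_rate=0.005))   # 100.0 (clamped at cap_max)
print(ctrl(measured_rate=0.02))    # 99.5  (running ahead, cap reduced)
```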
It is natural to believe that many of the traditional issues of scale have been eliminated, or at least greatly reduced, by cloud computing. That is, if one can create a seemingly well-functioning cloud application that operates correctly on small or moderate-sized problems, then the very nature of cloud programming abstractions means that the same application will run as well on potentially significantly larger problems. In this paper, we present our experiences taking MODISAzure, our satellite data processing system built on the Windows Azure cloud computing platform, from the proof-of-concept stage to the point of being able to run on significantly larger problem sizes (e.g., from national-scale to global-scale data sizes). To our knowledge, this is the longest-running eScience application on the nascent Windows Azure platform. We found that while many infrastructure-level issues were thankfully masked from us by the cloud infrastructure, it was valuable to design additional redundancy and fault-tolerance capabilities, such as transparent idempotent task retry and logging, to support debugging of user code encountering unanticipated data issues. Further, we found that using a commercial cloud means anticipating inconsistent performance and black-box behavior of virtualized compute instances, as well as leveraging changing platform capabilities over time. We believe that the experiences presented in this paper can help future eScience cloud application developers on Windows Azure and other commercial cloud platforms.
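Transparent idempotent retry of the kind described here can be sketched as a wrapper that re-runs a failed task a bounded number of times while logging every failure for later diagnosis of bad input data. This is a generic sketch under the assumption that tasks are safe to re-execute; it is not the paper's actual implementation.

```python
import logging
import time

log = logging.getLogger("task-retry")

def run_idempotent(task, task_id, max_attempts=3, backoff_s=5.0):
    """Run a task that is assumed idempotent, retrying on failure.

    Every failure is logged with a stack trace so that user-code errors
    triggered by unanticipated data issues can be debugged afterwards.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            log.exception("task %s failed on attempt %d/%d",
                          task_id, attempt, max_attempts)
            if attempt == max_attempts:
                raise                        # surface the error after the last retry
            time.sleep(backoff_s * attempt)  # simple linear backoff (assumed policy)
```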
Approximate processing is an approach to real-time AI problem solving in domains in which compromise is possible between the resources required to generate a solution and the quality of that solution. It is a satisficing approach in which the goal is to produce acceptable solutions with the resources available; prior work has described how to integrate approximate processing with the blackboard architecture [17]. However, in order to solve real-time problems with hard deadlines using a blackboard system, we need to have: (1) a predictable blackboard execution loop, (2) a representation of the set of current and future tasks and their estimated durations, and (3) a model of how to modify those tasks when their deadlines are projected to be missed, and of how the modifications will affect the task durations and results. This paper describes four components for achieving these goals in an approximate processing blackboard system: a parameterized low-level control loop allows predictable knowledge source execution; multiple execution channels allow dynamic control over the computation involved in each task; a meta-controller provides a representation of the set of current and future tasks and their estimated durations and results; and a real-time blackboard scheduler monitors and modifies tasks during execution so that deadlines are met. An example illustrates how these components work together to construct a satisficing solution to a time-constrained problem in the Distributed Vehicle Monitoring Testbed (DVMT). A brief sketch of the system's implementation is also given.
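The interplay between execution channels and the deadline-monitoring scheduler can be made concrete: when a task's remaining time shrinks, the scheduler switches it to a cheaper, more approximate channel. The sketch below assumes each channel carries an estimated duration and a relative solution quality; the data layout and selection rule are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    est_duration: float  # estimated execution time, seconds
    quality: float       # relative solution quality in [0, 1]

def choose_channel(channels, time_remaining):
    """Pick the highest-quality execution channel whose estimated duration
    still fits before the deadline; if none fits, fall back to the fastest
    (most approximate) channel as the least-bad satisficing option."""
    feasible = [c for c in channels if c.est_duration <= time_remaining]
    if feasible:
        return max(feasible, key=lambda c: c.quality)
    return min(channels, key=lambda c: c.est_duration)

# Example: with 5 s left, the full-precision channel no longer fits,
# so the scheduler downgrades the task to the coarse channel.
channels = [Channel("full", 12.0, 1.0), Channel("coarse", 4.0, 0.6)]
print(choose_channel(channels, 5.0).name)   # "coarse"
```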
In grid collaborations, scientists use middleware to execute computational experiments, visualize results, and securely share data on resources ranging from desktop machines to supercomputers. While there has been significant effort on authentication and authorization for these distributed infrastructures, it is still difficult to determine, post facto, exactly what information might have been accessed, what operations might have occurred, and for what reasons. To address this problem, we have designed and implemented a secure logging infrastructure for grid data access. We uniquely leverage and extend XACML with new capabilities so that data owners can specify logging policies, and these policies can be used to engage logging mechanisms that record events of interest to the data owners. A case study based on GridFTP.NET is presented and analyzed, utilizing both local storage of log records and remote storage via SAWS, an independently developed secure audit Web service. We show that, with relatively little performance overhead, data owners gain new flexibility for determining, post facto, the conditions under which their grid data was accessed.
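The policy-to-logging linkage can be illustrated independently of XACML syntax: a data owner declares which resources and actions must be logged, and the enforcement point consults those declarations on every access. The types and matching rule below are simplified stand-ins, not the paper's XACML extension.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogPolicy:
    owner: str
    resource_prefix: str   # log accesses to resources under this path
    actions: frozenset     # actions of interest, e.g. {"read", "write"}

def policies_requiring_log(policies, action, resource):
    """Return the data-owner policies obligating a log record for this
    access. The access-control decision itself is assumed to be made
    separately; this only decides whether an audit event is emitted."""
    return [p for p in policies
            if resource.startswith(p.resource_prefix) and action in p.actions]

# Example: the owner of /projects/climate/ asks that all reads be logged.
policies = [LogPolicy("alice", "/projects/climate/", frozenset({"read"}))]
hits = policies_requiring_log(policies, "read", "/projects/climate/run42.nc")
print([p.owner for p in hits])   # ["alice"]
```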