Data replication technologies in distributed storage systems introduce the problem of data consistency. For high performance, data replication systems often settle for weak consistency models, such as Pipelined-RAM consistency. To determine whether a data replication system provides Pipelined-RAM consistency, we study the problem of verifying Pipelined-RAM consistency over read/write traces (VPC, for short). Four variants of VPC (labeled VPC-SU, VPC-MU, VPC-SD, and VPC-MD) are identified according to whether there are Multiple shared variables (or one Single variable) and whether write operations can assign Duplicate values (or only Unique values) to each shared variable. We prove that VPC-SD is $\sf{NP}$-complete (and so is VPC-MD) by reducing the strongly $\sf{NP}$-complete problem 3-Partition to it. For VPC-MU, we present the Read-Centric algorithm with time complexity $O(n^4)$, where $n$ is the number of operations. The algorithm constructs an operation graph by iteratively applying a rule which guarantees that no overwritten value can be read later. It processes the read operations incrementally, one by one, and exploits the total order between the dictating writes on the same variable to avoid redundant applications of the rule. Experiments demonstrate its practical efficiency and scalability.
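The operation-graph idea can be illustrated with a hedged sketch (this is a simplified illustration, not the paper's full Read-Centric algorithm): add program-order and read-from edges, apply the rule that a read must precede any write that overwrites its dictating write, and report a Pipelined-RAM violation if the graph acquires a cycle. The trace below is a hypothetical VPC-MU instance with unique values per variable.

```python
# Simplified sketch of cycle-based Pipelined-RAM violation detection
# (illustrative only; not the paper's O(n^4) Read-Centric algorithm).
from collections import defaultdict

# Operations: (proc, kind, var, value); list indices serve as operation ids.
# Values written to each variable are unique, as in the VPC-MU variant.
trace = [
    ("p1", "w", "x", 1),   # op 0
    ("p1", "w", "x", 2),   # op 1 overwrites op 0
    ("p2", "r", "x", 2),   # op 2 reads from op 1
    ("p2", "r", "x", 1),   # op 3 reads the overwritten value afterwards
]

edges = defaultdict(set)
by_proc = defaultdict(list)
for i, (proc, _, _, _) in enumerate(trace):
    by_proc[proc].append(i)
for ops in by_proc.values():            # program-order edges
    for a, b in zip(ops, ops[1:]):
        edges[a].add(b)

# With unique values, each read has exactly one dictating write.
writer = {(v, val): i for i, (_, k, v, val) in enumerate(trace) if k == "w"}
for i, (_, k, v, val) in enumerate(trace):   # read-from edges
    if k == "r":
        edges[writer[(v, val)]].add(i)

# Rule: if w1 and w2 are successive writes on the same variable by one
# process, every read dictated by w1 must precede w2.
for ops in by_proc.values():
    per_var = defaultdict(list)
    for o in ops:
        if trace[o][1] == "w":
            per_var[trace[o][2]].append(o)
    for w_list in per_var.values():
        for w1, w2 in zip(w_list, w_list[1:]):
            for i, (_, k, v, val) in enumerate(trace):
                if k == "r" and writer[(v, val)] == w1:
                    edges[i].add(w2)

def has_cycle():
    state = {}  # node -> 1 (on DFS stack) or 2 (finished)
    def dfs(u):
        state[u] = 1
        for v in edges[u]:
            if state.get(v) == 1 or (v not in state and dfs(v)):
                return True
        state[u] = 2
        return False
    return any(n not in state and dfs(n) for n in range(len(trace)))

# p2 observes p1's writes out of program order, so the rule edge
# (op 3 -> op 1) closes a cycle with the read-from and program-order edges.
assert has_cycle()
```

Here process p2 reads 2 and then the overwritten value 1, contradicting p1's program order, and the sketch detects this as a cycle.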
The replicated list object is frequently used to model the core functionality of replicated collaborative text editing systems. Since 1989, the convergence property has been a common specification of a replicated list object. Recently, Attiya et al. proposed the strong/weak list specification and conjectured that the well-known Jupiter protocol satisfies the weak list specification. The major obstacle to proving this conjecture is the mismatch between the global property on all replica states prescribed by the specification and the local view each replica maintains in Jupiter using data structures such as a 1D buffer or a 2D state space. To address this issue, we propose CJupiter (Compact Jupiter), based on a novel data structure called the n-ary ordered state space, for a replicated client/server system with n clients. At a high level, CJupiter maintains only a single n-ary ordered state space which encompasses exactly all states of each replica. We prove that CJupiter and Jupiter are equivalent and that CJupiter satisfies the weak list specification, thereby settling the conjecture above.
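Jupiter-style protocols are built on operational transformation. The following hedged sketch (not Jupiter or CJupiter itself, whose state-space machinery is far richer) shows only the insert-insert transformation case on a replicated list, illustrating how two replicas converge after exchanging concurrent operations:

```python
# Minimal insert-insert operational transformation on a text buffer
# (illustrative sketch; real list protocols also handle deletes,
# multi-client histories, and tie-breaking).

def transform_ins(op, other):
    """Shift op's position if the concurrent insert `other` lands at or
    before it. A real protocol breaks equal-position ties with replica
    ids; this example keeps the positions distinct to stay simple."""
    pos, ch = op
    opos, _ = other
    return (pos + 1, ch) if opos <= pos else op

def apply_ins(text, op):
    pos, ch = op
    return text[:pos] + ch + text[pos:]

doc = "abc"
a = (1, "X")   # replica A inserts "X" at index 1
b = (2, "Y")   # replica B inserts "Y" at index 2 (concurrently)

# Each replica applies its own operation first, then the remote
# operation transformed against the one it already applied.
state_a = apply_ins(apply_ins(doc, a), transform_ins(b, a))
state_b = apply_ins(apply_ins(doc, b), transform_ins(a, b))
assert state_a == state_b == "aXbYc"   # both replicas converge
```

The transformation compensates for the index shift caused by the other replica's insert, which is exactly the kind of per-replica local reasoning that the n-ary ordered state space organizes globally.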
Internet-scale distributed systems often replicate data at multiple geographic locations to provide low latency and high availability despite node and network failures. According to the CAP theorem, low latency and high availability can only be achieved at the cost of accepting weak consistency. The conflict-free replicated data type (CRDT) is a framework that provides a principled approach to maintaining eventual consistency among data replicas. CRDTs have been notoriously difficult to design and implement correctly; subtle deep bugs lie in the complex and tedious handling of all possible cases of conflicting data updates. We argue that CRDT designs should be formally specified and model checked to uncover deep bugs that are beyond human reasoning, and that implementations further need to be systematically tested. On the one hand, the testing needs to inherit the exhaustive nature of model checking to ensure coverage; on the other hand, the testing is expected to find coding errors that cannot be detected by design-level verification. To meet these challenges, we propose the model-checking-driven explorative testing (MET) framework. At the design level, MET uses TLA+ to specify and model check CRDT designs. At the implementation level, MET conducts model-checking-driven explorative testing, in the sense that test cases are automatically generated from the model-checking traces. The system execution is controlled to proceed deterministically, following the model-checking trace, and the explorative testing systematically controls and permutes all nondeterministic choices of message reorderings. We apply MET in our practical development of CRDTs and find bugs in both designs and implementations. For bugs that could also be found by traditional testing techniques, MET greatly reduces the cost of fixing them; moreover, MET can find subtle deep bugs that cannot be found by existing techniques, at a reasonable cost.
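The core of the explorative testing step can be conveyed with a hedged sketch (this is not MET's actual harness, which replays TLA+ model-checking traces against real implementations): enumerate every delivery order of the pending messages and check that the replicas converge under each one. The replicated object here is a toy add-only set whose merge is set union.

```python
# Illustrative exploration of all message reorderings for a toy
# replicated add-only set (merge = set union); assumed names, not MET's API.
from itertools import permutations

def run_schedule(order):
    # Two replicas each locally add one element, then exchange their
    # full states; `order` fixes the nondeterministic delivery order.
    states = {"r0": {"a"}, "r1": {"b"}}
    msgs = {0: ("r1", {"a"}), 1: ("r0", {"b"})}   # (destination, payload)
    for m in order:
        dst, payload = msgs[m]
        states[dst] |= payload   # merge incoming state by set union
    return states

for order in permutations([0, 1]):    # systematically permute reorderings
    s = run_schedule(order)
    assert s["r0"] == s["r1"] == {"a", "b"}   # converged under every order
```

A bug that only surfaces under one particular interleaving cannot hide from such an exhaustive enumeration, which is how the testing inherits the coverage of model checking.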
Based on our practical use of MET, we discuss how MET provides us with sufficient confidence in the correctness of our CRDT designs and implementations. The conflict-free replicated data type (CRDT) is a framework that provides a principled approach to maintaining eventual consistency among data replicas in distributed systems. CRDTs have been notoriously difficult to design and implement correctly. We propose the model-checking-driven explorative testing (MET) framework to address this problem and apply it in our practical development of CRDTs. MET successfully finds subtle deep bugs and provides us with sufficient confidence in the correctness of our CRDT designs and implementations.
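As a concrete reference point for the kind of object under test, here is a textbook CRDT (a state-based grow-only counter, or G-Counter; not one of the paper's own data types): its merge is a pointwise maximum, which is commutative, associative, and idempotent, so replicas converge regardless of message duplication or reordering. These are exactly the properties one would specify in TLA+ and then test against an implementation.

```python
# A state-based grow-only counter (G-Counter), the textbook CRDT.
# Each replica owns one slot of the vector; merge takes pointwise max.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.rid = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.rid] += 1   # only the owner bumps its slot

    def value(self):
        return sum(self.counts)      # global count = sum over all slots

    def merge(self, other):
        # Pointwise max is safe under duplicated or reordered deliveries.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

r0, r1 = GCounter(0, 2), GCounter(1, 2)
r0.increment(); r0.increment()
r1.increment()
# Anti-entropy exchanges in either direction yield the same converged value.
r0.merge(r1); r1.merge(r0)
assert r0.value() == r1.value() == 3
```

Even for a CRDT this small, the correctness argument rests on algebraic properties of `merge`; for richer data types such as replicated lists, checking those properties by hand quickly becomes infeasible, which motivates mechanized specification and testing.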
Web service composition combines a number of existing web services to create a value-added one. WS-BPEL is a promising language that describes web service composition in the form of business processes. However, WS-BPEL is an XML-based language and may suffer from ambiguities or erroneous properties. The analysis and verification of business processes specified in WS-BPEL by formal methods has lately been a hot topic in the research community. In this paper, we propose a method to model and analyze WS-BPEL business processes based on ServiceNet, a special class of Petri nets. Unlike most existing work, which analyzes only the control-flow properties of WS-BPEL business processes and neglects data-flow properties, our method models and analyzes both control properties and data properties. We present transformation rules from WS-BPEL business processes into ServiceNet and enrich the reduction rules of ServiceNet by applying them in practical projects. The throughness of a business process can then be verified by reducing its ServiceNet representation using these reduction rules. Moreover, we define some data-aspect properties of WS-BPEL business processes and give the corresponding checking algorithms.
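The control-flow side of such an analysis rests on the basic Petri-net firing rule, sketched below under simplifying assumptions (ServiceNet itself is richer, and the net here is a hypothetical two-step process, not a real WS-BPEL mapping): a transition is enabled when every input place holds a token, and firing consumes those tokens and produces tokens on the output places.

```python
# Basic Petri-net firing rule on a toy net modeling a two-activity
# sequence: start -> receive -> reply -> end (illustrative sketch only).

transitions = {
    "receive": ({"p_start"}, {"p_mid"}),   # (input places, output places)
    "reply":   ({"p_mid"},   {"p_end"}),
}

def enabled(marking, t):
    inputs, _ = transitions[t]
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, t):
    # Consume one token per input place, produce one per output place.
    inputs, outputs = transitions[t]
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

m = {"p_start": 1}
assert enabled(m, "receive") and not enabled(m, "reply")
m = fire(m, "receive")
m = fire(m, "reply")
assert m.get("p_end") == 1   # the process can run to completion
```

Reduction-based verification repeatedly collapses such sequential (and parallel or choice) fragments into single transitions; if the whole net reduces to a trivial one, the property of interest holds without exploring the full state space.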
Cloud computing and SaaS (Software-as-a-Service) have received significant attention recently. Testing SaaS applications is important because many mission-critical applications will be deployed on the cloud. However, to the best of our knowledge, no testing framework designed specifically for SaaS applications has been developed, and the issue of testing the scalability of SaaS applications remains untouched. This paper discusses the unique features of and challenges in testing SaaS applications, and proposes scalability metrics that can be used to test the scalability of SaaS applications.