The Management of Replicated Data

1999 
Fourteen years have passed since Gifford's seminal paper on weighted voting [7]. These years have seen the development of numerous protocols for managing replicated data and a handful of experimental systems implementing replicated files. The time has now come to attempt an inventory of the problems for which we have found solutions and of the issues that remain open. One way to structure this inventory is to organize it around general observations reflecting points of agreement and disagreement within the replicated data community. Our frame of reference is simple: we consider systems maintaining multiple copies, or replicas, of the same data at distinct nodes of a computer network. We define the availability of replicated data for a given operation as the probability that the operation can be successfully carried out at some node of the network. We focus on the problem of protecting the users of the replicated data from the inconsistencies that may result from node failures and network partitions. This is normally done through either a group communication mechanism [3, 4, 22] or a replication control protocol. Group communication mechanisms address the reliable delivery of messages to the replicas, while replication control protocols mediate all accesses to the replicated data and are therefore the more general of the two. An ideal replication control protocol should guarantee the consistency of the replicated data in the presence of any arbitrary combination of non-Byzantine failures while providing the highest possible data availability and incurring the lowest possible overhead.
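The two notions the abstract leans on, Gifford-style weighted voting and availability as a probability, can be sketched concretely. The following is a minimal illustration, not an implementation from the paper: it checks the standard quorum-intersection conditions (a read quorum must intersect every write quorum, and any two write quorums must intersect) and computes the availability of an operation under the common assumption that replicas fail independently. All function and variable names are illustrative.

```python
from itertools import combinations

def quorums_are_safe(votes, r, w):
    """Gifford-style safety check for weighted voting [7].

    votes: list of vote counts, one per replica.
    r, w:  read and write quorum sizes (in votes).
    Consistency requires r + w > total votes (every read sees the
    latest write) and 2 * w > total votes (writes are serialized).
    """
    total = sum(votes)
    return r + w > total and 2 * w > total

def availability(votes, quorum, p):
    """Probability that an operation needing `quorum` votes succeeds,
    assuming each replica is independently up with probability p.

    Enumerates every subset of live replicas and sums the probability
    of those subsets that together hold enough votes.
    """
    n = len(votes)
    avail = 0.0
    for k in range(n + 1):
        for live in combinations(range(n), k):
            if sum(votes[i] for i in live) >= quorum:
                avail += p ** k * (1 - p) ** (n - k)
    return avail
```

For example, three replicas with one vote each and majority quorums (r = w = 2) satisfy both intersection conditions, and with per-replica availability 0.9 the operation availability is 3(0.9)²(0.1) + (0.9)³ = 0.972, higher than a single 0.9-available copy for reads requiring only a majority.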