Extreme distributed systems: from large scale to complexity

2012 
Modern distributed systems can easily consist of hundreds of thousands of computers, ranging from high-end powerful machines to low-end resource-constrained wireless devices. We label them as “extreme distributed systems,” as they push scalability and complexity well beyond traditional scenarios. The extreme nature of these systems now requires that we reconsider our methods and techniques for their development, and, indeed, we are already witnessing a shift in thinking. For example, Barroso and Hölzle [1] have made a case for a holistic design of a data center, which they essentially see as a single computer system. In their approach, the process of designing a data center closely resembles the way we have been designing processors: we need to take into account compute elements, data and control paths, storage, power sources, heating issues, and so on. As another example, groups from Lancaster University and INRIA/IRISA in Rennes are working on the integration of component-based software development with gossip-based protocols to combine structural and emergent approaches toward large-scale distributed system development [2]. It seems inevitable that we will concentrate more on fully decentralized solutions, as witnessed by, for example, peer-to-peer systems. Decentralized organizations often combine local decision-making with dissemination of information in order to improve the decision-making process, exemplified by many epidemic-based and other bio-inspired approaches. In this light, designing a distributed system involves much more than ensuring that its constituents are properly placed, organized, and connected: the design of a distributed system is becoming fully integrated with
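
To make the combination of local decision-making and epidemic-style dissemination concrete, the following is a minimal sketch (not taken from the cited work) of push-pull averaging gossip. The node count, round count, and all names are illustrative assumptions, intended only to show the pattern: each node repeatedly exchanges state with a random peer and makes a purely local update, yet all nodes converge on a global quantity without any central coordinator.

```python
import random

NUM_NODES = 1000   # illustrative system size (assumption, not from the source)
NUM_ROUNDS = 30    # number of gossip rounds to simulate

def gossip_average(values, rounds=NUM_ROUNDS, seed=42):
    """Push-pull averaging gossip: in each round every node contacts one
    uniformly random peer and both replace their estimates with the pair's
    mean, so all estimates converge toward the global average without any
    central coordinator."""
    rng = random.Random(seed)
    estimates = list(values)
    n = len(estimates)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)                  # random peer (self-contact is harmless)
            avg = (estimates[i] + estimates[j]) / 2.0
            estimates[i] = estimates[j] = avg     # local decision at both endpoints
    return estimates

if __name__ == "__main__":
    random.seed(7)
    # Each node starts with a local observation; gossip drives every
    # estimate toward the true global mean.
    local_values = [random.uniform(0.0, 100.0) for _ in range(NUM_NODES)]
    true_mean = sum(local_values) / NUM_NODES
    result = gossip_average(local_values)
    spread = max(result) - min(result)
    print(f"global mean: {true_mean:.3f}  spread after gossip: {spread:.2e}")
```

Averaging is only one instance of this scheme; the same local-exchange pattern underlies epidemic data dissemination and other decentralized, bio-inspired mechanisms of the kind the abstract refers to.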