Availability models for hyper-converged cloud computing infrastructures

2018 
The software-defined data center (SDDC) concept replaces the older, scattered, silo-based architecture. SDDC adoption reduces staff-management expenses and improves the infrastructure, which becomes easier to maintain, mainly because all computational resources can be virtualized. In addition, component standardization has decreased the physical space required to store, provision, and maintain these environments. But a problem remains: how can system and service provisioning be kept working under a stringent Service Level Agreement (SLA)? Highly Available (HA) environments have an annual downtime of fewer than five minutes; this is about as close as one can get to 100% availability, and usually many layers of redundancy are required to achieve it. This paper evaluates hyper-convergence, one way to reach HA in cloud computing environments with fewer components than typical architectures. Hyper-convergence on the OpenStack cloud computing platform uses a distributed storage mechanism to reduce expenses with storage devices. To evaluate the proposed environments, behavioral models were created; these models show how advantageous distributed storage systems are over typical ones. The availability results show that the availability of a fully triple-redundant environment, even with 33% more components, is very close to that obtained for a hyper-convergent environment.
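The "fewer than five minutes per year" HA target corresponds to roughly 99.999% ("five nines") availability, and redundancy raises availability multiplicatively. A minimal sketch of this standard availability arithmetic (illustrative only; these helper functions are not the paper's behavioral models):

```python
# Illustrative availability math: steady-state availability from
# MTTF/MTTR, and the effect of parallel redundancy on annual downtime.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single component."""
    return mttf_hours / (mttf_hours + mttr_hours)

def parallel(avail: float, n: int) -> float:
    """Availability of n identical components in parallel
    (the system is up while at least one component is up)."""
    return 1.0 - (1.0 - avail) ** n

def annual_downtime_minutes(avail: float) -> float:
    """Expected downtime per year, in minutes."""
    return (1.0 - avail) * MINUTES_PER_YEAR

# A single node at 99.9% availability is down ~525.6 minutes/year,
# far outside the HA target; "five nines" allows only ~5.3 minutes,
# and triple redundancy of 99.9% nodes yields ~nine nines.
print(annual_downtime_minutes(0.999))          # ~525.6
print(annual_downtime_minutes(0.99999))        # ~5.256
print(annual_downtime_minutes(parallel(0.999, 3)))  # ~0.0005
```

This is why the comparison in the paper is interesting: the triple-redundant design buys availability by multiplying components, while hyper-convergence aims for comparable figures with a smaller component count.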