A Two-Sided Matching Model for Data Stream Processing in the Cloud – Fog Continuum
2021
Latency-sensitive and bandwidth-intensive stream processing applications are dominant traffic generators over the Internet. A stream consists of a continuous sequence of data elements that require processing in near real-time. To improve communication latency and reduce network congestion, Fog computing complements Cloud services by moving computation towards the edge of the network. Unfortunately, the heterogeneity of the new Cloud – Fog continuum raises important challenges related to deploying and executing data stream applications. In this work, we explore a two-sided stable matching model called Cloud – Fog to data stream application matching (CODA) for deploying a distributed application represented as a workflow of stream processing microservices on heterogeneous computing continuum resources. In CODA, the application microservices rank the continuum resources based on their microservice stream processing time, while the resources rank the stream processing microservices based on their residual bandwidth. A stable many-to-one matching algorithm assigns microservices to resources based on their mutual preferences, aiming to optimize the complete stream processing time on the application side and the total streaming traffic on the resource side. We evaluate the CODA algorithm using simulated and real-world Cloud – Fog experimental scenarios, achieving 11-45% lower stream processing time and 1.3-20% lower streaming traffic compared to related state-of-the-art approaches.
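To illustrate the many-to-one stable matching mechanism the abstract describes, the sketch below implements a generic deferred-acceptance procedure in which microservices propose to resources in order of preference (e.g., lowest stream processing time first) and capacity-limited resources keep only their most preferred proposers (e.g., by residual bandwidth). This is a minimal, assumed illustration of the underlying matching technique, not the authors' exact CODA algorithm; all names, preference lists, and capacities are hypothetical.

```python
def stable_many_to_one(ms_prefs, res_prefs, capacity):
    """Many-to-one deferred acceptance (microservice-proposing).

    ms_prefs:  {microservice: [resources ordered best-first]}
    res_prefs: {resource: [microservices ordered best-first]}
    capacity:  {resource: max number of hosted microservices}
    Returns a stable assignment {microservice: resource}.
    """
    # Precompute each resource's ranking of microservices for O(1) comparisons.
    rank = {r: {m: i for i, m in enumerate(prefs)} for r, prefs in res_prefs.items()}
    assigned = {r: [] for r in res_prefs}    # tentative matches per resource
    next_choice = {m: 0 for m in ms_prefs}   # next resource each microservice will try
    free = list(ms_prefs)                    # microservices still proposing

    while free:
        m = free.pop()
        if next_choice[m] >= len(ms_prefs[m]):
            continue                         # m exhausted its list: stays unmatched
        r = ms_prefs[m][next_choice[m]]
        next_choice[m] += 1
        assigned[r].append(m)
        if len(assigned[r]) > capacity[r]:
            # Resource over capacity: keep its most preferred microservices,
            # reject the least preferred one, which proposes again later.
            assigned[r].sort(key=lambda x: rank[r][x])
            free.append(assigned[r].pop())

    return {m: r for r, members in assigned.items() for m in members}


# Hypothetical example: three microservices, two resources.
ms_prefs = {"m1": ["r1", "r2"], "m2": ["r1", "r2"], "m3": ["r2", "r1"]}
res_prefs = {"r1": ["m2", "m1", "m3"], "r2": ["m1", "m3", "m2"]}
print(stable_many_to_one(ms_prefs, res_prefs, {"r1": 1, "r2": 2}))
# -> {'m2': 'r1', 'm1': 'r2', 'm3': 'r2'}
```

Under these assumed preferences, the resulting assignment is stable: no microservice and resource would both prefer each other over their current match.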