Resource- and Message Size-Aware Scheduling of Stream Processing at the Edge with application to Realtime Microscopy
Whilst computational resources at the cloud edge can be leveraged to improve latency and reduce the costs of cloud services for a wide variety of mobile, web, and IoT applications, such resources are naturally constrained. For distributed stream processing applications, there are clear advantages to offloading some processing work to the cloud edge. Many state-of-the-art stream processing frameworks, such as Flink and Spark Streaming, are designed to run exclusively in the cloud and are therefore a poor fit for such hybrid edge/cloud deployment settings, not least because their schedulers give limited consideration to the heterogeneous hardware in such deployments. In particular, their schedulers broadly assume a homogeneous network topology (aside from data locality considerations in, e.g., HDFS/Spark). Specialized stream processing frameworks intended for such hybrid deployment scenarios, especially IoT applications, allow developers to manually allocate specific operators in the pipeline to nodes at the cloud edge. In this paper, we investigate the scheduling of stream processing in hybrid cloud/edge deployment settings with sensitivity to CPU costs and message sizes, with the aim of maximizing throughput with respect to limited edge resources. We demonstrate real-time edge processing of a stream of electron microscopy images and, in benchmarks, measure a consistent reduction in end-to-end latency under our approach compared with a resource-agnostic baseline scheduler.
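To make the idea of resource- and message-size-aware placement concrete, the following is a minimal, hypothetical sketch of a greedy heuristic for a linear pipeline: it keeps a prefix of operators at the edge, subject to an edge CPU budget, and cuts the pipeline where emitted messages are smallest so the constrained edge-to-cloud link carries less data. The operator names, cost model, and greedy strategy are illustrative assumptions, not the paper's actual scheduler.

    # Hypothetical CPU- and message-size-aware placement sketch, not the
    # paper's algorithm.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Operator:
        name: str
        cpu_cost: float       # estimated CPU work per message
        out_msg_bytes: float  # estimated size of each emitted message

    def place_pipeline(ops: List[Operator], edge_cpu_budget: float) -> Dict[str, str]:
        """Keep a prefix of the pipeline at the edge, cutting where the emitted
        messages are smallest, subject to the edge CPU budget."""
        best_cut, best_bytes = 0, float("inf")  # default: everything in the cloud
        cpu_used = 0.0
        for i, op in enumerate(ops):
            cpu_used += op.cpu_cost
            if cpu_used > edge_cpu_budget:
                break
            if op.out_msg_bytes < best_bytes:
                best_cut, best_bytes = i + 1, op.out_msg_bytes
        return {op.name: ("edge" if i < best_cut else "cloud")
                for i, op in enumerate(ops)}

    if __name__ == "__main__":
        # Toy microscopy-style pipeline: early operators emit large images,
        # later ones emit small derived results.
        pipeline = [
            Operator("decode_frame", cpu_cost=2.0,  out_msg_bytes=8e6),
            Operator("denoise",      cpu_cost=5.0,  out_msg_bytes=8e6),
            Operator("segment",      cpu_cost=8.0,  out_msg_bytes=2e5),
            Operator("classify",     cpu_cost=12.0, out_msg_bytes=1e3),
        ]
        print(place_pipeline(pipeline, edge_cpu_budget=16.0))
        # -> decode_frame, denoise, segment at the edge; classify in the cloud

In this toy run, the segmentation step is kept at the edge because it reduces each 8 MB image to a ~200 kB mask within the CPU budget, so only small messages cross the edge-to-cloud link.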