CASH: A Credit Aware Scheduling for Public Cloud Platforms
The public cloud offers a myriad of services that allow tenants to process large-scale data in a flexible, easy, and cost-effective manner. Tenants generally use large-scale data processing frameworks such as MapReduce, Tez, and Spark to process their data. Tenants can configure these frameworks to schedule individual tasks themselves, or use a middleware cluster manager such as YARN or Mesos to arbitrate resource scheduling in their public-cloud cluster. Cluster managers need to be cognizant of the workload requirements as well as the state of individual resources, such as CPU and disk, in the cluster. Cloud providers use a token bucket mechanism for individual hardware resources as an indicator of the quality of service that each hardware resource can provide. In this paper, through our changes to YARN, Hadoop, and Tez, we show how middleware cluster managers can be made cognizant of the expected quality of service of individual hardware resources in the cluster. Our optimized cluster manager, with coarse-grained knowledge of task requirements and fine-grained knowledge of the expected quality of service of hardware resources in the cluster, performs near-optimal task placements. Our experiments with these optimizations show that CPU-credit-based instances such as Amazon T3 are a viable, cost-effective option for running big data workloads. We also show that streaming SQL queries on a Hive warehouse can be accelerated by up to 31%, with cost savings of up to 22%.
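To make the credit-aware placement idea concrete, the sketch below shows one way a scheduler could prefer nodes whose token-bucket CPU credit balance can sustain a task at full speed. It is not taken from the paper's actual YARN/Hadoop/Tez changes; the class and method names (NodeCredits, pickNode) and the scoring policy are illustrative assumptions only.

```java
import java.util.Comparator;
import java.util.List;

/**
 * Hypothetical sketch of credit-aware task placement. Each node accrues and
 * spends CPU credits token-bucket style (as burstable instances like T3 do),
 * and the scheduler prefers nodes whose remaining burst capacity covers the
 * task's estimated runtime.
 */
public class CreditAwarePlacement {

    /** Tracks the token-bucket CPU credit state of a single node. */
    static class NodeCredits {
        final String nodeId;
        final double credits;          // current credit balance
        final double earnRatePerSec;   // credits accrued per second
        final double burnRatePerSec;   // credits spent per second while bursting

        NodeCredits(String nodeId, double credits,
                    double earnRatePerSec, double burnRatePerSec) {
            this.nodeId = nodeId;
            this.credits = credits;
            this.earnRatePerSec = earnRatePerSec;
            this.burnRatePerSec = burnRatePerSec;
        }

        /** Seconds of full-speed (burst) execution this node can still sustain. */
        double burstSecondsRemaining() {
            double netBurn = burnRatePerSec - earnRatePerSec;
            return netBurn <= 0 ? Double.POSITIVE_INFINITY : credits / netBurn;
        }
    }

    /**
     * Picks a node for a task with an estimated runtime: prefer nodes that can
     * burst for the whole task, and among those, the one with the most burst
     * time remaining.
     */
    static NodeCredits pickNode(List<NodeCredits> nodes, double taskRuntimeSec) {
        return nodes.stream()
                .max(Comparator
                        // first, can the node burst for the entire task?
                        .comparing((NodeCredits n) ->
                                n.burstSecondsRemaining() >= taskRuntimeSec)
                        // then, pick the one with the most burst time left
                        .thenComparingDouble(NodeCredits::burstSecondsRemaining))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<NodeCredits> nodes = List.of(
                new NodeCredits("node-1", 40, 0.02, 1.0),   // ~41 s of burst left
                new NodeCredits("node-2", 600, 0.02, 1.0),  // ample credits
                new NodeCredits("node-3", 5, 0.02, 1.0));   // nearly exhausted

        NodeCredits chosen = pickNode(nodes, 120);          // 120 s task
        System.out.println("Placing task on " + chosen.nodeId); // node-2
    }
}
```

A real cluster manager would refresh these balances from provider metrics and combine them with the task's coarse-grained resource requests; the sketch only illustrates the placement decision itself.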