10-millisecond Computing

10/05/2016
by Gang Lu, et al.

Although computation on data of unprecedented scale is becoming much more complex, we argue that computers or smart devices should, and will, consistently deliver information and knowledge to human beings within a few tens of milliseconds. We coin a new term, 10-millisecond computing, to call attention to this class of workloads. 10-millisecond computing raises many challenges for both the software and hardware stacks. In this paper, using a typical workload, memcached, on a 40-core server (a mainstream server in the near future), we quantitatively measure the challenges 10-ms computing poses to conventional operating systems. For better communication, we propose a simple metric, the outlier proportion, to measure quality of service: for N completed requests or jobs, if M of their latencies exceed the outlier threshold t, the outlier proportion is M/N. For a 1K-scale system running Linux (version 2.6.32), LXC (version 0.7.5), or XEN (version 4.0.0), we surprisingly find that to reduce the service-level outlier proportion to 10% (i.e., at most one in ten requests suffering latency degradation), the outlier proportion of a single server has to be reduced by 871X, 2372X, and 2372X, respectively. We also discuss possible design spaces for 10-ms computing systems from the perspectives of datacenter architectures, networking, OS and scheduling, and benchmarking.
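To make the metric and the scaling argument concrete, below is a minimal Python sketch. The function names and the independent fan-out model (a request counts as an outlier if any of the servers it touches is slow) are our illustrative assumptions, not the paper's implementation.

# Minimal sketch of the outlier-proportion metric and the per-server
# requirement it implies at scale. Illustrative assumptions, not the
# paper's code.

def outlier_proportion(latencies, threshold):
    """For N completed requests, count the M whose latency exceeds the
    outlier threshold t; the outlier proportion is M / N."""
    n = len(latencies)
    m = sum(1 for lat in latencies if lat > threshold)
    return m / n

def required_single_server_proportion(service_target, fanout):
    """Assume a request touches `fanout` servers independently and is an
    outlier if any one of them is slow, so that
        service_target = 1 - (1 - p_single) ** fanout.
    Solve for the per-server outlier proportion p_single."""
    return 1.0 - (1.0 - service_target) ** (1.0 / fanout)

if __name__ == "__main__":
    # Example: a 10% service-level target on a 1K-scale system.
    p = required_single_server_proportion(service_target=0.10, fanout=1000)
    print(f"per-server outlier proportion must be <= {p:.2e}")

Under this model, a 10% service-level target over 1,000 servers forces the per-server outlier proportion down to roughly 1e-4, which is why the abstract reports that a single server's outlier proportion must shrink by factors in the hundreds to thousands.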
