Cost-Driven Offloading for DNN-based Applications over Cloud, Edge and End Devices

07/31/2019
by Bin Lin, et al.

Deep neural networks (DNNs) have achieved great success in a wide range of applications. However, deploying DNNs entirely in the cloud can incur prohibitive delays when transferring input data from end devices to the cloud. To address this problem, hybrid computing environments consisting of the cloud, edge, and end devices are adopted to offload DNN layers, placing the larger layers (those processing more data) in the cloud and the smaller layers (those processing less data) on edge and end devices. A key issue in such environments is how to minimize the system cost while completing the offloaded layers within their deadline constraints. In this paper, a self-adaptive discrete particle swarm optimization (PSO) algorithm augmented with genetic algorithm (GA) operators is proposed to reduce the system cost incurred by data transmission and layer execution. The approach accounts for the characteristics of DNN partitioning and layer offloading across the cloud, edge, and end devices. The GA mutation and crossover operators are adopted to avert the premature convergence of PSO: by enhancing population diversity, they markedly reduce the system cost. The proposed offloading strategy is compared with benchmark solutions, and the results show that it effectively reduces the cost of offloading DNN-based applications over the cloud, edge, and end devices relative to the benchmarks.
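The general idea of a discrete PSO that uses GA crossover and mutation operators for layer placement can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the cost table, the hypothetical `fitness` function, and all parameters are invented assumptions, and real deadline constraints and transmission costs are omitted.

```python
import random

random.seed(0)

N_LAYERS, N_LOC = 6, 3  # locations: 0 = end device, 1 = edge, 2 = cloud
# Illustrative per-layer costs for each placement (NOT from the paper).
COST = [[random.uniform(1, 10) for _ in range(N_LOC)] for _ in range(N_LAYERS)]

def fitness(assign):
    """Hypothetical total system cost of an offloading assignment (lower is better)."""
    return sum(COST[i][loc] for i, loc in enumerate(assign))

def crossover(a, b):
    """GA one-point crossover combining two placement vectors."""
    p = random.randrange(1, N_LAYERS)
    return a[:p] + b[p:]

def mutate(assign, rate=0.1):
    """GA mutation: randomly re-place a layer, preserving swarm diversity."""
    return [random.randrange(N_LOC) if random.random() < rate else loc
            for loc in assign]

def pso_ga(pop_size=20, iters=50):
    # Each particle is a discrete placement vector: layer index -> location.
    swarm = [[random.randrange(N_LOC) for _ in range(N_LAYERS)]
             for _ in range(pop_size)]
    pbest = list(swarm)                      # personal bests
    gbest = min(swarm, key=fitness)          # global best
    for _ in range(iters):
        for k, particle in enumerate(swarm):
            # Discrete "velocity" update: crossover with personal and global
            # bests, then mutate to avert premature convergence.
            cand = mutate(crossover(crossover(particle, pbest[k]), gbest))
            swarm[k] = cand
            if fitness(cand) < fitness(pbest[k]):
                pbest[k] = cand
        gbest = min(pbest, key=fitness)
    return gbest, fitness(gbest)

best, cost = pso_ga()
```

In this sketch the continuous PSO velocity update is replaced by GA operators, a common way to adapt PSO to discrete assignment problems; the crossover pulls a particle toward its personal and global bests, while mutation injects the diversity the abstract attributes to the GA operators.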
