End-Edge Coordinated Joint Encoding and Neural Enhancement for Low-Light Video Analytics
In this paper, we investigate video analytics in low-light environments and propose an end-edge coordinated system with joint video encoding and neural enhancement. The system adaptively transmits low-light videos from cameras and performs enhancement and inference tasks at the edge. First, our observations show that both encoding and enhancement of low-light videos have a significant impact on inference accuracy, which in turn determines the bandwidth and computation overhead. Second, because cameras have limited built-in computation resources, they only encode frames and transmit them to the edge; the edge then applies neural enhancement to correct the low contrast, detail loss, and color distortion of low-light videos before inference. Finally, an adaptive controller at the edge selects the quantization parameters and the scales of the neural enhancement networks, aiming to improve inference accuracy while meeting latency requirements. Extensive real-world experiments demonstrate that the proposed system achieves a better trade-off between communication and computation resources and improves inference accuracy.
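The controller described above can be sketched as a simple profile-driven search: enumerate candidate (quantization parameter, enhancement scale) pairs, keep those whose predicted end-to-end latency (transmission + enhancement + inference) fits the budget, and pick the one with the highest predicted accuracy. The profile functions and all numeric constants below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of an edge-side adaptive controller: choose the
# (QP, enhancement scale) pair maximizing predicted inference accuracy
# subject to a latency budget. Profiles are made up for illustration;
# a real system would measure them offline per camera and per network.

QPS = [23, 28, 33, 38]        # candidate H.264/H.265-style quantization parameters
SCALES = [0.5, 0.75, 1.0]     # candidate input scales for the enhancement network

def predicted_accuracy(qp: int, scale: float) -> float:
    # Illustrative: accuracy drops with coarser quantization (higher QP)
    # and rises with larger enhancement-network input scale.
    return 0.9 - 0.01 * (qp - 23) + 0.1 * (scale - 1.0)

def predicted_latency(qp: int, scale: float, bandwidth_mbps: float) -> float:
    # Illustrative: bitrate roughly halves for every +6 QP; enhancement
    # cost grows with the square of the input scale; inference is fixed.
    bitrate_mbps = 8.0 * 2 ** (-(qp - 23) / 6)
    transmit = bitrate_mbps / bandwidth_mbps   # seconds per second of video
    enhance = 0.04 * scale ** 2
    infer = 0.02
    return transmit + enhance + infer

def select_config(bandwidth_mbps: float, latency_budget: float):
    """Return (accuracy, qp, scale) of the best feasible knob setting,
    or None if no candidate meets the latency budget."""
    best = None
    for qp in QPS:
        for scale in SCALES:
            if predicted_latency(qp, scale, bandwidth_mbps) <= latency_budget:
                acc = predicted_accuracy(qp, scale)
                if best is None or acc > best[0]:
                    best = (acc, qp, scale)
    return best
```

With a generous budget the search returns the finest quantization at full scale; as the budget tightens, it trades off coarser encoding and smaller enhancement scales, mirroring the communication/computation trade-off the paper targets.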