Beyond Bufferbloat: End-to-End Congestion Control Cannot Avoid Latency Spikes
End-to-end congestion control is the dominant method of congestion control in the Internet, and achieving consistently low queuing latency with end-to-end methods is a very active area of research; even so, it remains an unsolved problem. We therefore ask: what are the fundamental limits of end-to-end congestion control? We find that the unavoidable queuing latency for best-case end-to-end congestion control is on the order of hundreds of milliseconds under conditions that are common in the Internet. Our argument rests on two facts: congestion signals cannot reach the sender faster than the speed of light allows, and the capacity of an end-to-end path in the Internet may change rapidly.
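To see why these two facts interact this way, consider a rough back-of-envelope sketch (not taken from the paper): if the bottleneck capacity suddenly drops, the sender keeps transmitting at its old rate for at least one feedback delay before any congestion signal can arrive, and the resulting backlog must then drain at the new, lower rate. The snippet below computes this under illustrative, assumed numbers; the function name and parameters are hypothetical.

```python
# Back-of-envelope sketch (illustrative assumptions, not measurements from the paper):
# queuing delay caused by a sudden capacity drop when the sender cannot react
# until one feedback delay has passed.

def queue_delay_after_drop(old_rate_bps: float,
                           new_rate_bps: float,
                           feedback_delay_s: float) -> float:
    """Return the queuing delay (seconds) just after the sender finally reacts.

    Simple model: the sender keeps transmitting at old_rate_bps for one
    feedback delay after the bottleneck falls to new_rate_bps, so the queue
    grows by (old - new) * delay bits and then drains at new_rate_bps.
    """
    backlog_bits = (old_rate_bps - new_rate_bps) * feedback_delay_s
    return backlog_bits / new_rate_bps

# Example: a 100 Mbit/s bottleneck drops to 10 Mbit/s on a path with 50 ms of
# feedback latency (itself bounded below by the speed of light).
delay = queue_delay_after_drop(100e6, 10e6, 0.05)
print(f"queuing delay after the drop: {delay * 1e3:.0f} ms")  # ~450 ms
```

Even with this idealized sender, the combination of a modest feedback delay and a large capacity drop yields a queuing latency in the hundreds of milliseconds, consistent with the claim above.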