New primitives for bounded degradation in network service

08/17/2022
by Simon Kassing, et al.

Certain emerging data center workloads can absorb some degradation in network service, not needing fully reliable data transport and/or their fair share of network bandwidth. This opens up opportunities for superior network and infrastructure multiplexing by having this flexible traffic cede capacity under congestion to regular traffic with stricter needs. We posit there is value in network service primitives which permit degradation within certain bounds, such that flexible traffic still receives an acceptable level of service while benefiting from its weaker requirements. We propose two such primitives, namely guaranteed partial delivery and bounded deprioritization. We design a budgeting algorithm to provide guarantees relative to a flow's fair share, which is measured via probing. The need for budgeting and probing limits the algorithm's applicability to large flexible flows. We evaluate our algorithm with large flexible flows and three workloads of regular flows: small flows, large flows, and a distribution of flow sizes. Across the workloads, our algorithm achieves less speed-up of regular flows than fixed prioritization, especially for the small flows workload (1.25x vs. 6.82x in the 99th percentile), while providing better guarantees for the large regular flows workload (with 14.5% beyond their guarantee). However, it provides not much better or even slightly worse guarantees for the other two workloads. The ability to enforce guarantees is influenced by flow fair-share interdependence, measurement inaccuracies, and dependency on convergence. We observe that priority changes to probe or to deprioritize cause queue shifts which deteriorate guarantees and limit the possible speed-up, especially of small flows. We find that mechanisms to both prioritize traffic and track guarantees should be as non-disruptive as possible.
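
To make the bounded-deprioritization idea concrete, below is a minimal sketch, not the paper's implementation, of a per-epoch budgeting loop: a flexible flow is kept at low priority only while its cumulative delivered traffic stays above a guaranteed fraction of its (probed) fair share, and is promoted back otherwise. All names and parameters (FlexibleFlowState, update_priority, guarantee_fraction, the epoch length) are hypothetical assumptions for illustration.

    # Illustrative sketch of a bounded-deprioritization budget, assuming a
    # per-epoch probe of the flow's fair share; not the authors' algorithm.
    from dataclasses import dataclass

    HIGH, LOW = 0, 1  # priority levels


    @dataclass
    class FlexibleFlowState:
        guarantee_fraction: float     # e.g. 0.5 -> must receive >= 50% of fair share
        delivered_bytes: float = 0.0  # cumulative bytes actually delivered
        fair_share_bytes: float = 0.0 # cumulative bytes a fair share would have delivered
        priority: int = LOW


    def update_priority(flow: FlexibleFlowState,
                        probed_fair_share_bps: float,
                        delivered_bps: float,
                        epoch_s: float) -> int:
        """Run once per epoch: accrue the budget and pick the flow's priority.

        The flow stays deprioritized (LOW) while its cumulative delivered bytes
        remain above guarantee_fraction * cumulative fair-share bytes; otherwise
        it is promoted (HIGH) until the guarantee is restored.
        """
        flow.fair_share_bytes += probed_fair_share_bps * epoch_s / 8
        flow.delivered_bytes += delivered_bps * epoch_s / 8

        guaranteed_bytes = flow.guarantee_fraction * flow.fair_share_bytes
        flow.priority = LOW if flow.delivered_bytes >= guaranteed_bytes else HIGH
        return flow.priority


    if __name__ == "__main__":
        flow = FlexibleFlowState(guarantee_fraction=0.5)
        # Simulated epochs: (probed fair share in bps, actually delivered bps)
        for fair, got in [(10e9, 4e9), (10e9, 3e9), (10e9, 2e9), (10e9, 9e9)]:
            prio = update_priority(flow, fair, got, epoch_s=0.01)
            print("priority:", "LOW" if prio == LOW else "HIGH")

Note that in such a scheme every priority change (to probe or to restore the guarantee) perturbs queue occupancy, which is the disruption the abstract identifies as the main limit on both guarantees and speed-up.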
