A caching system with object sharing

05/18/2019
by George Kesidis, et al.

We consider a public content caching system that is shared by a number of proxies. The cache could be located in an edge-cloud datacenter, and the proxies could each serve a large population of mobile end-users. Each proxy operates its own LRU list of a certain capacity within the shared cache. The length of an object appearing in multiple LRU lists is divided among them. We provide a "working set" approximation to quickly estimate cache-hit probabilities under object sharing. We also discuss an approach to sharing cache I/O based on token-bucket mechanisms, as well as why and how a proxy may issue mock requests to exploit the shared cache.
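The sharing rule can be illustrated with a minimal Python sketch. The class below (SharedLRUCache is a hypothetical name; the hit-counting and eviction conventions are assumptions, not the paper's exact model) keeps one LRU list per proxy and charges an object held by k lists only 1/k of its length against each list's capacity, so shared objects consume less of each proxy's budget.

```python
from collections import OrderedDict


class SharedLRUCache:
    """Sketch: per-proxy LRU lists over a shared cache with size sharing.

    An object cached in k proxies' LRU lists is charged size/k against
    each of those lists' capacities (an assumption about the sharing rule).
    """

    def __init__(self, proxy_capacities):
        # proxy_id -> capacity of that proxy's LRU list
        self.capacities = dict(proxy_capacities)
        # proxy_id -> OrderedDict of object_id -> object size (LRU order)
        self.lru = {p: OrderedDict() for p in self.capacities}
        # object_id -> set of proxies whose LRU list currently holds it
        self.holders = {}

    def _charged_size(self, obj, size):
        # an object held by k proxies costs each of them size/k
        k = len(self.holders.get(obj, ()))
        return size / k if k else size

    def _used(self, proxy):
        return sum(self._charged_size(o, s) for o, s in self.lru[proxy].items())

    def request(self, proxy, obj, size):
        """Return True on a hit in this proxy's LRU list, False on a miss."""
        hit = obj in self.lru[proxy]
        if hit:
            self.lru[proxy].move_to_end(obj)      # refresh LRU position
        else:
            self.lru[proxy][obj] = size           # insert at MRU end
            self.holders.setdefault(obj, set()).add(proxy)
            # evict this proxy's least recently used objects until it fits
            while self._used(proxy) > self.capacities[proxy] and self.lru[proxy]:
                victim, _ = self.lru[proxy].popitem(last=False)
                self.holders[victim].discard(proxy)
                if not self.holders[victim]:
                    del self.holders[victim]
        return hit


if __name__ == "__main__":
    cache = SharedLRUCache({"proxyA": 100.0, "proxyB": 100.0})
    cache.request("proxyA", "video1", 80)         # miss: charged 80 to proxyA
    cache.request("proxyB", "video1", 80)         # miss for proxyB; now 40 each
    print(cache.request("proxyA", "video1", 80))  # True: hit in proxyA's list
```

In this sketch a request only counts as a hit for the proxy whose own LRU list holds the object; whether a request served from another proxy's list counts as a hit, and how the freed capacity is reused, are details left to the paper's model.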
