In-memory caching for node image access multitenancy #131882
base: master
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: stlaz. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 15d56c1 to a34c079.
/retitle [WIP] In-memory caching for node image access multitenancy
Force-pushed from a34c079 to 8779a03.
@stlaz: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR adds a write-through caching layer between the kubelet's image pull manager and the on-disk image pull records. On a cache miss, the cache falls back to reading the record from disk.
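As a rough illustration of the approach (a minimal sketch only; the type and method names below are hypothetical and do not reflect the PR's actual API), writes go to disk first and are then mirrored in memory, while reads fall through to the on-disk records on a miss:

```go
package recordcache

import "sync"

// PullRecord stands in for the kubelet's on-disk image pull record;
// the actual type used by the PR differs.
type PullRecord struct {
	Image string
	Data  []byte
}

// diskStore abstracts the existing on-disk record accessor.
type diskStore interface {
	Read(image string) (*PullRecord, error)
	Write(rec *PullRecord) error
}

// writeThroughCache keeps records in memory and falls back to disk on a miss.
type writeThroughCache struct {
	mu      sync.RWMutex
	records map[string]*PullRecord
	disk    diskStore
}

func newWriteThroughCache(disk diskStore) *writeThroughCache {
	return &writeThroughCache{records: map[string]*PullRecord{}, disk: disk}
}

// Write persists the record to disk first and only then updates the in-memory
// copy, so the cache never holds a record that was not durably stored.
func (c *writeThroughCache) Write(rec *PullRecord) error {
	if err := c.disk.Write(rec); err != nil {
		return err
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.records[rec.Image] = rec
	return nil
}

// Read serves from memory when possible and falls back to the on-disk
// records on a cache miss, populating the cache with the result.
func (c *writeThroughCache) Read(image string) (*PullRecord, error) {
	c.mu.RLock()
	rec, ok := c.records[image]
	c.mu.RUnlock()
	if ok {
		return rec, nil
	}
	rec, err := c.disk.Read(image)
	if err != nil {
		return nil, err
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.records[image] = rec
	return rec, nil
}
```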
Which issue(s) this PR fixes:
Related-to: kubernetes/enhancements#2535
Special notes for your reviewer:
NOTE: This PR relies on, and is rebased on top of, the benchmark code from #131864.
The benchmarks below compare in-memory caching (first with all records cached, then with an LRU cache capped at 100 records) against direct access to the on-disk image pull records, with the feature both disabled and enabled.
Raw benchmark results:
directfs_memcache_comparison_28GiB.txt
directfs_memcache_benchmark_28GiB.txt
directfs_memcache_LRU_benchmark_28GiB.txt
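For context, a comparison like this could be driven roughly as follows. This is a hypothetical skeleton only, not the benchmark code from #131864: it reuses the PullRecord and write-through cache types from the sketch above, substitutes an in-memory fakeDiskStore for real files, and the 100-record / 70% hit-rate parameters are purely illustrative.

```go
package recordcache

import (
	"fmt"
	"math/rand"
	"testing"
)

// fakeDiskStore is an in-memory stand-in for the on-disk records, used only
// to keep this skeleton self-contained; the real benchmarks hit actual files.
type fakeDiskStore struct {
	records map[string]*PullRecord
}

func newFakeDiskStore(n int) *fakeDiskStore {
	s := &fakeDiskStore{records: map[string]*PullRecord{}}
	for i := 0; i < n; i++ {
		img := fmt.Sprintf("image-%d", i)
		s.records[img] = &PullRecord{Image: img}
	}
	return s
}

func (s *fakeDiskStore) Read(image string) (*PullRecord, error) {
	rec, ok := s.records[image]
	if !ok {
		return nil, fmt.Errorf("record for %q not found", image)
	}
	return rec, nil
}

func (s *fakeDiskStore) Write(rec *PullRecord) error {
	s.records[rec.Image] = rec
	return nil
}

// benchmarkReads issues b.N reads, choosing a known image with probability
// hitRate and a missing one otherwise.
func benchmarkReads(b *testing.B, read func(string) (*PullRecord, error), records int, hitRate float64) {
	rng := rand.New(rand.NewSource(42))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		key := fmt.Sprintf("missing-%d", i)
		if rng.Float64() < hitRate {
			key = fmt.Sprintf("image-%d", rng.Intn(records))
		}
		_, _ = read(key)
	}
}

func BenchmarkCachedReads(b *testing.B) {
	cache := newWriteThroughCache(newFakeDiskStore(100))
	benchmarkReads(b, cache.Read, 100, 0.7)
}

func BenchmarkDirectReads(b *testing.B) {
	disk := newFakeDiskStore(100)
	benchmarkReads(b, disk.Read, 100, 0.7)
}
```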
We can see that, at a 100% cache hit rate, caching improves performance greatly compared to accessing the files directly, though it is still slower than running with the feature disabled. This behavior is expected.
As expected, the overhead of the write-through cache shows up at low record counts and low cache hit rates, but the performance improvement is apparent in the scenarios that are more likely in the real world: above a 50% hit rate and around 50-70 records. The number of allocations also drops considerably, most likely because the records no longer need to be encoded/decoded as often.
The results show that the performance gain declines quickly once the number of records exceeds the LRU cache capacity. However, the benchmark is currently unable to properly test the assumptions behind the LRU strategy: cache hits are generated fairly randomly, which likely does not match the expected real-world access pattern.
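That decline is what a fixed-capacity LRU would be expected to show under roughly uniform random access: once the working set exceeds capacity, each insert evicts the least recently used record, and the evicted record is about as likely to be requested next as any other, so the hit rate trends toward capacity divided by record count and the remaining reads fall back to disk. A minimal fixed-capacity LRU sketch (reusing the hypothetical PullRecord type from above; not the PR's actual implementation, and without any locking):

```go
package recordcache

import "container/list"

// lruEntry pairs an image name with its cached record.
type lruEntry struct {
	key string
	rec *PullRecord
}

// lruCache is a minimal fixed-capacity LRU; the front of the list is the
// most recently used entry.
type lruCache struct {
	capacity int
	order    *list.List
	items    map[string]*list.Element
}

func newLRUCache(capacity int) *lruCache {
	return &lruCache{capacity: capacity, order: list.New(), items: map[string]*list.Element{}}
}

// Get returns a cached record and marks it as most recently used.
func (c *lruCache) Get(key string) (*PullRecord, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*lruEntry).rec, true
}

// Add inserts or refreshes a record, evicting the least recently used entry
// once capacity is exceeded; a later read of the evicted image then has to
// fall back to disk.
func (c *lruCache) Add(key string, rec *PullRecord) {
	if el, ok := c.items[key]; ok {
		el.Value.(*lruEntry).rec = rec
		c.order.MoveToFront(el)
		return
	}
	c.items[key] = c.order.PushFront(&lruEntry{key: key, rec: rec})
	if c.order.Len() > c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*lruEntry).key)
	}
}
```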
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/cc liggitt enj
/sig node
/sig auth