
Fix a data race in TopologyCache #117249

Merged

merged 1 commit into kubernetes:master on Apr 12, 2023
Conversation

tnqn
Member

@tnqn tnqn commented Apr 12, 2023

What type of PR is this?

/kind bug

What this PR does / why we need it:

The member variable `cpuRatiosByZone` should be accessed with the lock held, as it can be updated concurrently by `SetNodes`.
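For illustration, the locking pattern looks like this. This is a minimal sketch, not the actual Kubernetes code: the field and method names mirror the PR description, but the real `TopologyCache` lives in `pkg/controller/endpointslice/topologycache` and `CPURatio` here is a hypothetical accessor standing in for the read on the `AddHints` path.

```go
package main

import (
	"fmt"
	"sync"
)

// TopologyCache sketch: cpuRatiosByZone must only be read or written
// while lock is held, because SetNodes and AddHints run concurrently.
type TopologyCache struct {
	lock            sync.Mutex
	cpuRatiosByZone map[string]float64
}

// SetNodes replaces the ratio map under the lock (as it already did).
func (t *TopologyCache) SetNodes(ratios map[string]float64) {
	t.lock.Lock()
	defer t.lock.Unlock()
	t.cpuRatiosByZone = ratios
}

// CPURatio reads the map. Before the fix, a read like this happened
// without taking the lock, racing with SetNodes; the fix is simply to
// acquire the same lock around the read.
func (t *TopologyCache) CPURatio(zone string) (float64, bool) {
	t.lock.Lock()
	defer t.lock.Unlock()
	r, ok := t.cpuRatiosByZone[zone]
	return r, ok
}

func main() {
	c := &TopologyCache{}
	c.SetNodes(map[string]float64{"zone-a": 0.4, "zone-b": 0.6})
	r, ok := c.CPURatio("zone-a")
	fmt.Println(r, ok)
}
```

Because both paths take the same `sync.Mutex`, the race detector no longer reports conflicting accesses on the map field.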

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Fix a data race in TopologyCache when `AddHints` and `SetNodes` are called concurrently

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Apr 12, 2023
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label Apr 12, 2023
@k8s-ci-robot k8s-ci-robot added sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/network Categorizes an issue or PR as relevant to SIG Network. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Apr 12, 2023
@tnqn
Member Author

tnqn commented Apr 12, 2023

@robscott @aojea could you please take a look?

@aojea
Member

aojea commented Apr 12, 2023

@robscott @aojea could you please take a look?

The CI runs with the `-race` flag, so we can add a test like this; I verified it hits the race:

diff --git a/pkg/controller/endpointslice/topologycache/topologycache_test.go b/pkg/controller/endpointslice/topologycache/topologycache_test.go
index 8c83f9ec9f6..5716b4cb336 100644
--- a/pkg/controller/endpointslice/topologycache/topologycache_test.go
+++ b/pkg/controller/endpointslice/topologycache/topologycache_test.go
@@ -625,6 +625,75 @@ func TestSetNodes(t *testing.T) {
        }
 }
 
+func TestTopologyCacheRace(t *testing.T) {
+       sliceInfo := &SliceInfo{
+               ServiceKey:  "ns/svc",
+               AddressType: discovery.AddressTypeIPv4,
+               ToCreate: []*discovery.EndpointSlice{{
+                       Endpoints: []discovery.Endpoint{{
+                               Addresses:  []string{"10.1.2.3"},
+                               Zone:       pointer.String("zone-a"),
+                               Conditions: discovery.EndpointConditions{Ready: pointer.Bool(true)},
+                       }, {
+                               Addresses:  []string{"10.1.2.4"},
+                               Zone:       pointer.String("zone-b"),
+                               Conditions: discovery.EndpointConditions{Ready: pointer.Bool(true)},
+                       }},
+               }}}
+       type nodeInfo struct {
+               zone   string
+               cpu    resource.Quantity
+               ready  v1.ConditionStatus
+               labels map[string]string
+       }
+       nodesinfos := []nodeInfo{
+               {zone: "zone-a", cpu: resource.MustParse("1000m"), ready: v1.ConditionTrue},
+               {zone: "zone-a", cpu: resource.MustParse("1000m"), ready: v1.ConditionTrue},
+               {zone: "zone-a", cpu: resource.MustParse("1000m"), ready: v1.ConditionTrue},
+               {zone: "zone-a", cpu: resource.MustParse("2000m"), ready: v1.ConditionTrue},
+               {zone: "zone-b", cpu: resource.MustParse("3000m"), ready: v1.ConditionTrue},
+               {zone: "zone-b", cpu: resource.MustParse("1500m"), ready: v1.ConditionTrue},
+               {zone: "zone-c", cpu: resource.MustParse("500m"), ready: v1.ConditionTrue},
+       }
+
+       cache := NewTopologyCache()
+       nodes := []*v1.Node{}
+       for _, node := range nodesinfos {
+               labels := node.labels
+               if labels == nil {
+                       labels = map[string]string{}
+               }
+               if node.zone != "" {
+                       labels[v1.LabelTopologyZone] = node.zone
+               }
+               conditions := []v1.NodeCondition{{
+                       Type:   v1.NodeReady,
+                       Status: node.ready,
+               }}
+               allocatable := v1.ResourceList{
+                       v1.ResourceCPU: node.cpu,
+               }
+               nodes = append(nodes, &v1.Node{
+                       ObjectMeta: metav1.ObjectMeta{
+                               Labels: labels,
+                       },
+                       Status: v1.NodeStatus{
+                               Allocatable: allocatable,
+                               Conditions:  conditions,
+                       },
+               })
+       }
+
+       for i := 0; i < 50; i++ {
+               go func() {
+                       cache.SetNodes(nodes)
+               }()
+               go func() {
+                       cache.AddHints(sliceInfo)
+               }()
+       }
+}
+

The member variable `cpuRatiosByZone` should be accessed with the lock
acquired, as it can be updated by `SetNodes` concurrently.

Signed-off-by: Quan Tian <[email protected]>
Co-authored-by: Antonio Ojea <[email protected]>
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Apr 12, 2023
@tnqn
Member Author

tnqn commented Apr 12, 2023

@aojea thanks for the suggestion. I have added the test with minor adjustments (removed some unused variables and verified the race is reproduced even when the test is executed only once). I added you as co-author, if you don't mind.
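As an aside, the suggested test launches its goroutines without waiting for them, so the test function can return while they are still running. A common way to make such a stress test deterministic is to wait on a `sync.WaitGroup`; the following is a generic sketch of that pattern (hypothetical names, not the code that was merged):

```go
package main

import (
	"fmt"
	"sync"
)

// hammer spins n writer/reader goroutine pairs against a shared map and
// waits for all of them to finish before returning the final value.
// Every access is guarded by the same mutex, so this is race-free.
func hammer(n int) int {
	var (
		mu     sync.Mutex
		shared = map[string]int{}
		wg     sync.WaitGroup
	)
	for i := 0; i < n; i++ {
		wg.Add(2)
		go func(i int) {
			defer wg.Done()
			mu.Lock()
			shared["k"] = i
			mu.Unlock()
		}(i)
		go func() {
			defer wg.Done()
			mu.Lock()
			_ = shared["k"]
			mu.Unlock()
		}()
	}
	wg.Wait() // ensure all goroutines finish before the test returns
	return shared["k"]
}

func main() {
	fmt.Println(hammer(50) >= 0)
}
```

Even without the wait, the race detector usually catches the conflicting accesses while the goroutines run, which is why the simpler form was sufficient here.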

@aojea
Member

aojea commented Apr 12, 2023

Added you as co-author if you don't mind.

no need to, it was just a suggestion, but thanks

/lgtm
/approve

/test pull-kubernetes-e2e-gce

Kubernetes e2e suite: [It] [sig-cli] Kubectl client Simple pod should contain last line of the log

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Apr 12, 2023
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: 70040d77e102e6182c2f7c3fbcd24aba0a172d7c

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aojea, tnqn

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 12, 2023
@k8s-ci-robot k8s-ci-robot merged commit 5550bd5 into kubernetes:master Apr 12, 2023
@k8s-ci-robot k8s-ci-robot added this to the v1.28 milestone Apr 12, 2023
@tnqn tnqn deleted the fix-data-race branch April 13, 2023 02:06
k8s-ci-robot added a commit that referenced this pull request Aug 4, 2023
…117249-upstream-release-1.27

Automated cherry pick of #117245: Fix TopologyAwareHint not working when zone label is added
#117249: Fix a data race in TopologyCache
k8s-ci-robot added a commit that referenced this pull request Aug 4, 2023
…117249-upstream-release-1.25

Automated cherry pick of #117245: Fix TopologyAwareHint not working when zone label is added
#117249: Fix a data race in TopologyCache
k8s-ci-robot added a commit that referenced this pull request Aug 4, 2023
…117249-upstream-release-1.26

Automated cherry pick of #117245: Fix TopologyAwareHint not working when zone label is added
#117249: Fix a data race in TopologyCache