
collect watch-list requests latency metric #120490

Merged

Conversation

@p0lyn0mial (Contributor) commented Sep 7, 2023

What type of PR is this?

/kind feature

What this PR does / why we need it:

Collects watch-list request latency.
Having this metric makes it possible to compare watch-list requests with standard list requests and to measure the performance of the new request type in general.

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

xref: kubernetes/enhancements#3157

Does this PR introduce a user-facing change?

Adds the `apiserver_watch_list_duration_seconds` metric, which measures the response latency distribution in seconds for watch-list requests, broken down by group, version, resource and scope.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. kind/feature Categorizes issue or PR as related to a new feature. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Sep 7, 2023
@p0lyn0mial (Contributor Author)

/assign @wojtek-t @dgrisonnet

@k8s-ci-robot k8s-ci-robot added area/apiserver sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 7, 2023
@@ -146,6 +146,18 @@ var (
},
[]string{"resource"},
)

WatchListLatencies = compbasemetrics.NewHistogramVec(
@p0lyn0mial (Contributor Author) commented Sep 7, 2023

@dgrisonnet could you please take a look at this metric?

My intention here is to measure the latency of a special type of watch request called watch-list.
I'd like to compare it against standard list requests to see how the new mechanism performs.

The standard list requests are measured by

requestLatencies = compbasemetrics.NewHistogramVec(

I'm not sure if the Buckets need to match or not.

Member:

Is there a way to differentiate watch list requests with different scopes and is it relevant here? By scope I mean either:

  • resource: if the request is about a single object
  • namespace: if it is about objects from a single namespace
  • cluster: if it spans objects from multiple namespaces

Similar to what we have today when we measure the latency of requests to the kube-apiserver. The idea would be to differentiate the latency when watching a single object vs watching all of them. This would be essential in my opinion in order to have precise measurements and to be able to properly compare the latency of watch list requests to the LIST requests.
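For illustration, a minimal sketch of how such a scope label could be derived from a request's namespace and name (scopeForRequest is a hypothetical name used only for this example; the PR later settles on a CleanListScope helper):

// scopeForRequest illustrates the scope values discussed above: "resource"
// for a single object, "namespace" for objects from a single namespace, and
// "cluster" for requests spanning namespaces.
func scopeForRequest(namespace, name string) string {
	if len(name) > 0 {
		return "resource"
	}
	if len(namespace) > 0 {
		return "namespace"
	}
	return "cluster"
}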

Contributor Author:

okay, thanks, sgtm, I will add the scope label to the metrics.

Contributor Author:

I opened #120520 as it will make the logic that determines the scope of a request easier.

Contributor Author:

I've also opened kubernetes/perf-tests#2315, which will allow us to measure the new metric in the perf tests.

Subsystem: subsystem,
Name: "watch_list_duration_seconds",
Help: "Response latency distribution in seconds for watch list requests broken by group, resource.",
Buckets: []float64{0.5, 1.0, 2, 5, 8, 10, 15, 20, 30, 45, 60},
@dgrisonnet (Member) commented Sep 7, 2023

Buckets don't necessarily need to match to be able to compare against the kube-apiserver metrics, but isn't 0.5 seconds quite high for resource-scoped watch list requests?

Contributor Author:

maybe @wojtek-t can comment on that. I don't have experience with actual numbers.

Member:

Agree - maybe we don't need all the buckets from:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go#L99

but I would definitely start with something lower (0.025?) and just take a subset of those

@p0lyn0mial p0lyn0mial force-pushed the upstream-watch-list-cache-metrics branch from fdf745b to 8738bce Compare September 8, 2023 15:46
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Sep 8, 2023
@@ -313,6 +343,7 @@ func (c *cacheWatcher) markBookmarkAfterRvSent(event *watchCacheEvent) {
defer c.stateMutex.Unlock()
if !c.wasBookmarkAfterRvSentLocked() {
c.state = cacheWatcherBookmarkSent
c.metrics.CalculateAndRecordWatchListRequestLatency()
Member:

What I'm a bit worried about is that we will be comparing apples to oranges.

The current metric (apiserver_request_latencies) includes the time it takes to send the result to the client too, whereas here we effectively only measure the time to generate it.
I'm not sure how to fix this, but it's a bit concerning to me.

Member:

What I started wondering is whether we actually want to measure the latency here.

Wouldn't it be better to actually measure the latency at a higher layer of the stack?
What if, instead, we measured it somewhere here:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/handlers/watch.go#L183 ?

Contributor Author:

This is a good point, I wasn't aware that there is a layer above this one where we could measure the latency.
Let me rework this PR then. Thanks!

@@ -650,3 +682,85 @@ func TestBookmarkAfterResourceVersionWatchers(t *testing.T) {
t.Fatalf("expected only one watcher to be expired")
}
}

func TestWatchListLatency(t *testing.T) {
Member:

This test seems somewhat artificial to me - I'm not sure I see an actual value in it...


Buckets: []float64{0.5, 1.0, 2, 5, 8, 10, 15, 20, 30, 45, 60},
StabilityLevel: compbasemetrics.ALPHA,
},
[]string{"group", "resource", "scope"},
Member:

and version?

@p0lyn0mial p0lyn0mial force-pushed the upstream-watch-list-cache-metrics branch from 8738bce to 47e5ee6 Compare September 19, 2023 08:02
@@ -248,7 +255,14 @@ func (s *WatchServer) ServeHTTP(w http.ResponseWriter, req *http.Request) {
}
metrics.WatchEvents.WithContext(req.Context()).WithLabelValues(kind.Group, kind.Version, kind.Kind).Inc()

if err := s.EmbeddedEncoder.Encode(event.Object, buf); err != nil {
if event.Type == watch.Bookmark && utilfeature.DefaultFeatureGate.Enabled(features.WatchList) {
Member:

Checked quickly whether this will somehow mess up #120300, but I think it doesn't make it visibly harder.

Might be worth checking additionally.

Contributor Author:

yeah, it doesn't; it will be an easy merge :)

@@ -248,7 +255,14 @@ func (s *WatchServer) ServeHTTP(w http.ResponseWriter, req *http.Request) {
}
metrics.WatchEvents.WithContext(req.Context()).WithLabelValues(kind.Group, kind.Version, kind.Kind).Inc()

if err := s.EmbeddedEncoder.Encode(event.Object, buf); err != nil {
if event.Type == watch.Bookmark && utilfeature.DefaultFeatureGate.Enabled(features.WatchList) {
shouldRecordWatchListLatency, err = storage.HasInitialEventsEndBookmarkAnnotation(event.Object)
Member:

OK - so what you're doing is not even trying to determine whether the user requested the initial list etc.
Instead you're assuming that the lower layers will do the job and send (exactly one) initial-events-end bookmark correctly, and you just rely on that.

I guess I'm fine with that, but I would add a comment explaining that here.

Contributor Author:

yes, it is a safe assumption, okay, I will add a comment.
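For illustration, the explanatory comment could read roughly like this (a sketch only, not necessarily the merged wording):

// Note that we don't verify whether the client actually asked for the initial
// events stream. We rely on the lower layers to send exactly one correctly
// annotated initial-events-end bookmark for watch-list requests, and record
// the latency only when that bookmark is observed.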

utilruntime.HandleError(fmt.Errorf("unable to measure watchlist latency because no req info found in the ctx, gvr: %s", reqScope.Resource))
return
}
// TODO(p0lyn0mial): figure out how to get ns and name for watch requests
Member:

Why? We don't need name/namespace - we are only reporting it per scope...

Contributor Author:

Yes, this is required for the scope. I'm pretty sure it will work most of the time,
except for one case mentioned in #120520 (comment).

The worst-case scenario would be that we end up with an empty scope.

Member:

OK - I see what you're saying now.

Could you open a separate PR with the tests that Jordan mentioned and a fix for the missing namespace?

Contributor Author:

yes, I was planning to add support for it in kubernetes/perf-tests#2316

if err != nil {
utilruntime.HandleError(fmt.Errorf("unable to determine if the obj has the required annotation for measuring watchlist latency, obj %T: %v", event.Object, err))
}
}
Member:

nit: I think I would prefer extracting this to a simple helper function:

func shouldRecordWatchListLatency(event watch.Event) bool {
  if event.Type != watch.Bookmark || !utilfeature.DefaultFeatureGate.Enabled(features.WatchList) {
    return false
  }
  hasAnnotation, err := storage.HasInitialEventsEndBookmarkAnnotation(event.Object)
  if err != nil {
    utilruntime.HandleError(fmt.Errorf("unable to determine if the obj has the required annotation for measuring watchlist latency, obj %T: %v", event.Object, err))
    return false
  }
  return hasAnnotation
}

}
objAnnotations := objMeta.GetAnnotations()
_, ok := objAnnotations[initialEventsAnnotationKey]
return ok, nil
Member:

To be on the safe side, should we actually check that the value is "true"?
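A minimal sketch of the stricter variant being suggested, reusing the objMeta and initialEventsAnnotationKey names from the snippet above:

	objAnnotations := objMeta.GetAnnotations()
	value, ok := objAnnotations[initialEventsAnnotationKey]
	// Treat the bookmark as the initial-events-end marker only when the
	// annotation is explicitly "true", not merely present.
	return ok && value == "true", nil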

Contributor Author:

I was thinking about it as well, but then I realised that we didn't bother to check it for client-go either.
Let me open a PR addressing it for client-go so that we are on the safe side.

Contributor Author:

ptal, #120752

Name: "watch_list_duration_seconds",
Help: "Response latency distribution in seconds for watch list requests broken by group, version, resource and scope.",
// TODO(p0lyn0mial): adjust the buckets
Buckets: []float64{0.5, 1.0, 2, 5, 8, 10, 15, 20, 30, 45, 60},
Contributor Author:

> Agree - maybe we don't need all the buckets from:
> https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go#L99
> but I would definitely start with something lower (0.025?) and just take a subset of those

@wojtek-t I think that 0.025 is 25 milliseconds, which seems too low, no? Did you mean 0.25 :)?

Member:

I meant 0.025 - for very small lists it's not unreasonable. Maybe 0.05 - but I want to see some percentage of those below 100ms eventually (once Marek's improvements with manual progress notify land).

Contributor Author:

> below 100ms eventually

wow, nice :) I can add 0.05 in addition to what we have atm.

can you explain what Marek will be working on?

@p0lyn0mial p0lyn0mial force-pushed the upstream-watch-list-cache-metrics branch from 47e5ee6 to b72df9d Compare September 19, 2023 11:47
@@ -280,6 +288,10 @@ func (s *WatchServer) ServeHTTP(w http.ResponseWriter, req *http.Request) {
if len(ch) == 0 {
flusher.Flush()
}
if isWatchListLatencyRecordingRequired {
recordWatchListLatency(req.Context(), s.Scope)
isWatchListLatencyRecordingRequired = false
Member:

I might be missing something, but this doesn't seem to have any effect, does it?

Contributor Author:

oh, you are right, I can remove it now. thanks!

@p0lyn0mial p0lyn0mial force-pushed the upstream-watch-list-cache-metrics branch from b72df9d to c108f85 Compare September 25, 2023 11:17
@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Sep 25, 2023
@p0lyn0mial p0lyn0mial changed the title WIP: collect watch-list requests latency metric collect watch-list requests latency metric Sep 25, 2023
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 25, 2023
@p0lyn0mial (Contributor Author)

@dgrisonnet @wojtek-t this PR is ready for another pass, ptal, thanks!

@@ -61,7 +68,7 @@ func (w *realTimeoutFactory) TimeoutCh() (<-chan time.Time, func() bool) {

// serveWatch will serve a watch response.
// TODO: the functionality in this method and in WatchServer.Serve is not cleanly decoupled.
func serveWatch(watcher watch.Interface, scope *RequestScope, mediaTypeOptions negotiation.MediaTypeOptions, req *http.Request, w http.ResponseWriter, timeout time.Duration) {
func serveWatch(watcher watch.Interface, scope *RequestScope, mediaTypeOptions negotiation.MediaTypeOptions, req *http.Request, w http.ResponseWriter, timeout time.Duration, opts *metainternalversion.ListOptions) {
Member:

Rather than passing ListOptions here, I would prefer to actually pass the already computed metricsScope value.

Contributor Author:

okay, I see, this would be computed by the CleanListScope function. I'm fine with that as well.

//
// note that normally we would use RequestInfo for computation
// but we cannot, due to the same reasons mentioned in determineRequestNamespaceAndName
func determineRequestScope(ctx context.Context, opts *metainternalversion.ListOptions) string {
Member:

We should move this function to pkg/endpoint/metrics/metrics.go [there is CleanScope there as an example]

Member:

Maybe let's call it CleanListScope or sth like that....

Contributor Author:

We could do that. So I assume you would also like to move the determineRequestNamespaceAndName function to the same place, right?

Member:

yes

Contributor Author:

moved, ptal.

@p0lyn0mial p0lyn0mial force-pushed the upstream-watch-list-cache-metrics branch from c108f85 to 0fb7168 Compare September 25, 2023 12:32
opts.FieldSelector = fields.Nothing()
} else if opts.FieldSelector == nil {
opts.FieldSelector = fields.Nothing()
}
Member:

Let's not modify opts here - see my comments below.


if requestNamespace, ok := request.NamespaceFrom(ctx); ok && len(requestNamespace) > 0 {
namespace = requestNamespace
} else if selectorNamespace, ok := opts.FieldSelector.RequiresExactMatch("metadata.namespace"); ok {
Member:

else if opts != nil && opts.FieldSelector != nil {
  if selectorNamespace, ok := opts.FieldSelector.RequiresExactMatch("metadata.namespace"); ok {
    namespace = selectorNamespace
  }
}

}
if requestInfo, ok := request.RequestInfoFrom(ctx); ok && requestInfo != nil && len(requestInfo.Name) > 0 {
name = requestInfo.Name
} else if selectorName, ok := opts.FieldSelector.RequiresExactMatch("metadata.name"); ok {
Member:

Same here
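Putting both suggestions together, the helper might end up looking roughly like this (a sketch that assumes the nil guards are applied to both selector lookups; not necessarily the exact merged code):

func determineRequestNamespaceAndName(ctx context.Context, opts *metainternalversion.ListOptions) (namespace, name string) {
	// Prefer the namespace carried by the request context, fall back to an
	// exact-match field selector on metadata.namespace.
	if requestNamespace, ok := request.NamespaceFrom(ctx); ok && len(requestNamespace) > 0 {
		namespace = requestNamespace
	} else if opts != nil && opts.FieldSelector != nil {
		if selectorNamespace, ok := opts.FieldSelector.RequiresExactMatch("metadata.namespace"); ok {
			namespace = selectorNamespace
		}
	}
	// Prefer the name from RequestInfo, fall back to an exact-match field
	// selector on metadata.name.
	if requestInfo, ok := request.RequestInfoFrom(ctx); ok && requestInfo != nil && len(requestInfo.Name) > 0 {
		name = requestInfo.Name
	} else if opts != nil && opts.FieldSelector != nil {
		if selectorName, ok := opts.FieldSelector.RequiresExactMatch("metadata.name"); ok {
			name = selectorName
		}
	}
	return namespace, name
}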

@@ -29,6 +29,13 @@ import (
"k8s.io/apimachinery/pkg/runtime"
)

const (
Member:

Let's move this one to a separate cleanup PR (together with the new func and test).

Contributor Author:

I will rebase this PR once 120872 merges unless git is able to figure it out automatically.

@@ -360,3 +374,31 @@ func (s *WatchServer) HandleWS(ws *websocket.Conn) {
}
}
}

func recordWatchListLatency(ctx context.Context, reqScope *RequestScope, metricsScope string) {
Member:

Move it to pkg/endpoints/metrics/metrics.go - it's where a bunch of Record* functions are sitting.
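A minimal sketch of what such a Record* helper could look like once moved there, assuming the request's start time is read from the context via request.ReceivedTimestampFrom and that WatchListLatencies is the HistogramVec defined earlier (not necessarily the exact merged code):

func RecordWatchListLatency(ctx context.Context, gvr schema.GroupVersionResource, metricsScope string) {
	requestReceivedTimestamp, ok := request.ReceivedTimestampFrom(ctx)
	if !ok {
		utilruntime.HandleError(fmt.Errorf("unable to measure watchlist latency because no received timestamp found in the ctx, gvr: %s", gvr))
		return
	}
	elapsedSeconds := time.Since(requestReceivedTimestamp).Seconds()
	// Observe the end-to-end duration of the watch-list request, labelled by
	// group, version, resource and scope.
	WatchListLatencies.WithContext(ctx).WithLabelValues(gvr.Group, gvr.Version, gvr.Resource, metricsScope).Observe(elapsedSeconds)
}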

@dgrisonnet (Member) left a comment

The new metric looks good to me 👍

@p0lyn0mial p0lyn0mial force-pushed the upstream-watch-list-cache-metrics branch 2 times, most recently from 21dde8f to 2e2f9ce Compare September 25, 2023 15:41
@p0lyn0mial (Contributor Author)

@dgrisonnet @wojtek-t rebased against the master branch. This PR is ready for a final review.

@p0lyn0mial (Contributor Author)

/retest

@@ -219,6 +226,8 @@ func (s *WatchServer) ServeHTTP(w http.ResponseWriter, req *http.Request) {
w.WriteHeader(http.StatusOK)
flusher.Flush()

var err error
var isWatchListLatencyRecordingRequired bool
Member:

nit: let's not define them here; instead just revert to creating them as before, inside each loop iteration [this doesn't bring value for simple types]

Contributor Author:

It could be that the Go compiler applies loop-invariant code motion here anyway.
I think you are right that declaring a simple variable like a bool on every iteration does not have a significant impact on the GC. Sure, I can inline these variables.

//
// Note that normally we would use CleanScope for computation.
// But due to the same reasons mentioned in determineRequestNamespaceAndName we cannot.
func CleanListScope(ctx context.Context, opts *metainternalversion.ListOptions) string {
Member:

Can you please add unit tests for this function?

Contributor Author:

sure, np.
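A table-driven test for CleanListScope might look roughly like this (a sketch assuming the scope values discussed earlier; request.WithNamespace and fields.ParseSelectorOrDie are the standard apiserver/apimachinery helpers):

func TestCleanListScope(t *testing.T) {
	tests := []struct {
		name          string
		ctx           context.Context
		opts          *metainternalversion.ListOptions
		expectedScope string
	}{
		{
			name:          "namespace taken from the request context",
			ctx:           request.WithNamespace(context.Background(), "foo"),
			expectedScope: "namespace",
		},
		{
			name: "namespace taken from an exact-match field selector",
			ctx:  context.Background(),
			opts: &metainternalversion.ListOptions{
				FieldSelector: fields.ParseSelectorOrDie("metadata.namespace=foo"),
			},
			expectedScope: "namespace",
		},
		{
			name: "name taken from an exact-match field selector implies resource scope",
			ctx:  context.Background(),
			opts: &metainternalversion.ListOptions{
				FieldSelector: fields.ParseSelectorOrDie("metadata.name=bar"),
			},
			expectedScope: "resource",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := CleanListScope(tt.ctx, tt.opts); got != tt.expectedScope {
				t.Errorf("CleanListScope() = %q, want %q", got, tt.expectedScope)
			}
		})
	}
}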

Subsystem: APIServerComponent,
Name: "watch_list_duration_seconds",
Help: "Response latency distribution in seconds for watch list requests broken by group, version, resource and scope.",
Buckets: []float64{0.1, 0.25, 0.5, 1.0, 2, 5, 10, 20, 30, 45},
Member:

I still think that we want to use more buckets, better aligned with the request_latencies metric, which are:

{0.005, 0.025, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 1.25, 1.5, 2, 3, 4, 5, 6, 8, 10, 15, 20, 30, 45, 60}

We can probably drop some values, but especially given we will have progress-notify eventually, I would like to have better comparison with lists, so I suggest:

{0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 2, 4, 6, 8, 10, 15, 20, 30, 45, 60}

Contributor Author:

sure, I guess we are about to find out which values really matter; I don't have strong opinions here.
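Putting this thread together, the metric definition converges on something roughly like the following (a sketch using the buckets suggested above; the exact merged values may differ):

	WatchListLatencies = compbasemetrics.NewHistogramVec(
		&compbasemetrics.HistogramOpts{
			Subsystem:      APIServerComponent,
			Name:           "watch_list_duration_seconds",
			Help:           "Response latency distribution in seconds for watch list requests broken by group, version, resource and scope.",
			Buckets:        []float64{0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 2, 4, 6, 8, 10, 15, 20, 30, 45, 60},
			StabilityLevel: compbasemetrics.ALPHA,
		},
		[]string{"group", "version", "resource", "scope"},
	)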

@alexzielenski (Contributor)

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Sep 26, 2023
@p0lyn0mial p0lyn0mial force-pushed the upstream-watch-list-cache-metrics branch from 2e2f9ce to a97f4b7 Compare September 27, 2023 07:30
@wojtek-t (Member)

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 27, 2023
@k8s-ci-robot (Contributor)

LGTM label has been added.

Git tree hash: e37034ddebcac27ae667fe0f7370045bd7efc88f

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dgrisonnet, p0lyn0mial, wojtek-t

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 27, 2023
@k8s-ci-robot k8s-ci-robot merged commit bdedc21 into kubernetes:master Sep 27, 2023
14 checks passed
@k8s-ci-robot k8s-ci-robot added this to the v1.29 milestone Sep 27, 2023