Webhook conversion metrics [request/error counts and latency metrics] #118292
Conversation
Welcome @cchapla!
Hi @cchapla. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/triage accepted
/ok-to-test
/priority important-soon
```go
&metrics.HistogramOpts{
	Name:    "webhook_conversion_duration_seconds",
	Help:    "Webhook conversion request latency",
	Buckets: metrics.ExponentialBuckets(0.001, 2, 15),
```
It's not obvious to me what the actual buckets are from looking at this.
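For reference, a quick sketch of what `ExponentialBuckets(0.001, 2, 15)` expands to (a hypothetical re-implementation of its doubling semantics, not the library code itself):

```go
package main

import "fmt"

// exponentialBuckets mimics the semantics of metrics.ExponentialBuckets:
// `count` upper bounds, starting at `start`, each `factor` times the previous.
func exponentialBuckets(start, factor float64, count int) []float64 {
	buckets := make([]float64, count)
	for i := range buckets {
		buckets[i] = start
		start *= factor
	}
	return buckets
}

func main() {
	b := exponentialBuckets(0.001, 2, 15)
	fmt.Printf("first=%gs last=%gs count=%d\n", b[0], b[len(b)-1], len(b))
}
```

So the last bucket bound is 16.384 seconds, which is what the later bucket discussion picks up on.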
```diff
@@ -0,0 +1,268 @@
/*
Copyright 2019 The Kubernetes Authors.
```
Suggested change:

```diff
-Copyright 2019 The Kubernetes Authors.
+Copyright 2023 The Kubernetes Authors.
```
```go
	webhookConversionLatency: Metrics.webhookConversionLatency,
},
args: args{
	ctx: context.TODO(),
```
I'd just eliminate this and pass it in to the function directly.
(The same comment was left, with "same", on five more identical occurrences of this `ctx: context.TODO()` pattern in the test table.)
```go
func expectCounterValue(t *testing.T, name string, labelFilter map[string]string, wantCount int) {
	metrics, err := legacyregistry.DefaultGatherer.Gather()
	if err != nil {
		t.Fatalf("Failed to gather metrics: %s", err)
	}

	counterSum := 0
	for _, mf := range metrics {
		if mf.GetName() != name {
			continue // Ignore other metrics.
		}
		for _, metric := range mf.GetMetric() {
			if !testutil.LabelsMatch(metric, labelFilter) {
				continue
			}
			counterSum += int(metric.GetCounter().GetValue())
		}
	}
	if wantCount != counterSum {
		t.Errorf("Wanted count %d, got %d for metric %s with labels %#+v", wantCount, counterSum, name, labelFilter)
		for _, mf := range metrics {
			if mf.GetName() == name {
				for _, metric := range mf.GetMetric() {
					t.Logf("\tnear match: %s", metric.String())
				}
			}
		}
	}
}

func expectHistogramCountTotal(t *testing.T, name string, labelFilter map[string]string, wantCount int) {
	metrics, err := legacyregistry.DefaultGatherer.Gather()
	if err != nil {
		t.Fatalf("Failed to gather metrics: %s", err)
	}

	counterSum := 0
	for _, mf := range metrics {
		if mf.GetName() != name {
			continue // Ignore other metrics.
		}
		for _, metric := range mf.GetMetric() {
			if !testutil.LabelsMatch(metric, labelFilter) {
				continue
			}
			counterSum += int(metric.GetHistogram().GetSampleCount())
		}
	}
	if wantCount != counterSum {
		t.Errorf("Wanted count %d, got %d for metric %s with labels %#+v", wantCount, counterSum, name, labelFilter)
		for _, mf := range metrics {
			if mf.GetName() == name {
				for _, metric := range mf.GetMetric() {
```
I would move these into component-base/metrics/testutils, make them public, and rename them `assertXCount` or whatnot.
```go
func newWebhookConversionMetrics() *WebhookConversionMetrics {
	webhookConversionRequest := metrics.NewCounterVec(
		&metrics.CounterOpts{
			Name: "webhook_conversion_requests",
```
- Subsystem should be "apiserver"
- Counters should be suffixed `_total`.
Do we also provide "Namespace = apiextensions-apiserver" ?
No, I'd just use "apiserver" as the Namespace. Otherwise the metric name will be prefixed apiserver_apiextensions_apiserver_
Not clear. Do you mean both Subsystem and Namespace should be "apiserver", or should we provide only the Namespace without a Subsystem? Making both "apiserver" would create a name like "apiserver_apiserver_webhook_conversion_duration_seconds".
The metric name is composed as `<Namespace>_<Subsystem>_<Name>`. So if you specify "apiextensions-apiserver" as a namespace and "apiserver" as a subsystem, you end up with `apiextensions_apiserver_apiserver` as a prefix to your metric name. I'm just saying only use one of {Namespace, Subsystem}, do not use both. And use "apiserver", since that's what we use everywhere else.
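That composition rule can be sketched with a hypothetical helper mirroring how Prometheus builds a fully qualified metric name (empty parts are simply skipped):

```go
package main

import (
	"fmt"
	"strings"
)

// buildFQName mirrors how Prometheus composes a fully qualified metric
// name from Namespace, Subsystem, and Name, skipping empty parts.
func buildFQName(namespace, subsystem, name string) string {
	var parts []string
	for _, p := range []string{namespace, subsystem, name} {
		if p != "" {
			parts = append(parts, p)
		}
	}
	return strings.Join(parts, "_")
}

func main() {
	// Setting both yields the awkward double prefix warned about above:
	fmt.Println(buildFQName("apiserver", "apiserver", "webhook_conversion_duration_seconds"))
	// Setting only one of {Namespace, Subsystem} gives the conventional name:
	fmt.Println(buildFQName("apiserver", "", "webhook_conversion_duration_seconds"))
}
```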
```go
Name:      "webhook_conversion_duration_seconds",
Namespace: namespace,
Help:      "Webhook conversion request latency",
// 0.001, 0.002, 0.004, ..., 16.384 [1ms, 2ms, 4ms, ..., 16384ms]
```
16 seconds is a weird upper bound, maybe add one more bucket? Webhooks default timeout at 10 seconds, but can be configured to timeout at 30.
Yeah, now that you point out 16.384 seconds: how about directly using 0.01, 0.02, 0.05, 1, 2, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60? Or maybe we could jump directly to 60 after 30, just in case.
That is much better.
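Spelled out as a Go slice literal, the bucket list proposed above would look like this (illustrative only; the variable name is made up):

```go
package main

import "fmt"

func main() {
	// Explicit latency buckets in seconds, as proposed in review: fine-grained
	// at the low end, coarser steps up to the 60s catch-all beyond the 30s
	// configurable webhook timeout.
	buckets := []float64{0.01, 0.02, 0.05, 1, 2, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60}
	fmt.Println(len(buckets), buckets[len(buckets)-1])
}
```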
```go
	wantLabels           map[string]string
	expectedRequestValue int
}{
	// TODO: Add test cases.
```
Suggested change:

```diff
-	// TODO: Add test cases.
```
```go
	expectedRequestValue int
	expectedLatencyCount int
}{
	// TODO: Add test cases.
```
Suggested change:

```diff
-	// TODO: Add test cases.
```
/lgtm
/approve
Thanks for the iterations!
LGTM label has been added. Git tree hash: 930dd39309502bbe473eb161321e89cdccacf91f
/assign @deads2k
/approved (applying approved here as @logicalhan's approval does not seem to cover
/approve (whoops typo!)
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: cchapla, dims, logicalhan. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@cchapla can you update the release-note to make the new metrics and their meaning clear? Something like:
Also @kubernetes/sig-instrumentation-approvers - I know it's a bit late, but can someone quickly check if the metric convention used here is ok? Is
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Adds webhook conversion metrics: request counts for successes and failures, and request latency.
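As a rough sketch of what the two metrics track (a toy model only; the type, field, and method names here are made up and stand in for the real component-base counter and histogram vectors):

```go
package main

import "fmt"

// conversionMetrics is a toy stand-in for the PR's metric pair: a per-result
// request counter plus a latency histogram.
type conversionMetrics struct {
	requestsTotal map[string]int // keyed by result: "success" or "failure"
	latencies     []float64      // seconds; fed to the histogram in the real code
}

// record observes one conversion webhook call: it bumps the counter for the
// given result and appends the latency sample.
func (m *conversionMetrics) record(result string, seconds float64) {
	m.requestsTotal[result]++
	m.latencies = append(m.latencies, seconds)
}

func main() {
	m := &conversionMetrics{requestsTotal: map[string]int{}}
	m.record("success", 0.004)
	m.record("failure", 0.120)
	fmt.Println(m.requestsTotal["success"], m.requestsTotal["failure"], len(m.latencies))
}
```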
Which issue(s) this PR fixes:
Ref #117167
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: