[WPE][browserperfdash-benchmark] Allow to run run-benchmark plans subtest-by-subtest #47255
Conversation
EWS run on previous version of this PR (hash 0802621)
Four review threads on Tools/Scripts/webkitpy/browserperfdash/browserperfdash_runner.py (outdated, resolved)
Left some (mostly stylistic) suggestions; conceptually this looks solid and is sorely needed!
Looks good, thanks! Please resolve @aoikonomopoulos comments prior to landing.
EWS run on current version of this PR (hash 3b430de)
…test-by-subtest
https://bugs.webkit.org/show_bug.cgi?id=295050

Reviewed by Nikolas Zimmermann.

On the RPi4 32-bit bots, JetStream2 frequently crashes or times out before completing, which prevents any results from being reported because the current runner only uploads data if all subtests finish successfully. We have experimented with running selected subtest sets, but the flakiness remains: some subtests pass while others fail inconsistently, which makes it very difficult to select a working set of subtests.

This patch adds to browserperfdash-benchmark the ability to run a given benchmark plan subtest-by-subtest, so it runs only one subtest at a time. If a subtest fails, the runner skips it and proceeds to the next. Partial results from the passing subtests are then uploaded to the dashboard, improving visibility into progress and regressions even when the full benchmark cannot complete.

To differentiate the standard benchmark plan from the one that is run subtest-by-subtest, the string `-split-subtests` is appended to the end of the benchmark plan name. So, to run the plan jetstream2 subtest-by-subtest, a virtual benchmark plan named `jetstream2-split-subtests` should be specified. Since the benchmark plan name is visible on the dashboard, it is also possible to distinguish there the complete jetstream2 run from the one that was run subtest-by-subtest.

All the benchmark plans that support subtests can be specified this way (not only JetStream2), and the list of those is visible when the flag `--list-plans` is passed to the runner.
* Tools/Scripts/webkitpy/browserperfdash/browserperfdash_runner.py:
(BrowserPerfDashRunner.__init__):
(BrowserPerfDashRunner._parse_config_file):
(BrowserPerfDashRunner._get_plan_version_hash):
(BrowserPerfDashRunner._get_benchmark_runner_split_subtess_plans):
(BrowserPerfDashRunner._run_benchmark_runner_plan_split_subtests):
(BrowserPerfDashRunner._run_plan):
(BrowserPerfDashRunner.run):

Canonical link: https://commits.webkit.org/296877@main
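The mechanism the commit message describes can be sketched as follows. This is a minimal illustration, not WebKit's actual implementation: the helper names (`resolve_plan`, `run_split`, `run_single_subtest`) and the exception-based failure handling are assumptions; only the `-split-subtests` suffix convention and the skip-and-continue behavior come from the patch description above.

```python
# Hypothetical sketch of the "-split-subtests" virtual plan handling.
# The names below are illustrative, not webkitpy's real API.

SPLIT_SUFFIX = "-split-subtests"


def resolve_plan(requested_name, known_plans):
    """Map a requested plan name to (base_plan, split_mode).

    "jetstream2-split-subtests" resolves to the real plan "jetstream2"
    with split_mode=True; plain plan names pass through unchanged.
    """
    if requested_name.endswith(SPLIT_SUFFIX):
        base = requested_name[: -len(SPLIT_SUFFIX)]
        if base in known_plans:
            return base, True
    if requested_name in known_plans:
        return requested_name, False
    raise ValueError("Unknown benchmark plan: %s" % requested_name)


def run_split(base_plan, subtests, run_single_subtest):
    """Run each subtest independently; skip failures, keep partial results."""
    results = {}
    for subtest in subtests:
        try:
            results[subtest] = run_single_subtest(base_plan, subtest)
        except Exception as err:  # e.g. a crash or timeout in this subtest
            print("Subtest %s failed (%s); skipping" % (subtest, err))
    return results  # partial results are still uploaded to the dashboard
```

The key design point is that each subtest runs in its own invocation, so a crash or timeout in one subtest cannot discard the results already collected from the others.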
Force-pushed from 3b430de to 00cc05f
Committed 296877@main (00cc05f): https://commits.webkit.org/296877@main. Reviewed commits have been landed. Closing PR #47255 and removing active labels.
EWS queues: 🧪 ios-wk2, 🧪 api-mac, 🧪 api-wpe, 🧪 mac-wk1, 🛠 wpe-cairo, 🧪 api-ios, 🧪 mac-wk2, 🛠 gtk, 🧪 gtk-wk2, 🧪 mac-wk2-stress, 🧪 api-gtk, 🛠 playstation, 🛠 tv-sim, 🛠 watch, 🛠 watch-sim