Description
Created a PR - #4587
I have found an issue when configuring a multi-runner module setup that uses `runner_extra_labels`. I will keep this example simple and omit irrelevant configuration.
Suppose you have a runner map with this configuration (only one runner is shown in this example):
```hcl
runners_config = {
  "amazon-arm64" = {
    "matcherConfig" = {
      "exactMatch" = true
      "labelMatchers" = [
        [
          "self-hosted",
          "linux",
          "arm64",
          "amazon",
        ],
      ]
    }
    "runner_config" = {
      "enable_organization_runners" = true
      "instance_types" = [
        "t4g.small",
      ]
      "runner_architecture" = "arm64"
      "runner_extra_labels" = [
        "test",
      ]
      "runner_name_prefix" = "gh-runner-arm64-"
      "runner_os"          = "linux"
    }
  }
}
```
No pool is configured, so after running Terraform no instances are running. With `exactMatch = true`, the webhook only routes a job to this runner when every label in its `runs-on` list appears in the `labelMatchers` entry.
When I trigger a new workflow run with:
```yaml
runs-on: [self-hosted, amazon, arm64]
```
it works as expected: the webhook picks up the job, a new runner is created, and the runner picks up the job. All three requested labels appear in the `labelMatchers` entry, so the match succeeds.
When I trigger a new workflow run with:
```yaml
runs-on: [self-hosted, amazon, arm64, test]
```
Note that I have now added the `test` label, which is exactly the label set in `runner_extra_labels`. The workflow run is registered by GitHub but stays in pending status, with no runner picking up the job.
The dispatch-to-runner lambda rejects the job with the following log:
"message": "Received event contains runner labels 'self-hosted,amazon,arm64,test' that are not accepted.",
...
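The `test` label comes from `runner_extra_labels`, but the matcher does not take extra labels into account, so any job that requests it fails the exact match. As a workaround until the fix lands, the extra label can be duplicated in the matcher; a minimal sketch based on the configuration above:

```hcl
"matcherConfig" = {
  "exactMatch" = true
  "labelMatchers" = [
    [
      "self-hosted",
      "linux",
      "arm64",
      "amazon",
      # Workaround: repeat the extra label here so the webhook
      # accepts jobs that request it via runs-on.
      "test",
    ],
  ]
}
```

This means maintaining the same label in two places; the change in this PR removes the need for that.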
The changes introduced in this commit fix the issue for the multi-runner module.