Use separate hassio data coordinator for Add-ons #147390
Conversation
Hey there @home-assistant/supervisor, mind taking a look at this pull request as it has been labeled with an integration (hassio) you are listed as a code owner for? Thanks!
We should also move to use the client library, but I'd rather do this in separate steps. There is also a further optimization opportunity by removing the per-add-on info calls and instead using the data straight from the …
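As a rough illustration of that per-add-on optimization (not code from this PR), a minimal sketch assuming the add-on list payload already carries the fields the entities need; the `slug` key and the helper name are assumptions:

```python
from typing import Any


def addons_info_from_list(addons: list[dict[str, Any]]) -> dict[str, dict[str, Any]]:
    """Hypothetical helper: build the per-add-on info mapping from one list payload.

    Replaces N per-add-on info requests with a single pass over data the
    Supervisor already returned.
    """
    return {addon["slug"]: addon for addon in addons}


# Example usage with fake payload data:
addons = [{"slug": "core_ssh", "version": "9.14.0"}, {"slug": "core_mosquitto", "version": "6.4.1"}]
print(addons_info_from_list(addons))
```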
(force-pushed from b19e3a9 to 2eb5e01)
Use two data coordinators for hassio data, one for the Core, Supervisor, and Operating System updates, and one for the add-on updates. This allows the add-on updates to be fetched independently of the Core, Supervisor, and Operating System updates.
(force-pushed from 2eb5e01 to fb68146)
I like the idea and overall it looks fine to me. But two significant notes:
- We're slowing down our polling for addon stats significantly. Considering that data is made into sensors used for monitoring addons that seems problematic. We may want to leave that data in the other coordinator or update it on a separate (maybe even faster?) schedule.
- Technically all of `hass.data` is public, and changing the structure of the data in there like this could break integrations (particularly custom ones we can't see right now). When doing the client library I really avoided this, even transforming the client library outputs when necessary to match what we had. We really don't have to change it here; we can just have the different coordinators each update the keys they own within the same dictionary (see the sketch below). For compatibility I think I'd recommend that, even if it's not as nice from a dev perspective.
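A minimal sketch of that compatibility approach, purely illustrative: each coordinator merges only the keys it owns into the one dictionary kept in hass.data, so the existing structure stays intact. The key names and example values are assumptions:

```python
from typing import Any


def merge_owned_keys(
    shared: dict[str, Any], owned_keys: set[str], new_data: dict[str, Any]
) -> None:
    """Update only the keys this coordinator owns; leave everything else untouched."""
    for key in owned_keys:
        if key in new_data:
            shared[key] = new_data[key]


# Example: the update coordinator owns the Core/Supervisor/OS keys, the add-on
# coordinator owns the add-on keys, and both write into the same shared dict.
shared_data: dict[str, Any] = {}
merge_owned_keys(shared_data, {"core_info", "supervisor_info", "os_info"}, {"core_info": {"version_latest": "2025.6.0"}})
merge_owned_keys(shared_data, {"addons", "addons_info"}, {"addons": []})
print(shared_data)
```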
@@ -321,7 +332,7 @@ async def _async_update_data(self) -> dict[str, Any]:
    raise UpdateFailed(f"Error on Supervisor API: {err}") from err

    new_data: dict[str, Any] = {}
    supervisor_info = get_supervisor_info(self.hass) or {}
    addons = get_addons(self.hass) or {}
    addons_info = get_addons_info(self.hass) or {}
    addons_stats = get_addons_stats(self.hass)
This feels like it doesn't belong here anymore. A 15 min poll interval feels too slow for something that's about monitoring the runtime of the service/addon. It's fine for update polling, but for stats that's very slow (though so is 5 minutes...)
Good point. Ideally we should probably create a third data coordinator specifically for the container stats, which could then run on its own, shorter interval.
That said, I was actually thinking about changing the stats anyway: the way we currently read stats on the Supervisor side makes the API call take 1-2 s (we essentially get the CPU usage of a 1 s timeslice). I'll consider separating the stats into their own data coordinator already in this PR; I guess that would make it easier to rework the stats later on.
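As a rough illustration of that idea (not code from this PR), a third coordinator dedicated to stats could simply get its own, shorter update interval; the coordinator name, the 30-second value, and the placeholder fetch function are assumptions:

```python
from datetime import timedelta
import logging

from homeassistant.core import HomeAssistant
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator

_LOGGER = logging.getLogger(__name__)


async def _fetch_addon_stats() -> dict:
    # Placeholder for the per-add-on stats request; on the Supervisor side this
    # call currently takes 1-2 s because CPU usage is sampled over a 1 s window.
    return {}


def create_addon_stats_coordinator(hass: HomeAssistant) -> DataUpdateCoordinator[dict]:
    """Sketch of a dedicated stats coordinator polling faster than the update data."""
    return DataUpdateCoordinator(
        hass,
        _LOGGER,
        name="hassio_addon_stats",
        update_interval=timedelta(seconds=30),  # illustrative value only
        update_method=_fetch_addon_stats,
    )
```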
  data[key] = result

- _addon_data = data[DATA_SUPERVISOR_INFO].get("addons", [])
+ _addon_data = data[DATA_ADDONS].get("addons", [])
Is this `data["addons"]["addons"]`? It doesn't matter I guess, but it feels unnecessary to embed it this deep. Maybe just `data["addons"]`? Or we continue to use one dictionary for supervisor data and just have each coordinator update the keys it owns.
Technically, moving this data around within `hass.data` could be a breaking change. Everything in `hass.data` is public to all the other integrations (including custom integrations). If anything is looking for the existing keys, these changes will cause failures. I don't know if that is something we consider supported, though, or if the integration author should fix it in that case.
This is why I went to some lengths before not to change the structure of this data when doing the client library work, even if it didn't exactly match what came out of the client library.
Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍
Pull Request Overview
This PR introduces a separate data coordinator for managing add-on updates in Hass.io, lowering the add-on update frequency and isolating its update flow from the main components. Key changes include updating test expectations for API calls, introducing a new HassioAddOnDataUpdateCoordinator with a 15‑minute update interval, and adjusting component modules to use the new coordinator keys.
Reviewed Changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 1 comment.
File | Description
---|---
tests/components/hassio/*.py | Update test call counts and API endpoints to support the new add-on coordinator
homeassistant/components/hassio/*.py | Replace refresh_updates with reload_updates and update data key references for add-on updates
homeassistant/components/hassio/const.py | Introduce new constants for coordinator keys and update intervals
Comments suppressed due to low confidence (1)
homeassistant/components/hassio/coordinator.py:654
- Ensure that replacing 'refresh_updates' with 'reload_updates' maintains the intended behavior of the update flow and that consumers of the Supervisor API are updated accordingly.
await self.supervisor_client.reload_updates()
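To illustrate the kind of check that concern implies, here is a tiny, self-contained sketch (not from the PR's test suite) asserting that the new reload_updates call is awaited where refresh_updates used to be; the trigger function is a hypothetical stand-in for the coordinator code path:

```python
import asyncio
from unittest.mock import AsyncMock


async def trigger_update(client: AsyncMock) -> None:
    # Hypothetical stand-in for the coordinator path changed by this PR.
    await client.reload_updates()


supervisor_client = AsyncMock()
asyncio.run(trigger_update(supervisor_client))
supervisor_client.reload_updates.assert_awaited_once()
supervisor_client.refresh_updates.assert_not_awaited()
```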
@@ -232,7 +264,7 @@ async def test_setup_api_ping(
  await hass.async_block_till_done()

  assert result
- assert aioclient_mock.call_count + len(supervisor_client.mock_calls) == 18
+ assert aioclient_mock.call_count + len(supervisor_client.mock_calls) == 19
[nitpick] Magic numbers for expected API call counts have been updated; to improve clarity and maintainability, consider defining a named constant or adding a comment that explains the rationale behind the new expected value.
Suggested change:
- assert aioclient_mock.call_count + len(supervisor_client.mock_calls) == 19
+ assert aioclient_mock.call_count + len(supervisor_client.mock_calls) == EXPECTED_API_CALL_COUNT
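A minimal sketch of that suggestion, with an assumed constant name and a small helper just to keep the example self-contained:

```python
# Assumed name; the value 19 and its rationale comment are illustrative, not from the PR.
EXPECTED_SETUP_API_CALL_COUNT = 19  # e.g. one extra request made by the new add-on coordinator


def assert_setup_call_count(rest_calls: int, client_calls: int) -> None:
    """Compare combined mock call counts against the named expectation."""
    assert rest_calls + client_calls == EXPECTED_SETUP_API_CALL_COUNT


# Example usage mirroring the assertion in the test:
assert_setup_call_count(12, 7)
```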
Breaking change
Proposed change
Use a separate data coordinator to update add-ons. Until now, all add-on store git repositories were updated every 5 minutes, the same as the main update components (Core, Supervisor, plug-ins, and Operating System). This change lowers the add-on store refresh to every 15 minutes.
This also avoids force refreshing the main update components on add-on update, which was often the cause of "Supervisor needs update" errors while updating add-ons (when we just pushed a new Supervisor change). It does not eliminate the error completely, as a main component update or a forced update by a user can still cause that situation.
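In code terms, the split described above boils down to two polling intervals, roughly like the sketch below (the constant names are illustrative, not necessarily those in homeassistant/components/hassio/const.py):

```python
from datetime import timedelta

HASSIO_UPDATE_INTERVAL = timedelta(minutes=5)   # Core, Supervisor, plug-ins, Operating System
ADDONS_STORE_INTERVAL = timedelta(minutes=15)   # add-on store / add-on update data
```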
Type of change
Additional information
Checklist
- The code has been formatted using Ruff (ruff format homeassistant tests)

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:
- Updated and included derived files by running: python3 -m script.hassfest.
- requirements_all.txt updated by running python3 -m script.gen_requirements_all.

To help with the load of incoming pull requests: