
Use separate hassio data coordinator for Add-ons #147390


Draft · wants to merge 7 commits into dev from hassio-split-data-coordinator

Conversation

@agners (Member) commented Jun 23, 2025

Breaking change

Proposed change

Use a separate data coordinator to update add-ons. Previously, all add-on store git repositories were updated every 5 minutes, together with the main update components (Core, Supervisor, Plug-ins and Operating System). The add-on store refresh is lowered to every 15 minutes.

This also avoids force-refreshing the main update components on add-on update, which was often the cause of "Supervisor needs update" errors while updating add-ons (when we had just pushed a new Supervisor change). It does not eliminate the error completely, as a main-component update or a forced update by a user can still cause that situation.
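The split described above can be sketched in simplified form as two independent coordinators with their own intervals. The class and fetch functions below are illustrative stand-ins, not the actual hassio integration code:

```python
import asyncio
from dataclasses import dataclass, field
from datetime import timedelta

# Illustrative intervals matching the PR description.
CORE_SCAN_INTERVAL = timedelta(minutes=5)    # Core/Supervisor/OS updates
ADDON_SCAN_INTERVAL = timedelta(minutes=15)  # add-on store refresh


@dataclass
class Coordinator:
    """Minimal stand-in for a DataUpdateCoordinator."""

    name: str
    update_interval: timedelta
    data: dict = field(default_factory=dict)

    async def async_refresh(self, fetch):
        self.data = await fetch()


async def fetch_main():
    # Would call /supervisor/info, /core/info, /os/info on the Supervisor API.
    return {"supervisor": {"version": "2025.06.0"}}


async def fetch_addons():
    # Would call /addons; no longer forces a refresh of the main components.
    return {"addons": [{"slug": "example", "update_available": False}]}


async def main():
    main_coord = Coordinator("main", CORE_SCAN_INTERVAL)
    addon_coord = Coordinator("addons", ADDON_SCAN_INTERVAL)
    # Each coordinator refreshes on its own schedule; updating add-ons
    # does not touch the main coordinator's data.
    await main_coord.async_refresh(fetch_main)
    await addon_coord.async_refresh(fetch_addons)
    return main_coord, addon_coord


main_coord, addon_coord = asyncio.run(main())
```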

Type of change

  • Dependency upgrade
  • Bugfix (non-breaking change which fixes an issue)
  • New integration (thank you!)
  • New feature (which adds functionality to an existing integration)
  • Deprecation (breaking change to happen in the future)
  • Breaking change (fix/feature causing existing functionality to break)
  • Code quality improvements to existing code or addition of tests

Additional information

  • This PR fixes or closes issue: fixes #
  • This PR is related to issue:
  • Link to documentation pull request:
  • Link to developer documentation pull request:
  • Link to frontend pull request:

Checklist

  • The code change is tested and works locally.
  • Local tests pass. Your PR cannot be merged unless tests pass
  • There is no commented out code in this PR.
  • I have followed the development checklist
  • I have followed the perfect PR recommendations
  • The code has been formatted using Ruff (ruff format homeassistant tests)
  • Tests have been added to verify that the new code works.

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:

  • The manifest file has all fields filled out correctly.
    Updated and included derived files by running: python3 -m script.hassfest.
  • New or updated dependencies have been added to requirements_all.txt.
    Updated by running python3 -m script.gen_requirements_all.
  • For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.

To help with the load of incoming pull requests:

@home-assistant

Hey there @home-assistant/supervisor, mind taking a look at this pull request as it has been labeled with an integration (hassio) you are listed as a code owner for? Thanks!

Code owner commands

Code owners of hassio can trigger bot actions by commenting:

  • @home-assistant close Closes the pull request.
  • @home-assistant rename Awesome new title Renames the pull request.
  • @home-assistant reopen Reopen the pull request.
  • @home-assistant unassign hassio Removes the current integration label and assignees on the pull request, add the integration domain after the command.
  • @home-assistant add-label needs-more-information Add a label (needs-more-information, problem in dependency, problem in custom component) to the pull request.
  • @home-assistant remove-label needs-more-information Remove a label (needs-more-information, problem in dependency, problem in custom component) on the pull request.

@agners (Member, Author) commented Jun 23, 2025

We should also move to use the client library, but I'd rather do this in separate steps.

There is also a further optimization opportunity: removing the per-add-on info calls and instead using the data straight from the /addons API call. From what I can tell, the only missing piece of information is the auto_update key.
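That optimization could look roughly like the sketch below, which builds per-add-on entries from a single /addons response and fills in only the missing auto_update flag. Field names here are assumptions modeled on the Supervisor API shape, not verified against it:

```python
def merge_addon_data(addons_response: dict, auto_update_by_slug: dict) -> dict:
    """Combine the /addons list with separately fetched auto_update flags.

    Hypothetical helper: one /addons call supplies everything except
    auto_update, which is merged in per slug.
    """
    merged = {}
    for addon in addons_response.get("addons", []):
        slug = addon["slug"]
        merged[slug] = {**addon, "auto_update": auto_update_by_slug.get(slug, False)}
    return merged


result = merge_addon_data(
    {"addons": [{"slug": "core_ssh", "update_available": True}]},
    {"core_ssh": True},
)
```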

@agners agners force-pushed the hassio-split-data-coordinator branch from b19e3a9 to 2eb5e01 Compare June 24, 2025 12:51
agners added 7 commits June 24, 2025 17:23
Use two data coordinators for hassio data, one for the Core,
Supervisor, and Operating System updates, and one for the add-on
updates. This allows the add-on updates to be fetched independently
of the Core, Supervisor, and Operating System updates.
@agners agners force-pushed the hassio-split-data-coordinator branch from 2eb5e01 to fb68146 Compare June 24, 2025 15:24
@mdegat01 (Contributor) left a comment


I like the idea and overall it looks fine to me. But two significant notes:

  1. We're significantly slowing down our polling for add-on stats. Considering that data is turned into sensors used for monitoring add-ons, that seems problematic. We may want to leave that data in the other coordinator or update it on a separate (maybe even faster?) schedule.
  2. Technically, all of hass.data is public, and changing the structure of the data in there like this could break integrations (particularly custom ones we can't see right now). When doing the client library work I deliberately avoided this, even transforming the client library outputs where necessary to match what we had. We don't have to change it here either: the different coordinators can each update the keys they own within the same dictionary. For compatibility I'd recommend that, even if it's not as nice from a dev perspective.
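The compatibility approach in point 2 could look roughly like this: both coordinators write into one shared dictionary (as stored today in hass.data), each touching only the keys it owns. The key names are assumptions modeled on the existing hassio data keys:

```python
# Hypothetical sketch: two coordinators share one dict so existing
# consumers of hass.data keep finding the keys they expect.

MAIN_KEYS = ("supervisor_info", "core_info", "os_info")
ADDON_KEYS = ("addons", "addons_info", "addons_stats")


def apply_update(shared: dict, new_data: dict, owned_keys: tuple) -> None:
    """Merge a coordinator's refresh result, touching only its own keys."""
    for key in owned_keys:
        if key in new_data:
            shared[key] = new_data[key]


shared: dict = {}
apply_update(shared, {"supervisor_info": {"version": "2025.06.0"}}, MAIN_KEYS)
apply_update(shared, {"addons": [{"slug": "example"}]}, ADDON_KEYS)
# A key outside the coordinator's ownership is ignored on refresh:
apply_update(shared, {"supervisor_info": "bogus"}, ADDON_KEYS)
```

The downside, as noted above, is that ownership is only enforced by convention, but consumers see an unchanged structure.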

@@ -321,7 +332,7 @@ async def _async_update_data(self) -> dict[str, Any]:
raise UpdateFailed(f"Error on Supervisor API: {err}") from err

new_data: dict[str, Any] = {}
supervisor_info = get_supervisor_info(self.hass) or {}
addons = get_addons(self.hass) or {}
addons_info = get_addons_info(self.hass) or {}
addons_stats = get_addons_stats(self.hass)
Contributor


This feels like it doesn't belong here anymore. A 15-minute poll interval feels too slow for something that's about monitoring the runtime of the service/add-on. It's fine for update polling, but for stats that's very slow (though so is 5 minutes...).

Member Author


Good point. Ideally we should probably create a third data coordinator specifically for the container stats, which could then run at a shorter interval.

That said, I was actually thinking about changing the stats anyway: the way we currently read stats on the Supervisor side makes the API call take 1-2 s (we essentially measure the CPU usage over a 1 s timeslice). I'll consider separating the stats into their own data coordinator now, since that should make it easier to rework the stats later on.
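The 1-2 s cost comes from the timeslice measurement itself. The sketch below demonstrates the mechanics using this process's own CPU clock; the helper and interval are illustrative, not Supervisor code:

```python
import time


def cpu_percent(sample_cpu_ns, interval: float = 0.1) -> float:
    """Estimate CPU usage from two samples of a cumulative CPU-time counter.

    Container stats work the same way: the percentage only exists over a
    timeslice, which is why the stats call blocks for roughly the length
    of its (~1 s) measurement window.
    """
    t0, c0 = time.monotonic_ns(), sample_cpu_ns()
    time.sleep(interval)
    t1, c1 = time.monotonic_ns(), sample_cpu_ns()
    return 100.0 * (c1 - c0) / (t1 - t0)


# Demonstrate with this process's own CPU clock (mostly idle while sleeping).
usage = cpu_percent(time.process_time_ns)
```

A dedicated stats coordinator would pay this blocking cost on its own schedule instead of delaying every update-related refresh.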

data[key] = result

- _addon_data = data[DATA_SUPERVISOR_INFO].get("addons", [])
+ _addon_data = data[DATA_ADDONS].get("addons", [])
Contributor


Is this data["addons"]["addons"]? It doesn't matter much, I guess, but it feels unnecessary to nest it this deep. Maybe just data["addons"]? Or we continue to use one dictionary for supervisor data and have each coordinator update the keys it owns.

Technically, moving this data around within hass.data could be a breaking change. Everything in hass.data is public to all the other integrations (including custom integrations), so if anything is looking for the existing keys, these changes will cause failures. I don't know whether that is something we consider supported, or whether the integration author should fix it in that case.

This is why I previously went to some lengths not to change the structure of this data when doing the client library work, even when it didn't exactly match what came out of the client library.

@home-assistant

Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍

Learn more about our pull request process.

@home-assistant home-assistant bot marked this pull request as draft June 24, 2025 19:34
@frenck frenck requested a review from Copilot June 26, 2025 21:26
@Copilot Copilot AI (Contributor) left a comment


Pull Request Overview

This PR introduces a separate data coordinator for managing add-on updates in Hass.io, lowering the add-on update frequency and isolating its update flow from the main components. Key changes include updating test expectations for API calls, introducing a new HassioAddOnDataUpdateCoordinator with a 15‑minute update interval, and adjusting component modules to use the new coordinator keys.

Reviewed Changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated 1 comment.

File Description
tests/components/hassio/*.py Update test call counts and API endpoints to support the new add‑on coordinator
homeassistant/components/hassio/*.py Replace refresh_updates with reload_updates and update data key references for add‑on updates
homeassistant/components/hassio/const.py Introduce new constants for coordinator keys and update intervals
Comments suppressed due to low confidence (1)

homeassistant/components/hassio/coordinator.py:654

  • Ensure that replacing 'refresh_updates' with 'reload_updates' maintains the intended behavior of the update flow and that consumers of the Supervisor API are updated accordingly.
                await self.supervisor_client.reload_updates()

@@ -232,7 +264,7 @@ async def test_setup_api_ping(
await hass.async_block_till_done()

assert result
- assert aioclient_mock.call_count + len(supervisor_client.mock_calls) == 18
+ assert aioclient_mock.call_count + len(supervisor_client.mock_calls) == 19
Copilot AI commented Jun 26, 2025


[nitpick] Magic numbers for expected API call counts have been updated; to improve clarity and maintainability, consider defining a named constant or adding a comment that explains the rationale behind the new expected value.

Suggested change
- assert aioclient_mock.call_count + len(supervisor_client.mock_calls) == 19
+ assert aioclient_mock.call_count + len(supervisor_client.mock_calls) == EXPECTED_API_CALL_COUNT

