
Conversation


@puddly puddly commented Dec 23, 2025

ZHA fixes for zigpy/zigpy#1719.

A few tests needed tweaking because of attribute "collisions", but otherwise nothing is breaking as far as I can tell, and a few new entities are showing up for existing test devices.

The most visible change is switching from attribute_updated to explicit events for every possible attribute change type.

@puddly force-pushed the puddly/zigpy-attribute-rewrite branch from dad8c6f to 61aaf60 on January 15, 2026 at 17:24

@TheJulianJES TheJulianJES left a comment


This looks good. Only the comment about emitting ZHA events for LevelControl changes should be addressed IMO, as it'll get spammy otherwise. That part needs to be reworked a bit in the future anyway.

Otherwise, why do the diagnostics change?

Comment on lines +487 to +489
if event.attribute_id == self.CURRENT_LEVEL:
    self.dispatch_level_change(SIGNAL_SET_LEVEL, event.value)
super()._handle_attribute_updated_event(event)

This level dispatching should likely be cleaned up to use ClusterAttributeUpdatedEvent directly, but we specifically avoided emitting a ZHA event in HA for these before. Now, I think we do.

This would be a lot of logbook entries when turning a light on/off (even with a default 1s transition).

For now, I think we can just add that else back in before calling super()._handle_attribute_updated_event(event) (if it's not the current level attribute that was updated), so:

Suggested change

Before:

    if event.attribute_id == self.CURRENT_LEVEL:
        self.dispatch_level_change(SIGNAL_SET_LEVEL, event.value)
    super()._handle_attribute_updated_event(event)

After:

    if event.attribute_id == self.CURRENT_LEVEL:
        self.dispatch_level_change(SIGNAL_SET_LEVEL, event.value)
    else:
        super()._handle_attribute_updated_event(event)

Comment on lines -472 to -475
if attrid == self.CURRENT_LEVEL:
    self.dispatch_level_change(SIGNAL_SET_LEVEL, value)
else:
    super().attribute_updated(attrid, value, timestamp)

For reference, state before.

    | AttributeUpdatedEvent
    | AttributeWrittenEvent,
) -> None:
    """Handle an attribute updated on this cluster."""

Side note: it could be nice to add some sort of EMIT_EVENTS attribute for ClusterHandlers in general, so we don't need this empty override just to block emitting ZHA events. We could just set EMIT_EVENTS = False here.
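A minimal sketch of what that flag could look like. The class names, event shape, and the `emitted` list are assumptions for illustration only, not the actual ZHA `ClusterHandler` API:

```python
# Hypothetical sketch of the proposed EMIT_EVENTS flag; class names, the
# event shape, and `emitted` are assumptions, not the real ZHA API.
from dataclasses import dataclass
from typing import Any


@dataclass
class AttributeUpdatedEvent:
    attribute_id: int
    value: Any


class ClusterHandler:
    # Subclasses opt out of emitting ZHA events by overriding this flag.
    EMIT_EVENTS: bool = True

    def __init__(self) -> None:
        self.emitted: list[AttributeUpdatedEvent] = []

    def _handle_attribute_updated_event(self, event: AttributeUpdatedEvent) -> None:
        if not self.EMIT_EVENTS:
            return  # blocks the ZHA event without needing an empty override
        self.emitted.append(event)


class QuietHandler(ClusterHandler):
    EMIT_EVENTS = False  # no ZHA events from this cluster handler
```

With this, a handler that today needs an empty `_handle_attribute_updated_event` override could instead just declare `EMIT_EVENTS = False`.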


This was a nice lesson on how to spell "instantaneous_demand" 😄

Comment on lines 604 to 643
async def _get_attributes(
    self,
    raise_exceptions: bool,
    attributes: list[str],
    from_cache: bool = True,
    only_cache: bool = True,
) -> dict[int | str, Any]:
    """Get the values for a list of attributes."""
    manufacturer = None
    manufacturer_code = self._endpoint.device.manufacturer_code
    if self.cluster.cluster_id >= 0xFC00 and manufacturer_code:
        manufacturer = manufacturer_code
    chunk = attributes[:CLUSTER_READS_PER_REQ]
    rest = attributes[CLUSTER_READS_PER_REQ:]
    result = {}
    while chunk:
        try:
            self.debug("Reading attributes in chunks: %s", chunk)
            read, _ = await RETRYABLE_REQUEST_DECORATOR(
                self.cluster.read_attributes
            )(
                chunk,
                allow_cache=from_cache,
                only_cache=only_cache,
                manufacturer=manufacturer,
            )
            self.debug("Got attributes: %s", read)
            result.update(read)
        except (TimeoutError, zigpy.exceptions.ZigbeeException) as ex:
            self.debug(
                "failed to get attributes '%s' on '%s' cluster: %s",
                chunk,
                self.cluster.ep_attribute,
                str(ex),
            )
            if raise_exceptions:
                raise
        chunk = rest[:CLUSTER_READS_PER_REQ]
        rest = rest[CLUSTER_READS_PER_REQ:]
    return result
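The manual chunk/rest bookkeeping in that loop could be folded into a small generator. A sketch only; `chunked` is not part of the ZHA codebase:

```python
# Sketch of a chunking helper mirroring the chunk/rest splitting above;
# `chunked` is illustrative and not part of the ZHA codebase.
from collections.abc import Iterator


def chunked(items: list, size: int) -> Iterator[list]:
    """Yield successive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start : start + size]


# The read loop would then become roughly:
#     for chunk in chunked(attributes, CLUSTER_READS_PER_REQ):
#         ... read_attributes(chunk, ...) ...
```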

Side note: We should also update the batched get_attributes when initializing a cluster handler to properly handle manufacturer codes based on attribute definitions. The code is really flawed right now.

configure_reporting should likely be adjusted as well (and likely everything where we do the self.cluster.cluster_id >= 0xFC00 and manufacturer_code check).

We can also do that in a separate PR.
