Zigpy attribute event and better manufacturer code handling #605
Conversation
TheJulianJES left a comment
This looks good. Only the comment about emitting ZHA events for LevelControl changes should be addressed IMO, as it'll get spammy otherwise. That part needs to be reworked a bit in the future anyway.
Otherwise, why do the diagnostics change?
if event.attribute_id == self.CURRENT_LEVEL:
    self.dispatch_level_change(SIGNAL_SET_LEVEL, event.value)
super()._handle_attribute_updated_event(event)
This level dispatching should likely be cleaned up to use ClusterAttributeUpdatedEvent directly, but we specifically avoided emitting a ZHA event in HA for these before. Now, I think we do.
This would be a lot of logbook entries when turning a light on/off (even with the default 1s transition).
For now, I think we can just add that else back in before calling super()._handle_attribute_updated_event(event) (if it's not the current level attribute that was updated), so:
Suggested change:

Before:
if event.attribute_id == self.CURRENT_LEVEL:
    self.dispatch_level_change(SIGNAL_SET_LEVEL, event.value)
super()._handle_attribute_updated_event(event)

After:
if event.attribute_id == self.CURRENT_LEVEL:
    self.dispatch_level_change(SIGNAL_SET_LEVEL, event.value)
else:
    super()._handle_attribute_updated_event(event)
if attrid == self.CURRENT_LEVEL:
    self.dispatch_level_change(SIGNAL_SET_LEVEL, value)
else:
    super().attribute_updated(attrid, value, timestamp)
For reference, state before.
    | AttributeUpdatedEvent
    | AttributeWrittenEvent,
) -> None:
    """Handle an attribute updated on this cluster."""
Side note: It could be nice to add some sort of EMIT_EVENTS attribute for ClusterHandlers in general, so we don't need this empty override just to block emitting ZHA events.
We could then just set EMIT_EVENTS = False here.
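A rough sketch of that idea (the EMIT_EVENTS name and the simplified base-class shape are assumptions for illustration, not the actual ZHA ClusterHandler):

# Hypothetical sketch, not the actual ZHA implementation.
class ClusterHandler:
    EMIT_EVENTS: bool = True  # hypothetical class-level opt-out flag

    def _handle_attribute_updated_event(self, event) -> None:
        if not self.EMIT_EVENTS:
            return  # handler opted out of ZHA events for attribute updates
        ...  # existing ZHA event emission / entity update path


class SomeQuietClusterHandler(ClusterHandler):
    # Instead of an empty _handle_attribute_updated_event override:
    EMIT_EVENTS = False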
This was a nice lesson on how to spell "instantaneous_demand" 😄
async def _get_attributes(
    self,
    raise_exceptions: bool,
    attributes: list[str],
    from_cache: bool = True,
    only_cache: bool = True,
) -> dict[int | str, Any]:
    """Get the values for a list of attributes."""
    manufacturer = None
    manufacturer_code = self._endpoint.device.manufacturer_code
    if self.cluster.cluster_id >= 0xFC00 and manufacturer_code:
        manufacturer = manufacturer_code
    chunk = attributes[:CLUSTER_READS_PER_REQ]
    rest = attributes[CLUSTER_READS_PER_REQ:]
    result = {}
    while chunk:
        try:
            self.debug("Reading attributes in chunks: %s", chunk)
            read, _ = await RETRYABLE_REQUEST_DECORATOR(
                self.cluster.read_attributes
            )(
                chunk,
                allow_cache=from_cache,
                only_cache=only_cache,
                manufacturer=manufacturer,
            )
            self.debug("Got attributes: %s", read)
            result.update(read)
        except (TimeoutError, zigpy.exceptions.ZigbeeException) as ex:
            self.debug(
                "failed to get attributes '%s' on '%s' cluster: %s",
                chunk,
                self.cluster.ep_attribute,
                str(ex),
            )
            if raise_exceptions:
                raise
        chunk = rest[:CLUSTER_READS_PER_REQ]
        rest = rest[CLUSTER_READS_PER_REQ:]
    return result
Side note: We should also update the batched get_attributes call that runs when initializing a cluster handler so it properly handles manufacturer codes based on the attribute definitions. The code is really flawed right now.
configure_reporting should likely be adjusted as well (and likely everything where we do the self.cluster.cluster_id >= 0xFC00 and manufacturer_code check).
We can also do that in a separate PR.
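A rough sketch of the direction this could take (the per-attribute lookup and the is_manufacturer_specific field name are assumptions for illustration, not code from this PR):

# Hypothetical sketch, not from this PR: resolve the manufacturer code per
# attribute from its zigpy definition instead of only checking whether the
# cluster id is in the manufacturer-specific range (>= 0xFC00).
def _manufacturer_for_attribute(self, name_or_id: int | str) -> int | None:
    try:
        attr_def = self.cluster.find_attribute(name_or_id)
    except KeyError:
        return None
    # The attribute definition is assumed to expose is_manufacturer_specific;
    # fall back to no manufacturer code if it does not.
    if getattr(attr_def, "is_manufacturer_specific", False):
        return self._endpoint.device.manufacturer_code
    return None

Reads and configure_reporting could then group attributes by the returned code instead of sending every request with one blanket manufacturer code.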
ZHA fixes for zigpy/zigpy#1719.
A few tests needed tweaking because of attribute "collisions", but otherwise nothing is breaking as far as I can tell, and a few new entities are showing up for existing test devices.
The most visible change is switching from attribute_updated to explicit events for every possible attribute change type.
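As a simplified illustration of that switch (condensed from the LevelControl snippets in the review above, not the exact diff):

# Before: zigpy called a generic callback with raw ids and values.
def attribute_updated(self, attrid: int, value, timestamp) -> None:
    if attrid == self.CURRENT_LEVEL:
        self.dispatch_level_change(SIGNAL_SET_LEVEL, value)
    else:
        super().attribute_updated(attrid, value, timestamp)

# After: zigpy emits typed attribute events and cluster handlers receive them.
def _handle_attribute_updated_event(
    self,
    event: ClusterAttributeUpdatedEvent
    | AttributeUpdatedEvent
    | AttributeWrittenEvent,
) -> None:
    if event.attribute_id == self.CURRENT_LEVEL:
        self.dispatch_level_change(SIGNAL_SET_LEVEL, event.value)
    else:
        super()._handle_attribute_updated_event(event)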