feat(workflow_engine): Action Filter cache invalidation#111060
saponifi3d merged 11 commits into master
Conversation
…ed in all the receiver tests, as well as wrappers / helpers for the action filter tests
…lter conditions are updated.
…and add a pre_delete signal
…e could probably limit to the attributes for processing only, but having a broader cache invalidation strategy is a bit safer
```python
# TODO -- see if we can determine if the DataCondition is
# an ActionFilter, to skip querying for the workflow.
```
I don't think there is, unfortunately — since I was recently doing the workflow -> rule serializer, I checked how it's handled there (link), and it's just based on whether the workflow has a when condition group
my thought was that the handler of each data condition has a group, which is what we use to send them to different sections in the UI. I think if we can get access to the handler, we could filter beforehand. If it's an operator condition type, then yeah, we'd have to check the WorkflowDataConditionGroup table.
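The idea above could be sketched roughly like this — classify a condition by its handler's UI group, and only fall back to querying the `WorkflowDataConditionGroup` table when the handler doesn't tell us. All type and group names here are hypothetical stand-ins, not the actual workflow_engine types:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical group name for conditions shown in the action-filter UI section.
ACTION_FILTER_GROUP = "action_filter"

@dataclass
class ConditionHandler:
    group: str

@dataclass
class DataCondition:
    type: str
    handler: Optional[ConditionHandler]  # operator condition types carry no handler

def is_action_filter(condition: DataCondition) -> Optional[bool]:
    """Best-effort classification from the handler alone.

    Returns True/False when the handler's group answers the question,
    or None when we'd still need to query WorkflowDataConditionGroup
    (e.g. for operator condition types with no handler).
    """
    if condition.handler is None:
        return None  # unknown: fall back to the DB lookup
    return condition.handler.group == ACTION_FILTER_GROUP
```

With that in place, the invalidation path could skip the workflow query whenever `is_action_filter` returns `False`, and only hit the table on `None`.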
```python
workflow = self.create_workflow()
action_filters = self.create_action_filters_for_workflow(
    workflow=workflow,
    num_conditions=1,
)
workflow_two = self.create_workflow()
action_filters_two = self.create_action_filters_for_workflow(
    workflow=workflow_two,
    num_filters=2,
    num_conditions=1,
)
```
not a huge deal but any way to reduce the duplication of these across tests?
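One way to trim that duplication would be a small helper that bundles the repeated setup. This is only a sketch: it assumes the `create_workflow` / `create_action_filters_for_workflow` helpers shown in the snippet above, and the helper name is made up.

```python
def create_workflow_with_action_filters(case, *, num_filters=1, num_conditions=1):
    """Build a workflow plus its action filters in one call.

    `case` is the test case exposing the create_* helpers used in the
    snippet above (names assumed from the diff, not verified).
    """
    workflow = case.create_workflow()
    action_filters = case.create_action_filters_for_workflow(
        workflow=workflow,
        num_filters=num_filters,
        num_conditions=num_conditions,
    )
    return workflow, action_filters
```

Each test would then collapse its setup to one line per workflow, e.g. `workflow_two, action_filters_two = create_workflow_with_action_filters(self, num_filters=2)`.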
```python
# Warm the cache
cache_data: ActionFiltersByWorkflow = {workflow.id: condition_groups}
cache_keys = self.populate_action_filter_cache(cache_data)
```
Could this be like the previous test, which asserts there is data in the cache in the "warming" step, just to show there is something to invalidate?
it could, but I tend to keep test cases isolated when the logic differs — that way if one unit of logic fails, one test should fail (🤞). It's a bit more DB overhead to do that, but I've found it really useful when refactoring in the future.
kcons
left a comment
This looks reasonable to me; I think the main risk here is likely cache inconsistency due to some overlooked nuance or a behavior tweak a few months from now.
But we have a low enough write rate, a short enough TTL, and some tolerance for slightly stale data, so small problems range from theoretically observable to unreproducible — maybe it's not worth investing much thought in.
I guess we could follow up with a low-probability cache check that pretends there was a cache hit, compares it against a fresh read, and, if there's a difference, spawns a task to check again shortly (and maybe again) to see how eventual our consistency is. Not hard to do, but probably not worth the complexity until we have reason to believe something might be fishy.
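That sampled consistency check could look something like the sketch below. Nothing here is from the PR — the function name, sampling rate, and callback shapes are all illustrative:

```python
import random

def maybe_verify_cache(workflow_id, cached_value, fetch_fresh, enqueue_recheck,
                       probability=0.01, rng=random.random):
    """With low probability, compare a cached value against a fresh read
    of the source of truth; on a mismatch, schedule a follow-up check.

    Returns True (consistent), False (mismatch found, recheck enqueued),
    or None (check skipped — the common case).
    """
    if rng() >= probability:
        return None  # most calls do nothing, keeping overhead negligible
    fresh = fetch_fresh(workflow_id)
    if fresh != cached_value:
        # e.g. spawn a task to re-check shortly (and maybe again) to see
        # how eventual the consistency actually is
        enqueue_recheck(workflow_id)
        return False
    return True
```

Injecting `probability` and `rng` keeps the sampling deterministic in tests while the production default stays cheap.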
Description
Signal handlers for cache invalidation. There's no rollout flag included here yet: any exceptions or failures are limited to the scope of these invalidation methods, which keeps things a little simpler.
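The invalidation pattern these handlers implement can be sketched framework-free as below. In the actual PR these are Django `post_save` / `pre_delete` receivers on the relevant models; the cache class, key format, and function names here are purely illustrative:

```python
class ActionFilterCache:
    """Toy stand-in for the real cache backend, keyed per workflow."""

    def __init__(self):
        self._store = {}

    def key(self, workflow_id: int) -> str:
        return f"workflow:{workflow_id}:action_filters"

    def set(self, workflow_id, value):
        self._store[self.key(workflow_id)] = value

    def get(self, workflow_id):
        return self._store.get(self.key(workflow_id))

    def delete(self, workflow_id):
        self._store.pop(self.key(workflow_id), None)

cache = ActionFilterCache()

def on_data_condition_saved(workflow_id: int) -> None:
    # post_save-style receiver: drop the whole entry rather than patch it
    # in place — a broader invalidation is safer than a targeted update.
    cache.delete(workflow_id)

def on_data_condition_deleted(workflow_id: int) -> None:
    # pre_delete-style receiver: same invalidation, fired before the row
    # disappears so the owning workflow can still be resolved.
    cache.delete(workflow_id)
```

The next read after either receiver fires misses the cache and repopulates from the database, which is what bounds any staleness to a single write.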
When we invalidate the Action Filters / Data conditions:
Remaining Work
Update the `evaluate_workflows_action_filters` method to read from the cache with a rollout flag.