SREP-2079: Making sure that alerts are processed in an atomic way #175
base: master
Conversation
@Nikokolas3270: This pull request references SREP-2079, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the bug to target the "4.22.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: Nikokolas3270. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Codecov Report: ❌ Patch coverage is
Additional details and impacted files:
@@ Coverage Diff @@
## master #175 +/- ##
==========================================
+ Coverage 53.95% 55.67% +1.71%
==========================================
Files 23 23
Lines 1820 1895 +75
==========================================
+ Hits 982 1055 +73
- Misses 780 785 +5
+ Partials 58 55 -3
Force-pushed 180b239 to f12430d (Compare)
@Nikokolas3270: all tests passed! Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
return err
for _, limitedSupportReason := range limitedSupportReasons {
// If the reason matches the fleet notification LS reason, remove it
// TODO(ngrauss): The limitedSupportReason.ID() should be stored in the ManagedFleetNotificationRecord record item object
There is a TODO here; is this something outside the scope of this PR? :)
Yes, I will open a new ticket for that.
Essentially the custom resource definitions should be changed to make sure that the LS which is removed is really the one signalled by the code.
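For illustration, a minimal sketch of what that follow-up could look like once the reason ID is stored on the record item; the field and helper below are hypothetical and not part of this PR:

```go
package sketch

// removeTrackedLimitedSupport removes only the LS reason whose ID was stored
// on the record item when the notification was sent, instead of matching the
// reason by its text. storedReasonID and removeReason are hypothetical
// stand-ins for what the new ticket would introduce.
func removeTrackedLimitedSupport(storedReasonID string, activeReasonIDs []string, removeReason func(id string) error) error {
	for _, id := range activeReasonIDs {
		if id == storedReasonID {
			return removeReason(id)
		}
	}
	// The tracked reason is no longer present; nothing to remove.
	return nil
}
```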
return c.inPlaceStatusUpdate()
}

// TODO(ngrauss): to be removed
There is a TODO here; is this something outside the scope of this PR? :)
Yes, once the CRD model is changed we won't need to restore the status the way it was anymore.
func (c *fleetNotificationContext) inPlaceStatusUpdate() error {
// c.notificationRecordItem is a pointer but it is not part of the managedFleetNotificationRecord object
// Below code makes sure to update the oav1alpha1.NotificationRecordItem inside the managedFleetNotificationRecord object with the latest values.
// TODO(ngrauss): refactor GetNotificationRecordItem method to return a reference to the object inside the managedFleetNotificationRecord
todo? ;)
Yes, one more todo :)
The "record item" returned GetNotificationRecordItem is a copy of the one in the oav1alpha1.ManagedFleetNotificationRecord object.
Hence changing it (as done in fleetNotificationContext.updateNotificationStatus method) does not change the one in the root object (oav1alpha1.ManagedFleetNotificationRecord); this is why the first part of this method is about to find back the record item in the root record object and update it with the one in the context.
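To make the copy-versus-reference issue concrete, here is a simplified sketch of the write-back described above; the types and field names are stand-ins, not the actual oav1alpha1 definitions:

```go
package sketch

// Simplified stand-ins for the CRD types; only the fields needed to show the
// write-back are included.
type NotificationRecordItem struct {
	NotificationName              string
	FiringNotificationSentCount   int
	ResolvedNotificationSentCount int
}

type ManagedFleetNotificationRecordStatus struct {
	NotificationRecordItems []NotificationRecordItem
}

// writeBackRecordItem locates the item matching the (modified) copy held by
// the context and overwrites it inside the root record status, so that the
// later status update actually persists the change.
func writeBackRecordItem(status *ManagedFleetNotificationRecordStatus, updated NotificationRecordItem) bool {
	for i := range status.NotificationRecordItems {
		if status.NotificationRecordItems[i].NotificationName == updated.NotificationName {
			status.NotificationRecordItems[i] = updated
			return true
		}
	}
	return false // not found: caller should treat this as an error
}
```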
notificationRecordItem.FiringNotificationSentCount > notificationRecordItem.ResolvedNotificationSentCount
// Counters are identical when no limited support is active
// Sent counter is higher than resolved counter by 1 when limited support is active
// TODO(ngrauss): record the limited support reason ID in the NotificationRecordItem object to be able to
todo? ;)
Yes, comparing counters to know whether an LS was sent is not the proper way to track state.
Ideally there should be:
- A boolean field to know if the alert is firing or not. This field would be updated in an atomic way.
- A string field containing the LS reason ID. This field wouldn't be updated in an atomic way. It would be possible to have the above "firing" boolean field set to "true" while this LS ID field is still empty, for instance if the LS failed to be sent.
The ticket I discussed before should cover that.
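Purely as an illustration of that proposal (none of these fields or helpers exist in the current CRD), the record item could carry the two fields, and the checks become explicit state reads instead of counter comparisons:

```go
package sketch

// NotificationRecordItemProposed is an illustrative shape for the follow-up
// ticket; none of these fields exist in the current CRD.
type NotificationRecordItemProposed struct {
	// Firing would be flipped with an atomic test-and-set when the alert
	// starts or stops firing.
	Firing bool
	// LimitedSupportReasonID records which LS reason was posted; it is not
	// updated atomically and may stay empty while Firing is true, e.g. if
	// sending the LS failed.
	LimitedSupportReasonID string
}

// alertAlreadyFiring replaces the counter comparison
// (FiringNotificationSentCount > ResolvedNotificationSentCount) with an
// explicit state read.
func alertAlreadyFiring(item NotificationRecordItemProposed) bool {
	return item.Firing
}

// limitedSupportPosted tells whether an LS reason was actually sent and thus
// needs to be removed on resolve.
func limitedSupportPosted(item NotificationRecordItemProposed) bool {
	return item.LimitedSupportReasonID != ""
}
```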
if resolvedCondition != nil {
	lastWebhookCallTime := resolvedCondition.LastTransitionTime

	if nowTime.Before(lastWebhookCallTime.Add(3 * time.Minute)) {
question: is the 3 minutes here intentional? The comment above says ServiceLogSent may be updated within up to 2 minutes
learning question for me, where does the 2 minutes max allowed time come from? :D
It comes from here:
https://pkg.go.dev/k8s.io/client-go/util/retry#RetryOnConflict
And the associated call to this retry.RetryOnConflict function later in the code.
Note that the sleep duration between retries is specified there:
https://pkg.go.dev/k8s.io/client-go/util/retry#pkg-variables
The DefaultRetry does not specify a Cap, but doing the math, knowing that:
- The initial duration is 10 ms
- The multiplication factor is 1 at minimum and 1.1 at maximum due to jittering
- There are at most 5 retries (so 4 sleeps between retries)
here is the maximum cumulative sleep time:
4 × 1.1^4 × 10 ms ≈ 60 ms
Now for each retry there are 2 API calls (so 10 calls in total):
- One to get the object
- One to update the object status
Each of those calls may take time to return... but it is not yet clear whether there is a timeout defined in the kubeconfig used by the ocm-agent pods... for sure each call must not exceed 12 seconds to stay within the 2-minute window discussed in the comment.
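For context, a minimal sketch of the RetryOnConflict pattern being discussed, assuming a controller-runtime style client; the actual client wiring and object types in ocm-agent may differ:

```go
package sketch

import (
	"context"

	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updateStatusWithRetry shows the retry.RetryOnConflict pattern referenced
// above. retry.DefaultRetry performs up to 5 attempts, sleeping ~10 ms
// (plus up to 10% jitter) between them, which is where the ~60 ms figure
// comes from. The obj and mutate parameters are generic stand-ins for the
// ManagedFleetNotificationRecord handling in this PR.
func updateStatusWithRetry(ctx context.Context, c client.Client, key client.ObjectKey, obj client.Object, mutate func()) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// First API call per attempt: re-read the latest resource version.
		if err := c.Get(ctx, key, obj); err != nil {
			return err
		}
		// Apply the test-and-set mutation on the freshly read object.
		mutate()
		// Second API call per attempt: write the status back. A 409 Conflict
		// makes RetryOnConflict sleep and run this closure again.
		return c.Status().Update(ctx, obj)
	})
}
```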
if c.retriever.fleetNotification.ResendWait > 0 {
	dontResendDuration = time.Duration(c.retriever.fleetNotification.ResendWait) * time.Hour
} else {
	dontResendDuration = time.Duration(3) * time.Minute
thought: if this is the default resendWait time, is 3 minutes a bit low?
Not sure IIUC: if an alert fires every 4 minutes, will this resend every time?
This 3-minute resend time is only there to avoid duplicates; it is not there to handle a flickering alert.
You see, AlertManager may send the same alert to OCM agent several times (due to redundancy).
Let's imagine we are sending service logs and not LS: in that case the "resolved" counter is not updated, so there is nothing in the record we can use to know whether the alert was already firing.
The first alert will update LastTransitionTime with the current time and an SL will be sent.
The second alert will try to do the same, but as LastTransitionTime and the current time are more or less the same, it won't.
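A minimal sketch of that deduplication window, with illustrative names rather than the exact ocm-agent code:

```go
package sketch

import "time"

// shouldResend reports whether a new notification may be sent, given when the
// previous one was recorded. A duplicate delivery of the same alert inside
// the "don't resend" window is dropped instead of producing another service
// log.
func shouldResend(lastTransitionTime, now time.Time, resendWaitHours int) bool {
	// Default window of 3 minutes: only meant to absorb duplicate deliveries,
	// not to rate-limit a genuinely flickering alert.
	dontResendDuration := 3 * time.Minute
	if resendWaitHours > 0 {
		dontResendDuration = time.Duration(resendWaitHours) * time.Hour
	}
	return !now.Before(lastTransitionTime.Add(dontResendDuration))
}
```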
RHOBS or classic webhook may process an alert twice, sequentially or in parallel, due to Prometheus or AlertManager redundancy. This change makes sure that the custom resources used to track notifications are tested and set (TAS) in an atomic way. This avoids sending notifications for duplicate alerts.
What type of PR is this?
bug
What this PR does / why we need it?
Customers are currently spammed and receive the same notification several times
Which Jira/Github issue(s) this PR fixes?
Fixes SREP-2079
Special notes for your reviewer:
For RHOBS webhook:
- lastTransitionTime is now only incremented when sending a notification

For classic webhook:
- AlertFiring and AlertResolved conditions are test and set in an atomic way.
- AlertFiring timestamp is only updated when the condition status changes (see the sketch at the end of these notes).
- AlertResolved timestamp changes any time the webhook is called.
- ServiceLogSent condition is not processed in an atomic way, as this condition is not used to determine whether the alert was already firing or not.
- … ServiceLogSent condition which is processed later, asynchronously.

Pre-checks (if applicable):
- Ran the make generate command locally to validate code changes -> There is no make generate command.
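As referenced in the classic webhook notes above, here is a minimal sketch (with simplified, hypothetical types) of the timestamp rule: the condition's LastTransitionTime is only bumped when the status flips, unless the caller asks to refresh it on every webhook call:

```go
package sketch

import "time"

// NotificationCondition is a simplified, hypothetical stand-in for the
// condition type used by the ocm-agent CRDs.
type NotificationCondition struct {
	Type               string
	Status             string // "True" / "False"
	LastTransitionTime time.Time
}

// setCondition updates (or adds) a condition. The timestamp is only bumped
// when the status actually flips (AlertFiring behaviour), unless
// alwaysBumpTimestamp is true (AlertResolved behaviour, refreshed on every
// webhook call).
func setCondition(conditions []NotificationCondition, condType, status string, now time.Time, alwaysBumpTimestamp bool) []NotificationCondition {
	for i := range conditions {
		if conditions[i].Type != condType {
			continue
		}
		if conditions[i].Status != status || alwaysBumpTimestamp {
			conditions[i].LastTransitionTime = now
		}
		conditions[i].Status = status
		return conditions
	}
	// Condition not present yet: add it with the current timestamp.
	return append(conditions, NotificationCondition{Type: condType, Status: status, LastTransitionTime: now})
}
```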