refactor!: dropping pagination from getLogsByTags #19161
Conversation
Heads up, that's not correct. The limit was enforced at the schema level (maybe not the wisest decision in retrospect, since it's unclear from the code), where we'd default the …
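The comment above describes a default limit being applied during schema parsing rather than in the handler. A minimal sketch of that pattern, assuming hypothetical names and a hypothetical default value (none of this is from the Aztec codebase):

```typescript
// Hypothetical request shape for the getLogsByTags endpoint.
type GetLogsRequest = { tags: string[]; limitPerTag?: number };

// Assumed default for illustration only.
const DEFAULT_LIMIT_PER_TAG = 10;

// The default is filled in while parsing the request against the schema,
// not in the handler itself -- which is why it's easy to miss when reading
// the handler code.
function parseGetLogsRequest(raw: GetLogsRequest): Required<GetLogsRequest> {
  return {
    tags: raw.tags,
    limitPerTag: raw.limitPerTag ?? DEFAULT_LIMIT_PER_TAG,
  };
}
```

Because the handler only ever sees the parsed request, it never observes an undefined `limitPerTag`, even though no limit appears to be enforced in the handler's own code.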
In this PR I drop pagination from the `getLogsByTags` endpoint, because pagination doesn't really make sense here: we only get more logs per tag if there are multiple devices sending logs from a given sender to a recipient. This is the final of 3 PRs in which I clean up the `getLogsByTags` endpoint:

1. in the first PR I included the block timestamp in the return value,
2. in the second PR I typed the tag arg of the function as `SiloedTag`,
3. in this PR I drop pagination from the endpoint.

# Potential DoS vector

The log tag is unconstrained and hence can be arbitrarily chosen by the sender. This introduces a potential DoS vector because the node currently loads all the logs into memory and returns them. It would therefore be feasible to re-use the same tag thousands of times and then spam a node with requests for this tag.

I think the solution here is to simply define the maximum number of logs a node is willing to store per tag in the db. Given that there is no legitimate use case for log reuse, I think making it even as low as 5 would be fine.

Note that this PR didn't introduce this DoS vector: even though I dropped pagination here, it didn't seem to be enforced before (if the `limitPerTag` arg was undefined we just returned everything). For this reason I think we can merge this and tackle the issue in a followup PR.
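The proposed mitigation (cap how many logs a node will store per tag, instead of paginating reads) can be sketched as follows. All names, the in-memory store, and the cap value of 5 are hypothetical illustrations, not the actual Aztec implementation:

```typescript
// Assumed per-tag storage cap; the PR description suggests even 5 would be fine,
// since there is no legitimate use case for tag reuse.
const MAX_LOGS_PER_TAG = 5;

class LogStore {
  private logsByTag = new Map<string, string[]>();

  // Enforce the cap at write time: once a tag already holds MAX_LOGS_PER_TAG
  // entries, further logs for that tag are dropped.
  addLog(tag: string, log: string): boolean {
    const logs = this.logsByTag.get(tag) ?? [];
    if (logs.length >= MAX_LOGS_PER_TAG) {
      return false; // cap reached, log not stored
    }
    logs.push(log);
    this.logsByTag.set(tag, logs);
    return true;
  }

  // With the cap enforced on writes, reads need no pagination:
  // each tag returns at most MAX_LOGS_PER_TAG logs.
  getLogsByTags(tags: string[]): string[][] {
    return tags.map(tag => this.logsByTag.get(tag) ?? []);
  }
}
```

Bounding the data at write time means a malicious sender reusing one tag thousands of times cannot make `getLogsByTags` load an unbounded number of logs into memory, which is the DoS vector described above.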