ui: Migrating to Flechette for arrow record parsing #6176
base: main
Conversation
✅ Meticulous spotted 0 visual differences across 328 screens tested: view results. Meticulous evaluated ~5 hours of user flows against your PR. Expected differences? Click here. Last updated for commit 7f8f2cb. This comment will update as new commits are pushed.
yomete left a comment
nice work!
const table: Table<any> = useMemo(() => {
const result = tableFromIPC(arrow.record);
const table: Table = useMemo(() => {
// Copy to aligned buffer only if byteOffset is not 8-byte aligned (required for BigUint64Array)
FYI, I can't remember the exact configuration, but this is something we can enforce (at least in Rust) on the backend.
Also, I first mistook this for a buffer carrying only the uint64 array, which would always be naturally 8-byte aligned since every entry is 8 bytes; the copy is needed because the record as a whole may contain arrays that are not necessarily 8-byte aligned. This probably deserves some extra detail in the comment.
Never mind about the second part now that I've read the second PR.
return [];
}
// Use Set to collect unique filenames (Flechette decodes dictionaries upfront)
That's very unfortunate. It shouldn't matter in this case, since the backend should already return unique items, but maybe we should open an issue about this?
Raise an issue to not decode the dictionaries upfront?
As in, so that we can directly access the underlying dictionary like we did before.
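For illustration, a minimal sketch of the Set-based deduplication under discussion. This assumes the `@uwdata/flechette` package's `Table#getChild` / `Column#toArray` API and a hypothetical `filename` column name; it is not the PR's exact code.

import { Table } from '@uwdata/flechette';

// Sketch: collect unique filenames from a decoded column. Flechette
// materializes dictionary-encoded values up front, so instead of reading the
// dictionary directly (as arrow-js allowed), deduplicate the decoded values.
function uniqueFilenames(table: Table): string[] {
  const names = new Set<string>();
  for (const value of table.getChild('filename')?.toArray() ?? []) {
    if (value != null) {
      names.add(String(value));
    }
  }
  return Array.from(names);
}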
const maxDepth = useMemo(() => {
if (callersFlamegraphData?.arrow != null) {
const table = tableFromIPC(callersFlamegraphData.arrow.record);
// Copy to aligned buffer only if byteOffset is not 8-byte aligned (required for BigUint64Array)
Since this happens a couple of times throughout the codebase now, maybe we can extract it into a util function?
done
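For reference, a minimal sketch of what such a util could look like; the `ensureAligned` name is hypothetical and not necessarily what the PR ended up with.

// Sketch of an alignment helper (hypothetical name). BigUint64Array views
// require the underlying byteOffset to be a multiple of 8, so copy the IPC
// bytes into a fresh buffer (which always starts at offset 0) when needed.
export function ensureAligned(bytes: Uint8Array): Uint8Array {
  if (bytes.byteOffset % 8 === 0) {
    return bytes;
  }
  // Uint8Array#slice copies into a new ArrayBuffer starting at byteOffset 0.
  return bytes.slice();
}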
Changes in this PR should bring significant (>2x) improvements in reading the Arrow record data, but introduce a slight degradation (~0.7x) in table parsing speed.
This degradation is caused by the array copies needed to align the bytes so the BigInt data can be read.
A fix in #6177 improves that parsing performance, speeding up parsing by 1.3x.
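To make that concrete, a hedged sketch of the align-then-parse step, assuming Flechette's `tableFromIPC` entry point from `@uwdata/flechette`; the inline alignment check mirrors the helper sketched earlier.

import { Table, tableFromIPC } from '@uwdata/flechette';

// Sketch: parse an Arrow IPC record with Flechette. The copy is only taken
// when the record's byteOffset breaks the 8-byte alignment that
// BigUint64Array views require; this copy is the source of the ~0.7x
// table-parsing slowdown mentioned above.
function parseRecord(record: Uint8Array): Table {
  const aligned = record.byteOffset % 8 === 0 ? record : record.slice();
  return tableFromIPC(aligned);
}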