Parquet: Add opt-in uncompressed row group size tracking#16327
Open
nssalian wants to merge 1 commit into
Conversation
Contributor
Author
CC: @pvary @steveloughran @huaxingao PTAL
}
}

private void checkSizeDefault() {
Contributor
I'd give it a clearer name that makes it obvious this is the size on the filesystem; "default" just says it's the default option, not what it does.
Contributor
Author
Let me think of a better name.
@ParameterizedTest
@ValueSource(strings = {"gzip", "snappy", "zstd", "uncompressed"})
public void testRowGroupSizeEnforcedWhenCompressionEnabled(String codec) throws IOException {
Contributor
Is there an equivalent test which verifies that, with the default setting, it's the compressed byte count that's used? That's critical for regression testing.
Closes: #16325
Rationale for this Change
Adds `write.parquet.row-group-size-check-uncompressed` (default `false`) to accurately enforce `write.parquet.row-group-size-bytes` when using compressing codecs (GZIP, ZSTD, etc.). `ParquetWriter.checkSize()` uses `writeStore.getBufferedSize()`, which reports compressed bytes for flushed pages. With effective compression, the writer never sees the target exceeded because it is comparing compressed data against an uncompressed limit, so row groups grow unbounded.

What changes are included in this PR?
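The failure mode can be illustrated with a small, self-contained simulation. This is a hypothetical sketch, not the Iceberg source: `bufferedCompressedSize` stands in for what `writeStore.getBufferedSize()` reports once pages are compressed, and the 10:1 ratio and record size are made-up numbers.

```java
// Hypothetical sketch (not the Iceberg source): with a 10:1 compression
// ratio, a size check against compressed bytes admits roughly 10x the
// configured target before the limit appears exceeded.
public class RowGroupCheckSketch {
    static final long TARGET_BYTES = 128L * 1024 * 1024; // row-group-size-bytes

    // Stand-in for writeStore.getBufferedSize() after pages are compressed.
    static long bufferedCompressedSize(long uncompressedWritten, double ratio) {
        return (long) (uncompressedWritten / ratio);
    }

    public static void main(String[] args) {
        double ratio = 10.0;           // assumed effective compression ratio
        long recordSize = 1024 * 1024; // 1 MiB uncompressed per record
        long uncompressed = 0;
        // Mimic the pre-patch checkSize(): keep buffering until the
        // *compressed* size crosses the uncompressed target.
        while (bufferedCompressedSize(uncompressed, ratio) < TARGET_BYTES) {
            uncompressed += recordSize;
        }
        // The row group ends up holding ~10x the target in uncompressed bytes.
        System.out.println(uncompressed / (1024 * 1024) + " MiB buffered uncompressed");
    }
}
```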
When `write.parquet.row-group-size-check-uncompressed=true`:
- The writer calls `getBufferedSize()` before and after `model.write()` for each record. Between these two points the data sits in uncompressed column buffers (no page flush occurs during `model.write()`), so the delta is the exact uncompressed size of the record.
- The deltas accumulate in `rowGroupUncompressedSize`, and the writer flushes the row group when that counter hits the target.

Disabled by default.
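The steps above can be sketched as a minimal simulation. This is hypothetical code, not the PR's diff: `FakeWriteStore` is an illustrative stand-in for the real Parquet write store, while `rowGroupUncompressedSize` and the flush step follow the description in the PR text.

```java
// Hypothetical sketch of the opt-in delta tracking; FakeWriteStore is an
// illustrative stand-in for the real Parquet write store.
public class UncompressedTrackingSketch {
    // getBufferedSize() grows by each record's uncompressed bytes, since
    // no page flush happens during a single write (per the PR description).
    static class FakeWriteStore {
        long buffered = 0;
        long getBufferedSize() { return buffered; }
        void write(long recordBytes) { buffered += recordBytes; }
    }

    public static void main(String[] args) {
        long target = 100;                  // stand-in for row-group-size-bytes
        FakeWriteStore writeStore = new FakeWriteStore();
        long rowGroupUncompressedSize = 0;
        int flushes = 0;
        for (long record : new long[] {40, 40, 40, 40, 40}) {
            long before = writeStore.getBufferedSize();
            writeStore.write(record);       // model.write(record) in the PR
            // Delta = exact uncompressed size of this record.
            rowGroupUncompressedSize += writeStore.getBufferedSize() - before;
            if (rowGroupUncompressedSize >= target) {
                flushes++;                  // flush the row group, reset counter
                rowGroupUncompressedSize = 0;
            }
        }
        System.out.println(flushes + " row-group flush(es)");
    }
}
```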
When enabled, this adds two `getBufferedSize()` calls per record. Each call iterates the column writers, adding field reads. It is the same pattern parquet-mr uses in `ColumnWriteStoreBase.sizeCheck()`.

Are these changes tested?
Are there any user-facing changes?
Yes. A new configuration option, but it is set to `false` by default.
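Assuming the property name introduced by this PR, opting a table in through Iceberg's table-properties API would look roughly like the fragment below. This is a sketch, not code from this PR; `table` is assumed to be an already-loaded `org.apache.iceberg.Table`.

```java
// Sketch: opt in per table. The property key comes from this PR's
// description; "table" is an assumed, already-loaded Iceberg Table.
table.updateProperties()
    .set("write.parquet.row-group-size-check-uncompressed", "true")
    .commit();
```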