HBASE-29889 Add XXH3 Hash Support to Bloom Filter #7740
Open: jinhyukify wants to merge 5 commits into apache:master from jinhyukify:HBASE-29889
Changes from all 5 commits:
- 31733b1 HBASE-29889 Add LittleEndianBytes utility for fast LE primitive access (jinhyukify)
- e6a302e HBASE-29889 Extend HashKey with bulk little-endian accessors for fast… (jinhyukify)
- fbdc25c HBASE-29889 Implement XXH3 64bit hashing (jinhyukify)
- 9281600 HBASE-29889 Add 64bit Bloom filter hash support (jinhyukify)
- db169a0 HBASE-29889 Add XXH3 to Bloom filter hashing (jinhyukify)
hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash64.java
45 changes: 45 additions & 0 deletions
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.hbase.util;

import org.apache.yetus.audience.InterfaceAudience;
import org.apache.yetus.audience.InterfaceStability;

/**
 * Interface for computing 64-bit hash values.
 */
@InterfaceAudience.Private
@InterfaceStability.Stable
public interface Hash64 {
  /**
   * Computes a 64-bit hash from the given {@code HashKey} using a seed of 0.
   * @param hashKey the input key providing byte access
   * @return the computed 64-bit hash value
   */
  default <T> long hash64(HashKey<T> hashKey) {
    return hash64(hashKey, 0L);
  }

  /**
   * Computes a 64-bit hash from the given {@code HashKey} and seed.
   * @param hashKey the input key providing byte access
   * @param seed the 64-bit seed value
   * @return the computed 64-bit hash value
   */
  <T> long hash64(HashKey<T> hashKey, long seed);
}
```
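The default-method pattern in this interface can be sketched stand-alone. The snippet below is illustrative only: it swaps HBase's `HashKey<T>` for a plain `byte[]` and uses a placeholder FNV-1a-style hash, since the actual XXH3 implementation lives elsewhere in this PR.

```java
// Stand-alone sketch of the Hash64 default-method pattern above.
// "Hash64" here takes byte[] instead of HBase's HashKey<T>; the FNV-1a-style
// lambda is a hypothetical placeholder, not the PR's XXH3 implementation.
public class Hash64Sketch {
  interface Hash64 {
    // Mirrors the interface above: the seedless call delegates with seed 0.
    default long hash64(byte[] key) {
      return hash64(key, 0L);
    }

    long hash64(byte[] key, long seed);
  }

  // Placeholder 64-bit hash (FNV-1a-style), folded together with the seed.
  static final Hash64 FNV64 = (key, seed) -> {
    long h = seed ^ 0xcbf29ce484222325L;
    for (byte b : key) {
      h ^= (b & 0xFF);
      h *= 0x100000001b3L;
    }
    return h;
  };

  public static void main(String[] args) {
    byte[] data = "row-key".getBytes();
    // The default method and the explicit seed-0 call must agree.
    System.out.println(FNV64.hash64(data) == FNV64.hash64(data, 0L)); // true
  }
}
```

Because `hash64(HashKey)` is a default method, existing call sites can stay seedless while seeded variants (as XXH3 requires internally) share one abstract entry point.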
hbase-common/src/main/java/org/apache/hadoop/hbase/util/LittleEndianBytes.java
263 changes: 263 additions & 0 deletions
```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.hbase.util;

import java.nio.ByteBuffer;
import org.apache.hadoop.hbase.ByteBufferExtendedCell;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.unsafe.HBasePlatformDependent;
import org.apache.yetus.audience.InterfaceAudience;

/**
 * Utility methods for reading and writing little-endian integers and longs from byte[] and
 * ByteBuffer. Used by hashing components to perform fast, low-level LE conversions with optional
 * Unsafe acceleration.
 */
@InterfaceAudience.Private
public final class LittleEndianBytes {
  static final boolean UNSAFE_UNALIGNED = HBasePlatformDependent.unaligned();

  static abstract class Converter {
    abstract int toInt(byte[] bytes, int offset);

    abstract int toInt(ByteBuffer buffer, int offset);

    abstract int putInt(byte[] bytes, int offset, int val);

    abstract long toLong(byte[] bytes, int offset);

    abstract long toLong(ByteBuffer buffer, int offset);

    abstract int putLong(byte[] bytes, int offset, long val);
  }

  static class ConverterHolder {
    static final String UNSAFE_CONVERTER_NAME =
      ConverterHolder.class.getName() + "$UnsafeConverter";
    static final Converter BEST_CONVERTER = getBestConverter();

    static Converter getBestConverter() {
      try {
        Class<? extends Converter> theClass =
          Class.forName(UNSAFE_CONVERTER_NAME).asSubclass(Converter.class);
        return theClass.getConstructor().newInstance();
      } catch (Throwable t) {
        return PureJavaConverter.INSTANCE;
      }
    }

    static final class PureJavaConverter extends Converter {
      static final PureJavaConverter INSTANCE = new PureJavaConverter();

      private PureJavaConverter() {
      }

      @Override
      int toInt(byte[] bytes, int offset) {
        int n = 0;
        for (int i = offset + 3; i >= offset; i--) {
          n <<= 8;
          n ^= (bytes[i] & 0xFF);
        }
        return n;
      }

      @Override
      int toInt(ByteBuffer buffer, int offset) {
        return Integer.reverseBytes(buffer.getInt(offset));
      }

      @Override
      int putInt(byte[] bytes, int offset, int val) {
        for (int i = offset; i < offset + 3; i++) {
          bytes[i] = (byte) val;
          val >>>= 8;
        }
        bytes[offset + 3] = (byte) val;
        return offset + Bytes.SIZEOF_INT;
      }

      @Override
      long toLong(byte[] bytes, int offset) {
        long l = 0;
        for (int i = offset + 7; i >= offset; i--) {
          l <<= 8;
          l ^= (bytes[i] & 0xFFL);
        }
        return l;
      }

      @Override
      long toLong(ByteBuffer buffer, int offset) {
        return Long.reverseBytes(buffer.getLong(offset));
      }

      @Override
      int putLong(byte[] bytes, int offset, long val) {
        for (int i = offset; i < offset + 7; i++) {
          bytes[i] = (byte) val;
          val >>>= 8;
        }
        bytes[offset + 7] = (byte) val;
        return offset + Bytes.SIZEOF_LONG;
      }
    }

    static final class UnsafeConverter extends Converter {
      static final UnsafeConverter INSTANCE = new UnsafeConverter();

      public UnsafeConverter() {
      }

      static {
        if (!UNSAFE_UNALIGNED) {
          throw new Error();
        }
      }

      @Override
      int toInt(byte[] bytes, int offset) {
        return UnsafeAccess.toIntLE(bytes, offset);
      }

      @Override
      int toInt(ByteBuffer buffer, int offset) {
        return UnsafeAccess.toIntLE(buffer, offset);
      }

      @Override
      int putInt(byte[] bytes, int offset, int val) {
        return UnsafeAccess.putIntLE(bytes, offset, val);
      }

      @Override
      long toLong(byte[] bytes, int offset) {
        return UnsafeAccess.toLongLE(bytes, offset);
      }

      @Override
      long toLong(ByteBuffer buffer, int offset) {
        return UnsafeAccess.toLongLE(buffer, offset);
      }

      @Override
      int putLong(byte[] bytes, int offset, long val) {
        return UnsafeAccess.putLongLE(bytes, offset, val);
      }
    }
  }

  /*
   * Writes an int in little-endian order. Caller must ensure bounds; no checks are performed.
   */
  public static void putInt(byte[] bytes, int offset, int val) {
    assert offset >= 0 && bytes.length - offset >= Bytes.SIZEOF_INT;
    ConverterHolder.BEST_CONVERTER.putInt(bytes, offset, val);
  }

  /*
   * Reads an int in little-endian order. Caller must ensure bounds; no checks are performed.
   */
  public static int toInt(byte[] bytes, int offset) {
    assert offset >= 0 && bytes.length - offset >= Bytes.SIZEOF_INT;
    return ConverterHolder.BEST_CONVERTER.toInt(bytes, offset);
  }

  /*
   * Reads an int in little-endian order from ByteBuffer. Caller must ensure bounds; no checks are
   * performed.
   */
  public static int toInt(ByteBuffer buffer, int offset) {
    assert offset >= 0 && buffer.capacity() - offset >= Bytes.SIZEOF_INT;
    return ConverterHolder.BEST_CONVERTER.toInt(buffer, offset);
  }

  /*
   * Writes a long in little-endian order. Caller must ensure bounds; no checks are performed.
   */
  public static void putLong(byte[] bytes, int offset, long val) {
    assert offset >= 0 && bytes.length - offset >= Bytes.SIZEOF_LONG;
    ConverterHolder.BEST_CONVERTER.putLong(bytes, offset, val);
  }

  /*
   * Reads a long in little-endian order. Caller must ensure bounds; no checks are performed.
   */
  public static long toLong(byte[] bytes, int offset) {
    assert offset >= 0 && bytes.length - offset >= Bytes.SIZEOF_LONG;
    return ConverterHolder.BEST_CONVERTER.toLong(bytes, offset);
  }

  /*
   * Reads a long in little-endian order from ByteBuffer. Caller must ensure bounds; no checks are
   * performed.
   */
  public static long toLong(ByteBuffer buffer, int offset) {
    assert offset >= 0 && buffer.capacity() - offset >= Bytes.SIZEOF_LONG;
    return ConverterHolder.BEST_CONVERTER.toLong(buffer, offset);
  }

  /*
   * Reads an int in little-endian order from the row portion of the Cell, at the given offset.
   */
  public static int getRowAsInt(Cell cell, int offset) {
    if (cell instanceof ByteBufferExtendedCell) {
      ByteBufferExtendedCell bbCell = (ByteBufferExtendedCell) cell;
      return toInt(bbCell.getRowByteBuffer(), bbCell.getRowPosition() + offset);
    }
    return toInt(cell.getRowArray(), cell.getRowOffset() + offset);
  }

  /*
   * Reads a long in little-endian order from the row portion of the Cell, at the given offset.
   */
  public static long getRowAsLong(Cell cell, int offset) {
    if (cell instanceof ByteBufferExtendedCell) {
      ByteBufferExtendedCell bbCell = (ByteBufferExtendedCell) cell;
      return toLong(bbCell.getRowByteBuffer(), bbCell.getRowPosition() + offset);
    }
    return toLong(cell.getRowArray(), cell.getRowOffset() + offset);
  }

  /*
   * Reads an int in little-endian order from the qualifier portion of the Cell, at the given
   * offset.
   */
  public static int getQualifierAsInt(Cell cell, int offset) {
    if (cell instanceof ByteBufferExtendedCell) {
      ByteBufferExtendedCell bbCell = (ByteBufferExtendedCell) cell;
      return toInt(bbCell.getQualifierByteBuffer(), bbCell.getQualifierPosition() + offset);
    }
    return toInt(cell.getQualifierArray(), cell.getQualifierOffset() + offset);
  }

  /*
   * Reads a long in little-endian order from the qualifier portion of the Cell, at the given
   * offset.
   */
  public static long getQualifierAsLong(Cell cell, int offset) {
    if (cell instanceof ByteBufferExtendedCell) {
      ByteBufferExtendedCell bbCell = (ByteBufferExtendedCell) cell;
      return toLong(bbCell.getQualifierByteBuffer(), bbCell.getQualifierPosition() + offset);
    }
    return toLong(cell.getQualifierArray(), cell.getQualifierOffset() + offset);
  }

  private LittleEndianBytes() {
  }
}
```
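The pure-Java fallback path (`PureJavaConverter`) can be exercised in isolation. The sketch below copies its little-endian long loops into a stand-alone class with illustrative names (`putLongLE`/`toLongLE` are not HBase API) and round-trips a value to show the byte order.

```java
// Stand-alone sketch of the pure-Java little-endian conversions used by
// PureJavaConverter above; class and method names here are illustrative.
public class LittleEndianSketch {
  // Write the long least-significant byte first (little-endian).
  static void putLongLE(byte[] bytes, int offset, long val) {
    for (int i = offset; i < offset + 7; i++) {
      bytes[i] = (byte) val;
      val >>>= 8;
    }
    bytes[offset + 7] = (byte) val;
  }

  // Read 8 bytes back, folding from the most-significant end downward.
  static long toLongLE(byte[] bytes, int offset) {
    long l = 0;
    for (int i = offset + 7; i >= offset; i--) {
      l <<= 8;
      l ^= (bytes[i] & 0xFFL);
    }
    return l;
  }

  public static void main(String[] args) {
    byte[] buf = new byte[8];
    putLongLE(buf, 0, 0x0102030405060708L);
    // Little-endian: the least significant byte (0x08) lands at index 0.
    System.out.println(buf[0] == 0x08 && buf[7] == 0x01);         // true
    System.out.println(toLongLE(buf, 0) == 0x0102030405060708L);  // true
  }
}
```

The `UnsafeConverter` variant produces the same results via `UnsafeAccess` intrinsics; the class-loading dance in `getBestConverter` simply picks it when unaligned access is available and falls back to these loops otherwise.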
Review comment:
The goal here is to take a single 64-bit hash result and split it into two 32-bit hashes to compute the Bloom hash locations.
Since XXH3 already performs much better than the existing hashes and we no longer need to run the hash function twice, this approach gives us an additional performance win on top of the baseline speedup.
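One way to realize this idea is classic double hashing: take the low and high 32 bits of the single 64-bit result as two hashes and combine them linearly for each Bloom probe. The sketch below is illustrative only (names and the `bloomIndexes` helper are hypothetical, not the PR's code), under the assumption that probe i uses h1 + i*h2 reduced modulo the bit array size.

```java
// Illustrative sketch: split one 64-bit hash into two 32-bit hashes and
// derive Bloom filter bit positions via double hashing (g_i = h1 + i*h2).
// "bloomIndexes" is a hypothetical helper, not HBase's actual implementation.
public class SplitHashSketch {
  static int[] bloomIndexes(long hash64, int hashCount, int bitSize) {
    int h1 = (int) hash64;           // low 32 bits
    int h2 = (int) (hash64 >>> 32);  // high 32 bits
    int[] idx = new int[hashCount];
    for (int i = 0; i < hashCount; i++) {
      // floorMod keeps the index non-negative even when the sum overflows.
      idx[i] = Math.floorMod(h1 + i * h2, bitSize);
    }
    return idx;
  }

  public static void main(String[] args) {
    long fakeHash = 0x0123456789ABCDEFL; // stand-in for an XXH3 result
    int[] positions = bloomIndexes(fakeHash, 3, 1 << 20);
    for (int p : positions) {
      System.out.println(p >= 0 && p < (1 << 20)); // true for each probe
    }
  }
}
```

With this scheme, one XXH3 invocation yields all probe positions, whereas the existing 32-bit hashes must be evaluated repeatedly, which is the extra win described above.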