feat: support shiftrightunsigned expression #3590

Open

n0r0shi wants to merge 1 commit into apache:main from n0r0shi:shiftrightunsigned-func

Conversation


n0r0shi commented on Feb 25, 2026

Summary

  • Wire shiftrightunsigned from the datafusion-spark crate (SparkShiftRightUnsigned) into Comet
  • Register it in jni_api.rs and add a serde mapping via CometScalarFunction in QueryPlanSerde.scala

Register datafusion-spark's SparkShiftRightUnsigned UDF and add serde
mapping in bitwiseExpressions.
@andygrove (Member) left a comment

Nice and clean wiring. The serde change and UDF registration both look correct.

I have a couple of suggestions around the tests. There's already an existing bitwise.sql file at spark/src/test/resources/sql-tests/expressions/bitwise/bitwise.sql that covers shiftright and shiftleft. It would be a natural fit to add shiftrightunsigned tests there instead of in CometBitwiseExpressionSuite. The SQL file framework automatically compares Spark and Comet results, which is the preferred approach for expression tests.

On test coverage, the current tests only use Int values. The existing shiftright tests in CometBitwiseExpressionSuite cover both Int and Long types (there is a second block, starting around line 62, that creates a long column). It would be good to add Long coverage for shiftrightunsigned too, since Int and Long go through different code paths in the Rust implementation.
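To make the Int/Long distinction concrete, here is a minimal sketch of the unsigned-shift semantics Spark defines for each width. This is not the actual SparkShiftRightUnsigned code from datafusion-spark, just the expected behavior the tests should pin down: the value is reinterpreted as unsigned before shifting, and the shift amount is masked to the type width (31 for Int, 63 for Long, matching the JVM's `>>>`).

```rust
// Sketch of Spark's shiftrightunsigned semantics per integer width
// (illustrative only; not the datafusion-spark implementation).
fn shift_right_unsigned_i32(v: i32, shift: i32) -> i32 {
    // Reinterpret as unsigned so the sign bit is zero-filled, and mask
    // the shift amount to 0..=31 as the JVM does for Int.
    ((v as u32) >> ((shift & 31) as u32)) as i32
}

fn shift_right_unsigned_i64(v: i64, shift: i32) -> i64 {
    // For Long the mask is 63, which is exactly why Int and Long are
    // distinct code paths and both need test coverage.
    ((v as u64) >> ((shift & 63) as u64)) as i64
}

fn main() {
    // -1 has all bits set; an unsigned shift by 1 yields the max value.
    assert_eq!(shift_right_unsigned_i32(-1, 1), i32::MAX);
    assert_eq!(shift_right_unsigned_i64(-1, 1), i64::MAX);
    // A plain arithmetic shift would propagate the sign: -1 >> 1 == -1.
    assert_eq!(-1i32 >> 1, -1);
    println!("ok");
}
```

Note that shiftrightunsigned(-1, 1) returns 2147483647 for Int but 9223372036854775807 for Long, so a Long test is not redundant with the Int one.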

Also, the literal tests (shiftrightunsigned(-1, 1) and shiftrightunsigned(NULL, 1)) would benefit from disabling constant folding so they actually exercise the Comet code path rather than being optimized away by Spark, e.g. by wrapping them in withSQLConf(SQLConf.OPTIMIZER_EXCLUDED_RULES.key -> "org.apache.spark.sql.catalyst.optimizer.ConstantFolding"). Better yet, these cases would just work naturally in the SQL file framework.
