84 commits
1e04df1
WIP: try introducing sql validator without disabling the legacy type …
yuancu Nov 27, 2025
229cb1d
Override deriveType in validator to allow type checking on UDTs
yuancu Nov 28, 2025
b529a35
Add special handling for datetime coercion
yuancu Dec 1, 2025
34b2435
Override coerceOperandType and castTo methods to apply casting to udf…
yuancu Dec 1, 2025
e6e01c8
Fix interval semantic mismatch between sql and ppl
yuancu Dec 2, 2025
15e0bef
Fix quarter interval bug in calcite (counte-react)
yuancu Dec 2, 2025
a1f2f42
Comment out function overloadings to make basic functions work
yuancu Dec 2, 2025
7f999df
Remove unused validator operators
yuancu Dec 3, 2025
39e208a
Update sargFromJson method in ExtendedRelJson
yuancu Dec 3, 2025
55e8486
Prepend following rules for datetime comparisons: (date, time) -> tim…
yuancu Dec 3, 2025
dc44f8c
Directly delegate type checking to sql type checkers (1292/1599 | 125…
yuancu Dec 4, 2025
a4dfd3f
Enable IP comparison (1332/1559 | 1409/1915)
yuancu Dec 4, 2025
9deca8b
Fix expected type checking for agg functions (1287/1599 | 1423/1914)
yuancu Dec 8, 2025
11c33ba
Define operand types for json functions
yuancu Dec 8, 2025
a4b0d72
Define operand types and return type inferences for array functions (…
yuancu Dec 8, 2025
c237041
Add hive sql library operators for array_slice function, which is use…
yuancu Dec 9, 2025
51c556f
Define overrides for atan function to allow conditional sql call rewr…
yuancu Dec 9, 2025
dc3bc41
Update operand types for percentile approx to allow with addtional ty…
yuancu Dec 9, 2025
09f45e7
Correct span function type routing (allow any)
yuancu Dec 9, 2025
a30fa3a
Define operand type for DISTINCT_COUNT_APPROX
yuancu Dec 9, 2025
a588864
Unconditionally rewrite COUNT() to COUNT(*) in sql level to allow typ…
yuancu Dec 9, 2025
4cd7570
Define operand types for EnhancedCoalesce, ScalarMin & Max; remove Sq…
yuancu Dec 9, 2025
dcb3b8f
Correct type checkers for mvappend; fix regexp lookup; correct isempt…
yuancu Dec 9, 2025
c230765
Initiate a new RelToSqlConverter every time as it is stateful (1769/2…
yuancu Dec 10, 2025
8200cf7
Override add for string concat and number addition (1773/2018)
yuancu Dec 10, 2025
4abc31e
Switch sql dialect to opensearch spark sql dialect (1772/2018)
yuancu Dec 10, 2025
0269692
Align null order in collation (1783/2018)
yuancu Dec 11, 2025
0118313
Skip validations for bin-on-timestamps (1799/2027)
yuancu Dec 11, 2025
4adfa98
Do not remove sort in subqueries when converting sql to rel (1820/2027)
yuancu Dec 11, 2025
4838c4c
Trim unused fields after when converting sql back to rel (1857/2027)
yuancu Dec 11, 2025
545ed04
Pass on join type of logical correlate to lateral join (1858/2027)
yuancu Dec 11, 2025
1aea9e3
Support semi and anti join in the converted SQL (1860/2027)
yuancu Dec 12, 2025
9f369b0
Disable insertion of json_type operator (1864/2027)
yuancu Dec 12, 2025
9bffce8
Rewrite IN / NOT IN with tuple inputs to row(...tuple) to conform to …
yuancu Dec 12, 2025
96801c6
Update exception type of testSpanByImplicitTimestamp
yuancu Dec 15, 2025
c3bde24
Update PplTypeCoercionRule to allow CAST(IP as STRING)
yuancu Dec 15, 2025
15f17b3
Fix float literal by inserting compulsory cast
yuancu Dec 15, 2025
9b9c07a
Extend spark dialect to support cast null to IP (1880/2028)
yuancu Dec 15, 2025
443d81e
Support sql-udt conversion of composite types (1881/2028)
yuancu Dec 15, 2025
e1bab21
Add geoip of string override because the udt type is erased during va…
yuancu Dec 15, 2025
6a7feae
Skip validation when grouping by two cases (1885/2028)
yuancu Dec 16, 2025
2d36650
Embed collations into window for streamstats (1910/2054, before rebas…
yuancu Dec 16, 2025
23e611a
Skip validation when logical values is used to create empty rows (191…
yuancu Dec 16, 2025
6286e90
Remove EnhancedCoalesceFunction in favor of built-in Coalesce functio…
yuancu Dec 16, 2025
0736444
Fix ITs that are affected by allowing implicit coercion from number t…
yuancu Dec 16, 2025
53a4118
Return original logical plan when get error message 'Aggregate expres…
yuancu Dec 16, 2025
7f52976
Reorganize class structures and refactor for readability
yuancu Dec 16, 2025
c4de0e4
Remove validation logics from PPLFuncImpTable
yuancu Dec 17, 2025
0667fa7
Remove unused classes (1917/2055)
yuancu Dec 17, 2025
485c038
Merge remote-tracking branch 'origin/main' into issues/4636 (1929/2069)
yuancu Dec 17, 2025
d19ba53
Update the type checker of transform function to allow arbitrary addi…
yuancu Dec 17, 2025
330a4e5
Tolerant group by window function (returning original logical plan)
yuancu Dec 17, 2025
f5fda25
Ignore patterns IT that are to be fixed in 4968
yuancu Dec 18, 2025
657cc6a
Allow binary arithmetic operation between string and numerics
yuancu Dec 18, 2025
b030f41
Define SqlKind for DIVIDE and MOD UDF for coercion purpose
yuancu Dec 18, 2025
8caa594
Disable nullary call to not confuse with field reference
yuancu Dec 18, 2025
975d21c
Use SAFE_CAST instead of CAST to tolerant malformatted numbers
yuancu Dec 19, 2025
bc38f9f
Enable identifier expansion to ensure that casted args in agg functio…
yuancu Dec 19, 2025
502dbb0
Handle least restrictive for UDTs
yuancu Dec 19, 2025
34985c7
Update testBetweenWithIncompatibleTypes
yuancu Dec 19, 2025
c660a69
Fix or skip yaml tests
yuancu Dec 19, 2025
d9b1453
Pass on exception for bin-on-timestamp exception
yuancu Dec 19, 2025
90b6309
Fix doctest
yuancu Dec 18, 2025
ea06ab0
Fix unit tests
yuancu Dec 17, 2025
e548ad2
Merge branch '4636-fix-ut' into issues/4636
yuancu Dec 19, 2025
94c7f33
Merge branch '4636-fix-yaml' into issues/4636
yuancu Dec 19, 2025
bab48d7
Remove interface PPLTypeChecker
yuancu Dec 22, 2025
6a30697
Pass on bucket_nullable flag to sql node and back to rel node (1929/2…
yuancu Dec 22, 2025
0be7cb4
Fix append and multisearch ITs (1932/2066)
yuancu Dec 22, 2025
38e12c6
Fix unit tests broken by passed-on hints
yuancu Dec 22, 2025
0990db1
Fix calcite, calcite no push down, and v2 ppl explain ITs
yuancu Dec 17, 2025
d0195cb
Merge remote-tracking branch 'origin/main' into issues/4636
yuancu Dec 22, 2025
9f6dcd7
Update clickbench plans
yuancu Dec 22, 2025
d455d35
Fix dedupe explain ITs
yuancu Dec 22, 2025
a2d1955
Fix clickbench q41
yuancu Dec 22, 2025
4a98c3a
Fix explain dedup expr complex1
yuancu Dec 22, 2025
aff8541
Remove unused CalciteFuncSignature interface
yuancu Dec 23, 2025
229ddb7
Remove type checkers from operator registration because all type chec…
yuancu Dec 23, 2025
e6b80e5
Make getValiadtor thread-safe & improve type checking for JsonSet
yuancu Dec 23, 2025
cb78ea7
Deprecate interface UDFOperandMetadata.wrapUDT
yuancu Dec 23, 2025
69a2d01
Merge remote-tracking branch 'origin/main' into issues/4636
yuancu Dec 26, 2025
d722baf
Fix explain ITs after merging main
yuancu Dec 26, 2025
d513e17
Merge remote-tracking branch 'origin/main' into issues/4636
yuancu Dec 29, 2025
ff91bdc
Update plans after merging
yuancu Dec 29, 2025
@@ -12,7 +12,7 @@
import org.junit.Before;
import org.junit.Test;
import org.opensearch.sql.api.UnifiedQueryTestBase;
import org.opensearch.sql.ppl.calcite.OpenSearchSparkSqlDialect;
import org.opensearch.sql.calcite.validate.OpenSearchSparkSqlDialect;

public class UnifiedQueryTranspilerTest extends UnifiedQueryTestBase {

@@ -43,11 +43,9 @@ public void testToSqlWithCustomDialect() {
UnifiedQueryTranspiler customTranspiler =
UnifiedQueryTranspiler.builder().dialect(OpenSearchSparkSqlDialect.DEFAULT).build();
String actualSql = customTranspiler.toSql(plan);
String expectedSql =
normalize(
"SELECT *\nFROM `catalog`.`employees`\nWHERE TRY_CAST(`name` AS DOUBLE) = 1.230E2");
String expectedSql = normalize("SELECT *\nFROM `catalog`.`employees`\nWHERE `name` = 123");
assertEquals(
"Transpiled query using OpenSearchSparkSqlDialect should translate SAFE_CAST to TRY_CAST",
"Numeric types can be implicitly coerced to string with OpenSearchSparkSqlDialect",
expectedSql,
actualSql);
}
core/src/main/java/org/opensearch/sql/calcite/CalcitePlanContext.java
@@ -8,6 +8,7 @@
import static org.opensearch.sql.calcite.utils.OpenSearchTypeFactory.TYPE_FACTORY;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
@@ -17,13 +18,21 @@
import java.util.function.BiFunction;
import lombok.Getter;
import lombok.Setter;
import org.apache.calcite.config.NullCollation;
import org.apache.calcite.rex.RexCorrelVariable;
import org.apache.calcite.rex.RexLambdaRef;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.server.CalciteServerStatement;
import org.apache.calcite.sql.validate.SqlValidator;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.RelBuilder;
import org.opensearch.sql.ast.expression.UnresolvedExpression;
import org.opensearch.sql.calcite.utils.CalciteToolsHelper;
import org.opensearch.sql.calcite.validate.OpenSearchSparkSqlDialect;
import org.opensearch.sql.calcite.validate.PplTypeCoercion;
import org.opensearch.sql.calcite.validate.PplTypeCoercionRule;
import org.opensearch.sql.calcite.validate.PplValidator;
import org.opensearch.sql.calcite.validate.SqlOperatorTableProvider;
import org.opensearch.sql.common.setting.Settings;
import org.opensearch.sql.executor.QueryType;
import org.opensearch.sql.expression.function.FunctionProperties;
@@ -72,6 +81,17 @@ public class CalcitePlanContext {
/** Whether we're currently inside a lambda context. */
@Getter @Setter private boolean inLambdaContext = false;

/**
* -- SETTER -- Sets the SQL operator table provider. This must be called during initialization by
* the opensearch module.
*
* @param provider the provider to use for obtaining operator tables
*/
@Setter private static SqlOperatorTableProvider operatorTableProvider;

/** Cached SqlValidator instance (lazy initialized). */
private volatile SqlValidator validator;

private CalcitePlanContext(FrameworkConfig config, SysLimit sysLimit, QueryType queryType) {
this.config = config;
this.sysLimit = sysLimit;
@@ -101,6 +121,52 @@ private CalcitePlanContext(CalcitePlanContext parent) {
this.inLambdaContext = true; // Mark that we're inside a lambda
}

/**
* Gets the SqlValidator instance (singleton per CalcitePlanContext).
*
* @return cached SqlValidator instance
*/
public SqlValidator getValidator() {
if (validator == null) {
synchronized (this) {
// Double-Checked Locking for thread-safety
if (validator == null) {
final CalciteServerStatement statement;
try {
statement = connection.createStatement().unwrap(CalciteServerStatement.class);
} catch (SQLException e) {
throw new RuntimeException(e);
}
if (operatorTableProvider == null) {
throw new IllegalStateException(
"SqlOperatorTableProvider must be set before creating CalcitePlanContext");
}
SqlValidator.Config validatorConfig =
SqlValidator.Config.DEFAULT
.withTypeCoercionRules(PplTypeCoercionRule.instance())
.withTypeCoercionFactory(PplTypeCoercion::create)
// Use lenient conformance for PPL compatibility
.withConformance(OpenSearchSparkSqlDialect.DEFAULT.getConformance())
// Use Spark SQL's NULL collation (NULLs sorted LOW/FIRST)
.withDefaultNullCollation(NullCollation.LOW)
// This ensures that coerced arguments are replaced with their cast versions in
// the sql select list, because coercion is performed while the select list is
// expanded during sql validation. Affects 4356.yml
// See SqlValidatorImpl#validateSelectList and AggConverter#translateAgg
.withIdentifierExpansion(true);
validator =
PplValidator.create(
statement,
config,
operatorTableProvider.getOperatorTable(),
TYPE_FACTORY,
validatorConfig);
}
}
}
return validator;
}
Comment on lines +129 to +168

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Find CalciteServerStatement in the codebase
echo "=== Searching for CalciteServerStatement ==="
find . -type f -name "*.java" | xargs grep -l "CalciteServerStatement" | head -20

# Look for the file and class definition
echo -e "\n=== Checking if CalciteServerStatement is from Calcite library ==="
grep -r "class CalciteServerStatement\|interface CalciteServerStatement" --include="*.java" 2>/dev/null | head -5



🏁 Script executed:

#!/bin/bash

# Find CalcitePlanContext to examine the actual implementation
echo "=== Finding CalcitePlanContext file ==="
find . -type f -name "CalcitePlanContext.java"

# Get the complete file content to understand the context
echo -e "\n=== CalcitePlanContext content ==="
cat ./core/src/main/java/org/opensearch/sql/calcite/CalcitePlanContext.java



🏁 Script executed:

#!/bin/bash

# Search for PplValidator.create to understand how statement is used
echo "=== Searching for PplValidator.create ==="
find . -type f -name "*.java" | xargs grep -A 20 "PplValidator.create\|class PplValidator" | head -100



🏁 Script executed:

#!/bin/bash

# Check how CalciteServerStatement is used elsewhere in the codebase
echo "=== CalciteServerStatement usage patterns ==="
find . -type f -name "*.java" -exec grep -H "CalciteServerStatement\|createStatement" {} \; | grep -v "\.class" | head -50



🏁 Script executed:

#!/bin/bash

# Get the complete CalciteToolsHelper file
echo "=== CalciteToolsHelper content ==="
cat ./core/src/main/java/org/opensearch/sql/calcite/utils/CalciteToolsHelper.java



🏁 Script executed:

#!/bin/bash

# Also check PplValidator to see if it stores statement reference
echo "=== PplValidator.java (full file) ==="
cat ./core/src/main/java/org/opensearch/sql/calcite/validate/PplValidator.java



🏁 Script executed:

#!/bin/bash

# Check how CalcitePlanContext is created and used - who creates it?
echo "=== Searching for CalcitePlanContext.create() callers ==="
rg -B2 -A2 "CalcitePlanContext\.create\|new CalcitePlanContext" --type java | head -100



🏁 Script executed:

#!/bin/bash

# Search for CalcitePlanContext usage in test and main files
echo "=== CalcitePlanContext usage in codebase ==="
rg "CalcitePlanContext" --type java -B 1 -A 1 | head -150



🏁 Script executed:

#!/bin/bash

# Look for unit tests of CalcitePlanContext to understand lifecycle
echo "=== CalcitePlanContext test files ==="
find . -type f -name "*CalcitePlanContext*Test*.java" -o -name "*Test*CalcitePlanContext*.java"



🏁 Script executed:

#!/bin/bash

# Check if there is a close() method being called on CalcitePlanContext anywhere
echo "=== Searching for CalcitePlanContext close/cleanup ==="
rg "\.close\(\)|try.*CalcitePlanContext|AutoCloseable.*CalcitePlanContext" --type java



🏁 Script executed:

#!/bin/bash

# Check how withPrepare is used - does it handle statement cleanup?
echo "=== CalciteToolsHelper.withPrepare usage and statement handling ==="
rg "withPrepare\|perform" core/src/main/java/org/opensearch/sql/calcite/utils/CalciteToolsHelper.java -B 2 -A 5

# Check if there's any statement closing elsewhere in related code
echo -e "\n=== Searching for statement management patterns ==="
rg "createStatement\(\)" --type java -B 2 -A 5 core/src/main/java/org/opensearch/sql/calcite/ | head -80



🏁 Script executed:

#!/bin/bash

# Verify CalciteServerStatement is from Calcite library and check its inheritance
echo "=== Check Calcite imports for CalciteServerStatement ==="
grep -r "import.*CalciteServerStatement" core/src/main/java/org/opensearch/sql/calcite/ --include="*.java"

# Look at how statements created elsewhere are managed (e.g., in tests that use try-with-resources)
echo -e "\n=== Statement resource patterns in tests ==="
rg "try.*createStatement\|createStatement.*try" --type java -B 1 -A 3 integ-test/ | head -50



Address resource leak: Close the statement after validator creation.

The underlying Statement resource created by connection.createStatement() is not being closed. Although CalciteServerStatement itself doesn't implement AutoCloseable, it wraps a JDBC Statement that does hold resources requiring proper cleanup.

The statement is only needed to extract the prepare context via statement.createPrepareContext(). Since it's not stored or reused, close it immediately after validator initialization:

// Keep a handle on the underlying JDBC statement: CalciteServerStatement itself
// exposes no close(), and a blank final could not be referenced from the finally block.
Statement jdbcStatement = null;
try {
  jdbcStatement = connection.createStatement();
  CalciteServerStatement statement = jdbcStatement.unwrap(CalciteServerStatement.class);
  if (operatorTableProvider == null) {
    throw new IllegalStateException(
        "SqlOperatorTableProvider must be set before creating CalcitePlanContext");
  }
  SqlValidator.Config validatorConfig =
      SqlValidator.Config.DEFAULT
          .withTypeCoercionRules(PplTypeCoercionRule.instance())
          .withTypeCoercionFactory(PplTypeCoercion::create)
          .withConformance(OpenSearchSparkSqlDialect.DEFAULT.getConformance())
          .withDefaultNullCollation(NullCollation.LOW)
          .withIdentifierExpansion(true);
  validator =
      PplValidator.create(
          statement,
          config,
          operatorTableProvider.getOperatorTable(),
          TYPE_FACTORY,
          validatorConfig);
} catch (SQLException e) {
  throw new RuntimeException(e);
} finally {
  if (jdbcStatement != null) {
    try {
      jdbcStatement.close();
    } catch (SQLException e) {
      // log and continue; a failed close should not mask the result
    }
  }
}

Also check CalciteToolsHelper.withPrepare() for the same pattern—it creates a statement at line 137 that should be closed after OpenSearchPrepareImpl().perform() completes.
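
A minimal sketch of that cleanup, assuming the server statement is only needed while perform() runs; the wrapper name and signature below are illustrative, not the actual CalciteToolsHelper API:

import java.sql.SQLException;
import java.sql.Statement;
import java.util.function.Function;
import org.apache.calcite.jdbc.CalciteConnection;
import org.apache.calcite.server.CalciteServerStatement;

final class StatementScope {
  // Illustrative helper: runs an action against a CalciteServerStatement and
  // guarantees the underlying JDBC statement is closed afterwards.
  static <T> T withServerStatement(
      CalciteConnection connection, Function<CalciteServerStatement, T> action)
      throws SQLException {
    try (Statement jdbc = connection.createStatement()) {
      // The server statement is a view over the JDBC statement, so it is only
      // valid while this try-with-resources block is open.
      return action.apply(jdbc.unwrap(CalciteServerStatement.class));
    }
  }
}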

The double-checked locking implementation with volatile is correct.
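
For reference, a generic sketch of that idiom (names are illustrative, not from this PR); the volatile field is what prevents another thread from observing a partially constructed object through a reordered write:

import java.util.function.Supplier;

final class LazySingleton<T> {
  private final Supplier<T> factory;
  // volatile is required: without it, the assignment below could become visible
  // to other threads before the constructor of T has finished running.
  private volatile T instance;

  LazySingleton(Supplier<T> factory) {
    this.factory = factory;
  }

  T get() {
    T local = instance; // single volatile read on the already-initialized fast path
    if (local == null) {
      synchronized (this) {
        local = instance; // re-check: another thread may have initialized it
        if (local == null) {
          instance = local = factory.get();
        }
      }
    }
    return local;
  }
}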

🤖 Prompt for AI Agents
In core/src/main/java/org/opensearch/sql/calcite/CalcitePlanContext.java around
lines 129-168, the Statement created via
connection.createStatement().unwrap(CalciteServerStatement.class) is never
closed causing a resource leak; after using the statement to create the
validator, ensure the underlying JDBC Statement is closed (e.g., wrap
creation/use in try/finally or try-with-resources so statement.close() is always
called and handle/log any SQLException from close) and keep the existing
exception handling for creation; also review CalciteToolsHelper.withPrepare()
(around line 137) where a statement is created and ensure that statement is
likewise closed after OpenSearchPrepareImpl().perform() completes.


public RexNode resolveJoinCondition(
UnresolvedExpression expr,
BiFunction<UnresolvedExpression, CalcitePlanContext, RexNode> transformFunction) {
@@ -43,9 +43,12 @@
import java.util.stream.IntStream;
import java.util.stream.Stream;
import lombok.AllArgsConstructor;
import lombok.NonNull;
import org.apache.calcite.adapter.enumerable.RexToLixTranslator;
import org.apache.calcite.plan.RelOptTable;
import org.apache.calcite.plan.ViewExpanders;
import org.apache.calcite.rel.RelCollation;
import org.apache.calcite.rel.RelFieldCollation;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.Aggregate;
import org.apache.calcite.rel.core.JoinRelType;
@@ -55,10 +58,14 @@
import org.apache.calcite.rel.type.RelDataTypeField;
import org.apache.calcite.rex.RexCall;
import org.apache.calcite.rex.RexCorrelVariable;
import org.apache.calcite.rex.RexFieldCollation;
import org.apache.calcite.rex.RexInputRef;
import org.apache.calcite.rex.RexLiteral;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.rex.RexOver;
import org.apache.calcite.rex.RexShuttle;
import org.apache.calcite.rex.RexVisitorImpl;
import org.apache.calcite.rex.RexWindow;
import org.apache.calcite.rex.RexWindowBounds;
import org.apache.calcite.sql.SqlKind;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;
@@ -1740,6 +1747,7 @@ public RelNode visitStreamWindow(StreamWindow node, CalcitePlanContext context)
// Default: first get rawExpr
List<RexNode> overExpressions =
node.getWindowFunctionList().stream().map(w -> rexVisitor.analyze(w, context)).toList();
overExpressions = embedExistingCollationsIntoOver(overExpressions, context);

if (hasGroup) {
// only build sequence when there is a by condition
@@ -1781,6 +1789,84 @@ public RelNode visitStreamWindow(StreamWindow node, CalcitePlanContext context)
return context.relBuilder.peek();
}

/**
* Embed existing collation into window function's over clauses.
*
* <p>Window functions with frame specifications like {@code ROWS n PRECEDING} require ORDER BY to
* determine row order. Without it, results are non-deterministic.
*
* <p>Without this fix, the initial plan has ORDER BY separate from window functions:
*
* <pre>
* LogicalProject(SUM($5) OVER (ROWS 1 PRECEDING)) ← Missing ORDER BY
* LogicalSort(sort0=[$5])
* </pre>
*
* <p>This causes problems during validation as the order is not bound to the window. With this
* fix, sort collations are embedded into each {@code RexOver} window:
*
* <pre>
* LogicalProject(SUM($5) OVER (ORDER BY $5 ROWS 1 PRECEDING)) ← ORDER BY embedded
* </pre>
*
* @param overExpressions Window function expressions (may contain nested {@link RexOver})
* @param context Plan context for building RexNodes
* @return Expressions with ORDER BY embedded in all window specifications
*/
private List<RexNode> embedExistingCollationsIntoOver(
List<RexNode> overExpressions, CalcitePlanContext context) {
RelCollation existingCollation = context.relBuilder.peek().getTraitSet().getCollation();
List<@NonNull RelFieldCollation> relCollations =
existingCollation == null ? List.of() : existingCollation.getFieldCollations();
ImmutableList<@NonNull RexFieldCollation> rexCollations =
relCollations.stream()
.map(f -> relCollationToRexCollation(f, context.relBuilder))
.collect(ImmutableList.toImmutableList());
return overExpressions.stream()
.map(
n ->
n.accept(
new RexShuttle() {
@Override
public RexNode visitOver(RexOver over) {
RexWindow window = over.getWindow();
return context.rexBuilder.makeOver(
over.getType(),
over.getAggOperator(),
over.getOperands(),
window.partitionKeys,
rexCollations,
window.getLowerBound(),
window.getUpperBound(),
window.isRows(),
true,
false,
over.isDistinct(),
over.ignoreNulls());
}
}))
.toList();
}

private static RexFieldCollation relCollationToRexCollation(
RelFieldCollation relCollation, RelBuilder builder) {
RexNode fieldRef = builder.field(relCollation.getFieldIndex());

// Convert direction flags to SqlKind set
Set<SqlKind> flags = new HashSet<>();
if (relCollation.direction == RelFieldCollation.Direction.DESCENDING
|| relCollation.direction == RelFieldCollation.Direction.STRICTLY_DESCENDING) {
flags.add(SqlKind.DESCENDING);
}
if (relCollation.nullDirection == RelFieldCollation.NullDirection.FIRST) {
flags.add(SqlKind.NULLS_FIRST);
} else if (relCollation.nullDirection == RelFieldCollation.NullDirection.LAST) {
flags.add(SqlKind.NULLS_LAST);
}

return new RexFieldCollation(fieldRef, flags);
}

private List<RexNode> wrapWindowFunctionsWithGroupNotNull(
List<RexNode> overExpressions, RexNode groupNotNull, CalcitePlanContext context) {
List<RexNode> wrappedOverExprs = new ArrayList<>(overExpressions.size());
@@ -480,26 +480,8 @@ public RexNode visitWindowFunction(WindowFunction node, CalcitePlanContext conte
(arguments.isEmpty() || arguments.size() == 1)
? Collections.emptyList()
: arguments.subList(1, arguments.size());
List<RexNode> nodes =
PPLFuncImpTable.INSTANCE.validateAggFunctionSignature(
functionName, field, args, context.rexBuilder);
return nodes != null
? PlanUtils.makeOver(
context,
functionName,
nodes.getFirst(),
nodes.size() <= 1 ? Collections.emptyList() : nodes.subList(1, nodes.size()),
partitions,
List.of(),
node.getWindowFrame())
: PlanUtils.makeOver(
context,
functionName,
field,
args,
partitions,
List.of(),
node.getWindowFrame());
return PlanUtils.makeOver(
context, functionName, field, args, partitions, List.of(), node.getWindowFrame());
})
.orElseThrow(
() ->
@@ -14,14 +14,19 @@
import org.apache.calcite.rex.RexBuilder;
import org.apache.calcite.rex.RexLiteral;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.sql.SqlCallBinding;
import org.apache.calcite.sql.SqlIntervalQualifier;
import org.apache.calcite.sql.SqlKind;
import org.apache.calcite.sql.SqlOperator;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;
import org.apache.calcite.sql.parser.SqlParserPos;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.sql.type.SqlTypeUtil;
import org.apache.calcite.sql.validate.implicit.TypeCoercionImpl;
import org.opensearch.sql.ast.expression.SpanUnit;
import org.opensearch.sql.calcite.type.AbstractExprRelDataType;
import org.opensearch.sql.calcite.utils.OpenSearchTypeFactory;
import org.opensearch.sql.calcite.utils.OpenSearchTypeUtil;
import org.opensearch.sql.data.type.ExprCoreType;
import org.opensearch.sql.exception.ExpressionEvaluationException;
import org.opensearch.sql.exception.SemanticCheckException;
@@ -146,7 +151,7 @@ public RexNode makeCast(
// SqlStdOperatorTable.NOT_EQUALS,
// ImmutableList.of(exp, makeZeroLiteral(sourceType)));
}
} else if (OpenSearchTypeFactory.isUserDefinedType(type)) {
} else if (OpenSearchTypeUtil.isUserDefinedType(type)) {
if (RexLiteral.isNullLiteral(exp)) {
return super.makeCast(pos, type, exp, matchNullability, safe, format);
}
@@ -185,4 +190,33 @@ else if ((SqlTypeUtil.isApproximateNumeric(sourceType) || SqlTypeUtil.isDecimal(
}
return super.makeCast(pos, type, exp, matchNullability, safe, format);
}

/**
* Derives the return type of call to an operator.
*
* <p>In Calcite, coercion between STRING and NUMERIC operands takes place during converting SQL
* to RelNode. However, as we are building logical plans directly, the coercion is not yet
* implemented at this point. Hence, we duplicate {@link
* TypeCoercionImpl#binaryArithmeticWithStrings} here to infer the correct type, enabling
* operations like {@code "5" / 10}. The actual coercion will be inserted later when performing
* validation on SqlNode.
*
* @see TypeCoercionImpl#binaryArithmeticCoercion(SqlCallBinding)
* @param op the operator being called
* @param exprs actual operands
* @return derived type
*/
@Override
public RelDataType deriveReturnType(SqlOperator op, List<? extends RexNode> exprs) {
if (op.getKind().belongsTo(SqlKind.BINARY_ARITHMETIC) && exprs.size() == 2) {
final RelDataType type1 = exprs.get(0).getType();
final RelDataType type2 = exprs.get(1).getType();
if (SqlTypeUtil.isNumeric(type1) && OpenSearchTypeUtil.isCharacter(type2)) {
return type1;
} else if (OpenSearchTypeUtil.isCharacter(type1) && SqlTypeUtil.isNumeric(type2)) {
return type2;
}
}
return super.deriveReturnType(op, exprs);
}
}
@@ -240,7 +240,7 @@ private void registerCustomizedRules(RelOptPlanner planner) {
* return {@link OpenSearchCalcitePreparingStmt}
*/
@Override
protected CalcitePrepareImpl.CalcitePreparingStmt getPreparingStmt(
public CalcitePrepareImpl.CalcitePreparingStmt getPreparingStmt(
CalcitePrepare.Context context,
Type elementType,
CalciteCatalogReader catalogReader,
@@ -369,7 +369,8 @@ public RelNode visit(TableScan scan) {
"The 'bins' parameter on timestamp fields requires: (1) pushdown to be enabled"
+ " (controlled by plugins.calcite.pushdown.enabled, enabled by default), and"
+ " (2) the timestamp field to be used as an aggregation bucket (e.g., 'stats"
+ " count() by @timestamp').");
+ " count() by @timestamp').",
e);
}
throw Util.throwAsRuntime(e);
}