Skip to content

Conversation

@ymc9
Copy link
Member

@ymc9 ymc9 commented Jan 23, 2026

Summary by CodeRabbit

  • New Features

    • MySQL provider added: CRUD, JSON/array handling, relation support, and dialect-aware query behavior.
    • Provider capability detection with fallbacks (RETURNING, DISTINCT, batch operations) for cross-database consistency.
  • Tests

    • Test matrix and tooling expanded to run against SQLite, PostgreSQL, and MySQL; many tests made provider-aware.
  • Chores

    • CI and test scripts updated to include MySQL service and MySQL-specific test runs.

✏️ Tip: You can customize this high-level summary in your review settings.

Copilot AI review requested due to automatic review settings January 23, 2026 04:59
@claude
Copy link

claude bot commented Jan 23, 2026

Claude finished @ymc9's task —— View job


PR Review: MySQL Support Implementation

Reviewing this substantial PR that adds MySQL support to ZenStack ORM. This is a significant feature addition with extensive changes across the codebase.

Todo List

  • Review core MySQL dialect implementation
  • Analyze query generation and SQL safety
  • Check error handling and edge cases
  • Evaluate test coverage and CI integration
  • Review performance considerations
  • Assess security implications
  • Check integration with existing ORM features

🔍 Comprehensive Review Results

Security & SQL Safety - Excellent

  • Parameterized Queries: All SQL generation uses Kysely's safe sql.val() and sql.lit() methods
  • No SQL Injection Risks: Proper escaping in complex operations like JSON_CONTAINS(${lhs}, ${sql.val(JSON.stringify(rhs))})
  • Type Safety: Leverages TypeScript and Kysely's type system throughout

Code Quality - High Standard

  • Architecture: Clean extension of BaseCrudDialect with proper MySQL-specific overrides
  • Consistency: Follows established patterns from PostgreSQL and SQLite dialects
  • Error Handling: Database-specific error code mapping (e.g., ER_DUP_ENTRY for MySQL unique violations)

Feature Completeness - Comprehensive

Core MySQL Features Implemented:

  • JSON Operations: Full support with JSON_OBJECT, JSON_EXTRACT, JSON_CONTAINS, JSON_ARRAYAGG
  • Data Type Mapping: Complete type system (tinyint(1) for boolean, datetime(3), blob, json)
  • Capability Flags: Correctly identifies MySQL limitations (no DISTINCT ON, limited RETURNING support)
  • Query Features: UPDATE/DELETE with LIMIT, proper offset handling with required LIMIT

Advanced Features:

  • LATERAL Joins: Proper MySQL 8.0.14+ support for complex relation queries
  • Enum Handling: Inline ENUM definitions with proper value escaping
  • Array Emulation: JSON-based array storage with proper transformation
  • Null Handling: Distinguishes between DB NULL, JSON null, and application null types

Test Coverage - Thorough

  • CI Integration: MySQL 8.4 added to test matrix alongside PostgreSQL and SQLite
  • Cross-Database Tests: All 78+ test files updated to handle MySQL-specific behaviors
  • Feature Gating: Proper handling of unsupported features (createManyAndReturn, RETURNING clauses)
  • Provider Detection: Consistent db.$schema.provider.type === 'mysql' checks

⚠️ Minor Considerations

Performance Notes:

  • Large JSON operations may need monitoring in production
  • Array emulation via JSON has different performance characteristics than native arrays
  • Complex relation queries use LATERAL joins which require MySQL 8.0.14+

Documentation Gaps:

  • MySQL version requirements (8.0.14+ for LATERAL joins) could be better documented
  • JSON array performance implications not explicitly called out

Edge Cases Handled Well:

  • Timezone handling in DateTime transformations
  • Key-order insensitive JSON equality comparisons
  • Proper NULLS FIRST/LAST emulation for ORDER BY

🎯 Recommendation: APPROVE

This is a high-quality implementation that:

  1. Maintains Security: No SQL injection vulnerabilities, proper parameterization
  2. Follows Patterns: Consistent with existing dialect architecture
  3. Comprehensive Testing: Extensive cross-database test coverage
  4. Production Ready: Handles edge cases and MySQL-specific limitations appropriately

The implementation demonstrates deep understanding of MySQL's capabilities and limitations, with thoughtful workarounds for missing features like native array types and limited RETURNING support.

Files reviewed: 78 changed files including core dialect, query executor, test infrastructure, and comprehensive test suite updates.


@coderabbitai
Copy link

coderabbitai bot commented Jan 23, 2026

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Walkthrough

Adds first-class MySQL support across the ORM: new MySQL CRUD dialect and dialect abstractions, provider-aware type/validation updates, runtime guards for MySQL feature limits (no RETURNING, limited batch-return), CI/test infra and many tests adapted for multi-provider behavior.

Changes

Cohort / File(s) Summary
CI & Packaging
\.github/workflows/build-test.yml, pnpm-workspace.yaml, package.json, packages/orm/package.json, packages/orm/tsup.config.ts
Added MySQL service to CI and a step to configure MySQL; added mysql2 to workspace and peers (optional); new package export for MySQL dialect and tsup entry.
Provider types & constants
packages/schema/src/schema.ts, packages/language/src/constants.ts, TODO.md
Expanded provider union to include mysql; enabled mysql in SUPPORTED_PROVIDERS; added DB_PROVIDERS_SUPPORTING_LIST_TYPE and checklist updates.
Language/document validation
packages/language/src/document.ts, packages/language/src/validators/datamodel-validator.ts
Moved provider-gated list-field validation into document post-merge checks and removed older sqlite-only validation paths.
Dialect abstraction & core
packages/orm/src/client/crud/dialects/base-dialect.ts, packages/orm/src/client/crud/dialects/index.ts, packages/orm/src/client/crud/dialects/*.ts
Added capability getters (supportsReturning, supportsDistinctOn, etc.), replaced transformPrimitive with transformInput/transformOutput, added cast/trim helpers, changed buildOrderBy signature; factory returns MySQL dialect; updated Postgres/SQLite dialects to new API.
MySQL dialect implementation
packages/orm/src/client/crud/dialects/mysql.ts, packages/orm/src/dialects/mysql.ts
New MySqlCrudDialect class implementing MySQL-specific SQL/JSON/array handling, returning/read-back strategies, and utility overrides; re-exports Kysely MysqlDialect.
ORM client surface & types
packages/orm/src/client/contract.ts, packages/orm/src/client/crud-types.ts, packages/orm/src/client/client-impl.ts
Type-level gating for provider-specific ops (createManyAndReturn/updateManyAndReturn, distinct support); runtime guards throwing for unsupported MySQL operations.
CRUD operations & utilities
packages/orm/src/client/crud/operations/*.ts, packages/orm/src/client/functions.ts, packages/orm/src/client/query-utils.ts
Switched to dialect-aware transforms and castInt/castText; added read-back logic for non-RETURNING dialects; updated orderBy/aggregate flows to new signatures.
Executor, name-mapper & helpers
packages/orm/src/client/executor/name-mapper.ts, packages/orm/src/client/executor/zenstack-query-executor.ts, packages/orm/src/client/helpers/schema-db-pusher.ts
Name mapper now accepts ClientContract and derives dialect; executor made dialect-aware (mutation flows, read-backs, hooks); DB pusher/type mapping made provider-specific.
Policy plugin & transform utilities
packages/plugins/policy/src/*
Policy handler refactored for dialect-aware post-update checks, values-table usage, stripTableReferences helper; primitives now use transformInput.
ORM packaging & exports
packages/orm/package.json, packages/orm/tsup.config.ts, packages/orm/src/dialects/mysql.ts
Added exports and build entries for MySQL dialect and re-exports for Kysely types.
Test tooling & fixtures
packages/testtools/src/*, packages/testtools/package.json
Added MySQL test config/URL, dialect factory (mysql2 pool support), DB prepare helpers, extended test APIs/types to include mysql.
E2E & regression tests
tests/e2e/**, tests/regression/**, tests/e2e/package.json, packages/server/test/**, tests/**
Many tests updated: gated/skipped blocks for MySQL unsupported features (createManyAndReturn, updateManyAndReturn, distinct), broadened error-message regexes, provider-aware SQL quoting/timestamps, added MySQL-specific branches/fallbacks.

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Poem

🐰 I hopped into the code with a twitchy nose,
MySQL seeds and JSON berries in my toes,
Dialects learned new tricks and tests learned to mind,
No RETURNING? I dug a read-back find.
Hooray — three-flavor ORM, a carrot-shaped rhyme! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 15.79% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat(orm): mysql support' clearly summarizes the main change - adding MySQL support to the ORM. It is concise, specific, and directly related to the substantial changes throughout the codebase for MySQL dialect implementation.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

Copy link
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

This PR adds MySQL as a supported ORM provider and updates the runtime, dialect layer, and test infrastructure to account for MySQL-specific SQL/feature differences (notably lack of RETURNING, collation/case-sensitivity, and differing DDL/quoting).

Changes:

  • Introduces a MySQL CRUD dialect and wires provider selection throughout the ORM/runtime.
  • Updates policy enforcement, mutation hooks, and executor/read-back logic to work without RETURNING.
  • Expands CI/test tooling and updates many tests for MySQL compatibility or skips unsupported operations.

Reviewed changes

Copilot reviewed 77 out of 78 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
tests/regression/test/v2-migrated/issue-1894.test.ts Uses created record’s id instead of assuming id: 0 works across providers.
tests/regression/test/v2-migrated/issue-1681.test.ts Skips createManyAndReturn assertion for MySQL.
tests/regression/test/v2-migrated/issue-1576.test.ts Adds MySQL fallback path since createManyAndReturn isn’t supported.
tests/e2e/package.json Adds ts-pattern dependency for new provider branching.
tests/e2e/orm/query-builder/query-builder.test.ts Normalizes DateTime string format for MySQL compatibility.
tests/e2e/orm/policy/policy-functions.test.ts Accounts for MySQL case-insensitive string comparison defaults.
tests/e2e/orm/policy/nonexistent-models.test.ts Uses provider-specific DDL for creating tables with correct quoting/types.
tests/e2e/orm/policy/migrated/view.test.ts Uses provider-specific CREATE VIEW syntax; errors on unsupported providers.
tests/e2e/orm/policy/migrated/update-many-and-return.test.ts Skips MySQL where updateManyAndReturn isn’t supported; enables additional coverage on other DBs.
tests/e2e/orm/policy/migrated/toplevel-operations.test.ts Adjusts expectations for MySQL limitations around ID updates under post-update policies.
tests/e2e/orm/policy/migrated/nested-to-one.test.ts Adjusts/blocks MySQL behavior for nested updates that mutate IDs.
tests/e2e/orm/policy/migrated/multi-id-fields.test.ts Adds MySQL-specific behavior for post-update policy checks with ID updates.
tests/e2e/orm/policy/migrated/deep-nested.test.ts Broadens error matching to include MySQL duplicate/constraint text.
tests/e2e/orm/policy/migrated/current-operation.test.ts Adjusts expected policy outcome for MySQL case-insensitive comparisons.
tests/e2e/orm/policy/migrated/current-model.test.ts Same as above for current-model tests and casing variants.
tests/e2e/orm/policy/migrated/create-many-and-return.test.ts Skips MySQL since createManyAndReturn isn’t supported.
tests/e2e/orm/policy/migrated/auth.test.ts Skips MySQL-only unsupported returning operations; broadens constraint message matching.
tests/e2e/orm/policy/crud/update.test.ts Adds MySQL-specific upsert/on-duplicate behavior and relaxes constraint matching.
tests/e2e/orm/policy/crud/read.test.ts Forces PostgreSQL provider for features only supported there (FROM).
tests/e2e/orm/policy/crud/create.test.ts Removes .returningAll() dependence to work on providers without RETURNING.
tests/e2e/orm/plugin-infra/on-kysely-query.test.ts Makes inserted DateTime values MySQL-compatible and simplifies insert node creation.
tests/e2e/orm/plugin-infra/ext-query-args.test.ts Skips createManyAndReturn/updateManyAndReturn expectations for MySQL.
tests/e2e/orm/plugin-infra/entity-mutation-hooks.test.ts Adds assertion for beforeMutationEntities availability in after hooks.
tests/e2e/orm/client-api/update.test.ts Broadens constraint matching for MySQL error text.
tests/e2e/orm/client-api/update-many.test.ts Skips updateManyAndReturn on MySQL due to missing RETURNING.
tests/e2e/orm/client-api/type-coverage.test.ts Uses usePrismaPush and narrows array-type test to PostgreSQL.
tests/e2e/orm/client-api/relation/self-relation.test.ts Import reordering only.
tests/e2e/orm/client-api/raw-query.test.ts Adds provider-specific quoting and SQL variants for MySQL.
tests/e2e/orm/client-api/name-mapping.test.ts Skips/adjusts RETURNING-dependent assertions on MySQL.
tests/e2e/orm/client-api/mixin.test.ts Broadens constraint matching for MySQL error text.
tests/e2e/orm/client-api/json-filter.test.ts Strengthens assertion for created JSON content.
tests/e2e/orm/client-api/find.test.ts Extends distinct-not-supported branch to MySQL as well.
tests/e2e/orm/client-api/error-handling.test.ts Adds MySQL error code mapping and broadens message matching.
tests/e2e/orm/client-api/delegate.test.ts Skips createManyAndReturn/updateManyAndReturn on MySQL; broadens constraint matching.
tests/e2e/orm/client-api/create-many-and-return.test.ts Skips MySQL since createManyAndReturn isn’t supported.
tests/e2e/orm/client-api/computed-fields.test.ts Changes computed expression to be compatible across providers and skips MySQL returning test.
pnpm-workspace.yaml Adds mysql2 to catalog for workspace consumption.
pnpm-lock.yaml Locks mysql2 and ts-pattern additions and updated transitive graph.
packages/testtools/src/schema.ts Extends schema generation helpers to support MySQL datasource prelude/placeholder replacement.
packages/testtools/src/client.ts Adds MySQL test DB config, dialect creation, and DB provisioning logic.
packages/testtools/package.json Adds mysql2 dependency for MySQL test dialect setup.
packages/server/test/api/rpc.test.ts Formatting / type cast adjustments.
packages/server/test/api/rest.test.ts Adjusts test data IDs and formatting.
packages/schema/src/schema.ts Adds mysql to DataSourceProviderType.
packages/plugins/policy/src/utils.ts Switches to transformInput API.
packages/plugins/policy/src/policy-handler.ts Reworks post-update handling and adds MySQL onDuplicateKeyUpdate support.
packages/plugins/policy/src/functions.ts Adjusts check function query structure.
packages/plugins/policy/src/expression-transformer.ts Switches to transformInput API.
packages/orm/tsup.config.ts Adds MySQL dialect entrypoint build.
packages/orm/src/dialects/mysql.ts Exposes Kysely MysqlDialect from ORM package.
packages/orm/src/client/query-utils.ts Adjusts default orderBy typing and removes unused type import.
packages/orm/src/client/plugin.ts Adds beforeMutationEntities to after-mutation hook args.
packages/orm/src/client/helpers/schema-db-pusher.ts Adds MySQL DDL/type/default handling for test-only schema pushing.
packages/orm/src/client/functions.ts Makes text matching/escaping logic more cross-dialect friendly.
packages/orm/src/client/executor/zenstack-query-executor.ts Adds MySQL-aware mutation hook read-back logic and normalizes mutation results.
packages/orm/src/client/executor/name-mapper.ts Uses dialect-aware casting for name-mapping transformations.
packages/orm/src/client/crud/operations/update.ts Forces read-back for dialects without RETURNING.
packages/orm/src/client/crud/operations/group-by.ts Uses dialect casting and passes take into orderBy builder.
packages/orm/src/client/crud/operations/delete.ts Uses numAffectedRows for correctness on non-returning dialects.
packages/orm/src/client/crud/operations/create.ts Wraps createMany in a transaction.
packages/orm/src/client/crud/operations/count.ts Uses dialect casting for counts.
packages/orm/src/client/crud/operations/base.ts Adds MySQL fallbacks for create/update/delete and expands insert ignore handling.
packages/orm/src/client/crud/operations/aggregate.ts Uses dialect casting and passes take into orderBy builder.
packages/orm/src/client/crud/dialects/sqlite.ts Introduces capability flags, casting helpers, and VALUES-table helper.
packages/orm/src/client/crud/dialects/postgresql.ts Introduces capability flags, casting helpers, and VALUES-table helper.
packages/orm/src/client/crud/dialects/mysql.ts Adds new MySQL CRUD dialect implementation.
packages/orm/src/client/crud/dialects/index.ts Registers MySQL dialect in dialect factory.
packages/orm/src/client/crud/dialects/base-dialect.ts Adds capability flags, transformInput, casting hooks, and VALUES-table interface.
packages/orm/src/client/crud-types.ts Makes distinct type available only for providers that support it.
packages/orm/src/client/contract.ts Removes createManyAndReturn/updateManyAndReturn from MySQL model contract.
packages/orm/src/client/client-impl.ts Throws explicit not-supported errors for MySQL *AndReturn operations.
packages/orm/package.json Adds mysql dialect export and mysql2 optional peer dependency.
packages/language/src/validators/datamodel-validator.ts Removes sqlite-specific list-type validation in favor of centralized validation.
packages/language/src/document.ts Adds provider-aware validation for list types after import merge.
packages/language/src/constants.ts Adds MySQL to supported providers; defines providers supporting list types.
package.json Adds test:mysql and includes it in test:all.
TODO.md Marks MySQL and custom procedures as completed.
.github/workflows/build-test.yml Adds MySQL service and runs matrix tests against mysql provider.
Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment on lines +2534 to +2538
if (!this.dialect.supportsReturning) {
// if the dialect doesn't support RETURNING, we always need read back
return { needReadBack: true, selectedFields: undefined };
}

Copy link

Copilot AI Jan 23, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

mutationNeedsReadBack checks !this.dialect.supportsReturning twice, which is redundant and makes the control flow harder to follow. Remove the duplicate block (keep a single early-return) so future changes don’t accidentally diverge between the two checks.

Suggested change
if (!this.dialect.supportsReturning) {
// if the dialect doesn't support RETURNING, we always need read back
return { needReadBack: true, selectedFields: undefined };
}

Copilot uses AI. Check for mistakes.
Comment on lines +774 to +800
override buildValuesTableSelect(fields: FieldDef[], rows: unknown[][]) {
const cols = rows[0]?.length ?? 0;

if (fields.length !== cols) {
throw createInvalidInputError('Number of fields must match number of columns in each row');
}

// check all rows have the same length
for (const row of rows) {
if (row.length !== cols) {
throw createInvalidInputError('All rows must have the same number of columns');
}
}

// build final alias name as `$values(f1, f2, ...)`
const aliasWithColumns = `$values(${fields.map((f) => f.name).join(', ')})`;

const eb = expressionBuilder<any, any>();

return eb
.selectFrom(
sql`(VALUES ${sql.join(
rows.map((row) => sql`ROW(${sql.join(row.map((v) => sql.val(v)))})`),
sql.raw(', '),
)}) as ${sql.raw(aliasWithColumns)}` as unknown as TableExpression<any, any>,
)
.selectAll();
Copy link

Copilot AI Jan 23, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

buildValuesTableSelect doesn't guard against rows being empty. With rows.length === 0, cols becomes 0 and the method will generate invalid SQL like VALUES with no rows. Add an explicit rows.length === 0 check (as done in the Postgres/SQLite dialects) and throw a validation error.

Copilot uses AI. Check for mistakes.

// aggregations
if (['_count', '_avg', '_sum', '_min', '_max'].includes(field)) {
invariant(value && typeof value === 'object', `invalid orderBy value for field "${field}"`);
Copy link

Copilot AI Jan 23, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This use of variable 'value' always evaluates to true.

Copilot uses AI. Check for mistakes.

switch (field) {
case '_count': {
invariant(value && typeof value === 'object', 'invalid orderBy value for field "_count"');
Copy link

Copilot AI Jan 23, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This use of variable 'value' always evaluates to true.

Copilot uses AI. Check for mistakes.
if (value === 'asc' || value === 'desc') {
result = result.orderBy(fieldRef, this.negateSort(value, negated));
} else if (
value &&
Copy link

Copilot AI Jan 23, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This use of variable 'value' always evaluates to true.

Suggested change
value &&

Copilot uses AI. Check for mistakes.
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
tests/e2e/orm/client-api/delegate.test.ts (1)

179-187: Make error matching resilient when cause is absent.

The predicate assumes e.cause.message exists; some providers surface the message on e.message instead, which can make the matcher throw for the wrong reason.

💡 Suggested hardening
-            ).rejects.toSatisfy((e) => e.cause.message.toLowerCase().match(/(constraint)|(duplicate)/i));
+            ).rejects.toSatisfy((e) => {
+                const msg = String(e?.cause?.message ?? e?.message ?? '');
+                return /(constraint|duplicate)/i.test(msg);
+            });
packages/orm/src/client/helpers/schema-db-pusher.ts (1)

327-332: Array type handling incompatible with RawBuilder.

When mapFieldType returns a RawBuilder (e.g., for MySQL's varchar(255)), the array branch at line 329 attempts to use it in a template string, which will produce incorrect SQL like [object Object][].

🐛 Proposed fix
         if (fieldDef.array) {
             // Kysely doesn't support array type natively
-            return sql.raw(`${result}[]`);
+            if (typeof result === 'string') {
+                return sql.raw(`${result}[]`);
+            } else {
+                // For RawBuilder types, we need to handle this differently
+                // Note: MySQL doesn't support array types natively anyway
+                throw new Error(`Array types are not supported for provider ${this.schema.provider.type}`);
+            }
         } else {
             return result as ColumnDataType;
         }
packages/orm/src/client/crud/dialects/base-dialect.ts (1)

734-752: Fix JSON null filtering for Postgres (current predicate collapses to false).
transformInput(null, 'Json', false) returns SQL NULL for Postgres, which makes lhs = NULL AND lhs IS NOT NULL impossible and degrades AnyNull to DbNull only.

🐛 Proposed fix
-        } else if (value instanceof JsonNullClass) {
-            return this.eb.and([
-                this.eb(lhs, '=', this.transformInput(null, 'Json', false)),
-                this.eb(lhs, 'is not', null),
-            ]);
-        } else if (value instanceof AnyNullClass) {
-            // AnyNull matches both DB NULL and JSON null
-            return this.eb.or([this.eb(lhs, 'is', null), this.eb(lhs, '=', this.transformInput(null, 'Json', false))]);
+        } else if (value instanceof JsonNullClass) {
+            const jsonNull = this.transformInput(new JsonNullClass(), 'Json', false);
+            return this.eb.and([this.eb(lhs, '=', jsonNull), this.eb(lhs, 'is not', null)]);
+        } else if (value instanceof AnyNullClass) {
+            // AnyNull matches both DB NULL and JSON null
+            const jsonNull = this.transformInput(new JsonNullClass(), 'Json', false);
+            return this.eb.or([this.eb(lhs, 'is', null), this.eb(lhs, '=', jsonNull)]);
🤖 Fix all issues with AI agents
In `@packages/orm/src/client/crud/dialects/mysql.ts`:
- Around line 668-694: The code contains duplicate handling for the "_count"
aggregate: the initial if that checks ['_count','_avg','_sum','_min','_max']
already processes "_count" and continues, making the switch case for '_count'
unreachable; remove the redundant switch case block for '_count' (the case
labeled '_count' that uses eb.fn.count and buildFieldRef) so only the
aggregate-handling branch (which calls aggregate(...)) remains, leaving other
switch cases intact.
- Around line 528-531: The buildArrayLiteralSQL implementations are vulnerable
because string elements are interpolated without escaping single quotes; update
the MySQL override buildArrayLiteralSQL and the PostgreSQL counterpart (the
override buildArrayLiteralSQL in postgresql.ts) to properly escape string values
before interpolation (e.g., replace single quote characters ' with doubled ''),
and ensure strings are wrapped in single quotes after escaping; leave non-string
values untouched and preserve current array formatting (MySQL JSON_ARRAY(...)
and Postgres ARRAY[...] or ARRAY[...]::type) so only the string-escaping logic
is added to the existing functions.

In `@packages/orm/src/client/crud/operations/base.ts`:
- Around line 2534-2537: There is a duplicate, unreachable check of
this.dialect.supportsReturning that returns { needReadBack: true,
selectedFields: undefined }; remove that redundant block from the method in
base.ts (the one using supportsReturning/needReadBack/selectedFields) so the
earlier check (lines ~2524–2527) is the single source of truth; ensure no other
logic relies on this second return before deleting it.
- Around line 1292-1296: The current check uses updateResult.numAffectedRows to
decide whether to set updatedEntity = null which can be false-negative on MySQL
when rows matched but no values changed; update the logic in the update flow
(around executeQuery / updateQuery handling) to: if updateResult.numAffectedRows
is undefined or zero, either (a) prefer updateResult.numUpdatedRows when
present, or (b) perform the read-back query unconditionally (or if you already
fetched thisEntity earlier, use that presence to determine existence) and only
set updatedEntity = null when the read-back confirms the row is missing; ensure
references to executeQuery, updateQuery, numAffectedRows, numUpdatedRows,
updatedEntity and thisEntity are updated accordingly.

In `@packages/orm/src/client/helpers/schema-db-pusher.ts`:
- Line 298: The current sql.raw call interpolates enumValues directly (return
sql.raw(`enum(${enumValues.map((v) => `'${v}'`).join(', ')})`)), which allows
quotes/injection; fix by using parameterized bindings or proper escaping: build
a placeholder list like enum(${enumValues.map(() => '?').join(',')}) and call
sql.raw with that string and the enumValues as bindings (or use your SQL
builder's literal/escape helper) so values are passed as parameters instead of
directly injected into the raw SQL; update the sql.raw invocation that
references enumValues to use these bindings.

In `@packages/testtools/src/client.ts`:
- Around line 289-295: The cleanup for the MySQL callback-style pool is using
mysqlPool.end() which is not promise-aware; update the teardown in the provider
=== 'mysql' branch to call the promise-wrapped end via mysqlPool.promise().end()
so the await actually waits for the pool to close (keep the existing use of
mysqlPool.promise().query(...) for queries and change only the final end() call
on the mysqlPool created by createMysqlPool with TEST_MYSQL_CONFIG, ensuring
dbName invariant handling remains unchanged).

In `@tests/e2e/orm/client-api/raw-query.test.ts`:
- Around line 20-22: The comparison in function ref uses an unnecessary and
unsafe cast ('mysql' as any); remove the as any cast and compare directly to the
string literal so TypeScript uses the existing DataSourceProviderType on
client.$schema.provider.type (i.e., update the conditional in ref to use
client.$schema.provider.type === 'mysql' and keep the sql.raw branches intact).

In `@tests/e2e/orm/plugin-infra/ext-query-args.test.ts`:
- Line 126: Remove the unnecessary type assertion in the isMySql check: change
the comparison that currently uses ('mysql' as any) to a plain 'mysql' string
literal so the expression reads like const isMySql = db.$schema.provider.type
=== 'mysql'; update the code around the isMySql variable (referencing
db.$schema.provider.type and isMySql) to match other tests and production usage
without the cast.
♻️ Duplicate comments (1)
packages/server/test/api/rpc.test.ts (1)

257-270: Same as any pattern as above.

Same observation as line 170 - the type assertion works around a type mismatch. Consider a unified fix if addressing the underlying type definitions.

🧹 Nitpick comments (32)
tests/e2e/orm/policy/policy-functions.test.ts (1)

17-19: Optional: DRY up repeated provider check.

The ['sqlite', 'mysql'].includes(...) branch is repeated three times. Consider extracting a small helper to keep the test intent focused and reduce duplication.

♻️ Possible refactor
 describe('policy functions tests', () => {
+    const isCaseInsensitiveProvider = (db: { $schema: { provider: { type: string } } }) =>
+        ['sqlite', 'mysql'].includes(db.$schema.provider.type);
+
     it('supports contains case-sensitive', async () => {
@@
-        if (['sqlite', 'mysql'].includes(db.$schema.provider.type)) {
+        if (isCaseInsensitiveProvider(db)) {
             // sqlite and mysql are always case-insensitive
             await expect(db.foo.create({ data: { string: 'Acd' } })).toResolveTruthy();
         } else {
@@
-        if (['sqlite', 'mysql'].includes(db.$schema.provider.type)) {
+        if (isCaseInsensitiveProvider(db)) {
             // sqlite and mysql are always case-insensitive
             await expect(db.foo.create({ data: { string: 'Acd' } })).toResolveTruthy();
         } else {
@@
-        if (['sqlite', 'mysql'].includes(db.$schema.provider.type)) {
+        if (isCaseInsensitiveProvider(db)) {
             // sqlite and mysql are always case-insensitive
             await expect(db.$setAuth({ id: 'user1', name: 'Abc' }).foo.create({ data: {} })).toResolveTruthy();
         } else {

Also applies to: 54-56, 96-98

tests/e2e/orm/client-api/mixin.test.ts (1)

78-78: Redundant case handling and prefer test() over match() for boolean assertions.

The code applies .toLowerCase() and then uses the i (case-insensitive) flag in the regex — only one is needed. Additionally, toSatisfy expects a boolean, so RegExp.test() is more appropriate than String.match().

✨ Suggested cleanup
-        ).rejects.toSatisfy((e) => e.cause.message.toLowerCase().match(/(constraint)|(duplicate)/i));
+        ).rejects.toSatisfy((e) => /constraint|duplicate/i.test(e.cause.message));
tests/e2e/orm/client-api/create-many-and-return.test.ts (2)

18-21: Consider using it.skipIf for better test reporting and remove the type assertion.

The as any cast suggests the type system doesn't recognize 'mysql' as a valid provider type yet—this should be addressed at the type definition level as part of this PR's MySQL support.

Additionally, using an early return makes skipped tests appear as "passed" in test reports. Vitest's it.skipIf provides clearer visibility:

♻️ Suggested refactor using vitest's skipIf
-    it('works with toplevel createManyAndReturn', async () => {
-        if (client.$schema.provider.type === ('mysql' as any)) {
-            // mysql doesn't support createManyAndReturn
-            return;
-        }
+    it.skipIf(process.env.TEST_PROVIDER === 'mysql')(
+        'works with toplevel createManyAndReturn',
+        async () => {

Alternatively, if the runtime check is required, consider extracting a helper:

const isMysql = () => client.$schema.provider.type === 'mysql';

This would require updating the provider type definitions to include 'mysql' as a valid literal.


67-70: Duplicated guard pattern—consider consolidating.

Same concerns as the earlier guard. Since the entire test suite is for createManyAndReturn, consider wrapping the whole describe block conditionally or using describe.skipIf to avoid repeating this guard in every test case.

tests/e2e/orm/client-api/update.test.ts (1)

1053-1053: Minor: Redundant case handling in regex assertion.

The i flag already makes the regex case-insensitive, so .toLowerCase() is unnecessary. Additionally, the parentheses in the regex pattern can be removed for simplicity.

Suggested simplification
-            ).rejects.toSatisfy((e) => e.cause.message.toLowerCase().match(/(constraint)|(duplicate)/i));
+            ).rejects.toSatisfy((e) => e.cause.message.match(/constraint|duplicate/i));
packages/server/test/api/rpc.test.ts (1)

161-170: Consider addressing the underlying type mismatch.

The as any assertion bypasses type checking on the procedures configuration. While acceptable in test code, this could mask legitimate type errors if the procedure handler signatures evolve.

If the type mismatch stems from recent changes to createPolicyTestClient's type definitions, consider updating the type definitions to accommodate procedure options, or adding a more specific type annotation.

tests/e2e/orm/query-builder/query-builder.test.ts (1)

6-7: Consider parameterizing this test for MySQL coverage.

The timestamp format changes suggest MySQL compatibility considerations, but the test is hardcoded to SQLite. For a MySQL support PR, consider parameterizing the schema provider to validate query builder functionality against MySQL as well.

-describe('Client API tests', () => {
-    const schema = getSchema('sqlite');
+describe.each(['sqlite', 'mysql'] as const)('Client API tests (%s)', (provider) => {
+    const schema = getSchema(provider);

As per coding guidelines, E2E tests should validate real-world schema compatibility.

tests/regression/test/v2-migrated/issue-1894.test.ts (1)

45-46: Good fix for MySQL compatibility; consider removing explicit id: 0.

Using r.id instead of a hardcoded value correctly handles MySQL's autoincrement behavior. However, passing id: 0 explicitly is still problematic since MySQL treats 0 for autoincrement columns as "generate next value" rather than inserting literal 0. For consistent cross-database behavior, consider letting the database generate the id:

Suggested change
-        const r = await db.a.create({ data: { id: 0 } });
+        const r = await db.a.create({ data: {} });
         await expect(db.c.create({ data: { a: { connect: { id: r.id } } } })).toResolveTruthy();
tests/e2e/orm/plugin-infra/on-kysely-query.test.ts (1)

96-103: Timestamp format workaround is acceptable but could be more robust.

The replace('Z', '+00:00') approach works for MySQL 8.0+ but has some caveats:

  • Depends on MySQL server accepting ISO 8601 with timezone offset in DATETIME/TIMESTAMP columns
  • Behavior may vary with MySQL's sql_mode settings

Consider using a database-agnostic format without timezone suffix:

const now = new Date().toISOString().slice(0, 23).replace('T', ' '); // 'YYYY-MM-DD HH:mm:ss.sss'

Or leverage the ORM's now() function if compatible with raw query builder context (per relevant snippet in functions.ts).

The inline query builder usage with toOperationNode() is the correct pattern for building Kysely operations within hooks.

tests/e2e/orm/client-api/name-mapping.test.ts (4)

88-89: Consider extracting a shared helper or reusing this variable across tests.

The as any cast suggests 'mysql' isn't yet a recognized provider type literal. This is fine during the transition, but the same check is repeated inline in other test functions (lines 398, 427) rather than reusing a shared pattern.

For consistency and maintainability, consider either:

  1. Extracting a helper function at the describe block level, or
  2. Updating the SchemaType or provider type definition to include 'mysql' so the cast becomes unnecessary.
Example: shared helper at describe level
describe('Name mapping tests', () => {
    let db: ClientContract<SchemaType>;

    const isMysql = () => db.$schema.provider.type === ('mysql' as any);
    // ... use isMysql() throughout tests
});

105-114: MySQL branch lacks verification of inserted data.

The non-MySQL path asserts that the returned data matches expected values (id, email, role), but the MySQL branch only inserts without verifying the result. Consider adding a follow-up select to confirm the inserted values, ensuring equivalent test coverage for MySQL.

Proposed enhancement
         } else {
             // mysql doesn't support returning, simply insert
             await db.$qb
                 .insertInto('User')
                 .values({
                     email: 'u2@test.com',
                     role: 'ADMIN',
                 })
                 .executeTakeFirst();
+
+            // Verify the insert by selecting the record
+            await expect(
+                db.$qb.selectFrom('User').selectAll().where('email', '=', 'u2@test.com').executeTakeFirst(),
+            ).resolves.toMatchObject({
+                id: expect.any(Number),
+                email: 'u2@test.com',
+                role: 'ADMIN',
+            });
         }

125-152: MySQL skips these test scenarios entirely.

These tests validate the returning clause with qualified column names ('User.id', 'User.email') and returningAll(). Since MySQL doesn't support RETURNING, the entire block is skipped, leaving no equivalent validation for MySQL.

If the goal is only to test RETURNING clause behavior, this is acceptable. However, if you also want to verify that name mapping works correctly for MySQL inserts, consider adding a MySQL-specific path that performs the insert and verifies via a follow-up select.


398-411: Inconsistent conditional pattern compared to other tests.

The 'works with create' and 'works with delete' tests have else branches for MySQL that perform the operation without returning. This test only uses an if (!mysql) guard with no else branch, creating inconsistency.

For consistency and coverage, consider adding a MySQL path that performs the update and verifies via select, similar to the pattern used elsewhere.

Suggested pattern for consistency
         if (db.$schema.provider.type !== ('mysql' as any)) {
             // ... existing non-MySQL tests ...
+        } else {
+            // MySQL doesn't support returning, verify via select
+            await db.$qb
+                .updateTable('User')
+                .set({ email: (eb) => eb.fn('upper', [eb.ref('email')]), role: 'USER' })
+                .where('email', '=', 'u2@test.com')
+                .execute();
+
+            await expect(
+                db.$qb.selectFrom('User').select(['email', 'role']).where('email', '=', 'U2@TEST.COM').executeTakeFirst(),
+            ).resolves.toMatchObject({ email: 'U2@TEST.COM', role: 'USER' });
         }
tests/e2e/orm/policy/migrated/toplevel-operations.test.ts (1)

169-200: Add a post‑force‑update assertion for MySQL.

After the forced update, the test doesn’t verify that the record actually moved to id: '2' with the expected value. Adding a quick assertion makes the MySQL branch more robust and prevents silent regressions that could affect the subsequent upsert expectations.

🧪 Suggested tweak
             // force update
             await db.$unuseAll().model.update({
                 where: { id: '1' },
                 data: {
                     id: '2',
                     value: 3,
                 },
             });
+            await expect(
+                db.$unuseAll().model.findUnique({ where: { id: '2' } }),
+            ).resolves.toMatchObject({ id: '2', value: 3 });
         }
packages/orm/src/client/crud/operations/base.ts (4)

516-534: Potential issue with composite primary keys containing auto-increment field.

When a model has a composite primary key where one field is auto-increment and others are not, the current logic iterates through all ID fields and assigns insertResult.insertId to the first auto-increment field it finds, then falls back to updatedData for others. This should work correctly, but consider adding a comment explaining this behavior for maintainability.

Also, insertResult.insertId may be a bigint in MySQL. Ensure downstream code handles both number and bigint types correctly.


1013-1027: Fallback to sequential creates for non-RETURNING dialects is correct but may have performance implications.

For polymorphic models on MySQL, this falls back to N individual create calls instead of a single batch insert. This is necessary for correctness but could be slow for large batches. Consider documenting this limitation or logging a debug message when this path is taken.


1649-1657: Potential race condition between SELECT and UPDATE for non-RETURNING dialects.

The code selects records to update, then executes the update, then queries them back. In a concurrent environment without an explicit transaction, another process could modify or delete these records between steps.

Since updateMany is already within safeTransaction in most call paths, this may be acceptable. However, consider adding a comment documenting this assumption, or ensuring the transaction isolation level is sufficient.


1668-1681: Consider query efficiency for large batch updates without RETURNING.

Building an OR of AND conditions for each updated record could generate very large queries when updating many rows. For example, updating 1000 records with 2 ID fields generates 2000+ conditions.

Consider using IN with tuple comparison for databases that support it, or adding a reasonable limit with pagination for very large updates.

packages/plugins/policy/src/policy-handler.ts (2)

288-288: Use the class-level expression builder for consistency.

Line 288 creates a new expressionBuilder instance, but the class already has a reusable this.eb field (line 69). Consider using this.eb for consistency.

♻️ Suggested change
-        const eb = expressionBuilder<any, any>();
-
-        // before update table is joined if fields from `before()` are used in post-update policies
-        const needsBeforeUpdateJoin = !!beforeUpdateInfo?.fields;
+        // before update table is joined if fields from `before()` are used in post-update policies
+        const needsBeforeUpdateJoin = !!beforeUpdateInfo?.fields;

Then replace all occurrences of eb with this.eb in this method (lines 304-318).


1343-1383: Consider explicit handling for AndNode and OrNode for code clarity.

The stripTableReferences helper relies on generic traversal to handle AndNode and OrNode, which are actively produced by buildPolicyFilter through conjunction() and disjunction() utilities (lines 1078–1110). While the generic object traversal (lines 1366–1379) should correctly process these nodes via the 'kind' in value check (since all Kysely OperationNode types have a kind property), explicitly handling AndNode and OrNode would improve code maintainability and make the intent clearer:

if (AndNode.is(node)) {
    return AndNode.create(
        this.stripTableReferences(node.left),
        this.stripTableReferences(node.right),
    );
}

if (OrNode.is(node)) {
    return OrNode.create(
        this.stripTableReferences(node.left),
        this.stripTableReferences(node.right),
    );
}

Add these checks before the generic traversal fallback to make the handling explicit and future-proof.

tests/e2e/orm/client-api/raw-query.test.ts (1)

5-7: Avoid shadowing the sql helper name

The local const sql = ... in two tests shadows the top-level sql import used by ref, which makes the flow a bit harder to scan. Renaming the local variable improves clarity.

♻️ Suggested rename
-        const sql = match(client.$schema.provider.type as DataSourceProviderType)
+        const statement = match(client.$schema.provider.type as DataSourceProviderType)
             .with('postgresql', () => `UPDATE "User" SET "email" = $1 WHERE "id" = $2`)
             .with('mysql', () => 'UPDATE `User` SET `email` = ? WHERE `id` = ?')
             .with('sqlite', () => 'UPDATE "User" SET "email" = ? WHERE "id" = ?')
             .exhaustive();
-        await expect(client.$executeRawUnsafe(sql, 'u2@test.com', '1')).resolves.toBe(1);
+        await expect(client.$executeRawUnsafe(statement, 'u2@test.com', '1')).resolves.toBe(1);
-        const sql = match(client.$schema.provider.type as DataSourceProviderType)
+        const statement = match(client.$schema.provider.type as DataSourceProviderType)
             .with('postgresql', () => `SELECT "User"."id", "User"."email" FROM "User" WHERE "User"."id" = $1`)
             .with('mysql', () => 'SELECT `User`.`id`, `User`.`email` FROM `User` WHERE `User`.`id` = ?')
             .with('sqlite', () => 'SELECT "User"."id", "User"."email" FROM "User" WHERE "User"."id" = ?')
             .exhaustive();

-        const users = await client.$queryRawUnsafe<{ id: string; email: string }[]>(sql, '1');
+        const users = await client.$queryRawUnsafe<{ id: string; email: string }[]>(statement, '1');

Also applies to: 46-50, 78-82

packages/testtools/package.json (1)

42-42: Consider consistency with other database drivers.

mysql2 is added as a direct dependency, while pg and better-sqlite3 are peer dependencies. This inconsistency may cause confusion about the intended dependency model for database drivers in test tooling.

If MySQL support is optional like PostgreSQL and SQLite, consider moving mysql2 to peerDependencies for consistency.

Suggested change
     "dependencies": {
         ...
         "kysely": "catalog:",
-        "mysql2": "catalog:",
         "prisma": "catalog:",
         ...
     },
     "peerDependencies": {
         "better-sqlite3": "catalog:",
+        "mysql2": "catalog:",
         "pg": "catalog:"
     },

Also applies to: 47-50

tests/regression/test/v2-migrated/issue-1681.test.ts (1)

28-31: Appropriate guard for MySQL's lack of RETURNING support.

The conditional correctly gates createManyAndReturn since MySQL doesn't support the RETURNING clause. This preserves test coverage for other providers while avoiding failures on MySQL.

For improved test output visibility, consider using Vitest's skipIf to make it clear in test results when this assertion is skipped:

Alternative using test.skipIf for better test reporting
it.skipIf(db.$schema.provider.type === 'mysql')(
    'createManyAndReturn with auth default',
    async () => {
        const r = await authDb.post.createManyAndReturn({ data: [{ title: 'Post2' }] });
        expect(r[0].authorId).toBe(user.id);
    }
);

Note: This would require restructuring the test since db is created inside the test. The current inline approach is pragmatic given the test structure.

tests/e2e/orm/policy/nonexistent-models.test.ts (1)

18-26: Consider explicitly checking for 'mysql' instead of using else.

The current else branch catches any non-PostgreSQL/SQLite provider, which could silently succeed for future unsupported providers. Consider making the MySQL case explicit for clarity and safety.

♻️ Suggested improvement
-        if (['postgresql', 'sqlite'].includes(dbRaw.$schema.provider.type)) {
+        const provider = dbRaw.$schema.provider.type;
+        if (provider === 'postgresql' || provider === 'sqlite') {
             await dbRaw.$executeRawUnsafe(
                 `CREATE TABLE "Bar" ("id" TEXT PRIMARY KEY, "string" TEXT, "fooId" TEXT, FOREIGN KEY ("fooId") REFERENCES "Foo" ("id"));`,
             );
-        } else {
+        } else if (provider === 'mysql') {
             await dbRaw.$executeRawUnsafe(
                 `CREATE TABLE Bar (id VARCHAR(191) PRIMARY KEY, string VARCHAR(191), fooId VARCHAR(191), FOREIGN KEY (fooId) REFERENCES Foo (id));`,
             );
+        } else {
+            throw new Error(`Unsupported provider: ${provider}`);
         }
tests/e2e/orm/policy/migrated/view.test.ts (1)

39-45: Provider-specific view creation is correctly implemented.

The branching handles PostgreSQL/SQLite's double-quoted identifiers and MySQL's unquoted identifiers appropriately. The fallback error for unsupported providers is good defensive practice.

Optional: Consider using backticks for MySQL identifiers

For consistency and safety with reserved words, you could use backticks in the MySQL version:

-            await rawDb.$executeRaw`CREATE VIEW UserInfo as select User.id, User.name, User.email, User.id as userId, count(Post.id) as postCount from User left join Post on User.id = Post.authorId group by User.id;`;
+            await rawDb.$executeRaw`CREATE VIEW \`UserInfo\` as select \`User\`.\`id\`, \`User\`.\`name\`, \`User\`.\`email\`, \`User\`.\`id\` as \`userId\`, count(\`Post\`.\`id\`) as \`postCount\` from \`User\` left join \`Post\` on \`User\`.\`id\` = \`Post\`.\`authorId\` group by \`User\`.\`id\`;`;

However, the current unquoted form works fine for this test's schema.

tests/e2e/orm/policy/crud/read.test.ts (1)

635-641: Guard appears redundant given explicit provider setting.

The test specifies { provider: 'postgresql' } at line 635, so the check at lines 638-641 should never trigger unless the test infrastructure ignores the provider option. While the guard is defensive and harmless, consider whether it's necessary.

If the intention is to allow this test to be skipped when running against a non-PostgreSQL database in CI (e.g., via environment variable override), this pattern makes sense. Otherwise, the guard could be removed since the provider is explicitly set.

packages/testtools/src/schema.ts (1)

43-47: Consider using DataSourceProviderType for consistency.

The provider parameter uses an inline union type 'sqlite' | 'postgresql' | 'mysql' instead of the imported DataSourceProviderType. This could cause drift if the canonical type is updated elsewhere.

Suggested fix
 function replacePlaceholders(
     schemaText: string,
-    provider: 'sqlite' | 'postgresql' | 'mysql',
+    provider: DataSourceProviderType,
     dbUrl: string | undefined,
 ) {
packages/orm/src/client/helpers/schema-db-pusher.ts (2)

273-302: Duplicate enum value extraction logic.

The logic for extracting enum values (handling @map attributes) is duplicated between push() (lines 30-48) and mapFieldType() (lines 281-297). Consider extracting this into a shared helper method.

♻️ Suggested refactor
+    private getEnumValues(enumDef: { fields?: Record<string, any>; values?: Record<string, string> }): string[] {
+        if (enumDef.fields) {
+            return Object.values(enumDef.fields).map((f) => {
+                const mapAttr = f.attributes?.find((a: any) => a.name === '@map');
+                if (!mapAttr || !mapAttr.args?.[0]) {
+                    return f.name;
+                } else {
+                    const mappedName = ExpressionUtils.getLiteralValue(mapAttr.args[0].value);
+                    invariant(
+                        mappedName && typeof mappedName === 'string',
+                        `Invalid `@map` attribute for enum field ${f.name}`,
+                    );
+                    return mappedName;
+                }
+            });
+        } else {
+            return Object.values(enumDef.values!);
+        }
+    }

Then use this.getEnumValues(enumDef) in both locations.


65-67: Consider making provider list a constant for maintainability.

The provider check in providerSupportsNativeEnum uses an inline array. As more providers are added, consider using a shared constant.

packages/language/src/document.ts (1)

256-267: Consider consolidating with the SDK's getDataSourceProvider.

A similar helper exists in packages/sdk/src/ts-schema-generator.ts that uses invariant to throw on missing values. This version returns undefined instead, which is appropriate here since validation has already ensured the datasource exists. The difference in error handling is intentional, but consider documenting this distinction or extracting a shared utility with configurable error behavior.

packages/language/src/constants.ts (1)

37-41: Consider adding a const assertion for stricter typing.

Adding as const would provide stricter type inference, allowing consumers to use the literal types rather than string[].

💡 Optional type improvement
 /**
  * Database providers that support list field types.
  */
-export const DB_PROVIDERS_SUPPORTING_LIST_TYPE = ['postgresql'];
+export const DB_PROVIDERS_SUPPORTING_LIST_TYPE = ['postgresql'] as const;
tests/e2e/orm/policy/migrated/multi-id-fields.test.ts (1)

127-148: Consider extracting MySQL detection to a test helper.

The pattern db.$schema.provider.type === 'mysql' is repeated multiple times. Consider extracting to a helper function for consistency and maintainability.

💡 Example helper extraction
// In test utilities
function isMySql(db: PolicyClient): boolean {
  return db.$schema.provider.type === 'mysql';
}

// Or as a property on the test client
const mysql = isMySql(db);

Comment on lines +528 to +531
override buildArrayLiteralSQL(values: unknown[]): string {
// MySQL uses JSON arrays since it doesn't have native arrays
return `JSON_ARRAY(${values.map((v) => (typeof v === 'string' ? `'${v}'` : v)).join(',')})`;
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for usages of buildArrayLiteralSQL
rg -n -A5 -B5 'buildArrayLiteralSQL' --type=ts

Repository: zenstackhq/zenstack-v3

Length of output: 5283


🏁 Script executed:

#!/bin/bash
# Check the file structure and the method definition in context
wc -l packages/orm/src/client/crud/dialects/mysql.ts

Repository: zenstackhq/zenstack-v3

Length of output: 116


🏁 Script executed:

#!/bin/bash
# Read the specific method and surrounding context
sed -n '520,540p' packages/orm/src/client/crud/dialects/mysql.ts

Repository: zenstackhq/zenstack-v3

Length of output: 1030


🏁 Script executed:

#!/bin/bash
# Check transformInput method
rg -n 'transformInput' --type=ts -A10 -B2

Repository: zenstackhq/zenstack-v3

Length of output: 43055


🏁 Script executed:

#!/bin/bash
# Check RawNode.createWithSql to understand what it expects
rg -n 'RawNode.createWithSql\|class RawNode' --type=ts -A5 -B2

Repository: zenstackhq/zenstack-v3

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Look at policy-handler context more carefully - check what item is and where it comes from
sed -n '930,960p' packages/plugins/policy/src/policy-handler.ts

Repository: zenstackhq/zenstack-v3

Length of output: 1386


SQL injection vulnerability in buildArrayLiteralSQL — fix both MySQL and PostgreSQL dialects.

String values are interpolated directly without proper escaping. This affects both the MySQL and PostgreSQL implementations. For implicit many-to-many join tables where transformInput is skipped, user-controlled array strings with quotes would cause syntax errors: JSON_ARRAY('test's value') instead of properly escaped JSON_ARRAY('test''s value').

Proposed fixes

For MySQL (packages/orm/src/client/crud/dialects/mysql.ts):

 override buildArrayLiteralSQL(values: unknown[]): string {
     // MySQL uses JSON arrays since it doesn't have native arrays
-    return `JSON_ARRAY(${values.map((v) => (typeof v === 'string' ? `'${v}'` : v)).join(',')})`;
+    return `JSON_ARRAY(${values.map((v) => (typeof v === 'string' ? `'${v.replace(/'/g, "''")}'` : v)).join(',')})`;
 }

For PostgreSQL (packages/orm/src/client/crud/dialects/postgresql.ts, line 562):

 return `ARRAY[${values.map((v) => (typeof v === 'string' ? `'${v}'` : v))}]`;
+return `ARRAY[${values.map((v) => (typeof v === 'string' ? `'${v.replace(/'/g, "''")}'` : v))}]`;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
override buildArrayLiteralSQL(values: unknown[]): string {
// MySQL uses JSON arrays since it doesn't have native arrays
return `JSON_ARRAY(${values.map((v) => (typeof v === 'string' ? `'${v}'` : v)).join(',')})`;
}
override buildArrayLiteralSQL(values: unknown[]): string {
// MySQL uses JSON arrays since it doesn't have native arrays
return `JSON_ARRAY(${values.map((v) => (typeof v === 'string' ? `'${v.replace(/'/g, "''")}'` : v)).join(',')})`;
}
🤖 Prompt for AI Agents
In `@packages/orm/src/client/crud/dialects/mysql.ts` around lines 528 - 531, The
buildArrayLiteralSQL implementations are vulnerable because string elements are
interpolated without escaping single quotes; update the MySQL override
buildArrayLiteralSQL and the PostgreSQL counterpart (the override
buildArrayLiteralSQL in postgresql.ts) to properly escape string values before
interpolation (e.g., replace single quote characters ' with doubled ''), and
ensure strings are wrapped in single quotes after escaping; leave non-string
values untouched and preserve current array formatting (MySQL JSON_ARRAY(...)
and Postgres ARRAY[...] or ARRAY[...]::type) so only the string-escaping logic
is added to the existing functions.

Comment on lines +668 to +694
if (['_count', '_avg', '_sum', '_min', '_max'].includes(field)) {
invariant(value && typeof value === 'object', `invalid orderBy value for field "${field}"`);
for (const [k, v] of Object.entries<SortOrder>(value)) {
invariant(v === 'asc' || v === 'desc', `invalid orderBy value for field "${field}"`);
result = result.orderBy(
(eb) => aggregate(eb, buildFieldRef(model, k, modelAlias), field as AGGREGATE_OPERATORS),
this.negateSort(v, negated),
);
}
continue;
}

switch (field) {
case '_count': {
invariant(value && typeof value === 'object', 'invalid orderBy value for field "_count"');
for (const [k, v] of Object.entries<string>(value)) {
invariant(v === 'asc' || v === 'desc', `invalid orderBy value for field "${field}"`);
result = result.orderBy(
(eb) => eb.fn.count(buildFieldRef(model, k, modelAlias)),
this.negateSort(v, negated),
);
}
continue;
}
default:
break;
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Unreachable code: duplicate _count handling.

The _count case is handled twice:

  1. Lines 668-678: In the aggregation operators check with continue
  2. Lines 680-691: In the switch statement (unreachable due to the continue)

The switch case for _count at line 681 will never execute.

🧹 Remove unreachable code
                 }
                 continue;
             }

-            switch (field) {
-                case '_count': {
-                    invariant(value && typeof value === 'object', 'invalid orderBy value for field "_count"');
-                    for (const [k, v] of Object.entries<string>(value)) {
-                        invariant(v === 'asc' || v === 'desc', `invalid orderBy value for field "${field}"`);
-                        result = result.orderBy(
-                            (eb) => eb.fn.count(buildFieldRef(model, k, modelAlias)),
-                            this.negateSort(v, negated),
-                        );
-                    }
-                    continue;
-                }
-                default:
-                    break;
-            }
-
             const fieldDef = requireField(this.schema, model, field);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (['_count', '_avg', '_sum', '_min', '_max'].includes(field)) {
invariant(value && typeof value === 'object', `invalid orderBy value for field "${field}"`);
for (const [k, v] of Object.entries<SortOrder>(value)) {
invariant(v === 'asc' || v === 'desc', `invalid orderBy value for field "${field}"`);
result = result.orderBy(
(eb) => aggregate(eb, buildFieldRef(model, k, modelAlias), field as AGGREGATE_OPERATORS),
this.negateSort(v, negated),
);
}
continue;
}
switch (field) {
case '_count': {
invariant(value && typeof value === 'object', 'invalid orderBy value for field "_count"');
for (const [k, v] of Object.entries<string>(value)) {
invariant(v === 'asc' || v === 'desc', `invalid orderBy value for field "${field}"`);
result = result.orderBy(
(eb) => eb.fn.count(buildFieldRef(model, k, modelAlias)),
this.negateSort(v, negated),
);
}
continue;
}
default:
break;
}
if (['_count', '_avg', '_sum', '_min', '_max'].includes(field)) {
invariant(value && typeof value === 'object', `invalid orderBy value for field "${field}"`);
for (const [k, v] of Object.entries<SortOrder>(value)) {
invariant(v === 'asc' || v === 'desc', `invalid orderBy value for field "${field}"`);
result = result.orderBy(
(eb) => aggregate(eb, buildFieldRef(model, k, modelAlias), field as AGGREGATE_OPERATORS),
this.negateSort(v, negated),
);
}
continue;
}
const fieldDef = requireField(this.schema, model, field);
🤖 Prompt for AI Agents
In `@packages/orm/src/client/crud/dialects/mysql.ts` around lines 668 - 694, The
code contains duplicate handling for the "_count" aggregate: the initial if that
checks ['_count','_avg','_sum','_min','_max'] already processes "_count" and
continues, making the switch case for '_count' unreachable; remove the redundant
switch case block for '_count' (the case labeled '_count' that uses eb.fn.count
and buildFieldRef) so only the aggregate-handling branch (which calls
aggregate(...)) remains, leaving other switch cases intact.

Comment on lines +1292 to +1296
const updateResult = await this.executeQuery(kysely, updateQuery, 'update');
if (!updateResult.numAffectedRows) {
// no rows updated
updatedEntity = null;
} else {
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Potential false negative when update doesn't change any values on MySQL.

MySQL returns numAffectedRows = 0 when an UPDATE statement matches a row but doesn't change any values (because the SET values are identical to existing values). This could incorrectly set updatedEntity = null even though the entity exists.

Consider using numUpdatedRows if available, or always performing the read-back query regardless of affected rows count.

🔧 Suggested approach
-                const updateResult = await this.executeQuery(kysely, updateQuery, 'update');
-                if (!updateResult.numAffectedRows) {
-                    // no rows updated
-                    updatedEntity = null;
-                } else {
+                await this.executeQuery(kysely, updateQuery, 'update');
+                // Note: MySQL returns numAffectedRows=0 if SET values match existing values,
+                // so we always attempt to read back the entity
+                {

Alternatively, keep track of whether the entity was found earlier (via thisEntity) and use that to determine if null should be returned.

🤖 Prompt for AI Agents
In `@packages/orm/src/client/crud/operations/base.ts` around lines 1292 - 1296,
The current check uses updateResult.numAffectedRows to decide whether to set
updatedEntity = null which can be false-negative on MySQL when rows matched but
no values changed; update the logic in the update flow (around executeQuery /
updateQuery handling) to: if updateResult.numAffectedRows is undefined or zero,
either (a) prefer updateResult.numUpdatedRows when present, or (b) perform the
read-back query unconditionally (or if you already fetched thisEntity earlier,
use that presence to determine existence) and only set updatedEntity = null when
the read-back confirms the row is missing; ensure references to executeQuery,
updateQuery, numAffectedRows, numUpdatedRows, updatedEntity and thisEntity are
updated accordingly.

Comment on lines +2534 to +2537
if (!this.dialect.supportsReturning) {
// if the dialect doesn't support RETURNING, we always need read back
return { needReadBack: true, selectedFields: undefined };
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Remove duplicate check for supportsReturning.

This block is unreachable because the same condition is already checked and returns at lines 2524-2527.

🧹 Suggested fix
         if (args.include && typeof args.include === 'object' && Object.keys(args.include).length > 0) {
             // includes present, need read back to fetch relations
             return { needReadBack: true, selectedFields: undefined };
         }
 
-        if (!this.dialect.supportsReturning) {
-            // if the dialect doesn't support RETURNING, we always need read back
-            return { needReadBack: true, selectedFields: undefined };
-        }
-
         const modelDef = this.requireModel(model);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (!this.dialect.supportsReturning) {
// if the dialect doesn't support RETURNING, we always need read back
return { needReadBack: true, selectedFields: undefined };
}
if (args.include && typeof args.include === 'object' && Object.keys(args.include).length > 0) {
// includes present, need read back to fetch relations
return { needReadBack: true, selectedFields: undefined };
}
const modelDef = this.requireModel(model);
🤖 Prompt for AI Agents
In `@packages/orm/src/client/crud/operations/base.ts` around lines 2534 - 2537,
There is a duplicate, unreachable check of this.dialect.supportsReturning that
returns { needReadBack: true, selectedFields: undefined }; remove that redundant
block from the method in base.ts (the one using
supportsReturning/needReadBack/selectedFields) so the earlier check (lines
~2524–2527) is the single source of truth; ensure no other logic relies on this
second return before deleting it.

} else {
enumValues = Object.values(enumDef.values);
}
return sql.raw(`enum(${enumValues.map((v) => `'${v}'`).join(', ')})`);
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Potential SQL injection in MySQL enum definition.

Enum values are interpolated directly into raw SQL without escaping. If an enum value contains a single quote, this will break the query or potentially allow injection.

🔒 Proposed fix
-                return sql.raw(`enum(${enumValues.map((v) => `'${v}'`).join(', ')})`);
+                return sql.raw(`enum(${enumValues.map((v) => `'${v.replace(/'/g, "''")}'`).join(', ')})`);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
return sql.raw(`enum(${enumValues.map((v) => `'${v}'`).join(', ')})`);
return sql.raw(`enum(${enumValues.map((v) => `'${v.replace(/'/g, "''")}'`).join(', ')})`);
🤖 Prompt for AI Agents
In `@packages/orm/src/client/helpers/schema-db-pusher.ts` at line 298, The current
sql.raw call interpolates enumValues directly (return
sql.raw(`enum(${enumValues.map((v) => `'${v}'`).join(', ')})`)), which allows
quotes/injection; fix by using parameterized bindings or proper escaping: build
a placeholder list like enum(${enumValues.map(() => '?').join(',')}) and call
sql.raw with that string and the enumValues as bindings (or use your SQL
builder's literal/escape helper) so values are passed as parameters instead of
directly injected into the raw SQL; update the sql.raw invocation that
references enumValues to use these bindings.

Comment on lines +289 to +295
} else if (provider === 'mysql') {
invariant(dbName, 'dbName is required');
const mysqlPool = createMysqlPool(TEST_MYSQL_CONFIG);
await mysqlPool.promise().query(`DROP DATABASE IF EXISTS \`${dbName}\``);
await mysqlPool.promise().query(`CREATE DATABASE \`${dbName}\``);
await mysqlPool.end();
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

mysql2 node pool end() method returns promise documentation

💡 Result:

In mysql2, whether pool.end() returns a Promise depends on which API you’re using:

  • Callback-style pool (require('mysql2')): pool.end([callback]) uses an optional callback (no Promise by default). (github.com)
  • Promise wrapper (require('mysql2/promise') or mysql.createPool(...).promise()): pool.end() is promise-based, so you can await pool.end(). The PromisePool typing documents it as end(): Promise<void>. (sidorares.github.io)

Citations:


🏁 Script executed:

rg -n "createMysqlPool" packages/testtools/src/client.ts -B 2 -A 10

Repository: zenstackhq/zenstack-v3

Length of output: 1709


Use pool.promise().end() for proper Promise-based cleanup.

The createMysqlPool function imports the callback-style pool from mysql2 (line 10), not the promise wrapper. While the code awaits mysqlPool.end(), the callback-style pool's end() method does not return a Promise, so the await has no effect. The connection may not be properly closed before the function returns. Use mysqlPool.promise().end() to wrap the call with the Promise API, matching the pattern already used for queries.

🐛 Proposed fix
     } else if (provider === 'mysql') {
         invariant(dbName, 'dbName is required');
         const mysqlPool = createMysqlPool(TEST_MYSQL_CONFIG);
         await mysqlPool.promise().query(`DROP DATABASE IF EXISTS \`${dbName}\``);
         await mysqlPool.promise().query(`CREATE DATABASE \`${dbName}\``);
-        await mysqlPool.end();
+        await mysqlPool.promise().end();
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
} else if (provider === 'mysql') {
invariant(dbName, 'dbName is required');
const mysqlPool = createMysqlPool(TEST_MYSQL_CONFIG);
await mysqlPool.promise().query(`DROP DATABASE IF EXISTS \`${dbName}\``);
await mysqlPool.promise().query(`CREATE DATABASE \`${dbName}\``);
await mysqlPool.end();
}
} else if (provider === 'mysql') {
invariant(dbName, 'dbName is required');
const mysqlPool = createMysqlPool(TEST_MYSQL_CONFIG);
await mysqlPool.promise().query(`DROP DATABASE IF EXISTS \`${dbName}\``);
await mysqlPool.promise().query(`CREATE DATABASE \`${dbName}\``);
await mysqlPool.promise().end();
}
🤖 Prompt for AI Agents
In `@packages/testtools/src/client.ts` around lines 289 - 295, The cleanup for the
MySQL callback-style pool is using mysqlPool.end() which is not promise-aware;
update the teardown in the provider === 'mysql' branch to call the
promise-wrapped end via mysqlPool.promise().end() so the await actually waits
for the pool to close (keep the existing use of mysqlPool.promise().query(...)
for queries and change only the final end() call on the mysqlPool created by
createMysqlPool with TEST_MYSQL_CONFIG, ensuring dbName invariant handling
remains unchanged).

Comment on lines +20 to +22
function ref(col: string) {
return client.$schema.provider.type === ('mysql' as any) ? sql.raw(`\`${col}\``) : sql.raw(`"${col}"`);
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Find DataSourceProviderType definition
rg -n "DataSourceProviderType" --type ts -B 2 -A 2

Repository: zenstackhq/zenstack-v3

Length of output: 5488


🏁 Script executed:

# Check the provider type definition
rg -n "provider\s*[:{].*type" --type ts -B 2 -A 2

Repository: zenstackhq/zenstack-v3

Length of output: 3620


🏁 Script executed:

# Look at the test file to understand context
cat -n tests/e2e/orm/client-api/raw-query.test.ts | head -40

Repository: zenstackhq/zenstack-v3

Length of output: 1683


🏁 Script executed:

# Search for other usages of provider.type in tests
rg -n "\.provider\.type" --type ts

Repository: zenstackhq/zenstack-v3

Length of output: 8687


Remove the as any cast in the comparison

The cast ('mysql' as any) is unnecessary since client.$schema.provider.type is already typed as DataSourceProviderType, which includes the literal 'mysql'. TypeScript can safely infer the comparison without casting. Removing it makes the code type-safe—other parts of this same file already use the proper pattern (lines 46 and 78 with as DataSourceProviderType for the provider type itself).

♻️ Example adjustment
-    function ref(col: string) {
-        return client.$schema.provider.type === ('mysql' as any) ? sql.raw(`\`${col}\``) : sql.raw(`"${col}"`);
-    }
+    function ref(col: string) {
+        return client.$schema.provider.type === 'mysql' ? sql.raw(`\`${col}\``) : sql.raw(`"${col}"`);
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
function ref(col: string) {
return client.$schema.provider.type === ('mysql' as any) ? sql.raw(`\`${col}\``) : sql.raw(`"${col}"`);
}
function ref(col: string) {
return client.$schema.provider.type === 'mysql' ? sql.raw(`\`${col}\``) : sql.raw(`"${col}"`);
}
🤖 Prompt for AI Agents
In `@tests/e2e/orm/client-api/raw-query.test.ts` around lines 20 - 22, The
comparison in function ref uses an unnecessary and unsafe cast ('mysql' as any);
remove the as any cast and compare directly to the string literal so TypeScript
uses the existing DataSourceProviderType on client.$schema.provider.type (i.e.,
update the conditional in ref to use client.$schema.provider.type === 'mysql'
and keep the sql.raw branches intact).

extDb.user.createManyAndReturn({ data: [{ name: 'David' }], ...cacheBustOption }),
).toResolveWithLength(1);

const isMySql = db.$schema.provider.type === ('mysql' as any);
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for provider type definitions and db.$schema structure
rg "provider.*type" --type=ts -B2 -A2 | head -100

Repository: zenstackhq/zenstack-v3

Length of output: 8154


🏁 Script executed:

#!/bin/bash
# Search for type definitions related to Provider
rg "type.*Provider|interface.*Provider" --type=ts -B1 -A5 | head -150

Repository: zenstackhq/zenstack-v3

Length of output: 14280


🏁 Script executed:

#!/bin/bash
# Search for the db.$schema definition and structure
rg "\$schema" --type=ts -B2 -A2 | grep -E "(provider|type)" | head -50

Repository: zenstackhq/zenstack-v3

Length of output: 48


🏁 Script executed:

#!/bin/bash
# Look for where provider types are defined (mysql, postgres, etc.)
rg "provider.*=.*['\"]mysql['\"]|['\"]mysql['\"].*provider" --type=ts -B2 -A2

Repository: zenstackhq/zenstack-v3

Length of output: 22027


🏁 Script executed:

#!/bin/bash
# Search for DataSourceProviderType definition
rg "type DataSourceProviderType|interface DataSourceProviderType" --type=ts -B2 -A10

Repository: zenstackhq/zenstack-v3

Length of output: 813


🏁 Script executed:

#!/bin/bash
# Look for provider type definitions in schema-related files
fd "schema" --type=f --path="*packages*" | head -20

Repository: zenstackhq/zenstack-v3

Length of output: 299


🏁 Script executed:

#!/bin/bash
# Search for the type definition of provider.type more carefully
rg "provider.*:.*type" --type=ts -B5 -A3 | head -80

Repository: zenstackhq/zenstack-v3

Length of output: 6429


🏁 Script executed:

#!/bin/bash
# Find where provider type is defined by searching in packages
rg "type:.*['\"]mysql['\"]|type:.*string.*mysql" --type=ts | head -30

Repository: zenstackhq/zenstack-v3

Length of output: 48


Remove the unnecessary as any type assertion for consistency.

The 'mysql' string is already a valid member of the DataSourceProviderType union (defined in packages/schema/src/schema.ts), so the as any cast is unnecessary. Other test files like update-many-and-return.test.ts and production code in client-impl.ts use plain string comparisons without this assertion. Remove it to align with the rest of the codebase.

🤖 Prompt for AI Agents
In `@tests/e2e/orm/plugin-infra/ext-query-args.test.ts` at line 126, Remove the
unnecessary type assertion in the isMySql check: change the comparison that
currently uses ('mysql' as any) to a plain 'mysql' string literal so the
expression reads like const isMySql = db.$schema.provider.type === 'mysql';
update the code around the isMySql variable (referencing
db.$schema.provider.type and isMySql) to match other tests and production usage
without the cast.

@claude
Copy link

claude bot commented Jan 23, 2026

Claude finished @ymc9's task —— View job


PR Review: MySQL Support Implementation

Reviewing this substantial PR that adds MySQL support to ZenStack ORM. This is a significant feature addition with extensive changes across the codebase.

Todo List

  • Review core MySQL dialect implementation
  • Analyze query generation and SQL safety
  • Check error handling and edge cases
  • Evaluate test coverage and CI integration
  • Review performance considerations
  • Assess security implications
  • Check integration with existing ORM features

🔍 Comprehensive Review Results

Security & SQL Safety - Excellent

  • Parameterized Queries: All SQL generation uses Kysely's safe sql.val() and sql.lit() methods
  • No SQL Injection Risks: Proper escaping in complex operations like JSON_CONTAINS(${lhs}, ${sql.val(JSON.stringify(rhs))})
  • Type Safety: Leverages TypeScript and Kysely's type system throughout

Code Quality - High Standard

  • Architecture: Clean extension of BaseCrudDialect with proper MySQL-specific overrides
  • Consistency: Follows established patterns from PostgreSQL and SQLite dialects
  • Error Handling: Database-specific error code mapping (e.g., ER_DUP_ENTRY for MySQL unique violations)

Feature Completeness - Comprehensive

Core MySQL Features Implemented:

  • JSON Operations: Full support with JSON_OBJECT, JSON_EXTRACT, JSON_CONTAINS, JSON_ARRAYAGG
  • Data Type Mapping: Complete type system (tinyint(1) for boolean, datetime(3), blob, json)
  • Capability Flags: Correctly identifies MySQL limitations (no DISTINCT ON, limited RETURNING support)
  • Query Features: UPDATE/DELETE with LIMIT, proper offset handling with required LIMIT

Advanced Features:

  • LATERAL Joins: Proper MySQL 8.0.14+ support for complex relation queries
  • Enum Handling: Inline ENUM definitions with proper value escaping
  • Array Emulation: JSON-based array storage with proper transformation
  • Null Handling: Distinguishes between DB NULL, JSON null, and application null types

Test Coverage - Thorough

  • CI Integration: MySQL 8.4 added to test matrix alongside PostgreSQL and SQLite
  • Cross-Database Tests: All 78+ test files updated to handle MySQL-specific behaviors
  • Feature Gating: Proper handling of unsupported features (createManyAndReturn, RETURNING clauses)
  • Provider Detection: Consistent db.$schema.provider.type === 'mysql' checks

⚠️ Minor Considerations

Performance Notes:

  • Large JSON operations may need monitoring in production
  • Array emulation via JSON has different performance characteristics than native arrays
  • Complex relation queries use LATERAL joins which require MySQL 8.0.14+

Documentation Gaps:

  • MySQL version requirements (8.0.14+ for LATERAL joins) could be better documented
  • JSON array performance implications not explicitly called out

Edge Cases Handled Well:

  • Timezone handling in DateTime transformations
  • Key-order insensitive JSON equality comparisons
  • Proper NULLS FIRST/LAST emulation for ORDER BY

🎯 Recommendation: APPROVE

This is a high-quality implementation that:

  1. Maintains Security: No SQL injection vulnerabilities, proper parameterization
  2. Follows Patterns: Consistent with existing dialect architecture
  3. Comprehensive Testing: Extensive cross-database test coverage
  4. Production Ready: Handles edge cases and MySQL-specific limitations appropriately

The implementation demonstrates deep understanding of MySQL's capabilities and limitations, with thoughtful workarounds for missing features like native array types and limited RETURNING support.

Files reviewed: 78 changed files including core dialect, query executor, test infrastructure, and comprehensive test suite updates.


Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/orm/src/client/crud/operations/delete.ts`:
- Around line 35-38: The check for affected rows uses a falsy test on
deleteResult.numAffectedRows which treats 0n and undefined the same; change the
logic in the block that calls this.delete (around deleteResult handling) to use
nullish coalescing: compute an effectiveAffected = deleteResult.numAffectedRows
?? deleteResult.rows?.length and throw createNotFoundError(this.model) only if
effectiveAffected is 0 (or falsy after explicit numeric/coercion handling),
ensuring you fall back to deleteResult.rows.length when numAffectedRows is
undefined.
♻️ Duplicate comments (1)
packages/testtools/src/client.ts (1)

281-296: await mysqlPool.end() doesn’t await on callback pools.

Line 294 uses the callback-style pool from mysql2, where end() is callback-based; the await won’t actually wait for teardown. Use mysqlPool.promise().end() for reliable cleanup.

🐛 Proposed fix
-        await mysqlPool.end();
+        await mysqlPool.promise().end();
In mysql2's callback pool API (createPool from 'mysql2'), is end() callback-only and should promise().end() be used to await cleanup?

Comment on lines 35 to 38
const deleteResult = await this.delete(tx, this.model, args.where, undefined, undefined, selectedFields);
if (deleteResult.rows.length === 0) {
if (!deleteResult.numAffectedRows) {
throw createNotFoundError(this.model);
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Kysely DeleteResult numAffectedRows optional undefined dialects

💡 Result:

In Kysely, numAffectedRows is not part of DeleteResult.

  • DeleteResult always has numDeletedRows: bigint (not optional). (kysely-org.github.io)

    const res = await db.deleteFrom('person').where('id', '=', 1).executeTakeFirstOrThrow()
    res.numDeletedRows // bigint

    (This is also what the docs show for deleteFrom(...).executeTakeFirst().) (kysely-org.github.io)

  • numAffectedRows?: bigint is on QueryResult, which is used for raw/driver-level results, and it’s optional because some drivers/dialects don’t (or can’t) report it consistently, so it may be undefined. (kysely-org.github.io)

If you’re seeing numAffectedRows while doing a DELETE, you’re probably looking at a raw query result (or a dialect/driver adapter that returns a QueryResult) instead of Kysely’s DeleteResult. Also note Kysely removed the older QueryResult.numUpdatedOrDeletedRows and expects dialects to use numAffectedRows; outdated dialects may need updating/forking. (newreleases.io)

Citations:


🏁 Script executed:

# Find the actual delete.ts file and examine the context
find . -path "*/packages/orm/src/client/crud/operations/delete.ts" -type f

Repository: zenstackhq/zenstack-v3

Length of output: 117


🏁 Script executed:

# Search for Kysely imports and DeleteResult usage in the repository
rg "DeleteResult|numAffectedRows" --type ts --max-count 20

Repository: zenstackhq/zenstack-v3

Length of output: 2354


🏁 Script executed:

# Examine the specific file around the lines mentioned
git ls-files | grep -E "delete\.ts|DeleteResult"

Repository: zenstackhq/zenstack-v3

Length of output: 115


🏁 Script executed:

# Examine the delete.ts file to understand the context and deleteResult type
head -50 packages/orm/src/client/crud/operations/delete.ts | cat -n

Repository: zenstackhq/zenstack-v3

Length of output: 2694


🏁 Script executed:

# Look for the import statements and type definitions in delete.ts
sed -n '1,80p' packages/orm/src/client/crud/operations/delete.ts | cat -n

Repository: zenstackhq/zenstack-v3

Length of output: 3136


🏁 Script executed:

# Check the zenstack-query-executor to understand how results are transformed
rg -A 5 -B 5 "numAffectedRows: result.numAffectedRows" packages/orm/src/client/executor/zenstack-query-executor.ts

Repository: zenstackhq/zenstack-v3

Length of output: 445


🏁 Script executed:

# Check the base.ts delete method to understand the return type
rg -A 30 "async delete" packages/orm/src/client/crud/operations/base.ts | head -60

Repository: zenstackhq/zenstack-v3

Length of output: 2160


🏁 Script executed:

# Check if the executor wraps results before returning to delete operation handler
rg -B 10 "numAffectedRows: result.numAffectedRows" packages/orm/src/client/executor/zenstack-query-executor.ts | head -30

Repository: zenstackhq/zenstack-v3

Length of output: 517


🏁 Script executed:

# Check how the executor wraps query execution and whether delete operations go through it
rg -B 20 "ensureProperQueryResult" packages/orm/src/client/executor/zenstack-query-executor.ts | head -40

Repository: zenstackhq/zenstack-v3

Length of output: 1336


🏁 Script executed:

# Check the execute method and how it's called in relation to delete operations
rg -A 10 "execute.*mutation\|mutation.*execute" packages/orm/src/client/executor/zenstack-query-executor.ts | head -30

Repository: zenstackhq/zenstack-v3

Length of output: 48


🏁 Script executed:

# Verify the flow: check if BaseOperationHandler.delete calls through the executor
rg -B 5 -A 15 "this.execute\|this.kysely" packages/orm/src/client/crud/operations/base.ts | head -50

Repository: zenstackhq/zenstack-v3

Length of output: 48


Use nullish coalescing instead of falsy check on optional numAffectedRows.

Line 36 uses if (!deleteResult.numAffectedRows) which conflates undefined with 0n. Since QueryResult.numAffectedRows is optional, some dialects may not populate it even when rows are affected. Fallback to rows.length when undefined, matching the pattern already used at line 57 and in the executor's result transformation.

🔧 Proposed fix
-            if (!deleteResult.numAffectedRows) {
+            const affected =
+                deleteResult.numAffectedRows !== undefined
+                    ? deleteResult.numAffectedRows
+                    : BigInt(deleteResult.rows.length);
+            if (affected === 0n) {
                 throw createNotFoundError(this.model);
             }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const deleteResult = await this.delete(tx, this.model, args.where, undefined, undefined, selectedFields);
if (deleteResult.rows.length === 0) {
if (!deleteResult.numAffectedRows) {
throw createNotFoundError(this.model);
}
const deleteResult = await this.delete(tx, this.model, args.where, undefined, undefined, selectedFields);
const affected =
deleteResult.numAffectedRows !== undefined
? deleteResult.numAffectedRows
: BigInt(deleteResult.rows.length);
if (affected === 0n) {
throw createNotFoundError(this.model);
}
🤖 Prompt for AI Agents
In `@packages/orm/src/client/crud/operations/delete.ts` around lines 35 - 38, The
check for affected rows uses a falsy test on deleteResult.numAffectedRows which
treats 0n and undefined the same; change the logic in the block that calls
this.delete (around deleteResult handling) to use nullish coalescing: compute an
effectiveAffected = deleteResult.numAffectedRows ?? deleteResult.rows?.length
and throw createNotFoundError(this.model) only if effectiveAffected is 0 (or
falsy after explicit numeric/coercion handling), ensuring you fall back to
deleteResult.rows.length when numAffectedRows is undefined.

@claude
Copy link

claude bot commented Jan 23, 2026

Claude finished @ymc9's task —— View job


PR Review: MySQL Support Implementation

Reviewing this substantial PR that adds MySQL support to ZenStack ORM. This is a significant feature addition with extensive changes across the codebase.

Todo List

  • Review core MySQL dialect implementation
  • Analyze query generation and SQL safety
  • Check error handling and edge cases
  • Evaluate test coverage and CI integration
  • Review performance considerations
  • Assess security implications
  • Check integration with existing ORM features

🔍 Comprehensive Review Results

Security & SQL Safety - Excellent

  • Parameterized Queries: All SQL generation uses Kysely's safe sql.val() and sql.lit() methods
  • No SQL Injection Risks: Proper escaping in complex operations like JSON_CONTAINS(${lhs}, ${sql.val(JSON.stringify(rhs))})
  • Type Safety: Leverages TypeScript and Kysely's type system throughout

Code Quality - High Standard

  • Architecture: Clean extension of BaseCrudDialect with proper MySQL-specific overrides
  • Consistency: Follows established patterns from PostgreSQL and SQLite dialects
  • Error Handling: Database-specific error code mapping (e.g., ER_DUP_ENTRY for MySQL unique violations)

Feature Completeness - Comprehensive

Core MySQL Features Implemented:

  • JSON Operations: Full support with JSON_OBJECT, JSON_EXTRACT, JSON_CONTAINS, JSON_ARRAYAGG
  • Data Type Mapping: Complete type system (tinyint(1) for boolean, datetime(3), blob, json)
  • Capability Flags: Correctly identifies MySQL limitations (no DISTINCT ON, limited RETURNING support)
  • Query Features: UPDATE/DELETE with LIMIT, proper offset handling with required LIMIT

Advanced Features:

  • LATERAL Joins: Proper MySQL 8.0.14+ support for complex relation queries
  • Enum Handling: Inline ENUM definitions with proper value escaping
  • Array Emulation: JSON-based array storage with proper transformation
  • Null Handling: Distinguishes between DB NULL, JSON null, and application null types

Test Coverage - Thorough

  • CI Integration: MySQL 8.4 added to test matrix alongside PostgreSQL and SQLite
  • Cross-Database Tests: All 78+ test files updated to handle MySQL-specific behaviors
  • Feature Gating: Proper handling of unsupported features (createManyAndReturn, RETURNING clauses)
  • Provider Detection: Consistent db.$schema.provider.type === 'mysql' checks

⚠️ Minor Considerations

Performance Notes:

  • Large JSON operations may need monitoring in production
  • Array emulation via JSON has different performance characteristics than native arrays
  • Complex relation queries use LATERAL joins which require MySQL 8.0.14+

Documentation Gaps:

  • MySQL version requirements (8.0.14+ for LATERAL joins) could be better documented
  • JSON array performance implications not explicitly called out

Edge Cases Handled Well:

  • Timezone handling in DateTime transformations
  • Key-order insensitive JSON equality comparisons
  • Proper NULLS FIRST/LAST emulation for ORDER BY

🎯 Recommendation: APPROVE

This is a high-quality implementation that:

  1. Maintains Security: No SQL injection vulnerabilities, proper parameterization
  2. Follows Patterns: Consistent with existing dialect architecture
  3. Comprehensive Testing: Extensive cross-database test coverage
  4. Production Ready: Handles edge cases and MySQL-specific limitations appropriately

The implementation demonstrates deep understanding of MySQL's capabilities and limitations, with thoughtful workarounds for missing features like native array types and limited RETURNING support.

Files reviewed: 78 changed files including core dialect, query executor, test infrastructure, and comprehensive test suite updates.


Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants