[FLINK-24544][formats] Fix Avro enum deserialization failure with Confluent Schema Registry #27591
Open

nateab wants to merge 1 commit into apache:master
Conversation
Force-pushed from 08744e2 to cbae031
Collaborator
[FLINK-24544][formats] Fix Avro enum deserialization failure with Confluent Schema Registry

When using the Table API with Kafka + Avro + Confluent Schema Registry, deserialization fails for records containing enum types with the error "Found MyEnum, expecting union". The root cause is that the RegistryAvroFormatFactory derives a reader schema from the Table DDL via AvroSchemaConverter, which is lossy: Avro enums become Flink STRING, which converts back to Avro string. Avro's schema resolution then fails because it cannot match an enum writer type against a string reader type in a union.

The fix stops using the DDL-derived schema as the Avro reader schema when no explicit schema is provided via the avro-confluent.schema option. Instead, the writer schema from the registry is used directly for deserialization. The AvroToRowDataConverter already handles enum-to-string conversion via .toString() at the Flink level, so Avro-level schema resolution is not needed for type coercion. When the user provides an explicit schema via avro-confluent.schema, it continues to be used as the reader schema (schema evolution works because user-provided schemas preserve enum types).
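The failure this commit addresses is reproducible at the plain Avro level, independent of Flink. A minimal, self-contained sketch (record name, field name, and enum symbols are illustrative):

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class EnumResolutionRepro {
    public static void main(String[] args) throws Exception {
        // Writer schema as registered by the producer: a nullable enum field.
        Schema writer = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
                        + "{\"name\":\"status\",\"type\":[\"null\","
                        + "{\"type\":\"enum\",\"name\":\"Status\","
                        + "\"symbols\":[\"NEW\",\"SHIPPED\"]}]}]}");
        // Reader schema as derived from the DDL: the enum collapsed to string.
        Schema reader = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
                        + "{\"name\":\"status\",\"type\":[\"null\",\"string\"]}]}");

        Schema enumSchema = writer.getField("status").schema().getTypes().get(1);
        GenericRecord record = new GenericData.Record(writer);
        record.put("status", new GenericData.EnumSymbol(enumSchema, "NEW"));

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(writer).write(record, encoder);
        encoder.flush();

        BinaryDecoder decoder =
                DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        // Union resolution cannot match the writer's enum branch against the
        // reader's string branch; this throws
        // org.apache.avro.AvroTypeException: Found Status, expecting union
        new GenericDatumReader<GenericRecord>(writer, reader).read(null, decoder);
    }
}
```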
Force-pushed from cbae031 to ccdbdec
davidradl reviewed Feb 12, 2026
```java
        int[][] projections) {
    producedDataType = Projection.of(projections).project(producedDataType);
    final RowType rowType = (RowType) producedDataType.getLogicalType();
    // When no explicit schema is provided, pass null so that the
```
Contributor
Would it be possible to put some of the tables mentioned in the Jira as test cases, to make sure they work and to make it explicit under what circumstances this fix is required? For example, does this affect sink cases as well as join cases?
What is the purpose of the change
This pull request fixes a deserialization failure when using the Table API with Kafka + Avro + Confluent Schema Registry for records containing enum types. Deserialization fails with the error "Found MyEnumType, expecting union".
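As an illustration of the affected setup, a Table API program along these lines hits the failure whenever the topic's registered writer schema declares a field as an Avro enum (table name, topic, and endpoints below are hypothetical):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class EnumFailureExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());
        // The registered writer schema has `status` as an Avro enum; the DDL
        // can only declare it as STRING (Flink has no enum logical type).
        tEnv.executeSql(
                "CREATE TABLE orders (\n"
                        + "  id BIGINT,\n"
                        + "  status STRING\n"
                        + ") WITH (\n"
                        + "  'connector' = 'kafka',\n"
                        + "  'topic' = 'orders',\n"
                        + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'format' = 'avro-confluent',\n"
                        + "  'avro-confluent.url' = 'http://localhost:8081'\n"
                        + ")");
        // Before this fix, reading `status` failed during deserialization
        // with "Found Status, expecting union".
        tEnv.executeSql("SELECT * FROM orders").print();
    }
}
```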
The root cause is that `RegistryAvroFormatFactory` derives a reader schema from the Table DDL via `AvroSchemaConverter.convertToSchema(rowType)`. This conversion is lossy: Avro enums become Flink STRING, which converts back to Avro `["null", "string"]` instead of `["null", {"type": "enum", ...}]`. When Avro's `GenericDatumReader` performs schema resolution between the writer schema (from the registry, with enum) and the reader schema (from DDL, with string), it fails because union resolution cannot match an enum against a string.

The fix stops using the DDL-derived schema as the Avro reader schema when no explicit schema is provided via the `avro-confluent.schema` format option. Instead, the writer schema from the registry is used directly for deserialization. The `AvroToRowDataConverters` already handle enum-to-string conversion via `.toString()` at the Flink level, so Avro-level schema resolution is not needed for type coercion. When the user provides an explicit schema via `avro-confluent.schema`, it continues to be used as the reader schema.
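Concretely, the change amounts to choosing which schema the datum reader "expects". A simplified sketch of the deserialization path (helper names such as `schemaCoder`, `getReaderSchema()`, `getDatumReader()`, and `getDecoder()` abbreviate the actual class internals):

```java
// Simplified sketch of RegistryAvroDeserializationSchema#deserialize.
T deserialize(byte[] message) throws IOException {
    MutableByteArrayInputStream in = getInputStream();
    in.setBuffer(message);
    // Reads the Confluent framing (magic byte + schema ID) and fetches the
    // writer schema from the Schema Registry.
    Schema writerSchema = schemaCoder.readSchema(in);
    Schema readerSchema = getReaderSchema(); // null when avro-confluent.schema is unset

    GenericDatumReader<T> datumReader = getDatumReader();
    datumReader.setSchema(writerSchema);
    // The fix: without a user-supplied reader schema, expect the writer schema
    // itself, so no enum-vs-string union resolution is attempted; enum values
    // are turned into strings later by AvroToRowDataConverters.
    datumReader.setExpected(readerSchema != null ? readerSchema : writerSchema);
    return datumReader.read(null, getDecoder());
}
```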
Brief change log

- `RegistryAvroFormatFactory`: changed the deserialization path to pass `null` instead of the DDL-derived schema when no `avro-confluent.schema` option is set (serialization path unchanged)
- `AvroDeserializationSchema`: `checkAvroInitialized()` now handles a null `schemaString` gracefully; added `Preconditions.checkNotNull` guards in `deserialize()` and `getProducedType()` to fail fast with clear messages if a null reader schema leaks into code paths that require it
- `RegistryAvroDeserializationSchema`: falls back to the writer schema when the reader schema is null via `datumReader.setExpected(readerSchema != null ? readerSchema : writerSchema)`
- Added `testRowDataReadWithEnumFieldAndNullReaderSchema` in `RegistryAvroRowDataSeDeSchemaTest`
- Updated `RegistryAvroFormatFactoryTest.testDeserializationSchema` to match the new null-schema behavior
Verifying this change

This change added tests and can be verified as follows:
- Added `testRowDataReadWithEnumFieldAndNullReaderSchema`, which creates an Avro schema with a nullable enum field, serializes a `GenericRecord` using the Confluent wire format (magic byte + schema ID + Avro binary, as sketched below), then deserializes with a null reader schema and verifies the enum value is correctly read as a string
- Updated `RegistryAvroFormatFactoryTest.testDeserializationSchema` to expect a null schema in the deserialization path
- Ran the full test suites for `flink-avro` (337 tests, 0 failures) and `flink-avro-confluent-registry` (28 tests, 0 failures)
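For reference, a minimal sketch of producing Confluent-wire-format bytes by hand, as a test can do against a mock registry (the helper class and the `schemaId` parameter are illustrative; the actual test uses the ID assigned by its registry client):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public final class ConfluentWireFormat {
    /** Serializes a record as: magic byte (0x0) + 4-byte schema ID + Avro binary. */
    public static byte[] serialize(GenericRecord record, Schema writerSchema, int schemaId)
            throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeByte(0);          // Confluent magic byte
        out.writeInt(schemaId);    // schema ID as a big-endian int
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(writerSchema).write(record, encoder);
        encoder.flush();
        return bytes.toByteArray();
    }
}
```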
Does this pull request potentially affect one of the following parts:

- `@Public(Evolving)`: no

Documentation