_mi_get_block_info() case 0 (BLOCK_DELETED) reads block_len from a 3-byte on-disk field (max ~16 MB) and validates minimum size and alignment, but lacks an upper-bound check. A corrupt or crafted .MYD file can supply an excessively large block_len that propagates to _mi_find_writepos(), causing an accounting underflow on info->state->empty and data corruption.

Add a check against MI_MAX_BLOCK_LENGTH, consistent with the upper bound enforced in the new-block allocation path of _mi_find_writepos() and with the block length limit used throughout the MyISAM dynamic record code.

Co-Authored-By: Claude AI <noreply@anthropic.com>
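The shape of the validation described above can be sketched as follows. This is a minimal illustration, not the actual patch: the constant values are assumed stand-ins (the real MI_MIN_BLOCK_LENGTH, MI_MAX_BLOCK_LENGTH, and alignment live in the MyISAM headers), and the function name is hypothetical.

```cpp
#include <cassert>
#include <cstdint>

// Assumed stand-ins for the MyISAM constants named in the commit message.
static const uint32_t MI_MIN_BLOCK_LENGTH = 20;         // assumed minimum
static const uint32_t MI_DYN_ALIGN_SIZE   = 4;          // assumed alignment
static const uint32_t MI_MAX_BLOCK_LENGTH = 16777212;   // assumed upper bound

// Returns true when a block length read from untrusted disk data passes
// all three checks: minimum size, alignment, and the new upper bound.
static bool block_len_ok(uint32_t block_len)
{
  if (block_len < MI_MIN_BLOCK_LENGTH)  return false;  // too small
  if (block_len % MI_DYN_ALIGN_SIZE)    return false;  // misaligned
  if (block_len > MI_MAX_BLOCK_LENGTH)  return false;  // the added upper bound
  return true;
}
```

The point of the third check is that the 3-byte field can encode values up to ~16 MB, so "it fits in the field" alone is not a sufficient bound; the value must also be checked against the limit the writer side enforces.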
When dlen (the blob length) is less than 4 bytes, the expression dlen-4 causes an unsigned integer underflow (CWE-190), since dlen is a uint. The result is a ~4 GB size being passed to sp_mbr_from_wkb(), which can cause a crash or information disclosure when processing a corrupted .MYD data file with a spatial index.

Add a bounds check before subtracting the SRID size in both the MyISAM (sp_make_key) and Aria (_ma_sp_make_key) implementations, returning HA_ERR_WRONG_IN_RECORD to match the existing pattern used throughout both modules for corrupt record data.

Co-Authored-By: Claude AI <noreply@anthropic.com>
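The underflow and its guard can be sketched in isolation. This is a hedged illustration, not the patch itself: the function name and the -1 error value (standing in for HA_ERR_WRONG_IN_RECORD) are hypothetical.

```cpp
#include <cassert>
#include <cstdint>

static const uint32_t SRID_SIZE = 4;  // WKB payload is prefixed by a 4-byte SRID

// With dlen unsigned, dlen - SRID_SIZE wraps around to ~4 GB when dlen < 4,
// so the subtraction must be bounds-checked first. Returns -1 (standing in
// for HA_ERR_WRONG_IN_RECORD) on corrupt input.
static int wkb_len_after_srid(uint32_t dlen, uint32_t *out_len)
{
  if (dlen < SRID_SIZE)
    return -1;                  // corrupt record: subtraction would underflow
  *out_len = dlen - SRID_SIZE;  // now provably non-negative
  return 0;
}
```

Checking before subtracting, rather than checking the result, is the only safe order: once the unsigned subtraction has wrapped, the huge value is indistinguishable from a legitimately large length.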
Add bounds checks on field name, hostname, and table name lengths read from .cfg files via mach_read_from_4() before allocating memory with UT_NEW_ARRAY_NOKEY(). A crafted .cfg file could specify lengths up to 4 GB, causing DoS via memory exhaustion. Also add a len == 0 check to the existing index name length validation.

The 128-byte limit for field/column names and the FIXME comment are copied from the existing column name check in the same file (row_import_read_columns). OS_FILE_MAX_PATH is used for hostname and table name to match existing InnoDB conventions.

Co-Authored-By: Claude AI <noreply@anthropic.com>
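The validation pattern, reduced to its essence, looks like the sketch below. The constant values are assumptions for illustration (the real caps come from InnoDB headers), and cfg_len_ok is a hypothetical helper name.

```cpp
#include <cassert>
#include <cstdint>

// Assumed stand-ins for the InnoDB limits mentioned above.
static const uint32_t NAME_LEN_MAX     = 128;   // field/column name cap
static const uint32_t OS_FILE_MAX_PATH = 4000;  // path-length cap (value assumed)

// A length read from an untrusted .cfg file must be non-zero and capped
// *before* it is used as an allocation size; otherwise a 4-byte field can
// request a ~4 GB allocation.
static bool cfg_len_ok(uint32_t len, uint32_t max_len)
{
  return len != 0 && len <= max_len;
}
```

The len == 0 half of the check matters too: a zero length read from the file would otherwise flow into allocation and string handling code that assumes at least one byte.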
recv_spaces.find() is called at eight sites in log0recv.cc. Four of them check the result against recv_spaces.end() at runtime before dereferencing (lines 745, 765, 825, 2577). The other four rely solely on ut_ad assertions (lines 943, 1076, 3896, 4538), which are compiled out in release builds. This inconsistency means half the call sites are protected against an unexpected end-iterator and half are not. In a release build, if the invariant ever breaks at one of the unprotected sites, the end-iterator is dereferenced silently, causing undefined behavior, potentially a crash during crash recovery.

Add runtime guards after the existing ut_ad assertions at all four unprotected sites, so that every recv_spaces.find() call site now has both:
- the ut_ad assertion, which fires in debug builds to flag the unexpected condition during development, and
- a runtime check that safely skips or errors out in release builds.

Each guard uses the existing error/skip path appropriate to its context: goto next_item, goto fail, goto nothing_recoverable, or goto next.

Co-Authored-By: Claude AI <noreply@anthropic.com>
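The debug-assert-plus-runtime-guard pairing described above can be sketched generically. This is not the log0recv.cc code; the map type, lookup function, and boolean return are hypothetical stand-ins (the real sites use goto labels rather than a return value).

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Debug-only assertion stand-in for InnoDB's ut_ad (a no-op in release
// builds, where NDEBUG is defined).
#ifdef NDEBUG
# define ut_ad(x) ((void)0)
#else
# define ut_ad(x) assert(x)
#endif

using space_map = std::map<uint32_t, std::string>;

// Pattern from the commit message: keep the debug assertion, but also add
// a runtime check so release builds take an error path instead of
// dereferencing end().
static bool lookup_space(const space_map &recv_spaces, uint32_t id,
                         std::string *out)
{
  auto it = recv_spaces.find(id);
  ut_ad(it != recv_spaces.end());  // debug builds: flag the broken invariant
  if (it == recv_spaces.end())     // release builds: bail out safely
    return false;                  // caller takes its goto next/fail path
  *out = it->second;
  return true;
}
```

The two checks are complementary rather than redundant: the assertion makes the bug loud during development, while the runtime branch keeps a release-build recovery run from turning an unexpected state into undefined behavior.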
In MakeCommand(), qrystr is allocated with strlen(Qrystr)+5 bytes but later receives Query->GetStr() via strcpy at line 713. Query may be longer than qrystr when TableName is longer than Name, since Name is replaced by TableName in the query string. This causes a heap buffer overflow.

Fix by:
1. Allocating qrystr with strlen(Qrystr)+64, matching the Query STRING object allocation on line 679. Add a comment linking the two.
2. Replacing the strcpy with safe_strcpy to bound the copy to the allocated size.

This provides defense in depth: the STRING class can dynamically grow beyond its initial allocation via Realloc(), so matching the initial capacity alone is not a complete guarantee. The bounded copy ensures safety regardless of STRING's internal growth.

Found by static analysis (CWE-122).

Co-Authored-By: Claude AI <noreply@anthropic.com>
strcpy(result, g->Message) copies a 4160-byte g->Message buffer into the UDF result buffer, which is only guaranteed to be 255 bytes by the MySQL UDF API. This can cause a heap buffer overflow when error messages exceed 255 characters.

Replace all 37 instances (18 in bsonudf.cpp, 19 in jsonudf.cpp) of strcpy(result, g->Message) with safe_strcpy(result, 255, g->Message), which truncates the message to fit the destination buffer.

The magic number 255 is the minimum UDF result buffer size guaranteed by the MySQL UDF API, documented in sql/udf_example.c:260 as "At least 255 byte long." The server constant MAX_FIELD_CHARLENGTH (sql_const.h) defines this same value, but sql_const.h is not included by these files and adding that include would pull in server internals. Using the literal matches the CONNECT plugin's existing style, which does not reference server constants for UDF buffer sizes.

Note: a few other strcpy calls into result (e.g. strcpy(result, msg) in catch blocks, strcpy(result, ofn) for file paths) are not addressed here. Those copy from differently-sized sources and warrant separate analysis.

Found by static analysis (CWE-122).

Co-Authored-By: Claude AI <noreply@anthropic.com>
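Both this commit and the MakeCommand() one above hinge on replacing an unbounded strcpy with a bounded, always-terminating copy. The sketch below shows the semantics such a function needs; the name my_safe_strcpy and its exact signature are assumptions for illustration, not a copy of MariaDB's safe_strcpy.

```cpp
#include <cassert>
#include <cstring>

// Minimal stand-in for a bounded string copy: writes at most dst_size-1
// bytes of src into dst and always NUL-terminates, truncating instead of
// overflowing. (Signature assumed; not taken from the MariaDB source.)
static char *my_safe_strcpy(char *dst, size_t dst_size, const char *src)
{
  size_t n = strlen(src);
  if (n >= dst_size)
    n = dst_size - 1;   // truncate rather than overflow the destination
  memcpy(dst, src, n);
  dst[n] = '\0';        // terminator is written even on truncation
  return dst;
}
```

The key property, and the reason truncation is acceptable here, is that a clipped error message is a cosmetic loss, while an overflowed 255-byte result buffer is heap corruption.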
Member

Because nobody knows how to handle Galera exceptions, std::terminate can be the right thing to do, rather than catching and swallowing all exceptions. Maybe you can communicate it to Claude.
janlindstrom (Contributor) requested changes on Mar 27, 2026
Galera-related exceptions should already be handled inside the library.
The InnoDB allocator (ut0new.h) has throw(std::bad_alloc) paths. Since C++11 destructors are implicitly noexcept, an uncaught std::bad_alloc calls std::terminate(), crashing the server. Wrap code paths that can reach the InnoDB allocator in try/catch blocks catching only std::bad_alloc within the three affected destructors:
- THD::~THD(): free_connection() -> ha_close_connection() can reach InnoDB allocator paths that throw std::bad_alloc.
- ~Delayed_insert(): close_thread_tables() -> mysql_unlock_tables() -> InnoDB handler chain can reach the InnoDB allocator.
- TR_table::~TR_table(): close_log_table() -> close_thread_tables() -> mysql_unlock_tables() -> ha_external_unlock() -> InnoDB external_lock(F_UNLCK) can trigger innobase_commit().

Co-Authored-By: Claude AI <noreply@anthropic.com>
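The destructor hazard and its containment can be shown in a few lines. Everything here is a hypothetical stand-in: the Conn type, the simulate_oom test hook, and the throw standing in for an allocator failure deep inside ha_close_connection().

```cpp
#include <cassert>
#include <new>

// Hypothetical resource whose cleanup can allocate and therefore throw
// std::bad_alloc, as the InnoDB allocator paths described above can.
struct Conn {
  bool *cleanup_ran;
  static bool simulate_oom;  // test hook, not part of the pattern

  explicit Conn(bool *flag) : cleanup_ran(flag) {}

  ~Conn()  // implicitly noexcept since C++11
  {
    try {
      if (simulate_oom)
        throw std::bad_alloc();  // stands in for the throwing cleanup call
      // ... normal cleanup ...
    } catch (const std::bad_alloc &) {
      // Swallow only OOM: letting it escape a noexcept destructor would
      // call std::terminate() and crash the server.
    }
    *cleanup_ran = true;  // remaining cleanup still runs after the catch
  }
};
bool Conn::simulate_oom = false;
```

Catching only std::bad_alloc, rather than a blanket catch(...), keeps the scope narrow: other exception types still surface loudly instead of being silently eaten.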
The InnoDB allocator (ut0new.h) has throw(std::bad_alloc) paths. If std::bad_alloc propagates through a noexcept function, std::terminate() is called, crashing the server. Affected functions and fixes:
1. dict_sys_t::create_or_check_sys_tables() in dict0crea.cc: calls que_eval_sql(), trx->rollback(), and trx->commit(), which can reach the InnoDB allocator. Extract the throwing body into create_or_check_sys_tables_impl() and wrap the call in try/catch(std::bad_alloc).
2. mdl_release() in dict0dict.cc: wrap release_lock() in try/catch(std::bad_alloc).
3. dict_stats::open() in dict0dict.cc: extract the body into open_impl() and wrap in try/catch(std::bad_alloc). Clean up partially acquired resources on exception.
4. dict_stats::close() in dict0dict.cc: wrap the release_lock() calls in try/catch(std::bad_alloc).

Co-Authored-By: Claude AI <noreply@anthropic.com>
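The extract-into-impl pattern used in items 1 and 3 can be sketched as follows. The names, the simplified two-value dberr_t, and the simulate_oom hook are all hypothetical; the real dberr_t and functions live in the InnoDB sources.

```cpp
#include <cassert>
#include <new>

enum dberr_t { DB_SUCCESS, DB_OUT_OF_MEMORY };  // simplified stand-ins

static bool simulate_oom = false;  // test hook, not part of the pattern

// The throwing body, extracted into an *_impl function (mirroring the
// create_or_check_sys_tables_impl() split described above).
static dberr_t create_tables_impl()
{
  if (simulate_oom)
    throw std::bad_alloc();  // stands in for que_eval_sql() and friends
  return DB_SUCCESS;
}

// The noexcept wrapper converts std::bad_alloc into an error code
// instead of letting it reach std::terminate().
static dberr_t create_tables() noexcept
{
  try {
    return create_tables_impl();
  } catch (const std::bad_alloc &) {
    return DB_OUT_OF_MEMORY;
  }
}
```

Splitting the body out keeps the noexcept boundary in exactly one place, so every call inside the impl can throw freely while callers of the wrapper see only an error code.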
Wrap potentially-throwing InnoDB allocator paths in try/catch(std::bad_alloc) blocks within noexcept functions to prevent std::terminate() crashes on memory allocation failure:
- trx_purge_close_tables(): wrap mdl_context.release_lock()
- trx_purge_table_acquire(): wrap mdl_context.try_acquire_lock()

Co-Authored-By: Claude AI <noreply@anthropic.com>
84f73d7 to b4a0cc2 (force-push)
Contributor (Author)

Right, there is no reasonable way to handle it in the server. I dropped the galera/wsrep parts and left only the unrelated exception handling.
Fixes for various static analysis discoveries of high severity or higher.