perf: speed up standardize_quotes with str.translate() #4314
Conversation
The optimized code achieves a **144% speedup** by replacing a loop-based character replacement approach with Python's built-in `str.translate()` method using a pre-computed translation table.

## Key Optimizations

**1. Pre-computed Translation Table at Module Load**
- The quote dictionaries and translation table are now created once at module import time (module-level constants prefixed with `_`)
- The original code recreated these 40+ entry dictionaries on every function call (6.1% + 6.5% = 12.6% of runtime spent on dictionary creation alone)
- The translation table maps Unicode codepoints directly to ASCII quote codepoints, eliminating repeated string operations

**2. Single-Pass O(n) Algorithm with `str.translate()`**
- Original: two loops iterating through ~40 quote types, calling `unicode_to_char()` 3,096 times (67.5% of total runtime) and performing substring searches with the `in` operator (5.9% of runtime)
- Optimized: a single `str.translate()` call that processes the entire string in one pass using the efficient C-level implementation
- Eliminates all 3,096 calls to `unicode_to_char()` and the associated string parsing/conversion overhead

**3. Algorithmic Complexity Improvement**
- Original: O(n × m), where n = text length and m = number of quote types (~40), with each `text.replace()` call creating a new string object
- Optimized: O(n) single pass through the text, with O(1) translation-table lookups

## Performance Context

Based on `function_references`, this function is called from `calculate_edit_distance()`, which is likely in a **hot path** for text extraction metrics. The function processes strings before edit distance calculations, meaning:

- Any text comparison workflow will call this repeatedly
- The 144% speedup compounds when processing multiple documents or performing batch comparisons
- Memory allocation pressure drops, since repeated dictionary creation and intermediate string objects are eliminated

## Test Case Insights

The test with input `"«'"` (containing both double and single quote variants) shows the optimization handles mixed quote types efficiently in a single pass, whereas the original code would iterate through all ~40 quote types regardless of which are actually present in the text.
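The pre-computed table and single-pass `translate()` call described above can be sketched as follows. The names and the entries shown are illustrative (the real module-level table has ~40 entries; only a subset is reproduced here):

```python
# Hypothetical sketch of the optimized implementation; the real
# module-level constants in the library may differ.

# Built once at import time: Unicode quote codepoints -> ASCII quotes.
# (Subset shown; the real table has ~40 entries.)
_QUOTE_TRANSLATION_TABLE = str.maketrans({
    "\u201c": '"',  # left double quotation mark
    "\u201d": '"',  # right double quotation mark
    "\u2018": "'",  # left single quotation mark
    "\u2019": "'",  # right single quotation mark
    "\u00ab": '"',  # left-pointing double angle quotation mark
    "\u00bb": '"',  # right-pointing double angle quotation mark
})


def standardize_quotes(text: str) -> str:
    """Replace Unicode quote variants with ASCII quotes in one O(n) pass."""
    return text.translate(_QUOTE_TRANSLATION_TABLE)
```

Because `str.translate()` runs in C and the table is built once, each call is a single pass over the input with constant-time lookups per character.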
…te dict keys

The quote-mapping dicts used literal quote characters as keys, but the left, right, and straight double quotes all ended up encoded as byte 0x22 (`"`), and the single-quote variants as 0x27 (`'`). Python deduplicates identical keys, silently dropping U+201C (left double) and U+2018 (left single) before the translation table is built. Restructure as tuples of `\uXXXX` escape sequences so every codepoint is guaranteed unique.
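The silent-deduplication failure mode and the escape-sequence fix can be demonstrated in a few lines (the names here are illustrative, not the library's actual constants):

```python
# When smart-quote literals in the source are mangled to the same ASCII byte,
# the dict keys collide and Python silently keeps only the last entry:
mangled = {"\x22": '"', "\x22": '"'}  # two "different" quote keys, both byte 0x22
assert len(mangled) == 1  # one mapping was dropped with no error

# Restructured as tuples of explicit \uXXXX escapes, every codepoint is unique:
QUOTE_PAIRS = (
    ("\u201c", '"'),  # U+201C left double quotation mark
    ("\u201d", '"'),  # U+201D right double quotation mark
    ("\u2018", "'"),  # U+2018 left single quotation mark
    ("\u2019", "'"),  # U+2019 right single quotation mark
)
assert len(dict(QUOTE_PAIRS)) == len(QUOTE_PAIRS)  # no silent deduplication

table = str.maketrans(dict(QUOTE_PAIRS))
```

With unique codepoints guaranteed at the source level, the translation table built from the tuples covers every intended quote variant.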
**KRRT7** left a comment:
The changelog claims this fixes "a pre-existing bug where left smart quotes were never normalized due to duplicate dictionary keys," but there are no regression assertions that prove the fix works for the specific characters that were allegedly broken.
Add explicit regression assertions for U+201C (“) and U+2018 (‘) — the claimed bug-fix characters — and for mixed strings containing both left/right smart quotes (e.g. `"\u201cHello\u201d"` → `"\"Hello\""`, `"\u2018it\u2019s"` → `"'it's"`).
The new benchmark input in test_benchmark_standardize_quotes.py includes those characters, but it only measures runtime; it does not assert correctness. The existing test_standardize_quotes parametrized cases still do not directly cover those exact code points — I checked and neither \u201c nor \u2018 appear anywhere in the test file.
Without these assertions, the bug-fix claim is untested and could silently regress.
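The requested regression assertions could look like the sketch below. `standardize_quotes` here is a minimal translate-based stand-in so the example is self-contained, not the library's actual import; the repo's test file would instead import the real function and express the cases via `pytest.mark.parametrize`:

```python
# Stand-in for the library's standardize_quotes (hypothetical; subset table).
_TABLE = str.maketrans({
    "\u201c": '"', "\u201d": '"',  # left/right double smart quotes
    "\u2018": "'", "\u2019": "'",  # left/right single smart quotes
})


def standardize_quotes(text: str) -> str:
    return text.translate(_TABLE)


# Regression cases for the characters the changelog claims were broken.
REGRESSION_CASES = [
    ("\u201c", '"'),                   # U+201C: allegedly dropped pre-fix
    ("\u2018", "'"),                   # U+2018: allegedly dropped pre-fix
    ("\u201cHello\u201d", '"Hello"'),  # mixed left/right double quotes
    ("\u2018it\u2019s", "'it's"),      # mixed left/right single quotes
]

for text, expected in REGRESSION_CASES:
    assert standardize_quotes(text) == expected
```

These assertions fail loudly if either left smart quote ever stops being normalized, which is exactly the silent regression the review warns about.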
…ints

Add explicit tests for U+201C and U+2018 (the characters silently dropped by duplicate dict keys in the old implementation), plus a parametrized test that asserts every one of the 39 codepoints in the translation table maps to its correct ASCII equivalent.
Added regression tests in 8dffad1 — 58/58 pass.
## Summary

- Replaced the per-call replacement loop with a `str.maketrans()` + `str.translate()` table for `standardize_quotes`
- Added a benchmark (`test_unstructured/benchmarks/`) to track `standardize_quotes` performance

## Benchmark
Azure Standard_D8s_v5 — 8 vCPU Intel Xeon Platinum 8473C, 32 GiB RAM, Python 3.12.12
test_benchmark_standardize_quotes
`standardize_quotes`: `6ada488f6c28` (base) → `8929336e66aa` (head), runtime −55%

Generated by codeflash agent
Reproduce the benchmark locally
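The collapsed reproduction steps aren't shown here, but a standalone timing sketch (not the repo's codeflash/pytest-benchmark harness) that contrasts the old loop-over-`replace` approach with `str.translate()` might look like:

```python
import timeit

# Illustrative subset of the quote mapping (the real table has ~40 entries).
QUOTES = {
    "\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'",
    "\u00ab": '"', "\u00bb": '"',
}
TABLE = str.maketrans(QUOTES)
TEXT = ("\u201cHello\u201d " * 1000) + ("\u2018it\u2019s " * 1000)


def loop_version(text: str) -> str:
    # Old approach: one replace() pass per quote type, each building a new string.
    for uni, ascii_q in QUOTES.items():
        if uni in text:
            text = text.replace(uni, ascii_q)
    return text


def translate_version(text: str) -> str:
    # New approach: single C-level pass over the text.
    return text.translate(TABLE)


# Both versions must agree before comparing speed.
assert loop_version(TEXT) == translate_version(TEXT)

loop_t = timeit.timeit(lambda: loop_version(TEXT), number=200)
trans_t = timeit.timeit(lambda: translate_version(TEXT), number=200)
print(f"loop: {loop_t:.4f}s  translate: {trans_t:.4f}s")
```

Absolute timings will differ from the Azure VM numbers above; the point is the relative gap between the two strategies on the same input.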
Benchmark test source
## Changelog

Added entry in `CHANGELOG.md` under 0.22.13.

## Test plan

- Ran `codeflash compare` on Azure VM (Standard_D8s_v5)
- Verified `standardize_quotes` is a drop-in replacement