
fix(semantic): make deep flow narrowing iterative and fast #1029

Draft
lewis6991 wants to merge 1 commit into EmmyLuaLs:main from lewis6991:fix/issue-1028-iterative-flow

Conversation

@lewis6991
Collaborator

Summary

  • replace recursive get_type_at_flow branch evaluation with an explicit frame stack
  • cache hot flow state and decoded flow metadata so deep guard chains stay fast
  • add regression coverage for the deep semantic-token guard chain and preserved narrowing behavior
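The core of the first bullet, converting recursive branch evaluation into an explicit frame stack, can be sketched in miniature. The types and names below (`FlowNode`, `narrows`) are hypothetical stand-ins, not the crate's real API; the point is only the shape of the transformation: the same per-node combination the recursion performed, but with pending work kept in a heap-allocated `Vec` so chain depth is bounded by memory instead of the thread stack.

```rust
// Hypothetical, simplified flow graph: each node has at most one antecedent
// (`None` marks the entry node) and contributes some narrowing work,
// modeled here as a plain counter.
#[derive(Clone, Copy)]
struct FlowNode {
    antecedent: Option<usize>,
    narrows: u32,
}

// Recursive shape (what the PR removes): one native stack frame per guard,
// so a deep enough chain overflows the thread stack.
fn type_at_flow_recursive(nodes: &[FlowNode], id: usize) -> u32 {
    match nodes[id].antecedent {
        None => nodes[id].narrows,
        Some(prev) => type_at_flow_recursive(nodes, prev) + nodes[id].narrows,
    }
}

// Iterative shape (the PR's approach, sketched): an explicit frame stack on
// the heap walks the same chain and combines results in the same order.
fn type_at_flow_iterative(nodes: &[FlowNode], id: usize) -> u32 {
    // Phase 1: push the chain of pending frames.
    let mut frames = vec![id];
    while let Some(prev) = nodes[*frames.last().unwrap()].antecedent {
        frames.push(prev);
    }
    // Phase 2: unwind from the entry node, accumulating the result
    // exactly as the recursive unwind would.
    let mut result = 0;
    while let Some(frame) = frames.pop() {
        result += nodes[frame].narrows;
    }
    result
}
```

With this shape, a chain of 100,000 guards costs one `Vec` of 100,000 frames rather than 100,000 native stack frames.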

Testing

  • cargo test -p emmylua_code_analysis
  • cargo test -p emmylua_ls test_issue_1028_i18n_semantic_tokens_repeated_prefix_guard_chain -- --nocapture

Issue 1028 exposes a pathological narrowing path. Deep guard chains
drive semantic token analysis through recursive `get_type_at_flow`
branch and condition evaluation, which can both overflow the thread
stack and rebuild the same flow state from scratch on every repeated
index lookup.

Rework the flow narrowing hot path so it stays stack safe and fast on
those inputs.

What changed:
- replace recursive branch-merge evaluation in `get_type_at_flow` with
  an explicit frame stack so deep `BranchLabel` chains no longer recurse
  on the thread stack
- assign each queried `VarRefId` a dense cache id and store hot flow
  results in direct `[var_ref][flow][condition]` slots instead of
  hashing `(VarRefId, FlowId, bool)` keys on every step
- flatten `BranchLabel` antecedents once per merge and cache the
  flattened branch lists for reuse across queries
- cache parsed flow assignment and flow condition metadata per `FlowId`
  so assignment var refs and index condition refs are resolved once and
  reused
- share the flow condition helper between `infer_index` and narrowing so
  the direct index precheck and the main evaluator use the same decoded
  condition data
- add a direct index flow-effect precheck in `infer_index` so repeated
  index accesses skip full narrowing when backward flow cannot affect
  the queried index ref
- keep assignment source reuse conservative for partial table and object
  literals so branch-local narrowing behavior stays intact
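The second bullet, dense `[var_ref][flow][condition]` slots in place of hashed `(VarRefId, FlowId, bool)` keys, can be sketched as follows. All names here (`DenseFlowCache`, `var_cache_id`, `slot`) are illustrative, not the crate's actual types: each queried variable is assigned a small sequential id once, and every subsequent cache access is plain indexing with no hashing on the hot path.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct DenseFlowCache {
    // Maps the original var-ref key (stringly typed here for the sketch)
    // to a dense, sequentially assigned cache id. Hashing happens once
    // per variable, not once per narrowing step.
    var_ids: HashMap<String, usize>,
    // slots[var_cache_id][flow_id][use_condition_narrowing as usize]
    slots: Vec<Vec<[Option<u32>; 2]>>,
}

impl DenseFlowCache {
    // Assigns (or retrieves) the dense id for a var ref.
    fn var_cache_id(&mut self, var_ref: &str) -> usize {
        let next = self.var_ids.len();
        *self.var_ids.entry(var_ref.to_string()).or_insert(next)
    }

    // Returns the mutable cache slot for (var, flow, condition),
    // growing the dense storage on demand.
    fn slot(&mut self, var_id: usize, flow_id: usize, cond: bool) -> &mut Option<u32> {
        if self.slots.len() <= var_id {
            self.slots.resize_with(var_id + 1, Vec::new);
        }
        let per_var = &mut self.slots[var_id];
        if per_var.len() <= flow_id {
            per_var.resize(flow_id + 1, [None, None]);
        }
        &mut per_var[flow_id][cond as usize]
    }
}
```

The trade-off is the usual dense-vs-sparse one: memory proportional to the largest ids seen, in exchange for O(1) unhashed lookups on every narrowing step.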

Fixes EmmyLuaLs#1028
@lewis6991
Collaborator Author

lewis6991 commented Apr 9, 2026

@CppCXY These changes are quite AI-sloppy but should fix #1028.

If you want something cleaner I can break this up. The main thing is making `get_type_at_flow` iterative so it uses the heap instead of the stack. Even with an iterative approach, the i18n test case was really slow, so I needed to add a bunch of caching to get the perf back to normal.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request optimizes the flow-based type narrowing system by refactoring LuaInferCache to use dense Vec-based storage and synthetic u32 identifiers, significantly reducing hashing overhead. The core narrowing logic has been transitioned to a stack-based state machine to prevent deep recursion, and a new has_direct_index_flow_effect check was added to skip unnecessary narrowing for index expressions. Feedback suggests encapsulating the complex flow effect checks into dedicated methods on FlowTree or FlowIndex and simplifying the PendingUnion evaluation logic by extracting it into a helper function.

Comment on lines +116 to +125
```rust
    cache.get(index).and_then(|entry| entry.as_ref())
}

fn get_dense_rc_slot<T>(cache: &mut Vec<Option<Rc<T>>>, index: usize) -> &mut Option<Rc<T>> {
    if cache.len() <= index {
        cache.resize(index + 1, None);
    }
    &mut cache[index]
}
```

Contributor


medium

The logic for checking flow effects is complex and repeated in multiple places. Consider encapsulating this check into a dedicated method on the FlowTree or FlowIndex to improve maintainability and readability.
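One way to read this suggestion is the sketch below. The names (`FlowIndex`, `effects`, and the method itself) are hypothetical, not the crate's real API: the repeated "can this backward flow affect the queried index ref?" predicate becomes a single method on the flow structure, so call sites ask a question instead of re-deriving the answer.

```rust
// Hypothetical flow index: for each flow node (dense id), the set of
// var-ref cache ids that node can write to.
struct FlowIndex {
    effects: Vec<Vec<usize>>,
}

impl FlowIndex {
    /// Returns true if the given flow node can change the type of `var_id`.
    /// Unknown flow ids conservatively report no effect.
    fn has_direct_index_flow_effect(&self, flow_id: usize, var_id: usize) -> bool {
        self.effects
            .get(flow_id)
            .map_or(false, |vars| vars.contains(&var_id))
    }
}
```

Centralizing the predicate also gives the `infer_index` precheck and the main evaluator one definition to keep in sync, which is the maintainability point the review is making.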

Comment on lines +397 to 412
```rust
let Some(next_pending_index) = branch_flow_ids.len().checked_sub(1) else {
    let result_type = apply_pending_condition_narrows(
        db,
        cache,
        LuaType::Never,
        pending_condition_narrows,
    );
    *get_flow_node_cache_slot(
        cache,
        var_ref_cache_id,
        flow_id,
        use_condition_narrowing,
    ) = Some(CacheEntry::Cache(result_type.clone()));
    last_result = Some(Ok(result_type));
    continue;
};
```
Contributor


medium

The logic for handling PendingUnion frames is quite verbose. This could be simplified by extracting the branch evaluation logic into a helper function to reduce the complexity of the main loop.
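A minimal sketch of the extraction being suggested, with hypothetical names (`BranchStep`, `next_pending_branch`) rather than the PR's actual types: the "are there branches left, and which one is next?" decision moves into a small helper, so the `PendingUnion` arm in the main loop collapses to a match on the helper's result.

```rust
#[derive(Debug, PartialEq)]
enum BranchStep {
    // All branches consumed: finalize the union with the accumulated type.
    Done,
    // Evaluate this branch flow id next (branches are walked back to front,
    // mirroring the `len().checked_sub(1)` indexing in the snippet above).
    Next(usize),
}

fn next_pending_branch(branch_flow_ids: &[usize], consumed: usize) -> BranchStep {
    match branch_flow_ids.len().checked_sub(consumed + 1) {
        None => BranchStep::Done,
        Some(i) => BranchStep::Next(branch_flow_ids[i]),
    }
}
```

The main loop then handles only the two outcomes, and the finalize-with-`Never` bookkeeping lives in one place instead of being inlined per frame.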
