
Conversation

@Pterjudin

No description provided.

Tajudeen added 16 commits January 7, 2026 22:07
…ulti-file apply

- Add ApplyEngineV2 service with atomic transaction support
- Implement pre-apply verification (base signature computation)
- Implement post-apply verification (proof of apply)
- Add deterministic ordering (files and hunks sorted)
- Integrate with rollbackSnapshotService and gitAutoStashService
- Add path safety checks (prevent writes outside workspace)
- Update composerPanel.applyAll to use ApplyEngineV2
- Add comprehensive test placeholders
- Support for edit and create operations
- Error categorization (base_mismatch, hunk_apply_failure, write_failure, verification_failure)
- UX notifications for verification states and errors
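
The path-safety check mentioned above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: the function name `isInsideWorkspace` and its signature are assumptions.

```typescript
import * as path from "path";

// Sketch of a path-safety check: resolve the target against the workspace
// root and reject anything that escapes it. Name and signature are
// illustrative, not the PR's actual API.
function isInsideWorkspace(workspaceRoot: string, target: string): boolean {
  const root = path.resolve(workspaceRoot);
  const resolved = path.resolve(root, target);
  const rel = path.relative(root, resolved);
  // An empty rel is the root itself; a ".." prefix or an absolute rel escapes.
  return rel === "" || (!rel.startsWith("..") && !path.isAbsolute(rel));
}
```

Resolving before comparing means `src/../../etc`-style traversal is caught the same way as a plain `../` prefix.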

- Add open_file tool to BuiltinToolCallParams and BuiltinToolResultType
- Add open_file tool definition in builtinTools
- Inject IEditorService into ToolsService
- Implement open_file handler that verifies file exists and opens in editor
- Update intent synthesis to handle 'open' commands and route to open_file tool

Fixes issue where 'open file1.js' command was not working.

Fixed issue where new autocompletions were using hardcoded matchup bounds (all zeros) instead of calculating them properly. This caused incorrect completion display when:
- User typed while waiting for LLM response
- Completion text didn't align with cursor position
- Stale completions were shown

The fix:
1. Recalculates prefix/suffix when LLM promise resolves
2. Properly calculates matchup bounds using getAutocompletionMatchup
3. Handles cases where prefix changed too much (returns empty completions)

This matches the pattern used for cached autocompletions and ensures completions display correctly.
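
The reconciliation in steps 1–3 can be sketched as a pure function: compare the prefix captured at request time with the current prefix when the promise resolves, trim any overlap the user already typed, and return nothing if the prefix drifted too far. The function name and return convention are illustrative.

```typescript
// Sketch of the resolve-time reconciliation (names are hypothetical).
// Returns the remaining completion text, or null for "show no completion".
function reconcileCompletion(
  requestPrefix: string,
  currentPrefix: string,
  completion: string,
): string | null {
  if (currentPrefix === requestPrefix) return completion; // nothing typed meanwhile
  if (!currentPrefix.startsWith(requestPrefix)) return null; // edits before the request point
  const typed = currentPrefix.slice(requestPrefix.length);
  // The user typed part of the completion while waiting: trim the overlap.
  if (completion.startsWith(typed)) return completion.slice(typed.length);
  return null; // prefix changed too much: drop the stale completion
}
```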

- Removed FIM from providers that don't support the suffix parameter:
  - OpenAI (official API doesn't support it)
  - xAI, DeepSeek, Groq (OpenAI-compatible but no suffix support)
  - vLLM (docs confirm no suffix support)
  - lmStudio (comment confirms no suffix support)

- Kept FIM for providers that support it:
  - Mistral (native FIM endpoint)
  - Ollama (native FIM support)
  - OpenRouter, OpenAI-compatible, LiteLLM (may support depending on backend)

- Updated filter logic to match actual implementations
- Added clear comments explaining which providers support FIM and why
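
The filter logic above reduces to a membership check. The provider lists follow the bullet points, but the set contents, key spellings, and function name here are illustrative, not the codebase's actual identifiers.

```typescript
// Providers whose completion endpoints accept a suffix (FIM). Entries and
// spellings are illustrative, taken from the bullet list above.
const FIM_PROVIDERS = new Set([
  "mistral",          // native FIM endpoint
  "ollama",           // native FIM support
  "openRouter",       // may support FIM depending on backend
  "openAICompatible", // may support FIM depending on backend
  "liteLLM",          // may support FIM depending on backend
]);

function supportsFIM(providerName: string): boolean {
  // OpenAI, xAI, DeepSeek, Groq, vLLM, and LM Studio accept no suffix
  // parameter, so they are simply absent from the set.
  return FIM_PROVIDERS.has(providerName);
}
```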

- Fixed 'Illegal value for lineNumber' errors in ContextGatheringService:
  - Added validation for line numbers before accessing model.getLineContent()
  - Added validation in _getSnippetForRange and _findContainerFunction
  - Prevents errors when model changes during autocomplete
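
The validation above amounts to a guard around `getLineContent()`: Monaco line numbers are 1-based, so anything outside `[1, lineCount]` is rejected before touching the model. `TextModelLike` is a minimal stand-in for Monaco's ITextModel; the helper name is illustrative.

```typescript
// Minimal stand-in for the parts of Monaco's ITextModel used here.
interface TextModelLike {
  getLineCount(): number;
  getLineContent(lineNumber: number): string;
}

// Guarded access: returns null instead of letting the model throw
// "Illegal value for lineNumber" when the model shrank mid-autocomplete.
function safeGetLineContent(model: TextModelLike, lineNumber: number): string | null {
  if (!Number.isInteger(lineNumber)) return null;
  if (lineNumber < 1 || lineNumber > model.getLineCount()) return null;
  return model.getLineContent(lineNumber);
}
```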

- Added filtering for non-code content in autocomplete results:
  - Filters out lines that are mostly non-ASCII characters without code indicators
  - Prevents Chinese characters and explanatory text from appearing in completions
  - Keeps legitimate Unicode in string literals and comments
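
One way to express that filter: keep any line containing code indicators (quotes or comment markers, which is what protects legitimate Unicode in string literals and comments), and otherwise reject lines that are mostly non-ASCII. The 50% threshold and the indicator regex are assumptions for illustration.

```typescript
// Illustrative non-code filter. Threshold and indicator set are assumptions.
function looksLikeCode(line: string): boolean {
  if (line.trim().length === 0) return true;
  // Quotes and comment markers indicate legitimate places for Unicode.
  if (/["'`]|\/\/|\/\*|#/.test(line)) return true;
  const chars = [...line];
  const nonAscii = chars.filter((ch) => ch.charCodeAt(0) > 127).length;
  return nonAscii / chars.length < 0.5; // mostly non-ASCII => not code
}
```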

- Updated error messages to clarify that Mistral (cloud) supports FIM
- Added note that OpenAI models can be used via OpenRouter if backend supports FIM
- Improved user guidance for selecting autocomplete models

- Fixed issue where accepted completions were being aborted due to cache eviction
- Mark completion as 'finished' before deletion to prevent abort in dispose callback
- Dispose callback now correctly skips abort for finished/accepted completions
- This prevents 'Aborted autocomplete' errors when user accepts a completion
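
The ordering matters here: the state flip must happen before the cache delete so the dispose path sees it. A toy version of that pattern (the types and `evict` shape are illustrative, not the PR's actual classes):

```typescript
type CompletionState = "pending" | "finished";

interface CachedCompletion {
  state: CompletionState;
  abort(): void;
}

// Evict a cache entry; if the user accepted the completion, mark it
// "finished" BEFORE deletion so the dispose path skips the abort.
function evict(cache: Map<string, CachedCompletion>, key: string, accepted: boolean): void {
  const entry = cache.get(key);
  if (!entry) return;
  if (accepted) entry.state = "finished"; // mark before deletion
  cache.delete(key);
  // Dispose callback: only abort completions still in flight.
  if (entry.state !== "finished") entry.abort();
}
```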

- Added detectLanguageMismatch() to detect Python/JS/Java syntax mismatches
- Filter out completions that contain syntax from wrong language
- Prevents Python code (def __init__) from appearing in JS files
- Also filters Chinese characters and non-code content
- Validates completions match the file's language before showing them
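
A toy version of `detectLanguageMismatch()`: scan for a few high-signal syntax markers from languages other than the file's. The marker lists below are illustrative; the real implementation may use different patterns.

```typescript
// Returns true when the completion contains syntax from a language other
// than languageId. Marker lists are illustrative, not exhaustive.
function detectLanguageMismatch(completion: string, languageId: string): boolean {
  const markers: Record<string, RegExp[]> = {
    python: [/\bdef __init__\b/, /\bdef \w+\(self\b/, /^\s*elif\b/m],
    javascript: [/\bfunction\s+\w+\s*\(/, /=>/, /\bconst\b/],
    java: [/\bpublic\s+(static\s+)?(void|class)\b/, /System\.out\.println/],
  };
  for (const [lang, patterns] of Object.entries(markers)) {
    if (lang === languageId) continue; // only foreign-language markers count
    if (patterns.some((p) => p.test(completion))) return true;
  }
  return false;
}
```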

- Fixed onError scope issue in language mismatch detection
- Use reject() instead of onError() for promise rejection
- Properly handle wrong-language completions

PROPER FIX (not band-aid):
- Added languageId parameter to prepareFIMMessage
- Include '// Language: javascript' (or python, etc.) in FIM prefix
- Tells model explicitly what language to generate BEFORE it generates
- Prevents wrong-language code at the source, not just filtering output

This is the proper fix because:
- Model knows what language to generate from the start
- Reduces wrong-language completions at generation time
- Filtering is now a safety net, not the primary solution
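
Building that prefix is a one-liner once the comment style is known. A sketch (the comment-style lookup and the function name are assumptions; `prepareFIMMessage`'s real signature may differ):

```typescript
// Prepend an explicit language marker so the model knows what to generate
// BEFORE it generates. Comment-style table is an illustrative assumption.
function buildFimPrefix(prefix: string, languageId: string): string {
  const commentStyles: Record<string, string> = { python: "#", shell: "#" };
  const marker = commentStyles[languageId] ?? "//";
  return `${marker} Language: ${languageId}\n${prefix}`;
}
```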

- Made language instruction more explicit: 'Generate javascript code only. Do not add comments or explanations.'
- Strengthened 'no comments' instruction in FIM prompt

- Added filtering to remove Chinese/Japanese/Korean characters from comments
- Strips comment portion but keeps code when non-ASCII found in comments
- Example: 'const x = 1; //中文注释' becomes 'const x = 1;'
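
The strip-comment-keep-code step might look like this. The Unicode ranges (CJK Unified Ideographs, Hiragana/Katakana, Hangul syllables) and the `//`-only handling are simplifications for illustration; the PR's actual regexes may cover more cases.

```typescript
// Common Chinese, Japanese, and Korean Unicode blocks (illustrative subset).
const CJK = /[\u4e00-\u9fff\u3040-\u30ff\uac00-\ud7af]/;

// If a trailing //-comment contains CJK characters, drop the comment but
// keep the code before it; otherwise leave the line untouched.
function stripCjkComment(line: string): string {
  const idx = line.indexOf("//");
  if (idx === -1) return line;
  const comment = line.slice(idx);
  return CJK.test(comment) ? line.slice(0, idx).trimEnd() : line;
}
```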

- Improved regex patterns for detecting Chinese/Japanese/Korean in comments
- Better handling of comment removal while preserving code
- Strengthened language context in FIM prompts