1,452 changes: 1,452 additions & 0 deletions DOCUMENTATION.md

Large diffs are not rendered by default.

139 changes: 139 additions & 0 deletions FEATURE_MAP.md
@@ -0,0 +1,139 @@
# 🗺️ Analyzer Repository Feature Mapping

**Purpose:** Comprehensive map of all features, functions, and their integration points

---


## 📄 analyzer.py

**Lines:** 2,112 | **Size:** 80.2 KB

### Classes (10)

- `AnalysisError`
- `ToolConfig`
- `GraphSitterAnalysis`
- `RuffIntegration`
- `LSPDiagnosticsCollector`
- `ErrorDatabase`
- `AutoGenLibFixerLegacy`
- `ComprehensiveAnalyzer`
- `InteractiveAnalyzer`
- `ReportGenerator`

### Functions (1)

- `main()`


## 📄 autogenlib_adapter.py

**Lines:** 1,167 | **Size:** 47.7 KB

### Functions (1)

- `get_ai_client()`


## 📄 graph_sitter_adapter.py

**Lines:** 5,590 | **Size:** 227.4 KB

### Classes (12)

- `AnalyzeRequest`
- `ErrorAnalysisResponse`
- `EntrypointAnalysisResponse`
- `TransformationRequest`
- `VisualizationRequest`
- `DeadCodeAnalysisResponse`
- `CodeQualityMetrics`
- `GraphSitterAnalyzer`
- `AnalysisEngine`
- `EnhancedVisualizationEngine`
- `TransformationEngine`
- `EnhancedTransformationEngine`

### Functions (23)

- `calculate_doi(cls: Class)`
- `get_operators_and_operands(function: Function)`
- `calculate_halstead_volume(operators: List[str], operands: List[str])`
- `cc_rank(complexity: int)`
- `analyze_codebase(request: AnalyzeRequest, background_tasks: Backgro...)`
- `get_error_analysis(analysis_id: str)`
- `fix_errors_with_ai(analysis_id: str, max_fixes: int = 1)`
- `get_entrypoint_analysis(analysis_id: str)`
- `get_dead_code_analysis(analysis_id: str)`
- `get_code_quality_metrics(analysis_id: str)`
- `create_visualization(analysis_id: str, request: VisualizationRequest)`
- `apply_transformation(analysis_id: str, request: TransformationRequest)`
- `generate_documentation(analysis_id: str, target_type: str = "codebas...)`
- `get_tree_structure(analysis_id: str)`
- `get_dependency_graph(analysis_id: str)`
- `get_architectural_insights(analysis_id: str)`
- `get_analysis_summary(analysis_id: str)`
- `delete_analysis(analysis_id: str)`
- `list_analyses()`
- `health_check()`
- `get_capabilities()`
- `cleanup_temp_directory(repo_path: str)`
- `convert_all_calls_to_kwargs(codebase: Codebase)`


## 📄 lsp_adapter.py

**Lines:** 564 | **Size:** 25.8 KB

### Classes (3)

- `EnhancedDiagnostic`
- `RuntimeErrorCollector`
- `LSPDiagnosticsManager`


## 📄 static_libs.py

**Lines:** 2,076 | **Size:** 81.6 KB

### Classes (23)

- `LibraryManager`
- `StandardToolIntegration`
- `ErrorCategory`
- `Severity`
- `AnalysisError`
- `AdvancedASTAnalyzer`
- `SymbolTableAnalyzer`
- `DeadCodeDetector`
- `TypeInferenceAnalyzer`
- `ImportResolver`
- `ComprehensiveErrorAnalyzer`
- `ResultAggregator`
- `ReportGenerator`
- `AdvancedErrorDetector`
- `ErrorCategory`
- `Severity`
- `AnalysisError`
- `AdvancedASTAnalyzer`
- `SymbolTableAnalyzer`
- `DeadCodeDetector`
- `TypeInferenceAnalyzer`
- `ImportResolver`
- `ComprehensiveErrorAnalyzer`

### Functions (1)

- `main()`


---

## 📊 Summary Statistics

- **Total Functions:** 26
- **Total Classes:** 48
- **Total Lines:** 11,509
- **Total Files:** 5
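
Counts like these can be regenerated mechanically. A rough sketch using Python's `ast` module, counting only top-level definitions; the helper name and output shape are illustrative, not this repository's tooling:

```python
import ast

def summarize_source(source: str) -> dict:
    """Count top-level classes and functions in a Python source string."""
    tree = ast.parse(source)
    classes = [n.name for n in tree.body if isinstance(n, ast.ClassDef)]
    functions = [n.name for n in tree.body
                 if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    return {"classes": classes, "functions": functions,
            "lines": len(source.splitlines())}

sample = "class A:\n    pass\n\ndef main():\n    pass\n"
print(summarize_source(sample))
```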
68 changes: 41 additions & 27 deletions Libraries/analyzer.py
@@ -74,6 +74,14 @@
 SOLIDLSP_AVAILABLE = False
 
 # AutoGenLib integration
+# Enhanced AutoGenLib Fixer - Safe runtime error fixing
+try:
+    from autogenlib_adapter import AutoGenLibAdapter
+    AUTOGENLIB_ADAPTER_AVAILABLE = True
+except ImportError as e:
+    AUTOGENLIB_ADAPTER_AVAILABLE = False
+    logging.debug(f"Enhanced AutoGenLib fixer not available: {e}")
+
 try:
     from graph_sitter.extensions import autogenlib
     from graph_sitter.extensions.autogenlib._cache import cache_module
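
The guarded import added here follows the standard optional-dependency pattern: probe the import, record a flag, and branch on it later. A self-contained sketch, with a hypothetical module name:

```python
import logging

# Probe for an optional dependency; degrade gracefully if it is absent.
try:
    import some_optional_module  # hypothetical name, not a real package
    OPTIONAL_AVAILABLE = True
except ImportError as e:
    OPTIONAL_AVAILABLE = False
    logging.debug(f"Optional module not available: {e}")

def do_work() -> str:
    """Use the enhanced path when available, else a basic fallback."""
    if OPTIONAL_AVAILABLE:
        return "enhanced"
    return "basic"

print(do_work())
```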
@@ -640,36 +648,46 @@ def query_errors(self, filters: dict[str, Any]) -> list[dict[str, Any]]:
         return [dict(row) for row in cursor.fetchall()]
 
 
-class AutoGenLibFixer:
-    """Integration with AutoGenLib for AI-powered error fixing."""
+class AutoGenLibFixerLegacy:
Review comment from @cubic-dev-ai bot (Oct 16, 2025) on Libraries/analyzer.py, line 651:

> Renaming the legacy wrapper to `AutoGenLibFixerLegacy` breaks existing callers that still instantiate `AutoGenLibFixer()`, so when the enhanced fixer import is missing we now raise `NameError` and lose the fallback. Please keep the original class name or update every call site to match.
+    """Legacy wrapper for AutoGenLibFixer - now uses enhanced version.
+
+    This class maintains backward compatibility while delegating to the
+    new enhanced AutoGenLibFixer for safe runtime error fixing.
+    """
 
     def __init__(self):
-        if not AUTOGENLIB_AVAILABLE:
+        """Initialize using enhanced fixer if available, otherwise raise error."""
+        if AUTOGENLIB_ADAPTER_AVAILABLE:
+            # Use enhanced fixer with full safety features
+            self._fixer = AutoGenLibFixer(codebase=None)
+            logging.info("✅ Using enhanced AutoGenLibFixer")
+        elif AUTOGENLIB_AVAILABLE:
+            # Fallback to basic autogenlib
+            logging.warning("⚠️ Using basic AutoGenLib (enhanced fixer not available)")
+            autogenlib.init(
+                "Advanced Python code analysis and error fixing system",
+                enable_exception_handler=True,
+                enable_caching=True,
+            )
+            self._fixer = None
+        else:
             msg = "AutoGenLib not available"
             raise ImportError(msg)
 
-        # Initialize AutoGenLib for code fixing
-        autogenlib.init(
-            "Advanced Python code analysis and error fixing system",
-            enable_exception_handler=True,
-            enable_caching=True,
-        )
-
     def generate_fix_for_error(self, error: AnalysisError, source_code: str) -> dict[str, Any] | None:
-        """Generate a fix for a specific error using AutoGenLib's LLM integration."""
+        """Generate a fix using enhanced fixer if available."""
+        if self._fixer:
+            return self._fixer.generate_fix_for_error(error, source_code)
+
+        # Fallback to basic generation (legacy code)
         try:
             # Create a mock exception for the error
             mock_exception_type = type(error.error_type, (Exception,), {})
             mock_exception_value = Exception(error.message)
 
             # Create a simplified traceback string
             mock_traceback = f"""
 File "{error.file_path}", line {error.line}, in <module>
-    {error.context or "# Error context not available"}
+    {getattr(error, 'context', None) or "# Error context not available"}
 {error.error_type}: {error.message}
 """
 
             # Use AutoGenLib's fix generation
             fix_info = generate_fix(
                 module_name=os.path.basename(error.file_path).replace(".py", ""),
                 current_code=source_code,
@@ -679,31 +697,27 @@ def generate_fix_for_error(self, error: AnalysisError, source_code: str) -> dict
                 is_autogenlib=False,
                 source_file=error.file_path,
             )
-
             return fix_info
-
         except Exception as e:
             logging.exception(f"Failed to generate fix for error: {e}")
             return None
 
     def apply_fix_to_file(self, file_path: str, fixed_code: str) -> bool:
-        """Apply a fix to a file (with backup)."""
+        """Apply fix using enhanced fixer if available."""
+        if self._fixer:
+            return self._fixer.apply_fix_to_file(file_path, fixed_code)
+
+        # Fallback to basic application
         try:
             # Create backup
             backup_path = f"{file_path}.backup_{int(time.time())}"
             with open(file_path) as original:
                 with open(backup_path, "w") as backup:
                     backup.write(original.read())
 
             # Apply fix
             with open(file_path, "w") as f:
                 f.write(fixed_code)
 
             logging.info(f"Applied fix to {file_path} (backup: {backup_path})")
             return True
-
         except Exception as e:
-            logging.exception(f"Failed to apply fix to {file_path}: {e}")
+            logging.exception(f"Failed to apply fix: {e}")
             return False
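
The review comment above flags that the rename drops the old `AutoGenLibFixer` name from this module. One conventional mitigation, sketched here with hypothetical code that is not part of this PR, is a deprecated alias so old call sites keep working while signalling the rename:

```python
import warnings

class AutoGenLibFixerLegacy:
    """Stand-in for the renamed class in this PR."""
    def __init__(self):
        self._fixer = None

def _deprecated_alias(new_cls, old_name):
    """Build a subclass that warns when the old name is instantiated."""
    class _Alias(new_cls):
        def __init__(self, *args, **kwargs):
            warnings.warn(
                f"{old_name} is deprecated; use {new_cls.__name__}",
                DeprecationWarning, stacklevel=2)
            super().__init__(*args, **kwargs)
    _Alias.__name__ = old_name
    return _Alias

# Old callers keep working, with a deprecation signal.
AutoGenLibFixer = _deprecated_alias(AutoGenLibFixerLegacy, "AutoGenLibFixer")
obj = AutoGenLibFixer()
print(isinstance(obj, AutoGenLibFixerLegacy))
```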


88 changes: 62 additions & 26 deletions Libraries/autogenlib_adapter.py
@@ -36,6 +36,40 @@
 logger = logging.getLogger(__name__)
 
 
+# ================================================================================
+# AI CLIENT CONFIGURATION
+# ================================================================================
+
+def get_ai_client():
+    """Get configured AI client (Z.AI Anthropic endpoint or OpenAI fallback).
+
+    Returns:
+        tuple: (client, model) or (None, None) if not configured
+    """
+    # Try Z.AI Anthropic endpoint first
+    api_key = os.environ.get("ANTHROPIC_AUTH_TOKEN")
+    base_url = os.environ.get("ANTHROPIC_BASE_URL")
+    model = os.environ.get("ANTHROPIC_MODEL", "glm-4.6")
+
+    if api_key and base_url:
+        logger.info(f"✅ Using Z.AI Anthropic endpoint: {model}")
+        client = openai.OpenAI(api_key=api_key, base_url=base_url)
+        return client, model
+
+    # Fallback to OpenAI
+    api_key = os.environ.get("OPENAI_API_KEY")
+    base_url = os.environ.get("OPENAI_API_BASE_URL")
+    model = os.environ.get("OPENAI_MODEL", "gpt-4o")
+
+    if api_key:
+        logger.info(f"⚠️ Using OpenAI endpoint (fallback): {model}")
+        client = openai.OpenAI(api_key=api_key, base_url=base_url)
+        return client, model
+
+    logger.error("❌ No AI API configuration found")
+    return None, None
+
+
 # ================================================================================
 # CONTEXT ENRICHMENT FUNCTIONS
 # ================================================================================
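
The precedence logic in `get_ai_client` (the Z.AI Anthropic-compatible endpoint first, then OpenAI) can be sketched without the `openai` dependency. The environment variable names follow the diff; the returned client dict is a stand-in for the real SDK client:

```python
import os

def resolve_ai_config(env=os.environ) -> tuple:
    """Pick (endpoint_config, model) by precedence, or (None, None)."""
    # Preferred: Anthropic-compatible endpoint (both key and URL required).
    key, url = env.get("ANTHROPIC_AUTH_TOKEN"), env.get("ANTHROPIC_BASE_URL")
    if key and url:
        return {"api_key": key, "base_url": url}, env.get("ANTHROPIC_MODEL", "glm-4.6")
    # Fallback: OpenAI (only the key is required).
    key = env.get("OPENAI_API_KEY")
    if key:
        return ({"api_key": key, "base_url": env.get("OPENAI_API_BASE_URL")},
                env.get("OPENAI_MODEL", "gpt-4o"))
    return None, None

print(resolve_ai_config({"OPENAI_API_KEY": "sk-test"}))
```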
@@ -595,15 +629,10 @@ def _get_search_terms_for_error_category(category: str) -> list[str]:
 
 def resolve_diagnostic_with_ai(enhanced_diagnostic: EnhancedDiagnostic, codebase: Codebase) -> dict[str, Any]:
     """Generates a fix for a given LSP diagnostic using an AI model, with comprehensive context."""
-    api_key = os.environ.get("OPENAI_API_KEY")
-    if not api_key:
-        logger.error("OPENAI_API_KEY environment variable not set.")
-        return {"status": "error", "message": "OpenAI API key not configured."}
-
-    base_url = os.environ.get("OPENAI_API_BASE_URL")
-    model = os.environ.get("OPENAI_MODEL", "gpt-4o")  # Using gpt-4o for better code generation
-
-    client = openai.OpenAI(api_key=api_key, base_url=base_url)
+    # Get configured AI client
+    client, model = get_ai_client()
+    if not client:
+        return {"status": "error", "message": "AI API not configured. Set ANTHROPIC_AUTH_TOKEN or OPENAI_API_KEY."}
 
     # Prepare comprehensive context for the LLM
     diag = enhanced_diagnostic["diagnostic"]
@@ -771,11 +800,13 @@ def resolve_diagnostic_with_ai(enhanced_diagnostic: EnhancedDiagnostic, codebase
 
 def resolve_runtime_error_with_ai(runtime_error: dict[str, Any], codebase: Codebase) -> dict[str, Any]:
     """Resolve runtime errors using AI with full context."""
-    api_key = os.environ.get("OPENAI_API_KEY")
-    if not api_key:
-        return {"status": "error", "message": "OpenAI API key not configured."}
-
-    client = openai.OpenAI(api_key=api_key, base_url=os.environ.get("OPENAI_API_BASE_URL"))
+    # Get configured AI client
+
+    client, model = get_ai_client()
+
+    if not client:
+
+        return {"status": "error", "message": "AI API not configured. Set ANTHROPIC_AUTH_TOKEN or OPENAI_API_KEY."}
 
     system_message = """
     You are an expert Python developer specializing in runtime error resolution.
@@ -828,11 +859,13 @@ def resolve_runtime_error_with_ai(runtime_error: dict[str, Any], codebase: Codeb
 
 def resolve_ui_error_with_ai(ui_error: dict[str, Any], codebase: Codebase) -> dict[str, Any]:
     """Resolve UI interaction errors using AI with full context."""
-    api_key = os.environ.get("OPENAI_API_KEY")
-    if not api_key:
-        return {"status": "error", "message": "OpenAI API key not configured."}
-
-    client = openai.OpenAI(api_key=api_key, base_url=os.environ.get("OPENAI_API_BASE_URL"))
+    # Get configured AI client
+
+    client, model = get_ai_client()
+
+    if not client:
+
+        return {"status": "error", "message": "AI API not configured. Set ANTHROPIC_AUTH_TOKEN or OPENAI_API_KEY."}
 
     system_message = """
     You are an expert frontend developer specializing in React/JavaScript error resolution.
@@ -885,11 +918,13 @@ def resolve_multiple_errors_with_ai(
     max_fixes: int = 10,
 ) -> dict[str, Any]:
     """Resolve multiple errors in batch using AI with pattern recognition."""
-    api_key = os.environ.get("OPENAI_API_KEY")
-    if not api_key:
-        return {"status": "error", "message": "OpenAI API key not configured."}
-
-    client = openai.OpenAI(api_key=api_key, base_url=os.environ.get("OPENAI_API_BASE_URL"))
+    # Get configured AI client
+
+    client, model = get_ai_client()
+
+    if not client:
+
+        return {"status": "error", "message": "AI API not configured. Set ANTHROPIC_AUTH_TOKEN or OPENAI_API_KEY."}
 
     # Group errors by category and file
     error_groups = {}
@@ -995,11 +1030,13 @@ def resolve_multiple_errors_with_ai(
 
 def generate_comprehensive_fix_strategy(codebase: Codebase, error_analysis: dict[str, Any]) -> dict[str, Any]:
     """Generate a comprehensive fix strategy for all errors in the codebase."""
-    api_key = os.environ.get("OPENAI_API_KEY")
-    if not api_key:
-        return {"status": "error", "message": "OpenAI API key not configured."}
-
-    client = openai.OpenAI(api_key=api_key, base_url=os.environ.get("OPENAI_API_BASE_URL"))
+    # Get configured AI client
+
+    client, model = get_ai_client()
+
+    if not client:
+
+        return {"status": "error", "message": "AI API not configured. Set ANTHROPIC_AUTH_TOKEN or OPENAI_API_KEY."}
 
     system_message = """
     You are a senior software architect and code quality expert.
@@ -1127,4 +1164,3 @@ def _styles_compatible(style1: dict[str, Any], style2: dict[str, Any]) -> bool:
 
 
 import time
-
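
Each resolver in this file now repeats the same `get_ai_client()` guard. One way the duplication could be factored out, sketched with a stand-in client factory rather than code from this PR, is a decorator that injects the client or short-circuits with the error dict:

```python
import functools

def get_ai_client():
    """Stand-in for the adapter's client factory; pretend nothing is configured."""
    return None, None

def requires_ai_client(func):
    """Inject (client, model) into the wrapped resolver, or short-circuit."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        client, model = get_ai_client()
        if not client:
            return {"status": "error",
                    "message": "AI API not configured. Set ANTHROPIC_AUTH_TOKEN or OPENAI_API_KEY."}
        return func(*args, client=client, model=model, **kwargs)
    return wrapper

@requires_ai_client
def resolve_runtime_error(error, client=None, model=None):
    # Hypothetical resolver body; only reached when a client is configured.
    return {"status": "ok", "model": model}

print(resolve_runtime_error({"type": "KeyError"})["status"])
```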