Conversation
Pull request overview
This PR prepares the codebase for the v1.0.0 release by expanding integrations (AWS/Proxmox), improving multi-source inventory linking, and refactoring the database layer to support both SQLite and PostgreSQL with a migration-first schema approach.
Changes:
- Added AWS integration types and health-check tests; updated inventory ID prefixing for Bolt/Ansible and improved node linking with per-source `sourceData`.
- Refactored database initialization to use a `DatabaseAdapter` abstraction (SQLite + Postgres) and migration-first schema management (removing legacy schema files).
- Updated docs, Dockerfiles, environment templates, and release metadata to reflect the v1.0.0 configuration model (`.env` as source of truth).
Reviewed changes
Copilot reviewed 89 out of 313 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| backend/src/integrations/bolt/BoltPlugin.ts | Clarifies node ID prefixing behavior in groups. |
| backend/src/integrations/aws/types.ts | Adds AWS integration type definitions + auth error type. |
| backend/src/integrations/aws/tests/AWSPlugin.healthCheck.test.ts | Adds unit tests for AWS plugin health check behavior. |
| backend/src/integrations/ansible/AnsibleService.ts | Prefixes Ansible group host node IDs with ansible: consistently. |
| backend/src/integrations/NodeLinkingService.ts | Extends linked nodes with per-source sourceData; adjusts identifier extraction. |
| backend/src/integrations/IntegrationManager.ts | Switches aggregated inventory nodes to LinkedNode[]; adds provisioning capability aggregation; adds per-source inventory timeout. |
| backend/src/database/rbac-schema.sql | Removes legacy RBAC schema file (migration-first). |
| backend/src/database/migrations/010_drop_integration_configs.sql | Adds migration to drop integration_configs table. |
| backend/src/database/migrations/009_integration_configs.sql | Adds historical migration that creates integration_configs. |
| backend/src/database/migrations/008_journal_entries.sql | Adds journal_entries table migration. |
| backend/src/database/migrations/007_permissions_and_provisioner_role.sql | Seeds new permissions and Provisioner role. |
| backend/src/database/migrations/002_seed_rbac_data.sql | Makes seeding timestamps consistent via CURRENT_TIMESTAMP. |
| backend/src/database/migrations/000_initial_schema.sql | Adds migration header documentation. |
| backend/src/database/migrations.sql | Removes legacy monolithic migration file. |
| backend/src/database/errors.ts | Adds typed DB error classes. |
| backend/src/database/audit-schema.sql | Removes legacy audit schema file (migration-first). |
| backend/src/database/SQLiteAdapter.ts | Introduces SQLite adapter implementing DatabaseAdapter. |
| backend/src/database/PostgresAdapter.ts | Introduces Postgres adapter implementing DatabaseAdapter. |
| backend/src/database/MigrationRunner.ts | Refactors migration runner to use DatabaseAdapter + dialect-aware migration selection. |
| backend/src/database/ExecutionRepository.ts | Refactors repository to use dialect placeholders via DatabaseAdapter. |
| backend/src/database/DatabaseService.ts | Refactors DB initialization to adapter factory + migrations only. |
| backend/src/database/DatabaseAdapter.ts | Adds unified DB adapter interface. |
| backend/src/database/AdapterFactory.ts | Adds adapter factory driven by DB_TYPE / DATABASE_URL. |
| backend/src/config/schema.ts | Adds Proxmox + AWS config schemas and provisioning safety config. |
| backend/src/config/ConfigService.ts | Parses Proxmox/AWS/provisioning env vars; adds getters. |
| backend/package.json | Bumps backend version to 1.0.0; updates build to copy migrations; adds pg + AWS SDK deps. |
| backend/.env.example | Updates v1.0.0 env template (integrations, provisioning safety, defaults). |
| README.md | Updates positioning/features/docs links/version history for v1.0.0. |
| Dockerfile.ubuntu | Adds prod-deps stage; copies migrations; uses shared entrypoint; updates env defaults. |
| Dockerfile.alpine | Adds prod-deps stage; copies migrations; uses shared entrypoint; updates env defaults. |
| Dockerfile | Updates version label; adjusts Bolt/OpenBolt install; copies migrations; adds env defaults. |
| CLAUDE.md | Updates database documentation to migration-first approach. |
| CHANGELOG.md | Adds v1.0.0 release notes including breaking changes. |
| .pre-commit-config.yaml | Tightens duplicate/backup filename detection regex. |
| .kirograph/config.json | Adds KiroGraph indexing config. |
| .kiro/todo/proxmox-restart-required.md | Documents cached build issue requiring restart (internal note). |
| .kiro/todo/expert-mode-prototype-pollution.md | Documents prototype pollution risk (internal note). |
| .kiro/todo/database-schema-cleanup-task.md | Documents DB cleanup task completion (internal note). |
| .kiro/todo/REMAINING_TODOS_REPORT.md | Adds prioritized TODO report (internal note). |
| .kiro/steering/tech.md | Adds tech stack steering doc. |
| .kiro/steering/structure.md | Adds repository structure steering doc. |
| .kiro/steering/security-best-practices.md | Adds allowlist-secret guidance for detect-secrets. |
| .kiro/steering/product.md | Adds product summary steering doc. |
| .kiro/steering/kirograph.md | Adds KiroGraph usage steering doc. |
| .kiro/steering/git-best-practices.md | Expands pre-commit hook documentation. |
| .kiro/specs/v1-release-prep/requirements.md | Adds release-prep requirements spec. |
| .kiro/specs/v1-release-prep/design.md | Adds release-prep design spec. |
| .kiro/specs/v1-release-prep/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/pabawi-release-1-0-0/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/missing-lifecycle-actions/bugfix.md | Adds lifecycle actions bugfix spec. |
| .kiro/specs/missing-lifecycle-actions/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/azure-support/requirements.md | Adds Azure integration requirements spec (future work). |
| .kiro/specs/azure-support/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/090/puppet-pabawi-refactoring/tasks.md | Adds Puppet module refactoring plan. |
| .kiro/specs/090/puppet-pabawi-refactoring/requirements.md | Adds Puppet module refactoring requirements. |
| .kiro/specs/090/puppet-pabawi-refactoring/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/090/proxmox-integration/.config.kiro | Adds Kiro spec config. |
| .kiro/specs/090/proxmox-frontend-ui/requirements.md | Adds Proxmox frontend UI requirements spec. |
| .kiro/specs/090/proxmox-frontend-ui/.config.kiro | Adds Kiro spec config. |
| .kiro/settings/mcp.json | Adds MCP server config for KiroGraph tools. |
| .kiro/hooks/kirograph-sync-on-save.json | Adds KiroGraph sync hook (save). |
| .kiro/hooks/kirograph-sync-on-delete.json | Adds KiroGraph sync hook (delete). |
| .kiro/hooks/kirograph-sync-on-create.json | Adds KiroGraph sync hook (create). |
| .kiro/done/proxmox-ssl-fix.md | Marks Proxmox SSL fix as done (internal note). |
| .kiro/done/provisioning-endpoint-fix.md | Marks provisioning endpoint fix as done (internal note). |
| .kiro/done/node-linking-redesign.md | Marks node linking redesign as done (internal note). |
| .kiro/done/docker-missing-schema-files.md | Marks Docker schema copy fix as done (internal note). |
| .kiro/done/database-schema-cleanup-task.md | Marks DB schema cleanup as done (internal note). |
| .kiro/database-cleanup-prompt.md | Adds DB cleanup prompt (internal note). |
| .kiro/analysis/ManageTab_Lifecycle_Flow.md | Adds ManageTab lifecycle flow analysis (internal note). |
| .github/copilot-instructions.md | Updates DB schema guidance to migration-first. |
| .env.docker | Updates Docker env template for v1.0.0 configuration model. |
```typescript
// Split into statements (handle multi-statement migrations)
const statements = sql
  .split(";")
  .map((s) => s.trim())
  .filter((s) => {
    // Filter out empty statements and comment-only statements
    if (s.length === 0) return false;
    // Remove single-line comments and check if anything remains
    const withoutComments = s
      .split("\n")
      .map((line) => line.replace(/--.*$/, "").trim())
      .filter((line) => line.length > 0)
      .join("\n");
    return withoutComments.length > 0;
  });

// Execute each statement individually via the adapter
for (const statement of statements) {
  await this.db.execute(statement);
}
```
The migration runner now claims PostgreSQL support, but splitting SQL on `;` is not safe for many valid Postgres migrations (e.g., functions/procedures with `$$...;...$$` bodies, trigger bodies, or other complex statements containing semicolons). This can cause migrations to fail or be applied only partially. A concrete improvement is to execute the entire migration file as a single `execute(sql)` call when `dialect === "postgres"`, while keeping per-statement execution for SQLite (or, alternatively, to add a more robust SQL statement parser or marker-based splitter).
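A sketch of that suggestion follows; the `DatabaseAdapter` shape and function names here are assumptions for illustration, not the PR's actual interface:

```typescript
// Assumed adapter shape: a dialect discriminator plus an execute() method.
interface DatabaseAdapter {
  dialect: "sqlite" | "postgres";
  execute(sql: string): Promise<void>;
}

// Keep the existing semicolon-based splitting for SQLite only.
function splitSqliteStatements(sql: string): string[] {
  return sql
    .split(";")
    .map((s) => s.trim())
    .filter((s) => {
      // Drop empty and comment-only fragments
      const withoutComments = s
        .split("\n")
        .map((line) => line.replace(/--.*$/, "").trim())
        .filter((line) => line.length > 0)
        .join("\n");
      return withoutComments.length > 0;
    });
}

async function runMigration(db: DatabaseAdapter, sql: string): Promise<void> {
  if (db.dialect === "postgres") {
    // Postgres accepts multi-statement strings, so dollar-quoted
    // function bodies ($$...;...$$) stay intact.
    await db.execute(sql);
    return;
  }
  for (const statement of splitSqliteStatements(sql)) {
    await db.execute(statement);
  }
}
```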
```typescript
// Ensure database directory exists (for SQLite)
const dbDir = dirname(this.databasePath);
if (!existsSync(dbDir)) {
  mkdirSync(dbDir, { recursive: true });
}
```
This directory creation is now duplicated with `SQLiteAdapter.initialize()`, which already creates the directory when the path isn't `:memory:`. To keep responsibilities clear (and avoid doing filesystem work when `DB_TYPE=postgres`), consider removing this block from `DatabaseService` and letting the SQLite adapter handle it internally, or conditionally running it only when the selected adapter dialect is SQLite.
```typescript
getAllProvisioningCapabilities(): {
  source: string;
  capabilities: {
    name: string;
    description: string;
    operation: "create" | "destroy";
    parameters: {
      name: string;
      type: string;
      required: boolean;
      default?: unknown;
    }[];
  }[];
}[] {
```
`getAllProvisioningCapabilities()` introduces a large inline return type and relies on runtime duck-typing plus `no-unsafe-*` suppressions. This makes the API harder to maintain and refactor safely. A more robust approach is to define a shared exported type (e.g., `ProvisioningCapability`) and a small interface such as `ProvisioningCapableExecutionTool { listProvisioningCapabilities(): ProvisioningCapability[] }`, then narrow via a typed type guard, removing the need for ESLint suppressions and making the contract explicit for plugin implementers.
```typescript
getAllProvisioningCapabilities(): {
  source: string;
  capabilities: {
    name: string;
    description: string;
    operation: "create" | "destroy";
    parameters: {
      name: string;
      type: string;
      required: boolean;
      default?: unknown;
    }[];
  }[];
}[] {
  const result: {
    source: string;
    capabilities: {
      name: string;
      description: string;
      operation: "create" | "destroy";
      parameters: {
        name: string;
        type: string;
        required: boolean;
        default?: unknown;
      }[];
    }[];
  }[] = [];

  for (const [name, tool] of this.executionTools) {
    // Check if the plugin has listProvisioningCapabilities method
    if (
      "listProvisioningCapabilities" in tool &&
      typeof tool.listProvisioningCapabilities === "function"
    ) {
      try {
        // eslint-disable-next-line @typescript-eslint/no-unsafe-assignment, @typescript-eslint/no-unsafe-call
        const capabilities = tool.listProvisioningCapabilities();
        // eslint-disable-next-line @typescript-eslint/no-unsafe-member-access
        if (capabilities && capabilities.length > 0) {
          result.push({
            source: name,
            // eslint-disable-next-line @typescript-eslint/no-unsafe-assignment
```
Suggested change:

```typescript
export interface ProvisioningCapabilityParameter {
  name: string;
  type: string;
  required: boolean;
  default?: unknown;
}

export interface ProvisioningCapability {
  name: string;
  description: string;
  operation: "create" | "destroy";
  parameters: ProvisioningCapabilityParameter[];
}

export interface ProvisioningCapabilitySource {
  source: string;
  capabilities: ProvisioningCapability[];
}

export interface ProvisioningCapableExecutionTool {
  listProvisioningCapabilities(): ProvisioningCapability[];
}

private isProvisioningCapableExecutionTool(
  tool: ExecutionToolPlugin
): tool is ExecutionToolPlugin & ProvisioningCapableExecutionTool {
  return (
    "listProvisioningCapabilities" in tool &&
    typeof tool.listProvisioningCapabilities === "function"
  );
}

getAllProvisioningCapabilities(): ProvisioningCapabilitySource[] {
  const result: ProvisioningCapabilitySource[] = [];
  for (const [name, tool] of this.executionTools) {
    if (this.isProvisioningCapableExecutionTool(tool)) {
      try {
        const capabilities = tool.listProvisioningCapabilities();
        if (capabilities.length > 0) {
          result.push({
            source: name,
```
```typescript
// Set source (singular) to the primary source for backward compatibility
// This ensures code that reads node.source still works correctly
linkedNode.source = linkedNode.sources[0];
```
Setting `linkedNode.source` to `linkedNode.sources[0]` can be order-dependent (based on discovery/iteration order), which may result in an inconsistent "primary source" selection across runs and potentially incorrect behavior for code paths that still rely on `node.source`. To make this deterministic, pick the primary source using a consistent rule (e.g., highest configured plugin priority, or a stable predefined ordering), and then set `linkedNode.source` from that chosen source.
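One possible shape for such a deterministic rule, purely as an illustration (the priority list and function name are assumptions, not code from the PR):

```typescript
// Assumed stable ordering of integration sources; adjust to match the
// actual plugin priorities configured in the application.
const SOURCE_PRIORITY = ["bolt", "ansible", "puppetdb", "proxmox", "aws"];

function pickPrimarySource(sources: string[]): string {
  const ranked = [...sources].sort((a, b) => {
    const ia = SOURCE_PRIORITY.indexOf(a);
    const ib = SOURCE_PRIORITY.indexOf(b);
    // Unknown sources sort after known ones; ties break alphabetically
    // so the result never depends on discovery/iteration order.
    return (
      (ia === -1 ? Infinity : ia) - (ib === -1 ? Infinity : ib) ||
      a.localeCompare(b)
    );
  });
  return ranked[0];
}
```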
Pull request overview
Copilot reviewed 82 out of 303 changed files in this pull request and generated 5 comments.
Comments suppressed due to low confidence (1)
backend/src/integrations/IntegrationManager.ts:1
- Using `Promise.race` for timeouts here can create unhandled rejections: if `timeoutPromise` wins, the `Promise.all([source.getInventory(), ...])` continues running in the background, and if `getInventory()` later rejects it may surface as an unhandled rejection (and still consume resources). Consider wrapping each source call in a timeout helper that attaches a `.catch(...)` to the losing branch, or use `AbortController`-based cancellation (if supported by the integrations) so timed-out work is actually stopped and handled.
```typescript
// Get hosts (direct members) and prefix with source
const hosts = Array.isArray(group.hosts) ? group.hosts.map((h: string) => `ansible:${h}`) : [];
```
This maps every `group.hosts` entry into a string template without validating the runtime type. If the Ansible inventory contains non-string host entries (or mixed types), this will produce IDs like `ansible:[object Object]`, breaking node lookups/linking. Filter to `typeof h === "string"` (and consider trimming and ignoring empty strings) before prefixing.
Suggested change:

```typescript
const hosts = Array.isArray(group.hosts)
  ? group.hosts
      .filter((h: unknown): h is string => typeof h === "string")
      .map((h) => h.trim())
      .filter((h) => h.length > 0)
      .map((h) => `ansible:${h}`)
  : [];
```
```typescript
sources: ["ansible"],
linked: false,
nodes: hosts,
```
```typescript
// Fetch nodes and groups in parallel with per-source timeout
// Prevents a single slow source from blocking the entire inventory
const SOURCE_TIMEOUT_MS = 15_000;
```
Using `Promise.race` for timeouts here can create unhandled rejections: if `timeoutPromise` wins, the `Promise.all([source.getInventory(), ...])` continues running in the background, and if `getInventory()` later rejects it may surface as an unhandled rejection (and still consume resources). Consider wrapping each source call in a timeout helper that attaches a `.catch(...)` to the losing branch, or use `AbortController`-based cancellation (if supported by the integrations) so timed-out work is actually stopped and handled.
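A minimal sketch of such a timeout helper (names are illustrative, not taken from the PR):

```typescript
// Races `work` against a timer; attaches a no-op catch to `work` so that
// if the timeout wins and `work` later rejects, the rejection is never
// reported as unhandled.
async function withTimeout<T>(
  work: Promise<T>,
  ms: number,
  label: string
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${String(ms)}ms`)),
      ms
    );
  });
  // The losing branch keeps running in the background; swallow its
  // eventual rejection (log it elsewhere if needed).
  work.catch(() => undefined);
  try {
    return await Promise.race([work, timeout]);
  } finally {
    clearTimeout(timer);
  }
}
```

Unlike a bare `Promise.race`, this doesn't stop the timed-out work; true cancellation would still need `AbortController` support in the integrations.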
```typescript
  await this.execute("COMMIT");
  this._inTransaction = false;
}

async rollback(): Promise<void> {
  await this.execute("ROLLBACK");
  this._inTransaction = false;
```
If `COMMIT`/`ROLLBACK` fails (e.g., a connection issue), `_inTransaction` will remain `true`, leaving the adapter in an inconsistent state and potentially blocking future transactions. Set `_inTransaction = false` in a `finally` block (and consider surfacing a structured DB error type for these cases, similar to query/connection failures).
Suggested change:

```typescript
  try {
    await this.execute("COMMIT");
  } finally {
    this._inTransaction = false;
  }
}

async rollback(): Promise<void> {
  try {
    await this.execute("ROLLBACK");
  } finally {
    this._inTransaction = false;
  }
}
```
```typescript
// Ensure database directory exists (for SQLite)
const dbDir = dirname(this.databasePath);
if (!existsSync(dbDir)) {
  mkdirSync(dbDir, { recursive: true });
}

// Create adapter via factory
this.adapter = await createDatabaseAdapter({ databasePath: this.databasePath });
await this.adapter.initialize();
```
Directory creation is now duplicated (`SQLiteAdapter.initialize` already creates the parent directory when appropriate). More importantly, when running with `DB_TYPE=postgres`, `DATABASE_PATH` may not represent a filesystem path, and `dirname(...)`/`mkdirSync(...)` can behave unexpectedly or fail. Prefer moving the directory-creation responsibility fully into `SQLiteAdapter.initialize()` and only doing it when the selected dialect is SQLite.
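The suggested separation could look like this sketch (the function name and dialect handling are assumptions, not code from the PR):

```typescript
import { dirname } from "node:path";
import { existsSync, mkdirSync } from "node:fs";

// Only touch the filesystem when the chosen adapter is SQLite and the
// path is a real file (not the in-memory database).
function ensureSqliteDirectory(
  dialect: "sqlite" | "postgres",
  databasePath: string
): void {
  if (dialect !== "sqlite" || databasePath === ":memory:") return;
  const dbDir = dirname(databasePath);
  if (!existsSync(dbDir)) {
    mkdirSync(dbDir, { recursive: true });
  }
}
```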
…a file

- Update COPY instruction to include all database files and migrations
- Change from copying only schema.sql to copying entire src/database/ directory
- Ensures future database-related files are automatically included in builds
- Improves maintainability by avoiding need to update Dockerfile when new database files are added
…tern

- Add new Proxmox integration with ProxmoxClient, ProxmoxService, and ProxmoxIntegration classes
- Implement Proxmox API client with authentication and resource management capabilities
- Add comprehensive test coverage for Proxmox integration and service layer
- Update IntegrationManager to register and manage Proxmox integration
- Add dedicated Proxmox routes handler for API endpoints
- Update integration types to include Proxmox configuration schema
- Refactor ConfigService and schema to support Proxmox settings
- Update server configuration to include Proxmox routes
- Add Kiro specification files for puppet-pabawi refactoring workflow
- Update vitest configuration for improved test execution
- Improves infrastructure flexibility by adding virtualization platform support alongside existing integrations
…ycle management

- Add ProvisionPage with dynamic form generation for VM and LXC creation
- Add ManageTab component for node lifecycle actions (start, stop, reboot, destroy)
- Add ProxmoxProvisionForm and ProxmoxSetupGuide components for integration setup
- Add formGenerator utility for dynamic form rendering based on capability metadata
- Add permissions system for RBAC-aware UI element visibility
- Add comprehensive validation and error handling for provisioning operations
- Add test utilities and generators for provisioning-related tests
- Add documentation for Proxmox setup, provisioning, permissions, and management workflows
- Add Kiro specification files for Proxmox frontend UI and integration features
- Update Navigation component to include new Provision page route
- Update IntegrationSetupPage to support Proxmox configuration
- Update API client with provisioning endpoints and type definitions
- Update package.json with required dependencies
- Move 7 completed task documents from .kiro/todo to .kiro/done directory
- Add comprehensive REMAINING_TODOS_REPORT.md with prioritized task breakdown
- Include test failure analysis, RBAC issues, and environment configuration items
- Add SQLite test database temporary files (test-migration.db-shm, test-migration.db-wal)
- Update frontend logger with minor improvements
- Consolidate task tracking and provide clear roadmap for remaining work
- Add getNodes() method to retrieve PVE cluster nodes with status and resource info
- Add getNextVMID() method to fetch next available VM ID from cluster
- Add getISOImages() method to list ISO images available on node storage
- Add getTemplates() method to list OS templates available on node storage
- Implement caching for node and resource queries (60-120 second TTL)
- Add corresponding integration layer methods to expose Proxmox service capabilities
- Update frontend navigation and routing to support new provisioning workflows
- Enhance ProxmoxProvisionForm with node selection and resource discovery UI
- Update API client and type definitions for provisioning operations
- Improve error handling and logging across Proxmox integration layer
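The 60-120 second TTL caching mentioned above could be sketched roughly like this (purely illustrative; the PR's actual cache implementation is not shown here, and the injectable clock exists only to make the sketch testable):

```typescript
// Minimal time-to-live cache: entries expire ttlMs after being set.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now // injectable clock for testing
  ) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}
```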
- Update foundFilter initial state from 'all' to 'found' for better UX
- Aligns with filterMode default behavior to show relevant data by default
- Reduces noise by filtering out not-found entries on initial load
…ved API handling

- Fix ProxmoxClient to use form-urlencoded encoding for POST/PUT/DELETE requests instead of JSON, matching Proxmox API requirements
- Add detailed error messages in API responses by including response body text for better diagnostics
- Add getStorages() method to ProxmoxService and ProxmoxIntegration to query available storage devices on nodes with optional content type filtering
- Add getNetworkBridges() method to ProxmoxService and ProxmoxIntegration to query network interfaces on nodes with bridge type filtering
- Implement caching for both storage and network queries with 120-second TTL to reduce API calls
- Update ProxmoxProvisionForm frontend component to use new storage and network discovery endpoints
- Extend provisioning types to support storage and network configuration options
- Update API client to expose new storage and network discovery endpoints
… and code quality improvements

Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
- DatabaseAdapter interface with query, execute, transaction support
- SQLiteAdapter: full implementation with WAL mode, foreign keys
- PostgresAdapter: full implementation with pg pool, transaction client
- AdapterFactory: creates correct adapter based on DB_TYPE env var
- Error types: DatabaseQueryError, DatabaseConnectionError
- Tests for all three components (39 tests passing)
Backend:
- AWS plugin with EC2 inventory, provisioning, lifecycle, health check
- Journal service with timeline aggregation and note support
- Integration config service with AES-256-GCM encryption and merge
- Proxmox VM/LXC compute type routing and separate create methods
- API routes for journal, integration config, and AWS endpoints
- ConfigService and schema updated for AWS env vars
- Database migrations for journal_entries and integration_configs

Frontend:
- AWS provisioning form and setup guide
- Proxmox VM and LXC provision forms (split from single form)
- Journal timeline component with note form
- Integration config management page
- RBAC UI updated for new permission types
- Navigation and routing updates

Fixes:
- Markdown table formatting in docs/integrations/aws.md
- Allowlisted example AWS keys in AWSSetupGuide.svelte
- Updated .secrets.baseline
…sment

- Add Azure support design and requirements specifications
- Add ManageTab lifecycle flow analysis documenting end-to-end error tracing
- Add missing lifecycle actions bugfix documentation and Kiro config
- Add workspace analysis summary and product/tech steering documents
- Update ProxmoxClient with improved integration handling
- Update inventory routes with enhanced node lifecycle actions support
- Update ManageTab component with better state management
- Update API client with refined lifecycle actions fetching
- Update NodeDetailPage with improved node detail rendering
- Add security assessment documentation for v0.10.0
- Add ALLOW_DESTRUCTIVE_PROVISIONING environment variable to control whether destructive provisioning actions (destroy VM/LXC, terminate EC2) are allowed
- Add ProvisioningConfig schema and provisioning safety configuration to ConfigService
- Add isDestructiveProvisioningAllowed() method to ConfigService for checking if destructive actions are enabled
- Add /api/config/provisioning endpoint to expose provisioning safety settings to frontend
- Add allowDestructiveActions guard to AWS EC2 terminate endpoint, returning 403 DESTRUCTIVE_ACTION_DISABLED when disabled
- Add allowDestructiveActions guard to Proxmox destroy endpoints, returning 403 when disabled
- Pass provisioning safety options through integration routers (AWS, Proxmox) from main integrations router
- Update ManageTab.svelte frontend to respect provisioning safety configuration
- Update configuration documentation and provisioning guide with safety settings
- Update setup.sh to include provisioning safety configuration
- Defaults to false (destructive actions disabled) for safer out-of-box experience
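A framework-free sketch of the safety behavior this commit describes; only `ALLOW_DESTRUCTIVE_PROVISIONING` and the `DESTRUCTIVE_ACTION_DISABLED` code come from the commit message, everything else is an illustrative assumption:

```typescript
interface GuardResult {
  allowed: boolean;
  status?: number;
  code?: string;
}

// Defaults to false (destructive actions disabled) unless the env var is
// explicitly set to "true" -- the safer out-of-box behavior described above.
function isDestructiveProvisioningAllowed(
  env: Record<string, string | undefined>
): boolean {
  return env.ALLOW_DESTRUCTIVE_PROVISIONING === "true";
}

// Guard applied before destroy/terminate endpoints: deny with 403 and a
// machine-readable code when destructive actions are disabled.
function guardDestructiveAction(allowDestructiveActions: boolean): GuardResult {
  if (!allowDestructiveActions) {
    return { allowed: false, status: 403, code: "DESTRUCTIVE_ACTION_DISABLED" };
  }
  return { allowed: true };
}
```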
- Add AWS_REGIONS configuration to support querying multiple regions via JSON array or comma-separated string
- Update ConfigService to parse and validate multi-region configuration
- Add regions field to AWSConfigSchema for type safety
- Improve AWS credential validation to support default credential chain (env vars, ~/.aws/credentials, instance profile)
- Remove requirement for explicit credentials or profile when using default chain
- Update AWSPlugin health check to include regions in diagnostic output
- Fix NodeLinkingService to skip AWS URIs when extracting hostname identifiers
- Add source field to linked nodes for backward compatibility with code reading node.source
- Update .env.example with comprehensive AWS and Proxmox configuration documentation
- Clean up test database temporary files (test-migration.db-shm, test-migration.db-wal)
- Update integration and performance tests to reflect configuration changes
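Accepting `AWS_REGIONS` as either a JSON array or a comma-separated string, as described above, might be parsed along these lines (a sketch; the function name and error message are assumptions):

```typescript
// Accepts '["eu-west-1","us-east-1"]' (JSON) or "eu-west-1, us-east-1" (CSV).
function parseRegions(raw: string): string[] {
  const trimmed = raw.trim();
  if (trimmed.startsWith("[")) {
    const parsed: unknown = JSON.parse(trimmed);
    if (!Array.isArray(parsed) || !parsed.every((r) => typeof r === "string")) {
      throw new Error("AWS_REGIONS JSON must be an array of strings");
    }
    return parsed;
  }
  // Comma-separated fallback: trim entries and drop empties
  return trimmed
    .split(",")
    .map((r) => r.trim())
    .filter((r) => r.length > 0);
}
```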
- Remove source-specific prefixes (ansible:, bolt:) from node identifiers in AnsibleService and BoltPlugin
- Replace node.id with node.name in AWS region, VPC, and tag grouping methods for consistency
- Update Proxmox service to use node.name instead of node.id across all grouping operations
- Add dual-mode AMI input to AWSProvisionForm with search and direct ID entry options
- Implement switchAMIMode and onDirectAMIInput functions for flexible AMI selection
- Remove redundant describeAllInstances method from AWSService
- Ensures consistent node naming convention across all integration services and improves user experience with flexible AMI input methods
…lution

- Add comprehensive AWS instance metadata collection including CPU options, network interfaces, and hardware details
- Implement node ID resolution in AWSPlugin.getNodeFacts() to support both aws:region:instanceId format and node name/id lookup
- Expand command whitelist with system inspection commands and change matching mode from exact to prefix
- Add JournalCollectors service for centralized journal event collection
- Update IntegrationManager.getLinkedInventory() to support cache control parameter
- Remove terminate_instance action from AWS provisioning operations for safety
- Fix AWS availability zone regex pattern to correctly extract region identifier
- Enhance network interface data collection with DNS names and detailed NIC information
- Add instance name tag extraction and platform metadata to AWS facts
- Update frontend components to reflect improved inventory and journal data handling
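The dual-format node ID resolution described above (`aws:region:instanceId` versus a plain name/id) could be sketched as follows; the regex and function name are assumptions, not the PR's actual implementation:

```typescript
// Returns the parsed parts for "aws:<region>:<instanceId>" IDs,
// or null so the caller can fall back to name/id lookup.
function parseAwsNodeId(
  id: string
): { region: string; instanceId: string } | null {
  const match = /^aws:([a-z0-9-]+):(i-[0-9a-f]+)$/.exec(id);
  return match ? { region: match[1], instanceId: match[2] } : null;
}
```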
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Alessandro Franceschi <al@example42.com>
- Add KiroGraph sync hooks for file create, delete, and save events
- Configure MCP server with KiroGraph tools and auto-approval settings
- Add KiroGraph steering guide with tool usage documentation and workflow
- Initialize KiroGraph configuration with language support and exclusion patterns
- Enable semantic code graph indexing for improved exploration capabilities
…eaders/encoding

Agent-Logs-Url: https://github.com/example42/pabawi/sessions/21235efe-0ec2-4f7c-b639-3c95b5857cf2
Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
- Delete orphaned database schema files (audit-schema.sql, migrations.sql, rbac-schema.sql)
- Rename initial migration to 000_initial_schema.sql for clarity
- Update DatabaseService to load only numbered migrations instead of base schema files
- Move all schema definitions to the migrations directory as single source of truth
- Add documentation in copilot-instructions.md explaining migration-first approach
- Create internal task documentation for database schema cleanup process
- Eliminate duplicate table definitions that existed across base schemas and migrations
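The migration-first loading rule can be sketched as a filename filter. This is a hedged sketch; the real DatabaseService presumably reads the migrations directory and applies matching files in order.

```typescript
// Hypothetical sketch: keep only numbered migration files such as
// 000_initial_schema.sql, ordered by their zero-padded numeric prefix.
// Legacy base-schema files (e.g. rbac-schema.sql) no longer match.
function selectMigrations(files: string[]): string[] {
  return files
    .filter((name) => /^\d{3}_.+\.sql$/.test(name))
    .sort();
}

console.log(
  selectMigrations([
    "rbac-schema.sql",
    "010_drop_integration_configs.sql",
    "000_initial_schema.sql",
  ])
);
// → ["000_initial_schema.sql", "010_drop_integration_configs.sql"]
```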
…a file

- Update COPY instruction to include all database files and migrations
- Change from copying only schema.sql to copying the entire src/database/ directory
- Ensures future database-related files are automatically included in builds
- Improves maintainability by removing the need to update the Dockerfile when new database files are added
Co-authored-by: alvagante <283804+alvagante@users.noreply.github.com>
- 1.1 Delete IntegrationConfigService source, types, and tests
- 1.2 Delete IntegrationConfigRouter and route tests
- 1.3 Create migration 010 to drop integration_configs table
- 1.4 Remove IntegrationConfigService from server.ts; plugins use ConfigService directly
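With the DB-backed config table gone, a plugin reads its settings straight from the environment. A minimal sketch follows; the variable names and shape are illustrative, not the project's actual schema.

```typescript
// Hypothetical sketch: read integration settings from the environment
// instead of a database table. Variable names are illustrative.
interface ProxmoxSettings {
  host: string;
  tokenId: string;
}

function loadProxmoxSettings(
  env: Record<string, string | undefined>
): ProxmoxSettings | null {
  const host = env.PROXMOX_HOST;
  const tokenId = env.PROXMOX_TOKEN_ID;
  if (!host || !tokenId) return null; // integration stays disabled
  return { host, tokenId };
}

console.log(loadProxmoxSettings({ PROXMOX_HOST: "pve.local", PROXMOX_TOKEN_ID: "user@pam!token" }));
console.log(loadProxmoxSettings({})); // null
```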
…shboard

- 2.1 Rewrite IntegrationConfigPage.svelte as status dashboard with color-coded indicators
- 2.2 Refactor test connection API functions to use .env-sourced config (no body)
- 3.1 Refactor ProxmoxSetupGuide to pure .env snippet generator
- 3.2 Refactor AWSSetupGuide to pure .env snippet generator
- 3.3 Refactor remaining guides (Bolt, PuppetDB, SSH, Hiera, Puppetserver, Ansible)
…fix TS errors

- 4.1 Remove saveIntegrationConfig, getIntegrationConfig, getIntegrationConfigs, deleteIntegrationConfig, saveProxmoxConfig, saveAWSConfig, IntegrationConfigRecord from api.ts
- 4.2 Fix all TypeScript compilation errors across frontend (33 errors in 7 files)
- 5.1 Fix migration test to expect 11 migrations (including 010)
- 5.2 Fix all lint and type errors across both workspaces (274+ errors resolved)
…d components

- 6.1 Add ConfigService property tests (Property 1: env parsing round-trip)
- 6.2 Add IntegrationManager property tests (Property 8: graceful degradation)
- 6.3 Add Integration Status Dashboard tests (Properties 2, 3, 4)
- 6.4 Add Env Snippet Wizard tests (Properties 5, 6, 7)
- 6.5 Verify obsolete IntegrationConfigService tests removed

Note: pre-existing ProxmoxService.test.ts failure unrelated to this work
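The round-trip property in 6.1 can be sketched without a property-testing framework: serializing a config to .env lines and parsing it back should be the identity. This is a hedged sketch; the real ConfigService parser presumably handles quoting, comments, and typed values.

```typescript
// Hedged sketch of the env parsing round-trip property.
function toEnvLines(cfg: Record<string, string>): string {
  return Object.entries(cfg)
    .map(([key, value]) => `${key}=${value}`)
    .join("\n");
}

function fromEnvLines(text: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of text.split("\n")) {
    const eq = line.indexOf("=");
    if (eq > 0) out[line.slice(0, eq)] = line.slice(eq + 1);
  }
  return out;
}

const sample = { PABAWI_PORT: "3000", AWS_REGION: "us-east-1" };
console.log(JSON.stringify(fromEnvLines(toEnvLines(sample))) === JSON.stringify(sample)); // true
```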
- 7.1 Update README.md with v1.0.0 version, .env-only config, new features
- 7.2 Update configuration, API, and architecture docs
- 7.3 Update integration setup guides and Docker deployment docs
- 7.4 Add v1.0.0 CHANGELOG entry with breaking changes
- 8.1 Verify all IntegrationConfigService references removed from source code
- 8.1 Clean stale entries from .secrets.baseline
- 8.2 Audit dependencies: all production deps actively used
- 9.1 Update package.json (root, backend, frontend) to 1.0.0
- 9.2 Update product.md steering, health endpoint, verify docker-compose
- 10.1 Fix trailing whitespace, detect-secrets false positives, ESLint warnings
- All pre-commit hooks pass (except hadolint-docker, which requires Docker Desktop)
- 11.1 Update Dockerfile version labels and descriptions
- 11.2 Align integration ENV defaults across all Dockerfiles
- 11.3 Fix migration copy in alpine/ubuntu Dockerfiles
- 11.4 Add backend-deps stage for production deps in alpine/ubuntu
- 11.5 Rewrite .env.docker with comprehensive integration examples
- 11.6 Update docker-compose.yml with volume mounts and health check
- 11.7 Standardize docker-entrypoint.sh across all Dockerfiles
- 11.8 Update Docker deployment documentation
- 12.1 Sync .env.example with ConfigService parsing (add missing vars, fix gaps)
- 12.2 Fix setup guide env var mismatches (BoltSetupGuide, PuppetdbSetupGuide)
- 12.3 Verify .env.docker matches ConfigService and Docker ENV defaults
- 12.4 Reconcile Dockerfile ENV defaults with ConfigService Zod schema
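Task 12.1's sync check can be sketched as a key diff between .env.example and the config schema. This helper is hypothetical; the actual reconciliation may have been manual or script-assisted.

```typescript
// Hypothetical sketch: compare the variable names in .env.example with the
// keys the config schema accepts, reporting both directions of drift.
function diffEnvKeys(exampleKeys: string[], schemaKeys: string[]) {
  const schema = new Set(schemaKeys);
  const example = new Set(exampleKeys);
  return {
    missingFromExample: schemaKeys.filter((k) => !example.has(k)),
    unknownToSchema: exampleKeys.filter((k) => !schema.has(k)),
  };
}

console.log(diffEnvKeys(["PORT", "OLD_VAR"], ["PORT", "NEW_VAR"]));
// → { missingFromExample: ["NEW_VAR"], unknownToSchema: ["OLD_VAR"] }
```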
- 13.1 npm test: 2893 passed (1 pre-existing ProxmoxService failure)
- 13.1 npm run lint: zero errors
- 13.1 tsc --noEmit: zero type errors
- 13.1 pre-commit run --all-files: all hooks pass
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Alessandro Franceschi <al@example42.com>