
Conversation


@jokob-sk jokob-sk commented Jan 19, 2026

Authoritative overwrite checks should be implemented inside SQL UPDATE conditions wherever possible, avoiding per-row Python logic.

A plugin may overwrite a field only when all of the following hold:

  • the plugin provides a non-empty value (empty values never trigger an overwrite), AND
  • the field's Source is neither USER nor LOCKED, AND
  • the field is empty, OR the plugin's SET_ALWAYS / SET_EMPTY rules allow it.
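The rules above map naturally onto a single UPDATE statement. A minimal sketch, assuming an illustrative one-field schema (column names and the SET_ALWAYS flag handling are simplifications, not the PR's actual code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Devices (devMac TEXT PRIMARY KEY, devName TEXT, devNameSource TEXT)"
)
conn.execute("INSERT INTO Devices VALUES ('AA:BB:CC:DD:EE:FF', 'printer', 'USER')")

def plugin_update_name(conn, mac, new_value, set_always):
    """Apply the overwrite rules inside the UPDATE itself, no per-row Python logic."""
    if not new_value:  # plugins must provide a non-empty value
        return 0
    cur = conn.execute(
        """UPDATE Devices
           SET devName = ?, devNameSource = 'PLUGIN'
           WHERE devMac = ?
             AND devNameSource NOT IN ('USER', 'LOCKED')     -- protected sources
             AND (? = 1 OR devName IS NULL OR devName = '')  -- SET_ALWAYS, or field empty
        """,
        (new_value, mac, 1 if set_always else 0),
    )
    return cur.rowcount  # 0 means the overwrite was denied

# devNameSource is 'USER', so the overwrite is denied
denied = plugin_update_name(conn, "AA:BB:CC:DD:EE:FF", "router", set_always=True)
```

Because the conditions live in the WHERE clause, the check and the write are one atomic statement; rowcount tells the caller whether the overwrite happened.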

Summary by CodeRabbit

  • New Features

    • Field lock/unlock via UI and API; per-field source tracking; new primary IPv4/IPv6, VLAN and force-status device fields; improved IP handling that preserves IPv4/IPv6 independently.
  • Documentation

    • Added comprehensive developer docs and a quick-reference for field locking and plugin authorization.
  • Localization

    • New UI strings for save outcomes and field-lock tooltips across many locales.
  • Tests

    • Extensive unit and integration tests for field-locking, IP handling, and transactional atomicity.
  • Style

    • New UI styles for lock buttons and source badges.



coderabbitai bot commented Jan 19, 2026

📝 Walkthrough

Walkthrough

Adds per-field source tracking and locking: schema columns for primary IPs, VLAN, force status and many *Source fields; authoritative update logic and DB handlers; lock/unlock API plus GraphQL/OpenAPI schema updates; frontend lock UI, plugin config options (SET_ALWAYS/SET_EMPTY), IP primary handling, tests, and docs.

Changes

Cohort / File(s) Summary
Database schema & upgrade
back/app.sql, server/database.py, server/db/db_upgrade.py
Added devPrimaryIPv4, devPrimaryIPv6, devVlan, devForceStatus, many *Source TEXT columns to Devices; added cur_devVlan to CurrentScan; introduced EXPECTED_DEVICES_COLUMNS and updated ensure_column logic.
Authoritative handler
server/db/authoritative_handler.py
New module: FIELD_SOURCE_MAP, TRACKED_FIELDS, plugin settings loader, can_overwrite_field(), SQL helper, source-determination, enforce_source_on_user_update(), get_locked_field_overrides(), lock_field(), unlock_field().
API, schemas & GraphQL
server/api_server/api_server_start.py, server/api_server/openapi/schemas.py, server/api_server/graphql_endpoint.py
New POST /device/{mac}/field/lock endpoint and LockDeviceFieldRequest model; OpenAPI and GraphQL Device types include new primary IP, VLAN, forceStatus, and *Source fields.
Device model & scan handling
server/models/device_instance.py, server/scan/device_handling.py, server/scan/session_events.py, server/const.py
setDeviceData integrates source tracking; added lockDeviceField/unlockDeviceField; scan pipeline now enforces per-plugin overwrite rules, computes/stores primary IPv4/IPv6, per-field sources, validates IPs, and applies devForceStatus logic; session events filter invalid/duplicate IP events.
Frontend (UI & assets)
front/deviceDetailsEdit.php, front/css/app.css
Added lock/unlock button, source badges, toggleFieldLock() client handler (duplicate insertion noted), trackedFields mapping, and CSS for lock UI; switched some messages to locale strings.
Plugin configs & templates
front/plugins/*/config.json, front/plugins/__template/config.json, front/plugins/newdev_template/config.json
Added SET_ALWAYS and SET_EMPTY settings across many plugin configs and template; newdev_template adds devPrimaryIPv4/devPrimaryIPv6/devVlan/devForceStatus UI settings. Some files include duplicated blocks and minor typos in elementOptions (ordeable).
Localization
front/php/templates/language/*.json (en_us + many locales)
Added Device_Save_* and FieldLock_* keys across locales (en_us populated; others mostly placeholders); minor en_us label updates.
Documentation & navigation
docs/API_DEVICE_FIELD_LOCK.md, docs/DEVICE_MANAGEMENT.md, docs/PLUGINS_DEV_CONFIG.md, docs/QUICK_REFERENCE_FIELD_LOCK.md, mkdocs.yml
New docs for API, plugin authorization, quick reference, and device management; added navigation entries in mkdocs.yml.
Plugin helper
front/plugins/plugin_helper.py
normalize_mac() special-cases "internet" (case-insensitive) returning "Internet" early; otherwise unchanged.
Tests
test/authoritative_fields/*, test/test_device_atomicity.py, test/test_plugin_helper.py
Extensive unit/integration tests for overwrite rules, lock API, IP handling, force-status, atomic device operations, and normalize_mac behavior; new fixtures and in-memory DB schemas.
CI workflow
.github/workflows/docker_dev_unsafe.yml
New GitHub Actions workflow for multi-arch dev image builds (guarded, non-production tagging).

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant Frontend as Frontend UI
    participant API as API Server
    participant Handler as Authoritative Handler
    participant DB as Database
    participant Plugin as Plugin

    rect rgba(100,150,200,0.5)
    Note over User,Frontend: User locks a field
    User->>Frontend: Click lock button
    Frontend->>API: POST /device/{mac}/field/lock {fieldName, lock:true}
    API->>Handler: lock_field(mac, fieldName)
    Handler->>DB: UPDATE Devices SET <field>Source='LOCKED'
    DB-->>Handler: success
    Handler-->>API: {fieldName, locked:true}
    API-->>Frontend: 200 OK (locked)
    Frontend->>Frontend: Update UI (locked badge)
    end

    rect rgba(150,100,200,0.5)
    Note over Plugin,Handler: Plugin attempts to update a field
    Plugin->>Handler: can_overwrite_field(field, current_source, plugin_prefix, settings, value)
    Handler-->>Plugin: allow/deny
    Plugin->>DB: (only if allowed) write field value
    end

    rect rgba(200,150,100,0.5)
    Note over User,Frontend: User unlocks a field
    User->>Frontend: Click unlock button
    Frontend->>API: POST /device/{mac}/field/lock {fieldName, lock:false}
    API->>Handler: unlock_field(mac, fieldName)
    Handler->>DB: UPDATE Devices SET <field>Source=''
    DB-->>Handler: success
    Handler-->>API: {fieldName, locked:false}
    API-->>Frontend: 200 OK (unlocked)
    Frontend->>Frontend: Update UI (remove badge)
    end
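The lock and unlock exchanges in the diagram correspond to a request/response pair roughly like the following (the payload field names come from the diagram; the exact response shape is an assumption):

```
POST /device/AA:BB:CC:DD:EE:FF/field/lock
Content-Type: application/json

{"fieldName": "devName", "lock": true}

--> 200 OK
{"fieldName": "devName", "locked": true}

POST /device/AA:BB:CC:DD:EE:FF/field/lock
Content-Type: application/json

{"fieldName": "devName", "lock": false}

--> 200 OK
{"fieldName": "devName", "locked": false}
```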

Poem

🐰
I dug a burrow, found a key,
Locked a field so plugins flee,
Sources labeled, tidy and bright,
Data stays put through day and night,
Hooray — the rabbit guards it right!

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: Authoritative plugin fields' accurately summarizes the main feature addition: implementing authoritative field overwrite logic for device fields managed by plugins.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 81.19%, above the required threshold of 80.00%.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

Note

Due to the large number of review comments, only comments of Critical and Major severity were posted inline.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
server/models/device_instance.py (1)

593-609: Make the update + source enforcement atomic.

get_temp_db_connection() uses isolation_level=None, enabling SQLite autocommit mode. In autocommit, cur.execute(sql, values) at line 595 commits immediately before enforce_source_on_user_update() runs at line 604. If enforcement fails, the attempted conn.rollback() cannot undo the already-committed UPDATE, leaving device data and source tracking out of sync. Wrap both the UPDATE and enforcement in an explicit transaction using BEGIN IMMEDIATE to ensure both succeed or both fail atomically.

🧩 Proposed fix
-            cur = conn.cursor()
-            cur.execute(sql, values)
-            conn.commit()
+            conn.execute("BEGIN IMMEDIATE")
+            cur = conn.cursor()
+            cur.execute(sql, values)
 
             # Enforce source tracking on user updates
             # User-updated fields should have their *Source set to "USER"
             user_updated_fields = {k: v for k, v in data.items() if k in FIELD_SOURCE_MAP}
             if user_updated_fields and not data.get("createNew", False):
                 try:
                     enforce_source_on_user_update(normalized_mac, user_updated_fields, conn)
                 except Exception as e:
                     mylog("none", [f"[DeviceInstance] Failed to enforce source tracking: {e}"])
                     conn.rollback()
                     conn.close()
                     return {"success": False, "error": f"Source tracking failed: {e}"}
+
+            conn.commit()
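The failure mode is easy to reproduce with the stdlib sqlite3 module. A minimal sketch (table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE Devices (devMac TEXT, devName TEXT)")
conn.execute("INSERT INTO Devices VALUES ('AA:BB:CC:DD:EE:FF', 'old')")

# Without an explicit transaction, the UPDATE commits instantly;
# the later rollback() has nothing to undo.
conn.execute("UPDATE Devices SET devName = 'new'")
conn.rollback()
after_autocommit = conn.execute("SELECT devName FROM Devices").fetchone()[0]

# With BEGIN IMMEDIATE, the UPDATE and any follow-up enforcement
# fail or succeed together.
conn.execute("BEGIN IMMEDIATE")
conn.execute("UPDATE Devices SET devName = 'newer'")
conn.rollback()  # now the rollback really reverts the write
after_explicit = conn.execute("SELECT devName FROM Devices").fetchone()[0]
```

In the first half the value stays 'new' despite the rollback; in the second half 'newer' is reverted back to 'new', which is exactly the behavior the proposed fix relies on.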
server/api_server/api_server_start.py (2)

65-76: Fix source-field resolution; concatenation breaks several columns.

field_name + "Source" won’t match devLastIpSource, devFqdnSource, devSsidSource, or devParentMacSource. Use FIELD_SOURCE_MAP to avoid mismatches and keep the allowlist in sync.

🐛 Proposed fix
-from .openapi.schemas import (  # noqa: E402 [flake8 lint suppression]
+from .openapi.schemas import (  # noqa: E402 [flake8 lint suppression]
     DeviceSearchRequest, DeviceSearchResponse,
@@
     RecentEventsRequest, SetDeviceAliasRequest
 )
+from db.authoritative_handler import FIELD_SOURCE_MAP  # noqa: E402 [flake8 lint suppression]
@@
-    source_field = field_name + "Source"
-    allowed_tracked_fields = {
-        "devMac", "devName", "devLastIP", "devVendor", "devFQDN", 
-        "devSSID", "devParentMAC", "devParentPort", "devParentRelType", "devVlan"
-    }
-    if field_name not in allowed_tracked_fields:
+    source_field = FIELD_SOURCE_MAP.get(field_name)
+    if not source_field:
         return jsonify({"success": False, "error": f"Field '{field_name}' cannot be locked"}), 400

Also applies to: 472-479
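The casing mismatch can be demonstrated in isolation (hypothetical subset; the real FIELD_SOURCE_MAP lives in server/db/authoritative_handler.py):

```python
# Hypothetical subset of the source-column mapping
FIELD_SOURCE_MAP = {
    "devName": "devNameSource",
    "devLastIP": "devLastIpSource",  # note: "Ip", not "IP"
    "devFQDN": "devFqdnSource",
    "devSSID": "devSsidSource",
}

def source_by_concat(field):
    # The naive approach the review flags
    return field + "Source"

# Every field whose real source column doesn't match simple concatenation
mismatches = [f for f, col in FIELD_SOURCE_MAP.items() if source_by_concat(f) != col]
```

Looking the column up in the map instead of concatenating also keeps the lockable-field allowlist and the source columns in a single place.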


47-47: Normalize MAC before the DB write.

This endpoint writes device data but doesn’t validate/normalize the MAC before calling updateDeviceColumn.

✅ Proposed fix
-from plugin_helper import is_mac  # noqa: E402 [flake8 lint suppression]
+from plugin_helper import is_mac, normalize_mac  # noqa: E402 [flake8 lint suppression]
@@
 def api_device_field_lock(mac, payload=None):
@@
-    device_handler = DeviceInstance()
+    normalized_mac = normalize_mac(mac)
+    if not normalized_mac:
+        return jsonify({"success": False, "error": "Invalid MAC address"}), 400
+
+    device_handler = DeviceInstance()
@@
-        result = device_handler.updateDeviceColumn(mac, source_field, new_source)
+        result = device_handler.updateDeviceColumn(normalized_mac, source_field, new_source)

As per coding guidelines, please validate MAC addresses with normalize_mac() before database writes.

Also applies to: 463-488
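A sketch of the normalize-before-write pattern, with a stand-in for normalize_mac() (the real helper lives in front/plugins/plugin_helper.py; this re-implementation assumes uppercase colon-separated output and includes the "internet" special case noted in the walkthrough):

```python
import re

def normalize_mac(mac):
    """Return a canonical MAC, 'Internet' for the pseudo-device, or '' when invalid."""
    if mac.strip().lower() == "internet":  # special case from plugin_helper
        return "Internet"
    m = re.fullmatch(r"([0-9A-Fa-f]{2})[:\-]?" * 5 + r"([0-9A-Fa-f]{2})", mac.strip())
    if not m:
        return ""  # caller should reject the write, e.g. with a 400 response
    return ":".join(pair.upper() for pair in m.groups())

canonical = normalize_mac("aa-bb-cc-dd-ee-ff")
```

Returning '' for invalid input lets the endpoint bail out before any database write, which is the ordering the guideline asks for.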

🤖 Fix all issues with AI agents
In `@front/php/templates/language/sv_sv.json`:
- Around line 300-305: The Swedish translation file contains empty values for
the FieldLock_* keys (FieldLock_Error, FieldLock_Lock_Tooltip, FieldLock_Locked,
FieldLock_Source_Label, FieldLock_Unlock_Tooltip, FieldLock_Unlocked) which
results in blank UI because fallback only checks key existence; either provide
proper Swedish translations for those keys (replace empty strings with localized
text) or remove these keys from the sv_sv JSON (or remove sv_sv from the
available languages) so the app can fall back to English—update the keys in the
same JSON object where FieldLock_* are defined and ensure no empty string values
remain.
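The fallback change these instructions describe (treat empty strings like missing keys) can be sketched as follows; the project's actual lookup lives in lang.php, so this Python version is illustrative only:

```python
EN_US = {"FieldLock_Locked": "Locked", "FieldLock_Unlocked": "Unlocked"}
SV_SV = {"FieldLock_Locked": "", "FieldLock_Unlocked": "Upplåst"}

def get_string(key, locale, fallback=EN_US):
    # Fall back when the key is absent OR its value is an empty string
    value = locale.get(key, "")
    return value if value else fallback.get(key, key)

locked = get_string("FieldLock_Locked", SV_SV)      # empty -> English fallback
unlocked = get_string("FieldLock_Unlocked", SV_SV)  # translated -> Swedish
```

With an existence-only check, the empty Swedish value would win and the UI would render blank text.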

In `@front/plugins/dig_scan/config.json`:
- Around line 85-87: The JSON settings array is invalid because object entries
are missing commas between them; add a trailing comma after each settings object
where you see entries like { "function": "SET_ALWAYS", "type": ... } and the
neighboring object literals (also at the other occurrences around the sections
referenced near the second and third SET_ALWAYS blocks) so each object in the
array is separated by a comma, ensuring the arrays containing
"function":"SET_ALWAYS" and adjacent objects are properly comma-delimited.

In `@front/plugins/luci_import/config.json`:
- Around line 571-632: The JSON entries for "SET_ALWAYS" and "SET_EMPTY" are
currently placed under database_column_definitions but are settings schema
entries and should be moved into the manifest's settings array; locate the
objects with "function": "SET_ALWAYS" and "function": "SET_EMPTY" and remove
them from database_column_definitions, then re-add them into the settings array
(for example after "RUN_TIMEOUT") so the plugin reads them via
get_setting_value(); ensure their schema (type, default_value, options,
localized/name/description) is preserved exactly when relocating.

In `@front/plugins/pihole_scan/config.json`:
- Around line 219-344: There are duplicate configuration blocks for the
functions "SET_ALWAYS" and "SET_EMPTY" which creates ambiguity; remove the first
pair (the blocks whose default_value is ["devLastIP"] and [] respectively) and
keep a single authoritative "SET_ALWAYS" block with default_value
["devMac","devLastIP"] and a single "SET_EMPTY" block with default_value
["devName","devVendor"], ensuring each retained block's options include
["devMac","devLastIP","devName","devVendor"] and keep their localized
name/description entries intact.

In `@front/plugins/unifi_api_import/config.json`:
- Around line 42-44: The JSON is invalid due to stray closing tokens after the
"params" property; locate the object that contains the "params" key in
front/plugins/unifi_api_import/config.json (the entry showing `"params": []`)
and remove the extra trailing `}` and `]` shown after that property so the
surrounding object/array syntax is correct (ensure no dangling commas remain and
the enclosing object/array are properly closed).

In `@server/api_server/api_server_start.py`:
- Around line 499-501: Replace the illegal mylog("error", ...) call in the
exception handler with one of the allowed levels
(none/minimal/verbose/debug/trace) and pass the message as a list payload per
the logging contract; specifically update the catch block that references
field_name and mac to call e.g. mylog("minimal", [f"Error locking field
{field_name} for {mac}", str(e)]) and then return the same jsonify response —
ensure the first argument is one of the permitted level strings and the second
argument is a list containing the descriptive message and the error detail.

In `@server/models/device_instance.py`:
- Around line 681-705: Both lockDeviceField and unlockDeviceField must normalize
the mac and always close the DB connection; update lockDeviceField and
unlockDeviceField to call normalize_mac(mac) before using it with
lock_field/unlock_field (and still validate field_name against
FIELD_SOURCE_MAP), obtain conn via get_temp_db_connection(), run
lock_field(mac_normalized, field_name, conn) / unlock_field(mac_normalized,
field_name, conn) inside try, and move conn.close() into a finally block so the
connection is closed whether an exception occurs; preserve the same
success/error return shapes.

In `@server/scan/device_handling.py`:
- Around line 544-548: The IPv6 branch currently bypasses validation (cur_IP is
passed through when ":" in cur_IP), so add IPv6 validation consistent with IPv4
handling by reusing or mirroring the logic in format_ip_long from
server/helper.py: when cur_IP contains ":", call the IPv6 validator (e.g.,
format_ip_long or a new helper wrapper) to normalize/validate the address and
only assign primary_ipv6 if the validator returns a valid normalized value; keep
primary_ipv4 assignment unchanged and ensure invalid IPv6 values result in an
empty primary_ipv6 string (or the same error/empty behavior as invalid IPv4 via
check_IP_format).
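Python's stdlib ipaddress module covers the validation this asks for. A sketch of a validator mirroring the IPv4 path (check_IP_format and format_ip_long are the project's helpers and are not reproduced here):

```python
import ipaddress

def normalized_ipv6(value):
    """Return the compressed canonical IPv6 form, or '' if value is not valid IPv6."""
    try:
        addr = ipaddress.ip_address(value)
    except ValueError:
        return ""
    return addr.compressed if addr.version == 6 else ""

primary_ipv6 = normalized_ipv6("2001:0db8:0000:0000:0000:0000:0000:0001")
invalid = normalized_ipv6("fe80::zzzz")  # not a valid address -> ''
```

Assigning the normalized result (and '' on failure) keeps the IPv6 branch symmetric with the IPv4 check_IP_format behavior.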

In `@test/authoritative_fields/test_field_lock_scan_integration.py`:
- Around line 689-702: The test has an IndentationError because the results dict
and following for loop are over-indented; adjust indentation so the variable
results and the for mac, expected in results.items(): loop align with
surrounding test code (same block level as other setup/assert sections) and
ensure the cur.execute(...) call and its fetchone() remain inside the loop body;
update the block around results, for loop, and the assertion that references
cur.execute, devName/devVendor to match the file's surrounding indentation and
remove the extra leading spaces.
🟡 Minor comments (12)
front/php/templates/language/it_it.json-206-209 (1)

206-209: Provide Italian (or fallback) text for Device_Save_* strings.

These are empty, so success/error messages will render blank in the UI. Please add translations or copy the en_us strings as a temporary fallback.

front/php/templates/language/it_it.json-300-305 (1)

300-305: Fill FieldLock_* tooltips/status labels (currently empty).

Empty values will hide lock/unlock tooltips and status labels for Italian users. Please translate or fallback to en_us to avoid blank UI.

front/php/templates/language/pt_br.json-300-305 (1)

300-305: Missing Portuguese translations for field lock feature.

These new keys have empty placeholder values for the field locking UI. Consider adding proper translations such as:

  • FieldLock_Error: "Erro ao bloquear campo"
  • FieldLock_Lock_Tooltip: "Clique para bloquear este campo"
  • FieldLock_Locked: "Bloqueado"
  • FieldLock_Source_Label: "Fonte"
  • FieldLock_Unlock_Tooltip: "Clique para desbloquear este campo"
  • FieldLock_Unlocked: "Desbloqueado"
front/php/templates/language/pt_br.json-206-209 (1)

206-209: Missing Portuguese translations for device save messages.

These new keys have empty placeholder values. Without translations, Portuguese-speaking users will see blank or fallback text when device save operations succeed or fail. Consider adding proper translations such as:

  • Device_Save_Failed: "Falha ao salvar dispositivo"
  • Device_Save_Unauthorized: "Não autorizado a salvar dispositivo"
  • Device_Saved_Success: "Dispositivo salvo com sucesso"
  • Device_Saved_Unexpected: "Erro inesperado ao salvar dispositivo"
front/php/templates/language/pl_pl.json-300-305 (1)

300-305: Empty Polish translations for field lock feature.

The field lock UI elements lack Polish translations. Users will see empty tooltips and status labels for the new field locking functionality.

Suggested translations:

"FieldLock_Error": "Błąd blokady pola",
"FieldLock_Lock_Tooltip": "Zablokuj to pole",
"FieldLock_Locked": "Zablokowane",
"FieldLock_Source_Label": "Źródło",
"FieldLock_Unlock_Tooltip": "Odblokuj to pole",
"FieldLock_Unlocked": "Odblokowane",
front/php/templates/language/pl_pl.json-206-209 (1)

206-209: Empty Polish translations for device save messages.

These translation keys have empty placeholder values. Polish users will see empty text or fallback behavior for device save status messages until translations are provided.

Suggested translations:

"Device_Save_Failed": "Zapisanie urządzenia nie powiodło się",
"Device_Save_Unauthorized": "Brak autoryzacji do zapisania urządzenia",
"Device_Saved_Success": "Urządzenie zapisane pomyślnie",
"Device_Saved_Unexpected": "Nieoczekiwany błąd podczas zapisywania urządzenia",
front/php/templates/language/sv_sv.json-206-209 (1)

206-209: Add Swedish strings for these keys or confirm that empty values reliably fall back to the default locale.

New Device_Save_* and FieldLock_* keys have empty string values in the Swedish translation file. Without a confirmed fallback mechanism, these will display as blank UI text. Either provide Swedish translations or verify that your i18n system handles empty values by falling back to English.

docs/PLUGINS_DEV_CONFIG.md-180-259 (1)

180-259: Docs examples don’t match the config schema used in this PR.

The examples show per-field "SET_ALWAYS": "1" / "SET_EMPTY": "1", but the manifests here define SET_ALWAYS/SET_EMPTY as settings arrays of column names. This will confuse plugin authors. Consider updating the examples and decision flow to reflect the actual manifest format.
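For reference, the manifest format these plugins actually use defines SET_ALWAYS/SET_EMPTY as settings entries holding arrays of column names. An abbreviated, illustrative fragment (option lists and defaults vary per plugin; localized name/description entries omitted):

```json
{
  "settings": [
    {
      "function": "SET_ALWAYS",
      "default_value": ["devMac", "devLastIP"],
      "options": ["devMac", "devLastIP", "devName", "devVendor"]
    },
    {
      "function": "SET_EMPTY",
      "default_value": ["devName", "devVendor"],
      "options": ["devMac", "devLastIP", "devName", "devVendor"]
    }
  ]
}
```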

docs/API_DEVICE_FIELD_LOCK.md-136-148 (1)

136-148: Fix inconsistent LOCKED color description.
You describe LOCKED as orange above, but the workflow says it turns red. Please align to one color.

✏️ Suggested fix
-4. Button changes to unlock (🔓) and source indicator turns red (LOCKED)
+4. Button changes to unlock (🔓) and source indicator turns orange (LOCKED)
test/authoritative_fields/test_ip_update_logic.py-84-156 (1)

84-156: Use @pytest.mark.usefixtures for side-effect-only fixtures.

The mock_device_handling fixture is applied only for its patching side effects and never referenced in the test body. Ruff flags it as ARG001 (unused argument). Follow pytest conventions by moving it to a decorator.

🔧 Proposed fix
+@pytest.mark.usefixtures("mock_device_handling")
-def test_primary_ipv6_is_set_and_ipv4_preserved(in_memory_db, mock_device_handling):
+def test_primary_ipv6_is_set_and_ipv4_preserved(in_memory_db):
     """Setting IPv6 in CurrentScan should update devPrimaryIPv6 without changing devPrimaryIPv4."""

Also applies to: test_primary_ipv4_is_set_and_ipv6_preserved

test/authoritative_fields/FIELD_LOCK_TEST_SUMMARY.md-279-282 (1)

279-282: Align status count with “Total Tests: 33.”

The status line states “24 comprehensive tests,” which conflicts with the earlier total of 33.

📝 Proposed fix
-**Status:** ✅ 24 comprehensive tests created covering all scenarios
+**Status:** ✅ 33 comprehensive tests created covering all scenarios
front/deviceDetailsEdit.php-597-605 (1)

597-605: Source indicator not updated when unlocking a field.

When shouldLock is false (unlocking), the source indicator text and styling are not updated. The user will see stale "LOCKED" state until page refresh.

🐛 Proposed fix to update source indicator on unlock
         // Update source indicator if locked
         if (shouldLock) {
           const sourceIndicator = lockBtn.next();
           if (sourceIndicator.hasClass("input-group-addon")) {
             sourceIndicator.text("LOCKED");
             sourceIndicator.attr("class", "input-group-addon text-danger");
             sourceIndicator.attr("title", getString("FieldLock_Source_Label") + "LOCKED");
           }
+        } else {
+          // Update source indicator on unlock (source reverts to previous state, typically NEWDEV)
+          const sourceIndicator = lockBtn.next();
+          if (sourceIndicator.hasClass("input-group-addon")) {
+            sourceIndicator.html('<i class="fa-solid fa-tag"></i> NEWDEV');
+            sourceIndicator.attr("class", "input-group-addon text-muted");
+            sourceIndicator.attr("title", getString("FieldLock_Source_Label") + "NEWDEV");
+          }
         }

Note: The actual source after unlocking may vary (could be NEWDEV or the previous source). Consider fetching the updated source from the API response for accuracy.

🧹 Nitpick comments (19)
front/php/templates/language/nb_no.json (1)

778-778: Consider adding a trailing newline.

The file ends without a trailing newline. Adding one improves compatibility with POSIX tools and avoids "no newline at end of file" warnings in diffs.

front/php/templates/language/cs_cz.json (1)

206-209: Empty translations for new Device save status keys.

These new keys follow the existing pattern of placeholder translations. Consider adding Czech translations before release to ensure a consistent localized user experience, or verify that empty strings fall back gracefully to a default language.

server/db/db_upgrade.py (2)

86-96: Guard blocks schema upgrades for non-Devices tables due to empty list

The guard at line 88 uses an empty list [] as the fallback for non-Devices tables. Since [] is falsy in Python, the condition if not expected or column_name not in expected: will always trigger (returning False) when the function is called with any table other than "Devices". While this is currently not an issue since ensure_column is only called with "Devices" (37 call sites in server/database.py), the overly restrictive logic makes the function unsuitable for evolution of other tables. Consider checking if expected is not None and column_name not in expected: instead, so the validation only applies when a table has defined expected columns.


263-277: Remove unused cur_devVlan column or align naming with table pattern

The cur_devVlan column in the CurrentScan table definition breaks the consistent cur_* naming pattern (e.g., cur_MAC, cur_IP, cur_SSID). However, this column is never populated in any INSERT statements and never referenced in SELECT queries, making it dead code. Either remove the column or rename it to cur_Vlan to match the naming convention used throughout the table.
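The falsy-empty-list pitfall from the ensure_column guard, shown in isolation with illustrative column sets (the real logic uses EXPECTED_DEVICES_COLUMNS in server/db/db_upgrade.py):

```python
EXPECTED_COLUMNS = {"Devices": {"devMac", "devName"}}

def allows_column_buggy(table, column):
    expected = EXPECTED_COLUMNS.get(table, [])
    if not expected or column not in expected:  # [] is falsy -> always blocks
        return False
    return True

def allows_column_fixed(table, column):
    expected = EXPECTED_COLUMNS.get(table)  # None means "no allowlist defined"
    if expected is not None and column not in expected:
        return False
    return True

buggy = allows_column_buggy("CurrentScan", "cur_Vlan")  # blocked: no allowlist exists
fixed = allows_column_fixed("CurrentScan", "cur_Vlan")  # permitted
```

The fixed variant only enforces the allowlist for tables that actually define one, which is what the comment proposes.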

front/css/app.css (1)

2442-2473: Consider reusing existing color CSS variables for lock indicators.
Keeps palette consistent and reduces duplicate hex values.

♻️ Suggested tweak
 .input-group-addon.text-warning {
-    color: #f39c12;
+    color: var(--color-yellow);
     background-color: rgba(243, 156, 18, 0.1);
 }
 
 .input-group-addon.text-danger {
-    color: #dd4b39;
+    color: var(--color-red);
     background-color: rgba(221, 75, 57, 0.1);
 }
 
 .input-group-addon.text-muted {
-    color: #8c8c8c;
+    color: var(--color-gray);
     background-color: rgba(140, 140, 140, 0.05);
 }
docs/QUICK_REFERENCE_FIELD_LOCK.md (1)

86-88: Tighten wording to avoid repeated “again”.

✏️ Suggested tweak
- ✅ You can lock it again anytime
+ ✅ You can lock it anytime
server/api_server/api_server_start.py (1)

463-468: Use the validated payload to avoid bypassing schema checks.

payload is unused, which bypasses the LockDeviceFieldRequest validation and triggers ARG001.

♻️ Proposed refactor
-    data = request.get_json() or {}
+    data = payload if payload is not None else (request.get_json() or {})
+    if hasattr(data, "model_dump"):
+        data = data.model_dump(exclude_unset=True)
+    elif hasattr(data, "dict"):
+        data = data.dict(exclude_unset=True)
test/test_device_field_lock.py (3)

42-58: Consider logging cleanup failures for debugging.

The cleanup fixture silently swallows all exceptions. While this is acceptable for test teardown, logging the exception could help diagnose test environment issues.

♻️ Suggested improvement
 @pytest.fixture(autouse=True)
 def cleanup_test_device(test_mac):
     """Clean up test device before and after test."""
     device_handler = DeviceInstance()
     # Clean before test
     try:
         device_handler.deleteDeviceByMAC(test_mac)
-    except Exception:
-        pass
+    except Exception as e:
+        print(f"[cleanup] Pre-test cleanup note: {e}")
     
     yield
     
     # Clean after test
     try:
         device_handler.deleteDeviceByMAC(test_mac)
-    except Exception:
-        pass
+    except Exception as e:
+        print(f"[cleanup] Post-test cleanup note: {e}")

272-290: Remove unused auth_headers parameter.

The auth_headers fixture is not used in this test since it interacts directly with DeviceInstance rather than the API client.

♻️ Suggested fix
-    def test_locked_field_blocks_plugin_overwrite(self, test_mac, auth_headers):
+    def test_locked_field_blocks_plugin_overwrite(self, test_mac):
         """Verify locked fields prevent plugin source overwrites."""
         device_handler = DeviceInstance()

292-316: Remove unused auth_headers parameter.

Same as above - this integration test uses DeviceInstance directly.

♻️ Suggested fix
-    def test_field_source_tracking(self, test_mac, auth_headers):
+    def test_field_source_tracking(self, test_mac):
         """Verify field source is tracked correctly."""
         device_handler = DeviceInstance()
front/plugins/snmp_discovery/config.json (1)

589-650: Missing localization for non-English languages.

The SET_ALWAYS and SET_EMPTY settings only include en_us translations, while other settings in this plugin (e.g., RUN, CMD, RUN_SCHD) include es_es and de_de translations. Consider adding translations for consistency.

front/plugins/nbtscan_scan/config.json (3)

86-86: JSON formatting inconsistency.

Line 86 has inconsistent formatting with extra spaces before "function". This appears to be a merge artifact. While valid JSON, it breaks the file's formatting consistency.

Suggested fix
-    {      "function": "SET_ALWAYS",
+    {
+      "function": "SET_ALWAYS",

145-145: Same formatting issue on CMD block.

Suggested fix
-    {      "function": "CMD",
+    {
+      "function": "CMD",

86-144: Missing localization for non-English languages.

Similar to snmp_discovery, the SET_ALWAYS and SET_EMPTY settings only include en_us translations, while other settings in this plugin include es_es and de_de translations.

server/api_server/openapi/schemas.py (1)

276-280: Consider validating fieldName against allowed lockable fields.

The fieldName field accepts any string, but only certain fields support locking (those in FIELD_SOURCE_MAP in authoritative_handler.py). Consider adding validation to provide better error messages at the schema level.

Suggested improvement
+# At the top of the file, define allowed lockable fields
+from typing import Literal
+
+ALLOWED_LOCKABLE_FIELDS = Literal[
+    "devMac", "devName", "devFQDN", "devLastIP", "devVendor",
+    "devSSID", "devParentMAC", "devParentPort", "devParentRelType", "devVlan"
+]
+
 class LockDeviceFieldRequest(BaseModel):
     """Request to lock/unlock a device field."""
-    fieldName: str = Field(..., description="Field name to lock/unlock (devMac, devName, devLastIP, etc.)")
+    fieldName: ALLOWED_LOCKABLE_FIELDS = Field(..., description="Field name to lock/unlock")
     lock: bool = Field(True, description="True to lock the field, False to unlock")
server/db/authoritative_handler.py (4)

18-19: Remove unused noqa directives.

Static analysis indicates these noqa: E402 directives are not needed. The E402 rule (module level import not at top of file) is either not enabled or not triggered here.

Suggested fix
-from logger import mylog  # noqa: E402 [flake8 lint suppression]
-from helper import get_setting_value  # noqa: E402 [flake8 lint suppression]
+from logger import mylog
+from helper import get_setting_value

75-117: Unused plugin_prefix parameter.

The plugin_prefix parameter is declared but never used in can_overwrite_field. Consider either using it for logging/debugging or removing it from the signature. Based on learnings from relevant code snippets, logging could help troubleshoot overwrite decisions.

Option 1: Remove unused parameter
-def can_overwrite_field(field_name, current_source, plugin_prefix, plugin_settings, field_value):
+def can_overwrite_field(field_name, current_source, plugin_settings, field_value):
Option 2: Use for debug logging
     # Rule 1: USER and LOCKED are protected
     if current_source in ("USER", "LOCKED"):
+        mylog("trace", [f"[can_overwrite_field] {plugin_prefix} blocked from overwriting {field_name}: source is {current_source}"])
         return False

120-135: Unused field_name parameter.

The field_name parameter is declared but never used in get_source_for_field_update. If it's intended for future use or consistency with other function signatures, consider prefixing with underscore to indicate intentional non-use.

Suggested fix
-def get_source_for_field_update(field_name, plugin_prefix, is_user_override=False):
+def get_source_for_field_update(_field_name, plugin_prefix, is_user_override=False):

156-156: Unused loop variable.

The new_value variable is not used in the loop body. Rename to _new_value to indicate intentional non-use.

Suggested fix
-    for field_name, new_value in updates_dict.items():
+    for field_name, _new_value in updates_dict.items():

Comment on lines +300 to +305
"FieldLock_Error": "",
"FieldLock_Lock_Tooltip": "",
"FieldLock_Locked": "",
"FieldLock_Source_Label": "",
"FieldLock_Unlock_Tooltip": "",
"FieldLock_Unlocked": "",
Contributor

@coderabbitai coderabbitai bot Jan 19, 2026

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if the file exists and get context around lines 300-305
find . -name "sv_sv.json" -type f

Repository: netalertx/NetAlertX

Length of output: 104


🏁 Script executed:

# Also search for language file handling to understand the fallback mechanism
find . -name "*.php" -o -name "*.js" | head -20

Repository: netalertx/NetAlertX

Length of output: 568


🏁 Script executed:

# Search for how language files are loaded and handled
rg -t php "sv_sv|language.*fallback|translation.*fallback" --max-count=20

Repository: netalertx/NetAlertX

Length of output: 505


🏁 Script executed:

# Read sv_sv.json around lines 300-305 to verify the content
sed -n '298,307p' ./front/php/templates/language/sv_sv.json

Repository: netalertx/NetAlertX

Length of output: 365


🏁 Script executed:

# Find and check en_us.json for the same FieldLock_* keys
find . -name "en_us.json" -type f

Repository: netalertx/NetAlertX

Length of output: 104


🏁 Script executed:

# Read the fallback implementation in lang.php
rg -A 10 "If key not found in selected language" ./front/php/templates/language/lang.php

Repository: netalertx/NetAlertX

Length of output: 509


🏁 Script executed:

# Extract FieldLock_* entries from en_us.json
rg 'FieldLock_' ./front/php/templates/language/en_us.json -A 0

Repository: netalertx/NetAlertX

Length of output: 393


🏁 Script executed:

# Get the actual values for FieldLock_* keys in en_us.json
rg '"FieldLock_' ./front/php/templates/language/en_us.json

Repository: netalertx/NetAlertX

Length of output: 393


🏁 Script executed:

# Check how many untranslated (empty string) keys exist in sv_sv.json
rg '": "",' ./front/php/templates/language/sv_sv.json | wc -l

Repository: netalertx/NetAlertX

Length of output: 66


🏁 Script executed:

# Compare to en_us.json to assess translation completeness
rg '": "",' ./front/php/templates/language/en_us.json | wc -l

Repository: netalertx/NetAlertX

Length of output: 64


🏁 Script executed:

# Check file sizes to understand translation coverage
wc -l ./front/php/templates/language/sv_sv.json ./front/php/templates/language/en_us.json

Repository: netalertx/NetAlertX

Length of output: 170


Complete Swedish translations or remove untranslated keys.

The FieldLock_* UI labels are empty strings in the Swedish translation file. Because the fallback logic checks for key existence (not value content), empty strings will render as blank UI instead of falling back to English. Note that the Swedish translation file is incomplete overall—775 out of 777 keys are empty strings, suggesting this file needs either full translation or to be removed from the language list until translations are ready.

🤖 Prompt for AI Agents
In `@front/php/templates/language/sv_sv.json` around lines 300 - 305, The Swedish
translation file contains empty values for the FieldLock_* keys
(FieldLock_Error, FieldLock_Lock_Tooltip, FieldLock_Locked,
FieldLock_Source_Label, FieldLock_Unlock_Tooltip, FieldLock_Unlocked) which
results in blank UI because fallback only checks key existence; either provide
proper Swedish translations for those keys (replace empty strings with localized
text) or remove these keys from the sv_sv JSON (or remove sv_sv from the
available languages) so the app can fall back to English—update the keys in the
same JSON object where FieldLock_* are defined and ensure no empty string values
remain.

Collaborator Author

ignore empty translation strings - managed via Weblate - only en_us.json strings are needed

Contributor

@jokob-sk, thank you for the clarification! I understand now—the empty strings in non-English language files like sv_sv.json are intentional placeholders managed through your Weblate translation workflow, with en_us.json serving as the source of truth. I'll note this for future reviews.


✏️ Learnings added
Learnt from: jokob-sk
Repo: netalertx/NetAlertX PR: 1435
File: front/php/templates/language/sv_sv.json:300-305
Timestamp: 2026-01-20T23:52:58.899Z
Learning: Empty translation strings in non-English language files (e.g., sv_sv.json, de_de.json, etc.) should be ignored. Translations are managed externally via Weblate, and only en_us.json needs to contain actual translation values as the source language file.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Comment on lines 85 to 87
},
{ "function": "SET_ALWAYS",
"type": {
Contributor

@coderabbitai coderabbitai bot Jan 19, 2026

⚠️ Potential issue | 🔴 Critical

Fix missing commas between settings objects (invalid JSON).

The settings array lacks commas between consecutive objects, which makes the JSON invalid and will prevent the plugin config from loading.

🛠️ Proposed fix
-    }
+    },
@@
-    }
+    },
@@
-    }
+    },

Also applies to: 115-117, 146-148

🤖 Prompt for AI Agents
In `@front/plugins/dig_scan/config.json` around lines 85 - 87, The JSON settings
array is invalid because object entries are missing commas between them; add a
trailing comma after each settings object where you see entries like {
"function": "SET_ALWAYS", "type": ... } and the neighboring object literals
(also at the other occurrences around the sections referenced near the second
and third SET_ALWAYS blocks) so each object in the array is separated by a
comma, ensuring the arrays containing "function":"SET_ALWAYS" and adjacent
objects are properly comma-delimited.

Collaborator Author

incorrect assessment - JSON is correct - only a slight formatting issue

Contributor

@jokob-sk, you're absolutely right—my apologies for the incorrect assessment.

The JSON is structurally valid. All objects in the settings array are properly comma-separated. The actual formatting issue is the extra whitespace after the opening brace on lines like:

    {      "function": "SET_ALWAYS",

instead of the more conventional:

    { "function": "SET_ALWAYS",

This unusual spacing doesn't break JSON parsing, just makes the formatting inconsistent. There are no missing commas.


✏️ Learnings added
Learnt from: jokob-sk
Repo: netalertx/NetAlertX PR: 1435
File: front/plugins/dig_scan/config.json:85-87
Timestamp: 2026-01-20T23:55:35.174Z
Learning: In front/plugins/dig_scan/config.json (and similar plugin config files), the pattern `{      "function": "SET_ALWAYS",` with extra spaces after the opening brace is valid JSON, even though it's unconventional formatting. This should not be flagged as a missing comma error.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Comment on lines 219 to 344
{
"function": "SET_ALWAYS",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true"}],
"transformers": []
}
]
},
"default_value": ["devLastIP"],
"options": [
"devLastIP"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set always columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are always overwritten by this plugin, unless the user locks or overwrites the value."
}
]
},
{
"function": "SET_EMPTY",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true"}],
"transformers": []
}
]
},
"default_value": [],
"options": [
"devLastIP"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set empty columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are only overwritten if they are empty or set to NEWDEV."
}
]
},
{
"function": "SET_ALWAYS",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true"}],
"transformers": []
}
]
},
"default_value": ["devMac", "devLastIP"],
"options": [
"devMac",
"devLastIP",
"devName",
"devVendor"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set always columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are always overwritten by this plugin, unless the user locks or overwrites the value."
}
]
},
{
"function": "SET_EMPTY",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true"}],
"transformers": []
}
]
},
"default_value": ["devName", "devVendor"],
"options": [
"devMac",
"devLastIP",
"devName",
"devVendor"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set empty columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are only overwritten if they are empty or set to NEWDEV."
}
]
},
Contributor

⚠️ Potential issue | 🟠 Major

Duplicate SET_ALWAYS/SET_EMPTY settings cause ambiguous behavior

You define SET_ALWAYS and SET_EMPTY twice with different defaults/options. This will likely render duplicate UI entries or create non-deterministic precedence when settings are read. Please keep only one authoritative block and consolidate the intended defaults/options.

🛠️ Suggested fix (remove the earlier duplicate pair)
-    {
-      "function": "SET_ALWAYS",
-      "type": {
-        "dataType": "array",
-        "elements": [
-          {
-            "elementType": "select",
-            "elementOptions": [{ "multiple": "true"}],
-            "transformers": []
-          }
-        ]
-      },
-      "default_value": ["devLastIP"],
-      "options": [
-        "devLastIP"
-      ],
-      "localized": ["name", "description"],
-      "name": [
-        {
-          "language_code": "en_us",
-          "string": "Set always columns"
-        }
-      ],
-      "description": [
-        {
-          "language_code": "en_us",
-          "string": "These columns are always overwritten by this plugin, unless the user locks or overwrites the value."
-        }
-      ]
-    },
-    {
-      "function": "SET_EMPTY",
-      "type": {
-        "dataType": "array",
-        "elements": [
-          {
-            "elementType": "select",
-            "elementOptions": [{ "multiple": "true"}],
-            "transformers": []
-          }
-        ]
-      },
-      "default_value": [],
-      "options": [
-        "devLastIP"
-      ],
-      "localized": ["name", "description"],
-      "name": [
-        {
-          "language_code": "en_us",
-          "string": "Set empty columns"
-        }
-      ],
-      "description": [
-        {
-          "language_code": "en_us",
-          "string": "These columns are only overwritten if they are empty or set to NEWDEV."
-        }
-      ]
-    },
🤖 Prompt for AI Agents
In `@front/plugins/pihole_scan/config.json` around lines 219 - 344, There are
duplicate configuration blocks for the functions "SET_ALWAYS" and "SET_EMPTY"
which creates ambiguity; remove the first pair (the blocks whose default_value
is ["devLastIP"] and [] respectively) and keep a single authoritative
"SET_ALWAYS" block with default_value ["devMac","devLastIP"] and a single
"SET_EMPTY" block with default_value ["devName","devVendor"], ensuring each
retained block's options include ["devMac","devLastIP","devName","devVendor"]
and keep their localized name/description entries intact.
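The ambiguity can be shown with a small sketch. This assumes (hypothetically — the real settings loader may behave differently) that a plugin's settings array is collapsed into a dict keyed by "function", in which case the later duplicate silently wins:

```python
def load_plugin_settings(settings_list):
    """Collapse a plugin's settings array into a dict keyed by "function".

    With duplicate SET_ALWAYS / SET_EMPTY entries in the config, the
    later entry silently overrides the earlier one - which is why the
    duplicated blocks above are ambiguous.
    """
    collapsed = {}
    for setting in settings_list:
        collapsed[setting["function"]] = setting.get("default_value", [])
    return collapsed


# Mirrors the duplicated blocks in pihole_scan/config.json
settings = [
    {"function": "SET_ALWAYS", "default_value": ["devLastIP"]},
    {"function": "SET_EMPTY", "default_value": []},
    {"function": "SET_ALWAYS", "default_value": ["devMac", "devLastIP"]},
    {"function": "SET_EMPTY", "default_value": ["devName", "devVendor"]},
]
```

Under this last-wins assumption the first pair of defaults is never seen at all, which matches the suggestion to keep only the second, authoritative pair.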

Comment on lines 499 to 501
except Exception as e:
mylog("error", f"Error locking field {field_name} for {mac}: {str(e)}")
return jsonify({"success": False, "error": str(e)}), 500
Contributor

@coderabbitai coderabbitai bot Jan 19, 2026

⚠️ Potential issue | 🟠 Major

Use allowed log levels in mylog.

mylog("error", ...) violates the permitted levels and also skips the expected list payload.

🔧 Proposed fix
-    except Exception as e:
-        mylog("error", f"Error locking field {field_name} for {mac}: {str(e)}")
+    except Exception as e:
+        mylog("verbose", [f"Error locking field {field_name} for {mac}: {e!s}"])
         return jsonify({"success": False, "error": str(e)}), 500

As per coding guidelines, use only none/minimal/verbose/debug/trace with mylog().

🧰 Tools
🪛 Ruff (0.14.13)

499-499: Do not catch blind exception: Exception

(BLE001)


500-500: Use explicit conversion flag

Replace with conversion flag

(RUF010)

🤖 Prompt for AI Agents
In `@server/api_server/api_server_start.py` around lines 499 - 501, Replace the
illegal mylog("error", ...) call in the exception handler with one of the
allowed levels (none/minimal/verbose/debug/trace) and pass the message as a list
payload per the logging contract; specifically update the catch block that
references field_name and mac to call e.g. mylog("minimal", [f"Error locking
field {field_name} for {mac}", str(e)]) and then return the same jsonify
response — ensure the first argument is one of the permitted level strings and
the second argument is a list containing the descriptive message and the error
detail.

Collaborator Author

accepts lists or strings, but the log level is indeed incorrect

Contributor

@jokob-sk, thank you for the clarification! You're right that mylog accepts both strings and lists. The key issue is indeed the log level—"error" is not a valid level.

To fix this, please change line 500 to use one of the allowed levels (none, minimal, verbose, debug, or trace). For example:

mylog("minimal", [f"Error locking field {field_name} for {mac}: {str(e)}"])

or simply:

mylog("minimal", f"Error locking field {field_name} for {mac}: {str(e)}")

Both formats work—choose whichever fits the codebase style better.


🧠 Learnings used
Learnt from: adamoutler
Repo: netalertx/NetAlertX PR: 1429
File: server/api_server/api_server_start.py:274-299
Timestamp: 2026-01-18T18:59:59.178Z
Learning: The `/device/<mac>` GET endpoint (api_get_device) in server/api_server/api_server_start.py is a legacy API endpoint that should not be refactored to follow the standard success/error response contract, as it would require UI rework and break backward compatibility.

Learnt from: CR
Repo: netalertx/NetAlertX PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-18T01:44:46.064Z
Learning: Applies to **/*.py : Use `mylog(level, [message])` for logging with levels: none/minimal/verbose/debug/trace. Do NOT use 'error' as a level

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@test/authoritative_fields/test_device_field_lock.py`:
- Around line 272-290: The test test_locked_field_blocks_plugin_overwrite only
sets devNameSource to "LOCKED" and re-reads it but never attempts a
plugin/authoritative overwrite; update the test to simulate an overwrite (call
the authoritative update path or mimic a plugin write) after
device_handler.updateDeviceColumn(test_mac, "devNameSource", "LOCKED")—for
example use the authoritative handler or a DeviceInstance method that applies a
source="PLUGIN" update (or call device_handler.setDeviceData with a plugin-like
payload) and then assert device_handler.getDeviceData(test_mac)["devNameSource"]
remains "LOCKED"; also remove the unused auth_headers parameter from the test
signature if you don’t need it or use it in the overwrite call.
- Around line 9-10: Remove the hardcoded '/app' default for NETALERTX_APP and
make the code fail fast when the environment variable is missing: read
os.getenv('NETALERTX_APP') into INSTALL_PATH, check if INSTALL_PATH is falsy and
raise a clear exception (or call sys.exit with an error) rather than falling
back to '/app', then use INSTALL_PATH in the existing sys.path.extend([...])
call so imports cannot be silently misconfigured.
- Around line 178-204: The test_lock_prevents_field_updates currently only
asserts response codes; after calling the update endpoint with update_payload,
add a follow-up GET (e.g., call the same client to GET /device/{test_mac} or use
existing helper test_create_test_device/get_device helper) to retrieve the
device record and assert that devName remains the original value created by
test_create_test_device; reference the test function name
test_lock_prevents_field_updates, the lock endpoint used with lock_payload, and
the update POST using update_payload so you place the assertion right after the
update response check.
- Around line 42-58: The fixture cleanup_test_device currently swallows all
exceptions during pre/post cleanup which hides failures; update it to stop
catching exceptions silently—either let exceptions from
DeviceInstance.deleteDeviceByMAC propagate or explicitly fail the test using
pytest.fail with the caught exception message; modify the try/except blocks
around deleteDeviceByMAC in cleanup_test_device to remove the bare excepts (or
replace them with except Exception as e: pytest.fail(f"...{e}")) so cleanup
failures surface, and keep references to DeviceInstance and deleteDeviceByMAC to
locate the calls.

In `@test/authoritative_fields/test_field_lock_scan_integration.py`:
- Around line 1-12: The test module test_field_lock_scan_integration.py is too
large; split it into smaller modules by scenario groups (e.g. name_vendor
updates, ip_updates, presence_mixed_cases) so each file stays under 500 LOC;
move related test functions and any helper fixtures/classes into the new files
(e.g. test_field_lock_scan_integration_name_vendor.py,
test_field_lock_scan_integration_ip.py,
test_field_lock_scan_integration_presence.py), update any relative imports or
conftest references to keep fixtures (e.g. fixtures used by tests in this file)
available, and ensure pytest discovery still finds tests by keeping top-level
test_*.py filenames and preserving the original test function and class names
(so functions referenced by device_handling scan update integration tests remain
unchanged).
🧹 Nitpick comments (3)
test/authoritative_fields/test_device_field_lock.py (2)

12-14: Drop unused # noqa: E402 directives.

Ruff reports these as unused; removing them keeps lint clean.

♻️ Proposed cleanup
-from helper import get_setting_value  # noqa: E402
-from api_server.api_server_start import app  # noqa: E402
-from models.device_instance import DeviceInstance  # noqa: E402
+from helper import get_setting_value
+from api_server.api_server_start import app
+from models.device_instance import DeviceInstance

292-316: Remove unused auth_headers parameter.

Ruff flags the argument as unused; dropping it (or prefixing with _) will keep lint clean.

test/authoritative_fields/test_field_lock_scan_integration.py (1)

110-183: Avoid unused fixture parameters; use @pytest.mark.usefixtures.

The fixture is used for side effects only, so passing it as an unused argument triggers ARG001. This applies to all tests in this module.

♻️ Proposed pattern
+@pytest.mark.usefixtures("mock_device_handlers")
-def test_scan_updates_newdev_device_name(scan_db, mock_device_handlers):
+def test_scan_updates_newdev_device_name(scan_db):

Comment on lines +9 to +10
INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
Contributor

@coderabbitai coderabbitai bot Jan 19, 2026

⚠️ Potential issue | 🟡 Minor

Avoid hardcoded default for NETALERTX_APP.

The /app fallback can silently misconfigure imports when the env var isn’t set. Prefer requiring the environment variable explicitly. As per coding guidelines, avoid hardcoded paths.

🛠️ Proposed fix
-INSTALL_PATH = os.getenv('NETALERTX_APP', '/app')
+INSTALL_PATH = os.getenv("NETALERTX_APP")
+if not INSTALL_PATH:
+    raise RuntimeError("NETALERTX_APP must be set for tests")
🤖 Prompt for AI Agents
In `@test/authoritative_fields/test_device_field_lock.py` around lines 9 - 10,
Remove the hardcoded '/app' default for NETALERTX_APP and make the code fail
fast when the environment variable is missing: read os.getenv('NETALERTX_APP')
into INSTALL_PATH, check if INSTALL_PATH is falsy and raise a clear exception
(or call sys.exit with an error) rather than falling back to '/app', then use
INSTALL_PATH in the existing sys.path.extend([...]) call so imports cannot be
silently misconfigured.

Collaborator Author

skip for now, will clean up later

Contributor

@jokob-sk, understood! I'll note this for future cleanup. Would you like me to create an issue to track this hardcoded path cleanup task?


🧠 Learnings used
Learnt from: CR
Repo: netalertx/NetAlertX PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-18T01:44:46.064Z
Learning: Applies to **/*.py : Use environment variables (`NETALERTX_DB`, `NETALERTX_LOG`, etc.) for file paths instead of hardcoding paths like '/data/db' or relative paths

Comment on lines +1 to +12
"""
Integration tests for device field locking during actual scan updates.
Simulates real-world scenarios by:
1. Setting up Devices table with various source values
2. Populating CurrentScan with new discovery data
3. Running actual device_handling scan updates
4. Verifying field updates respect authorization rules
Tests all combinations of field sources (LOCKED, USER, NEWDEV, plugin name)
with realistic scan data.
"""
Contributor

@coderabbitai coderabbitai bot Jan 19, 2026

🛠️ Refactor suggestion | 🟠 Major

Split this test module to stay under 500 LOC.

This file exceeds the 500-line guideline; consider splitting by scenario group (name/vendor updates, IP updates, presence/mixed cases). As per coding guidelines, keep files <=500 lines.

🤖 Prompt for AI Agents
In `@test/authoritative_fields/test_field_lock_scan_integration.py` around lines 1
- 12, The test module test_field_lock_scan_integration.py is too large; split it
into smaller modules by scenario groups (e.g. name_vendor updates, ip_updates,
presence_mixed_cases) so each file stays under 500 LOC; move related test
functions and any helper fixtures/classes into the new files (e.g.
test_field_lock_scan_integration_name_vendor.py,
test_field_lock_scan_integration_ip.py,
test_field_lock_scan_integration_presence.py), update any relative imports or
conftest references to keep fixtures (e.g. fixtures used by tests in this file)
available, and ensure pytest discovery still finds tests by keeping top-level
test_*.py filenames and preserving the original test function and class names
(so functions referenced by device_handling scan update integration tests remain
unchanged).

Collaborator Author

ignore guideline for tests

Contributor

@jokob-sk, understood! I've noted that test files are exempt from the 500-line guideline. The integration test file can remain as is.


✏️ Learnings added
Learnt from: jokob-sk
Repo: netalertx/NetAlertX PR: 1435
File: test/authoritative_fields/test_field_lock_scan_integration.py:1-12
Timestamp: 2026-01-21T00:17:51.986Z
Learning: Test files (test_*.py or files under test/ directories) are exempt from the 500-line code file maximum guideline in the NetAlertX repository.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: CR
Repo: netalertx/NetAlertX PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-18T01:44:46.064Z
Learning: Applies to **/*.py : Keep code files to a maximum of 500 lines of code; refactor or split into multiple files if larger

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
server/models/device_instance.py (1)

593-633: Make source-tracking updates atomic with the main write.

The device update is committed before source-tracking is enforced. If enforce_source_on_user_update() fails, the API returns an error but the device row is already updated, leaving the DB in an inconsistent state. Defer the commit until after source updates so failures can roll back the whole transaction.

🔧 Proposed fix
             cur.execute(sql, values)
-            conn.commit()
 
             if data.get("createNew", False):
                 # Initialize source-tracking fields on device creation.
@@
                 if initial_sources:
                     # Apply source updates in a single statement for the newly inserted row.
                     set_clause = ", ".join([f"{col}=?" for col in initial_sources.keys()])
                     source_values = list(initial_sources.values())
                     source_values.append(normalized_mac)
                     source_sql = f"UPDATE Devices SET {set_clause} WHERE devMac = ?"
                     cur.execute(source_sql, source_values)
-                    conn.commit()
@@
             if user_updated_fields and not data.get("createNew", False):
                 try:
                     enforce_source_on_user_update(normalized_mac, user_updated_fields, conn)
                 except Exception as e:
                     mylog("none", [f"[DeviceInstance] Failed to enforce source tracking: {e}"])
                     conn.rollback()
-                    conn.close()
                     return {"success": False, "error": f"Source tracking failed: {e}"}
 
+            conn.commit()
             conn.close()
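The deferred-commit pattern proposed above can be illustrated with a self-contained sqlite3 sketch. The schema and the enforce_sources callback are stand-ins for the real DeviceInstance / enforce_source_on_user_update code, shown only to demonstrate the atomicity argument:

```python
import sqlite3


def update_device_atomically(conn, mac, new_name, enforce_sources):
    """Apply the device update and source tracking in one transaction.

    Nothing is committed until both steps succeed; any failure rolls
    back the whole write, so the row can never hold a new value with
    stale source metadata.
    """
    cur = conn.cursor()
    try:
        cur.execute("UPDATE Devices SET devName = ? WHERE devMac = ?",
                    (new_name, mac))
        enforce_sources(conn, mac)   # may raise
        conn.commit()                # single commit after both steps
        return {"success": True}
    except Exception as exc:
        conn.rollback()              # undoes the device update as well
        return {"success": False, "error": str(exc)}


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Devices (devMac TEXT, devName TEXT, devNameSource TEXT)")
conn.execute("INSERT INTO Devices VALUES ('aa:bb', 'old', 'USER')")
conn.commit()  # setup committed so the rollback below only affects the update


def failing_enforce(conn, mac):
    raise RuntimeError("source tracking failed")


result = update_device_atomically(conn, "aa:bb", "new", failing_enforce)
row = conn.execute("SELECT devName FROM Devices WHERE devMac = 'aa:bb'").fetchone()
```

With the early commit from the original code, the same failure would leave devName set to "new" while the source columns were never updated.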
🤖 Fix all issues with AI agents
In `@server/api_server/api_server_start.py`:
- Around line 448-501: The endpoint api_device_field_lock currently calls
DeviceInstance.updateDeviceColumn directly with the raw mac and reimplements
allowed_tracked_fields, which can miss devices when MACs are non‑normalized;
instead normalize the MAC and call DeviceInstance.lockDeviceField(mac,
field_name) or DeviceInstance.unlockDeviceField(mac, field_name) (or the
equivalent lock/unlock methods on DeviceInstance) so validation and
normalization live in one place and you stop duplicating allowed field logic;
remove the local allowed_tracked_fields check and replace the updateDeviceColumn
usage with the appropriate DeviceInstance lock/unlock call and return its result
consistently (handle success/error and exceptions as before).

In `@server/scan/device_handling.py`:
- Line 12: format_ip_long() returns an int (or -1) and should not be assigned to
cur_IP_normalized; change code so format_ip_long(...) is used only to validate
the IP (e.g., check for >=0) and preserve the original string for storage by
assigning cur_IP_normalized = cur_IP (or the original IP-string variable)
instead of the integer; when format_ip_long returns -1 treat the IP as invalid
and set cur_IP_normalized to None/empty (and avoid persisting it). Apply this
change to every occurrence that assigns format_ip_long(...) to cur_IP_normalized
(including the blocks around the mentioned ranges 545-556, 564-566, 583-585) and
keep the import of format_ip_long as validator only.
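The validate-then-store pattern the prompt asks for can be sketched as follows. The `format_ip_long` stand-in here is an assumption based on the comment's description (integer encoding, or -1 for invalid input); the project's real helper may differ:

```python
import ipaddress

def format_ip_long(ip_str):
    """Stand-in for the project's helper: returns an integer encoding of
    the address, or -1 when the string is not a valid IP (assumed
    behavior per the review comment)."""
    try:
        return int(ipaddress.ip_address(ip_str))
    except ValueError:
        return -1

def normalize_ip_for_storage(cur_ip):
    # Use format_ip_long() only to *validate*; persist the original string.
    if cur_ip and format_ip_long(cur_ip) >= 0:
        return cur_ip          # store the human-readable string, not the int
    return None                # invalid IP: do not persist it
```

The key point is that the integer return value never reaches `cur_IP_normalized`; it only gates whether the original string is kept.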
♻️ Duplicate comments (3)
test/authoritative_fields/test_device_field_lock.py (3)

9-10: Avoid hardcoded '/app' fallback for NETALERTX_APP.

This still risks silently misconfigured imports when the env var isn’t set.

As per coding guidelines, avoid hardcoded paths.


178-204: Add an assertion that the locked field value stays unchanged.

The test only checks the response code, not the lock effect.

💡 Suggested assertion
         resp = client.post(
             f"/device/{test_mac}",
             json=update_payload,
             headers=auth_headers
         )
@@
         assert resp.status_code in [200, 201]
+
+        # Verify the locked field was not overwritten
+        resp = client.get(f"/device/{test_mac}", headers=auth_headers)
+        assert resp.status_code == 200
+        device_data = resp.json
+        assert device_data.get("devName") == "Test Device"
+        assert device_data.get("devNameSource") == "LOCKED"

300-318: Exercise an actual overwrite attempt in the locked-field test.

Right now it only sets the source to LOCKED and re-reads it.

🧹 Nitpick comments (1)
front/plugins/dig_scan/config.json (1)

87-147: Add non‑English localizations for the new settings (or confirm fallback).

SET_ALWAYS/SET_EMPTY only provide en_us strings while other settings include es_es/de_de. If the UI doesn’t fall back to en_us, these labels will be blank in those locales. Consider adding translations to match the rest of the file.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
front/plugins/pihole_api_scan/config.json (1)

183-206: Placeholder name "Setting name" should be updated.

The URL setting's name field (line 198) contains the generic placeholder "Setting name" instead of a descriptive label like "URL" or "PiHole URL". This will display incorrectly in the UI.

Suggested fix
       "name": [
         {
           "language_code": "en_us",
-          "string": "Setting name"
+          "string": "URL"
         }
       ],
server/scan/device_handling.py (2)

314-321: Missing timeout on subprocess call.

Per coding guidelines, subprocess calls require explicit timeouts (minimum 60 seconds). The Popen.communicate() call here lacks a timeout, which could hang indefinitely if the shell command stalls.

🐛 Proposed fix
     local_mac = (
         subprocess.Popen(
             local_mac_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
         )
-        .communicate()[0]
+        .communicate(timeout=60)[0]
         .decode()
         .strip()
     )

323-331: Missing timeout on subprocess call.

Same issue as above — local_ip_cmd subprocess lacks a timeout.

🐛 Proposed fix
     local_ip = (
         subprocess.Popen(
             local_ip_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
         )
-        .communicate()[0]
+        .communicate(timeout=60)[0]
         .decode()
         .strip()
     )
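As an alternative to adding `timeout=` to `Popen.communicate()` (which raises `TimeoutExpired` but leaves the child running unless it is explicitly killed), `subprocess.run` handles the kill-on-timeout bookkeeping itself. A minimal sketch, assuming a POSIX shell is available:

```python
import subprocess

def run_shell(cmd, timeout_sec=60):
    """Run a shell command with a hard timeout. subprocess.run kills the
    child process before raising TimeoutExpired, so no manual cleanup
    is needed."""
    try:
        completed = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, timeout=timeout_sec
        )
        return completed.stdout.strip()
    except subprocess.TimeoutExpired:
        return ""  # stalled command: return empty rather than hang

out = run_shell("echo hello")
```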
🤖 Fix all issues with AI agents
In `@front/plugins/__template/config.json`:
- Around line 368-398: The "SET_EMPTY" field configuration uses a misspelled
option key "ordeable" — update the elementOptions in the SET_EMPTY block (the
object with "function": "SET_EMPTY" and its nested "type" -> "elements" ->
elementOptions) to use "orderable": "true" (matching the SET_ALWAYS block) so
the select element option is correctly recognized; keep the rest of the object
(default_value, options, localized, name, description) unchanged.

In `@front/plugins/mikrotik_scan/config.json`:
- Line 278: Replace the misspelled property "ordeable" with the correct
"orderable" inside the elementOptions arrays for the two configuration entries
named SET_ALWAYS and SET_EMPTY; update both occurrences so the frontend
recognizes and enables drag-and-drop reordering by changing "{ "multiple":
"true", "ordeable": "true" }" to use "orderable" instead.

In `@front/plugins/omada_sdn_imp/config.json`:
- Around line 502-532: The config for the setting with "function": "SET_EMPTY"
contains a typo in the elementOptions key "ordeable" — update the options object
inside the SET_EMPTY type's elements (the element with "elementType": "select")
to use the correct key "orderable" (and keep the existing boolean string "true"
or change to boolean true if other entries use booleans) so the select element
options are recognized correctly.

In `@front/plugins/pihole_api_scan/config.json`:
- Around line 150-182: In the SET_EMPTY configuration object change the
misspelled elementOptions key value "ordeable" to "orderable" so the array
element (under function "SET_EMPTY" → type.elements[0] → elementOptions) uses
the correct property name; update the elementOptions entry to { "multiple":
"true", "orderable": "true" } to match the SET_ALWAYS fix above.

In `@front/plugins/sync/config.json`:
- Around line 862-924: Move the SET_ALWAYS and SET_EMPTY entries out of the
database_column_definitions section into the settings array so
get_plugin_authoritative_settings() / get_setting_value() can read them; locate
the objects named "function": "SET_ALWAYS" and "function": "SET_EMPTY" and
cut/paste them into the settings array (preserve their fields and
default_value/options/localized blocks). While moving, fix the typo "ordeable"
to "orderable" in both elementOptions entries and ensure the element structure
matches other settings (e.g., a select with elementOptions { "multiple": "true",
"orderable": "true" } and same transformers array).

In `@server/models/device_instance.py`:
- Around line 597-633: The current user_updated_fields picks any key in
FIELD_SOURCE_MAP (including read-only tracked fields) causing their *Source to
be set even when not updated; change the filter to only include fields that are
both in FIELD_SOURCE_MAP and actually part of the DB UPDATE payload (i.e.,
intersect FIELD_SOURCE_MAP.keys() with the set of columns you build for the
UPDATE statement—use whatever variable holds the update columns/fields or
compute updated_fields = [k for k in data.keys() if k is included in the SQL
UPDATE logic]—then call enforce_source_on_user_update(normalized_mac,
user_updated_fields, conn) with that constrained set so only truly updated
fields get their *Source set to "USER").

In `@test/authoritative_fields/test_device_field_lock.py`:
- Around line 196-200: The test asserts that devNameSource becomes "NEWDEV"
after unlocking, but unlock_field() in authoritative_handler.py sets the source
to an empty string; update the assertion in test_device_field_lock.py to expect
'' for devNameSource (i.e., change the check after client.get(...) to assert
device_data.get("devNameSource") == '') so it matches unlock_field() behavior.
♻️ Duplicate comments (2)
test/authoritative_fields/test_device_field_lock.py (2)

202-227: Test doesn't verify the locked field remains unchanged.

The test locks devName, attempts an update, and only verifies the API returns 200. It should also verify that devName remained unchanged after the update attempt.

🔧 Suggested assertion
         # Update should succeed at API level but authoritative handler should prevent it
         # The field update logic checks source in the database layer
         # For now verify the API accepts the request
         assert resp.status_code in [200, 201]
+
+        # Verify locked field remains unchanged
+        resp = client.get(f"/device/{test_mac}", headers=auth_headers)
+        assert resp.status_code == 200
+        device_data = resp.json
+        assert device_data.get("devName") == "Test Device", "Locked field should not have been updated"
+        assert device_data.get("devNameSource") == "LOCKED"

324-342: Unused auth_headers parameter and incomplete overwrite test.

The auth_headers parameter is unused. More importantly, the test only sets the source to "LOCKED" and re-reads it without actually attempting a plugin-style overwrite to verify the blocking behavior.

🔧 Suggested fix
-    def test_locked_field_blocks_plugin_overwrite(self, test_mac, auth_headers):
+    def test_locked_field_blocks_plugin_overwrite(self, test_mac):
         """Verify locked fields prevent plugin source overwrites."""
         device_handler = DeviceInstance()
         ...
+        # Import and use can_overwrite_field to verify blocking
+        from db.authoritative_handler import can_overwrite_field, get_plugin_authoritative_settings
+        
+        plugin_settings = get_plugin_authoritative_settings("ARPSCAN")
+        can_overwrite = can_overwrite_field("devName", "LOCKED", "ARPSCAN", plugin_settings, "New Plugin Name")
+        assert can_overwrite is False, "LOCKED source should block plugin overwrites"
🧹 Nitpick comments (16)
front/plugins/internet_ip/config.json (1)

421-421: Typo: "ordeable" should be "orderable".

The elementOptions contains "ordeable": "true" which appears to be a typo for "orderable". This may prevent the UI from correctly enabling ordering functionality for the multi-select.

Proposed fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+            "elementOptions": [{ "multiple": "true", "orderable": "true" }],

Same typo exists on line 452 for SET_EMPTY.

front/plugins/omada_sdn_openapi/config.json (1)

451-451: Typo: "ordeable" should be "orderable".

Same typo as noted in other plugin configs.

Proposed fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+            "elementOptions": [{ "multiple": "true", "orderable": "true" }],

Same typo exists on line 482 for SET_EMPTY.

front/plugins/freebox/config.json (1)

314-314: Typo: "ordeable" should be "orderable".

Proposed fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+            "elementOptions": [{ "multiple": "true", "orderable": "true" }],

Same typo exists on line 345 for SET_EMPTY.

front/plugins/unifi_api_import/config.json (1)

508-508: Typo: "ordeable" should be "orderable".

Proposed fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+            "elementOptions": [{ "multiple": "true", "orderable": "true" }],

Same typo exists on line 541 for SET_EMPTY.

front/plugins/luci_import/config.json (1)

394-394: Typo: "ordeable" should be "orderable".

Proposed fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+            "elementOptions": [{ "multiple": "true", "orderable": "true" }],

Same typo exists on line 425 for SET_EMPTY.

front/plugins/dig_scan/config.json (1)

93-93: Typo: "ordeable" should be "orderable".

Proposed fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true"}],
+            "elementOptions": [{ "multiple": "true", "orderable": "true"}],

Same typo exists on line 124 for SET_EMPTY.

test/test_device_atomicity.py (1)

22-23: Remove unnecessary noqa directives.

Static analysis indicates these noqa comments are unused since E402 is not enabled. Removing them reduces noise.

♻️ Suggested cleanup
-from models.device_instance import DeviceInstance  # noqa: E402 [flake8 lint suppression]
-from plugin_helper import normalize_mac  # noqa: E402 [flake8 lint suppression]
+from models.device_instance import DeviceInstance
+from plugin_helper import normalize_mac
front/plugins/ipneigh/config.json (2)

135-164: Minor formatting inconsistency.

Line 135 has unusual whitespace inside the opening brace ({ "function"). While JSON parsers handle this correctly, it's inconsistent with the rest of the file's formatting. Consider reformatting for consistency.

♻️ Suggested formatting fix
-    {      "function": "SET_ALWAYS",
+    {
+      "function": "SET_ALWAYS",

196-196: Same formatting inconsistency on CMD block.

Line 196 has the same unusual whitespace pattern.

♻️ Suggested formatting fix
-    {      "function": "CMD",
+    {
+      "function": "CMD",
front/plugins/nbtscan_scan/config.json (2)

86-114: Typo: "ordeable" should be "orderable".

Line 92 has "ordeable": "true" which appears to be a typo.

🔧 Suggested fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+            "elementOptions": [{ "multiple": "true", "orderable": "true" }],

115-144: Same typo: "ordeable" should be "orderable".

Line 122 has the same typo.

🔧 Suggested fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+            "elementOptions": [{ "multiple": "true", "orderable": "true" }],
server/db/authoritative_handler.py (3)

75-117: Unused parameter plugin_prefix.

The plugin_prefix parameter is declared but never used in the function body. Consider removing it or prefixing with underscore to indicate intentional non-use if reserved for future logging/debugging.

🔧 Suggested fix
-def can_overwrite_field(field_name, current_source, plugin_prefix, plugin_settings, field_value):
+def can_overwrite_field(field_name, current_source, _plugin_prefix, plugin_settings, field_value):

Or remove if not needed for the API contract.


120-135: Unused parameter field_name.

The field_name parameter is declared but never used. Similar to plugin_prefix above, consider prefixing with underscore or removing.

🔧 Suggested fix
-def get_source_for_field_update(field_name, plugin_prefix, is_user_override=False):
+def get_source_for_field_update(_field_name, plugin_prefix, is_user_override=False):

151-168: Duplicate cursor creation and unused loop variable.

  1. cur = conn.cursor() appears on both lines 152 and 155 - the first one is unused.
  2. new_value in the loop (line 163) is never used.
🔧 Suggested fix
-    cur = conn.cursor()
-
-    # Check if field has a corresponding source and should be updated
-    cur = conn.cursor()
+    cur = conn.cursor()
     try:
         cur.execute("PRAGMA table_info(Devices)")
         device_columns = {row["name"] for row in cur.fetchall()}
     except Exception:
         device_columns = set()

     updates_to_apply = {}
-    for field_name, new_value in updates_dict.items():
+    for field_name in updates_dict:
         if field_name in FIELD_SOURCE_MAP:
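For context, the overwrite policy stated in the PR description can be expressed as a small pure function. This is a sketch of the rules, not the actual signature of `can_overwrite_field()` in `authoritative_handler.py`:

```python
def can_overwrite(field_name, current_value, current_source,
                  plugin_settings, new_value):
    """Overwrite policy from the PR description:
      - the plugin must supply a non-empty value;
      - an empty field may always be filled;
      - USER / LOCKED sources always win;
      - otherwise SET_ALWAYS / SET_EMPTY decide."""
    if new_value in (None, ""):
        return False                       # plugins must provide a value
    if current_value in (None, ""):
        return True                        # empty fields may be filled
    if current_source in ("USER", "LOCKED"):
        return False                       # user edits and locks win
    if field_name in plugin_settings.get("SET_ALWAYS", []):
        return True
    if field_name in plugin_settings.get("SET_EMPTY", []):
        return current_source == "NEWDEV"  # SET_EMPTY: empty or NEWDEV only
    return False
```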
server/api_server/api_server_start.py (1)

463-497: Consider using the validated payload parameter.

The payload parameter from the @validate_request decorator is unused. Using it would leverage the Pydantic validation already performed.

🔧 Suggested improvement
 def api_device_field_lock(mac, payload=None):
     """Lock or unlock a device field by setting its source to LOCKED or USER."""
-    data = request.get_json() or {}
-    field_name = data.get("fieldName")
-    should_lock = data.get("lock", False)
+    if payload is not None:
+        data = payload.model_dump(exclude_unset=True)
+    else:
+        data = request.get_json() or {}
+    field_name = data.get("fieldName")
+    should_lock = data.get("lock", False)

Otherwise, the implementation correctly normalizes MAC addresses and delegates to DeviceInstance.lockDeviceField/unlockDeviceField, addressing the field locking requirements.
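The PR's broader guidance — pushing authoritative checks into SQL `UPDATE` conditions rather than per-row Python logic — can be sketched as a guarded statement. Column names follow the schema discussed above; the exact clause in the project may differ:

```python
import sqlite3

# Guarded UPDATE: the WHERE clause enforces the overwrite rules, so no
# per-row Python decision is needed for this field (illustrative only).
GUARDED_UPDATE = """
UPDATE Devices
   SET devName = :new_value,
       devNameSource = :plugin
 WHERE devMac = :mac
   AND :new_value != ''
   AND (devName IS NULL OR devName = ''
        OR devNameSource NOT IN ('USER', 'LOCKED'))
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Devices (devMac TEXT, devName TEXT, devNameSource TEXT)")
conn.executemany("INSERT INTO Devices VALUES (?,?,?)",
                 [("aa:01", "", ""), ("aa:02", "Kept", "USER")])
conn.execute(GUARDED_UPDATE, {"new_value": "Scanned", "plugin": "ARPSCAN", "mac": "aa:01"})
conn.execute(GUARDED_UPDATE, {"new_value": "Scanned", "plugin": "ARPSCAN", "mac": "aa:02"})
rows = dict(conn.execute("SELECT devMac, devName FROM Devices").fetchall())
```

The empty field is filled, while the USER-sourced value survives the plugin write without any Python-side branching.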

test/authoritative_fields/test_device_field_lock.py (1)

344-368: Unused auth_headers parameter.

The auth_headers parameter is declared but not used since this test uses DeviceInstance directly.

🔧 Suggested fix
-    def test_field_source_tracking(self, test_mac, auth_headers):
+    def test_field_source_tracking(self, test_mac):

Comment on lines 368 to 398
{
"function": "SET_EMPTY",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
"transformers": []
}
]
},
"default_value": [],
"options": [
"devMac",
"devLastIP"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set empty columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are only overwritten if they are empty or set to NEWDEV."
}
]
}
@coderabbitai coderabbitai bot Jan 21, 2026

⚠️ Potential issue | 🟡 Minor

Same ordeable typo on line 375.

Consistent with the SET_ALWAYS block, this should also be orderable.

Otherwise, the configuration structure and localization look correct for the authoritative field update feature.

Proposed fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+            "elementOptions": [{ "multiple": "true", "orderable": "true" }],
🤖 Prompt for AI Agents
In `@front/plugins/__template/config.json` around lines 368 - 398, The "SET_EMPTY"
field configuration uses a misspelled option key "ordeable" — update the
elementOptions in the SET_EMPTY block (the object with "function": "SET_EMPTY"
and its nested "type" -> "elements" -> elementOptions) to use "orderable":
"true" (matching the SET_ALWAYS block) so the select element option is correctly
recognized; keep the rest of the object (default_value, options, localized,
name, description) unchanged.

Collaborator Author

"ordeable" is correct (I will fix spelling across teh whole project later, but functionality wise it has to be "ordeable" now

Contributor

@jokob-sk, thank you for the clarification! I understand that "ordeable" is the correct spelling used throughout the codebase for functionality reasons. I won't flag this in future reviews.

"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
@coderabbitai coderabbitai bot Jan 21, 2026

⚠️ Potential issue | 🟡 Minor

Typo: "ordeable" should be "orderable".

This typo appears in both SET_ALWAYS (line 278) and SET_EMPTY (line 309). If the frontend checks for "orderable" to enable drag-and-drop reordering, this option will be silently ignored.

🔧 Proposed fix
-            "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+            "elementOptions": [{ "multiple": "true", "orderable": "true" }],

Apply at both line 278 and line 309.

Also applies to: 309-309

🤖 Prompt for AI Agents
In `@front/plugins/mikrotik_scan/config.json` at line 278, Replace the misspelled
property "ordeable" with the correct "orderable" inside the elementOptions
arrays for the two configuration entries named SET_ALWAYS and SET_EMPTY; update
both occurrences so the frontend recognizes and enables drag-and-drop reordering
by changing "{ "multiple": "true", "ordeable": "true" }" to use "orderable"
instead.

Collaborator Author

"ordeable" is correct (I will fix spelling across teh whole project later, but functionality wise it has to be "ordeable" now

Contributor

@jokob-sk, thank you for clarifying! I understand that "ordeable" is the correct spelling required for the frontend functionality to work properly in the current codebase. I've noted this for future reviews.


✏️ Learnings added
Learnt from: jokob-sk
Repo: netalertx/NetAlertX PR: 1435
File: front/plugins/mikrotik_scan/config.json:278-278
Timestamp: 2026-01-21T02:48:36.833Z
Learning: In NetAlertX plugin config.json files, the property name "ordeable" (not "orderable") is the correct spelling currently required for frontend functionality. This applies to elementOptions arrays in plugin configuration settings (e.g., `"elementOptions": [{ "multiple": "true", "ordeable": "true" }]`). The spelling will be standardized across the project in the future, but must remain "ordeable" for now to ensure proper frontend behavior.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: jokob-sk
Repo: netalertx/NetAlertX PR: 1435
File: front/plugins/dig_scan/config.json:85-87
Timestamp: 2026-01-20T23:55:35.174Z
Learning: In front/plugins/dig_scan/config.json (and similar plugin config files), the pattern `{      "function": "SET_ALWAYS",` with extra spaces after the opening brace is valid JSON, even though it's unconventional formatting. This should not be flagged as a missing comma error.

Comment on lines 502 to 532
{
"function": "SET_EMPTY",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
"transformers": []
}
]
},
"default_value": [],
"options": [
"devMac",
"devLastIP"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set empty columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are only overwritten if they are empty or set to NEWDEV."
}
]
}
@coderabbitai coderabbitai bot Jan 21, 2026

⚠️ Potential issue | 🟡 Minor

Same typo: "ordeable" on line 509

The same typo exists in the SET_EMPTY setting on line 509.

Proposed fix
           "elementType": "select",
-          "elementOptions": [{ "multiple": "true", "ordeable": "true" }],
+          "elementOptions": [{ "multiple": "true", "orderable": "true" }],
           "transformers": []
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
{
"function": "SET_EMPTY",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
"transformers": []
}
]
},
"default_value": [],
"options": [
"devMac",
"devLastIP"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set empty columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are only overwritten if they are empty or set to NEWDEV."
}
]
}
{
"function": "SET_EMPTY",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "orderable": "true" }],
"transformers": []
}
]
},
"default_value": [],
"options": [
"devMac",
"devLastIP"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set empty columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are only overwritten if they are empty or set to NEWDEV."
}
]
}
🤖 Prompt for AI Agents
In `@front/plugins/omada_sdn_imp/config.json` around lines 502 - 532, The config
for the setting with "function": "SET_EMPTY" contains a typo in the
elementOptions key "ordeable" — update the options object inside the SET_EMPTY
type's elements (the element with "elementType": "select") to use the correct
key "orderable" (and keep the existing boolean string "true" or change to
boolean true if other entries use booleans) so the select element options are
recognized correctly.

Collaborator Author

"ordeable" is correct (I will fix spelling across teh whole project later, but functionality wise it has to be "ordeable" now

Contributor

@jokob-sk, thank you for the clarification! I understand that "ordeable" must remain as-is for functional compatibility, and you'll address the spelling project-wide in a future update. I've noted this for future reviews.


✏️ Learnings added
Learnt from: jokob-sk
Repo: netalertx/NetAlertX PR: 1435
File: front/plugins/omada_sdn_imp/config.json:502-532
Timestamp: 2026-01-21T02:48:40.891Z
Learning: In plugin config.json files across the NetAlertX codebase, the key "ordeable" in elementOptions is intentionally misspelled and must remain as "ordeable" (not "orderable") for functional compatibility. This will be corrected project-wide in a future update, but for now the misspelling is required.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


🧠 Learnings used
Learnt from: jokob-sk
Repo: netalertx/NetAlertX PR: 1435
File: front/plugins/dig_scan/config.json:85-87
Timestamp: 2026-01-20T23:55:35.174Z
Learning: In front/plugins/dig_scan/config.json (and similar plugin config files), the pattern `{      "function": "SET_ALWAYS",` with extra spaces after the opening brace is valid JSON, even though it's unconventional formatting. This should not be flagged as a missing comma error.

Comment on lines 150 to 182
{
"function": "URL",
"function": "SET_EMPTY",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
"transformers": []
}
]
},
"default_value": [],
"options": [
"devName",
"devLastIP",
"devVendor",
"devFQDN"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set empty columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are only overwritten if they are empty or set to NEWDEV."
}
]
},
@coderabbitai coderabbitai bot Jan 21, 2026

⚠️ Potential issue | 🟡 Minor

Same "ordeable" typo on line 157.

The SET_EMPTY configuration logic looks correct—the options appropriately exclude devMac (as MAC shouldn't be conditionally set) and include devFQDN. The localized description clearly communicates the behavior.

Fix the typo as noted in the SET_ALWAYS comment above.

🤖 Prompt for AI Agents
In `@front/plugins/pihole_api_scan/config.json` around lines 150 - 182, In the
SET_EMPTY configuration object change the misspelled elementOptions key value
"ordeable" to "orderable" so the array element (under function "SET_EMPTY" →
type.elements[0] → elementOptions) uses the correct property name; update the
elementOptions entry to { "multiple": "true", "orderable": "true" } to match the
SET_ALWAYS fix above.

Collaborator Author

"ordeable" is correct (I will fix spelling across teh whole project later, but functionality wise it has to be "ordeable" now

Contributor

@jokob-sk, thank you for the clarification! I understand that "ordeable" is the intentional property name used throughout the codebase for functional consistency, and you'll address the spelling in a future project-wide refactoring. I've noted this for future reviews.

Comment on lines 862 to 924
},
{
"function": "SET_ALWAYS",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
"transformers": []
}
]
},
"default_value": ["devMac", "devLastIP"],
"options": [
"devMac",
"devLastIP"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set always columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are always overwritten by this plugin, unless the user locks or overwrites the value."
}
]
},
{
"function": "SET_EMPTY",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
"transformers": []
}
]
},
"default_value": [],
"options": [
"devMac",
"devLastIP"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set empty columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are only overwritten if they are empty or set to NEWDEV."
}
]
}
Contributor

⚠️ Potential issue | 🟠 Major

SET_ALWAYS and SET_EMPTY are placed in the wrong section.

These settings are placed inside database_column_definitions but should be in the settings array. The get_plugin_authoritative_settings() function in authoritative_handler.py looks up these values via get_setting_value(), which reads from the settings section. Placing them in database_column_definitions will cause them to be ignored.

Compare with front/plugins/nbtscan_scan/config.json where SET_ALWAYS/SET_EMPTY are correctly placed in the settings array.

🔧 Suggested fix

Move lines 863-923 from database_column_definitions into the settings array (e.g., after line 598 before the closing of settings).

     {
       "column": "Status",
       ...
-    },
-    {
-      "function": "SET_ALWAYS",
-      ...
-    },
-    {
-      "function": "SET_EMPTY",
-      ...
     }
   ]
 }

And add them to settings array instead:

{
  "function": "SET_ALWAYS",
  "type": {
    "dataType": "array",
    "elements": [
      {
        "elementType": "select",
        "elementOptions": [{ "multiple": "true", "orderable": "true" }],
        "transformers": []
      }
    ]
  },
  "default_value": ["devMac", "devLastIP"],
  "options": ["devMac", "devLastIP"],
  "localized": ["name", "description"],
  "name": [{ "language_code": "en_us", "string": "Set always columns" }],
  "description": [{ "language_code": "en_us", "string": "These columns are always overwritten by this plugin, unless the user locks or overwrites the value." }]
},
{
  "function": "SET_EMPTY",
  ...
}

Additionally, lines 870 and 901 have the same "ordeable" typo (should be "orderable").

🤖 Prompt for AI Agents
In `@front/plugins/sync/config.json` around lines 862 - 924, Move the SET_ALWAYS
and SET_EMPTY entries out of the database_column_definitions section into the
settings array so get_plugin_authoritative_settings() / get_setting_value() can
read them; locate the objects named "function": "SET_ALWAYS" and "function":
"SET_EMPTY" and cut/paste them into the settings array (preserve their fields
and default_value/options/localized blocks). While moving, fix the typo
"ordeable" to "orderable" in both elementOptions entries and ensure the element
structure matches other settings (e.g., a select with elementOptions {
"multiple": "true", "orderable": "true" } and same transformers array).

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@server/db/authoritative_handler.py`:
- Around line 15-16: The current fallback to "/app" for INSTALL_PATH masks
misconfiguration; change the logic that reads NETALERTX_APP (the INSTALL_PATH
variable) so it does not default to "/app"—instead check
os.getenv("NETALERTX_APP") and if it's missing raise an informative exception or
exit with a clear error message before calling
sys.path.extend([f"{INSTALL_PATH}/server"]); update the code surrounding
INSTALL_PATH and the sys.path.extend call to validate INSTALL_PATH is set and
points to an expected directory (or log and exit) to prevent silent
misconfiguration.

In `@test/authoritative_fields/test_device_field_lock.py`:
- Around line 373-397: The test function test_field_source_tracking currently
declares an unused fixture parameter auth_headers; remove auth_headers from the
test_field_source_tracking signature in test_device_field_lock.py so the fixture
is not injected (leave the rest of the test using DeviceInstance, setDeviceData
and getDeviceData unchanged), then run lint/tests to ensure Ruff and tests pass.
- Around line 289-299: The test name/docstring for test_lock_new_device_rejected
contradicts its assertions: it currently posts to "/device/new/field/lock" and
expects success (status 200) but the name/docstring says the new device should
be rejected; either rename the test and docstring to reflect that locking a
"new" device is currently allowed (e.g., test_lock_new_device_allowed) or update
the assertions and expected status to the intended rejection behavior (expect
non-200 and success False) once validation is implemented; update the test
function name and docstring (test_lock_new_device_rejected) and the assertions
around resp.status_code and resp.json accordingly so name and expectations
align.
🧹 Nitpick comments (1)
test/authoritative_fields/test_device_field_lock.py (1)

12-15: Drop unused # noqa: E402 directives.
Ruff flags these as unused; removing them keeps the lint output clean.

♻️ Proposed cleanup
-from helper import get_setting_value  # noqa: E402
-from api_server.api_server_start import app  # noqa: E402
-from models.device_instance import DeviceInstance  # noqa: E402
-from db.authoritative_handler import can_overwrite_field  # noqa: E402
+from helper import get_setting_value
+from api_server.api_server_start import app
+from models.device_instance import DeviceInstance
+from db.authoritative_handler import can_overwrite_field

Comment on lines +15 to +16
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
sys.path.extend([f"{INSTALL_PATH}/server"])
Contributor

@coderabbitai coderabbitai bot Jan 21, 2026

⚠️ Potential issue | 🟡 Minor

Avoid hardcoded /app fallback for NETALERTX_APP.
Prefer requiring the env var to avoid silent misconfiguration.

🔧 Proposed fix
-INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
+INSTALL_PATH = os.getenv("NETALERTX_APP")
+if not INSTALL_PATH:
+    raise RuntimeError("NETALERTX_APP must be set")
As per coding guidelines, avoid hardcoded paths and rely on environment variables.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
sys.path.extend([f"{INSTALL_PATH}/server"])
INSTALL_PATH = os.getenv("NETALERTX_APP")
if not INSTALL_PATH:
raise RuntimeError("NETALERTX_APP must be set")
sys.path.extend([f"{INSTALL_PATH}/server"])
🤖 Prompt for AI Agents
In `@server/db/authoritative_handler.py` around lines 15 - 16, The current
fallback to "/app" for INSTALL_PATH masks misconfiguration; change the logic
that reads NETALERTX_APP (the INSTALL_PATH variable) so it does not default to
"/app"—instead check os.getenv("NETALERTX_APP") and if it's missing raise an
informative exception or exit with a clear error message before calling
sys.path.extend([f"{INSTALL_PATH}/server"]); update the code surrounding
INSTALL_PATH and the sys.path.extend call to validate INSTALL_PATH is set and
points to an expected directory (or log and exit) to prevent silent
misconfiguration.

Collaborator Author

ignore this for now - will fix globally later

Contributor

@jokob-sk Understood! I'll skip this for now since you'll address it globally in a future update. 👍


🧠 Learnings used
Learnt from: CR
Repo: netalertx/NetAlertX PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-18T01:44:46.064Z
Learning: Applies to **/*.py : Use environment variables (`NETALERTX_DB`, `NETALERTX_LOG`, etc.) for file paths instead of hardcoding paths like '/data/db' or relative paths

Comment on lines +289 to +299
def test_lock_new_device_rejected(self, client, auth_headers):
"""Cannot lock fields on new device (mac='new')."""
payload = {"fieldName": "devName", "lock": True}
resp = client.post(
"/device/new/field/lock",
json=payload,
headers=auth_headers
)
# Current behavior allows locking without validating device existence
assert resp.status_code == 200
assert resp.json.get("success") is True
Contributor

⚠️ Potential issue | 🟡 Minor

Test name/docstring contradicts expected behavior.
test_lock_new_device_rejected asserts success (200). Either rename to match current behavior or change the expected status once validation is added.

🔧 Example rename to match current behavior
-    def test_lock_new_device_rejected(self, client, auth_headers):
-        """Cannot lock fields on new device (mac='new')."""
+    def test_lock_new_device_allows_current_behavior(self, client, auth_headers):
+        """Current behavior allows locking without validating device existence."""
🤖 Prompt for AI Agents
In `@test/authoritative_fields/test_device_field_lock.py` around lines 289 - 299,
The test name/docstring for test_lock_new_device_rejected contradicts its
assertions: it currently posts to "/device/new/field/lock" and expects success
(status 200) but the name/docstring says the new device should be rejected;
either rename the test and docstring to reflect that locking a "new" device is
currently allowed (e.g., test_lock_new_device_allowed) or update the assertions
and expected status to the intended rejection behavior (expect non-200 and
success False) once validation is implemented; update the test function name and
docstring (test_lock_new_device_rejected) and the assertions around
resp.status_code and resp.json accordingly so name and expectations align.

Comment on lines +373 to +397
def test_field_source_tracking(self, test_mac, auth_headers):
"""Verify field source is tracked correctly."""
device_handler = DeviceInstance()

# Create device
create_result = device_handler.setDeviceData(test_mac, {
"devName": "Test Device",
"devLastIP": "192.168.1.100",
"createNew": True
})
assert create_result.get("success") is True

# Verify initial source
device_data = device_handler.getDeviceData(test_mac)
assert device_data.get("devNameSource") == "NEWDEV"

# Update field (should set source to USER)
update_result = device_handler.setDeviceData(test_mac, {
"devName": "Updated Name"
})
assert update_result.get("success") is True

# Verify source changed to USER
device_data = device_handler.getDeviceData(test_mac)
assert device_data.get("devNameSource") == "USER"
Contributor

⚠️ Potential issue | 🟡 Minor

Remove unused auth_headers fixture.
The fixture is not used in this test; removing it keeps Ruff quiet.

🔧 Proposed fix
-    def test_field_source_tracking(self, test_mac, auth_headers):
+    def test_field_source_tracking(self, test_mac):
🧰 Tools
🪛 Ruff (0.14.13)

373-373: Unused method argument: auth_headers

(ARG002)

🤖 Prompt for AI Agents
In `@test/authoritative_fields/test_device_field_lock.py` around lines 373 - 397,
The test function test_field_source_tracking currently declares an unused
fixture parameter auth_headers; remove auth_headers from the
test_field_source_tracking signature in test_device_field_lock.py so the fixture
is not injected (leave the rest of the test using DeviceInstance, setDeviceData
and getDeviceData unchanged), then run lint/tests to ensure Ruff and tests pass.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@test/authoritative_fields/test_ip_format_and_locking.py`:
- Around line 89-160: The tests (e.g., test_valid_ipv4_format_accepted) accept
the side-effect fixture mock_ip_handlers but never use it, causing Ruff ARG001;
instead add a module-level marker and remove the unused parameter: import pytest
if not present and add pytestmark = pytest.mark.usefixtures("mock_ip_handlers")
at the top of the test module, then remove mock_ip_handlers from each test
signature (leave other fixtures like ip_test_db intact) so the fixture still
runs for side effects without being declared on every test.
🧹 Nitpick comments (1)
front/plugins/asuswrt_import/config.json (1)

196-196: Optional: Inconsistent formatting on CMD entry.

The opening brace and first property are on the same line with extra whitespace. Consider reformatting for consistency with the rest of the file:

Suggested formatting fix
-    {      "function": "CMD",
+    {
+      "function": "CMD",

Comment on lines +89 to +160
def test_valid_ipv4_format_accepted(ip_test_db, mock_ip_handlers):
"""Valid IPv4 address should be accepted and set as primary IPv4."""
cur = ip_test_db.cursor()

# Device with no IPs
cur.execute(
"""
INSERT INTO Devices (
devMac, devLastConnection, devPresentLastScan, devLastIP,
devPrimaryIPv4, devPrimaryIPv6, devVendor, devType, devIcon,
devName, devParentPort, devParentMAC, devSite, devSSID
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(
"AA:BB:CC:DD:EE:01",
"2025-01-01 00:00:00",
0,
"",
"",
"",
"Vendor",
"type",
"icon",
"Device",
"",
"",
"",
"",
),
)

# Scan discovers valid IPv4
cur.execute(
"""
INSERT INTO CurrentScan (
cur_MAC, cur_IP, cur_Vendor, cur_ScanMethod, cur_Name,
cur_LastQuery, cur_DateTime, cur_SyncHubNodeName,
cur_NetworkSite, cur_SSID, cur_NetworkNodeMAC, cur_PORT, cur_Type
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(
"AA:BB:CC:DD:EE:01",
"192.168.1.100",
"Vendor",
"ARPSCAN",
"",
"",
"2025-01-01 01:00:00",
"",
"",
"",
"",
"",
"",
),
)
ip_test_db.commit()

db = Mock()
db.sql_connection = ip_test_db
db.sql = cur

device_handling.update_devices_data_from_scan(db)

row = cur.execute(
"SELECT devLastIP, devPrimaryIPv4, devPrimaryIPv6 FROM Devices WHERE devMac = ?",
("AA:BB:CC:DD:EE:01",),
).fetchone()

assert row["devLastIP"] == "192.168.1.100", "Valid IPv4 should update devLastIP"
assert row["devPrimaryIPv4"] == "192.168.1.100", "Valid IPv4 should set devPrimaryIPv4"
assert row["devPrimaryIPv6"] == "", "IPv6 should remain empty"
Contributor

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

fd -t f "test_ip_format_and_locking.py"

Repository: netalertx/NetAlertX

Length of output: 118


🏁 Script executed:

cat -n test/authoritative_fields/test_ip_format_and_locking.py

Repository: netalertx/NetAlertX

Length of output: 19041


Use module-level pytestmark for side-effect fixtures instead of unused parameters.

The mock_ip_handlers fixture has no return value and is used only for its mocking side effects. All six test functions accept it as a parameter but never reference it, triggering Ruff's ARG001 warning. Use module-level pytestmark = pytest.mark.usefixtures("mock_ip_handlers") to activate the fixture without cluttering function signatures.

♻️ Proposed fix
 import pytest

 from server.scan import device_handling

+pytestmark = pytest.mark.usefixtures("mock_ip_handlers")
+
 @pytest.fixture
 def ip_test_db():

@@ -86,7 +89,7 @@ def mock_ip_handlers():
         yield


-def test_valid_ipv4_format_accepted(ip_test_db, mock_ip_handlers):
+def test_valid_ipv4_format_accepted(ip_test_db):
     """Valid IPv4 address should be accepted and set as primary IPv4."""

@@ -160,7 +163,7 @@ def test_valid_ipv4_format_accepted(ip_test_db):
     assert row["devPrimaryIPv6"] == "", "IPv6 should remain empty"


-def test_valid_ipv6_format_accepted(ip_test_db, mock_ip_handlers):
+def test_valid_ipv6_format_accepted(ip_test_db):
     """Valid IPv6 address should be accepted and set as primary IPv6."""

@@ -234,7 +237,7 @@ def test_valid_ipv6_format_accepted(ip_test_db):
     assert row["devPrimaryIPv6"] == "fe80::1", "Valid IPv6 should set devPrimaryIPv6"


-def test_invalid_ip_values_rejected(ip_test_db, mock_ip_handlers):
+def test_invalid_ip_values_rejected(ip_test_db):
     """Invalid IP values like (unknown), null, empty should be rejected."""

@@ -311,7 +314,7 @@ def test_invalid_ip_values_rejected(ip_test_db, mock_ip_handlers):
         ), f"Invalid IP '{invalid_ip}' should not overwrite valid IPv4"


-def test_ipv4_ipv6_mixed_in_multiple_scans(ip_test_db, mock_ip_handlers):
+def test_ipv4_ipv6_mixed_in_multiple_scans(ip_test_db):
     """Multiple scans with different IP types should set both primary fields correctly."""

@@ -423,7 +426,7 @@ def test_ipv4_ipv6_mixed_in_multiple_scans(ip_test_db, mock_ip_handlers):
     assert row2["devPrimaryIPv6"] == "fe80::1", "IPv6 should be set"


-def test_ipv4_address_format_variations(ip_test_db, mock_ip_handlers):
+def test_ipv4_address_format_variations(ip_test_db):
     """Test various valid IPv4 formats."""

@@ -478,7 +481,7 @@ def test_ipv4_address_format_variations(ip_test_db, mock_ip_handlers):
     assert row["devPrimaryIPv4"] == expected_ipv4, f"IPv4 {expected_ipv4} should be set for {mac}"


-def test_ipv6_address_format_variations(ip_test_db, mock_ip_handlers):
+def test_ipv6_address_format_variations(ip_test_db):
     """Test various valid IPv6 formats."""
🧰 Tools
🪛 Ruff (0.14.13)

89-89: Unused function argument: mock_ip_handlers

(ARG001)

🤖 Prompt for AI Agents
In `@test/authoritative_fields/test_ip_format_and_locking.py` around lines 89 -
160, The tests (e.g., test_valid_ipv4_format_accepted) accept the side-effect
fixture mock_ip_handlers but never use it, causing Ruff ARG001; instead add a
module-level marker and remove the unused parameter: import pytest if not
present and add pytestmark = pytest.mark.usefixtures("mock_ip_handlers") at the
top of the test module, then remove mock_ip_handlers from each test signature
(leave other fixtures like ip_test_db intact) so the fixture still runs for side
effects without being declared on every test.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
server/scan/device_handling.py (2)

314-324: Add explicit timeout to subprocess call.

Per coding guidelines, subprocess calls require explicit timeouts (minimum 60 seconds). The Popen.communicate() call has no timeout, which could block indefinitely if the command hangs.

🐛 Proposed fix
     local_mac_cmd = [
         "/sbin/ifconfig `ip -o route get 1 | sed 's/^.*dev \\([^ ]*\\).*$/\\1/;q'` | grep ether | awk '{print $2}'"
     ]
-    local_mac = (
-        subprocess.Popen(
-            local_mac_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
-        )
-        .communicate()[0]
-        .decode()
-        .strip()
-    )
+    try:
+        proc = subprocess.Popen(
+            local_mac_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+        )
+        local_mac = proc.communicate(timeout=60)[0].decode().strip()
+    except subprocess.TimeoutExpired:
+        proc.kill()
+        proc.communicate()
+        local_mac = ""

326-334: Add explicit timeout to second subprocess call.

Same issue as the previous subprocess call — missing timeout on communicate().

🐛 Proposed fix
     local_ip_cmd = ["ip -o route get 1 | sed 's/^.*src \\([^ ]*\\).*$/\\1/;q'"]
-    local_ip = (
-        subprocess.Popen(
-            local_ip_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
-        )
-        .communicate()[0]
-        .decode()
-        .strip()
-    )
+    try:
+        proc = subprocess.Popen(
+            local_ip_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
+        )
+        local_ip = proc.communicate(timeout=60)[0].decode().strip()
+    except subprocess.TimeoutExpired:
+        proc.kill()
+        proc.communicate()
+        local_ip = ""
🤖 Fix all issues with AI agents
In `@front/deviceDetailsEdit.php`:
- Around line 597-608: The deviceData source state is inconsistent:
deviceData[sourceField] is set to "" when unlocking while the UI displays
"NEWDEV"; update the assignment that sets deviceData[sourceField] (currently at
the top of the snippet) to use the same value as the UI sourceValue (i.e., set
it to shouldLock ? "LOCKED" : "NEWDEV") so deviceData[sourceField], sourceValue
and the title/attributes updated on sourceIndicator remain consistent (refer to
deviceData[sourceField], lockBtn, sourceIndicator, and sourceValue in the
snippet).
♻️ Duplicate comments (1)
test/authoritative_fields/test_ip_format_and_locking.py (1)

91-91: Use module-level pytestmark for side-effect fixtures.

The mock_ip_handlers fixture is passed to all test functions but never referenced, triggering Ruff ARG001. Apply the fix suggested in the previous review.

🧹 Nitpick comments (2)
test/authoritative_fields/test_ip_update_logic.py (1)

42-60: Minor: Column name casing inconsistency.

Line 56 uses cur_PORT but the production code in device_handling.py references cur_Port. SQLite is case-insensitive for column names so this won't break tests, but matching the production schema casing improves readability when comparing test fixtures to actual code.

Suggested fix
             cur_NetworkSite TEXT,
             cur_SSID TEXT,
             cur_NetworkNodeMAC TEXT,
-            cur_PORT TEXT,
+            cur_Port TEXT,
             cur_Type TEXT
front/deviceDetailsEdit.php (1)

274-301: Redundant variable and duplicate condition check.

fieldName2 on line 292 duplicates fieldName from line 275 (both are in scope). Additionally, both if-blocks have identical conditions and could be merged.

♻️ Suggested refactor to consolidate the blocks
                  // Add lock/unlock button for tracked fields (not for new devices)
                  const fieldName = setting.setKey.replace('NEWDEV_', '');
                  if (trackedFields[fieldName] && mac != "new") {
                    const sourceField = fieldName + "Source";
                    const currentSource = deviceData[sourceField] || "";
                    const isLocked = currentSource === "LOCKED";
                    const lockIcon = isLocked ? "fa-lock" : "fa-lock-open";
                    const lockTitle = isLocked ? getString("FieldLock_Unlock_Tooltip") : getString("FieldLock_Lock_Tooltip");
                    inlineControl += `<span class="input-group-addon pointer field-lock-btn"
                        onclick="toggleFieldLock('${mac}', '${fieldName}')"
                        title="${lockTitle}"
                        data-field="${fieldName}"
                        data-locked="${isLocked ? 1 : 0}">
                        <i class="fa-solid ${lockIcon}"></i>
                      </span>`;
-                  }
-
-                  // Add source indicator for tracked fields
-                  const fieldName2 = setting.setKey.replace('NEWDEV_', '');
-                  if (trackedFields[fieldName2] && mac != "new") {
-                    const sourceField = fieldName2 + "Source";
-                    const currentSource = deviceData[sourceField] || "NEWDEV";
-                    const sourceTitle = getString("FieldLock_Source_Label") + currentSource;
-                    const sourceColor = currentSource === "USER" ? "text-warning" : (currentSource === "LOCKED" ? "text-danger" : "text-muted");
+
+                    // Add source indicator
+                    const displaySource = currentSource || "NEWDEV";
+                    const sourceTitle = getString("FieldLock_Source_Label") + displaySource;
+                    const sourceColor = displaySource === "USER" ? "text-warning" : (displaySource === "LOCKED" ? "text-danger" : "text-muted");
                    inlineControl += `<span class="input-group-addon pointer ${sourceColor}" title="${sourceTitle}">
-                       ${currentSource}
+                       ${displaySource}
                      </span>`;
                  }

Comment on lines 597 to 608
// Update local source state
deviceData[sourceField] = shouldLock ? "LOCKED" : "";

// Update source indicator
const sourceIndicator = lockBtn.next();
if (sourceIndicator.hasClass("input-group-addon")) {
const sourceValue = shouldLock ? "LOCKED" : "NEWDEV";
const sourceClass = shouldLock ? "input-group-addon text-danger" : "input-group-addon text-muted";
sourceIndicator.text(sourceValue);
sourceIndicator.attr("class", sourceClass);
sourceIndicator.attr("title", getString("FieldLock_Source_Label") + sourceValue);
}
Contributor

⚠️ Potential issue | 🟡 Minor

Inconsistent source state on unlock.

Line 598 sets deviceData[sourceField] to an empty string "" when unlocking, but line 603 displays "NEWDEV" in the UI. This inconsistency could cause confusion during debugging or if additional logic depends on the source value.

🔧 Suggested fix for consistency
                // Update local source state
-               deviceData[sourceField] = shouldLock ? "LOCKED" : "";
+               deviceData[sourceField] = shouldLock ? "LOCKED" : "NEWDEV";
📝 Committable suggestion


Suggested change
// Update local source state
deviceData[sourceField] = shouldLock ? "LOCKED" : "";
// Update source indicator
const sourceIndicator = lockBtn.next();
if (sourceIndicator.hasClass("input-group-addon")) {
const sourceValue = shouldLock ? "LOCKED" : "NEWDEV";
const sourceClass = shouldLock ? "input-group-addon text-danger" : "input-group-addon text-muted";
sourceIndicator.text(sourceValue);
sourceIndicator.attr("class", sourceClass);
sourceIndicator.attr("title", getString("FieldLock_Source_Label") + sourceValue);
}
// Update local source state
deviceData[sourceField] = shouldLock ? "LOCKED" : "NEWDEV";
// Update source indicator
const sourceIndicator = lockBtn.next();
if (sourceIndicator.hasClass("input-group-addon")) {
const sourceValue = shouldLock ? "LOCKED" : "NEWDEV";
const sourceClass = shouldLock ? "input-group-addon text-danger" : "input-group-addon text-muted";
sourceIndicator.text(sourceValue);
sourceIndicator.attr("class", sourceClass);
sourceIndicator.attr("title", getString("FieldLock_Source_Label") + sourceValue);
}
🤖 Prompt for AI Agents
In `@front/deviceDetailsEdit.php` around lines 597 - 608, The deviceData source
state is inconsistent: deviceData[sourceField] is set to "" when unlocking while
the UI displays "NEWDEV"; update the assignment that sets
deviceData[sourceField] (currently at the top of the snippet) to use the same
value as the UI sourceValue (i.e., set it to shouldLock ? "LOCKED" : "NEWDEV")
so deviceData[sourceField], sourceValue and the title/attributes updated on
sourceIndicator remain consistent (refer to deviceData[sourceField], lockBtn,
sourceIndicator, and sourceValue in the snippet).

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@front/plugins/unifi_api_import/config.json`:
- Around line 500-567: The SET_EMPTY option list includes unsupported fields;
remove "devType" and "devSourcePlugin" from the SET_EMPTY options array so only
fields tracked by the backend FIELD_SOURCE_MAP (e.g., devMac, devName, devFQDN,
devLastIP, devVendor, devSSID, devParentMAC, devParentPort, devParentRelType,
devVlan) remain; update the "options" value in the SET_EMPTY object (and any
related defaults or UI elementOptions if present) to drop those two entries so
the frontend cannot configure fields the authoritative handler (FIELD_SOURCE_MAP
in server/db/authoritative_handler.py) doesn't support.

In `@front/plugins/vendor_update/config.json`:
- Around line 229-260: Update the SET_ALWAYS configuration object: remove
"devMac" from both the options array and the default_value array so they only
contain "devVendor" and "devName"; specifically edit the object whose "function"
is "SET_ALWAYS" and change the "options" field to ["devVendor","devName"] and
the "default_value" field to ["devVendor","devName"] to reflect that the plugin
only writes vendor and name.

Comment on lines +500 to +567
},
{
"function": "SET_ALWAYS",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
"transformers": []
}
]
},
"default_value": ["devMac", "devLastIP", "devName", "devParentMAC"],
"options": [
"devMac",
"devLastIP",
"devName",
"devParentMAC"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set always columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are treated as authoritative and will overwrite existing values, including those set by other plugins, unless the current value was explicitly set by the user (<code>Source = USER</code> or <code>Source = LOCKED</code>)."
}
]
},
{
"function": "SET_EMPTY",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
"transformers": []
}
]
},
"default_value": [],
"options": [
"devMac",
"devLastIP",
"devName",
"devParentMAC",
"devType",
"devSourcePlugin"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set empty columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are only overwritten if they are empty (<code>NULL</code> / empty string) or if their Source is set to <code>NEWDEV</code>"
}
]
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find backend handling of SET_ALWAYS/SET_EMPTY and related validation
rg -n "SET_ALWAYS|SET_EMPTY" --type py

Repository: netalertx/NetAlertX

Length of output: 5587


🏁 Script executed:

# Search for get_setting_value usage and allowlist validation
rg -n "get_setting_value|authoritative|devMac|devLastIP" --type py | head -50

Repository: netalertx/NetAlertX

Length of output: 5311


🏁 Script executed:

# Look for server initialisation code
fd -n "initialise\.py|init\.py" --type f | head -20

Repository: netalertx/NetAlertX

Length of output: 293


🏁 Script executed:

# Check config structure and how settings are loaded
rg -n "config\.json|ccd\(\)" --type py | head -30

Repository: netalertx/NetAlertX

Length of output: 861


🏁 Script executed:

# Examine the authoritative_handler.py to see how SET_ALWAYS/SET_EMPTY are validated
cat -n server/db/authoritative_handler.py | head -120

Repository: netalertx/NetAlertX

Length of output: 4974


🏁 Script executed:

# Check what fields are allowed/validated in the handler
rg -A 5 -B 5 "VALID|ALLOWED|devMac|devLastIP|devName|devParentMAC|devType|devSourcePlugin" server/db/authoritative_handler.py

Repository: netalertx/NetAlertX

Length of output: 4803


🏁 Script executed:

# Check if there's an allowlist defined anywhere for these fields
rg -n "allowed.*field|field.*list|option" server/db/authoritative_handler.py

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Check the full config structure for unifi_api_import to see how settings are referenced
head -100 front/plugins/unifi_api_import/config.json

Repository: netalertx/NetAlertX

Length of output: 2303


🏁 Script executed:

# Verify the guideline compliance - check if SET_ALWAYS/SET_EMPTY are read using get_setting_value
rg -B 3 -A 3 "SET_ALWAYS_key|SET_EMPTY_key|get_setting_value" server/db/authoritative_handler.py | head -40

Repository: netalertx/NetAlertX

Length of output: 663


Remove devType and devSourcePlugin from SET_EMPTY options—these fields are not supported by the authoritative field handler.

The backend's FIELD_SOURCE_MAP (server/db/authoritative_handler.py:24–35) only tracks source for: devMac, devName, devFQDN, devLastIP, devVendor, devSSID, devParentMAC, devParentPort, devParentRelType, devVlan. Fields not in this map cannot be enforced by the authoritative update rules, so users selecting devType or devSourcePlugin in SET_EMPTY will configure settings that the backend silently ignores.

🤖 Prompt for AI Agents
In `@front/plugins/unifi_api_import/config.json` around lines 500 - 567, The
SET_EMPTY option list includes unsupported fields; remove "devType" and
"devSourcePlugin" from the SET_EMPTY options array so only fields tracked by the
backend FIELD_SOURCE_MAP (e.g., devMac, devName, devFQDN, devLastIP, devVendor,
devSSID, devParentMAC, devParentPort, devParentRelType, devVlan) remain; update
the "options" value in the SET_EMPTY object (and any related defaults or UI
elementOptions if present) to drop those two entries so the frontend cannot
configure fields the authoritative handler (FIELD_SOURCE_MAP in
server/db/authoritative_handler.py) doesn't support.
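The SET_ALWAYS / SET_EMPTY semantics quoted in the config descriptions above can be condensed into a decision sketch. This is an illustration of the documented rules only; the actual can_overwrite_field() in server/db/authoritative_handler.py may use a different signature, and the per-field checks are ideally expressed as SQL UPDATE conditions rather than per-row Python:

```python
def can_overwrite_field(new_value, current_value, source, field,
                        set_always, set_empty):
    """Return True if a plugin-supplied value may overwrite the field."""
    if new_value in (None, ""):
        return False            # plugins must provide a non-empty value
    if current_value in (None, ""):
        return True             # empty fields may always be filled
    if source in ("USER", "LOCKED"):
        return False            # user-set or locked values always win
    if field in set_always:
        return True             # authoritative column for this plugin
    if field in set_empty:
        return source == "NEWDEV"  # only defaults may be replaced
    return False                # plugin is not authoritative for this field

# USER source blocks even a SET_ALWAYS column:
print(can_overwrite_field("NewName", "OldName", "USER",
                          "devName", ["devName"], []))  # False
```

The early returns mirror the rule ordering in the PR description: non-empty input, then empty-field fill, then USER/LOCKED protection, then the plugin's SET_ALWAYS / SET_EMPTY grants.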

Comment on lines +229 to +260
{
"function": "SET_ALWAYS",
"type": {
"dataType": "array",
"elements": [
{
"elementType": "select",
"elementOptions": [{ "multiple": "true", "ordeable": "true" }],
"transformers": []
}
]
},
"default_value": ["devMac", "devVendor"],
"options": [
"devMac",
"devVendor",
"devName"
],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Set always columns"
}
],
"description": [
{
"language_code": "en_us",
"string": "These columns are treated as authoritative and will overwrite existing values, including those set by other plugins, unless the current value was explicitly set by the user (<code>Source = USER</code> or <code>Source = LOCKED</code>)."
}
]
},
Contributor

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for field emissions in vendor_update script
find . -path "*vendor_update*" -name "script.py" -o -name "*.py" | grep -i vendor_update | head -5

Repository: netalertx/NetAlertX

Length of output: 102


🏁 Script executed:

#!/bin/bash
# Look for the vendor_update plugin directory structure
fd -t f "vendor_update" --max-depth 5 | head -20

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

#!/bin/bash
# Search for SET_ALWAYS references and device field emissions
rg -n "dev(Mac|Vendor|Name|LastIP|SourcePlugin)" -A 2 -B 2 | head -100

Repository: netalertx/NetAlertX

Length of output: 8151


🏁 Script executed:

#!/bin/bash
# Search for SET_ALWAYS usage across Python, JS, TS files
rg -n "SET_ALWAYS" -g '*.py' -g '*.js' -g '*.ts' | head -50

Repository: netalertx/NetAlertX

Length of output: 4318


🏁 Script executed:

cat front/plugins/vendor_update/script.py | head -200

Repository: netalertx/NetAlertX

Length of output: 4677


🏁 Script executed:

# Search for field assignments in vendor_update script
rg -n "dev(Mac|Vendor|Name)" front/plugins/vendor_update/script.py -A 2 -B 2

Repository: netalertx/NetAlertX

Length of output: 662


🏁 Script executed:

rg -n "watched1|watched2" front/plugins/plugin_helper.py -B 5 -A 10 | head -150

Repository: netalertx/NetAlertX

Length of output: 2410


🏁 Script executed:

# Also check how add_object updates the database
rg -n "def add_object" front/plugins/plugin_helper.py -A 30

Repository: netalertx/NetAlertX

Length of output: 876


🏁 Script executed:

# Search for where plugin results are written/processed
rg -n "watched1|watched2" server/ -g '*.py' | grep -v "test" | head -50

Repository: netalertx/NetAlertX

Length of output: 619


🏁 Script executed:

# Look for plugin result processing in scan
rg -n "def.*plugin.*result" server/ -g '*.py' -i | head -20

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Search for device update logic related to plugin objects
rg -n "def update_device_from_plugin" server/ -g '*.py' -A 30

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

cat front/plugins/vendor_update/config.json | head -100

Repository: netalertx/NetAlertX

Length of output: 2684


🏁 Script executed:

# Search for watched field mappings in config or plugin processing
rg -n "watched" front/plugins/vendor_update/ -g '*.json' -B 3 -A 3

Repository: netalertx/NetAlertX

Length of output: 3418


🏁 Script executed:

# Look for the plugin result matching/update logic
rg -n "primaryId.*foreignKey\|foreignKey.*primaryId" server/ -g '*.py' -B 5 -A 10 | head -100

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Find device update logic that uses watched fields
rg -n "def.*update.*device\|devVendor.*watched\|watched.*devVendor" server/ -g '*.py' -B 3 -A 10 | head -150

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Search for SET_ALWAYS implementation to see which fields it actually controls
rg -n "SET_ALWAYS\|set_always" server/ -g '*.py' -B 5 -A 15 | head -200

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Check if there's a plugin field mapping configuration
rg -n "primaryId\|watched1\|watched2" front/plugins/vendor_update/config.json -B 5 -A 5

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Search for where plugin result files are processed
rg -n "write_result_file\|read.*result.*file" server/ -g '*.py' -B 3 -A 15 | head -150

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Look for device update functions that could use watched fields
rg -n "def.*update" server/ -g '*.py' | grep -i device | head -30

Repository: netalertx/NetAlertX

Length of output: 738


🏁 Script executed:

# Search for where plugin outputs are parsed and applied to devices
fd "plugin" server/ -t f -name "*.py" | head -20

Repository: netalertx/NetAlertX

Length of output: 293


🏁 Script executed:

# Read the device update function that likely processes plugin data
sed -n '55,150p' server/scan/device_handling.py

Repository: netalertx/NetAlertX

Length of output: 4527


🏁 Script executed:

# Search for plugin data processing in device_handling
rg -n "plugin\|watched" server/scan/device_handling.py -i -B 2 -A 5 | head -200

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Look for where devVendor is set/updated from plugin
rg -n "devVendor" server/scan/ -g '*.py' -B 3 -A 3 | head -100

Repository: netalertx/NetAlertX

Length of output: 3659


🏁 Script executed:

# Search for where plugin results are applied to devices (look for update_vendors function)
sed -n '220,250p' server/scan/device_handling.py

Repository: netalertx/NetAlertX

Length of output: 1126


🏁 Script executed:

# Find query_MAC_vendor and see how it updates devices
rg -n "query_MAC_vendor\|def query_MAC_vendor" server/ -B 5 -A 15 | head -100

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Search for how plugin objects are processed
rg -n "Plugin_Objects\|plugin_objects" server/ -g '*.py' -B 3 -A 10 | head -150

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Search for where plugin results from result files are processed
rg -n "Plugin_Objects\|last_result" server/ -g '*.py' -B 3 -A 10 | head -150

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Look for the plugin result file parsing
fd "plugin" server/ -type f -name "*.py" | head -20

Repository: netalertx/NetAlertX

Length of output: 233


🏁 Script executed:

# Search for device updates from authoritative handler or plugin mechanism
rg -n "def.*update.*authoritative\|def.*plugin.*update\|authoritative.*field" server/db/ -g '*.py' -B 2 -A 15 | head -200

Repository: netalertx/NetAlertX

Length of output: 45


🏁 Script executed:

# Double-check what vendor_update script actually outputs
cat front/plugins/vendor_update/script.py | grep -A 20 "plugin_objects.add_object"

Repository: netalertx/NetAlertX

Length of output: 982


🏁 Script executed:

# Check if devMac is ever written in plugin result processing
rg -n "devMac.*=" front/plugins/ -g '*.py' -B 2 -A 2 | head -50

Repository: netalertx/NetAlertX

Length of output: 1042


🏁 Script executed:

# Search for how watched fields are mapped to device columns
rg -n "watched1\|watched2\|watched3\|watched4" server/scan/ -g '*.py' -B 5 -A 5 | head -100

Repository: netalertx/NetAlertX

Length of output: 45


Remove devMac from the SET_ALWAYS options list — the plugin never updates it.

The vendor_update script only emits vendor and name data (watched1 and watched2); it uses the MAC address only as an identifier (primaryId/foreignKey), never as a field to write. Keeping devMac in the options is misleading since SET_ALWAYS cannot control a field that is never modified by the plugin. Update options to ["devVendor", "devName"] and default_value to ["devVendor", "devName"].

🤖 Prompt for AI Agents
In `@front/plugins/vendor_update/config.json` around lines 229 - 260, Update the
SET_ALWAYS configuration object: remove "devMac" from both the options array and
the default_value array so they only contain "devVendor" and "devName";
specifically edit the object whose "function" is "SET_ALWAYS" and change the
"options" field to ["devVendor","devName"] and the "default_value" field to
["devVendor","devName"] to reflect that the plugin only writes vendor and name.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @.github/workflows/docker_dev_unsafe.yml:
- Around line 18-23: The conditional uses github.event.head_commit.message
directly which can be null for pull_request events; update the if expression in
the workflow (the if: block that uses contains(github.event.head_commit.message,
'PUSHPROD')) to guard against null by coalescing or checking existence first
(e.g., replace the direct use of github.event.head_commit.message with a safe
expression such as using a logical guard or coalesce: github.event.head_commit
&& github.event.head_commit.message or github.event.head_commit.message || ''
before calling contains), so contains never receives null and the condition
behaves correctly for pull_request events.
🧹 Nitpick comments (2)
.github/workflows/docker_dev_unsafe.yml (2)

43-49: Unused workflow steps.

The getargs and get_version steps generate outputs that are never consumed by subsequent steps (only get_version appears in debug output). Consider removing these if they're not needed, or document their intended purpose.

Remove unused steps
-      - name: Set up dynamic build ARGs
-        id: getargs
-        run: echo "version=$(cat ./stable/VERSION)" >> $GITHUB_OUTPUT
-
-      - name: Get release version
-        id: get_version
-        run: echo "version=Dev" >> $GITHUB_OUTPUT
-
-      # --- debug output
-      - name: Debug version
-        run: |
-          echo "GITHUB_REF: $GITHUB_REF"
-          echo "Version: '${{ steps.get_version.outputs.version }}'"

75-80: Consider skipping GHCR login on pull requests.

The first GHCR login step runs unconditionally, but since the build step doesn't push on PRs (line 102: push: ${{ github.event_name != 'pull_request' }}), this login is unnecessary for PR events.

Add condition to skip login on PRs
       - name: Login GHCR (netalertx org)
+        if: github.event_name != 'pull_request'
         uses: docker/login-action@v3
         with:
           registry: ghcr.io
           username: ${{ github.actor }}
           password: ${{ secrets.GITHUB_TOKEN }}

Comment on lines +18 to +23
if: >
!contains(github.event.head_commit.message, 'PUSHPROD') &&
(
github.repository == 'jokob-sk/NetAlertX' ||
github.repository == 'netalertx/NetAlertX'
)
Contributor

⚠️ Potential issue | 🟡 Minor

head_commit.message may be null for pull request events.

github.event.head_commit.message is not populated for pull_request events, so contains() would receive null and the condition may not evaluate as intended. Consider a more robust check:

Suggested improvement
     if: >
-      !contains(github.event.head_commit.message, 'PUSHPROD') &&
+      !contains(github.event.head_commit.message || '', 'PUSHPROD') &&
       (
         github.repository == 'jokob-sk/NetAlertX' ||
         github.repository == 'netalertx/NetAlertX'
       )
🤖 Prompt for AI Agents
In @.github/workflows/docker_dev_unsafe.yml around lines 18 - 23, The
conditional uses github.event.head_commit.message directly which can be null for
pull_request events; update the if expression in the workflow (the if: block
that uses contains(github.event.head_commit.message, 'PUSHPROD')) to guard
against null by coalescing or checking existence first (e.g., replace the direct
use of github.event.head_commit.message with a safe expression such as using a
logical guard or coalesce: github.event.head_commit &&
github.event.head_commit.message or github.event.head_commit.message || ''
before calling contains), so contains never receives null and the condition
behaves correctly for pull_request events.

Signed-off-by: jokob-sk <jokob.sk@gmail.com>
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@server/db/db_upgrade.py`:
- Line 12: Update the placeholder version tag in the comment that defines the
expected Devices table columns (the line containing "Define the expected Devices
table columns (hardcoded base schema) [v26.1/2.XX]") by replacing "2.XX" with
the actual minor/patch version number so the comment reads with a concrete
version (e.g., "[v26.1/2.YY]" or the correct full version string) to avoid
leaving an incomplete placeholder in documentation.
- Around line 86-96: The current validation in ensure_column uses
EXPECTED_DEVICES_COLUMNS and treats an empty expected as failure, so any table
!= "Devices" sets expected=[] and triggers an abort; change the logic in
ensure_column to only perform the expected-schema check when a non-empty
EXPECTED_DEVICES_COLUMNS (or a table-specific schema) exists for the given
table: leave expected = EXPECTED_DEVICES_COLUMNS if table == "Devices" else []
but replace the condition `if not expected or column_name not in expected:` with
a two-step check that skips validation when expected is empty and only checks
`column_name not in expected` when expected is non-empty, or alternatively
fetch/lookup a table-specific EXPECTED_* schema and validate only if found
(references: ensure_column, EXPECTED_DEVICES_COLUMNS, table, column_name).
♻️ Duplicate comments (1)
front/deviceDetailsEdit.php (1)

597-608: Inconsistent source state on unlock.

Line 598 sets deviceData[sourceField] to "" when unlocking, but line 603 displays "UNKNOWN" in the UI. This causes the internal state to diverge from the displayed value, which could cause issues if other code reads deviceData[sourceField].

🔧 Suggested fix for consistency
                // Update local source state
-               deviceData[sourceField] = shouldLock ? "LOCKED" : "";
+               deviceData[sourceField] = shouldLock ? "LOCKED" : "UNKNOWN";
🧹 Nitpick comments (3)
front/deviceDetailsEdit.php (3)

114-126: Consider using a Set for cleaner semantics.

The trackedFields object works correctly, but a Set would express the intent more clearly and avoid the need for boolean values.

♻️ Optional refactor using Set
-            const trackedFields = {
-              "devMac": true,
-              "devName": true,
-              "devLastIP": true,
-              "devVendor": true,
-              "devFQDN": true,
-              "devSSID": true,
-              "devParentMAC": true,
-              "devParentPort": true,
-              "devParentRelType": true,
-              "devVlan": true
-            };
+            const trackedFields = new Set([
+              "devMac", "devName", "devLastIP", "devVendor", "devFQDN",
+              "devSSID", "devParentMAC", "devParentPort", "devParentRelType", "devVlan"
+            ]);

Then update the checks to use trackedFields.has(fieldName) instead of trackedFields[fieldName].


291-301: Remove redundant variable fieldName2.

fieldName2 computes the same value as fieldName declared on line 275. Reuse the existing variable.

♻️ Proposed fix
-                  // Add source indicator for tracked fields
-                  const fieldName2 = setting.setKey.replace('NEWDEV_', '');
-                  if (trackedFields[fieldName2] && mac != "new") {
-                    const sourceField = fieldName2 + "Source";
+                  // Add source indicator for tracked fields
+                  if (trackedFields[fieldName] && mac != "new") {
+                    const sourceField = fieldName + "Source";
                     const currentSource = deviceData[sourceField] || "UNKNOWN";

600-608: Consider using a more robust selector for the source indicator.

Using .next() assumes the source indicator is always the immediate sibling of the lock button. This is fragile if the HTML structure changes.

♻️ Proposed approach using data attribute

When generating the source indicator (around line 298), add a data attribute:

inlineControl += `<span class="input-group-addon pointer ${sourceColor} source-indicator"
   data-source-field="${fieldName}"
   title="${sourceTitle}">
   ${currentSource}
  </span>`;

Then update the selector:

-               const sourceIndicator = lockBtn.next();
-               if (sourceIndicator.hasClass("input-group-addon")) {
+               const sourceIndicator = $(`.source-indicator[data-source-field="${fieldName}"]`);
+               if (sourceIndicator.length) {

from messaging.in_app import write_notification # noqa: E402 [flake8 lint suppression]


# Define the expected Devices table columns (hardcoded base schema) [v26.1/2.XX]
Contributor

⚠️ Potential issue | 🟡 Minor

Version tag placeholder in comment.

The version tag [v26.1/2.XX] contains 2.XX which appears to be an incomplete placeholder. Consider updating to the actual version number for accurate documentation.

🤖 Prompt for AI Agents
In `@server/db/db_upgrade.py` at line 12, Update the placeholder version tag in
the comment that defines the expected Devices table columns (the line containing
"Define the expected Devices table columns (hardcoded base schema)
[v26.1/2.XX]") by replacing "2.XX" with the actual minor/patch version number so
the comment reads with a concrete version (e.g., "[v26.1/2.YY]" or the correct
full version string) to avoid leaving an incomplete placeholder in
documentation.

- Updated `get_source_for_field_update_with_value` to determine source values based on new field values, including handling for empty and unknown values.
- Introduced `get_overwrite_sql_clause` to build SQL conditions for authoritative overwrite checks based on plugin settings.
- Enhanced `update_devices_data_from_scan` to utilize new authoritative settings and conditions for updating device fields.
- Added new tests for source value determination and device creation to ensure proper handling of source fields.
- Created in-memory SQLite database fixtures for testing device creation and updates.
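The PR description's point that overwrite checks belong inside SQL UPDATE conditions, evaluated per statement rather than per row in Python, could look roughly like the sketch below. The function name echoes get_overwrite_sql_clause from the commit message, but the clause format and the per-field *Source column convention are assumptions, not the actual implementation:

```python
def get_overwrite_sql_clause(field, set_always, set_empty):
    """Build a WHERE fragment gating an UPDATE of `field` so the
    authoritative rules run inside SQLite instead of row-by-row Python."""
    source_col = field + "Source"
    # Never touch user-set or locked values.
    guard = f"IFNULL({source_col}, '') NOT IN ('USER', 'LOCKED')"
    if field in set_always:
        return guard                                    # may overwrite any value
    if field in set_empty:
        return f"{guard} AND IFNULL({field}, '') = ''"  # may fill blanks only
    return "0"                                          # plugin may not write this field
```

The returned fragment would be AND-ed into the UPDATE's WHERE clause, so a locked or user-set row is simply not matched and no per-row branching is needed.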
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
server/scan/device_handling.py (1)

906-1044: Normalize MACs before writing to Devices.
cur_MAC / cur_NetworkNodeMAC are persisted without normalization. Use normalize_mac() before inserts/updates to enforce consistent storage and prevent duplicate identities.

As per coding guidelines, ...
Based on learnings, ...
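A hypothetical normalize_mac along the lines this comment asks for; the repository's actual helper may canonicalise differently, so treat this as an illustration of why normalising before persistence prevents duplicate identities:

```python
import re

def normalize_mac(mac):
    """Lower-case and colon-delimit a MAC so 'AA-BB-CC-DD-EE-FF' and
    'aabb.ccdd.eeff' store identically (illustrative only)."""
    hex_digits = re.sub(r"[^0-9a-fA-F]", "", mac or "")
    if len(hex_digits) != 12:
        return (mac or "").lower()        # leave malformed values recognisable
    return ":".join(hex_digits[i:i + 2] for i in range(0, 12, 2)).lower()
```

Calling this on cur_MAC and cur_NetworkNodeMAC just before the insert/update would make differently formatted scan results resolve to one device row.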

🤖 Fix all issues with AI agents
In `@front/deviceDetailsEdit.php`:
- Around line 608-619: The code removes the sourceIndicator element on unlock so
subsequent locks can't update it; modify the lock handling around lockBtn and
sourceIndicator so that when shouldLock is true and sourceIndicator is not found
(sourceIndicator.length === 0) you create and insert a new element (same
tag/class/title/text as when updating) next to lockBtn (e.g., insertAfter or
append as appropriate), otherwise update the existing one; ensure you still set
text, class and title using getString("FieldLock_Source_Label") + sourceValue to
match the original behavior.
♻️ Duplicate comments (1)
front/deviceDetailsEdit.php (1)

601-603: Keep local source state consistent on unlock.
Clearing the source to "" on unlock can desync the client from server state until reload. Consider restoring the non-locked source value (e.g., NEWDEV or the server response).

🔧 Suggested fix
-        deviceData[sourceField] = shouldLock ? "LOCKED" : "";
+        deviceData[sourceField] = shouldLock ? "LOCKED" : "NEWDEV";
🧹 Nitpick comments (1)
server/scan/device_handling.py (1)

64-69: Avoid broad exception without logging for PRAGMA lookup.
Catching Exception silently can mask DB/permission issues; consider narrowing the exception and logging the failure.

♻️ Suggested tweak
+import sqlite3
 ...
-    try:
-        device_columns = {row["name"] for row in sql.execute("PRAGMA table_info(Devices)").fetchall()}
-    except Exception:
-        device_columns = set()
+    try:
+        device_columns = {row["name"] for row in sql.execute("PRAGMA table_info(Devices)").fetchall()}
+    except sqlite3.Error as exc:
+        mylog("debug", f"[Update Devices] Failed to read Devices schema: {exc}")
+        device_columns = set()

Comment on lines +608 to +619
// Update source indicator
const sourceIndicator = lockBtn.next();
if (sourceIndicator.hasClass("input-group-addon")) {
if (shouldLock) {
const sourceValue = "LOCKED";
const sourceClass = "input-group-addon pointer text-danger";
sourceIndicator.text(sourceValue);
sourceIndicator.attr("class", sourceClass);
sourceIndicator.attr("title", getString("FieldLock_Source_Label") + sourceValue);
} else {
sourceIndicator.remove();
}
Contributor

⚠️ Potential issue | 🟡 Minor

Re-add source indicator when re-locking without refresh.
After an unlock, the indicator is removed; locking again won’t recreate it, so the UI can miss the “LOCKED” badge.

🐛 Proposed fix
-        const sourceIndicator = lockBtn.next();
-        if (sourceIndicator.hasClass("input-group-addon")) {
-          if (shouldLock) {
-            const sourceValue = "LOCKED";
-            const sourceClass = "input-group-addon pointer text-danger";
-            sourceIndicator.text(sourceValue);
-            sourceIndicator.attr("class", sourceClass);
-            sourceIndicator.attr("title", getString("FieldLock_Source_Label") + sourceValue);
-          } else {
-            sourceIndicator.remove();
-          }
-        }
+        const sourceIndicator = lockBtn.next(".input-group-addon");
+        if (shouldLock) {
+          const sourceValue = "LOCKED";
+          const sourceClass = "input-group-addon pointer text-danger";
+          if (sourceIndicator.length) {
+            sourceIndicator.text(sourceValue);
+            sourceIndicator.attr("class", sourceClass);
+            sourceIndicator.attr("title", getString("FieldLock_Source_Label") + sourceValue);
+          } else {
+            lockBtn.after(
+              `<span class="${sourceClass}" title="${getString("FieldLock_Source_Label") + sourceValue}">${sourceValue}</span>`
+            );
+          }
+        } else if (sourceIndicator.length) {
+          sourceIndicator.remove();
+        }
🤖 Prompt for AI Agents
In `@front/deviceDetailsEdit.php` around lines 608 - 619, The code removes the
sourceIndicator element on unlock so subsequent locks can't update it; modify
the lock handling around lockBtn and sourceIndicator so that when shouldLock is
true and sourceIndicator is not found (sourceIndicator.length === 0) you create
and insert a new element (same tag/class/title/text as when updating) next to
lockBtn (e.g., insertAfter or append as appropriate), otherwise update the
existing one; ensure you still set text, class and title using
getString("FieldLock_Source_Label") + sourceValue to match the original
behavior.
