Non-Linux Dev Envs 0.9.8 #232

Open

DXCanas wants to merge 11 commits into JaapvanEkris:0.9.8-In-Development from DXCanas:non-linux-dev-envs

Conversation


@DXCanas DXCanas commented Apr 6, 2026

#194, retargeted to 0.9.8 :)

Please do confirm functionality; this was a rush job during my lunch break and the rebase got funky.

DXCanas added 11 commits April 6, 2026 14:31
…y with graceful fallbacks

Changed BleManager to lazy-load native modules only on the Linux platform and throw a descriptive error on unsupported platforms.
Updated PeripheralManager error handling to downgrade BLE initialization errors from error to warn level and automatically disable BLE/HRM modes when unavailable.
Made GPIO timer service conditional on Linux platform in server.js with warning message for other platforms.
Introduces the `simulateWithoutHardware` flag to the default configuration and config validation. This enables developers to bypass GPIO and BLE hardware checks to simulate the environment on headless servers or non-Linux systems without causing error logs.

AI-assisted-by: Gemini 3.1 Pro
Removes the static `process.platform === 'linux'` check in favor of dynamically importing `hci-socket` and `ble-host`. Uses the new `simulateWithoutHardware` config flag to bypass BLE initialization when requested. Refactors `open()` to use guard clauses, preventing deep nesting and improving readability. Catches and logs missing dependencies cleanly.

AI-assisted-by: Gemini 3.1 Pro
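The guard-clause structure of the reworked `open()` might look roughly like the following sketch. The `simulateWithoutHardware` flag and the `hci-socket` module name come from the commit messages; the function name, log text, and error wording are illustrative, not the PR's actual code.

```javascript
// Hypothetical sketch of BleManager's guard-clause initialisation: bail out
// early for simulation or unsupported platforms, then lazy-load the module.
async function openBle (config) {
  if (config.simulateWithoutHardware) {
    console.warn('BleManager: simulateWithoutHardware is set, skipping BLE initialisation')
    return null
  }
  if (process.platform !== 'linux') {
    throw new Error('BLE is only supported on Linux')
  }
  try {
    // Deferred load: the native module is only touched on supported platforms
    return await import('hci-socket')
  } catch (error) {
    throw new Error(`BLE dependencies missing: ${error.message}`)
  }
}
```

With guard clauses, each bypass reason is a single early return instead of a nested branch, which is what keeps the function flat.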
Removes the static `process.platform === 'linux'` check from `server.js` and moves hardware checking into `GpioTimerService.js`. Uses dynamic imports for `pigpio` wrapped in a try/catch block. If `simulateWithoutHardware` is true or if `pigpio` fails to load (e.g., on non-Raspberry Pi devices or without root), the service logs a warning and exits cleanly instead of crashing the app.

AI-assisted-by: Gemini 3.1 Pro
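The dynamic `pigpio` import described above could be sketched as below. The `simulateWithoutHardware` flag is from this PR; the function name and warning text are made up for illustration.

```javascript
// Hypothetical sketch: dynamically import 'pigpio' and degrade to a warning
// instead of crashing when the module cannot be loaded (e.g. on a non-Pi
// machine or without root).
async function tryLoadPigpio (config) {
  if (config.simulateWithoutHardware) {
    console.warn('GpioTimerService: simulateWithoutHardware is set, GPIO disabled')
    return null
  }
  try {
    return await import('pigpio')
  } catch (error) {
    console.warn(`GpioTimerService: pigpio unavailable, GPIO disabled (${error.message})`)
    return null
  }
}
```

The caller can then exit the service cleanly when `null` is returned, rather than letting a failed `import` crash the whole app.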
Updates `GpioTimerService` to explicitly read `/proc/device-tree/model` to confirm it is running on a supported Raspberry Pi (3, 4, or Zero 2W) before attempting to load `pigpio`.
Updates `BleManager` to explicitly check for Linux before attempting to load `hci-socket`.
Both services now log explicit messages detailing their initialization decisions as requested by maintainers, ensuring clear visibility into why hardware modules are bypassed or loaded.

AI-assisted-by: Gemini 3.1 Pro
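The device-tree check could look roughly like this. The supported model strings follow the commit message (Pi 3, 4, Zero 2W); the function name and exact matching logic are assumptions.

```javascript
import { readFileSync } from 'node:fs'

// Hypothetical sketch of the /proc/device-tree/model check described above.
function isSupportedPi () {
  let model
  try {
    model = readFileSync('/proc/device-tree/model', 'utf8')
  } catch {
    // File absent: not a Raspberry Pi (or not Linux at all)
    return false
  }
  return ['Raspberry Pi 3', 'Raspberry Pi 4', 'Raspberry Pi Zero 2'].some((supported) => model.includes(supported))
}
```

On a non-Pi machine the file simply does not exist, so the `catch` branch doubles as the platform check.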
…on failure gracefully

The `pigpio` JS wrapper exports functions even if the underlying C library fails to initialize (e.g. on macOS or non-Pi Linux without root), causing a hard crash later when those undefined C-bindings are called. This commit explicitly wraps the first call to `pigpio.hardwareRevision()` in a try/catch block to catch the `TypeError: is not a function`. This prevents the application from crashing and instead cleanly bypasses the GPIO service as requested by maintainers.

AI-assisted-by: Gemini 3.1 Pro
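The probe described above can be sketched as a small helper: `pigpio.hardwareRevision()` is a real pigpio export, and calling it throws a `TypeError` when the underlying C bindings never initialised. The helper name is illustrative.

```javascript
// Hypothetical sketch: probe whether pigpio's C bindings actually initialised.
// If the native library failed to load, the first call into it throws.
function gpioIsUsable (pigpio) {
  try {
    pigpio.hardwareRevision()
    return true
  } catch {
    return false
  }
}
```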
… JSDoc imports

Replaces the temporary `any` types with `typeof import(...)` for the BLE modules (`hci-socket` and `ble-host`). This restores strong type checking for the class properties while keeping the actual module loading deferred to runtime, satisfying the requirement to lazy-load these modules only on supported platforms (Linux) without losing static analysis benefits.
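The `typeof import(...)` JSDoc pattern keeps the property strongly typed while the module itself still loads at runtime. A minimal sketch, with the class shape and `loaded` getter invented for illustration:

```javascript
// Hypothetical sketch of the JSDoc typing pattern described above: the type is
// resolved statically, the module is only imported when open() is called.
class BleManager {
  /** @type {typeof import('hci-socket') | undefined} */
  #hciSocket

  async open () {
    // Deferred, runtime-only load; rejects on platforms without the module
    this.#hciSocket = await import('hci-socket')
  }

  get loaded () {
    return this.#hciSocket !== undefined
  }
}
```

Static analysis sees the full module type, but no native code is touched until `open()` runs.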
…T+ initialization attempts

Relocates the `simulateWithoutHardware` check from individual BLE peripheral setup() functions to the `createBlePeripheral()` and `createAntPeripheral()` functions in PeripheralManager. This prevents BLE and ANT+ peripherals from being created at all when hardware simulation is enabled, rather than catching rejected promises after initialization has already started.

Removes now-unnecessary .catch() handlers
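The relocated guard might look like this sketch: the factory returns early instead of constructing a peripheral whose `setup()` would later reject. The return shape and warning text are illustrative, not the PR's actual code.

```javascript
// Hypothetical sketch of the guard described above: when simulation is
// enabled, the peripheral is never created at all.
function createBlePeripheral (config) {
  if (config.simulateWithoutHardware) {
    console.warn('PeripheralManager: hardware simulation enabled, skipping BLE peripheral')
    return null
  }
  // ...real peripheral construction would go here
  return { type: config.bleMode }
}
```

Because creation never starts, there is no rejected promise to `.catch()`, which is why those handlers could be removed.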
@JaapvanEkris JaapvanEkris added this to the 0.9.8 milestone Apr 6, 2026
@JaapvanEkris (Owner)

I'm a bit puzzled here. I see a lot of changes, just to essentially comment out the peripheralmanager and gpio. Why such an elaborate approach?

@DXCanas (Author) commented Apr 6, 2026

The approach became more elaborate with iterations on the PR, but started off more-or-less simply:
876eb69

It's been a while now, so the memory is no longer fresh. But from what I recall, there were many chunks of code to comment out, I kept hitting more and more errors as I made my way through, and it was a pain to pull changes straight onto the Pi after working on my laptop (or vice versa).

So I started by just catching the errors and then had Claude come in to refine, and then introduced new features for the other use cases described in #194

@JaapvanEkris (Owner) commented Apr 7, 2026

I'm not too happy about spreading this simulation/disabling logic across many parts of the code. It makes debugging tough.

As you are already conditionally loading stuff, why not load dummy modules? In fact, that may be a nice abstraction (as gpio might load the replay functionality that is currently commented out in server.js), and still provide decent feedback (the peripheralmanager needs BLE to respond to changes initiated by the GUI).

By doing so, you can introduce two config options: one for simulated gpio (replacing gpio with replay) and one for simulating BLE and ANT+. A trick I also use is to listen to metrics and report (80 + 0.5 * Power) as heartrate. I would keep MQTT in there as it allows simulating commands. You could even use existing parameters: gpiopin could be set to "sim", as would the BLE type.
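The heart-rate trick mentioned above is a one-liner. A minimal sketch, assuming the metrics object exposes a `cyclePower` field (that field name is an assumption for illustration):

```javascript
// Sketch of the simulation trick: derive a plausible heart rate from power,
// so the HRM peripheral has something to broadcast without a real sensor.
function simulatedHeartrate (metrics) {
  return Math.round(80 + 0.5 * metrics.cyclePower)
}
```

So 200 W of power would be reported as 180 BPM, keeping the value in a realistic range while still tracking effort.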

@Abasz (Collaborator) commented Apr 7, 2026

I disagree with Jaap. I think this is not a lot of change. It seems like a lot because there is a lot of logic that relates to logging (whether it provides benefit or not is up for debate) and error handling, which was missing in the past because the hard assumption was that we have these HW modules (and when these modules were missing the app would have crashed; even the unit tests were throwing an hci-socket missing error for me because the dev container does not have bluetooth installed, which can be fixed of course). Also I don't see how this makes debugging any more difficult: basically you have a flag in the settings (that defaults to false) that disables all hardware.

Having mock modules would actually be more complicated in my view and would require more change. We would need to conditionally load those and maintain their public interfaces manually, which is a lot of work for little benefit (please see more on that below).

I think the long-term goal is not to test the client by running the back end with mock BLE and ANT+ that mimic the behaviour. This is just added complexity and we do not need it. The client's responsibility ends at the level of the WebSocket command: it should send the appropriate command and react to things coming from the WS. The rest is backend stuff, which we are not testing when the client is being worked on. Having to launch the back end for client development is rather bad design (though currently it is a necessity to some extent). The better approach, which we should be using in the long term (once I have done the TS/vite/vitest migration, which I have started and seems simpler than I thought), is to mock the WebSocket, catch the commands there, and provide mock responses; but this belongs in integration-style tests, not on the unit-test level.

By doing so, you can introduce two config options: one for simulated gpio (replacing gpio with replay) and one for simulating BLE and ANT+.

For client development this does not add any benefit, as we should not be testing with actual BLE and ANT+ simulation on the back end. It's simply not necessary. For back-end development this could be necessary, I agree. But please bear in mind that vitest supports module import mocking, which would solve all these problems. Mocking can be driven by flags; it does not even need to be a config flag, it can be an env variable or a command line argument (like DEBUG or SIMULATE).

As a side note, every time I need to run a simulation I am reminded that the comment-out approach is not very user friendly (one needs to make sure that the commenting out is not accidentally committed). We could drive the simulation with an env variable or a command line argument (including the time delay, the file name, etc.). This would allow creating an npm run TARGET for a simple start.

@JaapvanEkris (Owner)

I disagree with Jaap. I think this is not a lot of change. It seems like a lot because there is a lot of logic that relates to logging (whether it provides benefit or not is up for debate) and error handling, which was missing in the past because the hard assumption was that we have these HW modules (and when these modules were missing the app would have crashed; even the unit tests were throwing an hci-socket missing error for me because the dev container does not have bluetooth installed, which can be fixed of course).

Fair point

Also I don't see how this makes debugging any more difficult: basically you have a flag in the settings (that defaults to false) that disables all hardware.

It is another thing to check if something goes wrong.

Having mock modules would actually be more complicated in my view and would require more change. We would need to conditionally load those and maintain their public interfaces manually, which is a lot of work for little benefit (please see more on that below).

All managers share the same interface, so:

'use strict'
/**
 * @copyright [OpenRowingMonitor]{@link https://github.com/JaapvanEkris/openrowingmonitor}
 *
 * @file This manager creates the different Bluetooth Low Energy (BLE), ANT+ and MQTT Peripherals and allows
 * switching between them
 */
import { EventEmitter } from 'node:events'

import log from 'loglevel'

/**
 * @param {Config} config
 */
export function createPeripheralManager (config) {
  const emitter = new EventEmitter()

  /**
   * This function handles all incoming commands. As all commands are broadcast to all managers, we need to filter here what is relevant
   * for the peripherals and what is not
   *
   * @param {Command} commandName Name of the command to be executed by the commandhandler
   * @param {unknown} data Data for executing the command
   *
   * @see {@link https://github.com/JaapvanEkris/openrowingmonitor/blob/main/docs/Architecture.md#command-flow|The command flow documentation}
   */
  /* eslint-disable-next-line no-unused-vars -- data is irrelevant here, but it is a standardised interface */
  async function handleCommand (commandName, data) {
    switch (commandName) {
      case 'updateIntervalSettings':
        break
      case 'start':
        break
      case 'startOrResume':
        break
      case 'pause':
        break
      case 'stop':
        break
      case 'reset':
        break
      case 'switchBlePeripheralMode':
        break
      case 'switchAntPeripheralMode':
        break
      case 'switchHrmMode':
        break
      case 'refreshPeripheralConfig':
        break
      case 'upload':
        break
      case 'shutdown':
        break
      default:
        log.error(`PeripheralManager: Received unknown command: ${commandName}`)
    }
  }

  /**
   * @param {Metrics} metrics
   */
  /* eslint-disable-next-line no-unused-vars -- metrics are intentionally ignored in this stub */
  function notifyMetrics (metrics) {
  }

  return Object.assign(emitter, {
    handleCommand,
    notifyMetrics
  })
}

So the default manager template would be a valid replacement.

I think the long-term goal is not to test the client by running the back end with mock BLE and ANT+ that mimic the behaviour. This is just added complexity and we do not need it. The client's responsibility ends at the level of the WebSocket command: it should send the appropriate command and react to things coming from the WS. The rest is backend stuff, which we are not testing when the client is being worked on. Having to launch the back end for client development is rather bad design (though currently it is a necessity to some extent). The better approach, which we should be using in the long term (once I have done the TS/vite/vitest migration, which I have started and seems simpler than I thought), is to mock the WebSocket, catch the commands there, and provide mock responses; but this belongs in integration-style tests, not on the unit-test level.

I've been thinking about that. In essence, we want to be able to replay a RowingData file, and push it to the GUI or the peripherals?

By doing so, you can introduce two config options: one for simulated gpio (replacing gpio with replay) and one for simulating BLE and ANT+.

For client development this does not add any benefit, as we should not be testing with actual BLE and ANT+ simulation on the back end. It's simply not necessary. For back-end development this could be necessary, I agree. But please bear in mind that vitest supports module import mocking, which would solve all these problems. Mocking can be driven by flags; it does not even need to be a config flag, it can be an env variable or a command line argument (like DEBUG or SIMULATE).

Looking forward to that.

As a side note, every time I need to run a simulation I am reminded that the comment-out approach is not very user friendly (one needs to make sure that the commenting out is not accidentally committed). We could drive the simulation with an env variable or a command line argument (including the time delay, the file name, etc.). This would allow creating an npm run TARGET for a simple start.

It depends. In engine development I have two modes:

  • Small tests where I use known recordings and see how the engine behaves. There the current approach actually works well, as there is no command line to construct: I simply start ORM and it starts simulating.
  • Large-scale tests, which are started as a separate simulator process, as it runs through all recordings of the past 4 years and records results (more of a statistical approach). This is the engine bolted onto a special JS script, and it takes over 24 hours to get results.

I'm willing to accept this PR provided the point of handing over the config import is resolved. I want to prevent side-loading the config, partly because it will keep generating error messages each time it is called, and partly because we use a central config object whose injection allows it to communicate config changes (side-loading breaks this).

@Abasz (Collaborator) commented Apr 8, 2026

I've been thinking about that. In essence, we want to be able to replay a RowingData file, and push it to the GUI or the peripherals?
So the default manager template would be a valid replacement.

I will still push back on this, but it is possible that I don't understand the purpose, or what this mocking adds compared to the current disabling idea. It is possible that we use the term "test" differently. Is it possible that you mean an e2e test, where you manually click things and so on? If yes, I think we are not on the same page :D That is non-deterministic, and I want to move to test automation, avoiding the need for the real hardware as much as possible, because coding on the Rpi is not the most pleasant experience.

If I had to summarise the purpose of this PR in one sentence: it allows the development of ORM bypassing the Rpi (@DXCanas to chip in if he disagrees with the summary).

Some more reasoning:

On the interface of the peripherals: I agree they are simple. But wouldn't we need the same logic to decide whether to load them, just like when we disable the hardware? So it would be the same in terms of one more thing that could go wrong and needs checking. In this sense the current idea is identical to the proposed version.

If you want to push stuff to the GUI or the peripherals and do a kind of e2e test, I don't understand why you would want to stub/mock the peripherals. Actually you would need those so you can see the metrics on your watch. But for that you would need an Rpi anyway.

For running tests (unit or integration) and session simulation in e.g. CI (or anywhere that is not an Rpi, like a GitHub codespace or a Windows machine), you do not need the stubbing/mocking; you simply need to disable this functionality. For simulation on its own, where you care only about the metrics (or the logs) generated, you do not need the hardware elements.

Testing interactions from the GUI towards the peripherals: if it's e2e, you will want the full hardware present (i.e. an Rpi would be needed); if it's isolated unit/integration tests, I would want to be able to run those without spinning up the back end (the exit point from the GUI is the WS, which can be mocked/stubbed for testing). I want auto rebuild, reload of the browser and so on (this is provided by vite).

I agree with changing the config option to a command line argument or env var (essentially I think your 24h simulation script would be simpler if you could pass the file name via a command line argument, but I don't see your suite so this is just a guess). Anyway, this reaches beyond this PR; I am happy to draw something up for it.

For this PR, what I would change is that I would use a command line flag called --noHardware or whatever, which would disable the hardware (at some point we can even go a step further and add options like all, gpio or ble, but I would leave this outside of this PR). So no specific config, just the flag, which when not present defaults to false and triggers normal operation. Now I would need to think about how this is best fed into the config object cleanly (at the config object creation level, or directly as a global query on the module; leaning towards the first as it would allow better unit testing), but knowing the flexibility of node it should be no problem.
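The proposal above can be sketched in a few lines. Note that neither the --noHardware flag nor the merge step exists in ORM yet; both names are hypothetical, purely illustrating the "feed the flag into the config at creation" idea.

```javascript
// Hypothetical sketch of the proposed command-line flag.
function hardwareDisabled (argv = process.argv) {
  return argv.includes('--noHardware')
}

// Merge the flag into the config object at creation time, as suggested above,
// so downstream code only ever reads the config (which keeps it unit-testable).
function withHardwareFlag (config, argv = process.argv) {
  return { ...config, simulateWithoutHardware: hardwareDisabled(argv) }
}
```

Passing `argv` as a parameter (defaulting to `process.argv`) is what allows the behaviour to be tested without actually launching the process with the flag.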
