
load-from-file: Updates to improve reliability of large .parm loads#112

Open
vielster wants to merge 1 commit into dronecan:master from allocortech:fetch-and-load-improvements

Conversation

@vielster

The present implementation of do_load_from_file has a few performance issues.

  1. All ConfigGetSet() requests are queued as fast as possible, with
    no time for the downstream system to respond before the
    next transaction is sent.
  2. Even though the index is known for each parameter in the table,
    the name-based request for setting is always used. This causes
    very heavy bus loading, especially when combined with (1).
This commit makes the following core updates:

  1. Stores the index in the param set such that it can be utilized
    later for setting parameters.
  2. Queues the request transactions at 20ms increments to give the
    downstream unit a chance to process the request before sending
    the next.

Additionally in this commit, the fetch interval between parameters was reduced from 100ms to 10ms.

The 10ms and 20ms settings could be added to the application config to allow per-user tuning, but for now, all tests show these are sufficient values.
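A minimal sketch of the two core updates, storing the fetched index alongside each parameter and pacing the set requests at 20ms increments; the names here are illustrative, not the actual dronecan_gui_tool code:

```python
import time
from dataclasses import dataclass

@dataclass
class Param:
    index: int   # index recorded when the parameter table was fetched
    name: str
    value: float

def queue_set_requests(params, send_request, interval_s=0.020):
    """Send one set request per parameter, spaced interval_s apart,
    so the downstream node can respond before the next request arrives.
    send_request is a hypothetical callback standing in for the real
    transaction queue."""
    timestamps = []
    for p in params:
        # Index-based addressing: the stored index is used instead of
        # the name, keeping each request small.
        send_request(index=p.index, value=p.value)
        timestamps.append(time.monotonic())
        # Pacing gap between transactions (20ms by default).
        time.sleep(interval_s)
    return timestamps
```

Pacing requests rather than flooding the queue trades a small amount of total load time for a much better chance that each downstream reply is actually received and matched.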

