docs: add query exporter monitoring documentation #112
base: main
Conversation
Pull request overview
This PR adds comprehensive documentation for the query-exporter service, which provides PostgreSQL database monitoring and metrics exposure for Prometheus integration. The documentation covers the available metrics, configuration options, and operational guidance for monitoring the OpenRemote database.
Key changes:
- Added complete query-exporter monitoring documentation covering available metrics (bloat detection, autovacuum, datapoint performance, database health)
- Documented configuration including environment variables, thresholds, and query intervals
- Provided operational guidance including Prometheus integration, troubleshooting, and performance tuning
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 6 comments.
```yaml
scrape_configs:
  - job_name: 'openremote-postgres'
    static_configs:
      - targets: ['localhost:9560']
```
Copilot AI · Jan 6, 2026
The target 'localhost:9560' will only work if Prometheus is running on the same host as the query-exporter service. For Docker deployments (which is the primary deployment method mentioned throughout the documentation), this should use the Docker service name instead. Consider adding a note about using the service name (e.g., 'query-exporter:9560') when Prometheus runs in the same Docker network, or providing examples for both scenarios.
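For reference, a minimal sketch covering both scenarios; the `query-exporter` service name used for the in-network target is an assumption and should match whatever service name the Docker Compose profile actually defines:

```yaml
scrape_configs:
  # Prometheus running directly on the host that publishes port 9560
  - job_name: 'openremote-postgres'
    static_configs:
      - targets: ['localhost:9560']

  # Prometheus running as a container on the same Docker network;
  # 'query-exporter' is an assumed Compose service name
  - job_name: 'openremote-postgres-docker'
    static_configs:
      - targets: ['query-exporter:9560']
```

Only one of the two jobs would normally be kept, depending on where Prometheus runs.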
> To modify queries or add new metrics:
>
> 1. Edit the `config.yaml` file located in the `query-exporter` configuration directory (by default, this is `/deployment/query-exporter/config.yaml` which is mounted as a Docker volume at the container's `/config/config.yaml` path—see your `profile/deploy.yml` for the exact path).
Copilot AI · Jan 6, 2026
This sentence is overly complex and difficult to parse. The nested parenthetical clauses and multiple path references make it hard to understand. Consider breaking this into multiple sentences or a bulleted list that clearly separates the host path, container path, and reference to the deploy.yml file.
Suggested change:

> 1. Edit the `config.yaml` file in the `query-exporter` configuration directory.
>    - Default host path: `/deployment/query-exporter/config.yaml`
>    - Container path (Docker volume mount): `/config/config.yaml`
>    - For the exact host path in your environment, see the `query-exporter` volume mapping in `profile/deploy.yml`.
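To make the path relationship concrete, a hypothetical shape of the volume mapping in `profile/deploy.yml` might look like the sketch below; the service definition and exact host path are assumptions and may differ from the actual compose file:

```yaml
services:
  query-exporter:
    volumes:
      # host path (left) mounted at the container path the exporter reads (right);
      # the read-only flag is an assumption
      - ./deployment/query-exporter/config.yaml:/config/config.yaml:ro
```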
> - **Increase query interval** - Change from 300s to 600s or higher in `config.yaml`
> - **Limit to specific schemas** - Modify queries to target specific schemas only
> - **Schedule off-peak runs** - Use `schedule` option instead of `interval`
> - **Reduce sample size** - Lower the datapoint query sample size (default: 100)
Copilot AI · Jan 6, 2026
The sample size value mentioned here (100) should be documented in the "Query Intervals" section above for consistency. Currently, the query intervals section (lines 61-66) documents time intervals but doesn't mention sample sizes. Consider adding a note about the sample size to that section or creating a "Query Parameters" subsection.
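As an illustration of the interval-versus-schedule trade-off (and of where a less frequent run would be configured), a query entry in `config.yaml` might be tuned as below; the query name, database alias, metric, and SQL are placeholders rather than the shipped configuration:

```yaml
queries:
  datapoint-count:                # placeholder query name
    databases: [openremote]       # assumed database alias
    metrics: [pg_datapoint_count]
    # Option A: poll less often (the documentation above cites a 300s default)
    interval: 600s
    # Option B: replace `interval` with a cron-style `schedule` to run off-peak
    # schedule: "0 3 * * *"
    sql: |
      -- placeholder statement; the table name is an assumption, and the real
      -- query samples the attribute with the most datapoints
      SELECT count(*) AS pg_datapoint_count
      FROM asset_datapoint;
```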
> - `pg_datapoint_query_duration_seconds` - Histogram of execution times for the attribute with most datapoints
> - `pg_datapoint_count` - Total number of datapoints for the top attribute
Copilot AI · Jan 6, 2026
The "Datapoint Query Performance" section mentions "the attribute with most datapoints" and "the top attribute", but it's unclear whether this is determined dynamically or requires configuration. Consider clarifying that this automatically identifies and monitors the attribute with the highest datapoint count.
Suggested change:

> - `pg_datapoint_query_duration_seconds` - Histogram of execution times for the attribute that is automatically identified as having the highest datapoint count
> - `pg_datapoint_count` - Total number of datapoints for the attribute that is automatically identified as having the highest datapoint count
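As a usage example, the histogram can feed a latency alert; the rule below is a sketch with arbitrary threshold, durations, and names, not a project default:

```yaml
groups:
  - name: openremote-datapoint-queries     # example rule group name
    rules:
      - alert: SlowDatapointQueries        # example alert name
        # p95 query latency over the last 5 minutes, derived from the
        # pg_datapoint_query_duration_seconds histogram documented above
        expr: histogram_quantile(0.95, rate(pg_datapoint_query_duration_seconds_bucket[5m])) > 1
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Datapoint query p95 latency above 1s for 15 minutes"
```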
> ### Query Complexity
> - Bloat detection scans `pg_stats` and `pg_class` catalogs (limited to top 50 results)
> - Datapoint performance samples 100 most recent datapoints from the largest attribute
Copilot AI · Jan 6, 2026
This line states "100 most recent datapoints" but line 129 refers to "sample size (default: 100)". These should use consistent terminology - either both refer to it as a "sample" or both as "most recent datapoints". Using "sample size" is more technically accurate since it's a configurable parameter.
Suggested change:

> - Datapoint performance uses a sample size of 100 recent datapoints from the largest attribute (configurable)
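To show what the "top 50" cap and a schema filter look like in a query entry, here is a trimmed-down sketch; the metric, schema name, and SQL are illustrative and deliberately simpler than the actual bloat query:

```yaml
metrics:
  pg_table_size_bytes:
    type: gauge
    description: Total on-disk size of the table
    labels: [schema, table]

queries:
  table-size-top50:               # illustrative query, not the shipped bloat check
    databases: [openremote]       # assumed database alias
    metrics: [pg_table_size_bytes]
    interval: 600s
    sql: |
      SELECT n.nspname AS "schema",
             c.relname AS "table",
             pg_total_relation_size(c.oid) AS pg_table_size_bytes
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
      WHERE c.relkind = 'r'
        AND n.nspname = 'openremote'  -- assumed schema name; adjust as needed
      ORDER BY pg_total_relation_size(c.oid) DESC
      LIMIT 50;
```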
> - `20` - Page header size in bytes
> - `12` - Index header overhead in bytes
> - `4` - Item pointer size in bytes
> - `23` - Tuple header size for PostgreSQL 14+ (Linux)
Copilot AI · Jan 6, 2026
The tuple header size of 23 bytes is documented as specific to "PostgreSQL 14+ (Linux)" but the documentation doesn't specify which PostgreSQL version OpenRemote uses or whether this value might differ on other platforms. Consider adding a note about verifying compatibility if users are running different PostgreSQL versions or operating systems.
Suggested change:

> - `23` - Typical tuple header size for PostgreSQL 14+ on Linux (this value may differ for other PostgreSQL versions or operating systems; verify for your deployment)
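For context, constants like these typically enter a heap bloat estimate along the following lines, where 23 is the tuple header, 20 the page header, 4 the item pointer, 8192 the default block size, `avg_row_width` comes from `pg_stats`, and `reltuples`/`relpages` come from `pg_class`. This is the general estimation pattern, not necessarily the exact formula in the shipped query:

```math
\begin{aligned}
\text{est\_tuple\_size} &\approx 23 + \text{avg\_row\_width} \\
\text{tuples\_per\_page} &\approx \left\lfloor \frac{8192 - 20}{4 + \text{est\_tuple\_size}} \right\rfloor \\
\text{expected\_pages} &\approx \left\lceil \frac{\text{reltuples}}{\text{tuples\_per\_page}} \right\rceil \\
\text{bloat\_pages} &\approx \text{relpages} - \text{expected\_pages}
\end{aligned}
```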