Status: Open
Labels: enhancement (New feature or request)
Description
Investigate OpenTelemetry integration for collecting user metrics and observability data.
Goals
- Understand usage patterns (which runtimes, commands, versions are popular)
- Identify performance bottlenecks in real-world usage
- Track errors and failure modes to improve reliability
- Enable data-driven prioritization of features and fixes
Areas to Investigate
Metrics
- Command usage frequency (install, use, list, reshim, etc.)
- Runtime provider usage (node, python, future providers)
- Version distribution (which versions users install/use)
- Shim invocation latency
- Error rates by command/operation
Tracing
- End-to-end timing for install operations (download, extract, configure)
- Shim resolution path (cache hit/miss, local vs global config)
- Network request timing (version lists, downloads)
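The nested timing these trace points describe can be sketched with a plain context manager (a stand-in for real OpenTelemetry spans, which would additionally carry parent/child links and attributes):

```python
import time
from contextlib import contextmanager

# Hypothetical span recorder: collects (name, duration_seconds) tuples.
# An OpenTelemetry tracer would emit real spans with parent/child context.
spans: list[tuple[str, float]] = []

@contextmanager
def span(name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((name, time.perf_counter() - start))

# End-to-end install timing broken into the phases listed above.
with span("install"):
    with span("download"):
        time.sleep(0.01)  # stand-in for a network request
    with span("extract"):
        pass
    with span("configure"):
        pass
```

Because child spans close before their parent, the "install" duration bounds the sum of its phases, which is exactly the breakdown needed to spot bottlenecks.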
Error Reporting
- Structured error collection with context
- Stack traces for unexpected failures
- Environment info (OS, architecture, dtvem version)
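A minimal sketch of such a structured error record, assuming a placeholder version constant (the stack trace and environment fields come from the standard library; no paths or usernames are captured):

```python
import platform
import traceback
from dataclasses import dataclass, field

DTVEM_VERSION = "0.0.0"  # placeholder; a real build would embed its version

@dataclass
class ErrorReport:
    """Structured error record with environment context and no PII
    (no paths, usernames, or hostnames)."""
    message: str
    stack: str
    os: str = field(default_factory=platform.system)
    arch: str = field(default_factory=platform.machine)
    version: str = DTVEM_VERSION

def capture(exc: BaseException) -> ErrorReport:
    stack = "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    )
    return ErrorReport(message=str(exc), stack=stack)

try:
    raise RuntimeError("download failed")
except RuntimeError as exc:
    report = capture(exc)
```

A real implementation would also need to scrub paths that can leak into exception messages before the record leaves the process.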
Privacy Considerations
- Opt-in only - telemetry is disabled by default; users must explicitly enable it
- Transparency - document exactly what is collected
- No PII - no paths, usernames, or identifiable information
- Local-first option - ability to export metrics locally without sending anywhere
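The local-first option could be as simple as serializing collected metrics to a file the user controls (a hypothetical sketch; the filename and schema are illustrative, not decided):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical local-first exporter: writes collected metrics to a local JSON
# file instead of any network backend. Nothing leaves the machine.
def export_local(metrics: dict, directory: Path) -> Path:
    out = directory / "dtvem-metrics.json"
    out.write_text(json.dumps(metrics, indent=2))
    return out

tmp = Path(tempfile.mkdtemp())
path = export_local({"command.invocations": {"install": 2, "list": 1}}, tmp)
exported = json.loads(path.read_text())
```

This also gives users a way to inspect exactly what would be sent before ever opting in to a remote backend.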
Implementation Questions
- Which OpenTelemetry SDK components are needed? (metrics, tracing, logging)
- What backend would receive the data? (self-hosted, cloud service)
- How to minimize binary size impact?
- How to handle offline scenarios?
- What's the opt-in/opt-out UX? (`dtvem config telemetry enable/disable`)
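One possible shape for the opt-in toggle behind a `dtvem config telemetry enable/disable` command (a sketch under the assumption of a JSON config file; the key name and location are hypothetical):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical config toggle backing `dtvem config telemetry enable/disable`.
# Telemetry defaults to disabled until the user explicitly opts in.
def set_telemetry(config_path: Path, enabled: bool) -> None:
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config["telemetry"] = enabled
    config_path.write_text(json.dumps(config))

def telemetry_enabled(config_path: Path) -> bool:
    if not config_path.exists():
        return False  # opt-in: a missing config means no telemetry
    return json.loads(config_path.read_text()).get("telemetry", False)

cfg = Path(tempfile.mkdtemp()) / "config.json"
default_state = telemetry_enabled(cfg)  # False before any opt-in
set_telemetry(cfg, True)
opted_in = telemetry_enabled(cfg)
```

Treating an absent or unset key as "disabled" keeps the opt-in guarantee even for fresh installs and offline machines.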
Related
- #92 refactor: standardize error handling and structured logging (telemetry would benefit from structured errors)