26 changes: 26 additions & 0 deletions docs/platforms/ruby/common/configuration/options.mdx
@@ -326,6 +326,32 @@ config.trace_ignore_status_codes = [404, (502..511)]

</SdkOption>

<SdkOption name="capture_queue_time" type="Boolean" defaultValue="true">

Automatically capture how long requests wait in the web server queue before processing begins. The SDK reads the `X-Request-Start` header set by reverse proxies (Nginx, HAProxy, Heroku) and attaches queue time to transactions as `http.queue_time_ms`.

This helps identify when requests are delayed due to insufficient worker threads or server capacity, which is especially useful under load.

To disable queue time capture:

```ruby
config.capture_queue_time = false
```

Your reverse proxy must set the `X-Request-Start` header. For example:

**Nginx:**

```nginx
proxy_set_header X-Request-Start "t=${msec}";
```

**HAProxy:**

```haproxy
http-request set-header X-Request-Start t=%Ts%ms
```

</SdkOption>

<SdkOption name="instrumenter" type="Symbol" defaultValue=":sentry">

The instrumenter to use, `:sentry` or `:otel` for [use with OpenTelemetry](../../tracing/instrumentation/opentelemetry).
@@ -20,5 +20,7 @@ Spans are instrumented for the following operations within a transaction:
- includes common database systems such as Postgres and MySQL
- Outgoing HTTP requests made with `Net::HTTP`
- Redis operations
- Queue time for requests behind reverse proxies (Nginx, HAProxy, Heroku)
  - Requires the `X-Request-Start` header to be set by the reverse proxy

Spans are only created within an existing transaction. If you're not using any of the supported frameworks, you'll need to <PlatformLink to="/tracing/instrumentation/custom-instrumentation/">create transactions manually</PlatformLink>.
@@ -22,6 +22,8 @@ Sentry supports adding arbitrary custom units, but we recommend using one of the

<Include name="custom-measurements-units-disclaimer.mdx" />

<PlatformContent includePath="performance/queue-time-capture" />

## Supported Measurement Units

Units augment measurement values by giving meaning to what otherwise might be abstract numbers. Adding units also allows Sentry to offer controls - unit conversions, filters, and so on - based on those units. For values that are unitless, you can supply an empty string or `none`.
278 changes: 278 additions & 0 deletions docs/platforms/ruby/guides/good_job/index.mdx
@@ -0,0 +1,278 @@
---
title: GoodJob
description: "Learn about using Sentry with GoodJob, an ActiveJob adapter for Postgres-based job queuing."
---

The GoodJob integration adds support for [GoodJob](https://github.com/bensheldon/good_job), a multithreaded, Postgres-based ActiveJob backend for Ruby on Rails. This integration provides automatic error capture with enriched context, performance monitoring with execution time and queue latency tracking, and cron monitoring for scheduled jobs.

## Install

Install `sentry-good_job`:

```bash
gem install sentry-good_job
```

Or add it to your `Gemfile`:

```ruby
gem "sentry-ruby"
gem "sentry-good_job"
```

## Configure

### Automatic Setup with Rails

If you're using Rails and have GoodJob in your dependencies, the integration will be enabled automatically when you initialize the Sentry SDK.

```ruby {filename:config/initializers/sentry.rb}
Sentry.init do |config|
  config.dsn = "___PUBLIC_DSN___"
  config.breadcrumbs_logger = [:active_support_logger, :http_logger]

  # Set traces_sample_rate to 1.0 to capture 100%
  # of transactions for tracing.
  config.traces_sample_rate = 1.0
end
```

### Manual Setup

For non-Rails applications or when you need more control, you can configure the integration explicitly:

```ruby
require "sentry-ruby"
require "sentry-good_job"

Sentry.init do |config|
  config.dsn = "___PUBLIC_DSN___"
  config.traces_sample_rate = 1.0

  # Configure GoodJob-specific options
  config.good_job.report_after_job_retries = false
  config.good_job.include_job_arguments = false
  config.good_job.auto_setup_cron_monitoring = true
end
```

<Alert>
Make sure that `Sentry.init` is called before GoodJob workers start processing
jobs. For Rails applications, placing the initialization in
`config/initializers/sentry.rb` ensures proper setup.
</Alert>

## Verify

To verify that the integration is working, create a job that raises an error:

```ruby {filename:app/jobs/debug_job.rb}
class DebugJob < ApplicationJob
  queue_as :default

  def perform
    1 / 0 # Intentional error
  end
end
```

Enqueue the job:

```ruby
DebugJob.perform_later
```

When the job is processed by GoodJob, the error will be captured and sent to Sentry. You'll see:

- An error event with the exception details
- Enriched context including job name, queue name, and job ID
- A performance transaction showing job execution time and queue latency

View the error in the **Issues** section and the performance data in the **Performance** section of [sentry.io](https://sentry.io).

## Features

### Error Capture

The integration automatically captures exceptions raised during job execution:

- Exceptions are captured with full context (job name, queue, arguments if enabled, job ID)
- Trace propagation across job executions
- Configurable error reporting (after retries, only dead jobs, etc.)

### Performance Monitoring

Job execution is automatically instrumented with performance monitoring:

- **Execution time**: Time spent executing the job
- **Queue latency**: Time job spent waiting in the queue before execution
- **Trace propagation**: Jobs maintain trace context from the code that enqueued them

Transactions are created with the name `queue.active_job/<JobClassName>` and include:

- A span for the job execution
- Queue latency measurement
- Breadcrumbs for job lifecycle events
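The naming convention above can be expressed as a helper (illustrative only — the SDK builds this name internally; `DebugJob` is the class from the Verify section):

```ruby
# Maps an ActiveJob class name to the transaction name the
# integration uses, per the convention described above.
def transaction_name(job_class_name)
  "queue.active_job/#{job_class_name}"
end

transaction_name("DebugJob") # => "queue.active_job/DebugJob"
```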

### Cron Monitoring

The integration provides two ways to monitor scheduled jobs:

#### Automatic Setup

GoodJob cron configurations are automatically detected and monitored:

```ruby {filename:config/initializers/good_job.rb}
Rails.application.configure do
  config.good_job.cron = {
    example_job: {
      cron: "0 0 * * *", # Daily at midnight
      class: "ExampleJob"
    }
  }
end
```

With `auto_setup_cron_monitoring` enabled (default), Sentry will automatically create cron monitors for all jobs in your GoodJob cron configuration. Monitor slugs are generated from the cron key.
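The exact slug scheme is an implementation detail; a plausible sketch (this helper and its rules are an assumption, not the SDK's actual code) would normalize the cron key like this:

```ruby
# Hypothetical slug generation from a GoodJob cron key: downcase,
# collapse non-alphanumeric runs to hyphens, trim edge hyphens.
def monitor_slug(cron_key)
  cron_key.to_s.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
end

monitor_slug(:example_job) # => "example-job"
```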

<Alert>
Cron monitors are created when your application starts and the GoodJob
configuration is loaded. You don't need to create monitors manually in Sentry.
</Alert>

#### Manual Setup

For more control over cron monitoring, use the `sentry_cron_monitor` method in your job:

```ruby {filename:app/jobs/scheduled_cleanup_job.rb}
class ScheduledCleanupJob < ApplicationJob
  include GoodJob::ActiveJobExtensions::Crons

  sentry_cron_monitor(
    schedule: { cron: "0 2 * * *" }, # 2 AM daily
    timezone: "America/New_York"
  )

  def perform
    # Cleanup logic
  end
end
```

The `sentry_cron_monitor` method accepts:

- `schedule`: Cron schedule hash (e.g., `{ cron: "0 * * * *" }`)
- `timezone`: Timezone for the schedule (optional, defaults to UTC)

<Alert level="info">
If you use manual cron monitoring with `sentry_cron_monitor`, set
`auto_setup_cron_monitoring` to `false` to avoid duplicate monitors.
</Alert>

View your monitored jobs at [sentry.io/insights/crons](https://sentry.io/insights/crons/).

## Options

Configure the GoodJob integration with these options:

### `report_after_job_retries`

<SdkOption name="report_after_job_retries" type="Boolean" defaultValue="false">

Only report errors to Sentry after all retry attempts have been exhausted.

When `true`, errors are only sent to Sentry after the job has failed its final retry attempt. When `false`, errors are reported on every failure, including during retries.

```ruby
Sentry.init do |config|
  config.dsn = "___PUBLIC_DSN___"
  config.good_job.report_after_job_retries = true
end
```

</SdkOption>

### `report_only_dead_jobs`

<SdkOption name="report_only_dead_jobs" type="Boolean" defaultValue="false">

Only report errors for jobs that cannot be retried (dead jobs).

When `true`, errors are only sent to Sentry for jobs that have permanently failed and won't be retried. This is stricter than `report_after_job_retries`.

```ruby
Sentry.init do |config|
  config.dsn = "___PUBLIC_DSN___"
  config.good_job.report_only_dead_jobs = true
end
```

</SdkOption>

### `include_job_arguments`

<SdkOption name="include_job_arguments" type="Boolean" defaultValue="false">

Include job arguments in error context sent to Sentry.

When `true`, job arguments are included in the event's extra context. **Warning**: This may expose sensitive data. Only enable this if you're certain your job arguments don't contain PII or sensitive information.

```ruby
Sentry.init do |config|
  config.dsn = "___PUBLIC_DSN___"
  config.good_job.include_job_arguments = true
end
```

<Alert level="warning" title="Sensitive Data">
Job arguments may contain personally identifiable information (PII) or other
sensitive data. Only enable this option if you've reviewed your job arguments
and are certain they don't contain sensitive information, or if you've
configured [data scrubbing](/platforms/ruby/data-management/sensitive-data/)
appropriately.
</Alert>

</SdkOption>

### `auto_setup_cron_monitoring`

<SdkOption name="auto_setup_cron_monitoring" type="Boolean" defaultValue="true">

Automatically set up cron monitoring by reading GoodJob's cron configuration.

When `true`, the integration scans your GoodJob cron configuration and automatically creates Sentry cron monitors for scheduled jobs.

```ruby
Sentry.init do |config|
  config.dsn = "___PUBLIC_DSN___"
  config.good_job.auto_setup_cron_monitoring = false
end
```

Disable this if you prefer to use manual cron monitoring with the `sentry_cron_monitor` method.

</SdkOption>

### `logging_enabled`

<SdkOption name="logging_enabled" type="Boolean" defaultValue="false">

Enable detailed logging for debugging the integration.

When `true`, the integration logs detailed information about job monitoring, cron setup, and error capture. Useful for troubleshooting but should be disabled in production.

```ruby
Sentry.init do |config|
  config.dsn = "___PUBLIC_DSN___"
  config.good_job.logging_enabled = true # Only for debugging
end
```

</SdkOption>

## Supported Versions

- Ruby: 2.4+
- Rails: 5.2+
- GoodJob: 3.0+
- Sentry Ruby SDK: 5.28.0+
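To pin these constraints in your `Gemfile` (version bounds taken from the list above):

```ruby
gem "sentry-ruby", ">= 5.28.0"
gem "sentry-good_job"
gem "good_job", ">= 3.0"
```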
50 changes: 50 additions & 0 deletions platform-includes/performance/queue-time-capture/ruby.mdx
@@ -0,0 +1,50 @@
## Automatic Queue Time Capture

The Ruby SDK automatically captures queue time for Rack-based applications when the `X-Request-Start` header is present. This measures how long requests wait in the web server queue (e.g., waiting for a Puma thread) before your application begins processing them.

Queue time is attached to transactions as `http.queue_time_ms` and helps identify server capacity issues.

### Setup

Configure your reverse proxy to add the `X-Request-Start` header:

**Nginx:**

```nginx
location / {
    proxy_pass http://your-app;
    proxy_set_header X-Request-Start "t=${msec}";
}
```

**HAProxy:**

```haproxy
frontend http-in
    http-request set-header X-Request-Start t=%Ts%ms
```

**Heroku:** The header is automatically set by Heroku's router.

### How It Works

The SDK:

1. Reads the `X-Request-Start` header timestamp from your reverse proxy
2. Calculates the time difference between the header timestamp and when the request reaches your application
3. Subtracts `puma.request_body_wait` (if present) to exclude time spent waiting for slow client uploads
4. Attaches the result as `http.queue_time_ms` to the transaction
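The steps above can be sketched in plain Ruby. This is a simplified, hypothetical helper — not the SDK's actual implementation — assuming Nginx's `t=${msec}` format (epoch seconds with millisecond resolution):

```ruby
# Computes queue time in milliseconds from a Rack env, mirroring the
# steps described above.
def queue_time_ms(env, now: Time.now)
  header = env["HTTP_X_REQUEST_START"]
  return nil unless header

  # Nginx's ${msec} is fractional epoch seconds, e.g. "t=1700000000.123"
  start = header.delete_prefix("t=").to_f
  return nil if start <= 0

  elapsed_ms = (now.to_f - start) * 1000.0

  # Puma reports time spent waiting for a slow client's request body
  # in milliseconds; subtract it so uploads don't inflate queue time.
  elapsed_ms -= env["puma.request_body_wait"].to_f

  elapsed_ms.negative? ? nil : elapsed_ms.round(2)
end
```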

### Disable Queue Time Capture

If you don't want queue time captured, disable it in your configuration:

```ruby
Sentry.init do |config|
  config.capture_queue_time = false
end
```

### Viewing Queue Time

Queue time appears in the Sentry transaction details under the "Data" section as `http.queue_time_ms` (measured in milliseconds).
1 change: 1 addition & 0 deletions src/mdx.ts
@@ -268,6 +268,7 @@ export async function getDevDocsFrontMatterUncached(): Promise<FrontMatter[]> {

  const source = await readFile(file, 'utf8');
  const {data: frontmatter} = matter(source);

  return {
    ...(frontmatter as FrontMatter),
    slug: fileName.replace(/\/index.mdx?$/, '').replace(/\.mdx?$/, ''),