VibeReader includes a background job queue system for processing feed updates and cleanup operations asynchronously, improving performance and user experience.
- Database-backed queue: Works with both SQLite and PostgreSQL
- Automatic retries: Failed jobs are retried up to 3 times (configurable)
- Job status tracking: Monitor pending, processing, completed, and failed jobs
- Feed fetching: Update feeds in the background without blocking user requests
- Item cleanup: Automatically remove old feed items based on retention policies
Add these settings to your `.env` file:

```env
# Enable background jobs (0 = disabled, 1 = enabled)
JOBS_ENABLED=0

# Worker sleep interval in seconds (how long to wait when no jobs are available)
JOBS_WORKER_SLEEP=5

# Maximum retry attempts for failed jobs
JOBS_MAX_ATTEMPTS=3

# Days to keep completed/failed job records
JOBS_CLEANUP_DAYS=7

# Feed retention settings
FEED_RETENTION_DAYS=90   # Keep items for 90 days
FEED_RETENTION_COUNT=    # Keep max N items per feed (empty = unlimited)
```

You need two cron jobs for the complete system:
- Scheduler - queues feed refresh jobs (runs every 15-30 minutes):

```cron
*/15 * * * * cd /path/to/vibereader && php scheduler.php
```

- Worker - processes queued jobs (runs every 1-5 minutes):

```cron
*/5 * * * * cd /path/to/vibereader && php worker.php
```

Or run the worker more frequently:

```cron
*/1 * * * * cd /path/to/vibereader && php worker.php
```

For Docker containers, you can add these to your host's crontab:
```cron
# Schedule feed refreshes every 15 minutes
*/15 * * * * docker exec vibereader php /var/www/html/scheduler.php

# Process jobs every 5 minutes
*/5 * * * * docker exec vibereader php /var/www/html/worker.php
```

Run the worker as a background daemon:

```bash
php worker.php --daemon
```

This runs continuously until stopped (Ctrl+C).
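If you want the daemon supervised and restarted automatically, an init system can replace the cron-driven worker. A minimal systemd unit sketch (the unit name, user, and paths are assumptions; VibeReader does not ship this file):

```ini
# /etc/systemd/system/vibereader-worker.service (hypothetical)
[Unit]
Description=VibeReader job queue worker
After=network.target

[Service]
WorkingDirectory=/path/to/vibereader
ExecStart=/usr/bin/php worker.php --daemon
Restart=on-failure
User=www-data

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now vibereader-worker`; with this in place the worker cron entry is unnecessary.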
Run the worker manually when needed:
```bash
php worker.php
```

This processes all available jobs once and exits.
```bash
# Process up to 10 jobs
php worker.php --max-jobs=10

# Custom sleep interval (seconds)
php worker.php --sleep=10

# Run as daemon
php worker.php --daemon

# Combine options
php worker.php --daemon --sleep=5
```

Feed refresh job - updates a feed by fetching the latest content. Automatically queued when:
- User manually refreshes a feed (if `JOBS_ENABLED=1`)
- Background refresh is triggered
Payload:

```json
{
  "feed_id": 123
}
```

Item cleanup job - removes old feed items based on retention policies.
Payload:

```json
{
  "feed_id": 123,         // Optional: specific feed, null for all
  "retention_days": 90,   // Optional: override config
  "retention_count": 100  // Optional: override config
}
```

```http
GET /api/jobs/stats
```

Returns:
```json
{
  "pending": 5,
  "processing": 1,
  "completed": 150,
  "failed": 2
}
```

```http
POST /api/jobs/cleanup
Content-Type: application/x-www-form-urlencoded

feed_id=123&retention_days=30
```

Returns:
```json
{
  "success": true,
  "data": {
    "job_id": 456,
    "message": "Cleanup job queued"
  }
}
```

Add to your crontab to run cleanup daily:

```cron
# Run cleanup at 2 AM daily
0 2 * * * cd /path/to/vibereader && php -r "require 'vendor/autoload.php'; \PhpRss\Queue\JobQueue::push(\PhpRss\Queue\JobQueue::TYPE_CLEANUP_ITEMS, []);"
```

Alternatively, call the cleanup endpoint programmatically or via a scheduled HTTP request.
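A scheduled HTTP request might look like the sketch below. The cookie handling is an assumption: the cleanup endpoint requires authentication and CSRF protection (see the security notes below), so adapt the credentials to your deployment.

```cron
# Hypothetical: queue a nightly cleanup job via the HTTP API.
# $AUTH_COOKIE and the URL are assumptions for your setup.
0 2 * * * curl -s -X POST -b "$AUTH_COOKIE" -d "retention_days=30" http://localhost/api/jobs/cleanup
```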
Use the API endpoint to monitor the job queue:

```bash
curl http://localhost/api/jobs/stats
```

Or query the database directly (the same query works in both SQLite and PostgreSQL):

```sql
SELECT status, COUNT(*) FROM jobs GROUP BY status;
```

The worker automatically cleans up old completed/failed jobs based on `JOBS_CLEANUP_DAYS`. You can also clean up manually:
```php
$deleted = \PhpRss\Queue\JobQueue::cleanup(7); // Remove jobs older than 7 days
```

**Jobs not processing**

- Check that `JOBS_ENABLED=1` in your `.env` file
- Verify both the scheduler and worker are running (check cron logs or the process list)
- Check job status in the database: `SELECT * FROM jobs WHERE status = 'pending'`
- Verify cron jobs are set up correctly (both scheduler and worker)
**Feeds not refreshing**

- Check the scheduler is running: `scheduler.php` must run via cron to queue refresh jobs
- Check the worker is running: `worker.php` must run via cron to process queued jobs
- Check the refresh interval: the default is 15 minutes, so feeds won't refresh if they were fetched more recently
- Check for duplicate jobs: the scheduler skips feeds that already have pending jobs
- Verify the database connection: both scripts need database access
Run manually to test:

```bash
# Queue refresh jobs for feeds that need updating
php scheduler.php

# Process the queued jobs
php worker.php
```

If running in Docker, ensure:
- Both scripts are accessible in the container (mounted volumes)
- Cron jobs use `docker exec vibereader` to run scripts inside the container
- Environment variables are set correctly in `docker-compose.yml`
- Review application logs for errors
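The environment variables can be set in the Compose file directly. A sketch of the relevant section, assuming a service named `vibereader` (the service name and values are assumptions; they mirror the `.env` settings above):

```yaml
# Hypothetical docker-compose.yml excerpt
services:
  vibereader:
    environment:
      JOBS_ENABLED: "1"
      JOBS_WORKER_SLEEP: "5"
      JOBS_MAX_ATTEMPTS: "3"
      JOBS_CLEANUP_DAYS: "7"
```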
**Failed jobs**

- Check the `error_message` column in the `jobs` table
- Review application logs for exceptions
- Verify feed URLs are accessible
- Check database connectivity
**Worker won't run**

- Verify the PHP CLI is available: `php --version`
- Check file permissions on `worker.php`
- Verify the database connection works
- Check PHP error logs
- Adjust worker frequency: More frequent workers = faster processing but more database queries
- Batch processing: use `--max-jobs` to limit jobs per run
- Sleep interval: increase `JOBS_WORKER_SLEEP` if you have many workers, to reduce database load
- Cleanup frequency: run cleanup jobs during off-peak hours
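The tuning knobs above can be combined in a crontab. A sketch, where the schedule and the `--max-jobs` limit are assumptions to adapt to your load:

```cron
# Frequent but bounded worker runs: fast processing without unbounded batches
*/1 * * * * cd /path/to/vibereader && php worker.php --max-jobs=25

# Queue a cleanup job during off-peak hours
30 3 * * * cd /path/to/vibereader && php -r "require 'vendor/autoload.php'; \PhpRss\Queue\JobQueue::push(\PhpRss\Queue\JobQueue::TYPE_CLEANUP_ITEMS, []);"
```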
- Job queue endpoints require authentication
- CSRF protection is enforced for state-changing operations
- Job payloads are stored as JSON in the database (ensure database is secured)
- Failed jobs may contain error messages - review logs regularly
1. Set `JOBS_ENABLED=0` initially (synchronous mode)
2. Set up the worker cron job
3. Test with a few feeds
4. Set `JOBS_ENABLED=1` to enable background processing
5. Monitor job statistics to ensure proper operation
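The final step, monitoring job statistics, can be scripted. A minimal alert sketch that parses the `/api/jobs/stats` response; in practice the JSON would come from `curl -s http://localhost/api/jobs/stats`, but it is inlined here for clarity, and the warning threshold is an assumption:

```shell
#!/bin/sh
# Hypothetical monitoring sketch: warn when failed jobs accumulate.
# Replace the inlined JSON with: stats=$(curl -s http://localhost/api/jobs/stats)
stats='{"pending":5,"processing":1,"completed":150,"failed":2}'

# Extract the "failed" count with sed (avoids a jq dependency)
failed=$(printf '%s' "$stats" | sed -n 's/.*"failed":[[:space:]]*\([0-9]*\).*/\1/p')

if [ "${failed:-0}" -gt 0 ]; then
    echo "WARNING: $failed failed jobs in the queue"
fi
```

Run it from cron alongside the worker, or wire the output into whatever alerting you already use.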