out_gcs: Implement Google Cloud Storage (gcs) plugin #11792

Open

cosmo0920 wants to merge 4 commits into master from cosmo0920-implement-out_gcs-plugin

Conversation

cosmo0920 (Contributor) commented May 11, 2026

This PR adds a Google Cloud Storage (gcs) output plugin to send log-type events into GCS buckets.
There is a previous PR for this purpose: #6984

That PR appears to be stale, so I have taken over the work.

This relates to #1032, a long-standing issue.


Enter [N/A] in the box if an item is not applicable to your change.

Testing
Before we can approve your change, please submit the following in a comment:

  • Example configuration file for the change
pipeline:
  inputs:
    - name: dummy
      tag: dummy
      rate: 5

  outputs:
    - name: gcs
      match: '*'
      bucket: fbit-testing
      google_service_credentials: <user_credentials>.json
      compression: gzip
      upload_timeout: 10s
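
For reference, the same pipeline in Fluent Bit's classic configuration format would look roughly like this (a sketch; the option names are taken from the YAML example above):

```
[INPUT]
    Name  dummy
    Tag   dummy
    Rate  5

[OUTPUT]
    Name                        gcs
    Match                       *
    bucket                      fbit-testing
    google_service_credentials  <user_credentials>.json
    compression                 gzip
    upload_timeout              10s
```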
  • Debug log output from testing the change
Fluent Bit v5.0.6
* Copyright (C) 2015-2026 The Fluent Bit Authors
* Fluent Bit is a CNCF graduated project under the Fluent organization
* https://fluentbit.io

______ _                  _    ______ _ _           _____  _____ 
|  ___| |                | |   | ___ (_) |         |  ___||  _  |
| |_  | |_   _  ___ _ __ | |_  | |_/ /_| |_  __   _|___ \ | |/' |
|  _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / /   \ \|  /| |
| |   | | |_| |  __/ | | | |_  | |_/ / | |_   \ V //\__/ /\ |_/ /
\_|   |_|\__,_|\___|_| |_|\__| \____/|_|\__|   \_/ \____(_)\___/


[2026/05/11 20:03:04.553] [ info] Configuration:
[2026/05/11 20:03:04.553] [ info]  flush time     | 1.000000 seconds
[2026/05/11 20:03:04.553] [ info]  grace          | 5 seconds
[2026/05/11 20:03:04.553] [ info]  daemon         | 0
[2026/05/11 20:03:04.553] [ info] ___________
[2026/05/11 20:03:04.553] [ info]  inputs:
[2026/05/11 20:03:04.553] [ info]      dummy
[2026/05/11 20:03:04.553] [ info] ___________
[2026/05/11 20:03:04.553] [ info]  filters:
[2026/05/11 20:03:04.553] [ info] ___________
[2026/05/11 20:03:04.553] [ info]  outputs:
[2026/05/11 20:03:04.553] [ info]      gcs.0
[2026/05/11 20:03:04.553] [ info] ___________
[2026/05/11 20:03:04.553] [ info]  collectors:
[2026/05/11 20:03:04.553] [ info] [fluent bit] version=5.0.6, commit=5eaa64e8f9, pid=18676
[2026/05/11 20:03:04.553] [debug] [engine] coroutine stack size: 36864 bytes (36.0K)
[2026/05/11 20:03:04.553] [ info] [storage] ver=1.4.0, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2026/05/11 20:03:04.553] [ info] [simd    ] NEON
[2026/05/11 20:03:04.553] [ info] [cmetrics] version=2.1.3
[2026/05/11 20:03:04.553] [ info] [ctraces ] version=0.7.1
[2026/05/11 20:03:04.554] [ info] [input:dummy:dummy.0] initializing
[2026/05/11 20:03:04.554] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2026/05/11 20:03:04.554] [debug] [dummy:dummy.0] created event channels: read=22 write=23
[2026/05/11 20:03:04.554] [debug] [gcs:gcs.0] created event channels: read=24 write=25
[2026/05/11 20:03:04.617] [debug] [tls] attempting to load certificates from system keychain of macOS
<snip>
[2026/05/11 20:03:04.685] [debug] [tls] successfully loaded and added certificate 158 to trusted store
[2026/05/11 20:03:04.685] [debug] [tls] finished loading keychain certificates, total loaded: 159
[2026/05/11 20:03:04.685] [ info] [output:gcs:gcs.0] worker #0 started
[2026/05/11 20:03:04.685] [ info] [sp] stream processor started
[2026/05/11 20:03:04.685] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
[2026/05/11 20:03:05.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:05.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:05.690] [debug] [out flush] cb_destroy coro_id=0
[2026/05/11 20:03:05.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:06.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:06.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:06.690] [debug] [out flush] cb_destroy coro_id=1
[2026/05/11 20:03:06.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:07.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:07.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:07.690] [debug] [out flush] cb_destroy coro_id=2
[2026/05/11 20:03:07.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:08.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:08.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:08.690] [debug] [out flush] cb_destroy coro_id=3
[2026/05/11 20:03:08.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:09.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:09.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:09.690] [debug] [out flush] cb_destroy coro_id=4
[2026/05/11 20:03:09.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:10.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:10.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:10.690] [debug] [out flush] cb_destroy coro_id=5
[2026/05/11 20:03:10.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:11.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:11.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:11.690] [debug] [out flush] cb_destroy coro_id=6
[2026/05/11 20:03:11.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:12.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:12.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:12.690] [debug] [out flush] cb_destroy coro_id=7
[2026/05/11 20:03:12.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:13.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:13.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:13.690] [debug] [out flush] cb_destroy coro_id=8
[2026/05/11 20:03:13.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:14.736] [debug] [http_client] not using http_proxy for header
[2026/05/11 20:03:14.788] [debug] [oauth2] HTTP Status=200
[2026/05/11 20:03:14.788] [ info] [oauth2] access token from 'oauth2.googleapis.com:443' retrieved
[2026/05/11 20:03:14.789] [debug] [output:gcs:gcs.0] Pre-compression chunk size is 1760, After compression, chunk is 63 bytes
[2026/05/11 20:03:14.824] [debug] [http_client] not using http_proxy for header
[2026/05/11 20:03:14.937] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:14.937] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:14.938] [debug] [out flush] cb_destroy coro_id=9
[2026/05/11 20:03:14.938] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:15.690] [debug] [task] created task=0x9c105c240 id=0 OK
[2026/05/11 20:03:15.690] [debug] [output:gcs:gcs.0] task_id=0 assigned to thread #0
[2026/05/11 20:03:15.690] [debug] [out flush] cb_destroy coro_id=10
[2026/05/11 20:03:15.690] [debug] [task] destroy task=0x9c105c240 (task_id=0)
[2026/05/11 20:03:16] [engine] caught signal (SIGTERM)
[2026/05/11 20:03:16.624] [ info] [input] pausing dummy.0
[2026/05/11 20:03:16.625] [ info] [output:gcs:gcs.0] thread worker #0 stopping...
[2026/05/11 20:03:16.625] [ info] [output:gcs:gcs.0] thread worker #0 stopped
  • Attached Valgrind output that shows no leaks or memory corruption was found

With macOS's leaks command, no leaks are reported:

Process 18676 is not debuggable. Due to security restrictions, leaks can only show or save contents of readonly memory of restricted processes.

Process:         fluent-bit [18676]
Path:            /Users/USER/*/fluent-bit
Load Address:    0x100ad0000
Identifier:      fluent-bit
Version:         0
Code Type:       ARM64
Platform:        macOS
Parent Process:  leaks [18675]
Target Type:     live task

Date/Time:       2026-05-11 20:03:16.635 +0900
Launch Time:     2026-05-11 20:03:04.053 +0900
OS Version:      macOS 26.4.1 (25E253)
Report Version:  7
Analysis Tool:   /usr/bin/leaks

Physical footprint:         13.6M
Physical footprint (peak):  13.7M
Idle exit:                  untracked
----

leaks Report Version: 4.0, multi-line stacks
Process 18676: 2777 nodes malloced for 405 KB
Process 18676: 0 leaks for 0 total leaked bytes.

[2026/05/11 20:03:17] [engine] caught signal (SIGCONT)

If this is a change to packaging of containers or native binaries then please confirm it works for all targets.

  • Run local packaging test showing all targets (including any new ones) build.
  • Set ok-package-test label to test for all targets (requires maintainer to do).

Documentation

  • Documentation required for this feature

Backporting

  • Backport to latest stable release.

Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.

Summary by CodeRabbit

  • New Features

    • Added a Google Cloud Storage (GCS) output plugin to export logs to GCS with service-account auth, optional gzip compression, configurable retries, ordered delivery, object key formatting (sequence/UUID), and on-disk buffering.
  • Build

    • New build option to enable the GCS output; the "all outputs" build profile now includes GCS by default.
  • Tests

    • Added runtime tests covering GCS upload success and error scenarios.

Review Change Stack

coderabbitai (bot) commented May 11, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: eac98de6-63c2-4c69-b194-950ee9a1d7e4

📥 Commits

Reviewing files that changed from the base of the PR and between 2fb74aa and b2635b7.

📒 Files selected for processing (2)
  • plugins/out_gcs/gcs.h
  • plugins/out_gcs/gcs_store.c
🚧 Files skipped from review as they are similar to previous changes (2)
  • plugins/out_gcs/gcs.h
  • plugins/out_gcs/gcs_store.c

📝 Walkthrough


Adds a new out_gcs output plugin and build option, implements on-disk buffering, OAuth/JWT authentication, upload queue and HTTP upload logic (gzip/MD5/ACL/options), integrates plugin into build and tests, and provides runtime tests for success and error/retry paths.

Changes

GCS Output Plugin

  • Build Configuration (CMakeLists.txt, cmake/plugins_options.cmake, plugins/CMakeLists.txt, plugins/out_gcs/CMakeLists.txt): New FLB_OUT_GCS build option (default ON), enabled under FLB_ALL; plugin registered and sources (gcs.c, gcs_store.c) configured.
  • Headers: Types & Plugin State (plugins/out_gcs/gcs.h, plugins/out_gcs/gcs_store.h): Defines struct upload_queue, struct gcs_file, struct flb_gcs_oauth_credentials, and struct flb_gcs, plus constants for endpoints, scopes, compression modes, and public store APIs.
  • On-disk Buffer Store (plugins/out_gcs/gcs_store.c): Filesystem-backed buffering using Fluent Bit fstore: filename generation, init/exit, lookup by tag, append/read/lock/delete, and size/count enforcement.
  • Plugin Helpers & Test Mode (plugins/out_gcs/gcs.c): Test-mode env helpers, persistent sequence index read/write, MD5/base64 and random hex helpers, and small file helpers.
  • Credentials & OAuth (plugins/out_gcs/gcs.c): Service-account JSON parsing/unescape, credential lifecycle, base64-url encoding, RS256 JWT assembly, OAuth jwt-bearer token exchange with mutex-protected caching.
  • Queue & Upload Flow (plugins/out_gcs/gcs.c): Upload queue management (enqueue/dequeue/retries), request-body construction from store, HTTP POST upload routine (headers: auth, content-type, encoding, ACL, storage-class, Content-MD5), per-entry upload execution (sequence index, key construction, gzip), queue draining with preserve-order and retry semantics, backlog recovery, timers, init/flush/exit wiring, and plugin config map/registration.
  • Tests (tests/runtime/CMakeLists.txt, tests/runtime/out_gcs.c): Registers runtime tests and adds upload_success and upload_error tests using plugin test-mode environment variables to validate upload and retry behavior.

Sequence Diagram

sequenceDiagram
  participant Flb as FluentBit
  participant Plugin as out_gcs
  participant Store as gcs_store
  participant Queue as UploadQueue
  participant OAuth as OAuth2
  participant GCS as GoogleCloudStorage

  Flb->>Plugin: flush(msgpack)
  Plugin->>Store: write chunk (JSON)
  Store-->>Plugin: gcs_file handle
  Plugin->>Queue: enqueue(gcs_file)

  Queue->>Store: read chunk content
  Store-->>Queue: data buffer
  Queue->>OAuth: get_token()
  OAuth-->>Queue: access_token

  Queue->>Plugin: build request (headers + payload)
  Plugin->>GCS: POST /upload (Authorization: Bearer)
  GCS-->>Plugin: response (200 / error)
  Plugin->>Store: delete chunk (on success)
  Plugin->>Queue: remove entry / schedule retry (on failure)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • edsiper
  • niedbalski
  • patrick-stephens
  • celalettin1286

"I hopped through bytes and headers bright,
JWTs signed under the moonlit night,
Chunks tucked to disk then queued for flight,
Retries and timers keep uploads right,
A small rabbit cheers logs into light."

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 4.35%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title accurately describes the main change: implementation of a new Google Cloud Storage (GCS) output plugin for Fluent Bit.
  • Linked Issues Check (✅ Passed): check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check (✅ Passed): check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.






chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ffe6fb0502

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread plugins/out_gcs/gcs.c
Comment thread plugins/out_gcs/gcs.c Outdated
Comment thread plugins/out_gcs/gcs.c
cosmo0920 added 2 commits May 11, 2026 20:16
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>

coderabbitai (bot) left a comment

Actionable comments posted: 6

🧹 Nitpick comments (2)
tests/runtime/out_gcs.c (2)

65-67: ⚡ Quick win

Clean up the temporary store directory after the test.

The test creates a temporary directory with mkdtemp() but never removes it, leaving files in /tmp. Add cleanup code before returning.

🧹 Proposed cleanup

After flb_destroy(ctx) on line 98, add:

    /* Clean up temporary store directory */
    char rm_cmd[256];
    snprintf(rm_cmd, sizeof(rm_cmd), "rm -rf %s", store_dir);
    system(rm_cmd);

Or use a more portable approach:

    /* Clean up temporary store directory */
    flb_utils_recursive_unlink(store_dir);
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tests/runtime/out_gcs.c` around lines 65 - 67, The test creates a temp
directory via store_dir and mkdtemp but never removes it; after flb_destroy(ctx)
(the test teardown) add code to recursively remove store_dir — either by
invoking flb_utils_recursive_unlink(store_dir) if available or by executing a
safe platform call to remove the directory contents — ensuring the cleanup runs
before the test returns so the /tmp directory is not left behind.

19-21: ⚡ Quick win

Clean up the temporary store directory after the test.

The test creates a temporary directory with mkdtemp() but never removes it, leaving files in /tmp. Add cleanup code before returning.

🧹 Proposed cleanup

After flb_destroy(ctx) on line 51, add:

    /* Clean up temporary store directory */
    char rm_cmd[256];
    snprintf(rm_cmd, sizeof(rm_cmd), "rm -rf %s", store_dir);
    system(rm_cmd);

Or use a more portable approach:

    /* Clean up temporary store directory */
    flb_utils_recursive_unlink(store_dir);
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tests/runtime/out_gcs.c` around lines 19 - 21, The test creates a temporary
directory (store_dir via mkdtemp) but never removes it; after flb_destroy(ctx)
in the test teardown add cleanup to remove store_dir—either call
flb_utils_recursive_unlink(store_dir) if available, or invoke a safe removal
(e.g., build an rm -rf command with snprintf into a buffer and call system) to
recursively delete the temporary directory; ensure you reference the same
store_dir variable and perform the cleanup before the test returns.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@plugins/out_gcs/gcs_store.c`:
- Around line 55-68: The gcs_store_init currently creates a shared file store at
ctx->store_dir and always opens the same stream name "gcs_upload_buffer",
causing different out_gcs instances to share and drain each other's buffers;
update gcs_store_init to derive a unique stream namespace per instance (e.g.,
incorporate instance-specific fields such as ctx->name, ctx->bucket or an
internal instance id) and use that string when calling flb_fstore_stream_create
(ctx->fs_stream) so the sequence-index and buffered files live in an
instance-specific namespace; ensure the chosen identifier is stable across
restarts for that instance and still kept with ctx so cleanup and
flb_fstore_destroy behave correctly.

In `@plugins/out_gcs/gcs.c`:
- Around line 757-790: The generated key currently calls flb_get_s3_key(..., 0)
before ctx->seq_index is incremented, so $INDEX never appears in the object
name; move/perform the seq handling prior to key generation: if
ctx->key_fmt_has_seq_index, increment ctx->seq_index (or compute the next
sequence value) and pass that value as the final argument to flb_get_s3_key
instead of 0 so gcs_key/gcs_key_final includes the sequence, and keep the
existing write_seq_index(ctx->seq_index_file, ctx->seq_index) call to persist
the new value after successful use; update references to gcs_key, gcs_key_final,
ctx->seq_index, flb_get_s3_key and write_seq_index accordingly.
- Around line 56-82: The static header templates (content_type_header,
canned_acl_header, content_md5_header, storage_class_header) are mutated per
request and can race across out_gcs instances; move these into
gcs_upload_object() as local (stack) variables, initialize their
.key/.key_len/.val/.val_len per-request there, and use the local instances when
building the request instead of the global symbols; also remove or replace any
other references to the static globals so no code mutates shared header state.
- Around line 1084-1093: Check that flb_oauth2_create(...) and
flb_upstream_create(...) succeeded and fail init immediately if either returns
NULL: after assigning ctx->o = flb_oauth2_create(...) verify ctx->o != NULL (log
an error via flb_plg_error or process logger) and goto error on failure; do the
same after ctx->u = flb_upstream_create(...) to ensure ctx->u != NULL (and goto
error). Also ensure you clean up any partially initialized resources (e.g.,
destroy the mutex if token_mutex_initialized was set) when jumping to the error
path so there are no leaks.
- Around line 591-594: attach_recovered_chunk() backdates recovered entries but
add_to_queue() always overwrites entry->upload_time, causing recovered files to
be delayed; modify add_to_queue() so it only sets entry->upload_time when it is
not already initialized/backdated (e.g., if entry->upload_time == 0 or not in
the past), otherwise preserve the existing upload_time, then add the entry to
ctx->upload_queue; update the logic around upload_time in add_to_queue() (and
any callers) to ensure process_upload_queue() can pick up backdated entries
immediately.
- Around line 721-725: The code currently returns only the transport result from
flb_http_do (ret) which treats any completed HTTP response—including
401/403/5xx—as success; update the upload call path to inspect the HTTP response
status after flb_http_do (use the HTTP client/response available via the client
variable c or bytes/response fields) and treat only 2xx statuses as success
(return an error/non-zero for non-2xx so process_upload_queue() will retry),
while still calling flb_http_client_destroy(c) and
flb_upstream_conn_release(u_conn) to clean up; adjust the return value logic
around flb_http_do, the HTTP client 'c', and the response status check so that
non-2xx responses are not acknowledged as successful.

---

Nitpick comments:
In `@tests/runtime/out_gcs.c`:
- Around line 65-67: The test creates a temp directory via store_dir and mkdtemp
but never removes it; after flb_destroy(ctx) (the test teardown) add code to
recursively remove store_dir — either by invoking
flb_utils_recursive_unlink(store_dir) if available or by executing a safe
platform call to remove the directory contents — ensuring the cleanup runs
before the test returns so the /tmp directory is not left behind.
- Around line 19-21: The test creates a temporary directory (store_dir via
mkdtemp) but never removes it; after flb_destroy(ctx) in the test teardown add
cleanup to remove store_dir—either call flb_utils_recursive_unlink(store_dir) if
available, or invoke a safe removal (e.g., build an rm -rf command with snprintf
into a buffer and call system) to recursively delete the temporary directory;
ensure you reference the same store_dir variable and perform the cleanup before
the test returns.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b203c4a8-5359-459a-b781-d5c061e3be04

📥 Commits

Reviewing files that changed from the base of the PR and between 7299905 and ffe6fb0.

📒 Files selected for processing (10)
  • CMakeLists.txt
  • cmake/plugins_options.cmake
  • plugins/CMakeLists.txt
  • plugins/out_gcs/CMakeLists.txt
  • plugins/out_gcs/gcs.c
  • plugins/out_gcs/gcs.h
  • plugins/out_gcs/gcs_store.c
  • plugins/out_gcs/gcs_store.h
  • tests/runtime/CMakeLists.txt
  • tests/runtime/out_gcs.c

Comment thread plugins/out_gcs/gcs_store.c
Comment thread plugins/out_gcs/gcs.c Outdated
Comment thread plugins/out_gcs/gcs.c
Comment thread plugins/out_gcs/gcs.c
Comment thread plugins/out_gcs/gcs.c
Comment thread plugins/out_gcs/gcs.c
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
Signed-off-by: Hiroshi Hatake <hiroshi@chronosphere.io>
