Add native SOCKS4/SOCKS5 support for Squid cache_peer #30
Conversation
Build a custom Squid 6.10 image with patches that add socks4/socks5 options to cache_peer, enabling direct SOCKS proxy rotation without requiring Gost as an intermediary. The patch performs SOCKS negotiation (RFC 1928/1929) after TCP connect and before HTTP dispatch.

- squid_patch/: Dockerfile (pinned Squid 6.10), SocksPeerConnector.h, patch_apply.sh to modify CachePeer.h, cache_cf.cc, FwdState.cc, tunnel.cc
- setup/generate.php: socks4/socks5 types now use native cache_peer with originserver + socks options instead of spawning Gost containers
- template/docker-compose.yml: build custom squid-socks:6.10 image

https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
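The RFC 1928 negotiation described above (performed after TCP connect and before HTTP dispatch) can be sketched at the byte level. The following is an illustrative client-side sketch in Python, not the patch's C++ code; the helper names `build_greeting` and `build_connect` are invented for the example:

```python
import socket
import struct

def build_greeting(with_auth: bool) -> bytes:
    """RFC 1928 client greeting: VER=5, NMETHODS, METHODS."""
    methods = [0x00]            # method 0x00: no authentication
    if with_auth:
        methods.append(0x02)    # method 0x02: username/password (RFC 1929)
    return bytes([0x05, len(methods), *methods])

def build_connect(host: str, port: int) -> bytes:
    """RFC 1928 CONNECT request: VER, CMD=1, RSV, ATYP, DST.ADDR, DST.PORT."""
    try:
        addr = socket.inet_aton(host)          # IPv4 literal -> ATYP=1
        atyp, encoded = 0x01, addr
    except OSError:
        raw = host.encode("idna")              # domain name -> ATYP=3, length-prefixed
        atyp, encoded = 0x03, bytes([len(raw)]) + raw
    return bytes([0x05, 0x01, 0x00, atyp]) + encoded + struct.pack(">H", port)

# Greeting offering no-auth and user/pass, then CONNECT to example.org:443
print(build_greeting(True).hex())        # 05020002
print(build_connect("example.org", 443).hex())  # 050100030b6578616d706c652e6f726701bb
```

A real SOCKS5 exchange would send the greeting, read the 2-byte method selection, optionally run the RFC 1929 sub-negotiation, then send the CONNECT request and read the reply before any HTTP bytes flow.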
CI pipeline with 5 jobs:
- build: Compile Squid 6.10 from source with SOCKS patch (cached via GHA)
- test-config: Validate socks4/socks5 cache_peer option parsing
- test-socks5-e2e: End-to-end HTTP/HTTPS through SOCKS5 peer (microsocks)
- test-socks5-auth: SOCKS5 with username/password authentication
- test-generate: Verify generate.php produces correct SOCKS cache_peer config

https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
Walkthrough
Adds a source patch for native SOCKS4/5 cache_peer negotiation in Squid, a Docker image and entrypoint for the patched build, extensions to configuration generation, and a GitHub Actions workflow that builds and verifies them.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Squid as Patched Squid
    participant Socks as SOCKS Server
    participant Origin as Origin Server
    Client->>Squid: HTTP/HTTPS request
    activate Squid
    Squid->>Squid: cache_peer selection (socks_type != NONE)
    Squid->>Socks: SOCKS handshake (greeting / auth / CONNECT)
    activate Socks
    Socks-->>Squid: handshake response (success/failure)
    deactivate Socks
    Squid->>Origin: forward request over the SOCKS tunnel
    activate Origin
    Origin-->>Squid: response
    deactivate Origin
    Squid-->>Client: response returned
    deactivate Squid
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
- Use 'python3 -' so stdin is read as script instead of treating the file argument as the script to execute
- Re-declare ARG SQUID_VERSION after second FROM stage so the LABEL can reference it

https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
🧹 Nitpick comments (8)
squid_patch/src/SocksPeerConnector.h (2)
149-157: RFC 1929 limits username and password length to a maximum of 255 bytes each.
user.length() and pass.length() are cast to uint8_t, so input longer than 255 bytes is silently truncated. Consider adding validation for over-long credentials, or returning early.

🛠️ Proposed fix

```diff
     if (gResp[1] == 0x02) {
         if (!hasAuth)
             return false;
+
+        if (user.length() > 255 || pass.length() > 255)
+            return false; /* RFC 1929: max 255 bytes each */
         uint8_t auth[515];
         size_t aPos = 0;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/src/SocksPeerConnector.h` around lines 149 - 157, The code building the RFC1929 username/password sub-negotiation (auth buffer in SocksPeerConnector.h using variables auth, aPos, user, pass) doesn't validate that user.length() and pass.length() fit in one byte (<=255) and will silently truncate; add explicit validation checks for user.length() and pass.length() <= 255 before writing into auth and handle violations (return error/early fail or propagate an exception) so you do not cast/truncate long inputs into uint8_t, and ensure any error path logs or signals the problem instead of proceeding.
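To make the 255-byte limit concrete, here is a minimal Python sketch of the RFC 1929 sub-negotiation message with the validation the comment asks for applied up front; `build_userpass` is a hypothetical name, not a function from the patch:

```python
def build_userpass(user: bytes, password: bytes) -> bytes:
    """RFC 1929 sub-negotiation: VER=1, ULEN, UNAME, PLEN, PASSWD.

    ULEN and PLEN are single bytes, so anything over 255 must be
    rejected explicitly instead of silently truncated by a narrow cast.
    """
    if len(user) > 255 or len(password) > 255:
        raise ValueError("RFC 1929: username/password limited to 255 bytes each")
    return bytes([0x01, len(user)]) + user + bytes([len(password)]) + password

print(build_userpass(b"alice", b"secret").hex())  # 0105616c69636506736563726574
```

The C++ fix in the suggested diff is the same idea: check the lengths before they are ever narrowed to uint8_t.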
236-275: The socket-flag restoration is handled correctly, but logging on error would help.
negotiate() switches the socket to blocking mode and restores the original state after the handshake. If fcntl/setsockopt fails in the restoration code at lines 268, 271-272, the code currently continues silently; logging via Squid's debugs() macro would make debugging easier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/src/SocksPeerConnector.h` around lines 236 - 275, The negotiate() function restores socket flags and timeouts after the SOCKS handshake but currently ignores errors from the restore calls; update the restore block in negotiate (the fcntl(fd, F_SETFL, flags) and the setsockopt calls that clear SO_RCVTIMEO/SO_SNDTIMEO) to check their return values and emit appropriate Squid debug logs using debugs() (include context like "negotiate", the fd and which call failed, and errno/strerror) so failures are visible during debugging while still proceeding to return the handshake result.

.github/workflows/squid-build-test.yml (1)
186-192: The Squid startup wait loop has no error handling on timeout. If Squid has not started after 15 seconds, the loop exits silently and the failure goes undetected until a later test fails. An explicit timeout error makes debugging easier.

🔧 Add error handling on timeout

```diff
 for i in $(seq 1 15); do
   if curl -sf -x http://127.0.0.1:3128 -o /dev/null http://httpbin.org/ip 2>/dev/null; then
     echo "Squid is ready after ${i}s"
     break
   fi
   sleep 1
+  if [ "$i" -eq 15 ]; then
+    echo "ERROR: Squid failed to start within 15 seconds"
+    docker logs squid-test 2>&1 || true
+    exit 1
+  fi
 done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/squid-build-test.yml around lines 186 - 192, The readiness loop that checks Squid with curl (the for i in $(seq 1 15); do ... curl -sf -x http://127.0.0.1:3128 -o /dev/null http://httpbin.org/ip ... done) lacks explicit timeout handling; modify the block so that if the loop completes without a successful curl (i.e., after 15 attempts) it prints a clear error message including the elapsed time and exits with a non-zero status (e.g., echo "Squid did not become ready after 15s" >&2; exit 1) to fail the job fast and aid debugging.

squid_patch/Dockerfile (2)
34-37: The source download uses HTTP.
The tarball is downloaded from http://www.squid-cache.org; fetching over HTTPS, if available, reduces the risk of a man-in-the-middle attack. Consider switching to https:// if squid-cache.org supports HTTPS.

```bash
#!/bin/bash
# Verify if HTTPS is supported
curl -sI https://www.squid-cache.org/Versions/v6/squid-6.10.tar.xz | head -5
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/Dockerfile` around lines 34 - 37, The Dockerfile's RUN wget step uses an http URL which is insecure; update the RUN wget command that downloads "http://www.squid-cache.org/Versions/v6/squid-${SQUID_VERSION}.tar.xz" to use HTTPS (https://www.squid-cache.org/Versions/v6/squid-${SQUID_VERSION}.tar.xz) so the tarball is fetched over TLS, and ensure wget will fail on certificate issues (keep existing flags or add --https-only/--no-check-certificate as appropriate for your security policy).
84-93: Consider running as a non-root user. As Trivy points out, there is no USER directive. A squid user is created, but the container starts as root. root is needed so the entrypoint can run chown and squid -z, but switching to the squid user after initialization via gosu or su-exec would improve the security posture.

🔒 Example of dropping privileges in docker-entrypoint.sh

```diff
 # Ensure proper ownership
 chown -R squid:squid /var/cache/squid /var/log/squid /var/run/squid 2>/dev/null || true
 echo "Starting Squid with config: ${SQUID_CONFIG_FILE}"
-exec squid -N -f "${SQUID_CONFIG_FILE}" "$@"
+exec su-exec squid squid -N -f "${SQUID_CONFIG_FILE}" "$@"
```

The runtime stage needs su-exec installed:

```diff
 RUN apt-get update && apt-get install -y --no-install-recommends \
     libssl3 \
+    su-exec \
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/Dockerfile` around lines 84 - 93, The container always starts as root, which is a security risk; modify the Dockerfile and entrypoint so that initialization (chown, squid -z, etc. inside /usr/local/bin/docker-entrypoint.sh) runs as root and the runtime then drops to an unprivileged user. Concretely, install su-exec or gosu in the runtime stage of the Dockerfile, invoke a privilege-dropping command in docker-entrypoint.sh after initialization so squid runs as the squid user, and add USER squid to the Dockerfile to make squid the default execution user (see: useradd invocation, docker-entrypoint.sh, ENTRYPOINT, USER directive, su-exec/gosu).

squid_patch/patch_apply.sh (2)
354-355: Like the other patches, the tunnel.cc patch should abort the build on failure. If the build succeeds while SOCKS support for HTTPS/CONNECT tunneling is non-functional, unexpected behavior surfaces at runtime.

🛠️ Change the warning to an error

```diff
 if inserted:
     with open(filepath, 'w') as f:
         f.write(content)
 else:
-    print("WARNING: Could not patch tunnel.cc - HTTPS tunneling through SOCKS peers may not work", file=sys.stderr)
+    print("ERROR: Could not patch tunnel.cc - HTTPS tunneling through SOCKS peers will not work", file=sys.stderr)
+    sys.exit(1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/patch_apply.sh` around lines 354 - 355, The relevant else block in patch_apply.sh currently only prints a warning and tolerates failure; change it so a tunnel.cc patch failure also aborts the build: locate the spot (else: print("WARNING: Could not patch tunnel.cc ...", file=sys.stderr)), replace the warning-only behavior with a non-zero exit, and print the error message to stderr before terminating the process (use exit 1 to match the script's existing error-handling pattern).
239-241: The script continues with only a warning when patch application fails; consider failing the build instead.
If inserting the SOCKS hook into FwdState::dispatch() fails, routing HTTP requests through the SOCKS proxy does not work. Failing the build with exit 1 instead of printing a warning and continuing catches the problem early.

🛠️ Change the warning to an error

```diff
 if not inserted:
-    print("WARNING: Could not find dispatch() insertion point in FwdState.cc", file=sys.stderr)
-    print("         SOCKS support for HTTP requests may not work", file=sys.stderr)
+    print("ERROR: Could not find dispatch() insertion point in FwdState.cc", file=sys.stderr)
+    print("       SOCKS support for HTTP requests will not work", file=sys.stderr)
+    sys.exit(1)
 else:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/patch_apply.sh` around lines 239 - 241, The script currently only prints warnings when it fails to insert the SOCKS hook into FwdState::dispatch() (the block that checks "if not inserted" and prints to sys.stderr); change this to fail the build by replacing the warning-only behavior with an explicit non-zero exit (e.g., call sys.exit(1) or raise SystemExit(1)) after logging the error so the process stops immediately; keep the same diagnostic messages but ensure the code path uses sys.exit(1) to surface the failure to the CI/build system.

template/docker-compose.yml (1)
17-18: The health check depends on an external service (httpbin.org). It can fail in network-restricted environments or during httpbin.org downtime. Consider a local endpoint (e.g. curl -sf http://127.0.0.1:3128/squid-internal-mgr/info). That said, for the purpose of testing proxy functionality, the current approach is acceptable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@template/docker-compose.yml` around lines 17 - 18, The health check depends on an external service (httpbin.org); instead of curling an external host via the https_proxy/http_proxy set inside the healthcheck test command (CMD-SHELL), hit a local endpoint (e.g. 127.0.0.1:3128/squid-internal-mgr/info). Concretely, swap the curl target in docker-compose's healthcheck → test (CMD-SHELL) from httpbin.org to the local proxy manager path, or make it switchable via an environment variable so it does not fail under network restrictions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.github/workflows/squid-build-test.yml:
- Around line 186-192: The readiness loop that checks Squid with curl (the for i
in $(seq 1 15); do ... curl -sf -x http://127.0.0.1:3128 -o /dev/null
http://httpbin.org/ip ... done) lacks explicit timeout handling; modify the
block so that if the loop completes without a successful curl (i.e., after 15
attempts) it prints a clear error message including the elapsed time and exits
with a non-zero status (e.g., echo "Squid did not become ready after 15s" >&2;
exit 1) to fail the job fast and aid debugging.
In `@squid_patch/Dockerfile`:
- Around line 34-37: The Dockerfile's RUN wget step uses an http URL which is
insecure; update the RUN wget command that downloads
"http://www.squid-cache.org/Versions/v6/squid-${SQUID_VERSION}.tar.xz" to use
HTTPS (https://www.squid-cache.org/Versions/v6/squid-${SQUID_VERSION}.tar.xz) so
the tarball is fetched over TLS, and ensure wget will fail on certificate issues
(keep existing flags or add --https-only/--no-check-certificate as appropriate
for your security policy).
- Around line 84-93: The container always starts as root, which is a security
risk; modify the Dockerfile and entrypoint so that initialization (chown, squid
-z, etc. inside /usr/local/bin/docker-entrypoint.sh) runs as root and the
runtime then drops to an unprivileged user. Concretely, install su-exec or gosu
in the runtime stage of the Dockerfile, invoke a privilege-dropping command in
docker-entrypoint.sh after initialization so squid runs as the squid user, and
add USER squid to the Dockerfile to make squid the default execution user (see:
useradd invocation, docker-entrypoint.sh, ENTRYPOINT, USER directive,
su-exec/gosu).
In `@squid_patch/patch_apply.sh`:
- Around line 354-355: The relevant else block in patch_apply.sh currently only
prints a warning and tolerates failure; change it so a tunnel.cc patch failure
also aborts the build: locate the spot (else: print("WARNING: Could not patch
tunnel.cc ...", file=sys.stderr)), replace the warning-only behavior with a
non-zero exit, and print the error message to stderr before terminating the
process (use exit 1 to match the script's existing error-handling pattern).
- Around line 239-241: The script currently only prints warnings when it fails
to insert the SOCKS hook into FwdState::dispatch() (the block that checks "if
not inserted" and prints to sys.stderr); change this to fail the build by
replacing the warning-only behavior with an explicit non-zero exit (e.g., call
sys.exit(1) or raise SystemExit(1)) after logging the error so the process stops
immediately; keep the same diagnostic messages but ensure the code path uses
sys.exit(1) to surface the failure to the CI/build system.
In `@squid_patch/src/SocksPeerConnector.h`:
- Around line 149-157: The code building the RFC1929 username/password
sub-negotiation (auth buffer in SocksPeerConnector.h using variables auth, aPos,
user, pass) doesn't validate that user.length() and pass.length() fit in one
byte (<=255) and will silently truncate; add explicit validation checks for
user.length() and pass.length() <= 255 before writing into auth and handle
violations (return error/early fail or propagate an exception) so you do not
cast/truncate long inputs into uint8_t, and ensure any error path logs or
signals the problem instead of proceeding.
- Around line 236-275: The negotiate() function restores socket flags and
timeouts after the SOCKS handshake but currently ignores errors from the restore
calls; update the restore block in negotiate (the fcntl(fd, F_SETFL, flags) and
the setsockopt calls that clear SO_RCVTIMEO/SO_SNDTIMEO) to check their return
values and emit appropriate Squid debug logs using debugs() (include context
like "negotiate", the fd and which call failed, and errno/strerror) so failures
are visible during debugging while still proceeding to return the handshake
result.
In `@template/docker-compose.yml`:
- Around line 17-18: The health check depends on an external service
(httpbin.org); instead of curling an external host via the https_proxy/http_proxy
set inside the healthcheck test command (CMD-SHELL), hit a local endpoint (e.g.
127.0.0.1:3128/squid-internal-mgr/info). Concretely, swap the curl target in
docker-compose's healthcheck → test (CMD-SHELL) from httpbin.org to the local
proxy manager path, or make it switchable via an environment variable so it does
not fail under network restrictions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9faef85a-b5fb-4225-ac35-df79f9e86e5e
📒 Files selected for processing (8)
- .github/workflows/squid-build-test.yml
- setup/generate.php
- squid_patch/.dockerignore
- squid_patch/Dockerfile
- squid_patch/docker-entrypoint.sh
- squid_patch/patch_apply.sh
- squid_patch/src/SocksPeerConnector.h
- template/docker-compose.yml
- SocksPeerConnector.h: Add RFC 1929 user/pass length validation (max 255)
- patch_apply.sh: Fail build (exit 1) when FwdState.cc or tunnel.cc patch insertion fails instead of just warning
- Dockerfile: Use HTTPS for Squid source download, add gosu for privilege de-escalation to squid user at runtime
- docker-entrypoint.sh: Use gosu to drop to squid user after init
- workflow: Add explicit timeout error handling for Squid startup loops

https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
- CachePeer.h: Use int/char* fields instead of including SocksPeerConnector.h (avoids POSIX header conflicts before squid.h)
- cache_cf.cc: Use xstrdup() for string allocation (Squid convention)
- FwdState.cc: Use url.host().c_str() (SBuf type in Squid 6.x), static_cast<SocksPeerType>, handle nullable char* for socks_user/pass
- tunnel.cc: Target connectDone() (correct function name), use conn->getPeer()/conn->fd (not serverConnection() which doesn't exist in TunnelStateData)
- All patches now exit 1 on failure instead of warning

https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@squid_patch/Dockerfile`:
- Around line 33-37: Before extracting the Squid tarball, verify its integrity
by checking a trusted SHA256 (or signature) and fail the build on mismatch:
fetch the corresponding checksum or signature for squid-${SQUID_VERSION}.tar.xz
(or use a build-time known SHA256 value), compute and validate the downloaded
file (squid.tar.xz) with sha256sum -c or openssl dgst, or verify a .asc with gpg
--verify if upstream provides signatures; ensure the RUN step that downloads
squid-${SQUID_VERSION}.tar.xz performs this verification and exits non‑zero on
failure so extraction (tar xf squid.tar.xz) only runs after a successful
integrity/signature check.
In `@squid_patch/patch_apply.sh`:
- Around line 142-146: The sed currently replaces the first `#include` line and
puts SocksPeerConnector.h before squid.h; change the insertion logic to locate
the existing '#include "squid.h"' (in ${FWD_STATE}) and insert '#include
"SocksPeerConnector.h"' immediately after that line instead of before the first
include. Update the check that follows to still verify that
'SocksPeerConnector.h' exists in ${FWD_STATE} and keep using die on failure;
reference the symbols SocksPeerConnector.h, "squid.h", ${FWD_STATE} and die when
implementing the fix.
- Around line 61-130: The inserted SOCKS parsing must only be added inside the
originserver peer option block; update the anchor selection logic (the ANCHOR
variable) so it requires finding "originserver" specifically (or prefer
"originserver" over other patterns) and fail if "originserver" isn't found,
instead of inserting after other peer options, so the socks_code snippet is only
injected within the originserver handling block.
- Around line 83-94: The generated socks_code block is corrupting cache_cf.cc by
starting with a leading "}" and missing the closing brace for the final branch;
update the socks_code string so it does not begin with a stray "}" (change "}
else if (!strcmp(token, \"socks4\")) {" to "else if (!strcmp(token, \"socks4\"))
{") and add the matching closing "}" after the socks-pass branch (i.e., append a
"}" at the end of the socks_code string) so the else-if chain around
p->socks_type, p->socks_user and p->socks_pass remains syntactically correct in
cache_cf.cc.
In `@squid_patch/src/SocksPeerConnector.h`:
- Around line 35-59: The syncSend/syncRecv functions and their use from
negotiate(), FwdState::dispatch(), and TunnelStateData::connectDone() perform
blocking I/O and must be replaced by a non-blocking async Comm state machine:
remove calls to syncSend/syncRecv and stop switching the socket to blocking in
negotiate(); instead implement an asynchronous handshake state that drives
send/recv via the event loop (e.g. a new SocksHandshakeState or extend existing
TunnelStateData) using non-blocking sockets and registering writable/readable
callbacks (use existing Comm/dispatcher APIs or the dispatch mechanism) to
perform partial writes/reads until complete, and resume the original
FwdState/TunnelStateData only after the async handshake finishes.
- Around line 63-99: socks4Connect currently memcpy's user and host into the
fixed-size req buffer without bounds checks and mis-encodes IP literals;
validate lengths before any memcpy into req (ensure user.length() and
host.length() fit remaining space in req), reject or truncate overly long user
IDs/hostnames per SOCKS4 limits, and ensure useSocks4a is only set for non-IPv4
literals (use inet_pton to detect IPv4/IPv6 and encode IPv4/IPv6 appropriately);
update the logic around req, pos, user, host, and useSocks4a in socks4Connect
and apply the same bounds/encoding fixes to the analogous code at the other
occurrence (lines ~179-188) so no buffer overflow or incorrect ATYP encoding can
occur before calling syncSend.
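The bounds and address-type handling this comment describes can be illustrated outside of C++. Below is a hedged Python sketch of SOCKS4 vs SOCKS4a request layout, with explicit length checks standing in for the req-buffer bounds checks; `build_socks4_connect` is an invented name, not a function from the patch:

```python
import socket
import struct

def build_socks4_connect(host: str, port: int, user: bytes = b"") -> bytes:
    """SOCKS4 CONNECT: VN=4, CD=1, DSTPORT, DSTIP, USERID, NUL.

    SOCKS4a is used only for non-IPv4-literal hosts: DSTIP is set to
    0.0.0.x (x != 0) and the NUL-terminated hostname follows USERID.
    """
    if len(user) > 255 or len(host) > 255:
        raise ValueError("userid/hostname too long for a bounded request buffer")
    try:
        dstip = socket.inet_aton(host)   # IPv4 literal: plain SOCKS4 encoding
        tail = b""
    except OSError:
        dstip = b"\x00\x00\x00\x01"      # SOCKS4a marker address 0.0.0.1
        tail = host.encode("ascii") + b"\x00"
    return b"\x04\x01" + struct.pack(">H", port) + dstip + user + b"\x00" + tail
```

The key points mirrored from the review: check lengths before writing into a fixed-size buffer, and only take the SOCKS4a (hostname) path when the host is not an IPv4 literal.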
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: a2550288-9113-4f43-bddf-0a58f76ea13e
📒 Files selected for processing (5)
- .github/workflows/squid-build-test.yml
- squid_patch/Dockerfile
- squid_patch/docker-entrypoint.sh
- squid_patch/patch_apply.sh
- squid_patch/src/SocksPeerConnector.h
✅ Files skipped from review due to trivial changes (1)
- .github/workflows/squid-build-test.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- squid_patch/docker-entrypoint.sh
```cpp
static inline bool syncSend(int fd, const void *buf, size_t len)
{
    const char *p = static_cast<const char *>(buf);
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = ::send(fd, p + sent, len - sent, MSG_NOSIGNAL);
        if (n <= 0)
            return false;
        sent += static_cast<size_t>(n);
    }
    return true;
}

static inline bool syncRecv(int fd, void *buf, size_t len)
{
    char *p = static_cast<char *>(buf);
    size_t got = 0;
    while (got < len) {
        ssize_t n = ::recv(fd, p + got, len - got, 0);
        if (n <= 0)
            return false;
        got += static_cast<size_t>(n);
    }
    return true;
}
```
This handshake blocks the Squid worker synchronously.
negotiate() switches the fd to blocking and loops on send/recv until completion. This PR inserts that into FwdState::dispatch() and TunnelStateData::connectDone(), so a single slow SOCKS peer can stall the same worker for up to 10 seconds. This is not work that belongs on the event loop; it should be split out into an asynchronous Comm state machine.
Also applies to: 240-279
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squid_patch/src/SocksPeerConnector.h` around lines 35 - 59, The
syncSend/syncRecv functions and their use from negotiate(),
FwdState::dispatch(), and TunnelStateData::connectDone() perform blocking I/O
and must be replaced by a non-blocking async Comm state machine: remove calls to
syncSend/syncRecv and stop switching the socket to blocking in negotiate();
instead implement an asynchronous handshake state that drives send/recv via the
event loop (e.g. a new SocksHandshakeState or extend existing TunnelStateData)
using non-blocking sockets and registering writable/readable callbacks (use
existing Comm/dispatcher APIs or the dispatch mechanism) to perform partial
writes/reads until complete, and resume the original FwdState/TunnelStateData
only after the async handshake finishes.
- patch_apply.sh: Insert #include after squid.h (not before)
- patch_apply.sh: Fix socks_code block - remove leading "}", add trailing "}" to maintain valid else-if chain in cache_cf.cc
- patch_apply.sh: url.host() returns const char* in Squid 6.10, remove .c_str() calls
- patch_apply.sh: Add originserver warning when socks4/socks5 is used without originserver option
- SocksPeerConnector.h: Add bounds checks for SOCKS4 req buffer, detect IPv4/IPv6 literals for proper SOCKS5 ATYP encoding
- Dockerfile: Add SHA256 integrity check for Squid tarball

Verified: full configure + make succeeds on Squid 6.10 source.

https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Actionable comments posted: 2
♻️ Duplicate comments (2)
squid_patch/patch_apply.sh (1)
85-99: ⚠️ Potential issue | 🟠 Major

Do not let a SOCKS peer without originserver through with only a warning. The hook generated here turns the peer socket into a direct tunnel to request->url.host():port. Allowing that for a non-originserver peer carries a configuration that conflicts with Squid's subsequent forwarding assumptions all the way to runtime. Rather than a warning, socks_type != 0 && !p->options.originserver should fail at parse time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/patch_apply.sh` around lines 85 - 99, The code in the cache_peer token parsing (handling "socks4"/"socks5") currently sets p->socks_type and only logs a warning when p->options.originserver is false; instead make parsing fail immediately when a SOCKS type is specified without originserver by checking if (p->socks_type != 0 && !p->options.originserver) in the same parse block (the lines handling "socks4"/"socks5" where p->socks_type is set and debugs(...) is called) and replace the warning with a parse failure (return/throw the parser error or call the existing parse-failure path used elsewhere) so the config is rejected at parse time rather than emitting a runtime warning.

squid_patch/src/SocksPeerConnector.h (1)
35-59: ⚠️ Potential issue | 🟠 Major

This handshake synchronously blocks the worker's event loop.
negotiate() switches the fd to blocking, and syncSend()/syncRecv() wait for completion up to the 10-second timeout. Called from FwdState::dispatch()/TunnelStateData::connectDone(), a single slow SOCKS peer stalls even unrelated traffic on the same Squid worker. This needs to be split out into an asynchronous Comm state.

Also applies to: 262-299
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/src/SocksPeerConnector.h` around lines 35 - 59, The current syncSend()/syncRecv() and negotiate() block and stall the worker's event loop, so extract the handshake from the synchronous call paths (via FwdState::dispatch()/TunnelStateData::connectDone()) and make it asynchronous. Concretely, stop using syncSend/syncRecv directly and stop switching the fd to blocking in negotiate(); move the handshake into a new Comm state (or an existing asynchronous Comm connection state), drive it with non-blocking socket I/O and read/write handlers on the event loop, and manage the timeout with an event timer (referenced functions/classes: syncSend, syncRecv, negotiate, FwdState::dispatch, TunnelStateData::connectDone).
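The asynchronous state machine this review asks for can be modeled abstractly. Below is a minimal Python sketch of an event-driven SOCKS5 (no-auth) handshake in which each readiness event advances one state and nothing blocks; the class and state names are hypothetical and deliberately much simpler than Squid's Comm API:

```python
from enum import Enum, auto

class SocksState(Enum):
    SEND_GREETING = auto()
    READ_GREETING_REPLY = auto()
    SEND_CONNECT = auto()
    READ_CONNECT_REPLY = auto()
    DONE = auto()
    FAILED = auto()

class SocksHandshake:
    """Drive the handshake from event-loop callbacks instead of blocking
    send/recv: each callback advances at most one state."""
    def __init__(self, connect_req: bytes):
        self.state = SocksState.SEND_GREETING
        self.connect_req = connect_req
    def on_writable(self, send):
        if self.state is SocksState.SEND_GREETING:
            send(b"\x05\x01\x00")          # greeting: offer no-auth only
            self.state = SocksState.READ_GREETING_REPLY
        elif self.state is SocksState.SEND_CONNECT:
            send(self.connect_req)
            self.state = SocksState.READ_CONNECT_REPLY
    def on_readable(self, data: bytes):
        ok = data[:2] == b"\x05\x00"       # VER=5, success/no-auth selected
        if self.state is SocksState.READ_GREETING_REPLY:
            self.state = SocksState.SEND_CONNECT if ok else SocksState.FAILED
        elif self.state is SocksState.READ_CONNECT_REPLY:
            self.state = SocksState.DONE if ok else SocksState.FAILED
```

In the real fix, the original FwdState/TunnelStateData would resume only once the state reaches DONE, and the timeout would be an event timer that flips the state to FAILED instead of a blocking SO_RCVTIMEO.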
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@squid_patch/patch_apply.sh`:
- Around line 33-49: CachePeer now has new char * fields socks_user and
socks_pass but there is no final free on CachePeer destruction; update the
CachePeer cleanup/destructor (e.g., CachePeer::~CachePeer, peerFree,
freeCachePeer or the function that releases CachePeer objects) to call
safe_free() (or the project’s preferred deallocator) on socks_user and
socks_pass before freeing the struct so credentials are not leaked; ensure both
fields are handled in the same cleanup path that frees other CachePeer members.
- Around line 168-189: After a successful SOCKS negotiation via
SocksPeerConnector::negotiate (called inside the block that checks
serverConnection()->getPeer() and sp->socks_type), mark the resulting
socket/connection as non-persistent and as bound to the specific target
host:port so it cannot be returned to the generic pconn pool; specifically, set
the appropriate non-persistent flag on the serverConnection()/peer connection
object (so it is not reused), attach a per-target key (e.g., derived from
request->url.host() and port) to the connection for target-scoped management,
and ensure code paths that would register this connection with the persistent
connection pool are bypassed after negotiation (so reuse/pool insertion is
prevented). Ensure these changes are applied immediately after a successful
SocksPeerConnector::negotiate and before returning to normal HTTP dispatch (the
block that currently logs "SOCKS negotiation OK for peer" and proceeds).
---
Duplicate comments:
In `@squid_patch/patch_apply.sh`:
- Around line 85-99: The code in the cache_peer token parsing (handling
"socks4"/"socks5") currently sets p->socks_type and only logs a warning when
p->options.originserver is false; instead make parsing fail immediately when a
SOCKS type is specified without originserver by checking if (p->socks_type != 0
&& !p->options.originserver) in the same parse block (the lines handling
"socks4"/"socks5" where p->socks_type is set and debugs(...) is called) and
replace the warning with a parse failure (return/throw the parser error or call
the existing parse-failure path used elsewhere) so the config is rejected at
parse time rather than emitting a runtime warning.
In `@squid_patch/src/SocksPeerConnector.h`:
- Around line 35-59: The current syncSend()/syncRecv() and negotiate() block and
stall the worker's event loop, so extract the handshake from the synchronous
call paths (via FwdState::dispatch()/TunnelStateData::connectDone()) and make it
asynchronous. Concretely, stop using syncSend/syncRecv directly and stop
switching the fd to blocking in negotiate(); move the handshake into a new Comm
state (or an existing asynchronous Comm connection state), drive it with
non-blocking socket I/O and read/write handlers on the event loop, and manage
the timeout with an event timer (referenced functions/classes: syncSend,
syncRecv, negotiate, FwdState::dispatch, TunnelStateData::connectDone).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b7ec1c3e-58cf-4f1c-a822-a7fd7aaf37b3
📒 Files selected for processing (3)
- squid_patch/Dockerfile
- squid_patch/patch_apply.sh
- squid_patch/src/SocksPeerConnector.h
```python
socks_hook = r'''
    /* SOCKS peer negotiation: after TCP connect, before HTTP dispatch */
    if (const auto sp = serverConnection()->getPeer()) {
        if (sp->socks_type) {
            const auto targetPort = static_cast<uint16_t>(request->url.port());
            debugs(17, 3, "SOCKS" << sp->socks_type
                   << " negotiation with peer " << sp->host
                   << " for " << request->url.host() << ":" << targetPort);
            if (!SocksPeerConnector::negotiate(
                    serverConnection()->fd,
                    static_cast<SocksPeerType>(sp->socks_type),
                    std::string(request->url.host()),
                    targetPort,
                    sp->socks_user ? std::string(sp->socks_user) : std::string(),
                    sp->socks_pass ? std::string(sp->socks_pass) : std::string())) {
                debugs(17, 2, "SOCKS negotiation FAILED for peer " << sp->host);
                retryOrBail();
                return;
            }
            debugs(17, 3, "SOCKS negotiation OK for peer " << sp->host);
        }
    }
```
🧩 Analysis chain
🏁 Script executed:
# Find and read the patch_apply.sh file
find . -name "patch_apply.sh" -type f | head -20

Repository: 39ff/docker-rotating-proxy
Length of output: 98
🏁 Script executed:
# Also check the repository structure
git ls-files | grep -E "(patch|socks|peer)" | head -30

Repository: 39ff/docker-rotating-proxy
Length of output: 215
🏁 Script executed:
# Read the patch_apply.sh file to see the full context
cat -n squid_patch/patch_apply.sh

Repository: 39ff/docker-rotating-proxy
Length of output: 13682
🏁 Script executed:
# Also check the SocksPeerConnector.h header
cat -n squid_patch/src/SocksPeerConnector.h

Repository: 39ff/docker-rotating-proxy
Length of output: 11602
🏁 Script executed:
# Search for connection persistence/reuse related code patterns
rg -i "persistent|pconn|reuse" squid_patch/patch_apply.sh

Repository: 39ff/docker-rotating-proxy
Length of output: 52
🏁 Script executed:
# Check if there's any code in FwdState-related areas that handles connection persistence
rg -A 5 -B 5 "negotiate" squid_patch/patch_apply.sh | head -100

Repository: 39ff/docker-rotating-proxy
Length of output: 1607
🏁 Script executed:
# Search for any connection cleanup or marking after SOCKS negotiation
rg -i "close|persistent|nonpersistent|pconn" squid_patch/src/

Repository: 39ff/docker-rotating-proxy
Length of output: 52
🏁 Script executed:
# Let me check what happens after the negotiation - look at the full dispatch flow
sed -n '160,230p' squid_patch/patch_apply.shRepository: 39ff/docker-rotating-proxy
Length of output: 2806
🏁 Script executed:
# Search for any connection handling or persistence flags in Squid context
# Look for retryOrBail and what happens after negotiate succeeds
rg -B 3 -A 3 "retryOrBail" squid_patch/patch_apply.shRepository: 39ff/docker-rotating-proxy
Length of output: 804
🏁 Script executed:
# Let me check if there's any documentation or comments about connection handling in the SocksPeerConnector
grep -i "reuse\|persistent\|pool\|pconn" squid_patch/src/SocksPeerConnector.hRepository: 39ff/docker-rotating-proxy
Length of output: 52
🏁 Script executed:
# Check what happens after negotiation in the patched code flow
# Look at the complete hook structure
sed -n '168,191p' squid_patch/patch_apply.shRepository: 39ff/docker-rotating-proxy
Length of output: 1169
🏁 Script executed:
# Let me search for any comments or markers that might indicate connection handling strategy
rg "dispatch|tunnel" squid_patch/patch_apply.sh | grep -i "socks\|persistent\|reuse"Repository: 39ff/docker-rotating-proxy
Length of output: 1112
🏁 Script executed:
# Check if there are any additional patches or notes about connection management
find squid_patch -type f -name "*.md" -o -name "*.txt" -o -name "README*" 2>/dev/null | head -20Repository: 39ff/docker-rotating-proxy
Length of output: 52
🏁 Script executed:
# Look at the complete structure to see if there's configuration or notes
cat squid_patch/Dockerfile | head -50Repository: 39ff/docker-rotating-proxy
Length of output: 1609
Reusing the SOCKS tunnel delivers requests to the wrong destination. A mitigation is needed.
After a successful SOCKS negotiation, the fd is no longer a "generic connection to the cache_peer" but a dedicated tunnel to that request's target (request->url.host():port). The current patch implements none of the following safeguards for this connection:
- Marking the connection non-persistent
- Keying connection management per target
- Preventing registration in the persistent-connection (pconn) pool
When Squid reuses a pconn, another request may be sent to the previous SOCKS tunnel's destination. Fixing this requires managing the connection lifecycle properly after SOCKS negotiation.
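As background for this concern, a SOCKS5 CONNECT request (RFC 1928) embeds the destination host and port in the handshake itself, which is why the negotiated connection is a tunnel to exactly one target. A minimal Python sketch (illustrative only, not the patch's code):

```python
import struct

def socks5_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request (RFC 1928) for a domain-name target.

    Layout: VER=0x05, CMD=0x01 (CONNECT), RSV=0x00, ATYP=0x03 (domain),
    one length byte, the raw host bytes, then the port in network order.
    """
    raw = host.encode("idna")
    return b"\x05\x01\x00\x03" + bytes([len(raw)]) + raw + struct.pack(">H", port)

# The destination is baked into the handshake, so a pooled connection that
# completed this exchange is a tunnel to example.com:443 and nothing else.
req = socks5_connect_request("example.com", 443)
```

Reusing such a connection for a different host would silently send that host's traffic to example.com:443, which is the misrouting described above.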
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squid_patch/patch_apply.sh` around lines 168 - 189, After a successful SOCKS
negotiation via SocksPeerConnector::negotiate (called inside the block that
checks serverConnection()->getPeer() and sp->socks_type), mark the resulting
socket/connection as non-persistent and as bound to the specific target
host:port so it cannot be returned to the generic pconn pool; specifically, set
the appropriate non-persistent flag on the serverConnection()/peer connection
object (so it is not reused), attach a per-target key (e.g., derived from
request->url.host() and port) to the connection for target-scoped management,
and ensure code paths that would register this connection with the persistent
connection pool are bypassed after negotiation (so reuse/pool insertion is
prevented). Ensure these changes are applied immediately after a successful
SocksPeerConnector::negotiate and before returning to normal HTTP dispatch (the
block that currently logs "SOCKS negotiation OK for peer" and proceeds).
…oding
- CachePeer.cc: Add xfree(socks_user/socks_pass) in destructor to prevent memory leak on peer destruction
- cache_cf.cc: Reject config at parse time (throw TextException) when socks4/socks5 is used without originserver option, instead of warning
- SocksPeerConnector.h: Proper IPv4/IPv6/domain ATYP detection via inet_pton for SOCKS5 connect requests

Note: pconn reuse concern is already mitigated by the existing server_persistent_connections off in template/squid.conf.
Verified: full configure + make succeeds on Squid 6.10 source.
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Actionable comments posted: 2
🧹 Nitpick comments (1)
squid_patch/patch_apply.sh (1)
337-342: The output message does not include CachePeer.cc.
CachePeer.cc is also patched (xfree added to the destructor), but it is missing from the list of modified files in the final output.
📝 Suggested fix:
echo "Modified files:"
echo " - src/CachePeer.h (added socks_type/user/pass fields)"
+echo " - src/CachePeer.cc (added socks_user/pass cleanup in destructor)"
echo " - src/cache_cf.cc (added socks4/socks5 option parsing)"
echo " - src/FwdState.cc (SOCKS negotiation in dispatch())"
echo " - src/tunnel.cc (SOCKS negotiation in connectDone())"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/patch_apply.sh` around lines 337 - 342, The output list in patch_apply.sh omits the patched src/CachePeer.cc file; update the echo block that currently prints the modified files (the lines listing src/CachePeer.h, src/cache_cf.cc, src/FwdState.cc, src/tunnel.cc, src/SocksPeerConnector.h) to also include "src/CachePeer.cc" and a short note like "(destructor xfree added)" so the final "Modified files:" summary reflects the actual patched file set.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@squid_patch/patch_apply.sh`:
- Around line 161-162: The current else: block reports the failure as a WARNING and continues, but a failure to insert the originserver-required check is fatal, so change it to print(..., file=sys.stderr) followed by sys.exit(1) to terminate the process (location: the else block of the originserver validation insertion). Keep writing the error message to stderr, then call sys.exit(1) immediately afterwards so the build fails.
- Around line 297-299: In TunnelStateData::connectDone(), the code currently
logs SOCKS negotiation failure then calls conn->close() and returns; instead,
propagate the failure through the same error-handling path used by FwdState
(e.g. call retryOrBail() or the equivalent failure/notification routine) so the
client is notified before the connection is torn down; update the failure branch
that references sp and conn to invoke the retryOrBail()/failure callback or send
an appropriate error response to the client, then perform cleanup, rather than
immediately closing conn.
---
Nitpick comments:
In `@squid_patch/patch_apply.sh`:
- Around line 337-342: The output list in patch_apply.sh omits the patched
src/CachePeer.cc file; update the echo block that currently prints the modified
files (the lines listing src/CachePeer.h, src/cache_cf.cc, src/FwdState.cc,
src/tunnel.cc, src/SocksPeerConnector.h) to also include "src/CachePeer.cc" and
a short note like "(destructor xfree added)" so the final "Modified files:"
summary reflects the actual patched file set.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b14a2cec-b476-4443-972c-a3434e32c40c
📒 Files selected for processing (1)
squid_patch/patch_apply.sh
- patch_apply.sh: Exit with error when originserver validation insertion fails (was WARNING, now sys.exit(1))
- patch_apply.sh: Use saveError()+retryOrBail() in tunnel.cc SOCKS failure path instead of bare conn->close(), so client gets proper 502 error response
- patch_apply.sh: Add CachePeer.cc to modified files output list

Verified: full configure + make succeeds on Squid 6.10 source.
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
In modern Debian, /var/run is a symlink to /run. Squid's make install creates /var/run as a real directory in DESTDIR, causing "cannot copy to non-directory" when COPY --from=builder tries to overlay it onto the runtime image's symlink. https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
🧹 Nitpick comments (2)
squid_patch/Dockerfile (1)
89-98: Consider adding a USER directive. As Trivy flags under DS-0002, the container runs as root by default.
Dropping privileges with gosu in docker-entrypoint.sh is a good pattern, but it is best practice to run as a non-root user even before the entrypoint is invoked. That said, docker-entrypoint.sh needs root privileges for cache initialization with squid -z and for setting ownership via chown, so the current pattern (start as root, drop privileges in the entrypoint) is necessary. We recommend adding a comment that makes this intent explicit.
💡 Recommendation: keep the current behavior, but document the intent in a comment:
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
EXPOSE 3128
+# Note: Runs as root initially to allow cache initialization and permission setup;
+# docker-entrypoint.sh drops privileges via gosu before starting Squid.
ENTRYPOINT ["docker-entrypoint.sh"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/Dockerfile` around lines 89 - 98, Add a short explanatory comment in the Dockerfile near the user setup/ENTRYPOINT that clarifies why the image intentionally stays root at container start (so docker-entrypoint.sh can run root-only operations like "squid -z" and chown) and that privileges are dropped later in docker-entrypoint.sh using gosu; mention the USER directive was considered but deliberately omitted for this reason so scanners like Trivy (DS-0002) understand the intention.

squid_patch/patch_apply.sh (1)
310-323: The connectToPeer patterns create an insertion point at connection initiation, which is the wrong time to run SOCKS negotiation. In Squid 6.10's design, connectToPeer starts the connection, and connectDone is the callback invoked after that connection completes. Inserting the hook into connectToPeer would run SOCKS negotiation before the TCP connection is established, so it would fail. The current code happens to work because the first three connectDone patterns match on most versions, but for reliability and clarity of intent, consider removing the connectToPeer patterns and targeting only connectDone in all cases.
♻️ Suggested fix:
for pat in [
    r'(void\s+TunnelStateData::connectDone\s*\([^)]*\)\s*\{)',
    r'(TunnelStateData::connectDone\s*\([^)]*\)\s*\n?\s*\{)',
    r'(void\s+tunnelConnectDone\s*\([^)]*\)\s*\{)',
-   r'(void\s+TunnelStateData::connectToPeer\s*\([^)]*\)\s*\{)',
-   r'(TunnelStateData::connectToPeer\s*\([^)]*\)\s*\n?\s*\{)',
]:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squid_patch/patch_apply.sh` around lines 310 - 323, The current insertion loop includes patterns matching TunnelStateData::connectToPeer which runs before TCP handshake, causing SOCKS negotiation to occur too early; update the pattern list in the for-loop that searches content (the list containing r'(void\s+TunnelStateData::connectDone...', r'(TunnelStateData::connectDone...', r'(void\s+tunnelConnectDone...', r'(void\s+TunnelStateData::connectToPeer...', r'(TunnelStateData::connectToPeer...') by removing both connectToPeer patterns so only connectDone/tunnelConnectDone variants remain, then keep the existing insertion logic that uses match.end(), socks_tunnel_hook, and inserted flag to add the hook after the connectDone match. Ensure any logging/print remains accurate (e.g., the message referencing match.group(0)) and that no other logic assumes connectToPeer patterns exist.
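The narrowed matching can be illustrated with a small Python check that mirrors the regexes quoted above (the C++ signatures below are hypothetical samples, not the actual Squid source):

```python
import re

# connectDone-only patterns, as recommended above (connectToPeer removed)
PATTERNS = [
    r'(void\s+TunnelStateData::connectDone\s*\([^)]*\)\s*\{)',
    r'(TunnelStateData::connectDone\s*\([^)]*\)\s*\n?\s*\{)',
    r'(void\s+tunnelConnectDone\s*\([^)]*\)\s*\{)',
]

def hook_site(src: str):
    """Return the offset just past the first matching definition, or None."""
    for pat in PATTERNS:
        m = re.search(pat, src)
        if m:
            return m.end()
    return None

# Hypothetical function signatures, for illustration only:
connect_done = "void TunnelStateData::connectDone(const Comm::ConnectionPointer &conn, const char *origin) {"
connect_to_peer = "void TunnelStateData::connectToPeer(const Comm::ConnectionPointer &conn) {"
```

With the connectToPeer patterns removed, only connectDone-style definitions yield an insertion point, so the hook can never fire before the TCP handshake completes.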
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@squid_patch/Dockerfile`:
- Around line 89-98: Add a short explanatory comment in the Dockerfile near the
user setup/ENTRYPOINT that clarifies why the image intentionally stays root at
container start (so docker-entrypoint.sh can run root-only operations like
"squid -z" and chown) and that privileges are dropped later in
docker-entrypoint.sh using gosu; mention the USER directive was considered but
deliberately omitted for this reason so scanners like Trivy (DS-0002) understand
the intention.
In `@squid_patch/patch_apply.sh`:
- Around line 310-323: The current insertion loop includes patterns matching
TunnelStateData::connectToPeer which runs before TCP handshake, causing SOCKS
negotiation to occur too early; update the pattern list in the for-loop that
searches content (the list containing
r'(void\s+TunnelStateData::connectDone...', r'(TunnelStateData::connectDone...',
r'(void\s+tunnelConnectDone...', r'(void\s+TunnelStateData::connectToPeer...',
r'(TunnelStateData::connectToPeer...') by removing both connectToPeer patterns
so only connectDone/tunnelConnectDone variants remain, then keep the existing
insertion logic that uses match.end(), socks_tunnel_hook, and inserted flag to
add the hook after the connectDone match. Ensure any logging/print remains
accurate (e.g., the message referencing match.group(0)) and that no other logic
assumes connectToPeer patterns exist.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5e8968df-7439-4491-8dc2-3e972c5b1bc8
📒 Files selected for processing (2)
squid_patch/Dockerfilesquid_patch/patch_apply.sh
- Workflow: Replace non-existent ghcr.io/rofl0r/microsocks image with serjs/go-socks5-proxy (Docker Hub) for SOCKS5 test server
- patch_apply.sh: Remove connectToPeer patterns from tunnel.cc patch (SOCKS negotiation must happen after TCP connect, not before)
- Dockerfile: Add comment explaining why USER directive is omitted (root needed for cache init, gosu drops privileges at runtime)
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9

- Use python3 http.server instead of httpbin.org (eliminates external dependency, faster, more reliable in CI)
- Verify SOCKS5 server works directly before testing through Squid
- Check container is still running before readiness loop
- Show early Squid logs to diagnose startup issues
- Add curl -v for verbose output on test requests
- Remove HTTPS tests (not possible with local HTTP server, config parsing already validates the tunnel code path)
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9

- Separate readiness check (port open) from proxy test
- Capture HTTP status code, response body, and Squid logs per request
- Upload test logs as artifacts for debugging CI failures
- Add HTTP server liveness check between Squid start and proxy test
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Docker Hub images (serjs/go-socks5-proxy, ghcr.io/rofl0r/microsocks) are unreliable in CI. Build microsocks from source instead — it's a single C file with no dependencies, compiles in <1s. https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Run squid with -X (full debug) and -d5 (debug level 5) in a dry-run container before the actual test, to capture why Squid fails to start. https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
The diagnostic step ran Squid on port 3128 for 5 seconds before the actual test container, likely causing a port conflict. Removed it. Also:
- docker-entrypoint.sh: Stop hiding squid -z stderr (was 2>/dev/null)
- Increase container startup wait from 3s to 5s
- Increase port-open timeout from 15s to 30s
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Run Squid as root without gosu/entrypoint to eliminate permission and entrypoint-related failures. Squid stderr redirected to stdout so container logs capture all error output. https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Use actions/github-script to post docker logs output as a PR comment when E2E tests fail, enabling remote diagnosis of Squid crashes. https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Root cause: Squid's cache_effective_user (squid) cannot write to /proc/self/fd/1 because /proc/self/fd/ is not writable by non-root. access_log stdio:/proc/self/fd/1 caused FATAL error on startup.
Fixes:
- access_log: write to /var/log/squid/access.log (squid-owned dir)
- cache_log: explicitly set to /var/log/squid/cache.log
- Remove 'acl all src 0.0.0.0/0' (Squid 6.x requires plain 'all')
- Restore proper entrypoint with gosu privilege de-escalation
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
squid -z (as root) creates /var/run/squid.pid, then gosu drops to the squid user, which can't overwrite the root-owned file.
Fixes:
- Test configs: pid_filename /var/run/squid/squid.pid (squid-owned)
- Entrypoint: remove stale PID file before starting as squid user
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Pull request overview
Adds native SOCKS4/SOCKS5 support for Squid cache_peer by introducing a custom Squid 6.10 Docker build that applies source patches at build time, updating the config generator and template compose to use the new image, and adding CI to build + validate the SOCKS functionality.
Changes:
- Introduce a patched Squid 6.10 Docker image build (patch application script + SOCKS handshake implementation + entrypoint).
- Update config generation (generate.php) and template compose to emit native cache_peer ... socks4/socks5 ... originserver directives and build the custom Squid image.
- Add GitHub Actions workflow to build the image and run parsing + E2E SOCKS tests.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| template/docker-compose.yml | Switch Squid service to build and run the custom squid-socks:6.10 image; update healthcheck to use curl. |
| squid_patch/src/SocksPeerConnector.h | Implements synchronous SOCKS4/SOCKS4a/SOCKS5 negotiation on an established TCP socket. |
| squid_patch/patch_apply.sh | Applies pattern-based patches to Squid source to add SOCKS cache_peer fields, parsing, validation, and negotiation hooks. |
| squid_patch/docker-entrypoint.sh | Initializes cache and starts Squid via gosu, using SQUID_CONFIG_FILE. |
| squid_patch/Dockerfile | Multi-stage build to download, patch, build, and package Squid 6.10 with required runtime deps. |
| squid_patch/.dockerignore | Limits Docker build context contents. |
| setup/generate.php | Generates native SOCKS cache_peer directives (no Gost) and keeps Gost for non-SOCKS proxy types. |
| .github/workflows/squid-build-test.yml | CI pipeline to build image, test config parsing, run E2E SOCKS5 (auth/no-auth), and validate generator output. |
- docker-entrypoint.sh: Implement standard entrypoint pattern that handles `docker run ... squid -v` and arbitrary commands correctly
- SocksPeerConnector.h: Handle EINTR in syncSend/syncRecv, validate SOCKS5 auth response VER byte (RFC 1929), save/restore original socket timeouts instead of zeroing them
- patch_apply.sh: Add post-parse validation that socks-user/socks-pass require socks5 and must be set together
- workflow: Use context.issue.number instead of hard-coded PR #30
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9

- release.yml: Creates GitHub releases with auto-generated changelog from git log
- container-release.yml: Builds multi-arch (amd64/arm64) image and pushes to ghcr.io
Both are triggered via workflow_dispatch for manual execution.
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
Pull request overview
Copilot reviewed 10 out of 10 changed files in this pull request and generated 6 comments.
setup/generate.php (Outdated)
$socksOpt .= sprintf(' socks-user=%s socks-pass=%s',
    urlencode($proxyInfo['user']),
    urlencode($proxyInfo['pass'])
SOCKS credentials are URL-encoded before being written into socks-user=/socks-pass=. The Squid patch stores these tokens verbatim and the SOCKS5 RFC1929 auth subnegotiation uses the stored strings directly, so any percent-encoding will be sent to the SOCKS server as literal characters (breaking auth for credentials containing special chars). Either avoid URL-encoding here, or update the Squid-side parsing/handshake to percent-decode socks-user/socks-pass (and keep generator + parser consistent).
Suggested change:
-$socksOpt .= sprintf(' socks-user=%s socks-pass=%s',
-    urlencode($proxyInfo['user']),
-    urlencode($proxyInfo['pass'])
+// SOCKS5 RFC1929 uses raw username/password; do not URL-encode here.
+$socksOpt .= sprintf(' socks-user=%s socks-pass=%s',
+    $proxyInfo['user'],
+    $proxyInfo['pass']
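The review's point can be shown concretely (a sketch, not the patch's exact code): the RFC 1929 subnegotiation sends length-prefixed raw bytes of the username and password, so percent-encoding a password such as p@ss would transmit the literal characters p%40ss to the SOCKS server:

```python
from urllib.parse import quote

def rfc1929_auth(user: str, password: str) -> bytes:
    """RFC 1929 username/password subnegotiation: VER=0x01, ULEN, UNAME, PLEN, PASSWD."""
    u, p = user.encode(), password.encode()
    return b"\x01" + bytes([len(u)]) + u + bytes([len(p)]) + p

raw = rfc1929_auth("alice", "p@ss")             # what the SOCKS server checks against
encoded = rfc1929_auth("alice", quote("p@ss"))  # what URL-encoded credentials would send
```

Since the two byte strings differ, a server configured with the raw password rejects the percent-encoded form, which is why the generator must emit credentials verbatim (or the Squid side must percent-decode them consistently).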
run: |
  docker rm -f squid-test 2>/dev/null || true
  pkill microsocks 2>/dev/null || true
  kill %1 2>/dev/null || true
The cleanup step uses kill %1, but background jobs started in earlier steps are not in the same shell/job-control session, so %1 will typically not exist here. As a result the local python3 -m http.server (and possibly microsocks) processes may remain running for the rest of the job. Capture PIDs when starting background processes (e.g., echo $! > /tmp/http.pid) and kill by PID in cleanup, or use pkill -f with a precise match.
Suggested change:
-kill %1 2>/dev/null || true
+pkill -f 'python3 -m http.server' 2>/dev/null || true
run: |
  docker rm -f squid-auth 2>/dev/null || true
  pkill microsocks 2>/dev/null || true
  kill %1 2>/dev/null || true
Same issue as the earlier e2e job: kill %1 in cleanup will not reliably terminate the background python3 -m http.server started in a different step (no shared job-control). Store the background PIDs when launching the servers and kill by PID here (or use a targeted pkill -f), otherwise processes may leak within the job.
Suggested change:
-kill %1 2>/dev/null || true
+pkill -f "python3 -m http.server 18081" 2>/dev/null || true
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository_owner }}/squid-socks
IMAGE_NAME is built from ${{ github.repository_owner }}, but GHCR/Docker image names must be lowercase. If the repository owner contains uppercase characters (or if this workflow is used in a fork/org with uppercase), the push will fail with an invalid reference format. Consider lowercasing the owner (e.g., compute it in the Prepare tags step) or hardcode a known-lowercase namespace.
Suggested change:
-IMAGE_NAME: ${{ github.repository_owner }}/squid-socks
+IMAGE_NAME: ${{ toLower(github.repository_owner) }}/squid-socks
squid_patch/src/SocksPeerConnector.h (Outdated)
} else if (gResp[1] == 0xFF) {
    return false; /* no acceptable method */
}
/* else gResp[1] == 0x00 → no auth required */
In SOCKS5 greeting handling, only methods 0x00 (no-auth), 0x02 (user/pass), and 0xFF are handled. If the server replies with any other method (e.g., GSSAPI 0x01), the code currently falls through and proceeds to send a CONNECT request without having negotiated a supported method. Add an explicit else return false (or equivalent) when gResp[1] is not 0x00/0x02/0xFF to avoid undefined/incorrect protocol state.
Suggested change:
-} else if (gResp[1] == 0xFF) {
-    return false; /* no acceptable method */
-}
-/* else gResp[1] == 0x00 → no auth required */
+} else if (gResp[1] == 0x00) {
+    /* no auth required */
+} else {
+    return false; /* unsupported or unacceptable method (includes 0xFF) */
+}
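The stricter handling can be modeled compactly in Python (a sketch only; the method constants mirror the quoted C++): accept only the methods the client actually offered, and reject everything else, including 0xFF.

```python
NO_AUTH, USER_PASS = 0x00, 0x02

def auth_required(method: int, have_credentials: bool) -> bool:
    """Interpret the method-selection byte (gResp[1]) from the SOCKS5 greeting reply.

    Returns whether the RFC 1929 subnegotiation must run next; any method we
    did not offer (e.g. GSSAPI 0x01, or 0xFF) aborts the handshake instead of
    silently falling through to CONNECT.
    """
    if method == USER_PASS and have_credentials:
        return True
    if method == NO_AUTH:
        return False
    raise ValueError(f"unacceptable SOCKS5 auth method: {method:#04x}")
```

The explicit final rejection branch is the fix: without it, an unexpected method byte would leave the protocol state undefined while the client proceeds to CONNECT.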
squid_patch/src/SocksPeerConnector.h (Outdated)

/* save original flags */
int flags = fcntl(fd, F_GETFL);
if (flags < 0)
    return false;

/* switch to blocking for the handshake */
if (fcntl(fd, F_SETFL, flags & ~O_NONBLOCK) < 0)
    return false;

/* save original timeouts and set a 10 s limit for the handshake */
struct timeval origRecvTv = {0, 0}, origSendTv = {0, 0};
socklen_t tvLen = sizeof(struct timeval);
getsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &origRecvTv, &tvLen);
tvLen = sizeof(struct timeval);
getsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &origSendTv, &tvLen);

struct timeval tv;
tv.tv_sec = 10;
tv.tv_usec = 0;
setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));

bool ok = false;
if (type == SOCKS_V4)
    ok = socks4Connect(fd, targetHost, targetPort, user);
else if (type == SOCKS_V5)
    ok = socks5Connect(fd, targetHost, targetPort, user, pass);

/* restore original flags and timeouts */
fcntl(fd, F_SETFL, flags);
setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &origRecvTv, sizeof(origRecvTv));
setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &origSendTv, sizeof(origSendTv));
Socket flag/timeout changes during negotiation ignore error returns (getsockopt/setsockopt and the restoring fcntl/setsockopt calls). If setting SO_RCVTIMEO/SO_SNDTIMEO fails, the subsequent blocking recv/send loops may block longer than intended; if restoring flags/timeouts fails, the connection may remain in an unexpected state for the rest of Squid's lifecycle. Please check these return values and handle failures (e.g., fail negotiation, or fall back to a poll/select-based timeout), and ensure restoration is best-effort but validated/logged.
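The save-and-restore pattern flagged here can be sketched at a higher level in Python, where socket.settimeout/gettimeout stand in for the C-level SO_RCVTIMEO/SO_SNDTIMEO calls (an illustrative sketch, not the patch's implementation):

```python
import socket
from contextlib import contextmanager

@contextmanager
def handshake_deadline(sock: socket.socket, seconds: float = 10.0):
    """Temporarily impose a handshake timeout and always restore the original.

    Unlike the ignored return codes flagged above, a failure to apply the
    timeout raises here, so negotiation fails instead of blocking without a
    deadline; the restore runs even if negotiation itself raises.
    """
    original = sock.gettimeout()
    sock.settimeout(seconds)       # apply the 10 s handshake limit
    try:
        yield sock
    finally:
        sock.settimeout(original)  # restore even on failure

a, b = socket.socketpair()
```

The try/finally shape is the key point: the original socket state is restored on every exit path, which is exactly what unchecked fcntl/setsockopt return values cannot guarantee.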
…on, cleanup fixes
- SocksPeerConnector.h: Check getsockopt/setsockopt return values, fail negotiation if timeout setup fails, validate socket restore on cleanup
- SocksPeerConnector.h: Explicitly reject unsupported SOCKS5 auth methods (e.g. GSSAPI) instead of falling through
- generate.php: Remove URL-encoding of SOCKS credentials (RFC1929 uses raw)
- squid-build-test.yml: Replace `kill %1` with `pkill -f` for reliable background process cleanup across steps
- container-release.yml: Lowercase repository owner for GHCR image name
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
PR Review Summary

CI Status: All Passing ✅
All 5 jobs pass: Build, Config Parsing, SOCKS5 E2E, SOCKS5 Auth, Config Generator.

Review Comments: All Addressed
CodeRabbit (10 threads) — 8 resolved, 2 are acknowledged design decisions:
Copilot (15 threads) — All addressed:

Design Decisions (2 open threads)
Both CodeRabbit and Copilot flagged the synchronous blocking handshake and pconn reuse. These are deliberate trade-offs:
1. Blocking Handshake — The SOCKS handshake uses synchronous
2. pconn Reuse — After SOCKS negotiation, the TCP connection is a fixed tunnel to

Files Changed (this PR)

Generated by Claude Code
@codex review this
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 02e7f474a2
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
if ($proxyInfo['user'] && $proxyInfo['pass']) {
    // SOCKS5 RFC1929 uses raw username/password; do not URL-encode.
    $socksOpt .= sprintf(' socks-user=%s socks-pass=%s',
        $proxyInfo['user'],
There was a problem hiding this comment.
Guard socks-user/socks-pass behind socks5 scheme
The SOCKS branch now appends socks-user/socks-pass whenever both credentials are present, even for socks4 entries. In this same change set, the Squid patch enforces that these options are valid only with socks5, so a proxy list line like host:port:socks4:user:pass will generate an invalid cache_peer and make Squid fail config parsing/startup. Restrict auth option emission to socks5 (or translate SOCKS4 credentials to SOCKS4 USERID semantics) so valid SOCKS4 inputs do not break deployment.
Summary
This PR adds native SOCKS4/SOCKS5 proxy support directly to Squid via cache_peer directives, eliminating the need for intermediate Gost containers. The implementation includes a custom Squid 6.10 build with patches that enable SOCKS negotiation for both HTTP and HTTPS tunneling through cache peers.
Key Changes
SocksPeerConnector.h (new): Implements synchronous SOCKS4/SOCKS4a and SOCKS5 (RFC 1928/1929) protocol negotiation with support for username/password authentication. Handles blocking socket operations with 10-second timeout.
patch_apply.sh (new): Automated patching script that modifies Squid source to:
- Add socks_type, socks_user, socks_pass fields to the CachePeer struct
- Parse socks4, socks5, socks-user=, socks-pass= cache_peer options in cache_cf.cc
- Hook SOCKS negotiation into FwdState.cc (HTTP requests) and tunnel.cc (HTTPS CONNECT tunneling)

Dockerfile (new): Multi-stage build for custom Squid 6.10 with SOCKS patches applied, replacing the generic b4tman/squid:5.8 image.
docker-entrypoint.sh (new): Squid container startup script with cache initialization and proper permission handling.
generate.php (modified): Updated proxy list generator to:
- Emit native cache_peer directives for the socks4 and socks5 scheme types
- Include the originserver flag (since the SOCKS tunnel acts as a direct connection)

docker-compose.yml (modified): Updated to build custom Squid image instead of using pre-built image.
GitHub Actions workflow (new): Comprehensive CI/CD pipeline including:
Implementation Details
The originserver flag is required in the cache_peer config.
https://claude.ai/code/session_01Tfy3kPd51qRgxpCFXjb2g9
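For illustration only (hypothetical host, port, and credentials; the socks5, socks-user=, socks-pass=, and required originserver options are the ones this PR introduces), a generated directive might look like:

```
# Route traffic through a SOCKS5 proxy listening at 10.0.0.5:1080
cache_peer 10.0.0.5 parent 1080 0 originserver socks5 socks-user=alice socks-pass=secret
```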
Summary by CodeRabbit
New Features
Improvements
Tests / CI