feat(xdns): multi-resolver fan-out + fix(kcp): reduce aggressive retransmissions #5872
nnemirovsky wants to merge 16 commits into XTLS:main
Conversation
Add an optional `resolvers` config field to the XDNS finalmask. When set, the client sends DNS queries through public DNS resolvers instead of connecting directly to the server on port 53.

- One UDP socket per resolver, with independent receive goroutines
- Round-robin query distribution across resolvers
- Backward compatible: omitting `resolvers` preserves direct mode
- Fix server sendLoop starvation under mKCP retransmission flood
- Drain excess query records to skip stale queries
- Reduce server response delay from 1s to 50ms
- Increase server write queue from 512 to 4096
…s enabled

The `cwnd *= 20` multiplier allowed 20x more packets in flight than the congestion window, defeating the purpose of congestion control. When congestion is enabled, respect the actual cwnd without the multiplier. This prevents mKCP from flooding low-bandwidth transports like XDNS. Also increase the connection timeout from 30s to 120s to accommodate high-latency transports like DNS tunneling.
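The window logic described in this commit can be sketched as follows. This is an illustrative Go sketch, not Xray-core's actual mKCP code; the function and parameter names are invented for the example.

```go
package main

import "fmt"

// effectiveWindow sketches how an mKCP-style sender might size its in-flight
// window. Before the fix, the congestion window was still multiplied by 20
// even with congestion control on, allowing 20x more packets in flight than
// cwnd intended.
func effectiveWindow(sndWnd, rmtWnd, cwnd uint32, congestion bool) uint32 {
	wnd := sndWnd
	if rmtWnd < wnd {
		wnd = rmtWnd
	}
	if congestion && cwnd < wnd {
		// Respect the real congestion window: no `cwnd *= 20` inflation.
		wnd = cwnd
	}
	return wnd
}

func main() {
	// Low-bandwidth transport: congestion control has shrunk cwnd to 4.
	fmt.Println(effectiveWindow(1024, 1024, 4, true))  // 4 (was 80 with cwnd*20)
	fmt.Println(effectiveWindow(1024, 1024, 4, false)) // 1024: congestion off
}
```

With the multiplier, the sender could keep 80 packets in flight even when the congestion window had collapsed to 4, which is exactly the flood behavior on low-bandwidth transports.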
I took a rough look. As I designed it, the `closed` in xdns doesn't need sync; there is no race. The xdns server natively supports sending and receiving data via different DNS sources, so in theory it needs no changes, and multi-conn send/receive only requires changes on the client. How to put it: I understand your idea, but it doesn't belong in a mask. A mask should not perform additional dials, and xdns is currently allowed at any layer. To accept this PR, either split xdns out into a standalone transport layer, or add the restriction that xdns, like xicmp, may only sit at the outermost layer and cannot be combined with udphop or dialerproxy.
Perhaps the addr parameter of WriteTo could be used to implement multi-DNS. If that avoids creating new conns, it can stay in the mask.
Address LjhAUMEM review: masks should not create new connections. Use WriteTo addr on the existing PacketConn to send to different resolvers instead of creating separate sockets. Revert server changes (server already supports data from different DNS sources). Remove sockopt files, sync changes, and layer validation.
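The WriteTo approach from this revision can be sketched as below: one unconnected PacketConn, with the destination resolver chosen per query. This is a self-contained illustration, not the PR's code; a loopback listener stands in for the public resolvers, and `fanOutDemo` is an invented name.

```go
package main

import (
	"fmt"
	"net"
	"sort"
)

// fanOutDemo keeps a single unconnected PacketConn and picks the destination
// resolver per query via WriteTo, instead of dialing one socket per resolver.
func fanOutDemo() ([]string, error) {
	resolver, err := net.ListenPacket("udp", "127.0.0.1:0") // pretend resolver
	if err != nil {
		return nil, err
	}
	defer resolver.Close()

	client, err := net.ListenPacket("udp", "127.0.0.1:0") // the one client socket
	if err != nil {
		return nil, err
	}
	defer client.Close()

	// Round-robin over resolver addresses; both entries point at the same
	// loopback listener purely for demonstration.
	resolvers := []net.Addr{resolver.LocalAddr(), resolver.LocalAddr()}
	for i, q := range []string{"q1", "q2"} {
		if _, err := client.WriteTo([]byte(q), resolvers[i%len(resolvers)]); err != nil {
			return nil, err
		}
	}

	var got []string
	buf := make([]byte, 512)
	for i := 0; i < 2; i++ {
		n, _, err := resolver.ReadFrom(buf)
		if err != nil {
			return nil, err
		}
		got = append(got, string(buf[:n]))
	}
	sort.Strings(got)
	return got, nil
}

func main() {
	got, err := fanOutDemo()
	if err != nil {
		panic(err)
	}
	fmt.Println(got)
}
```

Note that all queries leave from one source port here, which is exactly the property the later review comment about ISP source-port rate limiting argues against.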
XDNS can be restricted to being at the outermost layer; in that case only noise is somewhat useful, and udphop does nothing for it. I haven't read the code, but keep this feature simple: use an array instead of a map and pick at random, so an IP repeated more times is more likely to be chosen.
@LjhAUMEM Perhaps modify noise so that it is compatible with XDNS sitting at the second-to-outermost layer.
But public plaintext UDP DNS is almost always on port 53, and since DNS needs no handshake here, censors may just analyze the traffic packet-by-packet as DNS.
It could be made to ignore all the preceding noise.
But this is best paired with mux: without mux, a single KCP client connection doesn't actually last very long (the flood of "xdns closed" entries in the client log proves this), and that's before adding multi-DNS on top. XDRIVE forces mux, so mux enhancements and share-link support are already on the agenda.
If some of the DNS resolvers in the list become unavailable, will it quickly switch to using the others?
@nnemirovsky You can keep the original multi-connection method, but it should be possible to modify only the client side without changing the server side. |
Per review: separate UDP sockets per resolver avoids ISP source-port rate limiting. Each resolver has its own recvLoop goroutine. Client-only changes, server unchanged.
@hippo2025 Currently it's round-robin without failover. If a resolver stops responding, queries sent to it are lost and KCP handles retransmission on the next round. With 3 resolvers and 1 dead, throughput drops ~33% but the tunnel stays up. @LjhAUMEM Updated to use separate sockets per resolver (no server changes). Reverted all sync/server modifications from earlier. |
@nnemirovsky Do you mind if I make changes directly on your branch? @RPRX |
For a single-file change you can click the pencil icon. For big changes I usually close the PR, redo it locally, and add a Co-authored-by line.
@LjhAUMEM go ahead, feel free to push directly to the branch. |
@LjhAUMEM I've given you Maintain permission.
@nnemirovsky I think we can start testing.

Client config:

```json
{
  "log": { "loglevel": "debug" },
  "inbounds": [
    {
      "listen": "127.0.0.1",
      "port": 1080,
      "protocol": "socks",
      "settings": {
        "auth": "noauth",
        "udp": true
      }
    }
  ],
  "outbounds": [
    {
      "protocol": "vless",
      "settings": {
        "address": "127.0.0.1",
        "port": 53,
        "id": "5783a3e7-e373-51cd-8642-c83782b807c5",
        "encryption": "none"
      },
      "streamSettings": {
        "network": "kcp",
        "kcpSettings": {
          "mtu": 130
        },
        "finalmask": {
          "udp": [
            {
              "type": "xdns",
              "settings": {
                "domain": "", // 1
                "resolvers": [
                  "8.8.8.8:53",
                  "1.1.1.1:53",
                  "[2001:4860:4860::8888]:53",
                  "[2606:4700:4700::1111]:53"
                ]
              }
            }
          ]
        }
      }
    }
  ]
}
```

Server config:

```json
{
  "log": { "loglevel": "debug" },
  "inbounds": [
    {
      // "listen": "127.0.0.1",
      "port": 53,
      "protocol": "vless",
      "settings": {
        "clients": [
          {
            "id": "5783a3e7-e373-51cd-8642-c83782b807c5"
          }
        ],
        "decryption": "none"
      },
      "streamSettings": {
        "network": "kcp",
        "kcpSettings": {
          "mtu": 900
        },
        "finalmask": {
          "udp": [
            {
              "type": "xdns",
              "settings": {
                "domain": "" // 1
              }
            }
          ]
        }
      }
    }
  ],
  "outbounds": [
    {
      "protocol": "freedom"
    }
  ]
}
```
Now xdns is like xicmp: both must be at the outermost layer and both ignore the original conn; one automatically spins up multiple conns, the other spins up an IP conn. The only difference is that xicmp uses the destination passed down from the upper layer, while xdns ignores it and only uses the addresses in `resolvers`. I'm not sure whether those two KCP changes are necessary. Another idea is to change the queue to the blocking form used in wg bind, but that's for later; it won't happen in this PR anyway.
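The queue trade-off mentioned above can be sketched with Go channels. This is an illustrative comparison, not Xray-core's or wireguard-go's actual code: a non-blocking send drops packets when the producer outruns the consumer (why the write queue was raised from 512 to 4096), while a wg-bind-style blocking send applies backpressure instead of losing packets.

```go
package main

import "fmt"

// enqueueDrop is a non-blocking send: when the queue is full, the packet is
// dropped and the caller is told so. This is the lossy behavior a larger
// queue only postpones.
func enqueueDrop(q chan int, v int) bool {
	select {
	case q <- v:
		return true
	default:
		return false // queue full: packet lost
	}
}

func main() {
	q := make(chan int, 2) // tiny queue, no consumer running
	sent := 0
	for i := 0; i < 5; i++ {
		if enqueueDrop(q, i) {
			sent++
		}
	}
	fmt.Println(sent) // 2: the other three were dropped
	// The blocking alternative is simply `q <- v`: the producer stalls until
	// the consumer drains the queue, trading loss for backpressure.
}
```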
@LjhAUMEM both KCP changes are necessary for XDNS to work reliably. Here's why:

120s timeout (connection.go): mKCP's default 30s timeout is too short for DNS tunneling. The round trip through public resolvers adds significant latency (DNS query -> resolver -> authoritative NS -> response -> resolver -> client). During the TLS handshake, mKCP enters state 1 (handshaking) and times out at 30s before data can flow, confirmed with debug logging showing …

cwnd*20 removal with congestion (sending.go): the …
Cleaned up resubmission of #5871. Removed unnecessary files, squashed to 2 commits.
Changes

1. XDNS multi-resolver fan-out

Optional `resolvers` config field. When set, the client distributes DNS queries across multiple public resolvers within a single mKCP session for higher throughput.

2. mKCP: respect congestion control (per RPRX suggestion in #5871)

The `cwnd *= 20` multiplier allowed 20x more in-flight packets than the congestion window, defeating congestion control. When `congestion: true`, the multiplier is now skipped so mKCP doesn't flood low-bandwidth transports like XDNS. Connection timeout increased from 30s to 120s to accommodate DNS tunnel latency.

Test plan

- `TestParseResolverAddr`: resolver address parsing
- `TestResolverModeRoundTrip`: mock resolver end-to-end
- `TestMultiResolverDistribution`: round-robin verification
- `TestDirectModeRoundTrip` / `TestResolverModeServerToClient`: bidirectional data
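As a sense of what the address parsing in `TestParseResolverAddr` has to handle, here is a hypothetical sketch (not the PR's actual code; `parseResolverAddr` is an invented name): plain IPv4, bracketed IPv6 with port, and a default port of 53 when none is given.

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// parseResolverAddr accepts "ip:port" (including bracketed IPv6) or a bare
// IP, defaulting the port to 53.
func parseResolverAddr(s string) (*net.UDPAddr, error) {
	host, portStr, err := net.SplitHostPort(s)
	if err != nil {
		// No port present; treat the whole string as the host.
		host, portStr = s, "53"
	}
	ip := net.ParseIP(host)
	if ip == nil {
		return nil, fmt.Errorf("invalid resolver IP: %q", s)
	}
	port, err := strconv.Atoi(portStr)
	if err != nil || port < 1 || port > 65535 {
		return nil, fmt.Errorf("invalid resolver port: %q", s)
	}
	return &net.UDPAddr{IP: ip, Port: port}, nil
}

func main() {
	for _, s := range []string{"8.8.8.8:53", "1.1.1.1", "[2606:4700:4700::1111]:53"} {
		a, err := parseResolverAddr(s)
		fmt.Println(a, err)
	}
}
```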