
Conversation

@kartikjoshi21
Contributor

bootstrapper: plumb ip-family through kubeadm, kubelet and control-plane alias

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: kartikjoshi21
Once this PR has been reviewed and has the lgtm label, please assign prezha for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Dec 8, 2025
@k8s-ci-robot
Contributor

Hi @kartikjoshi21. Thanks for your PR.

I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Dec 8, 2025
@minikube-bot
Collaborator

Can one of the admins verify this patch?

@kartikjoshi21
Contributor Author

Logs:

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ ./out/minikube start -p v4-only \
  --driver=docker \
  --ip-family=ipv4 \
  --cni=bridge \
  --service-cluster-ip-range=10.96.0.0/12 \
  --pod-cidr=10.244.0.0/16
😄  [v4-only] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
👍  Starting "v4-only" primary control-plane node in "v4-only" cluster
🚜  Pulling base image v0.0.48-1763789673-21948 ...
💾  Downloading Kubernetes v1.34.1 preload ...
    > preloaded-images-k8s-v18-v1...:  337.01 MiB / 337.01 MiB  100.00% 3.71 Mi
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
🐳  Preparing Kubernetes v1.34.1 on Docker 29.0.2 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "v4-only" cluster and "default" namespace by default
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ minikube ssh -p v4-only -- 'sudo cat /var/tmp/minikube/kubeadm.yaml'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.58.2"
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "v4-only"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.58.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration

apiServer:
  certSANs:
    - "control-plane.minikube.internal"
    - "127.0.0.1"
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"

controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"

scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: "control-plane.minikube.internal:8443"
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: "0.0.0.0:10249"
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ minikube ssh -p v4-only -- 'grep control-plane.minikube.internal /etc/hosts'
192.168.58.2    control-plane.minikube.internal

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl --context v4-only get svc kube-dns -n kube-system -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  creationTimestamp: "2025-12-04T10:53:26Z"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "278"
  uid: b517354c-9628-4c12-9d01-3a4172fef1f0
spec:
  clusterIP: 10.96.0.10
  clusterIPs:
  - 10.96.0.10
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
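The kube-dns clusterIP above (10.96.0.10) follows kubeadm's convention of reserving the tenth address of the service CIDR for the DNS service. A minimal sketch of that derivation for an IPv4 range like the one passed via `--service-cluster-ip-range` (variable names are illustrative, not minikube's actual code):

```shell
# Derive the expected DNS service IP from the service CIDR.
# Assumption: kubeadm's convention of using the 10th address of the range.
SERVICE_CIDR="10.96.0.0/12"
BASE="${SERVICE_CIDR%/*}"    # strip the prefix length -> 10.96.0.0
DNS_IP="${BASE%.*}.10"       # replace the last octet with 10
echo "$DNS_IP"               # -> 10.96.0.10
```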

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ minikube delete -p v6-only
docker network rm v6-only || true   # clean any stale network

./out/minikube start -p v6-only \
  --driver=docker \
  --cni=bridge \
  --ip-family=ipv6 \
  --pod-cidr-v6=fd11:11::/64 \
  --service-cluster-ip-range-v6=fd00:100::/108
  # NOTE: no --subnet-v6 and no --static-ipv6
🔥  Deleting "v6-only" in docker ...
🔥  Deleting container "v6-only" ...
🔥  Removing /home/kartikjoshi/.minikube/machines/v6-only ...
💀  Removed all traces of the "v6-only" cluster.
v6-only
😄  [v6-only] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
💡  If Docker daemon IPv6 is disabled, enable it in /etc/docker/daemon.json and restart:
  {"ipv6": true, "fixed-cidr-v6": "fd00:55:66::/64"}
👍  Starting "v6-only" primary control-plane node in "v6-only" cluster
🚜  Pulling base image v0.0.48-1763789673-21948 ...
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
🐳  Preparing Kubernetes v1.34.1 on Docker 29.0.2 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "v6-only" cluster and "default" namespace by default
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get nodes -o wide
NAME      STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                     CONTAINER-RUNTIME
v6-only   Ready    control-plane   4m38s   v1.34.1   fd00::2       <none>        Debian GNU/Linux 12 (bookworm)   6.6.87.2-microsoft-standard-WSL2   docker://29.0.2
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ ./out/minikube ssh -p v6-only -- 'sudo cat /var/tmp/minikube/kubeadm.yaml'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "fd00::2"
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "v6-only"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "fd00::2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration

apiServer:
  certSANs:
    - "control-plane.minikube.internal"
    - "::1"
  extraArgs:
    - name: "bind-address"
      value: "::"
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"

controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"

scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: "[fd00::2]:8443"
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "fd11:11::/64"
  serviceSubnet: "fd00:100::/108"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "fd11:11::/64"
metricsBindAddress: "[::]:10249"
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
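Note the `controlPlaneEndpoint` above: unlike the IPv4 run, the IPv6 literal is wrapped in brackets before the port is appended. A minimal sketch of that host:port join (illustrative only, not minikube's actual code):

```shell
# Join an address and port, bracketing IPv6 literals as required for host:port notation.
ADDR="fd00::2"
PORT=8443
case "$ADDR" in
  *:*) ENDPOINT="[${ADDR}]:${PORT}" ;;  # contains a colon -> IPv6 literal, needs brackets
  *)   ENDPOINT="${ADDR}:${PORT}" ;;    # plain IPv4 address or hostname
esac
echo "$ENDPOINT"                        # -> [fd00::2]:8443
```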


kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ ./out/minikube start --driver=docker --ip-family=dual --service-cluster-ip-range=10.96.0.0/12 --service-cluster-ip-range-v6=fd00:200::/108 --pod-cidr=10.244.0.0/16 --pod-cidr-v6=fd11:22::/64
😄  minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
💡  If Docker daemon IPv6 is disabled, enable it in /etc/docker/daemon.json and restart:
  {"ipv6": true, "fixed-cidr-v6": "fd00:55:66::/64"}
👍  Starting "minikube" primary control-plane node in "minikube" cluster
🚜  Pulling base image v0.0.48-1763789673-21948 ...
💾  Downloading Kubernetes v1.34.1 preload ...
    > preloaded-images-k8s-v18-v1...:  337.01 MiB / 337.01 MiB  100.00% 3.22 Mi
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
🐳  Preparing Kubernetes v1.34.1 on Docker 29.0.2 ...
🔗  Configuring bridge CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ minikube ssh -- 'sudo sed -n "1,220p" /var/tmp/minikube/kubeadm.yaml'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.58.2"
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration

apiServer:
  certSANs:
    - "control-plane.minikube.internal"
    - "127.0.0.1"
    - "::1"
  extraArgs:
    - name: "bind-address"
      value: "::"
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"

controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"

scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: "control-plane.minikube.internal:8443"
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16,fd11:22::/64"
  serviceSubnet: "10.96.0.0/12,fd00:200::/108"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16,fd11:22::/64"
metricsBindAddress: "0.0.0.0:10249"
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
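In the dual-stack config above, `podSubnet`, `serviceSubnet`, and `clusterCIDR` are each a comma-joined pair of the v4 and v6 flag values. A sketch of that composition (CIDR values copied from the start command; the join itself is an assumption about how the config is rendered):

```shell
# Comma-join the v4 and v6 CIDRs the way the dual-stack kubeadm.yaml renders them.
POD_CIDR_V4="10.244.0.0/16"
POD_CIDR_V6="fd11:22::/64"
SVC_CIDR_V4="10.96.0.0/12"
SVC_CIDR_V6="fd00:200::/108"
POD_SUBNET="${POD_CIDR_V4},${POD_CIDR_V6}"
SERVICE_SUBNET="${SVC_CIDR_V4},${SVC_CIDR_V6}"
echo "$POD_SUBNET"       # -> 10.244.0.0/16,fd11:22::/64
echo "$SERVICE_SUBNET"   # -> 10.96.0.0/12,fd00:200::/108
```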


kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get svc kube-dns -n kube-system -o yaml | sed -n '1,80p'
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  creationTimestamp: "2025-12-05T08:00:49Z"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "293"
  uid: 35db8ff7-11be-4336-a315-51452beb8b99
spec:
  clusterIP: 10.96.0.10
  clusterIPs:
  - 10.96.0.10
  - fd00:200::c60
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}


kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: dualtest
---
apiVersion: v1
kind: Pod
metadata:
  name: dual-nginx
  namespace: dualtest
  labels:
    app: dual-nginx
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dual-nginx-svc
  namespace: dualtest
spec:
  selector:
    app: dual-nginx
  ports:
  - port: 80
    targetPort: 80
  ipFamilyPolicy: PreferDualStack
EOF
namespace/dualtest created
pod/dual-nginx created
service/dual-nginx-svc created
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get svc dual-nginx-svc -n dualtest -o yaml | sed -n '1,80p'
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"dual-nginx-svc","namespace":"dualtest"},"spec":{"ipFamilyPolicy":"PreferDualStack","ports":[{"port":80,"targetPort":80}],"selector":{"app":"dual-nginx"}}}
  creationTimestamp: "2025-12-05T08:14:43Z"
  name: dual-nginx-svc
  namespace: dualtest
  resourceVersion: "1059"
  uid: 7be9e20a-b745-4b6f-964c-5790f35b2234
spec:
  clusterIP: 10.104.18.22
  clusterIPs:
  - 10.104.18.22
  - fd00:200::c17f
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: dual-nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
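The service receives both a v4 and a v6 clusterIP because of `ipFamilyPolicy: PreferDualStack`; on a single-stack cluster the same manifest would fall back to one family, whereas `RequireDualStack` would be rejected there. A sketch of the expected family count per policy (assuming the cluster itself is dual-stack, as in this run):

```shell
# Expected number of assigned clusterIPs per ipFamilyPolicy on a dual-stack cluster.
policy="PreferDualStack"
case "$policy" in
  SingleStack)                      families=1 ;;
  PreferDualStack|RequireDualStack) families=2 ;;
  *)                                families=0 ;;
esac
echo "$families"   # -> 2
```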
