@PedroRivera125 (Contributor)
Add Initial IPv6 (External IPs) Support

  • new 'assign_external_ipv6' flag to signal usage of IPv6 addresses when PKB creates networks/firewall rules/VMs (False by default)

  • added 'assign_external_ipv6' and 'ipv6_address' attributes to BaseVirtualMachine for storing IPv6 addresses (parallel to the existing 'assign_external_ip' and 'ip_address' attributes), allowing for dual-stack networking on VMs

  • define ShouldRunOnExternalIpv6Address() function to allow benchmarks to determine if they should attempt to perform runs with IPv6 addresses (parallel to ShouldRunOnInternalIpAddress() / ShouldRunOnExternalIpAddress())

  • define AllowIcmpIpv6() for BaseVirtualMachine & BaseFirewall, which should create necessary firewall rules to allow 'ICMP for IPv6' (protocol number 58) traffic

  • augment ping_benchmark.py to attempt ping runs with IPv6 addresses if appropriate

  • implement the required functions in gce_network.py and gce_virtual_machine.py to allow creation of dual-stack VMs with external IPv6 addresses when the appropriate flag is set
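As a rough sketch (not the PR's actual code), the benchmark-side branching described above could look like the following. FakeVm and ping_targets are illustrative stand-ins; only the attribute names ('ip_address', 'ipv6_address') and the intent of ShouldRunOnExternalIpv6Address() come from this change:

```python
# Illustrative sketch of how a benchmark might branch on the new IPv6
# attributes; FakeVm and ping_targets are hypothetical stand-ins.
class FakeVm:
    def __init__(self, ip_address, ipv6_address=None):
        self.ip_address = ip_address      # existing IPv4 attribute
        self.ipv6_address = ipv6_address  # new attribute from this change

def ping_targets(vm, assign_external_ipv6):
    """Return the external addresses a ping run would target."""
    targets = [vm.ip_address]
    # Mirrors the intent of ShouldRunOnExternalIpv6Address(): only attempt
    # an IPv6 run when the flag is set and the VM actually has an address.
    if assign_external_ipv6 and vm.ipv6_address:
        targets.append(vm.ipv6_address)
    return targets

vm = FakeVm('34.68.198.124', '2600:1900:4000:f04f::')
print(ping_targets(vm, assign_external_ipv6=True))
# → ['34.68.198.124', '2600:1900:4000:f04f::']
```

With the flag unset (the default), only the IPv4 address is returned, preserving existing behavior.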

Potential Issues

On GCP, 'gce_network_type' must be set to 'custom' to allow the creation of subnets that support both IPv4 and IPv6 traffic ('auto' mode creates subnets that only support IPv4). However, if --gce_network_type=custom, benchmark runs will fail for inter-region cases (e.g. --zones=us-central1-c,us-east4-b), because with the current behavior PKB only creates a subnet for the first VM's region.

So currently, a command like

python ./pkb.py --benchmarks=ping --cloud=GCP --gce_network_type=custom --zone=us-central1-c --ip_addresses=EXTERNAL

will succeed, but the following will fail.

python ./pkb.py --benchmarks=ping --cloud=GCP --gce_network_type=custom --zone=us-central1-c,us-east4-b --ip_addresses=EXTERNAL

One way to side-step this issue and run inter-region benchmarks with custom-mode VPC networks is to use vm_groups and specify CIDR ranges for each group, but this creates a new VPC network for each group. The preferred behavior (imo) would be for PKB to 'manually' create a subnet for each zone in the 'zones' flag, rather than just the first one. (Another option could be to first create a VPC network in auto mode, change it to custom mode, and then update each subnet to the IPV4_IPV6 dual-stack type.) I've held off on modifying this behavior pending insight/guidance on what the preferred solution would be.
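A minimal sketch of the 'manually create a subnet per zone' option. This is not PKB code, just an illustration that derives one dual-stack subnet-creation command per distinct region; the subnet names and CIDR ranges are made up, and the gcloud flags '--stack-type=IPV4_IPV6' and '--ipv6-access-type=EXTERNAL' are, to the best of my knowledge, what a dual-stack subnet with external IPv6 requires:

```python
# Sketch: build one dual-stack subnet creation command per region appearing
# in the zones list, rather than only for the first VM's region.
def subnet_commands(network, zones):
    regions = []
    for zone in zones:
        region = zone.rsplit('-', 1)[0]  # e.g. us-central1-c -> us-central1
        if region not in regions:
            regions.append(region)
    cmds = []
    for i, region in enumerate(regions):
        cmds.append([
            'gcloud', 'compute', 'networks', 'subnets', 'create',
            f'{network}-{region}',          # illustrative subnet name
            f'--network={network}',
            f'--region={region}',
            f'--range=10.{i}.0.0/16',        # illustrative, non-overlapping CIDR
            '--stack-type=IPV4_IPV6',
            '--ipv6-access-type=EXTERNAL',
        ])
    return cmds

for cmd in subnet_commands('pkb-network', ['us-central1-c', 'us-east4-b']):
    print(' '.join(cmd))
```

In PKB these commands would presumably be issued through the existing vm_util.IssueCommand machinery rather than printed.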

Example Run of Dual-Stack Ping benchmarks on GCP

Command:

python ./pkb.py --benchmarks=ping --cloud=GCP --gce_network_type=custom --assign_external_ipv6=True --zone=us-central1-c --ip_addresses=EXTERNAL

Truncated Results (showing only 'Average Latency'):

-------------------------PerfKitBenchmarker Results Summary-------------------------
PING:
  Average Latency                       0.413000 ms                             (ip_type="external" receiving_zone="us-central1-c" run_number="0" sending_zone="us-central1-c" using_ipv6="False" vm_1_gce_nic_type="['GVNIC']" vm_2_gce_nic_type="['GVNIC']")
  Average Latency                       0.180000 ms                             (ip_type="external" receiving_zone="us-central1-c" run_number="0" sending_zone="us-central1-c" using_ipv6="True" vm_1_gce_nic_type="['GVNIC']" vm_2_gce_nic_type="['GVNIC']")
  Average Latency                       0.424000 ms                             (ip_type="external" receiving_zone="us-central1-c" run_number="0" sending_zone="us-central1-c" using_ipv6="False" vm_1_gce_nic_type="['GVNIC']" vm_2_gce_nic_type="['GVNIC']")
  Average Latency                       0.194000 ms                             (ip_type="external" receiving_zone="us-central1-c" run_number="0" sending_zone="us-central1-c" using_ipv6="True" vm_1_gce_nic_type="['GVNIC']" vm_2_gce_nic_type="['GVNIC']")

... 

-------------------------
For all tests: perfkitbenchmarker_version="v1.12.0-6010-g2dc54ee15" vm_1_/dev/loop0="77545472" vm_1_/dev/loop1="53399552" vm_1_/dev/loop2="476524544" vm_1_/dev/nvme0n1="10737418240" vm_1_automatic_restart="False" vm_1_boot_disk_size="10" vm_1_boot_disk_type="hyperdisk-balanced" vm_1_cloud="GCP" vm_1_cpu_arch="x86_64" vm_1_cpu_version="GenuineIntel_6_207_2" vm_1_create_operation_name="operation-1765391336100-6459d337a52dd-5f42bbca-0a25ac59" vm_1_create_start_time="1765391334.845238" vm_1_dedicated_host="False" vm_1_gce_network_name="pkb-network-3185626a" vm_1_gce_network_tier="premium" vm_1_gce_shielded_secure_boot="False" vm_1_gce_subnet_name="pkb-network-3185626a" vm_1_image="ubuntu-2404-noble-amd64-v20251210" vm_1_image_family="ubuntu-2404-lts-amd64" vm_1_image_project="ubuntu-os-cloud" vm_1_ipv6_address="2600:1900:4000:f04f:0:0:0:0" vm_1_kernel_release="6.14.0-1021-gcp" vm_1_machine_type="n4-standard-2" vm_1_mtu="1460" vm_1_num_cpus="2" vm_1_numa_node_count="1" vm_1_os_info="Ubuntu 24.04.3 LTS" vm_1_os_type="ubuntu2404" vm_1_placement_group_style="none" vm_1_project="XXXXXX" vm_1_tcp_congestion_control="cubic" vm_1_threads_per_core="2" vm_1_vm_count="1" vm_1_vm_ids="5905926050530198791" vm_1_vm_ip_addresses="136.114.132.83" vm_1_vm_names="pkb-3185626a-0" vm_1_vm_platform="DEFAULT_VM" vm_1_zone="us-central1-c" vm_2_/dev/loop0="77545472" vm_2_/dev/loop1="476524544" vm_2_/dev/loop2="53399552" vm_2_/dev/nvme0n1="10737418240" vm_2_automatic_restart="False" vm_2_boot_disk_size="10" vm_2_boot_disk_type="hyperdisk-balanced" vm_2_cloud="GCP" vm_2_cpu_arch="x86_64" vm_2_cpu_version="GenuineIntel_6_207_2" vm_2_create_operation_name="operation-1765391380228-6459d361baab7-f95e146c-e620de7f" vm_2_create_start_time="1765391379.0285537" vm_2_dedicated_host="False" vm_2_gce_network_name="pkb-network-3185626a" vm_2_gce_network_tier="premium" vm_2_gce_shielded_secure_boot="False" vm_2_gce_subnet_name="pkb-network-3185626a" vm_2_image="ubuntu-2404-noble-amd64-v20251210" 
vm_2_image_family="ubuntu-2404-lts-amd64" vm_2_image_project="ubuntu-os-cloud" vm_2_ipv6_address="2600:1900:4000:f04f:0:1:0:0" vm_2_kernel_release="6.14.0-1021-gcp" vm_2_machine_type="n4-standard-2" vm_2_mtu="1460" vm_2_num_cpus="2" vm_2_numa_node_count="1" vm_2_os_info="Ubuntu 24.04.3 LTS" vm_2_os_type="ubuntu2404" vm_2_placement_group_style="none" vm_2_project="XXXXXX" vm_2_tcp_congestion_control="cubic" vm_2_threads_per_core="2" vm_2_vm_count="1" vm_2_vm_ids="414859492725097211" vm_2_vm_ip_addresses="34.68.198.124" vm_2_vm_names="pkb-3185626a-1" vm_2_vm_platform="DEFAULT_VM" vm_2_zone="us-central1-c" vm_ids="5905926050530198791,414859492725097211" vm_ip_addresses="136.114.132.83,34.68.198.124" vm_names="pkb-3185626a-0,pkb-3185626a-1"

As expected, PKB performs ping runs with both external IPv4 and IPv6 addresses (see the 'using_ipv6' tag in the ping result output). Additionally, the IPv6 address of each VM is available in the all-tests metadata.

Future Work and/or TODOs

  • add IPv6 support to netperf & iperf benchmarks
  • extend IPv6 support to other clouds
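For the netperf/iperf TODO, address-family selection is likely just a matter of choosing the IPv6 address and adding the right flag when an IPv6 run is requested. A hypothetical sketch (iperf3's '-6' flag is real; the command builder itself is made up and not part of this PR):

```python
# Hypothetical sketch for the iperf TODO: pick the target address and
# append iperf3's IPv6-only flag when requested.
def iperf3_client_cmd(vm, use_ipv6):
    addr = vm['ipv6_address'] if use_ipv6 else vm['ip_address']
    cmd = ['iperf3', '-c', addr]
    if use_ipv6:
        cmd.append('-6')  # iperf3: only use IPv6
    return cmd

vm = {'ip_address': '34.68.198.124', 'ipv6_address': '2600:1900:4000:f04f::'}
print(' '.join(iperf3_client_cmd(vm, use_ipv6=True)))
# → iperf3 -c 2600:1900:4000:f04f:: -6
```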

PedroRivera125 and others added 3 commits December 10, 2025 12:04
- assign_external_ipv6 flag to signal usage of IPv6 addresses
- ipv6_address attribute in BaseVirtualMachine (similar to existing ip_address attribute)
- AllowIcmpIpv6 function for BaseVirtualMachine and BaseFirewall
- define necessary functions for IPv6 usage on GCP (other clouds TODO)
- implement IPv6 support for ping benchmark (netperf/iperf TODO)