add e2e test for networkpolicy of openshift-config-operator #30910
zhouying7780 wants to merge 1 commit into openshift:main
Conversation
Pipeline controller notification: for optional jobs, comment. This repository is configured in automatic mode.

[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: zhouying7780. Needs approval from an approver in each of the relevant OWNERS files.
Walkthrough
A new Ginkgo test suite validates Kubernetes NetworkPolicy enforcement, testing basic connectivity rules and OpenShift config-operator policies. Tests verify pod-to-pod connectivity with and without policies, check ingress/egress rules, and validate DNS connectivity. Helpers manage pod creation, IP formatting, policy construction, and connectivity polling.
Estimated code review effort: 4 (Complex), ~45 minutes.

Warning: there were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
golangci-lint (2.11.3) Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
Actionable comments posted: 1
🧹 Nitpick comments (2)
test/extended/networking/config_operator_networkpolicy.go (2)
38-43: Remove misplaced `defer ginkgo.GinkgoRecover()`.
`GinkgoRecover()` is intended for use inside goroutines spawned within test specs to catch panics. At the `Describe` block level it has no effect, since the function returns immediately after registering callbacks.

Suggested fix:

```diff
 var _ = ginkgo.Describe("[sig-network][Feature:NetworkPolicy] Config Operator NetworkPolicy", func() {
-	defer ginkgo.GinkgoRecover()
 	oc := exutil.NewCLI("config-operator-networkpolicy-e2e")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/networking/config_operator_networkpolicy.go` around lines 38 - 43, Remove the misplaced defer ginkgo.GinkgoRecover() call from the top-level ginkgo.Describe block (the Describe closure where oc := exutil.NewCLI... and f := oc.KubeFramework() are defined); GinkgoRecover() should only be used inside goroutines spawned in specs, so simply delete the line "defer ginkgo.GinkgoRecover()" from that Describe function (no other changes required unless you add goroutines, in which case call ginkgo.GinkgoRecover() inside those goroutines).
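The reason the recover must move into any spawned goroutine is the same reason plain `recover` only works there: a deferred recover registered in the parent goroutine never sees a panic raised in a child. A minimal stdlib-only sketch of the pattern `ginkgo.GinkgoRecover` wraps (`runGuarded` is a hypothetical helper, not part of Ginkgo):

```go
package main

import "fmt"

// runGuarded runs work in its own goroutine and converts a panic in
// that goroutine into an error. The deferred recover MUST live inside
// the goroutine that panics; a recover deferred in the caller's
// goroutine would never fire for this panic.
func runGuarded(work func()) error {
	done := make(chan error, 1)
	go func() {
		defer func() {
			if r := recover(); r != nil {
				done <- fmt.Errorf("recovered: %v", r)
				return
			}
			done <- nil
		}()
		work()
	}()
	return <-done
}

func main() {
	fmt.Println(runGuarded(func() { panic("boom") })) // recovered: boom
	fmt.Println(runGuarded(func() {}))                // <nil>
}
```

This mirrors why a `defer ginkgo.GinkgoRecover()` at the `Describe` level is inert: the `Describe` closure returns as soon as the specs are registered, so there is nothing left for the deferred recover to guard.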
423-435: `wait.PollImmediate` is deprecated; migrate to `wait.PollUntilContextTimeout`.
The deprecation is documented in k8s.io/apimachinery and explicitly recommends `PollUntilContextTimeout` as the replacement. This applies to all four usages in this file (lines 425, 457, 534, 552).

Example migration for expectConnectivityForIP:

```diff
-	err := wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
+	err := wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 2*time.Minute, true, func(ctx context.Context) (bool, error) {
 		succeeded, err := runConnectivityCheck(kubeClient, namespace, clientLabels, serverIP, port)
 		if err != nil {
 			return false, err
 		}
 		return succeeded == shouldSucceed, nil
 	})
```

The `true` parameter sets immediate mode, matching `PollImmediate` behavior. No need for nested context timeouts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/networking/config_operator_networkpolicy.go` around lines 423 - 435, Replace wait.PollImmediate calls with wait.PollUntilContextTimeout using a background context and immediate=true to preserve behavior: for example in expectConnectivityForIP (and the three other occurrences in this file) create a ctx (e.g., context.Background()) and call wait.PollUntilContextTimeout(ctx, 5*time.Second, 2*time.Minute, true, func(ctx context.Context) (bool, error) { /* move existing body here, change closure signature to accept ctx and return (bool,error) */ }), leaving the existing connectivity check logic and gomega.Expect unchanged; ensure any inner closure returns the same success boolean and error as before.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7640fa04-74a9-4944-a68b-e94f16a687b9
📒 Files selected for processing (1)
test/extended/networking/config_operator_networkpolicy.go
```go
if len(completed.Status.ContainerStatuses) == 0 {
	return false, fmt.Errorf("no container status recorded for pod %s", name)
}
exitCode := completed.Status.ContainerStatuses[0].State.Terminated.ExitCode
framework.Logf("client pod %s/%s exitCode=%d", namespace, name, exitCode)
return exitCode == 0, nil
```
Potential nil pointer dereference on Terminated state.
When waitForPodCompletion returns, the pod phase is Succeeded or Failed, but ContainerStatuses[0].State.Terminated may still be nil in edge cases (e.g., container never started, or eviction). Accessing Terminated.ExitCode directly will panic.
Proposed fix
```diff
 if len(completed.Status.ContainerStatuses) == 0 {
 	return false, fmt.Errorf("no container status recorded for pod %s", name)
 }
+terminated := completed.Status.ContainerStatuses[0].State.Terminated
+if terminated == nil {
+	return false, fmt.Errorf("container did not terminate properly for pod %s", name)
+}
-exitCode := completed.Status.ContainerStatuses[0].State.Terminated.ExitCode
+exitCode := terminated.ExitCode
 framework.Logf("client pod %s/%s exitCode=%d", namespace, name, exitCode)
 return exitCode == 0, nil
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/extended/networking/config_operator_networkpolicy.go` around lines 525 -
530, The code dereferences ContainerStatuses[0].State.Terminated without
guarding for nil, which can panic; in the function waiting for pod completion
(e.g., waitForPodCompletion) check that completed.Status.ContainerStatuses is
non-empty and that ContainerStatuses[0].State is non-nil and
ContainerStatuses[0].State.Terminated is non-nil before reading ExitCode, and if
Terminated is nil return a descriptive error (including namespace/name) instead
of proceeding; update the logic around exitCode/return to only compute exitCode
when Terminated exists and otherwise surface the error so framework.Logf and the
boolean result aren't based on a nil dereference.
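The guard pattern the review asks for can be shown in isolation. The types below are local throwaway mimics of the corev1 shapes involved (only the fields the guard touches), not the real Kubernetes API:

```go
package main

import "fmt"

// Local mimics of the corev1 shapes involved in the review finding;
// Terminated is a pointer, exactly the field that can be nil.
type terminatedState struct{ ExitCode int32 }
type containerState struct{ Terminated *terminatedState }
type containerStatus struct{ State containerState }

// exitCodeOf reads the first container's exit code, guarding every
// step that could be absent instead of dereferencing blindly.
func exitCodeOf(name string, statuses []containerStatus) (int32, error) {
	if len(statuses) == 0 {
		return 0, fmt.Errorf("no container status recorded for pod %s", name)
	}
	t := statuses[0].State.Terminated
	if t == nil {
		return 0, fmt.Errorf("container did not terminate properly for pod %s", name)
	}
	return t.ExitCode, nil
}

func main() {
	// Terminated present: exit code is readable.
	code, err := exitCodeOf("ok-pod", []containerStatus{
		{State: containerState{Terminated: &terminatedState{ExitCode: 0}}},
	})
	fmt.Println(code, err) // 0 <nil>

	// Terminated nil (e.g. container evicted before starting):
	// a descriptive error instead of a panic.
	_, err = exitCodeOf("evicted-pod", []containerStatus{{}})
	fmt.Println(err)
}
```

Without the nil check, the second call would panic with a nil pointer dereference, which is the failure mode the review comment describes.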
Scheduling required tests:

/retest-required
@zhouying7780: The following tests failed.
Job Failure Risk Analysis for sha: 8cf1f04
Risk analysis has seen new tests, most likely introduced by this PR.
New Test Risks for sha: 8cf1f04
New tests seen in this PR at sha: 8cf1f04
No description provided.