NE-2529: add new dcm e2e tests #30926
openshift-merge-bot[bot] merged 2 commits into openshift:main
Conversation
Pipeline controller notification: For optional jobs, comment. This repository is configured in: automatic mode.
@jcmoraisjr: This pull request references NE-2529, which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
Important: Review skipped. Auto reviews are limited based on label configuration. 🚫 Review skipped: only excluded labels are configured. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: Organization UI. Review profile: CHILL. Plan: Pro.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough: Adds a new Ginkgo e2e test file that validates the OpenShift HAProxy router's IngressController Dynamic Configuration Manager feature, including router probing helpers, a routeStackBuilder for creating and mutating backend stacks, and tests for connectivity, scale-out, scale-in, route updates, and PID stability.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🧹 Nitpick comments (5)
test/extended/router/config_manager_ingress.go (3)
172-173: Use `sets.New[string]` instead of deprecated `sets.NewString`.
`sets.NewString` is deprecated. The codebase should use the generic `sets.New[string]()` for consistency with modern Go practices.
♻️ Proposed fix

```diff
- expectedServers := sets.NewString(currentServers...)
- actualServers := sets.NewString()
+ expectedServers := sets.New[string](currentServers...)
+ actualServers := sets.New[string]()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/router/config_manager_ingress.go` around lines 172 - 173, Replace deprecated sets.NewString usage: initialize expectedServers with sets.New[string](currentServers...) instead of sets.NewString(currentServers...), and initialize actualServers with sets.New[string]() instead of sets.NewString(); update both occurrences where variables expectedServers and actualServers are declared to use the generic sets.New[string] constructor.
187-224: TODO tests are well-documented placeholders.
The descriptions clearly outline the test intent and expected behavior. Consider adding `g.Skip("not yet implemented")` to make it explicit that these are pending tests.
Optional: Add Skip to make pending tests explicit

```diff
 g.It("should handle scale-in operations", func() {
-	// TODO
+	g.Skip("TODO: not yet implemented")
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/router/config_manager_ingress.go` around lines 187 - 224, Replace the TODO bodies inside the g.It tests in config_manager_ingress.go with explicit skips so they are marked pending; for each g.It("should ...") block (the scale-in, various route updates, balancing after scale-out/in, metrics after scale events, and steady memory/PID usage tests) call g.Skip("not yet implemented") as the first statement in the closure so the test runner reports them as skipped instead of empty/implicit. Ensure each g.It closure contains only the g.Skip invocation until the test is implemented.
10-11: Duplicate import aliases for the same package.
Both `metav1` and `v1` are imported from `k8s.io/apimachinery/pkg/apis/meta/v1`. This creates confusion and inconsistency: the file uses `metav1` in some places (line 40) and `v1` in others (lines 57, 79, 88, 91).
♻️ Consolidate to a single import alias

```diff
 import (
 	"context"
 	"net/http"
 	"time"

 	g "github.com/onsi/ginkgo/v2"
 	o "github.com/onsi/gomega"

 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
-	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/labels"
```

Then update all usages of `v1.` to `metav1.` throughout the file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/router/config_manager_ingress.go` around lines 10 - 11, There are duplicate import aliases: both metav1 and v1 point to k8s.io/apimachinery/pkg/apis/meta/v1; remove the redundant v1 alias in the import block so only metav1 is imported, and update every usage of v1.* in this file (e.g., any v1.ObjectMeta, v1.ListOptions, etc.) to metav1.* so all references consistently use metav1.
test/extended/router/config_manager.go (1)
842-844: Missing timeout in `waitDeployment` could cause tests to hang indefinitely.
The `oc wait` command has no `--timeout` flag specified, so it defaults to 30s, which may be acceptable, but an explicit timeout would be clearer and safer for test reliability.
Optional: Add explicit timeout

```diff
 func waitDeployment(oc *exutil.CLI, resourceName string, replicas int) error {
-	return oc.AsAdmin().Run("wait").Args("--for", "jsonpath={.status.readyReplicas}="+strconv.Itoa(replicas), "deployment/"+resourceName).Execute()
+	return oc.AsAdmin().Run("wait").Args("--for", "jsonpath={.status.readyReplicas}="+strconv.Itoa(replicas), "--timeout=180s", "deployment/"+resourceName).Execute()
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/router/config_manager.go` around lines 842 - 844, The waitDeployment function calls oc.AsAdmin().Run("wait") without an explicit timeout which can lead to hangs; update waitDeployment to add a --timeout argument to the oc wait invocation (e.g., "--timeout=2m" or another test-appropriate duration) when building Args for deployment/"+resourceName so the command uses an explicit timeout rather than relying on defaults.
test/extended/router/execpod.go (1)
28-46: Parsing error from `strconv.Atoi` is not fully handled.
On line 45, if `strconv.Atoi` fails, the error is returned but the function also returns `code` (which is 0) and `output[:idx]`. However, `err` from line 45 shadows the original nil error value, and this uninitialized `code` value could confuse callers.
Additionally, the `RunHostCmd` function (from context snippet 1) does not specify a container, which could target the wrong container in multi-container pods like the router pod. Consider whether container specification is needed for reliability.
Suggested improvement for error handling

```diff
 code, err = strconv.Atoi(output[idx+1:])
+if err != nil {
+	return 0, "", fmt.Errorf("failed to parse response code %q: %w", output[idx+1:], err)
+}
 return code, output[:idx], nil
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/router/execpod.go` around lines 28 - 46, The strconv.Atoi call in readURL may fail but its error is returned while still returning a possibly misleading code and partial output; change the parsing to use a separate variable (e.g., parsedCode, parseErr := strconv.Atoi(output[idx+1:])) and if parseErr != nil return 0, "", parseErr, otherwise return parsedCode, output[:idx], nil; while here also ensure the host command targets the intended container in multi-container pods by switching the e2eoutput.RunHostCmd call to the container-aware variant or pass the pod container name (reference e2eoutput.RunHostCmd and the execPod/pod fields) so the curl runs in the correct container.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@test/extended/router/config_manager_ingress.go`:
- Around line 172-173: Replace deprecated sets.NewString usage: initialize
expectedServers with sets.New[string](currentServers...) instead of
sets.NewString(currentServers...), and initialize actualServers with
sets.New[string]() instead of sets.NewString(); update both occurrences where
variables expectedServers and actualServers are declared to use the generic
sets.New[string] constructor.
- Around line 187-224: Replace the TODO bodies inside the g.It tests in
config_manager_ingress.go with explicit skips so they are marked pending; for
each g.It("should ...") block (the scale-in, various route updates, balancing
after scale-out/in, metrics after scale events, and steady memory/PID usage
tests) call g.Skip("not yet implemented") as the first statement in the closure
so the test runner reports them as skipped instead of empty/implicit. Ensure
each g.It closure contains only the g.Skip invocation until the test is
implemented.
- Around line 10-11: There are duplicate import aliases: both metav1 and v1
point to k8s.io/apimachinery/pkg/apis/meta/v1; remove the redundant v1 alias in
the import block so only metav1 is imported, and update every usage of v1.* in
this file (e.g., any v1.ObjectMeta, v1.ListOptions, etc.) to metav1.* so all
references consistently use metav1.
In `@test/extended/router/config_manager.go`:
- Around line 842-844: The waitDeployment function calls
oc.AsAdmin().Run("wait") without an explicit timeout which can lead to hangs;
update waitDeployment to add a --timeout argument to the oc wait invocation
(e.g., "--timeout=2m" or another test-appropriate duration) when building Args
for deployment/"+resourceName so the command uses an explicit timeout rather
than relying on defaults.
In `@test/extended/router/execpod.go`:
- Around line 28-46: The strconv.Atoi call in readURL may fail but its error is
returned while still returning a possibly misleading code and partial output;
change the parsing to use a separate variable (e.g., parsedCode, parseErr :=
strconv.Atoi(output[idx+1:])) and if parseErr != nil return 0, "", parseErr,
otherwise return parsedCode, output[:idx], nil; while here also ensure the host
command targets the intended container in multi-container pods by switching the
e2eoutput.RunHostCmd call to the container-aware variant or pass the pod
container name (reference e2eoutput.RunHostCmd and the execPod/pod fields) so
the curl runs in the correct container.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c0498002-2ebf-493e-9f96-813021b0c975
📒 Files selected for processing (3)
- test/extended/router/config_manager.go
- test/extended/router/config_manager_ingress.go
- test/extended/router/execpod.go
Scheduling required tests:
95b624a to ebdabb6 (Compare)
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/router/config_manager_ingress.go`:
- Around line 74-76: The Route admission is not scoped to the test namespace
because NamespaceSelector was commented out; restore namespace scoping by
re-enabling NamespaceSelector (use v1.SetAsLabelSelector(labels.Set{"name":
workingNamespace}) or an equivalent label selector that matches the test's
namespace) in the same struct where RouteSelector is set (look for the
RouteSelector field and the commented NamespaceSelector in
config_manager_ingress.go), or alternatively generate unique route
hosts/selectors per test (ensure routerSelectorKey/routerSelectorValue remain
unique per test) so routes from other namespaces cannot be admitted.
- Around line 415-417: The test currently assumes an EndpointSlice will be
created implicitly (endpointsAPICreatesEndpointSlice = true) while
createDetachedService does not create or wait for one, causing scaleInEndpoints
to depend on asynchronous mirroring and flake; update the test so that either
createDetachedService explicitly creates and waits for the EndpointSlice after
creating the Endpoints (so scaleInEndpoints always finds it), or set
endpointsAPICreatesEndpointSlice = false and change the scaleInEndpoints
call-path to handle the absence of an EndpointSlice (e.g., by creating/waiting
for one or operating against Endpoints directly); reference
createDetachedService, scaleInEndpoints, and endpointsAPICreatesEndpointSlice
when making the change.
- Around line 167-173: The test is polling too aggressively (time.Millisecond)
inside the Eventually block that calls readLocalURL against routerPod; change
the polling interval to a less aggressive value (for example 100-500ms) or
introduce a named constant (e.g., dcmIngressPollInterval) and use that instead
of time.Millisecond to avoid hammering the API server; update the call where
Eventually(...).WithTimeout(dcmIngressTimeout).WithPolling(time.Millisecond) to
use the new interval constant or a larger duration like 200*time.Millisecond.
- Around line 251-279: The test cases for the g.It blocks titled "should handle
various route updates", "should maintain proper balancing after scale-out and
scale-in operations", "should report accurate metrics after scale-out and
scale-in operations", and "should maintain steady memory and PID usage after
scale-out and scale-in operations" are empty and produce false-green results;
either implement meaningful assertions and test logic inside each It block
(exercise the router behavior, e.g., perform scale-in/scale-out operations, send
requests, verify balancing, metrics and memory/PID stability) or explicitly mark
each case pending/skipped (e.g., use Skip/Pending helpers) so they do not pass
silently—update the bodies for those specific g.It declarations to contain real
assertions or a deliberate skip.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d2393801-4a6f-464f-9fda-bb7c3e5f8e5e
📒 Files selected for processing (2)
- test/extended/router/config_manager.go
- test/extended/router/config_manager_ingress.go
✅ Files skipped from review due to trivial changes (1)
- test/extended/router/config_manager.go
♻️ Duplicate comments (1)
test/extended/router/config_manager_ingress.go (1)
471-510: ⚠️ Potential issue | 🟠 Major
`createDetachedService` still relies on async endpoint mirroring to make scale-in work. Line 473 hard-codes the optimistic path, and lines 499-510 clone the source `Endpoints` only once. `scaleInEndpoints` later requires a detached `EndpointSlice`, so this still flakes if mirroring is disabled/slow or if the source endpoints have not converged yet. Please create the detached `EndpointSlice` explicitly here, or wait for both resources before returning.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/extended/router/config_manager_ingress.go` around lines 471 - 510, The createDetachedService path currently assumes endpoint mirroring and only clones Endpoints once (endpointsAPICreatesEndpointSlice true/false branch and the block creating ep from epCurrent), which flakes when mirroring is disabled or slow; modify createDetachedService to explicitly create a detached EndpointSlice (use r.fetchEndpointSlice to get the source slice, generate epsName with names.SimpleNameGenerator.GenerateName, construct a discoveryv1.EndpointSlice with Labels {"kubernetes.io/service-name": serviceName} and the source AddressType/Ports/Endpoints, and call DiscoveryV1().EndpointSlices(...).Create) and/or wait for both the new EndpointSlice and Endpoints to exist and converge before returning so scaleInEndpoints can rely on the detached EndpointSlice rather than on async mirroring.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@test/extended/router/config_manager_ingress.go`:
- Around line 471-510: The createDetachedService path currently assumes endpoint
mirroring and only clones Endpoints once (endpointsAPICreatesEndpointSlice
true/false branch and the block creating ep from epCurrent), which flakes when
mirroring is disabled or slow; modify createDetachedService to explicitly create
a detached EndpointSlice (use r.fetchEndpointSlice to get the source slice,
generate epsName with names.SimpleNameGenerator.GenerateName, construct a
discoveryv1.EndpointSlice with Labels {"kubernetes.io/service-name":
serviceName} and the source AddressType/Ports/Endpoints, and call
DiscoveryV1().EndpointSlices(...).Create) and/or wait for both the new
EndpointSlice and Endpoints to exist and converge before returning so
scaleInEndpoints can rely on the detached EndpointSlice rather than on async
mirroring.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7799a49d-c18f-4405-a6a6-15047777d580
📒 Files selected for processing (1)
test/extended/router/config_manager_ingress.go
/test e2e-gcp-ovn
/hold
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/router/config_manager_ingress.go`:
- Around line 586-592: exposeDeployment returns too early by using Pod lists as
backends; instead, after calling r.oc.AsAdmin().Run("expose") wait/poll until
the Service has populated Endpoints or EndpointSlices and then build
backendServers from those Endpoint/EndpointSlice records (not from
selector-matched Pods). Update exposeDeployment (and the similar logic around
createDetachedService/scaleDeployment) to poll with a timeout/retry loop, query
Endpoints and/or EndpointSlices for the service name, extract the actual backend
target IDs (addresses/targetRefs), and return those values so callers see only
real, ready backends.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bb31693a-a8cb-40cc-95f6-c799210ce870
📒 Files selected for processing (1)
test/extended/router/config_manager_ingress.go
/unhold
```go
g.It("should maintain steady memory and PID usage after scale-out and scale-in operations", func() {
	builder := newRouteStackBuilder(oc, "insecure-steady-mem-pid", routeSelector)
	hostname := "route-steady-mem-pid.local"
	initReplicas := 3
```
Should we reconsider relying on initReplicas := 3 here, or explicitly document why it is being used?
I think that since the test starts with 3 static entries, when we scale from 1 to 2 replicas on the manager side, it skips the code path where health checks might break?
Sure, I can make some changes or add some more comments, but I think I'm missing the purpose. I'm asking so I can add better and more accurate comments. The goal of the test is to ensure we don't potentially have memory or pid exhaustion when making a number of distinct jumps in the number of replicas. Feel free to share suggestions for changes.
```go
// Related: https://redhat.atlassian.net/browse/OCPBUGS-80932
//
// TL;DR: once scaling-in to `initReplicas` or less, a scale-out to more than `maxDynamicServers` can lead to a reload.
changingReplicas := []int{6, 5, 1, 2, 3, 4}
```
Should we add to this to show that going beyond the dynamic limit falls back gracefully?
The idea here is the contrary: we need to stay inside the boundary where no reloads happen. We could also add another test that goes beyond that and ensures everything is fine; this, however, has some caveats: usually the tests don't know internal details, so we would need to know that crossing this boundary is a thing in order to create a test for it; and after the add/del server api calls we're going to implement, this boundary won't exist anymore, leaving the test without a real purpose.
Anyway, we do have some other sorts of tests that currently force a reload, like "should handle various route updates", and it should continue to reload even after the DCM improvements we're planning. I think this covers the reload scenario, all the better without the test itself knowing about it 🙂
```go
// did fail to update HAProxy, you would continue to see responses from it. This can be achieved e.g. using a
// service without a selector, creating an endpointslice manually and removing manually the pods from this
// endpointslice.
g.It("should handle scale-in operations", func() {
```
I wonder if we might find some useful information (and avoid regression later if we have it right) by scaling to 0 and back up. DCM introduces a different behind-the-scenes methodology for this now, right?
We could definitely push dealing with this off until later and write more tests just to target whether the scale to 0 and back to 1 is handled seamlessly now with DCM.
Having a zero-replicas test is a fair addition, though I added it as a new test, since this one is focused on just scale-in operations.
gcs278 left a comment
@jcmoraisjr nice PR - I do appreciate the detailed comments, they 100% help. And I like the stack builder pattern you've got going.
A lot of nits, and most of my comments are pretty surface level, but I'll come back when I get a chance to finish reviewing.
```go
o.Eventually(func(g o.Gomega) {
	svc, err := oc.AdminKubeClient().CoreV1().Services(nsRouter).Get(ctx, svcName, metav1.GetOptions{})
	g.Expect(err).NotTo(o.HaveOccurred())

	listOpts := metav1.ListOptions{LabelSelector: labels.FormatLabels(svc.Spec.Selector)}
	pods, err := oc.AdminKubeClient().CoreV1().Pods(nsRouter).List(ctx, listOpts)
	g.Expect(err).NotTo(o.HaveOccurred())
	g.Expect(pods.Items).To(o.HaveLen(1))

	routerPod = &pods.Items[0]
}).WithTimeout(dcmIngressTimeout).WithPolling(time.Second).Should(o.Succeed())
```
I think we should leverage waitForIngressControllerCondition in shard.go (you'll have to make it public).
If there was a flake, it'd be hard to tell where it failed if we only look for a service and pod. The available and progressing conditions on the IngressController represent this information as well as descriptive messages on the state.
You can leverage what the set of conditions should be by looking at the CIO conditions: https://github.com/openshift/cluster-ingress-operator/blob/fc6e70e0073410a750b5782b0ef385c2ff0eee4a/test/e2e/operator_test.go#L77-L82.
Good point, just added before taking the list of pods. Based on your other comment, this not only removes the need for wait(), but also for the Eventually() below, so it is guaranteed to find the router pod?
```go
g.By("waiting router to deploy the route")

output, err := waitLocalURL(ctx, routerPod, hostname, false, "/", http.StatusOK, dcmIngressTimeout)
```
Reusing the same routerPod instance could result in flakey behavior. Tests often run in very noisy and busy clusters, and should be able to handle a node scale-out/scale-in. And in that case, the router pod will change.
Is there a reason we need to use the local router pod and not an exec pod? Similar to how you wrote the last tests in config_manager.go with createExecPod?
This is new for me, I wasn't expecting the router pod itself to move to another node. But this should happen with the execpod as well, shouldn't it? Wondering if handling 404 here makes sense: we move to the svc selector listing again in case it happens?
I'm using the router pod itself just because it's simpler. Although the whitelist depends on the source ip, having just loopback is also simpler, but we can also grab the ip address of the exec pod as well.
Any good reason to use the exec pod instead?
Actually, I forgot that BeforeEach runs...before each test.
That should be okay - I didn't realize you are deleting/recreating the ingresscontroller and router pods on each test, so you should have a fresh router pod. I was thinking you were referencing a router pod for the entirety of all these tests combined.
Now for a long-ish running test (like the one that should maintain proper balancing after scale-out and scale-in operations), I think it's still a possibility that we could lose the router pod, but I might be overstating the risk of a scale-in or node reboot during testing.
If you want to be super safe, you could have a wrapper function that calls execPodReadURL and resolves the router pod each time it's called (instead of having a static reference throughout the test). But I'm okay with this as long as you haven't seen flakes in CI. 👍
This can be a good improvement. We can add it as well on a possible followup PR.
```go
// if expectedCode is `0`, an empty response and FIN or RST is expected from the server side.
func waitLocalURL(ctx context.Context, routerPod *corev1.Pod, host string, secure bool, abspath string, expectedCode int, timeout time.Duration) (output string, err error) {
	err = wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (done bool, err error) {
		code, out, err := readLocalURL(routerPod, host, secure, abspath)
```
nit: Have you considered adapting readURL rather than creating new helpers? Or adapting readURL to use readLocalURL? I think they are quite similar.
I needed to make some changes to the old ones but didn't want to add noise in the config_manager.go tests. But you're right. Since the new ones are more flexible, I gave it a try, changing the old ones into just entrypoints to the new ones. What do you think?
I like the updates 👍
We can refactor more in a following PR - my goal is to avoid too much helper proliferation. We should really make a router util.go anyways, but thanks for the consolidation so far.
Thanks for the updates - LGTM. I think we have some follow up improvements that we've talked about, but since we are trying to get CI signal for this feature gate, let's proceed with merging (as long as the tech-preview CI results are good).
/lgtm
/test e2e-aws-ovn-single-node-techpreview
@jcmoraisjr In my LGTM message, I forgot to include a thank you for providing great comments on the "why" of your tests. It helped me do a quick review! And thanks for being responsive on the comments.
Nice, e2e-gcp-ovn-techpreview passed. I'm going to spin it again, since we are waiting on approve/verification.
/test e2e-gcp-ovn-techpreview
@lihongan is this something you could provide verified for? We are looking to merge hopefully tomorrow (and waiting on approve from the appropriate person).
/verified by ci - the 8 tests passed in techpreview jobs
@lihongan: This PR has been marked as verified.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Job Failure Risk Analysis for sha: e6739c8
Risk analysis has seen new tests most likely introduced by this PR.
New Test Risks for sha: e6739c8
New tests seen in this PR at sha: e6739c8
/retest-required
/retest seeing
/retest-required
infra issue around auth that is reported to be fixed now
/test ci/prow/e2e-aws-ovn-single-node-techpreview
/test e2e-aws-ovn-single-node-techpreview
/test e2e-gcp-ovn-techpreview
@neisw I think the infra issue has been resolved on that failed e2e-aws-ovn-fips. The TechPreview jobs are passing our new tests, but failing on other issues. Could you approve when you have a chance? Thanks.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: gcs278, jcmoraisjr, neisw. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@jcmoraisjr: all tests passed! Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Add DCM e2e tests focused on user experience.
Here are some requirements and thoughts kept in mind while designing and coding the tests:
https://redhat.atlassian.net/browse/NE-2529