USHIFT-966 USHIFT-973: microshift e2e test suite #1557
openshift-merge-robot merged 45 commits into openshift:main from
Conversation
|
@pmtk: GitHub didn't allow me to request PR reviews from the following users: pmtk. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
|
being tested in openshift/release#37629 |
e2e/tests/0030-router-smoke-test.sh
Outdated
Can we split these test scripts up so it is possible to run them against a local deployment, too?
e2e/cluster-debug-info.sh
Outdated
At what point can this script just become a call to sos report?
As soon as the sos version we need is publicly available this script may be retired.
I don't remember the details, but I think we still don't have everything in sos? Does it have "feature parity" with cluster-debug-info.sh?
I take ^ back. Just looked at sos from master and it's pretty dope.
@dhellmann @pacevedom what do you think about leaving a couple of commands in the job log so there's quickly glanceable info in case of error? Something like oc get {pods,nodes} and maybe oc get events -A --sort-by=.metadata.creationTimestamp | head -n 20. Given our problems with topolvm image pulling, I'd get annoyed quickly having to download and extract the sos report only to find that issue there :P
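The quick-glance idea could look something like this in the job's teardown step. This is only a sketch; the helper name and exact command set are illustrative, not part of the PR:

```shell
#!/bin/bash
# Hypothetical helper: print a short cluster summary into the job log so common
# failures (e.g. image pull errors) are visible without unpacking an sos report.
quick_glance() {
    # Skip quietly when oc is unavailable (e.g. on a machine without the client).
    if ! command -v oc >/dev/null 2>&1; then
        echo "oc not found; skipping quick-glance summary" >&2
        return 0
    fi
    oc get nodes
    oc get pods -A
    oc get events -A --sort-by=.metadata.creationTimestamp | head -n 20
}
```

Running it unconditionally at the end of the job keeps the full sos report as the deep-dive artifact while the log shows the obvious failures up front.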
e2e/main.sh
Outdated
Passing shell commands to ssh to run with pipes and loops and things can make quoting pretty messy. Can we process the output of the script outside of ssh? Or can we make the script being run produce just the output we want (maybe via a command line switch)?
We also, at one point, talked about just copying all of the test code into the VM and then running everything there. I can see some issues with that, but the appeal is that it simplifies the test runner logic, at least in some cases. Does it make the implementation of the tests harder, though?
I feel that tests like the router/LB smoke tests would not be as sane run from within the host.
Also, there's the reboot test (which I haven't ported yet; actually there's nothing to port there).
I know the current state of the PR is "in between" approaches, but I didn't want to make too many changes to the existing tests (especially greenboot), hoping for an iterative approach.
Re: that monstrosity after the pipe: it just adds a timestamp to the beginning of each line. Somehow I couldn't find the moreutils package on RHEL (it would provide ts, which would be much simpler).
Also, it's not needed at all; I just put it there because I noticed that the greenboot check takes way too long (the pod restart check takes 50s for each namespace, sequentially).
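For reference, the timestamp-prefix trick can be done without moreutils at all. A minimal stand-in for ts, using only awk (the function name is mine, not from the PR):

```shell
#!/bin/bash
# Minimal stand-in for moreutils' `ts`: prefix every line read on stdin with a
# wall-clock timestamp. Pure awk, so it works on a stock RHEL install.
add_timestamps() {
    awk '{
        cmd = "date +%H:%M:%S"
        cmd | getline now
        close(cmd)
        print now, $0
        fflush()    # keep output streaming when used mid-pipeline
    }'
}
```

Intended usage: `greenboot_check.sh | add_timestamps`, which makes slow steps (like the sequential pod restart checks) visible in the log.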
I just remembered some testing I've done simulating "power loss during initialization". A test like that (or just "do 100 reboots") has no place running from MicroShift itself.
Having an external test runner feels cleaner to me, but there's nothing prohibiting the test suite from running on the MicroShift host (like selected tests running on a dev VM against that dev VM); that's probably not what you meant, though?
Are we talking about a test runner on the dev VM, or just having test cases in scripts that are copied to the remote and executed there? Even though bash scripts are much simpler at times, being native to the shell (duh) and with set -x providing full debug info, they feel like a mess.
I'm sorry, yes, you've pointed out that reboot case to me a few times before, and I keep forgetting it.
I'm trying to minimize the complexity of the commands embedded in the ssh arguments. Maybe we can isolate just those cases as temporary scripts? A guideline like "if you wouldn't type it on the command line, don't pass it to ssh as an argument" could give us a heuristic. Maybe that's stricter than needed, though?
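One way to apply that heuristic is to materialize the complex part as a local script, then copy it over and run it, instead of inlining loops and pipes in the ssh arguments. A sketch with hypothetical helper names:

```shell
#!/bin/bash
# Write the given lines to a temporary script and echo its path; the caller can
# then scp it to the host and run it, avoiding nested quoting in ssh arguments.
make_step_script() {
    local tmp
    tmp="$(mktemp)"
    printf '#!/bin/bash\nset -euo pipefail\n' > "$tmp"
    printf '%s\n' "$@" >> "$tmp"
    chmod +x "$tmp"
    echo "$tmp"
}

# Intended usage (host/user variables illustrative):
#   step="$(make_step_script 'systemctl restart microshift')"
#   scp "$step" "${USHIFT_USER}@${USHIFT_IP}:/tmp/step.sh"
#   ssh "${USHIFT_USER}@${USHIFT_IP}" 'bash /tmp/step.sh'
```

Only trivially quotable one-liners would still be passed to ssh directly, matching the "if you wouldn't type it on the command line" guideline.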
|
@pmtk: This pull request references USHIFT-973 which is a valid jira issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Force-pushed 2023943 to c642ae8 (Compare)
|
/label tide/merge-method-squash |
dhellmann
left a comment
I'm happy with this version. I'll leave the PR open to let some of the rest of the team look at it, since there are others who have been following the work more closely than I have.
e2e/main.sh
Outdated
set -euo pipefail

SCRIPT_DIR="$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")"
OUTPUT_DIR="${ARTIFACT_DIR:-_output}/microshift-e2e-$(date +'%Y%m%d-%H%M%S')/"
Can we make the default _output value relative to the SCRIPT_DIR so that it's a full path?
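A sketch of that change, rooting the fallback at SCRIPT_DIR so the default is always an absolute path:

```shell
#!/bin/bash
# Resolve the script's own directory first, then base the _output fallback on
# it; an exported ARTIFACT_DIR (as in CI) still wins over the default.
SCRIPT_DIR="$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")"
OUTPUT_DIR="${ARTIFACT_DIR:-${SCRIPT_DIR}/_output}/microshift-e2e-$(date +'%Y%m%d-%H%M%S')/"
```

With this, artifacts land next to the e2e scripts regardless of the caller's working directory.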
echo ""
echo " Script expects two environmental variables:"
echo " - USHIFT_IP"
echo " - USHIFT_USER"
Can we set 127.0.0.1 and microshift defaults for those? This should facilitate easier local execution of the scripts on the dev machine.
There will be a class of tests that cannot be run locally (anything requiring a reboot, for example). That won't be all of the tests, but for ease of development of the test suite we should be consistent in running the tests remotely.
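The defaulting itself is a one-liner per variable, using the values proposed above:

```shell
#!/bin/bash
# Fall back to a local target when the caller does not export the variables;
# CI jobs can still override both by exporting them before running the suite.
USHIFT_IP="${USHIFT_IP:-127.0.0.1}"
USHIFT_USER="${USHIFT_USER:-microshift}"
```

This keeps the usage message accurate (the variables are still honored) while letting a dev-machine run work with no setup.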
|
/hold |
|
/unhold |
|
I'm approving this PR and putting it on hold. Please unhold when ready to merge. |
|
/test e2e-router-smoke-test |
ssh_cmd 'cat << EOF | sudo tee /etc/microshift/config.yaml
---
apiServer:
  subjectAltNames:
  - '"${USHIFT_IP}"'
EOF' &>"${output_dir}/0001-setup.log"
Is this needed? Should it remain as part of the installation?
But it's only done in CI and missing for local VMs (unless someone does it manually). I'd prefer using IPs over hostnames if we can.
Tests only require a host for the execution, be that a hostname or an IP, right? I would put the requirement on configuring the system outside of the tests so that they are generic for any environment.
In CI you have that as part of the install steps, and in a dev environment you may get away with default configuration. For example, I use "microshift-rhel9" as hostname for my VM and never use its IP, therefore the default config is ok.
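That "generic host" requirement could be expressed as a single variable accepting either form. A sketch, where USHIFT_HOST and its default are hypothetical names, not from the PR:

```shell
#!/bin/bash
# Tests only need something ssh can reach: an IP in CI, a hostname like
# "microshift-rhel9" in a dev environment. One variable covers both cases.
USHIFT_HOST="${USHIFT_HOST:-microshift-rhel9}"

# Thin wrapper used by the test scripts; user default is illustrative.
ssh_cmd() {
    ssh "${USHIFT_USER:-microshift}@${USHIFT_HOST}" "$@"
}
```

The subjectAltNames setup then becomes part of environment provisioning (CI install steps) rather than something the tests themselves do.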
|
/unhold |
|
/lgtm |
|
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: dhellmann, ggiguash, pmtk. The full list of commands accepted by this bot can be found here. The pull request process is described here. Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. |
|
@pmtk: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. |
No description provided.