Add S3 Tests #78
Conversation
api.py (Outdated)
```python
api.request("DELETE", url)

# Wait for snapshots to be deleted
if 'volume-snapshots' in url:
```
No need to check this, if volume-snapshots is not in the URL, this handler would not have been called.
Thanks, fixed.
```python
""" Before deleting an objects user, we have to delete owned buckets. """

user = ObjectsUser.from_href(None, api, url, name="")
user.wait_for_access()
```
In this case the user must already exist. I don't think we need to wait here.
If it already exists and everything checks out, we don't wait much at all anyway. If for some reason there is a problem (I may have hit this occasionally during development), then this makes the cleanup more robust.
```python
for topic in sns.list_topics().get('Topics', ()):
    arn = topic["TopicArn"]
    assert re.match(r'arn:aws:sns:(rma|lpg)::at-.+', arn)
```
As we don't intend to run this against Reef clusters, we might also skip this or mark it for removal after the Squid update.
I would like to keep it as a security measure: if this is ever off for some reason, it would be good to know.
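The assertion under discussion boils down to a prefix check on the topic ARN. A minimal, self-contained sketch of that check (region list and `at-` prefix taken from the diff above; the helper name is hypothetical):

```python
import re

# Every SNS topic ARN must belong to one of the expected regions
# (rma, lpg) and carry the "at-" prefix used by these tests.
# Anything else indicates a leftover or foreign topic.
ARN_PATTERN = re.compile(r'arn:aws:sns:(rma|lpg)::at-.+')

def is_expected_topic(arn):
    """Return True if the ARN matches the expected naming scheme."""
    return bool(ARN_PATTERN.match(arn))
```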
```python
server.assert_run(oneliner("""
    sudo apt update;
    sudo apt install -y podman jq;
    sudo systemd-run --unit webhook.service
        podman run
        --name webhook
        --publish 80:8080
        --env LOG_WITHOUT_NEWLINE=true
        --env DISABLE_REQUEST_LOGS=true
        docker.io/mendhak/http-https-echo:38
"""))
```
Do we want to depend on an external Docker image? This could make tests fail if there is an issue with DockerHub or its rate limiting.
The image used looks nice and has lots of features, but we need few of them. How hard would it be to implement this the same way as lbaas-http-test-server?
Personally I think this is fine. It would probably not be super hard to implement it like lbaas-http-test-server, though I might not be able to do that in time. Let's see if this ever becomes a problem; if we do see test failures that way through no fault of our own, we can change it.
gaudenz
left a comment
I'm a bit unsure about the list of fixtures. Do we really need the access_key and secret_key fixtures? Isn't the objects_user fixture good enough for this?
On the other hand, an s3_client and an sns_client fixture might be handy. I see that constructing the client once in the test to show how to do it is nice, but if we grow this test suite, it might become repetitive.
Then again, even the objects_user fixture is debatable, because we probably also want tests covering objects user and key creation.
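For illustration, a shared helper along these lines is what an s3_client/sns_client fixture pair could centralize. The helper name and exact keyword set are assumptions, not code from this PR; in a real fixture the result would be passed to boto3.client(**kwargs):

```python
# Hypothetical helper that both an s3_client and an sns_client fixture
# could share. It only collects the keyword arguments; a fixture would
# then do: boto3.client(**client_kwargs("s3", key, secret, endpoint)).

def client_kwargs(service, access_key, secret_key, endpoint):
    """Build the keyword arguments for constructing a boto3 client."""
    return {
        "service_name": service,
        "endpoint_url": f"https://{endpoint}",
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }
```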
```python
def objects_endpoint_for(self, zone):
    netloc = urlparse(self.api_url).netloc

    if netloc.startswith("api"):
        prefix = ""
        tld = "ch"
    else:
        prefix = f"{netloc.split('-')[0]}-"
        tld = "zone"

    return f"{prefix}objects.{zone.rstrip('012345679')}.cloudscale.{tld}"
```
I would prefer not to encode our naming scheme here. I would prefer a new environment variable for the URL, with a default of https://objects.<region>.cloudscale.ch.
We can still enforce some sanity checks, for example that setting CLOUDSCALE_API_URL also requires setting the objects URL, and that the URL contains the region name.
As discussed, you will follow up on this internally. Ideally we would have some kind of API path that exposes all this information per API endpoint. In the absence of that, I prefer this solution over introducing additional environment variables.
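To make the naming-scheme concern concrete, the zone-to-region step can be isolated like this. This is a sketch, not code from the PR; note it strips the full digit set, whereas the diff above strips '012345679':

```python
def region_of(zone):
    """Derive the region slug from a zone slug, e.g. "rma1" -> "rma"."""
    return zone.rstrip('0123456789')

def objects_endpoint(zone, prefix="", tld="ch"):
    # Mirrors the objects_endpoint_for() logic discussed above; the
    # default prefix/tld correspond to the production API host.
    return f"{prefix}objects.{region_of(zone)}.cloudscale.{tld}"
```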
I personally err on the side of too many fixtures: they are cheap, simple, and easy to understand. If they make the test code a bit nicer to read, they should be included.
My thought exactly, but since I don't have additional tests yet, I think this would be premature.
This adds an initial set of S3 tests.