SAN-5817 - Store container logs in s3 #1903
Conversation
Commits:
- Updated binding
- Updated binding
- Added task
- Added tail limit
- Updated to use aws.s3
- s -> S
- Updated to use .then instead of .tap
- Create instance of S3
- Updated to only tail if a tail limit is set
- Re-working container log storing
- Updated logging
- Updated to clean stream
- Pass end event through
- Updated trace
- Updated data log
- Updated data length
- Re-ordered code
- Updated order
- WIP
- Being silly
- Being silly
- Being silly
- Meh
- Push raw data through
- Go back in time
- fix all the stream stuff so we can start saving logs to s3
- Merge (conflicts: lib/models/rabbitmq/index.js)
```js
return Promise.try(this._clearBuildResources.bind(this))
  .bind(this)
  .then(this._updateModelsAndGetUpdatedContextVersions)
  .tap(rabbitMQ.publishContainerLogsStore.bind(rabbitMQ, {containerId: this.id}))
```
Suggested:
```js
() => rabbitMQ.publishContainerLogsStore({containerId: this.id})
```
Why?
cuz it looks better
I disagree. Not going to change it unless we have a style guide rule that prevents it.
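For reference, the two styles under discussion behave the same here; a minimal side-by-side sketch (the surrounding chain is elided):

```js
// As written: bind fixes the receiver and the payload up front.
.tap(rabbitMQ.publishContainerLogsStore.bind(rabbitMQ, {containerId: this.id}))

// As suggested: an arrow function keeps the call explicit at the call site.
// Either way, Bluebird's .tap passes the chain's resolved value through unchanged.
.tap(() => rabbitMQ.publishContainerLogsStore({containerId: this.id}))
```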
lib/workers/container.logs.store.js (outdated)
```js
.promise()
.catch((err) => {
  log.error({err}, 'Error uploading logs to s3')
  throw new WorkerStopError('Error uploading logs to s3', {originalError: err})
})
```
This is a bit aggressive; these might just be transient network errors. Just add a max retry count, unless you have specific errors that actually warrant a WorkerStopError.
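One way to act on that suggestion, sketched under assumptions (the helper name and the list of fatal S3 error codes are hypothetical): only clearly unrecoverable errors become a WorkerStopError; everything else is rethrown so the worker retries.

```js
const WorkerStopError = require('error-cat/errors/worker-stop-error')

// Hypothetical helper for the .catch above: stop the worker only on errors
// that retrying cannot fix; let transient ones re-queue the job.
function handleUploadError (err) {
  const fatalCodes = ['AccessDenied', 'NoSuchBucket'] // assumed unrecoverable
  if (fatalCodes.indexOf(err.code) !== -1) {
    throw new WorkerStopError('Error uploading logs to s3', {originalError: err})
  }
  throw err // transient (e.g. network): the worker retries the job
}
```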
lib/workers/container.logs.store.js (outdated)
```js
  throw new WorkerStopError('Error uploading logs to s3', {originalError: err})
})
.then((uploadData) => {
```
nit: ponos should print the end of the worker.
Also, what is uploadData? Is that the entire log string?!
uploadData is the value the upload promise resolves with: the upload results.
Just making sure: are the upload results the entire log in memory?
No, it's the response we get back from S3.
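To make that concrete: with the aws-sdk's managed upload, the resolved value is a small metadata object, not the uploaded body. A sketch (the bucket name and the surrounding containerId, logStream, and log variables are assumed to be in scope):

```js
const AWS = require('aws-sdk')
const s3 = new AWS.S3()

s3.upload({Bucket: 'container-logs', Key: containerId, Body: logStream})
  .promise()
  .then((uploadData) => {
    // Resolves with upload metadata only, e.g.
    // { Location: '...', ETag: '...', Bucket: 'container-logs', Key: containerId }
    log.trace({uploadData}, 'container logs stored')
  })
```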
Deployed.

Tests passed when I run them in small chunks of under 100 locally on bdd.
We need to store container logs when containers die so we can show them for testing environments. We'll eventually need more logic here, storing container histories so we can fetch old versions and so on, but this is the first step.

Logs are stored in S3 keyed by container ID; eventually we'll be able to stream them directly to the client from S3, bypassing the API's sockets.
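A rough sketch of the worker's shape under those constraints (the docker client, bucket env var, and job payload fields are assumptions, not this PR's exact code):

```js
// lib/workers/container.logs.store.js (sketch)
const AWS = require('aws-sdk')
const Promise = require('bluebird')

const s3 = new AWS.S3()

module.exports = function containerLogsStore (job) {
  // Assumed: `docker` is a dockerode client and S3_LOG_BUCKET is configured.
  const container = docker.getContainer(job.containerId)
  return Promise.fromCallback((cb) => {
    container.logs({follow: false, stdout: true, stderr: true}, cb)
  })
    .then((logStream) => s3.upload({
      Bucket: process.env.S3_LOG_BUCKET,
      Key: job.containerId, // keyed by container ID, per the description above
      Body: logStream
    }).promise())
}
```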