fix(s3): retry failed multipart uploads with decreased concurrency #53419
Conversation
Signed-off-by: Kent Delante <kent.delante@proton.me>
/backport to stable31

/backport to stable30
				$uploader->upload();
				$uploaded = true;
			} catch (S3MultipartUploadException $e) {
				$exception = $e;
Is that necessary? If so, you could also simply rename the variable in the catch declaration.
But I would probably do without it to reduce the number of changed lines.
In any case, let's not have two variables with the same content.
I think it makes the code more understandable, as you expect the variable from the catch declaration to be used within the brackets. However, $uploaded is not needed, since a non-null $exception already indicates failure.
Right, I missed the declaration above. It is a bit convoluted, but alright.
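To illustrate the point under discussion, here is a minimal, self-contained sketch (the hypothetical attemptUpload() stands in for the real $uploader->upload(), and the retry schedule is assumed) showing how the last caught exception can double as the failure flag, making a separate $uploaded boolean redundant:

	<?php

	// Hypothetical stand-in for $uploader->upload(); throws on failure.
	function attemptUpload(int $concurrency): void {
		if ($concurrency > 1) {
			throw new RuntimeException("slow down (concurrency $concurrency)");
		}
	}

	$exception = null;
	foreach ([4, 2, 1] as $concurrency) { // assumed retry schedule
		try {
			attemptUpload($concurrency);
			$exception = null; // success: clear any earlier failure
			break;
		} catch (RuntimeException $e) {
			$exception = $e; // remember the most recent failure
		}
	}

	// A non-null $exception already tells us the upload never succeeded,
	// so no separate $uploaded flag is needed.
	if ($exception !== null) {
		throw $exception;
	}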
Ouch, auto merge, my bad; I should have waited for Louis' comment before approving.

Sorry about that. I probably should have held off on enabling auto merge for this.
Summary
Sometimes, multipart upload requests to S3 are too frequent and we get a "slow down" response, causing large file uploads (more than 5 GB) to fail. When the uploader fails, retry the upload and cut the number of concurrent requests in half. This affects S3 as both primary and external storage.
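As a rough sketch of the described behavior, assuming the AWS SDK for PHP's MultipartUploader is used directly (the PR itself patches Nextcloud's S3 object-store code and catches its own S3MultipartUploadException wrapper; $client, $bucket, $urn, the file path, and the starting concurrency are placeholders):

	<?php

	require 'vendor/autoload.php';

	use Aws\Exception\MultipartUploadException;
	use Aws\S3\MultipartUploader;
	use Aws\S3\S3Client;

	$client = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
	$bucket = 'example-bucket'; // placeholder
	$urn = 'urn:oid:123';       // placeholder object key

	$concurrency = 5; // assumed starting value
	$exception = null;

	// Retry the whole upload, halving the number of concurrent part
	// requests each time S3 rejects us (e.g. with a "slow down" error),
	// until we are down to a single request at a time.
	while ($concurrency >= 1) {
		$source = fopen('/tmp/large-file', 'rb'); // reopen so each attempt reads from the start
		$uploader = new MultipartUploader($client, $source, [
			'bucket' => $bucket,
			'key' => $urn,
			'concurrency' => $concurrency,
		]);
		try {
			$uploader->upload();
			$exception = null;
			break;
		} catch (MultipartUploadException $e) {
			$exception = $e;
			$concurrency = intdiv($concurrency, 2); // intdiv(1, 2) === 0 ends the loop
		} finally {
			fclose($source);
		}
	}

	if ($exception !== null) {
		throw $exception; // every attempt failed, surface the last error
	}

Note that the SDK can also resume a failed multipart upload from $e->getState() instead of restarting from scratch; the sketch restarts each attempt for simplicity.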
Checklist