
Is there a file size limit for Amazon S3?

The bash script was running successfully, but it has now stopped working. This line:

aws s3 mv $source_file_path $target_path

is producing the error:

move failed: ../../../../Users/thisuser/Desktop/somefile.zip to s3://cloudbackups/somefile.zip ('Connection aborted.', error(32, 'Broken pipe'))

The source file is more than 8 GB. This is on macOS.

Franck asked Oct 16 '25


2 Answers

From the S3 FAQ:

Q: How much data can I store in Amazon S3?

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.

While the maximum object size is 5 TB, the maximum size for a single PUT operation is 5 GB. This means you will be unable to upload an 8 GB file in a single operation; you need to use multipart upload. Note that AWS recommends multipart upload for any file larger than 100 MB.
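Note that the high-level aws s3 commands (cp, mv, sync) already switch to multipart upload automatically once a file crosses a size threshold, and that behaviour is tunable through the CLI's s3 configuration settings. A minimal sketch (the 64MB values are arbitrary examples, not recommendations):

    # Raise the size at which the high-level commands switch to multipart
    # upload, and the size of each uploaded part (both default to 8 MB).
    aws configure set default.s3.multipart_threshold 64MB
    aws configure set default.s3.multipart_chunksize 64MB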

Multipart uploads have huge reliability advantages: failures and retries are limited in size and scope. To drive the process yourself from the command line, you'll need to familiarize yourself with several low-level commands (create-multipart-upload, upload-part, complete-multipart-upload), documented in the AWS CLI s3api reference; a sketch follows below.
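As a minimal sketch of driving a multipart upload by hand (the bucket and key mirror the question, the 100 MB part size is an arbitrary choice, and every part except the last must be at least 5 MB):

    #!/usr/bin/env bash
    # Sketch: manual multipart upload using the low-level s3api commands.
    set -euo pipefail

    BUCKET=cloudbackups    # assumption: bucket name from the question
    KEY=somefile.zip       # assumption: target object key
    FILE=somefile.zip      # local file to upload

    # 1. Start the multipart upload and capture the upload ID S3 assigns.
    UPLOAD_ID=$(aws s3api create-multipart-upload \
        --bucket "$BUCKET" --key "$KEY" \
        --query UploadId --output text)

    # 2. Split the file into 100 MB parts (BSD split on macOS accepts -b 100m).
    split -b 100m "$FILE" part_

    # 3. Upload each part in order, collecting the ETag S3 returns for it.
    PART=1
    PARTS=""
    for f in part_*; do
      ETAG=$(aws s3api upload-part \
          --bucket "$BUCKET" --key "$KEY" \
          --part-number "$PART" --upload-id "$UPLOAD_ID" \
          --body "$f" --query ETag --output text)
      # Note: the returned ETag already includes surrounding quote characters.
      PARTS+="{\"ETag\": $ETAG, \"PartNumber\": $PART},"
      PART=$((PART + 1))
    done

    # 4. Tell S3 how the parts fit together, completing the object.
    aws s3api complete-multipart-upload \
        --bucket "$BUCKET" --key "$KEY" --upload-id "$UPLOAD_ID" \
        --multipart-upload "{\"Parts\": [${PARTS%,}]}"

If a manual upload fails partway through, the already-uploaded parts remain in the bucket (and are billed) until you either complete the upload or discard it with aws s3api abort-multipart-upload.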

The S3 documentation lists further advantages of multipart upload:

  • Improved throughput - You can upload parts in parallel to improve throughput.
  • Quick recovery from any network issues - Smaller part size minimizes the impact of restarting a failed upload due to a network error.
  • Pause and resume object uploads - You can upload object parts over time. Once you initiate a multipart upload there is no expiry; you must explicitly complete or abort the multipart upload.
  • Begin an upload before you know the final object size - You can upload an object as you are creating it.
Krease answered Oct 18 '25


Amazon S3 has an object size limit of 5 TB, so that clearly isn't your issue.

The Broken pipe message suggests a networking problem. An 8 GB transfer simply runs long enough to be more exposed to transient network failures.

I would suggest retrying it, especially if the failure occurs at a different position each time. If possible, use a more reliable network (e.g. a corporate network rather than a home network).
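Since aws s3 mv only deletes the source file after the upload succeeds, it is safe to wrap the command in a simple retry loop. A minimal sketch (the attempt count and sleep are arbitrary):

    # Retry the move a few times before giving up.
    for attempt in 1 2 3; do
      aws s3 mv "$source_file_path" "$target_path" && break
      echo "Attempt $attempt failed, retrying in 30s..." >&2
      sleep 30
    done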

You can also activate debug output by appending --debug to your command. This might provide additional information for investigating the issue.
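The debug output is very verbose and is written to stderr, so redirecting it to a file makes it easier to inspect:

    # Capture the verbose debug log to a file for later inspection.
    aws s3 mv "$source_file_path" "$target_path" --debug 2> s3_debug.log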

John Rotenstein answered Oct 18 '25