Retrieving a file from a URL without loading it into RAM first

Python's urllib.request module offers a urlopen function for retrieving a URL's content, but reading the response it returns loads everything into main memory. In environments with limited memory this can quickly result in a MemoryError.
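To make the problem concrete, this is the pattern I mean (the URL is just a placeholder):

    from urllib.request import urlopen

    # Reading the whole response buffers the complete payload as a single
    # bytes object in RAM -- exactly what fails for large files.
    with urlopen("https://example.com/large-file.bin") as response:
        data = response.read()  # entire body held in memory at once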

There is another function called urlretrieve which seems to do what I am looking for. For some reason, though, the official documentation mentions that it might become deprecated in the future.
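For reference, this is how I would use it (URL and target path are placeholders):

    from urllib.request import urlretrieve

    # Streams the response straight to the given path instead of buffering
    # the whole body in memory; the docs label this a legacy interface.
    urlretrieve("https://example.com/large-file.bin", "large-file.bin")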

Is there an "official", built-in, non-legacy way to download a file directly to the local file system? I am aware that this can easily be achieved with third-party libraries such as requests, but I am working under strict computational and memory constraints and would therefore favor a built-in solution.

zepp133 asked Dec 05 '25

1 Answer

If you want to limit yourself to Python's standard library, note that urlopen returns an HTTPResponse object, which is file-like: its read method takes an optional size argument, so you can consume the response chunk by chunk. Only one chunk has to sit in RAM at a time while you write each one to disk or elsewhere along the way.
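A minimal sketch of that approach; the URL, output path, and chunk size are placeholders you would adapt:

    from urllib.request import urlopen

    url = "https://example.com/large-file.bin"  # placeholder
    chunk_size = 64 * 1024  # 64 KiB per read keeps peak memory small

    with urlopen(url) as response, open("large-file.bin", "wb") as out_file:
        while True:
            chunk = response.read(chunk_size)  # at most chunk_size bytes
            if not chunk:  # empty bytes object means the body is exhausted
                break
            out_file.write(chunk)

The same copy loop is available ready-made as shutil.copyfileobj(response, out_file), which is equally standard-library-only.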

The requests module makes the whole thing more streamlined.
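For comparison, a sketch of the streaming pattern with requests (same placeholder URL and chunk size):

    import requests

    url = "https://example.com/large-file.bin"  # placeholder

    # stream=True defers fetching the body; iter_content then yields it
    # in chunks, so the full file never sits in memory at once.
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        with open("large-file.bin", "wb") as out_file:
            for chunk in response.iter_content(chunk_size=64 * 1024):
                out_file.write(chunk)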

9000 answered Dec 06 '25


