How to download a large file in Python with requests?
Requests is a really nice library. I'd like to use it to download big files (>1GB). The problem is that it's not possible to keep the whole file in memory; I need to read it in chunks. And this is a problem with the following code:
    import requests

    def DownloadFile(url):
        local_filename = url.split('/')[-1]
        r = requests.get(url)
        f = open(local_filename, 'wb')
        for chunk in r.iter_content(chunk_size=512 * 1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
        f.close()
        return
For some reason it doesn't work this way: it still loads the whole response into memory before saving it to a file.
If you need a small client (Python 2.x/3.x) which can download big files from FTP, you can find it here. It supports multithreading and reconnects (it monitors connections), and it also tunes socket params for the download task.
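That client isn't reproduced here, but for a rough idea of what chunked FTP downloading looks like, here is a minimal sketch using only the standard-library ftplib, without the multithreading, reconnects, or socket tuning mentioned above; the host, paths, and credentials are placeholders:

    from ftplib import FTP

    def ftp_download(host, remote_path, local_path, user='anonymous', passwd=''):
        ftp = FTP(host)
        ftp.login(user=user, passwd=passwd)
        try:
            with open(local_path, 'wb') as f:
                # retrbinary() streams the remote file in fixed-size blocks and
                # passes each block to the callback (f.write), so the whole file
                # never has to fit in memory.
                ftp.retrbinary('RETR ' + remote_path, f.write, blocksize=512 * 1024)
        finally:
            ftp.quit()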
I figured out what should be changed. The trick was to set stream=True in the get() method.
After this the Python process stopped sucking up memory (it stays around 30 KB regardless of the size of the downloaded file).
Thank you @danodonovan for your syntax, I use it here:
    def download_file(url):
        local_filename = url.split('/')[-1]
        # NOTE the stream=True parameter
        r = requests.get(url, stream=True)
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024):
                if chunk:  # filter out keep-alive new chunks
                    f.write(chunk)
                    # f.flush() commented by recommendation from J.F.Sebastian
        return local_filename
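Calling it is then straightforward; the URL below is just a placeholder, any large file served over HTTP(S) works the same way:

    # Hypothetical usage; the URL is a placeholder.
    filename = download_file('https://example.com/big-file.zip')
    print(filename)  # -> big-file.zip, written next to the script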
See http://docs.python-requests.org/en/latest/user/advanced/#body-content-workflow for further reference.
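That section of the docs also notes that with stream=True the connection is only released back to the pool once all of the body has been consumed or the response is closed. A variant of the function above (not part of the original answer, just a sketch) that guarantees this even if the download fails partway is to wrap the response in contextlib.closing:

    import requests
    from contextlib import closing

    def download_file_closing(url):
        local_filename = url.split('/')[-1]
        # closing() calls r.close() on exit, so the connection is released
        # even if writing a chunk raises an exception.
        with closing(requests.get(url, stream=True)) as r:
            with open(local_filename, 'wb') as f:
                for chunk in r.iter_content(chunk_size=1024):
                    if chunk:
                        f.write(chunk)
        return local_filename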