How to save S3 object to a file using boto3

I'm trying to do a "hello world" with the new boto3 client for AWS.

The use-case I have is fairly simple: get an object from S3 and save it to a file.

In boto 2.X I would do it like this:

    import boto
    key = boto.connect_s3().get_bucket('foo').get_key('foo')
    key.get_contents_to_filename('/tmp/foo')

In boto 3, I can't find a clean way to do the same thing, so I'm manually iterating over the "StreamingBody" object:

    import boto3
    key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
    with open('/tmp/my-image.tar.gz', 'wb') as f:
        chunk = key['Body'].read(1024*8)
        while chunk:
            f.write(chunk)
            chunk = key['Body'].read(1024*8)

or

    import boto3
    key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
    with open('/tmp/my-image.tar.gz', 'wb') as f:
        for chunk in iter(lambda: key['Body'].read(4096), b''):
            f.write(chunk)

And it works fine. I was wondering: is there any "native" boto3 function that will do the same task?

There is a customization that went into Boto3 recently which helps with this (among other things). It is currently exposed on the low-level S3 client, and can be used like this:

    import boto3

    s3_client = boto3.client('s3')

    # Create a small local file to upload
    open('hello.txt', 'w').write('Hello, world!')

    # Upload the file to S3
    s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt')

    # Download the file from S3
    s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt')
    print(open('hello2.txt').read())

These functions will automatically handle reading/writing files as well as doing multipart uploads in parallel for large files.
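
If you need to control that transfer behaviour, it can be tuned via `boto3.s3.transfer.TransferConfig` and passed to the transfer methods through the `Config` argument. A minimal sketch, where the bucket/key names and the threshold values are just illustrative:

    import boto3
    from boto3.s3.transfer import TransferConfig

    s3_client = boto3.client('s3')

    # Switch to multipart transfers above 64 MB, with up to 4 threads
    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,
        max_concurrency=4,
    )

    s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt',
                            Config=config)

Later boto3 releases also expose `download_file`/`upload_file` on the resource API (e.g. `boto3.resource('s3').Bucket('MyBucket').download_file('hello-remote.txt', 'hello2.txt')`), so you don't have to drop down to the low-level client at all.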

From: stackoverflow.com/q/29378763