Downloading data from Amazon S3 using Python is a fundamental task for data engineers and developers. The primary tool for this is `boto3`, the official AWS SDK for Python.

1. Prerequisites & Authentication

The easiest way is to use the AWS CLI to run `aws configure`, which stores your AWS Access Key ID and Secret Access Key in a local credentials file. Alternatively, you can pass credentials directly in your script using a `boto3` Session.

2. Core Download Methods

Reading an object's contents directly into memory with `get_object`:

```python
import boto3

s3 = boto3.client('s3')

response = s3.get_object(Bucket='my-bucket-name', Key='data.json')
content = response['Body'].read().decode('utf-8')
print(content)
```

C. Download to File-like Object (`download_fileobj`)

This is useful for streaming data into a buffer like `io.BytesIO` or an already open file handle.

```python
import io

import boto3

s3 = boto3.client('s3')

buffer = io.BytesIO()
s3.download_fileobj('my-bucket-name', 'data.csv', buffer)
buffer.seek(0)  # Reset the pointer to the start before reading
```

3. Downloading Entire Folders or Buckets

S3 has no true folders, only key prefixes, so downloading a "folder" means listing every object under a prefix and downloading each one:

```python
import os

import boto3

bucket_name = 'my-bucket'
prefix = 'my-folder/'  # The "folder" path in S3

bucket = boto3.resource('s3').Bucket(bucket_name)
for obj in bucket.objects.filter(Prefix=prefix):
    # Skip zero-byte keys that act as directory placeholders
    if obj.key.endswith('/'):
        continue
    # Ensure the local directory exists
    directory = os.path.dirname(obj.key)
    if directory:
        os.makedirs(directory, exist_ok=True)
    # Download the file to a local path mirroring its key
    bucket.download_file(obj.key, obj.key)
```