Features

Simple way to set a key as public or to set its Cache-Control and Content-Type headers.

Pool implementation for fast multi-threaded actions.

Support

Python 2.6

Python 2.7

Python 3.2

Python 3.3

PyPy

Installation

$ pip install tinys3

Or if you're using easy_install:

$ easy_install tinys3

Usage

Uploading files to S3

Uploading a single file:

import tinys3

# Creating a simple connection
conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY)

# Uploading a single file
f = open('some_file.zip', 'rb')
conn.upload('some_file.zip', f, 'my_bucket')

Some more options for the connection:

# Specifying a default bucket
conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY, default_bucket='my_bucket')

# So we can skip the bucket parameter on every request
f = open('some_file.zip', 'rb')
conn.upload('some_file.zip', f)

# Controlling the use of TLS
conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY, tls=True)

Setting expiry headers:

# File will be stored in cache for one hour
conn.upload('my_awesome_key.zip', f, bucket='sample_bucket', expires=3600)

# Passing 'max' as the value of 'expires' will make it cacheable for a year
conn.upload('my_awesome_key.zip', f, bucket='sample_bucket', expires='max')

# 'expires' can also handle timedelta objects
from datetime import timedelta

t = timedelta(weeks=5)

# File will be stored in cache for 5 weeks
conn.upload('my_awesome_key.zip', f, bucket='sample_bucket', expires=t)

tinys3 will try to guess the content type from the key (using the mimetypes package),
but you can override it:
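# A minimal sketch, assuming upload() accepts a 'content_type' keyword
conn.upload('my_awesome_key.zip', f, bucket='sample_bucket',
            content_type='application/zip')

# The 'public' and 'headers' options mentioned in the feature list are
# assumed here to work the same way
conn.upload('my_awesome_key.zip', f, bucket='sample_bucket',
            public=False,
            headers={'x-amz-storage-class': 'REDUCED_REDUNDANCY'})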

Copy keys inside/between buckets

Use the 'copy' method to copy a key or update metadata.

# Simple copy between two buckets
conn.copy('source_key.jpg', 'source_bucket', 'target_key.jpg', 'target_bucket')

# No need to specify the target bucket if we're copying inside the same bucket
conn.copy('source_key.jpg', 'source_bucket', 'target_key.jpg')

# We can also update the metadata of the target file
conn.copy('source_key.jpg', 'source_bucket', 'target_key.jpg', 'target_bucket',
          metadata={'x-amz-storage-class': 'REDUCED_REDUNDANCY'})

# Or set the target file as private
conn.copy('source_key.jpg', 'source_bucket', 'target_key.jpg', 'target_bucket',
          public=False)

Updating metadata

# Updating metadata for a key
conn.update_metadata('key.jpg', {'x-amz-storage-class': 'REDUCED_REDUNDANCY'}, 'my_bucket')

# We can also change the privacy of a file, without updating its metadata
conn.update_metadata('key.jpg', bucket='my_bucket', public=False)

Using tinys3's Connection Pool

The pool uses 5 worker threads by default. The 'size' parameter allows us to override it:

pool = tinys3.Pool(S3_ACCESS_KEY, S3_SECRET_KEY, size=25)
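The pool presumably accepts the same options as Connection; a sketch, assuming the 'default_bucket' and 'tls' keywords carry over:

# Assumed keywords, mirroring the Connection examples above
pool = tinys3.Pool(S3_ACCESS_KEY, S3_SECRET_KEY, size=25,
                   default_bucket='my_bucket', tls=True)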

Using the pool to perform actions:

# Let's use the pool to delete a file
>>> r = pool.delete('a_key_to_delete.zip', 'my_bucket')
<Future at 0x2c8de48L state=pending>

# Futures are the standard Python implementation of the "promise" pattern
# You can read more about them here:
# http://docs.python.org/3.3/library/concurrent.futures.html#future-objects

# Did we finish?
>>> r.done()
False

# Block until the response is completed
>>> r.result()
<Response [200]>

# Block until completed, with a timeout
# If the response is not completed before the timeout has passed,
# a TimeoutError will be raised
>>> r.result(timeout=120)
<Response [200]>

Using as_completed and all_completed

# First we'll create a lot of async requests
>>> requests = []
>>> for i in range(100):
...     requests.append(pool.delete('key' + str(i), 'my_bucket'))

# The helper methods as_completed and all_completed help us work
# with multiple Future objects

# This will block until all the requests are completed
# The results are the responses themselves, without the Future wrappers
>>> pool.all_completed(requests)
[<Response [200]>, ...]

# The as_completed generator will yield on every completed request
>>> for r in pool.as_completed(requests):
...     # r is the response object itself, without the Future wrapper
...     print(r)