S3

boto.s3.acl

class boto.s3.acl.ACL(policy=None)
add_email_grant(permission, email_address)
add_grant(grant)
add_user_grant(permission, user_id, display_name=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
class boto.s3.acl.Grant(permission=None, type=None, id=None, display_name=None, uri=None, email_address=None)
NameSpace = 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
class boto.s3.acl.Policy(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
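
A Grant serializes to the standard S3 ACL XML. As a rough standalone sketch of what an email grant looks like on the wire (grant_to_xml is a hypothetical helper written for illustration; the real serialization is produced by Grant.to_xml()):

```python
# Illustrative sketch of the XML an email grant serializes to, assuming the
# standard S3 ACL wire format (AmazonCustomerByEmail grantee). grant_to_xml
# is NOT part of boto.
NameSpace = 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'

def grant_to_xml(permission, email_address):
    template = ('<Grant><Grantee {ns} xsi:type="AmazonCustomerByEmail">'
                '<EmailAddress>{email}</EmailAddress></Grantee>'
                '<Permission>{perm}</Permission></Grant>')
    return template.format(ns=NameSpace, email=email_address, perm=permission)

xml = grant_to_xml('READ', 'user@example.com')
```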

boto.s3.bucket

class boto.s3.bucket.Bucket(connection=None, name=None, key_class=<class 'boto.s3.key.Key'>)
BucketLoggingBody = '<?xml version="1.0" encoding="UTF-8"?>\n <BucketLoggingStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n <LoggingEnabled>\n <TargetBucket>%s</TargetBucket>\n <TargetPrefix>%s</TargetPrefix>\n </LoggingEnabled>\n </BucketLoggingStatus>'
BucketPaymentBody = '<?xml version="1.0" encoding="UTF-8"?>\n <RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n <Payer>%s</Payer>\n </RequestPaymentConfiguration>'
EmptyBucketLoggingBody = '<?xml version="1.0" encoding="UTF-8"?>\n <BucketLoggingStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n </BucketLoggingStatus>'
LoggingGroup = 'http://acs.amazonaws.com/groups/s3/LogDelivery'
MFADeleteRE = '<MfaDelete>([A-Za-z]+)</MfaDelete>'
VersionRE = '<Status>([A-Za-z]+)</Status>'
VersioningBody = '<?xml version="1.0" encoding="UTF-8"?>\n <VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n <Status>%s</Status>\n <MfaDelete>%s</MfaDelete>\n </VersioningConfiguration>'
WebsiteBody = '<?xml version="1.0" encoding="UTF-8"?>\n <WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n <IndexDocument><Suffix>%s</Suffix></IndexDocument>\n %s\n </WebsiteConfiguration>'
WebsiteErrorFragment = '<ErrorDocument><Key>%s</Key></ErrorDocument>'
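
These class attributes are plain %-format templates and regular expressions. A standalone sketch of how the versioning body is rendered and then parsed back (no S3 call involved):

```python
import re

# Copies of the Bucket class constants shown above.
VersioningBody = ('<?xml version="1.0" encoding="UTF-8"?>\n'
                  '<VersioningConfiguration '
                  'xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n'
                  '<Status>%s</Status>\n<MfaDelete>%s</MfaDelete>\n'
                  '</VersioningConfiguration>')
VersionRE = '<Status>([A-Za-z]+)</Status>'
MFADeleteRE = '<MfaDelete>([A-Za-z]+)</MfaDelete>'

# Render the request body the way configure_versioning would...
body = VersioningBody % ('Enabled', 'Disabled')

# ...and pull the values back out the way get_versioning_status would.
status = re.search(VersionRE, body).group(1)
mfa = re.search(MFADeleteRE, body).group(1)
```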
add_email_grant(permission, email_address, recursive=False, headers=None)

Convenience method that provides a quick way to add an email grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUTs the new ACL back to S3.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
  • email_address (string) – The email address associated with the AWS account you are granting the permission to.
  • recursive (boolean) – Controls whether the grant is applied to all keys within the bucket. The default value is False. If True, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time!
add_user_grant(permission, user_id, recursive=False, headers=None, display_name=None)

Convenience method that provides a quick way to add a canonical user grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUTs the new ACL back to S3.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
  • user_id (string) – The canonical user id associated with the AWS account you are granting the permission to.
  • recursive (boolean) – Controls whether the grant is applied to all keys within the bucket. The default value is False. If True, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time!
  • display_name (string) – An optional string containing the user’s Display Name. Only required on Walrus.
cancel_multipart_upload(key_name, upload_id, headers=None)
complete_multipart_upload(key_name, upload_id, xml_body, headers=None)

Complete a multipart upload operation.

configure_versioning(versioning, mfa_delete=False, mfa_token=None, headers=None)

Configure versioning for this bucket.

.. note:: This feature is currently in beta release and is available
only in the Northern California region.

Parameters:
  • versioning (bool) – A boolean indicating whether versioning is enabled (True) or disabled (False).
  • mfa_delete (bool) – A boolean indicating whether the Multi-Factor Authentication Delete feature is enabled (True) or disabled (False). If mfa_delete is enabled then all Delete operations will require the token from your MFA device to be passed in the request.
  • mfa_token (tuple or list of strings) – A tuple or list consisting of the serial number from the MFA device and the current value of the six-digit token associated with the device. This value is required when you are changing the status of the MfaDelete property of the bucket.
configure_website(suffix, error_key='', headers=None)

Configure this bucket to act as a website

Parameters:
  • suffix (str) – Suffix that is appended to a request that is for a “directory” on the website endpoint (e.g. if the suffix is index.html and you make a request to samplebucket/images/ the data that is returned will be for the object with the key name images/index.html). The suffix must not be empty and must not include a slash character.
  • error_key (str) – The object key name to use when a 4XX class error occurs. This is optional.
copy_key(new_key_name, src_bucket_name, src_key_name, metadata=None, src_version_id=None, storage_class='STANDARD', preserve_acl=False)

Create a new key in the bucket by copying another existing key.

Parameters:
  • new_key_name (string) – The name of the new key
  • src_bucket_name (string) – The name of the source bucket
  • src_key_name (string) – The name of the source key
  • src_version_id (string) – The version id for the key. This param is optional. If not specified, the newest version of the key will be copied.
  • metadata (dict) – Metadata to be associated with new key. If metadata is supplied, it will replace the metadata of the source key being copied. If no metadata is supplied, the source key’s metadata will be copied to the new key.
  • storage_class (string) – The storage class of the new key. By default, the new key will use the standard storage class. Possible values are: STANDARD | REDUCED_REDUNDANCY
  • preserve_acl (bool) – If True, the ACL from the source key will be copied to the destination key. If False, the destination key will have the default ACL. Note that preserving the ACL in the new key object will require two additional API calls to S3, one to retrieve the current ACL and one to set that ACL on the new object. If you don’t care about the ACL, a value of False will be significantly more efficient.
Return type:

boto.s3.key.Key or subclass

Returns:

An instance of the newly created key object

delete(headers=None)
delete_key(key_name, headers=None, version_id=None, mfa_token=None)

Deletes a key from the bucket. If a version_id is provided, only that version of the key will be deleted.

Parameters:
  • key_name (string) – The key name to delete
  • version_id (string) – The version ID (optional)
  • mfa_token (tuple or list of strings) – A tuple or list consisting of the serial number from the MFA device and the current value of the six-digit token associated with the device. This value is required anytime you are deleting versioned objects from a bucket that has the MFADelete option on the bucket.
delete_website_configuration(headers=None)

Removes all website configuration from the bucket.

disable_logging(headers=None)
enable_logging(target_bucket, target_prefix='', headers=None)
endElement(name, value, connection)
generate_url(expires_in, method='GET', headers=None, force_http=False, response_headers=None)
get_acl(key_name='', headers=None, version_id=None)
get_all_keys(headers=None, **params)

A lower-level method for listing contents of a bucket. This closely models the actual S3 API and requires you to manually handle the paging of results. For a higher-level method that handles the details of paging for you, you can use the list method.

Parameters:
  • max_keys (int) – The maximum number of keys to retrieve
  • prefix (string) – The prefix of the keys you want to retrieve
  • marker (string) – The “marker” of where you are in the result set
  • delimiter (string) – If this optional, Unicode string parameter is included with your request, then keys that contain the same string between the prefix and the first occurrence of the delimiter will be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response.
Return type:

ResultSet

Returns:

The result from S3 listing the keys requested
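
The manual paging this method requires can be sketched with a stand-in bucket. FakeBucket and FakeResultSet below are hypothetical stubs that model only the max_keys/marker/is_truncated contract, so no S3 connection is needed:

```python
class FakeResultSet(list):
    """Stub for boto's ResultSet: a list plus an is_truncated flag."""
    def __init__(self, items, is_truncated):
        super().__init__(items)
        self.is_truncated = is_truncated

class FakeBucket:
    """Stub bucket that serves sorted key names page by page."""
    def __init__(self, names):
        self._names = sorted(names)
    def get_all_keys(self, max_keys=1000, marker='', prefix=''):
        matches = [n for n in self._names
                   if n > marker and n.startswith(prefix)]
        return FakeResultSet(matches[:max_keys],
                             is_truncated=len(matches) > max_keys)

# The caller-side loop: keep requesting pages, advancing the marker to the
# last key of each page, until is_truncated is False.
bucket = FakeBucket(['a', 'b', 'c', 'd', 'e'])
keys, marker = [], ''
while True:
    rs = bucket.get_all_keys(max_keys=2, marker=marker)
    keys.extend(rs)
    if not rs.is_truncated:
        break
    marker = rs[-1]
```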

get_all_multipart_uploads(headers=None, **params)

A lower-level, version-aware method for listing active MultiPart uploads for a bucket. This closely models the actual S3 API and requires you to manually handle the paging of results. For a higher-level method that handles the details of paging for you, you can use the list method.

Parameters:
  • max_uploads (int) – The maximum number of uploads to retrieve. Default value is 1000.
  • key_marker (string) –

    Together with upload_id_marker, this parameter specifies the multipart upload after which listing should begin. If upload_id_marker is not specified, only the keys lexicographically greater than the specified key_marker will be included in the list.

    If upload_id_marker is specified, any multipart uploads for a key equal to the key_marker might also be included, provided those multipart uploads have upload IDs lexicographically greater than the specified upload_id_marker.

  • upload_id_marker (string) – Together with key-marker, specifies the multipart upload after which listing should begin. If key_marker is not specified, the upload_id_marker parameter is ignored. Otherwise, any multipart uploads for a key equal to the key_marker might be included in the list only if they have an upload ID lexicographically greater than the specified upload_id_marker.
Return type:

ResultSet

Returns:

The result from S3 listing the uploads requested

get_all_versions(headers=None, **params)

A lower-level, version-aware method for listing contents of a bucket. This closely models the actual S3 API and requires you to manually handle the paging of results. For a higher-level method that handles the details of paging for you, you can use the list method.

Parameters:
  • max_keys (int) – The maximum number of keys to retrieve
  • prefix (string) – The prefix of the keys you want to retrieve
  • key_marker (string) – The “marker” of where you are in the result set with respect to keys.
  • version_id_marker (string) – The “marker” of where you are in the result set with respect to version-id’s.
  • delimiter (string) – If this optional, Unicode string parameter is included with your request, then keys that contain the same string between the prefix and the first occurrence of the delimiter will be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response.
Return type:

ResultSet

Returns:

The result from S3 listing the keys requested

get_key(key_name, headers=None, version_id=None)

Check to see if a particular key exists within the bucket. This method uses a HEAD request to check for the existence of the key. Returns: An instance of a Key object or None

Parameters:key_name (string) – The name of the key to retrieve
Return type:boto.s3.key.Key
Returns:A Key object from this bucket.
get_location()

Returns the LocationConstraint for the bucket.

Return type:str
Returns:The LocationConstraint for the bucket or the empty string if no constraint was specified when bucket was created.
get_logging_status(headers=None)
get_policy(headers=None)
get_request_payment(headers=None)
get_versioning_status(headers=None)

Returns the current status of versioning on the bucket.

Return type:dict
Returns:A dictionary containing a key named ‘Versioning’ that can have a value of either Enabled, Disabled, or Suspended. Also, if MFADelete has ever been enabled on the bucket, the dictionary will contain a key named ‘MFADelete’ which will have a value of either Enabled or Suspended.
get_website_configuration(headers=None)

Returns the current status of website configuration on the bucket.

Return type:dict
Returns:
A dictionary containing a Python representation
of the XML response from S3. The overall structure is:
  • WebsiteConfiguration
    • IndexDocument
      • Suffix : suffix that is appended to a request that is for a “directory” on the website endpoint
    • ErrorDocument
      • Key : name of object to serve when an error occurs
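
A standalone sketch of how such a dictionary could be derived from the website-configuration XML. The parsing below uses xml.etree for illustration; boto actually builds the dict with its own SAX handlers:

```python
import xml.etree.ElementTree as ET

NS = '{http://s3.amazonaws.com/doc/2006-03-01/}'

# Example response body in the shape the WebsiteBody template produces.
xml_body = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    '<IndexDocument><Suffix>index.html</Suffix></IndexDocument>'
    '<ErrorDocument><Key>error.html</Key></ErrorDocument>'
    '</WebsiteConfiguration>')

root = ET.fromstring(xml_body)
config = {'WebsiteConfiguration': {
    'IndexDocument': {'Suffix': root.find(NS + 'IndexDocument/'
                                          + NS + 'Suffix').text},
    'ErrorDocument': {'Key': root.find(NS + 'ErrorDocument/'
                                       + NS + 'Key').text},
}}
```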
get_website_endpoint()

Returns the fully qualified hostname to use if you want to access this bucket as a website. This doesn’t validate whether the bucket has been correctly configured as a website or not.

get_xml_acl(key_name='', headers=None, version_id=None)
initiate_multipart_upload(key_name, headers=None, reduced_redundancy=False, metadata=None)

Start a multipart upload operation.

Parameters:
  • key_name (string) – The name of the key that will ultimately result from this multipart upload operation. This will be exactly as the key appears in the bucket after the upload process has been completed.
  • headers (dict) – Additional HTTP headers to send and store with the resulting key in S3.
  • reduced_redundancy (boolean) – In multipart uploads, the storage class is specified when initiating the upload, not when uploading individual parts. So if you want the resulting key to use the reduced redundancy storage class set this flag when you initiate the upload.
  • metadata (dict) – Any metadata that you would like to set on the key that results from the multipart upload.
list(prefix='', delimiter='', marker='', headers=None)

List key objects within a bucket. This returns an instance of a BucketListResultSet that automatically handles all of the result paging, etc. from S3. You just need to keep iterating until there are no more results.

Called with no arguments, this will return an iterator object across all keys within the bucket.

The Key objects returned by the iterator are obtained by parsing the results of a GET on the bucket, also known as the List Objects request. The XML returned by this request contains only a subset of the information about each key. Certain metadata fields such as Content-Type and user metadata are not available in the XML. Therefore, if you want these additional metadata fields you will have to do a HEAD request on the Key in the bucket.

Parameters:
  • prefix (string) – allows you to limit the listing to a particular prefix. For example, if you call the method with prefix=’/foo/’ then the iterator will only cycle through the keys that begin with the string ‘/foo/’.
  • delimiter (string) – can be used in conjunction with the prefix to allow you to organize and browse your keys hierarchically. See: http://docs.amazonwebservices.com/AmazonS3/2006-03-01/ for more details.
  • marker (string) – The “marker” of where you are in the result set
Return type:

boto.s3.bucketlistresultset.BucketListResultSet

Returns:

an instance of a BucketListResultSet that handles paging, etc

list_grants(headers=None)
list_multipart_uploads(key_marker='', upload_id_marker='', headers=None)

List multipart upload objects within a bucket. This returns an instance of a MultiPartUploadListResultSet that automatically handles all of the result paging, etc. from S3. You just need to keep iterating until there are no more results.

Parameters:
  • key_marker (string) – The “marker” of where you are in the result set with respect to keys.
  • upload_id_marker (string) – The “marker” of where you are in the result set with respect to upload ids.
Return type:boto.s3.bucketlistresultset.MultiPartUploadListResultSet
Returns:an instance of a MultiPartUploadListResultSet that handles paging, etc
list_versions(prefix='', delimiter='', key_marker='', version_id_marker='', headers=None)

List version objects within a bucket. This returns an instance of a VersionedBucketListResultSet that automatically handles all of the result paging, etc. from S3. You just need to keep iterating until there are no more results. Called with no arguments, this will return an iterator object across all keys within the bucket.

Parameters:
  • prefix (string) – allows you to limit the listing to a particular prefix. For example, if you call the method with prefix=’/foo/’ then the iterator will only cycle through the keys that begin with the string ‘/foo/’.
  • delimiter (string) – can be used in conjunction with the prefix to allow you to organize and browse your keys hierarchically. See: http://docs.amazonwebservices.com/AmazonS3/2006-03-01/ for more details.
  • key_marker (string) – The “marker” of where you are in the result set with respect to keys.
Return type:

boto.s3.bucketlistresultset.VersionedBucketListResultSet

Returns:

an instance of a VersionedBucketListResultSet that handles paging, etc

lookup(key_name, headers=None)

Deprecated: Please use get_key method.

Parameters:key_name (string) – The name of the key to retrieve
Return type:boto.s3.key.Key
Returns:A Key object from this bucket.
make_public(recursive=False, headers=None)
new_key(key_name=None)

Creates a new key

Parameters:key_name (string) – The name of the key to create
Return type:boto.s3.key.Key or subclass
Returns:An instance of the newly created key object
set_acl(acl_or_str, key_name='', headers=None, version_id=None)
set_as_logging_target(headers=None)
set_canned_acl(acl_str, key_name='', headers=None, version_id=None)
set_key_class(key_class)

Set the Key class associated with this bucket. By default, this would be the boto.s3.key.Key class, but if you want to subclass that for some reason, this allows you to associate your new class with a bucket so that when you call bucket.new_key() or get a listing of keys in the bucket, you will get instances of your key class rather than the default.

Parameters:key_class (class) – A subclass of Key that can be more specific
set_policy(policy, headers=None)
set_request_payment(payer='BucketOwner', headers=None)
set_xml_acl(acl_str, key_name='', headers=None, version_id=None)
startElement(name, attrs, connection)
class boto.s3.bucket.S3WebsiteEndpointTranslate
trans_region = defaultdict(<lambda>, {'EU': 's3-website-eu-west-1', 'ap-northeast-1': 's3-website-ap-northeast-1', 'us-west-1': 's3-website-us-west-1', 'ap-southeast-1': 's3-website-ap-southeast-1'})
classmethod translate_region(reg)
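
A sketch of the translation table above. The default factory is an assumption here: we take unknown regions to fall back to the classic us-east-1 website endpoint, which the listing does not confirm:

```python
from collections import defaultdict

# Region -> website-endpoint prefix table; the lambda default is an
# assumption for regions not listed explicitly.
trans_region = defaultdict(lambda: 's3-website-us-east-1',
                           {'EU': 's3-website-eu-west-1',
                            'ap-northeast-1': 's3-website-ap-northeast-1',
                            'us-west-1': 's3-website-us-west-1',
                            'ap-southeast-1': 's3-website-ap-southeast-1'})

def translate_region(reg):
    return trans_region[reg]
```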

boto.s3.bucketlistresultset

class boto.s3.bucketlistresultset.BucketListResultSet(bucket=None, prefix='', delimiter='', marker='', headers=None)

A resultset for listing keys within a bucket. Uses the bucket_lister generator function and implements the iterator interface. This transparently handles the results paging from S3 so even if you have many thousands of keys within the bucket you can iterate over all keys in a reasonably efficient manner.

class boto.s3.bucketlistresultset.MultiPartUploadListResultSet(bucket=None, key_marker='', upload_id_marker='', headers=None)

A resultset for listing multipart uploads within a bucket. Uses the multipart_upload_lister generator function and implements the iterator interface. This transparently handles the results paging from S3 so even if you have many thousands of uploads within the bucket you can iterate over all keys in a reasonably efficient manner.

class boto.s3.bucketlistresultset.VersionedBucketListResultSet(bucket=None, prefix='', delimiter='', key_marker='', version_id_marker='', headers=None)

A resultset for listing versions within a bucket. Uses the bucket_lister generator function and implements the iterator interface. This transparently handles the results paging from S3 so even if you have many thousands of keys within the bucket you can iterate over all keys in a reasonably efficient manner.

boto.s3.bucketlistresultset.bucket_lister(bucket, prefix='', delimiter='', marker='', headers=None)

A generator function for listing keys in a bucket.
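
Its paging behavior can be sketched as follows. FakeBucket and FakeKey are hypothetical stubs modeling only the get_all_keys/is_truncated contract; the generator itself mirrors the documented behavior of yielding keys while advancing the marker:

```python
import collections

FakeKey = collections.namedtuple('FakeKey', 'name')

class _Page(list):
    is_truncated = False

class FakeBucket:
    """Stub bucket that serves two keys per page (illustrative only)."""
    def __init__(self, names):
        self._names = sorted(names)
    def get_all_keys(self, prefix='', marker='', delimiter='', headers=None):
        remaining = [n for n in self._names if n > marker]
        page = _Page(FakeKey(n) for n in remaining[:2])
        page.is_truncated = len(remaining) > 2
        return page

def bucket_lister(bucket, prefix='', delimiter='', marker='', headers=None):
    # Keep fetching pages until S3 reports the listing is no longer truncated.
    more = True
    while more:
        rs = bucket.get_all_keys(prefix=prefix, marker=marker,
                                 delimiter=delimiter, headers=headers)
        for key in rs:
            yield key
            marker = key.name
        more = rs.is_truncated

names = [k.name for k in bucket_lister(FakeBucket(['a', 'b', 'c', 'd', 'e']))]
```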

boto.s3.bucketlistresultset.multipart_upload_lister(bucket, key_marker='', upload_id_marker='', headers=None)

A generator function for listing multipart uploads in a bucket.

boto.s3.bucketlistresultset.versioned_bucket_lister(bucket, prefix='', delimiter='', key_marker='', version_id_marker='', headers=None)

A generator function for listing versions in a bucket.

boto.s3.connection

class boto.s3.connection.Location
APNortheast = 'ap-northeast-1'
APSoutheast = 'ap-southeast-1'
DEFAULT = ''
EU = 'EU'
USWest = 'us-west-1'
class boto.s3.connection.OrdinaryCallingFormat
build_path_base(bucket, key='')
get_bucket_server(server, bucket)
class boto.s3.connection.ProtocolIndependentOrdinaryCallingFormat
build_url_base(connection, protocol, server, bucket, key='')
class boto.s3.connection.S3Connection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host='s3.amazonaws.com', debug=0, https_connection_factory=None, calling_format=<boto.s3.connection.SubdomainCallingFormat instance>, path='/', provider='aws', bucket_class=<class 'boto.s3.bucket.Bucket'>)
DefaultHost = 's3.amazonaws.com'
QueryString = 'Signature=%s&Expires=%d&AWSAccessKeyId=%s'
build_post_form_args(bucket_name, key, expires_in=6000, acl=None, success_action_redirect=None, max_content_length=None, http_method='http', fields=None, conditions=None)

Taken from the AWS book Python examples and modified for use with boto. This only returns the arguments required for the POST form, not the actual form. It does not return the file input field, which also needs to be added.

Parameters:
  • bucket_name (string) – Bucket to submit to
  • key (string) – Key name; optionally append ${filename} to the end to attach the submitted filename
  • expires_in (integer) – Time (in seconds) before this expires, defaults to 6000
  • acl (boto.s3.acl.ACL) – ACL rule to use, if any
  • success_action_redirect (string) – URL to redirect to on success
  • max_content_length (integer) – Maximum size for this file
  • http_method (string) – HTTP Method to use, “http” or “https”
Return type:

dict

Returns:

A dictionary containing field names/values as well as a url to POST to

{
    "action": action_url_to_post_to, 
    "fields": [ 
        {
            "name": field_name, 
            "value":  field_value
        }, 
        {
            "name": field_name2, 
            "value": field_value2
        } 
    ] 
}

build_post_policy(expiration_time, conditions)

Taken from the AWS book Python examples and modified for use with boto
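
The underlying mechanism is the S3 POST policy: a JSON document with an expiration timestamp and a list of conditions, base64-encoded and HMAC-SHA1 signed. A standalone sketch (the secret_key parameter is added here for illustration; boto's method takes only expiration_time and conditions, with signing handled elsewhere):

```python
import base64
import hashlib
import hmac
import json

def build_post_policy(expiration_time, conditions, secret_key):
    # The policy document S3 expects: JSON, then base64, then HMAC-SHA1.
    policy = json.dumps({'expiration': expiration_time,
                         'conditions': conditions})
    encoded = base64.b64encode(policy.encode('utf-8'))
    signature = base64.b64encode(
        hmac.new(secret_key.encode('utf-8'), encoded, hashlib.sha1).digest())
    return encoded.decode('ascii'), signature.decode('ascii')

policy_b64, sig = build_post_policy(
    '2012-01-01T00:00:00Z',
    [{'bucket': 'examplebucket'}, ['starts-with', '$key', 'uploads/']],
    'hypothetical-secret')
```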

create_bucket(bucket_name, headers=None, location='', policy=None)

Creates a new bucket in the specified location. By default, the bucket is created in the US. You can pass Location.EU to create a European bucket.

Parameters:
  • bucket_name (string) – The name of the new bucket
  • headers (dict) – Additional headers to pass along with the request to AWS.
  • location (boto.s3.connection.Location) – The location of the new bucket
  • policy (boto.s3.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new bucket in S3.
delete_bucket(bucket, headers=None)
generate_url(expires_in, method, bucket='', key='', headers=None, query_auth=True, force_http=False, response_headers=None)
get_all_buckets(headers=None)
get_bucket(bucket_name, validate=True, headers=None)
get_canonical_user_id(headers=None)

Convenience method that returns the “CanonicalUserID” of the user whose credentials are associated with the connection. The only way to get this value is to do a GET request on the service, which returns all buckets associated with the account. As part of that response, the canonical user id is returned. This method simply does all of that and then returns just the user id.

Return type:string
Returns:A string containing the canonical user id.
lookup(bucket_name, validate=True, headers=None)
make_request(method, bucket='', key='', headers=None, data='', query_args=None, sender=None, override_num_retries=None)
set_bucket_class(bucket_class)

Set the Bucket class associated with this connection. By default, this would be the boto.s3.bucket.Bucket class, but if you want to subclass that for some reason, this allows you to associate your new class with the connection.

Parameters:bucket_class (class) – A subclass of Bucket that can be more specific
class boto.s3.connection.SubdomainCallingFormat
get_bucket_server(*args, **kwargs)
class boto.s3.connection.VHostCallingFormat
get_bucket_server(*args, **kwargs)
boto.s3.connection.assert_case_insensitive(f)
boto.s3.connection.check_lowercase_bucketname(n)

Bucket names must not contain uppercase characters. We check for this by appending a lowercase character and testing with islower(). Note this also covers cases like numeric bucket names with dashes.

>>> check_lowercase_bucketname("Aaaa")
Traceback (most recent call last):
...
BotoClientError: S3Error: Bucket names cannot contain upper-case
characters when using either the sub-domain or virtual hosting calling
format.
>>> check_lowercase_bucketname("1234-5678-9123")
True
>>> check_lowercase_bucketname("abcdefg1234")
True
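
The check described above can be sketched as follows, raising ValueError here rather than boto's BotoClientError:

```python
def check_lowercase_bucketname(n):
    # Appending a lowercase letter makes islower() behave correctly even for
    # names with no cased characters at all, e.g. '1234-5678-9123'.
    if not (n + 'a').islower():
        raise ValueError('Bucket names cannot contain upper-case characters '
                         'when using either the sub-domain or virtual '
                         'hosting calling format.')
    return True
```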

boto.s3.key

class boto.s3.key.Key(bucket=None, name=None)
BufferSize = 8192
DefaultContentType = 'application/octet-stream'
add_email_grant(permission, email_address, headers=None)

Convenience method that provides a quick way to add an email grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUTs the new ACL back to S3.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
  • email_address (string) – The email address associated with the AWS account you are granting the permission to.
add_user_grant(permission, user_id, headers=None, display_name=None)

Convenience method that provides a quick way to add a canonical user grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUTs the new ACL back to S3.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
  • user_id (string) – The canonical user id associated with the AWS account you are granting the permission to.
  • display_name (string) – An optional string containing the user’s Display Name. Only required on Walrus.
change_storage_class(new_storage_class, dst_bucket=None)

Change the storage class of an existing key. Depending on whether a different destination bucket is supplied or not, this will either move the item within the bucket, preserving all metadata and ACL info while changing the storage class, or it will copy the item to the provided destination bucket, also preserving metadata and ACL info.

Parameters:
  • new_storage_class (string) – The new storage class for the Key. Possible values are: STANDARD | REDUCED_REDUNDANCY
  • dst_bucket (string) – The name of a destination bucket. If not provided the current bucket of the key will be used.
close()
closed = False
compute_md5(fp)
Parameters:fp (file) – File pointer to the file to MD5 hash. The file pointer will be reset to the beginning of the file before the method returns.
Return type:tuple
Returns:A tuple containing the hex digest version of the MD5 hash as the first element and the base64 encoded version of the plain digest as the second element.
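
The documented contract can be sketched with hashlib: a hex digest plus the base64-encoded binary digest, with the file pointer reset before returning:

```python
import base64
import hashlib
import io

def compute_md5(fp, buffer_size=8192):
    # Hash the file in chunks so large files never sit in memory whole.
    m = hashlib.md5()
    chunk = fp.read(buffer_size)
    while chunk:
        m.update(chunk)
        chunk = fp.read(buffer_size)
    fp.seek(0)  # reset to the beginning, as the docstring promises
    return m.hexdigest(), base64.b64encode(m.digest()).decode('ascii')

hex_digest, b64_digest = compute_md5(io.BytesIO(b'hello world'))
```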
copy(dst_bucket, dst_key, metadata=None, reduced_redundancy=False, preserve_acl=False)

Copy this Key to another bucket.

Parameters:
  • dst_bucket (string) – The name of the destination bucket
  • dst_key (string) – The name of the destination key
  • metadata (dict) – Metadata to be associated with new key. If metadata is supplied, it will replace the metadata of the source key being copied. If no metadata is supplied, the source key’s metadata will be copied to the new key.
  • reduced_redundancy (bool) – If True, this will force the storage class of the new Key to be REDUCED_REDUNDANCY regardless of the storage class of the key being copied. The Reduced Redundancy Storage (RRS) feature of S3, provides lower redundancy at lower storage cost.
  • preserve_acl (bool) – If True, the ACL from the source key will be copied to the destination key. If False, the destination key will have the default ACL. Note that preserving the ACL in the new key object will require two additional API calls to S3, one to retrieve the current ACL and one to set that ACL on the new object. If you don’t care about the ACL, a value of False will be significantly more efficient.
Return type:

boto.s3.key.Key or subclass

Returns:

An instance of the newly created key object

delete()

Delete this key from S3

endElement(name, value, connection)
exists()

Returns True if the key exists

Return type:bool
Returns:Whether the key exists on S3
generate_url(expires_in, method='GET', headers=None, query_auth=True, force_http=False, response_headers=None)

Generate a URL to access this key.

Parameters:
  • expires_in (int) – How long the url is valid for, in seconds
  • method (string) – The method to use for retrieving the file (default is GET)
  • headers (dict) – Any headers to pass along in the request
  • query_auth (bool) –
Return type:

string

Returns:

The URL to access the key
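
When query_auth is True, the URL carries the QueryString format shown on S3Connection: a base64 HMAC-SHA1 signature over a canonical string. A standalone sketch of signature-v2 query auth (the canonical string here omits headers and covers only the simple GET case, and expires is an absolute epoch timestamp rather than boto's relative expires_in):

```python
import base64
import hashlib
import hmac
import urllib.parse

def generate_url(access_key, secret_key, bucket, key, expires):
    # Canonical string for a bare signed GET: method, blank content-md5 and
    # content-type, absolute expiry, and the resource path.
    string_to_sign = 'GET\n\n\n%d\n/%s/%s' % (expires, bucket, key)
    sig = base64.b64encode(hmac.new(secret_key.encode('utf-8'),
                                    string_to_sign.encode('utf-8'),
                                    hashlib.sha1).digest())
    query = 'Signature=%s&Expires=%d&AWSAccessKeyId=%s' % (
        urllib.parse.quote(sig, safe=''), expires, access_key)
    return 'https://%s.s3.amazonaws.com/%s?%s' % (bucket, key, query)

url = generate_url('AKIDEXAMPLE', 'hypothetical-secret',
                   'examplebucket', 'photo.jpg', 1175139620)
```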

get_acl(headers=None)
get_contents_as_string(headers=None, cb=None, num_cb=10, torrent=False, version_id=None, response_headers=None)

Retrieve an object from S3 using the name of the Key object as the key in S3. Return the contents of the object as a string. See get_contents_to_file method for details about the parameters.

Parameters:
  • headers (dict) – Any additional headers to send in the request
  • cb (function) – a callback function that will be called to report progress on the transfer. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transferred and the second representing the total size of the object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – If True, returns the contents of a torrent file as a string.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
Return type:

string

Returns:

The contents of the file as a string
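
The cb callback contract (two integers: bytes transferred so far and total size) can be sketched as follows; the percentage formatting and the simulated transfer are purely illustrative:

```python
messages = []

def progress(transmitted, total):
    # Matches the documented cb signature: bytes so far, then total size.
    pct = 100.0 * transmitted / total if total else 100.0
    messages.append('%d/%d bytes (%.0f%%)' % (transmitted, total, pct))

# Simulate the calls boto would make while transferring a 1000-byte object.
for sent in (0, 500, 1000):
    progress(sent, 1000)
```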

get_contents_to_file(fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None)

Retrieve an object from S3 using the name of the Key object as the key in S3. Write the contents of the object to the file pointed to by ‘fp’.

Parameters:
  • fp (file-like object) – The file object to write the object’s contents to.
  • headers (dict) – additional HTTP headers that will be sent with the GET request.
  • cb (function) – a callback function that will be called to report progress on the transfer. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transferred and the second representing the total size of the object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – If True, returns the contents of a torrent file as a string.
  • res_download_handler – If provided, this handler will perform the download.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
get_contents_to_filename(filename, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None)

Retrieve an object from S3 using the name of the Key object as the key in S3. Store contents of the object to a file named by ‘filename’. See get_contents_to_file method for details about the parameters.

Parameters:
  • filename (string) – The filename of where to put the file contents
  • headers (dict) – Any additional headers to send in the request
  • cb (function) – a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from S3 and the second representing the total size of the object being transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – If True, returns the contents of a torrent file as a string.
  • res_download_handler – If provided, this handler will perform the download.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
get_file(fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, override_num_retries=None, response_headers=None)

Retrieves a file from an S3 Key

Parameters:
  • fp (file) – File pointer to put the data into
  • cb (function) – a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from S3 and the second representing the total size of the object being transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – Flag for whether to get a torrent for the file
  • override_num_retries (int) – If not None will override configured num_retries parameter for underlying GET.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
Param:

headers – headers to send when retrieving the file
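The cb/num_cb contract is shared by all of the transfer methods above. The following pure-Python sketch (no S3 involved; `transfer` is a hypothetical function, not a boto API) illustrates how a transfer loop can cap progress callbacks at roughly num_cb invocations:

```python
def transfer(data, chunk_size=8192, cb=None, num_cb=10):
    """Simulate a chunked transfer, invoking cb at most ~num_cb times."""
    total = len(data)
    sent = 0
    # Fire the callback roughly every (total / num_cb) bytes.
    threshold = total / float(num_cb) if num_cb else total + 1
    next_mark = threshold
    while sent < total:
        sent = min(sent + chunk_size, total)
        if cb is not None and (sent >= next_mark or sent == total):
            cb(sent, total)  # (bytes transferred so far, total size)
            while sent >= next_mark:
                next_mark += threshold
    return sent
```

With num_cb=5 and a 100-byte payload, the callback fires at 20, 40, 60, 80 and 100 bytes, so a coarse num_cb keeps callback overhead low on large transfers.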

get_md5_from_hexdigest(md5_hexdigest)

A utility function to create the 2-tuple (md5hexdigest, base64md5) from just having a precalculated md5_hexdigest.
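The 2-tuple pairs the hex digest with the Base64 encoding of the raw 16-byte digest. A hedged sketch of how that tuple can be derived with only the standard library (the helper name `md5_tuple_from_hexdigest` is illustrative, not part of boto):

```python
import base64
import binascii
import hashlib

def md5_tuple_from_hexdigest(md5_hexdigest):
    """Build the (md5hexdigest, base64md5) 2-tuple from a precalculated
    hex digest, mirroring what get_md5_from_hexdigest is documented to do."""
    digest = binascii.unhexlify(md5_hexdigest)       # hex string -> raw bytes
    return (md5_hexdigest, base64.b64encode(digest).decode('ascii'))

# The same tuple, starting from the data itself:
data = b'hello world'
hexdigest = hashlib.md5(data).hexdigest()
hex_md5, base64_md5 = md5_tuple_from_hexdigest(hexdigest)
```

This is the same shape of value accepted by the md5 parameter of the set_contents_from_* methods below.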

get_metadata(name)
get_torrent_file(fp, headers=None, cb=None, num_cb=10)

Get a torrent file (see get_file)

Parameters:
  • fp (file) – The file pointer of where to put the torrent
  • headers (dict) – Headers to be passed
  • cb (function) – a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from S3 and the second representing the total size of the object being transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
get_xml_acl(headers=None)
handle_version_headers(resp, force=False)
make_public(headers=None)
next()

By providing a next method, the key object supports use as an iterator. For example, you can now say:

for data in key:
    fp.write(data)  # write the bytes to a file, or process them as needed

All of the HTTP connection stuff is handled for you.
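The same pattern works with any iterable of byte chunks; a minimal stand-in (`write_chunks` is illustrative, not a boto API) showing the shape of the iteration without touching S3:

```python
import io

def write_chunks(chunks, fp):
    """Stream an iterable of byte chunks into a file object -- the same
    pattern the Key iterator supports ('for data in key: fp.write(data)')."""
    total = 0
    for data in chunks:
        fp.write(data)
        total += len(data)
    return total
```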

open(mode='r', headers=None, query_args=None, override_num_retries=None)
open_read(headers=None, query_args=None, override_num_retries=None, response_headers=None)

Open this key for reading

Parameters:
  • headers (dict) – Headers to pass in the web request
  • query_args (string) – Arguments to pass in the query string (e.g., ‘torrent’)
  • override_num_retries (int) – If not None will override configured num_retries parameter for underlying GET.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
open_write(headers=None, override_num_retries=None)

Open this key for writing. Not yet implemented.

Parameters:
  • headers (dict) – Headers to pass in the write request
  • override_num_retries (int) – If not None will override configured num_retries parameter for underlying PUT.
provider
read(size=0)
send_file(fp, headers=None, cb=None, num_cb=10, query_args=None)

Upload a file to a key in a bucket on S3.

Parameters:
  • fp (file) – The file pointer to upload
  • headers (dict) – The headers to pass along with the PUT request
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. Providing a negative integer will cause your callback to be called with each buffer read.
set_acl(acl_str, headers=None)
set_canned_acl(acl_str, headers=None)
set_contents_from_file(fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False, query_args=None)

Store an object in S3 using the name of the Key object as the key in S3 and the contents of the file pointed to by ‘fp’ as the contents.

Parameters:
  • fp (file) – the file whose contents to upload
  • headers (dict) – Additional HTTP headers that will be sent with the PUT request.
  • replace (bool) – If this parameter is False, the method will first check to see if an object exists in the bucket with the same key. If it does, it won’t overwrite it. The default value is True which will overwrite the object.
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the total size of the object to be transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • policy (boto.s3.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new key in S3.
  • md5 (tuple) – A 2-tuple containing the hexdigest form of the MD5 checksum of the file as the first element and the Base64-encoded form of the plain checksum as the second element. This is the same format returned by the compute_md5 method. If you have already computed the MD5 prior to upload, pass it here to avoid computing it twice; otherwise the checksum will be computed.
  • reduced_redundancy (bool) – If True, this will set the storage class of the new Key to REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3 provides lower redundancy at lower storage cost.
set_contents_from_filename(filename, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False)

Store an object in S3 using the name of the Key object as the key in S3 and the contents of the file named by ‘filename’. See set_contents_from_file method for details about the parameters.

Parameters:
  • filename (string) – The name of the file that you want to put onto S3
  • headers (dict) – Additional headers to pass along with the request to AWS.
  • replace (bool) – If True, replaces the contents of the file if it already exists.
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the total size of the object to be transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • policy (boto.s3.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new key in S3.
  • md5 (tuple) – A 2-tuple containing the hexdigest form of the MD5 checksum of the file as the first element and the Base64-encoded form of the plain checksum as the second element. This is the same format returned by the compute_md5 method. If you have already computed the MD5 prior to upload, pass it here to avoid computing it twice; otherwise the checksum will be computed.
  • reduced_redundancy (bool) – If True, this will set the storage class of the new Key to REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3 provides lower redundancy at lower storage cost.
set_contents_from_string(s, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False)

Store an object in S3 using the name of the Key object as the key in S3 and the string ‘s’ as the contents. See set_contents_from_file method for details about the parameters.

Parameters:
  • headers (dict) – Additional headers to pass along with the request to AWS.
  • replace (bool) – If True, replaces the contents of the file if it already exists.
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the total size of the object to be transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • policy (boto.s3.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new key in S3.
  • md5 (tuple) – A 2-tuple containing the hexdigest form of the MD5 checksum of the string as the first element and the Base64-encoded form of the plain checksum as the second element. This is the same format returned by the compute_md5 method. If you have already computed the MD5 prior to upload, pass it here to avoid computing it twice; otherwise the checksum will be computed.
  • reduced_redundancy (bool) – If True, this will set the storage class of the new Key to REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3 provides lower redundancy at lower storage cost.
set_metadata(name, value)
set_xml_acl(acl_str, headers=None)
startElement(name, attrs, connection)
update_metadata(d)

boto.s3.prefix

class boto.s3.prefix.Prefix(bucket=None, name=None)
endElement(name, value, connection)
startElement(name, attrs, connection)

boto.s3.user

class boto.s3.user.User(parent=None, id='', display_name='')
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml(element_name='Owner')

boto.s3.multipart

class boto.s3.multipart.CompleteMultiPartUpload(bucket=None)

Represents a completed MultiPart Upload. Contains the following useful attributes:

  • location - The URI of the completed upload
  • bucket_name - The name of the bucket in which the upload is contained
  • key_name - The name of the new, completed key
  • etag - The MD5 hash of the completed, combined upload
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.s3.multipart.MultiPartUpload(bucket=None)

Represents a MultiPart Upload operation.

cancel_upload()

Cancels a MultiPart Upload operation. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts.

complete_upload()

Complete the MultiPart Upload operation. This method should be called when all parts of the file have been successfully uploaded to S3.

Return type: boto.s3.multipart.CompleteMultiPartUpload
Returns: An object representing the completed upload.
endElement(name, value, connection)
get_all_parts(max_parts=None, part_number_marker=None)

Return the uploaded parts of this MultiPart Upload. This is a lower-level method that requires you to manually page through results. To simplify this process, you can just use the object itself as an iterator and it will automatically handle all of the paging with S3.

startElement(name, attrs, connection)
to_xml()
upload_part_from_file(fp, part_num, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None)

Upload another part of this MultiPart Upload.

Parameters:
  • fp (file) – The file object you want to upload.
  • part_num (int) – The number of this part.

The other parameters are exactly as defined for the boto.s3.key.Key set_contents_from_file method.
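Part numbers are 1-based integers. As a rough illustration of how a caller might split a file into parts before invoking upload_part_from_file (the helper `part_boundaries` is hypothetical, not part of boto):

```python
def part_boundaries(total_size, part_size):
    """Yield (part_num, offset, size) tuples for a multipart upload.
    Part numbers start at 1; the last part may be smaller than part_size."""
    part_num = 1
    offset = 0
    while offset < total_size:
        size = min(part_size, total_size - offset)
        yield (part_num, offset, size)
        part_num += 1
        offset += size
```

Each tuple gives one (part_num, fp-seek offset, byte count) that could be handed to upload_part_from_file with a file object positioned at that offset.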

class boto.s3.multipart.Part(bucket=None)

Represents a single part in a MultiPart upload. Attributes include:

  • part_number - The integer part number
  • last_modified - The last modified date of this part
  • etag - The MD5 hash of this part
  • size - The size, in bytes, of this part
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.s3.multipart.part_lister(mpupload, part_number_marker=None)

A generator function for listing parts of a multipart upload.
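The generator hides marker-based paging from the caller. A generic sketch of that pattern (`paged_lister` and `fetch_page` are illustrative names, not boto APIs):

```python
def paged_lister(fetch_page, marker=None):
    """Marker-based paging generator, the pattern part_lister is documented
    to implement. fetch_page(marker) must return (items, next_marker),
    with next_marker None on the last page."""
    while True:
        items, marker = fetch_page(marker)
        for item in items:
            yield item
        if marker is None:
            return

# Demonstration with a fake two-page source:
pages = {None: ([1, 2], 'a'), 'a': ([3], None)}
items = list(paged_lister(lambda m: pages[m]))
```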

boto.s3.resumable_download_handler

class boto.s3.resumable_download_handler.ByteTranslatingCallbackHandler(proxied_cb, download_start_point)

Proxy class that translates progress callbacks made by boto.s3.Key.get_file(), taking into account that we’re resuming a download.

call(total_bytes_uploaded, total_size)
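One plausible reading of the translation: the handler offsets the values reported by the underlying transfer by the byte position at which the download resumed, so the proxied callback sees progress relative to the whole object. A hypothetical stand-in (not the actual boto class):

```python
class ByteTranslatingCallback(object):
    """Illustrative sketch: shift progress reported for a resumed transfer
    by the byte offset at which the download restarted."""
    def __init__(self, proxied_cb, download_start_point):
        self.proxied_cb = proxied_cb
        self.download_start_point = download_start_point

    def call(self, total_bytes_transferred, total_size):
        # Report whole-object progress, not just this session's bytes.
        self.proxied_cb(self.download_start_point + total_bytes_transferred,
                        self.download_start_point + total_size)
```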
class boto.s3.resumable_download_handler.ResumableDownloadHandler(tracker_file_name=None, num_retries=None)

Handler for resumable downloads.

Constructor. Instantiate once for each downloaded file.

Parameters:
  • tracker_file_name (string) – optional file name to save tracking info about this download. If supplied and the current process fails the download, it can be retried in a new process. If called with an existing file containing an unexpired timestamp, we’ll resume the transfer for this file; else we’ll start a new resumable download.
  • num_retries (int) – the number of times we’ll re-try a resumable download making no progress. (Count resets every time we get progress, so download can span many more than this number of retries.)
ETAG_REGEX = '([a-z0-9]{32})\n'
RETRYABLE_EXCEPTIONS = (<class 'httplib.HTTPException'>, <type 'exceptions.IOError'>, <class 'socket.error'>, <class 'socket.gaierror'>)
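The ETAG_REGEX pattern matches the 32-character etag stored in the tracker file, terminated by a newline. A quick demonstration with the standard re module (the sample etag is the well-known MD5 of the empty string):

```python
import re

ETAG_REGEX = '([a-z0-9]{32})\n'  # pattern documented above

line = 'd41d8cd98f00b204e9800998ecf8427e\n'
match = re.search(ETAG_REGEX, line)
etag = match.group(1) if match else None
```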
get_file(key, fp, headers, cb=None, num_cb=10, torrent=False, version_id=None)

Retrieves a file from a Key.

Parameters:
  • key (boto.s3.key.Key or subclass) – The Key object from which the file is to be downloaded
  • fp (file) – File pointer into which data should be downloaded
  • cb (function) – (optional) a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from the storage service and the second representing the total number of bytes that need to be transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – Flag for whether to get a torrent for the file
  • version_id (string) – The version ID (optional)
Param:

headers – headers to send when retrieving the file

Raises ResumableDownloadException if a problem occurs during the transfer.
boto.s3.resumable_download_handler.get_cur_file_size(fp, position_to_eof=False)

Returns size of file, optionally leaving fp positioned at EOF.
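A plausible pure-Python sketch of the documented behavior (`cur_file_size` here is an illustrative stand-in, not the boto function itself):

```python
import io

def cur_file_size(fp, position_to_eof=False):
    """Return the file's size by seeking to EOF to measure it, restoring
    the original position unless position_to_eof is True."""
    pos = fp.tell()
    fp.seek(0, io.SEEK_END)
    size = fp.tell()
    if not position_to_eof:
        fp.seek(pos)  # put the file pointer back where it was
    return size
```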

boto.s3.deletemarker

class boto.s3.deletemarker.DeleteMarker(bucket=None, name=None)
endElement(name, value, connection)
startElement(name, attrs, connection)