boto: A Python interface to Amazon Web Services

An integrated interface to current and future infrastructural services offered by Amazon Web Services.

Currently, this includes:

  • Simple Storage Service (S3)
  • Simple Queue Service (SQS)
  • Elastic Compute Cloud (EC2)
  • Elastic Load Balancer (ELB)
  • CloudWatch
  • AutoScale
  • Mechanical Turk
  • SimpleDB (SDB) - See SimpleDbPage for details
  • CloudFront
  • Virtual Private Cloud (VPC)
  • Relational Data Services (RDS)
  • Elastic Map Reduce (EMR)
  • Flexible Payment Service (FPS)
  • Identity and Access Management (IAM)

The boto project page is at http://boto.googlecode.com/

The boto source repository is at http://github.com/boto

Follow project updates on Twitter (http://twitter.com/pythonboto).

Follow Mitch on Twitter (http://twitter.com/garnaat).

Join our IRC channel (#boto on FreeNode).

Documentation Contents

An Introduction to boto’s SQS interface

This tutorial focuses on the boto interface to the Simple Queue Service from Amazon Web Services. This tutorial assumes that you have already downloaded and installed boto.

Creating a Connection

The first step in accessing SQS is to create a connection to the service. There are two ways to do this in boto. The first is:

>>> from boto.sqs.connection import SQSConnection
>>> conn = SQSConnection('<aws access key>', '<aws secret key>')

At this point the variable conn will point to an SQSConnection object. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the environment variables:

AWS_ACCESS_KEY_ID - Your AWS Access Key ID
AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key

and then call the constructor without any arguments, like this:

>>> conn = SQSConnection()

There is also a shortcut function in the boto package, called connect_sqs that may provide a slightly easier means of creating a connection:

>>> import boto
>>> conn = boto.connect_sqs()

In either case, conn will point to an SQSConnection object which we will use throughout the remainder of this tutorial.

Creating a Queue

Once you have a connection established with SQS, you will probably want to create a queue. That can be accomplished like this:

>>> q = conn.create_queue('myqueue')

The create_queue method will create the requested queue if it does not exist or will return the existing queue if it does exist. There is an optional parameter to create_queue called visibility_timeout. This basically controls how long a message will remain invisible to other queue readers once it has been read (see SQS documentation for more detailed explanation). If this is not explicitly specified the queue will be created with whatever default value SQS provides (currently 30 seconds). If you would like to specify another value, you could do so like this:

>>> q = conn.create_queue('myqueue', 120)

This would establish a default visibility timeout for this queue of 120 seconds. As you will see later on, this default value for the queue can also be overridden each time a message is read from the queue. If you want to check what the default visibility timeout is for a queue:

>>> q.get_timeout()
30
>>>

Writing Messages

Once you have a queue, presumably you will want to write some messages to it. SQS doesn’t care what kind of information you store in your messages or what format you use to store it. As long as the amount of data per message is less than or equal to 256KB, it’s happy.

However, you may have a lot of specific requirements around the format of that data. For example, you may want to store one big string or you might want to store something that looks more like RFC822 messages or you might want to store a binary payload such as pickled Python objects.

The way boto deals with this is to define a simple Message object that treats the message data as one big string which you can set and get. If that Message object meets your needs, you’re good to go. However, if you need to incorporate different behavior in your message or handle different types of data you can create your own Message class. You just need to register that class with the queue so that it knows that when you read a message from the queue that it should create one of your message objects rather than the default boto Message object. To register your message class, you would:

>>> q.set_message_class(MyMessage)

where MyMessage is the class definition for your message class. Your message class should subclass the boto Message because there is a small bit of Python magic happening in the __setattr__ method of the boto Message class.
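For example, here is a minimal sketch of a custom message class (the JSONMessage name and its helper methods are purely illustrative and not part of boto; it simply stores a dict as a JSON-encoded body):

>>> import json
>>> from boto.sqs.message import Message
>>> class JSONMessage(Message):
...     # Illustrative subclass; helpers below are not boto methods.
...     def set_body_from_dict(self, d):
...         self.set_body(json.dumps(d))
...     def get_body_as_dict(self):
...         return json.loads(self.get_body())
...
>>> q.set_message_class(JSONMessage)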

For this tutorial, let’s just assume that we are using the boto Message class. So, first we need to create a Message object:

>>> from boto.sqs.message import Message
>>> m = Message()
>>> m.set_body('This is my first message.')
>>> status = q.write(m)

The write method returns True if everything went well. If the write didn’t succeed, it will either return False (meaning SQS simply chose not to write the message for some reason) or raise an exception if there was some sort of problem with the request.
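If you want to be defensive about it, a simple check of that return value (just a sketch, using the queue and message from above) could look like this:

>>> if not q.write(m):
...     print 'Failed to write message to the queue'
...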

Reading Messages

So, now we have a message in our queue. How would we go about reading it? Here’s one way:

>>> rs = q.get_messages()
>>> len(rs)
1
>>> m = rs[0]
>>> m.get_body()
u'This is my first message.'

The get_messages method also returns a ResultSet object as described above. In addition to the special attributes that we already talked about, the ResultSet object also contains any results returned by the request. To get at the results you can treat the ResultSet as a sequence object (e.g. a list). We can check the length (how many results) and access particular items within the list using the slice notation familiar to Python programmers.

At this point, we have read the message from the queue and SQS will make sure that this message remains invisible to other readers of the queue until the visibility timeout period for the queue expires. If I delete the message before the timeout period expires then no one will ever see the message again. However, if I don’t delete it (maybe because I crashed or failed in some way, for example) it will magically reappear in my queue for someone else to read. If you aren’t happy with the default visibility timeout defined for the queue, you can override it when you read a message:

>>> q.get_messages(visibility_timeout=60)

This means that regardless of what the default visibility timeout is for the queue, this message will remain invisible to other readers for 60 seconds.

The get_messages method can also return more than a single message. By passing a num_messages parameter (defaults to 1) you can control the maximum number of messages that will be returned by the method. To show this feature off, first let’s load up a few more messages.

>>> for i in range(1, 11):
...   m = Message()
...   m.set_body('This is message %d' % i)
...   q.write(m)
...
>>> rs = q.get_messages(10)
>>> len(rs)
10

Don’t be alarmed if the length of the result set returned by the get_messages call is less than 10. Sometimes it takes some time for new messages to become visible in the queue. Give it a minute or two and they will all show up.
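If you would rather wait until all of them are readable, a naive polling loop works (just a sketch; it assumes the messages do eventually become visible):

>>> import time
>>> collected = []
>>> while len(collected) < 10:
...     collected += q.get_messages(10)
...     time.sleep(5)
...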

If you want a slightly simpler way to read messages from a queue, you can use the read method. It will either return the message read or it will return None if no messages were available. You can also pass a visibility_timeout parameter to read, if you desire:

>>> m = q.read(60)
>>> m.get_body()
u'This is my first message.'

Deleting Messages and Queues

Note that the first message we put in the queue is still there, even though we have read it a number of times. That’s because we never deleted it. To remove a message from a queue:

>>> q.delete_message(m)
[]

If I want to delete the entire queue, I would use:

>>> conn.delete_queue(q)

However, this won’t succeed unless the queue is empty.

Listing All Available Queues

In addition to accessing specific queues via the create_queue method you can also get a list of all available queues that you have created.

>>> rs = conn.get_all_queues()

This returns a ResultSet object, as described above. The ResultSet can be used as a sequence or list type object to retrieve Queue objects.

>>> len(rs)
11
>>> for q in rs:
...   print q.id
...
<listing of available queues>
>>> q = rs[0]

Other Stuff

That covers the basic operations of creating queues, writing messages, reading messages, deleting messages, and deleting queues. There are a few utility methods in boto that might be useful as well. For example, to count the number of messages in a queue:

>>> q.count()
10

This can be handy, but this command, as well as the other two utility methods I’ll describe in a minute, is inefficient and should be used with caution on queues with lots of messages (e.g. many hundreds or more). Similarly, you can clear (delete) all messages in a queue with:

>>> q.clear()

Be REAL careful with that one! Finally, if you want to dump all of the messages in a queue to a local file:

>>> q.dump('messages.txt', sep='\n------------------\n')

This will read all of the messages in the queue and write the bodies of each of the messages to the file messages.txt. The optional sep argument is a separator that will be printed between each message body in the file.

An Introduction to boto’s S3 interface

This tutorial focuses on the boto interface to the Simple Storage Service from Amazon Web Services. This tutorial assumes that you have already downloaded and installed boto.

Creating a Connection

The first step in accessing S3 is to create a connection to the service. There are two ways to do this in boto. The first is:

>>> from boto.s3.connection import S3Connection
>>> conn = S3Connection('<aws access key>', '<aws secret key>')

At this point the variable conn will point to an S3Connection object. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the environment variables:

AWS_ACCESS_KEY_ID - Your AWS Access Key ID
AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key

and then call the constructor without any arguments, like this:

>>> conn = S3Connection()

There is also a shortcut function in the boto package, called connect_s3 that may provide a slightly easier means of creating a connection:

>>> import boto
>>> conn = boto.connect_s3()

In either case, conn will point to an S3Connection object which we will use throughout the remainder of this tutorial.

Creating a Bucket

Once you have a connection established with S3, you will probably want to create a bucket. A bucket is a container used to store key/value pairs in S3. A bucket can hold an unlimited amount of data so you could potentially have just one bucket in S3 for all of your information. Or, you could create separate buckets for different types of data. You can figure all of that out later; first let’s just create a bucket. That can be accomplished like this:

>>> bucket = conn.create_bucket('mybucket')
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "boto/connection.py", line 285, in create_bucket
    raise S3CreateError(response.status, response.reason)
boto.exception.S3CreateError: S3Error[409]: Conflict

Whoa. What happened there? Well, the thing you have to know about buckets is that they are kind of like domain names. It’s one flat name space that everyone who uses S3 shares. So, someone has already created a bucket called “mybucket” in S3 and that means no one else can grab that bucket name. So, you have to come up with a name that hasn’t been taken yet. For example, something that uses a unique string as a prefix. Your AWS_ACCESS_KEY (NOT YOUR SECRET KEY!) could work but I’ll leave it to your imagination to come up with something. I’ll just assume that you found an acceptable name.
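One simple (purely illustrative) way to come up with a name that is very unlikely to collide is to tack a generated suffix onto a base name of your choosing:

>>> import uuid
>>> bucket_name = 'mybucket-%s' % uuid.uuid4()
>>> bucket = conn.create_bucket(bucket_name)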

The create_bucket method will create the requested bucket if it does not exist or will return the existing bucket if it does exist.

Creating a Bucket In Another Location

The example above assumes that you want to create a bucket in the standard US region. However, it is possible to create buckets in other locations. To do so, first import the Location object from the boto.s3.connection module, like this:

>>> from boto.s3.connection import Location
>>> dir(Location)
['DEFAULT', 'EU', 'USWest', 'APSoutheast', '__doc__', '__module__']
>>>

As you can see, the Location object defines four possible locations: DEFAULT, EU, USWest, and APSoutheast. By default, the location is the empty string which is interpreted as the US Classic Region, the original S3 region. However, by specifying another location at the time the bucket is created, you can instruct S3 to create the bucket in that location. For example:

>>> conn.create_bucket('mybucket', location=Location.EU)

will create the bucket in the EU region (assuming the name is available).

Storing Data

Once you have a bucket, presumably you will want to store some data in it. S3 doesn’t care what kind of information you store in your objects or what format you use to store it. All you need is a key that is unique within your bucket.

The Key object is used in boto to keep track of data stored in S3. To store new data in S3, start by creating a new Key object:

>>> from boto.s3.key import Key
>>> k = Key(bucket)
>>> k.key = 'foobar'
>>> k.set_contents_from_string('This is a test of S3')

The net effect of these statements is to create a new object in S3 with a key of “foobar” and a value of “This is a test of S3”. To validate that this worked, quit out of the interpreter and start it up again. Then:

>>> import boto
>>> c = boto.connect_s3()
>>> b = c.create_bucket('mybucket') # substitute your bucket name here
>>> from boto.s3.key import Key
>>> k = Key(b)
>>> k.key = 'foobar'
>>> k.get_contents_as_string()
'This is a test of S3'

So, we can definitely store and retrieve strings. A more interesting example may be to store the contents of a local file in S3 and then retrieve the contents to another local file.

>>> k = Key(b)
>>> k.key = 'myfile'
>>> k.set_contents_from_filename('foo.jpg')
>>> k.get_contents_to_filename('bar.jpg')

There are a couple of things to note about this. When you send data to S3 from a file or filename, boto will attempt to determine the correct mime type for that file and send it as a Content-Type header. The boto package uses the standard mimetypes package in Python to do the mime type guessing. The other thing to note is that boto does stream the content to and from S3 so you should be able to send and receive large files without any problem.
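If you are curious what Content-Type boto is likely to guess for a particular filename, you can ask the same standard mimetypes module directly (this is plain Python, not a boto call):

>>> import mimetypes
>>> mimetypes.guess_type('foo.jpg')
('image/jpeg', None)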

Listing All Available Buckets

In addition to accessing specific buckets via the create_bucket method you can also get a list of all available buckets that you have created.

>>> rs = conn.get_all_buckets()

This returns a ResultSet object (see the SQS Tutorial for more info on ResultSet objects). The ResultSet can be used as a sequence or list type object to retrieve Bucket objects.

>>> len(rs)
11
>>> for b in rs:
...   print b.name
...
<listing of available buckets>
>>> b = rs[0]

Setting / Getting the Access Control List for Buckets and Keys

The S3 service provides the ability to control access to buckets and keys within s3 via the Access Control List (ACL) associated with each object in S3. There are two ways to set the ACL for an object:

  1. Create a custom ACL that grants specific rights to specific users. At the moment, the users that are specified within grants have to be registered users of Amazon Web Services so this isn’t as useful or as general as it could be.
  2. Use a “canned” access control policy. There are four canned policies defined:
     a. private: Owner gets FULL_CONTROL. No one else has any access rights.
     b. public-read: Owner gets FULL_CONTROL and the anonymous principal is granted READ access.
     c. public-read-write: Owner gets FULL_CONTROL and the anonymous principal is granted READ and WRITE access.
     d. authenticated-read: Owner gets FULL_CONTROL and any principal authenticated as a registered Amazon S3 user is granted READ access.

To set a canned ACL for a bucket, use the set_acl method of the Bucket object. The argument passed to this method must be one of the four permissible canned policies named in the list CannedACLStrings contained in acl.py. For example, to make a bucket readable by anyone:

>>> b.set_acl('public-read')

You can also set the ACL for Key objects, either by passing an additional argument to the above method:

>>> b.set_acl('public-read', 'foobar')

where ‘foobar’ is the key of some object within the bucket b or you can call the set_acl method of the Key object:

>>> k.set_acl('public-read')

You can also retrieve the current ACL for a Bucket or Key object using the get_acl method. This method parses the AccessControlPolicy response sent by S3 and creates a set of Python objects that represent the ACL.

>>> acp = b.get_acl()
>>> acp
<boto.acl.Policy instance at 0x2e6940>
>>> acp.acl
<boto.acl.ACL instance at 0x2e69e0>
>>> acp.acl.grants
[<boto.acl.Grant instance at 0x2e6a08>]
>>> for grant in acp.acl.grants:
...   print grant.permission, grant.display_name, grant.email_address, grant.id
...
FULL_CONTROL <boto.user.User instance at 0x2e6a30>

The Python objects representing the ACL can be found in the acl.py module of boto.

Both the Bucket object and the Key object also provide shortcut methods to simplify the process of granting individuals specific access. For example, if you want to grant an individual user READ access to a particular object in S3 you could do the following:

>>> key = b.lookup('mykeytoshare')
>>> key.add_email_grant('READ', 'foo@bar.com')

The email address provided should be the one associated with the user’s AWS account. There is a similar method called add_user_grant that accepts the canonical id of the user rather than the email address.
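Assuming you have the canonical user ID handy (shown below as a placeholder), the call looks very similar:

>>> key.add_user_grant('READ', '<canonical user id>')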

Setting/Getting Metadata Values on Key Objects

S3 allows arbitrary user metadata to be assigned to objects within a bucket. To take advantage of this S3 feature, you should use the set_metadata and get_metadata methods of the Key object to set and retrieve metadata associated with an S3 object. For example:

>>> k = Key(b)
>>> k.key = 'has_metadata'
>>> k.set_metadata('meta1', 'This is the first metadata value')
>>> k.set_metadata('meta2', 'This is the second metadata value')
>>> k.set_contents_from_filename('foo.txt')

This code associates two metadata key/value pairs with the Key k. To retrieve those values later:

>>> k = b.get_key('has_metadata')
>>> k.get_metadata('meta1')
'This is the first metadata value'
>>> k.get_metadata('meta2')
'This is the second metadata value'
>>>

An Introduction to boto’s EC2 interface

This tutorial focuses on the boto interface to the Elastic Compute Cloud from Amazon Web Services. This tutorial assumes that you have already downloaded and installed boto.

Creating a Connection

The first step in accessing EC2 is to create a connection to the service. There are two ways to do this in boto. The first is:

>>> from boto.ec2.connection import EC2Connection
>>> conn = EC2Connection('<aws access key>', '<aws secret key>')

At this point the variable conn will point to an EC2Connection object. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the environment variables:

AWS_ACCESS_KEY_ID - Your AWS Access Key ID
AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key

and then call the constructor without any arguments, like this:

>>> conn = EC2Connection()

There is also a shortcut function in the boto package, called connect_ec2 that may provide a slightly easier means of creating a connection:

>>> import boto
>>> conn = boto.connect_ec2()

In either case, conn will point to an EC2Connection object which we will use throughout the remainder of this tutorial.

A Note About Regions

The 2008-12-01 version of the EC2 API introduced the idea of Regions. A Region is geographically distinct and is completely isolated from other EC2 Regions. At the time of the launch of the 2008-12-01 API there were two available regions, us-east-1 and eu-west-1. Each Region has its own service endpoint and therefore requires its own EC2Connection object in boto.

The default behavior in boto, as shown above, is to connect you with the us-east-1 region which is exactly the same as the behavior prior to the introduction of Regions.

However, if you would like to connect to a region other than us-east-1, there are a couple of ways to accomplish that. The first way is to ask EC2 to provide a list of currently supported regions. You can do that using the regions function in the boto.ec2 module:

>>> import boto.ec2
>>> regions = boto.ec2.regions()
>>> regions
[RegionInfo:eu-west-1, RegionInfo:us-east-1]
>>>

As you can see, a list of available regions is returned. Each region is represented by a RegionInfo object. A RegionInfo object has two attributes: a name and an endpoint.

>>> eu = regions[0]
>>> eu.name
u'eu-west-1'
>>> eu.endpoint
u'eu-west-1.ec2.amazonaws.com'
>>>

You can easily create a connection to a region by using the connect method of the RegionInfo object:

>>> conn_eu = eu.connect()
>>> conn_eu
<boto.ec2.connection.EC2Connection instance at 0xccaaa8>
>>>

The variable conn_eu is now bound to an EC2Connection object connected to the endpoint of the eu-west-1 region and all operations performed via that connection and all objects created by that connection will be scoped to the eu-west-1 region. You can always tell which region a connection is associated with by accessing its region attribute:

>>> conn_eu.region
RegionInfo:eu-west-1
>>>

Supporting EC2 objects such as SecurityGroups, KeyPairs, Addresses, Volumes, Images and SnapShots are local to a particular region. So don’t expect to find the security groups you created in the us-east-1 region to be available in the eu-west-1 region.

Some objects in boto, such as SecurityGroup, have a new method called copy_to_region which will attempt to create a copy of the object in another region. For example:

>>> regions
[RegionInfo:eu-west-1, RegionInfo:us-east-1]
>>> conn_us = regions[1].connect()
>>> groups = conn_us.get_all_security_groups()
>>> groups
[SecurityGroup:alfresco, SecurityGroup:apache, SecurityGroup:vnc,
SecurityGroup:appserver2, SecurityGroup:FTP, SecurityGroup:webserver,
SecurityGroup:default, SecurityGroup:test-1228851996]
>>> us_group = groups[0]
>>> us_group
SecurityGroup:alfresco
>>> us_group.rules
[IPPermissions:tcp(22-22), IPPermissions:tcp(80-80), IPPermissions:tcp(1445-1445)]
>>> eu_group = us_group.copy_to_region(eu)
>>> eu_group.rules
[IPPermissions:tcp(22-22), IPPermissions:tcp(80-80), IPPermissions:tcp(1445-1445)]

In the above example, we chose one of the security groups available in the us-east-1 region (the group alfresco) and copied that security group to the eu-west-1 region. All of the rules associated with the original security group will be copied as well.

If you would like your default region to be something other than us-east-1, you can override that default in your boto config file (either ~/.boto for personal settings or /etc/boto.cfg for system-wide settings). For example:

[Boto]
ec2_region_name = eu-west-1
ec2_region_endpoint = eu-west-1.ec2.amazonaws.com

The above lines added to either boto config file would set the default region to be eu-west-1.
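If you would rather select the region programmatically instead of through the config file, one approach (a sketch that uses only the regions() call shown earlier) is to look the region up by name:

>>> import boto.ec2
>>> eu = [r for r in boto.ec2.regions() if r.name == 'eu-west-1'][0]
>>> conn_eu = eu.connect()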

Images & Instances

An Image object represents an Amazon Machine Image (AMI) which is an encrypted machine image stored in Amazon S3. It contains all of the information necessary to boot instances of your software in EC2.

To get a listing of all available Images:

>>> images = conn.get_all_images()
>>> images
[Image:ami-20b65349, Image:ami-22b6534b, Image:ami-23b6534a, Image:ami-25b6534c, Image:ami-26b6534f, Image:ami-2bb65342, Image:ami-78b15411, Image:ami-a4aa4fcd, Image:ami-c3b550aa, Image:ami-e4b6538d, Image:ami-f1b05598]
>>> for image in images:
...    print image.location
ec2-public-images/fedora-core4-base.manifest.xml
ec2-public-images/fedora-core4-mysql.manifest.xml
ec2-public-images/fedora-core4-apache.manifest.xml
ec2-public-images/fedora-core4-apache-mysql.manifest.xml
ec2-public-images/developer-image.manifest.xml
ec2-public-images/getting-started.manifest.xml
marcins_cool_public_images/fedora-core-6.manifest.xml
khaz_fc6_win2003/image.manifest
aes-images/django.manifest
marcins_cool_public_images/ubuntu-6.10.manifest.xml
ckk_public_ec2_images/centos-base-4.4.manifest.xml

The most useful thing you can do with an Image is to actually run it, so let’s run a new instance of the base Fedora image:

>>> image = images[0]
>>> image.location
ec2-public-images/fedora-core4-base.manifest.xml
>>> reservation = image.run()

This will begin the boot process for a new EC2 instance. The run method returns a Reservation object which represents a collection of instances that are all started at the same time. In this case, we only started one but you can check the instances attribute of the Reservation object to see all of the instances associated with this reservation:

>>> reservation.instances
[Instance:i-6761850e]
>>> instance = reservation.instances[0]
>>> instance.state
u'pending'
>>>

So, we have an instance booting up that is still in the pending state. We can call the update method on the instance to get a refreshed view of its state:

>>> instance.update()
>>> instance.state
u'pending'
>>> # wait a few minutes
>>> instance.update()
>>> instance.state
u'running'

So, now our instance is running. The time it takes to boot a new instance varies based on a number of different factors but usually it takes less than five minutes.

Now the instance is up and running you can find out its DNS name like this:

>>> instance.dns_name
u'ec2-72-44-40-153.z-2.compute-1.amazonaws.com'

This provides the public DNS name for your instance. Since the 2007-03-22 release of the EC2 service, the default addressing scheme for instances uses NAT addresses, which means your instance has both a public IP address and a non-routable private IP address. You can access each of these addresses like this:

>>> instance.public_dns_name
u'ec2-72-44-40-153.z-2.compute-1.amazonaws.com'
>>> instance.private_dns_name
u'domU-12-31-35-00-42-33.z-2.compute-1.internal'

Even though your instance has a public DNS name, you won’t be able to access it yet because you need to set up some security rules which are described later in this tutorial.

Since you are now being charged for that instance we just created, you will probably want to know how to terminate the instance, as well. The simplest way is to use the stop method of the Instance object:

>>> instance.stop()
>>> instance.update()
>>> instance.state
u'shutting-down'
>>> # wait a minute
>>> instance.update()
>>> instance.state
u'terminated'
>>>

When we created our new instance, we didn’t pass any args to the run method so we got all of the default values. The full set of possible parameters to the run method is:

  • min_count - The minimum number of instances to launch.
  • max_count - The maximum number of instances to launch.
  • keypair - The keypair to launch instances with (either a KeyPair object or a string with the name of the desired keypair).
  • security_groups - A list of security groups to associate with the instance. This can either be a list of SecurityGroup objects or a list of strings with the names of the desired security groups.
  • user_data - Data to be made available to the launched instances. This should be base64 encoded according to the EC2 documentation.

So, if I wanted to create two instances of the base image and launch them with my keypair, called gsg-keypair, I would do this:

>>> reservation = image.run(2, 2, 'gsg-keypair')
>>> reservation.instances
[Instance:i-5f618536, Instance:i-5e618537]
>>> for i in reservation.instances:
...    print i.state
u'pending'
u'pending'
>>>

Later, when you are finished with the instances you can either stop each individually or you can call the stop_all method on the Reservation object:

>>> reservation.stop_all()
>>>

If you just want to get a list of all of your running instances, use the get_all_instances method of the connection object. Note that the list returned is actually a list of Reservation objects (which contain the Instances) and that the list may include recently terminated instances for a small period of time subsequent to their termination.

>>> instances = conn.get_all_instances()
>>> instances
[Reservation:r-a76085ce, Reservation:r-a66085cf, Reservation:r-8c6085e5]
>>> r = instances[0]
>>> for inst in r.instances:
...    print inst.state
u'terminated'
>>>

A recent addition to the EC2 API is the ability to allow other EC2 users to launch your images. There are a couple of ways of accessing this capability in boto but I’ll show you the simplest way here. First of all, you need to know the Amazon ID for the user in question. The Amazon ID is a twelve digit number that appears on your Account Activity page at AWS. It looks like this:

1234-5678-9012

To use this number in API calls, you need to remove the dashes, so in our example the user ID would be 123456789012. To allow the user associated with this ID to launch one of your images, let’s assume that the variable image represents the Image you want to share. So:
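Stripping the dashes is a one-liner in Python:

>>> '1234-5678-9012'.replace('-', '')
'123456789012'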

>>> image.get_launch_permissions()
{}
>>>

The get_launch_permissions method returns a dictionary object with two possible entries: user_ids or groups. In our case we haven’t yet given anyone permission to launch our image so the dictionary is empty. To add our EC2 user:

>>> image.set_launch_permissions(['123456789012'])
True
>>> image.get_launch_permissions()
{'user_ids': [u'123456789012']}
>>>

We have now added the desired user to the launch permissions for the Image so that user will now be able to access and launch our Image. You can add multiple users at one time by adding them all to the list you pass in as a parameter to the method. To revoke the user’s launch permissions:

>>> image.remove_launch_permissions(['123456789012'])
True
>>> image.get_launch_permissions()
{}
>>>

It is possible to pass a list of group names to the set_launch_permissions method, as well. The only group available at the moment is the group “all” which would allow any valid EC2 user to launch your image.
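For example, to open the image up to all EC2 users (this assumes set_launch_permissions also accepts a group_names argument, as in recent boto versions):

>>> image.set_launch_permissions(group_names=['all'])
True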

Finally, you can completely reset the launch permissions for an Image with:

>>> image.reset_launch_permissions()
True
>>>

This will remove all users and groups from the launch permission list and make the Image private again.

Security Groups

Amazon defines a security group as:

“A security group is a named collection of access rules. These access rules
specify which ingress, i.e. incoming, network traffic should be delivered to your instance.”

To get a listing of all currently defined security groups:

>>> rs = conn.get_all_security_groups()
>>> print rs
[SecurityGroup:appserver, SecurityGroup:default, SecurityGroup:vnc, SecurityGroup:webserver]
>>>

Each security group can have an arbitrary number of rules which represent different network ports which are being enabled. To find the rules for a particular security group, use the rules attribute:

>>> sg = rs[1]
>>> sg.name
u'default'
>>> sg.rules
[IPPermissions:tcp(0-65535),
 IPPermissions:udp(0-65535),
 IPPermissions:icmp(-1--1),
 IPPermissions:tcp(22-22),
 IPPermissions:tcp(80-80)]
>>>

In addition to listing the available security groups you can also create a new security group. I’ll follow through the “Three Tier Web Service” example included in the EC2 Developer’s Guide for an example of how to create security groups and add rules to them.

First, let’s create a group for our Apache web servers that allows HTTP access to the world:

>>> web = conn.create_security_group('apache', 'Our Apache Group')
>>> web
SecurityGroup:apache
>>> web.authorize('tcp', 80, 80, '0.0.0.0/0')
True
>>>

The first argument is the IP protocol, which can be one of tcp, udp or icmp. The second argument is the FromPort, or the beginning port in the range; the third argument is the ToPort, or the ending port in the range; and the last argument is the CIDR IP range to authorize access to.

Next we create another group for the app servers:

>>> app = conn.create_security_group('appserver', 'The application tier')
>>>

We then want to grant access between the web server group and the app server group. So, rather than specifying an IP address as we did in the last example, this time we will specify another SecurityGroup object.

>>> app.authorize(src_group=web)
True
>>>

Now, to verify that the web group now has access to the app servers, we want to temporarily allow SSH access to the web servers from our computer. Let’s say that our IP address is 192.168.1.130 as it is in the EC2 Developer Guide. To enable that access:

>>> web.authorize(ip_protocol='tcp', from_port=22, to_port=22, cidr_ip='192.168.1.130/32')
True
>>>

Now that this access is authorized, we could ssh into an instance running in the web group and then try to telnet to specific ports on servers in the appserver group, as shown in the EC2 Developer’s Guide. When this testing is complete, we would want to revoke SSH access to the web server group, like this:

>>> web.rules
[IPPermissions:tcp(80-80),
 IPPermissions:tcp(22-22)]
>>> web.revoke('tcp', 22, 22, cidr_ip='192.168.1.130/32')
True
>>> web.rules
[IPPermissions:tcp(80-80)]
>>>

An Introduction to boto’s Elastic Load Balancing interface

This tutorial focuses on the boto interface for Elastic Load Balancing from Amazon Web Services. This tutorial assumes that you have already downloaded and installed boto, and are familiar with the boto ec2 interface.

Elastic Load Balancing Concepts

Elastic Load Balancing (ELB) is intimately connected with Amazon’s Elastic Compute Cloud (EC2) service. Using the ELB service allows you to create a load balancer - a DNS endpoint and set of ports that distributes incoming requests to a set of EC2 instances. The advantage of using a load balancer is that it allows you to truly scale a set of backend instances up or down without disrupting service. Before the ELB service you had to do this manually by launching an EC2 instance and installing load balancer software on it (nginx, haproxy, perlbal, etc.) to distribute traffic to other EC2 instances.

Recall that the ec2 service is split into Regions and Availability Zones (AZ). At the time of writing, there are two Regions - US and Europe, and each region is divided into a number of AZs (for example, us-east-1a, us-east-1b, etc.). You can think of AZs as data centers - each runs off a different set of ISP backbones and power providers. ELB load balancers can span multiple AZs but cannot span multiple regions. That means that if you’d like to create a set of instances spanning both the US and Europe Regions you’d have to create two load balancers and have some sort of other means of distributing requests between the two load balancers. An example of this could be using GeoIP techniques to choose the correct load balancer, or perhaps DNS round robin. Keep in mind also that traffic is distributed equally over all AZs the ELB balancer spans. This means you should have an equal number of instances in each AZ if you want to equally distribute load amongst all your instances.

Creating a Connection

The first step in accessing ELB is to create a connection to the service. There are two ways to do this in boto. The first is:

>>> from boto.ec2.elb import ELBConnection
>>> conn = ELBConnection('<aws access key>', '<aws secret key>')

There is also a shortcut function in the boto package, called connect_elb that may provide a slightly easier means of creating a connection:

>>> import boto
>>> conn = boto.connect_elb()

In either case, conn will point to an ELBConnection object which we will use throughout the remainder of this tutorial.

A Note About Regions and Endpoints

Like EC2 the ELB service has a different endpoint for each region. By default the US endpoint is used. To choose a specific region, instantiate the ELBConnection object with that region’s endpoint.

>>> conn = boto.connect_elb(host='eu-west-1.elasticloadbalancing.amazonaws.com')

Alternatively, edit your boto.cfg with the default ELB endpoint to use:

[Boto]
elb_endpoint = eu-west-1.elasticloadbalancing.amazonaws.com
Getting Existing Load Balancers

To retrieve any existing load balancers:

>>> conn.get_all_load_balancers()

You will get back a list of LoadBalancer objects.

Creating a Load Balancer

To create a load balancer you need the following:
  1. The specific ports and protocols you want to load balance over, and which port you want to route connections to on each instance.
  2. A health check - the ELB concept of a heart beat or ping. ELB will use this health check to see whether your instances are up or down. If they go down, the load balancer will no longer send requests to them.
  3. A list of Availability Zones you’d like to create your load balancer over.
Ports and Protocols

An incoming connection to your load balancer will come in on one or more ports - for example 80 (HTTP) and 443 (HTTPS). Each can use a different protocol - currently, the supported protocols are TCP and HTTP. We also need to tell the load balancer which port to route connections to on each instance. For example, to create a load balancer for a website that accepts connections on 80 and 443, and that routes connections to ports 8080 and 8443 on each instance, you would specify that the load balancer ports and protocols are:

  • 80, 8080, HTTP
  • 443, 8443, TCP

This says that the load balancer will listen on two ports - 80 and 443. Connections on 80 will use an HTTP load balancer to forward connections to port 8080 on instances. Likewise, the load balancer will listen on 443 to forward connections to 8443 on each instance using the TCP balancer. We need to use TCP for the HTTPS port because the traffic is encrypted at the application layer. Of course, we could specify that the load balancer use TCP for port 80 as well; however, specifying HTTP allows you to let ELB handle some work for you - for example HTTP header parsing.
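In boto, each of those listeners is expressed as a (load balancer port, instance port, protocol) tuple, so the two listeners above would be written roughly like this (the same form appears in the create_load_balancer call later in this tutorial):

>>> listeners = [(80, 8080, 'http'), (443, 8443, 'tcp')]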

Configuring a Health Check

A health check allows ELB to determine which instances are alive and able to respond to requests. A health check is essentially a tuple consisting of:

  • target: What to check on an instance. For a TCP check this is comprised of:

    TCP:PORT_TO_CHECK
    

    Which attempts to open a connection on PORT_TO_CHECK. If the connection opens successfully, that specific instance is deemed healthy, otherwise it is marked temporarily as unhealthy. For HTTP, the situation is slightly different:

    HTTP:PORT_TO_CHECK/RESOURCE
    

    This means that the health check will connect to the resource /RESOURCE on PORT_TO_CHECK. If an HTTP 200 status is returned the instance is deemed healthy.

  • interval: How often the check is made. This is given in seconds and defaults to 30. The valid range of intervals goes from 5 seconds to 600 seconds.

  • timeout: The number of seconds the load balancer will wait for a check to return a result.

  • UnhealthyThreshold: The number of consecutive failed checks to deem the instance as being dead. The default is 5, and the range of valid values lies from 2 to 10.

The following example creates a health check called instance_health that simply checks instances every 20 seconds on port 8080 over HTTP at the resource /health for 200 successes.

>>> import boto
>>> from boto.ec2.elb import HealthCheck
>>> conn = boto.connect_elb()
>>> hc = HealthCheck('instance_health', interval=20, target='HTTP:8080/health')
Putting It All Together

Finally, let’s create a load balancer in the US region that listens on ports 80 and 443 and distributes requests to instances on 8080 and 8443 over HTTP and TCP. We want the load balancer to span the availability zones us-east-1a and us-east-1b:

>>> lb = conn.create_load_balancer('my_lb', ['us-east-1a', 'us-east-1b'],
                                   [(80, 8080, 'http'), (443, 8443, 'tcp')])
>>> lb.configure_health_check(hc)

The load balancer has been created. To see where you can actually connect to it, do:

>>> print lb.dns_name
my_elb-123456789.us-east-1.elb.amazonaws.com

You can then CNAME map a better name, e.g. www.MYWEBSITE.com, to the above address.

Adding Instances To a Load Balancer

Now that the load balancer has been created, there are two ways to add instances to it:

  1. Manually, adding each instance in turn.
  2. Mapping an autoscale group to the load balancer. Please see the Autoscale tutorial for information on how to do this.
Manually Adding and Removing Instances

Assuming you have a list of instance ids, you can add them to the load balancer:

>>> instance_ids = ['i-4f8cf126', 'i-0bb7ca62']
>>> lb.register_instances(instance_ids)

Keep in mind that these instances should be in Security Groups that match the internal ports of the load balancer you just created (for this example, they should allow incoming connections on 8080 and 8443).
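For example, using the EC2 interface described earlier, you could open those instance ports on a hypothetical 'appserver' security group like this (a sketch; substitute your own group name and source range):

>>> import boto
>>> ec2conn = boto.connect_ec2()
>>> group = ec2conn.get_all_security_groups(['appserver'])[0]
>>> group.authorize('tcp', 8080, 8080, '0.0.0.0/0')
True
>>> group.authorize('tcp', 8443, 8443, '0.0.0.0/0')
True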

To remove instances:

>>> lb.deregister_instances(instance_ids)

Modifying Availability Zones for a Load Balancer

If you wanted to disable one or more zones from an existing load balancer:

>>> lb.disable_zones(['us-east-1a'])

You can then terminate each instance in the disabled zone and then deregister them from your load balancer.

To enable zones:

>>> lb.enable_zones(['us-east-1c'])

Deleting a Load Balancer

>>> lb.delete()

An Introduction to boto’s Autoscale interface

This tutorial focuses on the boto interface to the Autoscale service. This assumes you are familiar with boto’s EC2 interface and concepts.

Autoscale Concepts

The AWS Autoscale service is comprised of three core concepts:

  1. Autoscale Group (AG): An AG can be viewed as a collection of criteria for maintaining or scaling a set of EC2 instances over one or more availability zones. An AG is limited to a single region.
  2. Launch Configuration (LC): An LC is the set of information needed by the AG to launch new instances - this can encompass image ids, startup data, security groups and keys. Only one LC is attached to an AG.
  3. Triggers: A trigger is essentially a set of rules for determining when to scale an AG up or down. These rules can encompass a set of metrics such as average CPU usage across instances, or incoming requests, a threshold for when an action will take place, as well as parameters to control how long to wait after a threshold is crossed.

Creating a Connection

The first step in accessing autoscaling is to create a connection to the service. There are two ways to do this in boto. The first is:

>>> from boto.ec2.autoscale import AutoScaleConnection
>>> conn = AutoScaleConnection('<aws access key>', '<aws secret key>')

Alternatively, you can use the shortcut:

>>> conn = boto.connect_autoscale()
A Note About Regions and Endpoints

Like EC2 the Autoscale service has a different endpoint for each region. By default the US endpoint is used. To choose a specific region, instantiate the AutoScaleConnection object with that region’s endpoint.

>>> conn = boto.connect_autoscale(host='autoscaling.eu-west-1.amazonaws.com')

Alternatively, edit your boto.cfg with the default Autoscale endpoint to use:

[Boto]
autoscale_endpoint = autoscaling.eu-west-1.amazonaws.com
Getting Existing AutoScale Groups

To retrieve existing autoscale groups:

>>> conn.get_all_groups()

You will get back a list of AutoScale group objects, one for each AG you have.

Creating Autoscaling Groups

An Autoscaling group has a number of parameters associated with it.

  1. Name: The name of the AG.
  2. Availability Zones: The list of availability zones it is defined over.
  3. Minimum Size: Minimum number of instances running at one time.
  4. Maximum Size: Maximum number of instances running at one time.
  5. Launch Configuration (LC): A set of instructions on how to launch an instance.
  6. Load Balancer: An optional ELB load balancer to use. See the ELB tutorial for information on how to create a load balancer.

For the purposes of this tutorial, let’s assume we want to create one autoscale group over the us-east-1a and us-east-1b availability zones. We want to have two instances in each availability zone, thus a minimum size of 4. For now we won’t worry about scaling up or down - we’ll introduce that later when we talk about triggers. Thus we’ll set a maximum size of 4 as well. We’ll also associate the AG with a load balancer which we assume we’ve already created, called ‘my_lb’.

Our LC tells us how to start an instance. This will at least include the image id to use, security_group, and key information. We assume the image id, key name and security groups have already been defined elsewhere - see the EC2 tutorial for information on how to create these.

>>> from boto.ec2.autoscale import LaunchConfiguration
>>> from boto.ec2.autoscale import AutoScalingGroup
>>> lc = LaunchConfiguration(name='my-launch-config', image_id='my-ami',
                             key_name='my_key_name',
                             security_groups=['my_security_groups'])
>>> conn.create_launch_configuration(lc)

We now have created a launch configuration called ‘my-launch-config’. We are now ready to associate it with our new autoscale group.

>>> ag = AutoScalingGroup(group_name='my_group', load_balancers=['my_lb'],
                          availability_zones=['us-east-1a', 'us-east-1b'],
                          launch_config=lc, min_size=4, max_size=4)
>>> conn.create_auto_scaling_group(ag)

We now have a new autoscaling group defined! At this point instances should be starting to launch. To view activity on an autoscale group:

>>> ag.get_activities()
 [Activity:Launching a new EC2 instance status:Successful progress:100,
  ...]

or alternatively:

>>> conn.get_all_activities(ag)

This autoscale group is fairly useful in that it will maintain the minimum size without breaching the maximum size defined. That means if one instance crashes, the autoscale group will use the launch configuration to start a new one in an attempt to maintain its minimum defined size. It knows instance health using the health check defined on its associated load balancer.

Scaling a Group Up or Down

It might be more useful to also define means to scale a group up or down depending on certain criteria. For example, if the average CPU utilization of all your instances goes above 60%, you may want to scale up a number of instances to deal with demand - likewise you might want to scale down if usage drops. These criteria are defined in triggers.

For example, let’s modify our above group to have a max_size of 8 and define means of scaling up based on CPU utilization. We’ll say we should scale up if the average CPU usage goes above 80% and scale down if it goes below 40%.

>>> from boto.ec2.autoscale import Trigger
>>> tr = Trigger(name='my_trigger', autoscale_group=ag,
             measure_name='CPUUtilization', statistic='Average',
             unit='Percent',
             dimensions=[('AutoScalingGroupName', ag.name)],
             period=60, lower_threshold=40,
             lower_breach_scale_increment='-5',
             upper_threshold=80,
             upper_breach_scale_increment='10',
             breach_duration=360)
>>> conn.create_trigger(tr)

An Introduction to boto’s VPC interface

This tutorial is based on the examples in the Amazon Virtual Private Cloud Getting Started Guide (http://docs.amazonwebservices.com/AmazonVPC/latest/GettingStartedGuide/). In each example, it tries to show the boto requests that correspond to the AWS command line tools.

Creating a VPC connection

First, we need to create a new VPC connection:

>>> from boto.vpc import VPCConnection
>>> c = VPCConnection()

To create a VPC

Now that we have a VPC connection, we can create our first VPC.

>>> vpc = c.create_vpc('10.0.0.0/24')
>>> vpc
VPC:vpc-6b1fe402
>>> vpc.id
u'vpc-6b1fe402'
>>> vpc.state
u'pending'
>>> vpc.cidr_block
u'10.0.0.0/24'
>>> vpc.dhcp_options_id
u'default'
>>>

To create a subnet

The next step is to create a subnet to associate with your VPC.

>>> subnet = c.create_subnet(vpc.id, '10.0.0.0/25')
>>> subnet.id
u'subnet-6a1fe403'
>>> subnet.state
u'pending'
>>> subnet.cidr_block
u'10.0.0.0/25'
>>> subnet.available_ip_address_count
123
>>> subnet.availability_zone
u'us-east-1b'
>>>

To create a customer gateway

Next, we create a customer gateway.

>>> cg = c.create_customer_gateway('ipsec.1', '12.1.2.3', 65534)
>>> cg.id
u'cgw-b6a247df'
>>> cg.type
u'ipsec.1'
>>> cg.state
u'available'
>>> cg.ip_address
u'12.1.2.3'
>>> cg.bgp_asn
u'65534'
>>>

To create a VPN gateway

>>> vg = c.create_vpn_gateway('ipsec.1')
>>> vg.id
u'vgw-44ad482d'
>>> vg.type
u'ipsec.1'
>>> vg.state
u'pending'
>>> vg.availability_zone
u'us-east-1b'
>>>

Attaching a VPN Gateway to a VPC

>>> vg.attach(vpc.id)
>>>

An Introduction to boto’s Elastic Mapreduce interface

This tutorial focuses on the boto interface to Elastic Mapreduce from Amazon Web Services. This tutorial assumes that you have already downloaded and installed boto.

Creating a Connection

The first step in accessing Elastic Mapreduce is to create a connection to the service. There are two ways to do this in boto. The first is:

>>> from boto.emr.connection import EmrConnection
>>> conn = EmrConnection('<aws access key>', '<aws secret key>')

At this point the variable conn will point to an EmrConnection object. In this example, the AWS access key and AWS secret key are passed in to the method explicitly. Alternatively, you can set the environment variables:

AWS_ACCESS_KEY_ID - Your AWS Access Key ID
AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key

and then call the constructor without any arguments, like this:

>>> conn = EmrConnection()

There is also a shortcut function in the boto package called connect_emr that may provide a slightly easier means of creating a connection:

>>> import boto
>>> conn = boto.connect_emr()

In either case, conn points to an EmrConnection object which we will use throughout the remainder of this tutorial.

Creating Streaming JobFlow Steps

Upon creating a connection to Elastic Mapreduce you will next want to create one or more jobflow steps. There are two types of steps, streaming and custom jar, both of which have a class in the boto Elastic Mapreduce implementation.

Creating a streaming step that runs the AWS wordcount example, itself written in Python, can be accomplished by:

>>> from boto.emr.step import StreamingStep
>>> step = StreamingStep(name='My wordcount example',
...                      mapper='s3n://elasticmapreduce/samples/wordcount/wordSplitter.py',
...                      reducer='aggregate',
...                      input='s3n://elasticmapreduce/samples/wordcount/input',
...                      output='s3n://<my output bucket>/output/wordcount_output')

where <my output bucket> is a bucket you have created in S3.

Note that this statement does not run the step, that is accomplished later when we create a jobflow.

Additional arguments of note to the streaming jobflow step are cache_files, cache_archives and step_args. The options cache_files and cache_archives enable you to use Hadoop’s distributed cache to share files amongst the instances that run the step. The argument step_args allows one to pass additional arguments to Hadoop streaming, for example modifications to the Hadoop job configuration.
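For example, a sketch of passing Hadoop configuration through step_args (the -jobconf flag is a standard Hadoop streaming option; the output bucket is a placeholder) might look like this:

>>> step = StreamingStep(name='My wordcount example',
...                      mapper='s3n://elasticmapreduce/samples/wordcount/wordSplitter.py',
...                      reducer='aggregate',
...                      input='s3n://elasticmapreduce/samples/wordcount/input',
...                      output='s3n://<my output bucket>/output/wordcount_output',
...                      step_args=['-jobconf', 'mapred.reduce.tasks=2'])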

Creating Custom Jar Job Flow Steps

The second type of jobflow step executes tasks written with a custom jar. Creating a custom jar step for the AWS CloudBurst example can be accomplished by:

>>> from boto.emr.step import JarStep
>>> step = JarStep(name='Cloudburst example',
...                jar='s3n://elasticmapreduce/samples/cloudburst/cloudburst.jar',
...                step_args=['s3n://elasticmapreduce/samples/cloudburst/input/s_suis.br',
...                           's3n://elasticmapreduce/samples/cloudburst/input/100k.br',
...                           's3n://<my output bucket>/output/cloudfront_output',
...                            36, 3, 0, 1, 240, 48, 24, 24, 128, 16])

Note that this statement does not actually run the step, that is accomplished later when we create a jobflow. Also note that this JarStep does not include a main_class argument since the jar MANIFEST.MF has a Main-Class entry.

Creating JobFlows

Once you have created one or more jobflow steps, you will next want to create and run a jobflow. Creating a jobflow that executes either of the steps we created above can be accomplished by:

>>> import boto
>>> conn = boto.connect_emr()
>>> jobid = conn.run_jobflow(name='My jobflow',
...                          log_uri='s3://<my log uri>/jobflow_logs',
...                          steps=[step])

The method will not block for the completion of the jobflow, but will immediately return. The status of the jobflow can be determined by:

>>> status = conn.describe_jobflow(jobid)
>>> status.state
u'STARTING'

One can then use this state to block for a jobflow to complete. Valid jobflow states currently defined in the AWS API are COMPLETED, FAILED, TERMINATED, RUNNING, SHUTTING_DOWN, STARTING and WAITING.
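For example, a simple polling loop (just a sketch) that blocks until the jobflow reaches a terminal state:

>>> import time
>>> while conn.describe_jobflow(jobid).state not in (u'COMPLETED', u'FAILED', u'TERMINATED'):
...     time.sleep(30)
...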

In some cases you may not have built all of the steps prior to running the jobflow. In these cases additional steps can be added to a jobflow by running:

>>> conn.add_jobflow_steps(jobid, [second_step])

If you wish to add additional steps to a running jobflow you may want to set the keep_alive parameter to True in run_jobflow so that the jobflow does not automatically terminate when the first step completes.
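For example, a sketch of starting a jobflow that stays alive waiting for additional steps (using the same placeholder log URI as above):

>>> jobid = conn.run_jobflow(name='My jobflow',
...                          log_uri='s3://<my log uri>/jobflow_logs',
...                          keep_alive=True,
...                          steps=[step])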

The run_jobflow method has a number of important parameters that are worth investigating. They include parameters to change the number and type of EC2 instances on which the jobflow is executed, set an SSH key for manual debugging and enable AWS console debugging.

Terminating JobFlows

By default when all the steps of a jobflow have finished or failed the jobflow terminates. However, if you set the keep_alive parameter to True or just want to halt the execution of a jobflow early you can terminate a jobflow by:

>>> import boto
>>> conn = boto.connect_emr()
>>> conn.terminate_jobflow('<jobflow id>')

API Reference

boto

boto
class boto.NullHandler(level=0)

Initializes the instance - basically setting the formatter to None and the filter list to empty.

emit(record)
boto.check_extensions(module_name, module_path)

This function checks for extensions to boto modules. It should be called in the __init__.py file of all boto modules. See http://code.google.com/p/boto/wiki/ExtendModules for details.

boto.connect_autoscale(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.ec2.autoscale.AutoScaleConnection

Returns:

A connection to Amazon’s Auto Scaling Service

boto.connect_cloudfront(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.cloudfront.CloudFrontConnection

Returns:

A connection to Amazon’s CloudFront service

boto.connect_cloudwatch(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.ec2.cloudwatch.CloudWatchConnection

Returns:

A connection to Amazon’s EC2 Monitoring service

boto.connect_ec2(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.ec2.connection.EC2Connection

Returns:

A connection to Amazon’s EC2

boto.connect_elb(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.ec2.elb.ELBConnection

Returns:

A connection to Amazon’s Load Balancing Service

boto.connect_emr(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.emr.EmrConnection

Returns:

A connection to Elastic mapreduce

boto.connect_euca(host, aws_access_key_id=None, aws_secret_access_key=None, port=8773, path='/services/Eucalyptus', is_secure=False, **kwargs)

Connect to a Eucalyptus service.

Parameters:
  • host (string) – the host name or ip address of the Eucalyptus server
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.ec2.connection.EC2Connection

Returns:

A connection to Eucalyptus server
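As a quick illustration (the host name below is hypothetical), connecting to a Eucalyptus front end looks just like creating an EC2 connection:

>>> import boto
>>> conn = boto.connect_euca('euca.example.com',      # hypothetical Eucalyptus host
...                          '<your access key>',
...                          '<your secret key>')
>>> conn.get_all_images()   # the returned connection behaves like an EC2Connection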

boto.connect_fps(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.fps.connection.FPSConnection

Returns:

A connection to FPS

boto.connect_gs(gs_access_key_id=None, gs_secret_access_key=None, **kwargs)

Parameters:
  • gs_access_key_id (string) – Your Google Storage Access Key ID
  • gs_secret_access_key (string) – Your Google Storage Secret Access Key
Return type:

boto.gs.connection.GSConnection

Returns:

A connection to Google’s Storage service

boto.connect_ia(ia_access_key_id=None, ia_secret_access_key=None, is_secure=False, **kwargs)

Connect to the Internet Archive via their S3-like API.

Parameters:
  • ia_access_key_id (string) – Your IA Access Key ID. This will also look in your boto config file for an entry in the Credentials section called “ia_access_key_id”
  • ia_secret_access_key (string) – Your IA Secret Access Key. This will also look in your boto config file for an entry in the Credentials section called “ia_secret_access_key”
Return type:

boto.s3.connection.S3Connection

Returns:

A connection to the Internet Archive

boto.connect_iam(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.iam.IAMConnection

Returns:

A connection to Amazon’s IAM

boto.connect_mturk(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.mturk.connection.MTurkConnection

Returns:

A connection to MTurk

boto.connect_rds(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.rds.RDSConnection

Returns:

A connection to RDS

boto.connect_route53(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.dns.Route53Connection

Returns:

A connection to Amazon’s Route53 DNS Service

boto.connect_s3(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.s3.connection.S3Connection

Returns:

A connection to Amazon’s S3

boto.connect_sdb(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.sdb.connection.SDBConnection

Returns:

A connection to Amazon’s SDB

boto.connect_ses(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.ses.SESConnection

Returns:

A connection to Amazon’s SES

boto.connect_sns(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.sns.SNSConnection

Returns:

A connection to Amazon’s SNS

boto.connect_sqs(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.sqs.connection.SQSConnection

Returns:

A connection to Amazon’s SQS

boto.connect_vpc(aws_access_key_id=None, aws_secret_access_key=None, **kwargs)
Parameters:
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.vpc.VPCConnection

Returns:

A connection to VPC

boto.connect_walrus(host, aws_access_key_id=None, aws_secret_access_key=None, port=8773, path='/services/Walrus', is_secure=False, **kwargs)

Connect to a Walrus service.

Parameters:
  • host (string) – the host name or ip address of the Walrus server
  • aws_access_key_id (string) – Your AWS Access Key ID
  • aws_secret_access_key (string) – Your AWS Secret Access Key
Return type:

boto.s3.connection.S3Connection

Returns:

A connection to Walrus

boto.init_logging()
boto.lookup(service, name)
boto.set_file_logger(name, filepath, level=20, format_string=None)
boto.set_stream_logger(name, level=10, format_string=None)
boto.storage_uri(uri_str, default_scheme='file', debug=0, validate=True, bucket_storage_uri_class=<class 'boto.storage_uri.BucketStorageUri'>)

Instantiate a StorageUri from a URI string.

Parameters:
  • uri_str (string) – URI naming bucket + optional object.
  • default_scheme (string) – default scheme for scheme-less URIs.
  • debug (int) – debug level to pass in to boto connection (range 0..2).
  • validate (bool) – whether to check for bucket name validity.
  • bucket_storage_uri_class (BucketStorageUri interface.) – Allows mocking for unit tests.

We allow validate to be disabled so that the caller can implement bucket-level wildcarding (outside the boto library; see gsutil).

Return type:boto.StorageUri subclass
Returns:StorageUri subclass for given URI.

uri_str must be one of the following formats:

  • gs://bucket/name
  • s3://bucket/name
  • gs://bucket
  • s3://bucket
  • filename

The last example uses the default scheme (‘file’, unless overridden).
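For example (a minimal sketch; the bucket and key names are hypothetical, and the bucket_name/object_name attributes are those of the default BucketStorageUri class):

>>> import boto
>>> uri = boto.storage_uri('s3://mybucket/mykey')   # hypothetical bucket and key
>>> uri.bucket_name
'mybucket'
>>> uri.object_name
'mykey'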

boto.storage_uri_for_key(key)

Returns a StorageUri for the given key.

Parameters:key (boto.s3.key.Key or subclass) – URI naming bucket + optional object.
boto.connection

Handles basic connections to AWS

class boto.connection.AWSAuthConnection(host, aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, path='/', provider='aws')
Parameters:
  • host (str) – The host to make the connection to
  • aws_access_key_id (str) – Your AWS Access Key ID (provided by Amazon). If none is specified, the value in your AWS_ACCESS_KEY_ID environmental variable is used.
  • aws_secret_access_key (str) – Your AWS Secret Access Key (provided by Amazon). If none is specified, the value in your AWS_SECRET_ACCESS_KEY environmental variable is used.
  • is_secure (boolean) – Whether the connection is over SSL
  • https_connection_factory (list or tuple) – A pair of an HTTP connection factory and the exceptions to catch. The factory should have a similar interface to L{httplib.HTTPSConnection}.
  • proxy (str) – Address/hostname for a proxy server
  • proxy_port (int) – The port to use when connecting over a proxy
  • proxy_user (str) – The username to connect with on the proxy
  • proxy_pass (str) – The password to use when connecting over a proxy.
  • port (int) – The port to use to connect
access_key
aws_access_key_id
aws_secret_access_key
build_base_http_request(method, path, auth_path, params=None, headers=None, data='', host=None)
close()

(Optional) Close any open HTTP connections. This is non-destructive, and making a new request will open a connection again.

connection
get_http_connection(host, is_secure)
get_path(path='/')
get_proxy_auth_header()
gs_access_key_id
gs_secret_access_key
handle_proxy(proxy, proxy_port, proxy_user, proxy_pass)
make_request(method, path, headers=None, data='', host=None, auth_path=None, sender=None, override_num_retries=None)

Makes a request to the server, with stock multiple-retry logic.

new_http_connection(host, is_secure)
prefix_proxy_to_path(path, host=None)
proxy_ssl()
put_http_connection(host, is_secure, connection)
secret_key
server_name(port=None)
class boto.connection.AWSQueryConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host=None, debug=0, https_connection_factory=None, path='/')
APIVersion = ''
ResponseError

alias of BotoServerError

build_list_params(params, items, label)
get_list(action, params, markers, path='/', parent=None, verb='GET')
get_object(action, params, cls, path='/', parent=None, verb='GET')
get_status(action, params, path='/', parent=None, verb='GET')
get_utf8_value(value)
make_request(action, params=None, path='/', verb='GET')
class boto.connection.ConnectionPool(hosts, connections_per_host)
class boto.connection.HTTPRequest(method, protocol, host, port, path, auth_path, params, headers, body)

Represents an HTTP request.

Parameters:
  • method (string) – The HTTP method name, ‘GET’, ‘POST’, ‘PUT’ etc.
  • protocol (string) – The http protocol used, ‘http’ or ‘https’.
  • host (string) – Host to which the request is addressed. eg. abc.com
  • port (int) – port on which the request is being sent. Zero means unset, in which case default port will be chosen.
  • path (string) – URL path that is being accessed.
  • auth_path (string) – The part of the URL path used when creating the authentication string.
  • params (dict) – HTTP url query parameters, with key as name of the param, and value as value of param.
  • headers (dict) – HTTP headers, with key as name of the header and value as value of header.
  • body (string) – Body of the HTTP request. If not present, will be None or empty string (‘’).
authorize(connection, **kwargs)
boto.exception

Exception classes - Subclassing allows you to check for specific errors

exception boto.exception.AWSConnectionError(reason, *args)

General error connecting to Amazon Web Services.

exception boto.exception.BotoClientError(reason, *args)

General Boto Client error (error accessing AWS)

exception boto.exception.BotoServerError(status, reason, body=None, *args)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.exception.ConsoleOutput(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
exception boto.exception.EC2ResponseError(status, reason, body=None)

Error in response from EC2.

endElement(name, value, connection)
startElement(name, attrs, connection)
exception boto.exception.EmrResponseError(status, reason, body=None, *args)

Error in response from EMR

exception boto.exception.FPSResponseError(status, reason, body=None, *args)
exception boto.exception.GSCopyError(status, reason, body=None, *args)

Error copying a key on GS.

exception boto.exception.GSCreateError(status, reason, body=None)

Error creating a bucket or key on GS.

exception boto.exception.GSDataError(reason, *args)

Error receiving data from GS.

exception boto.exception.GSPermissionsError(reason, *args)

Permissions error when accessing a bucket or key on GS.

exception boto.exception.GSResponseError(status, reason, body=None)

Error in response from GS.

exception boto.exception.InvalidAclError(message)

Exception raised when ACL XML is invalid.

exception boto.exception.InvalidUriError(message)

Exception raised when URI is invalid.

exception boto.exception.NoAuthHandlerFound

Raised when no auth handlers were found ready to authenticate.

exception boto.exception.ResumableDownloadException(message, disposition)

Exception raised for various resumable download problems.

self.disposition is of type ResumableTransferDisposition.

class boto.exception.ResumableTransferDisposition
ABORT = 'ABORT'
ABORT_CUR_PROCESS = 'ABORT_CUR_PROCESS'
START_OVER = 'START_OVER'
WAIT_BEFORE_RETRY = 'WAIT_BEFORE_RETRY'
exception boto.exception.ResumableUploadException(message, disposition)

Exception raised for various resumable upload problems.

self.disposition is of type ResumableTransferDisposition.

exception boto.exception.S3CopyError(status, reason, body=None, *args)

Error copying a key on S3.

exception boto.exception.S3CreateError(status, reason, body=None)

Error creating a bucket or key on S3.

exception boto.exception.S3DataError(reason, *args)

Error receiving data from S3.

exception boto.exception.S3PermissionsError(reason, *args)

Permissions error when accessing a bucket or key on S3.

exception boto.exception.S3ResponseError(status, reason, body=None)

Error in response from S3.

exception boto.exception.SDBPersistenceError
exception boto.exception.SDBResponseError(status, reason, body=None, *args)

Error in responses from SDB.

exception boto.exception.SQSDecodeError(reason, message)

Error when decoding an SQS message.

exception boto.exception.SQSError(status, reason, body=None)

General Error on Simple Queue Service.

endElement(name, value, connection)
startElement(name, attrs, connection)
exception boto.exception.StorageCopyError(status, reason, body=None, *args)

Error copying a key on a storage service.

exception boto.exception.StorageCreateError(status, reason, body=None)

Error creating a bucket or key on a storage service.

endElement(name, value, connection)
exception boto.exception.StorageDataError(reason, *args)

Error receiving data from a storage service.

exception boto.exception.StoragePermissionsError(reason, *args)

Permissions error when accessing a bucket or key on a storage service.

exception boto.exception.StorageResponseError(status, reason, body=None)

Error in response from a storage service.

endElement(name, value, connection)
startElement(name, attrs, connection)
exception boto.exception.TooManyAuthHandlerReadyToAuthenticate

Raised when more than one auth handler is ready to authenticate.

Normally there should be only one auth handler that is ready to authenticate. When more than one is ready, we raise this exception to prevent the unpredictable behavior that results when multiple auth handlers can handle a particular case and the one chosen depends on the order in which they were checked.

boto.handler
class boto.handler.XmlHandler(root_node, connection)
characters(content)
endElement(name)
startElement(name, attrs)
boto.resultset
class boto.resultset.BooleanResult(marker_elem=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
to_boolean(value, true_value='true')
class boto.resultset.ResultSet(marker_elem=None)

The ResultSet is used to pass results back from the Amazon services to the client. It is light wrapper around Python’s list class, with some additional methods for parsing XML results from AWS. Because I don’t really want any dependencies on external libraries, I’m using the standard SAX parser that comes with Python. The good news is that it’s quite fast and efficient but it makes some things rather difficult.

You can pass in, as the marker_elem parameter, a list of tuples. Each tuple contains a string as the first element which represents the XML element that the resultset needs to be on the lookout for and a Python class as the second element of the tuple. Each time the specified element is found in the XML, a new instance of the class will be created and popped onto the stack.

Variables:next_token (str) – A hash used to assist in paging through very long result sets. In most cases, passing this value to certain methods will give you another ‘page’ of results.
endElement(name, value, connection)
startElement(name, attrs, connection)
to_boolean(value, true_value='true')
boto.utils

Some handy utility functions used by several classes.

class boto.utils.AuthSMTPHandler(mailhost, username, password, fromaddr, toaddrs, subject)

This class extends the SMTPHandler in the standard Python logging module to accept a username and password on the constructor and to then use those credentials to authenticate with the SMTP server. To use this, you could add something like this in your boto config file:

[handler_hand07]
class=boto.utils.AuthSMTPHandler
level=WARN
formatter=form07
args=('localhost', 'username', 'password', 'from@abc', ['user1@abc', 'user2@xyz'], 'Logger Subject')

Initialize the handler.

We have extended the constructor to accept a username/password for SMTP authentication.

emit(record)

Emit a record.

Format the record and send it to the specified addressees. It would be really nice if I could add authorization to this class without having to resort to cut and paste inheritance but, no.

class boto.utils.LRUCache(capacity)

A dictionary-like object that stores only a certain number of items, and discards its least recently used item when full.

>>> cache = LRUCache(3)
>>> cache['A'] = 0
>>> cache['B'] = 1
>>> cache['C'] = 2
>>> len(cache)
3
>>> cache['A']
0

Adding new items to the cache does not increase its size. Instead, the least recently used item is dropped:

>>> cache['D'] = 3
>>> len(cache)
3
>>> 'B' in cache
False

Iterating over the cache returns the keys, starting with the most recently used:

>>> for key in cache:
...     print key
D
A
C

This code is based on the LRUCache class from Genshi, which is based on Myghty’s LRUCache from myghtyutils.util, written by Mike Bayer and released under the MIT license (Genshi uses the BSD License).

class boto.utils.Password(str=None, hashfunc=None)

Password object that stores itself as hashed. Hash defaults to SHA512 if available, MD5 otherwise.

Load the string from an initial value; this should be the raw hashed password.

hashfunc()

Returns a sha512 hash object; optionally initialized with a string

set(value)
class boto.utils.ShellCommand(command, wait=True, fail_fast=False, cwd=None)
getOutput()
getStatus()
output

The STDOUT and STDERR output of the command

run(cwd=None)
setReadOnly(value)
status

The exit code for the command

boto.utils.canonical_string(method, path, headers, expires=None, provider=None)
boto.utils.fetch_file(uri, file=None, username=None, password=None)

Fetch a file based on the URI provided. If you do not pass in a file pointer, a tempfile.NamedTemporaryFile is returned (or None if the file could not be retrieved). The URI can be either an HTTP URL or “s3://bucket_name/key_name”.

boto.utils.find_class(module_name, class_name=None)
boto.utils.get_aws_metadata(headers, provider=None)
boto.utils.get_instance_metadata(version='latest', url='http://169.254.169.254')

Returns the instance metadata as a nested Python dictionary. Simple values (e.g. local_hostname, hostname, etc.) will be stored as string values. Values such as ancestor-ami-ids will be stored in the dict as a list of string values. More complex fields such as public-keys will be stored as nested dicts.
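For example, from within a running EC2 instance (this only works on an instance, and the key and return values shown are illustrative):

>>> from boto.utils import get_instance_metadata, get_instance_userdata
>>> md = get_instance_metadata()   # queries http://169.254.169.254; only works on an EC2 instance
>>> md['instance-id']              # assumed key name; keys mirror the metadata service paths
'i-e573e68c'
>>> get_instance_userdata()        # raw user data string the instance was launched with
'...'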

boto.utils.get_instance_userdata(version='latest', sep=None, url='http://169.254.169.254')
boto.utils.get_ts(ts=None)
boto.utils.get_utf8_value(value)
boto.utils.guess_mime_type(content, deftype)

Guess the MIME type of a block of text.

Parameters:
  • content (str) – The content whose MIME type is being guessed.
  • deftype – Default MIME type to use if no better guess can be made.
Return type:str
Returns:The guessed MIME type.
boto.utils.merge_meta(headers, metadata, provider=None)
boto.utils.mklist(value)
boto.utils.notify(subject, body=None, html_body=None, to_string=None, attachments=None, append_instance_id=True)
boto.utils.parse_ts(ts)
boto.utils.pythonize_name(name, sep='_')
boto.utils.retry_url(url, retry_on_404=True, num_retries=10)
boto.utils.update_dme(username, password, dme_id, ip_address)

Update your Dynamic DNS record with DNSMadeEasy.com

boto.utils.write_mime_multipart(content, compress=False, deftype='text/plain', delimiter=':')

Parameters:
  • content (list of tuples) – A list of (name, content) pairs. A list is used instead of a dict to ensure that the scripts run in order.
  • compress – Use gzip to compress the scripts; defaults to no compression.
  • deftype – The MIME type that should be assumed if nothing else can be figured out.
  • delimiter – MIME delimiter.
Return type:

str

Returns:

The final MIME multipart document.

cloudfront

A Crash Course in CloudFront in Boto

This boto module provides an interface to Amazon’s content delivery service, CloudFront.

Caveats:

This module is not well tested. Paging of distributions is not yet supported. CNAME support is completely untested. Use with caution. Feedback and bug reports are greatly appreciated.

The following shows the main features of the cloudfront module from an interactive shell:

Create a CloudFront connection:

>>> from boto.cloudfront import CloudFrontConnection
>>> c = CloudFrontConnection()

Create a new boto.cloudfront.distribution.Distribution:

>>> distro = c.create_distribution(origin='mybucket.s3.amazonaws.com', enabled=False, comment='My new distribution')
>>> distro.domain_name
u'd2oxf3980lnb8l.cloudfront.net'
>>> distro.id
u'ECH69MOIW7613'
>>> distro.status
u'InProgress'
>>> distro.config.comment
u'My new distribution'
>>> distro.config.origin
u'mybucket.s3.amazonaws.com'
>>> distro.config.caller_reference
u'31b8d9cf-a623-4a28-b062-a91856fac6d0'
>>> distro.config.enabled
False

Note that a new caller reference is created automatically, using uuid.uuid4(). The boto.cloudfront.distribution.Distribution, boto.cloudfront.distribution.DistributionConfig and boto.cloudfront.distribution.DistributionSummary objects are defined in the boto.cloudfront.distribution module.

To get a listing of all current distributions:

>>> rs = c.get_all_distributions()
>>> rs
[<boto.cloudfront.distribution.DistributionSummary instance at 0xe8d4e0>,
 <boto.cloudfront.distribution.DistributionSummary instance at 0xe8d788>]

This returns a list of boto.cloudfront.distribution.DistributionSummary objects. Note that paging is not yet supported! To get a boto.cloudfront.distribution.Distribution object from a boto.cloudfront.distribution.DistributionSummary object:

>>> ds = rs[1]
>>> distro = ds.get_distribution()
>>> distro.domain_name
u'd2oxf3980lnb8l.cloudfront.net'

To change a property of a distribution object:

>>> distro.comment
u'My new distribution'
>>> distro.update(comment='This is a much better comment')
>>> distro.comment
'This is a much better comment'

You can also enable/disable a distribution using the following convenience methods:

>>> distro.enable()  # just calls distro.update(enabled=True)

or

>>> distro.disable()  # just calls distro.update(enabled=False)

The only attributes that can be updated for a Distribution are comment, enabled and cnames.

To delete a boto.cloudfront.distribution.Distribution:

>>> distro.delete()
boto.cloudfront
class boto.cloudfront.CloudFrontConnection(aws_access_key_id=None, aws_secret_access_key=None, port=None, proxy=None, proxy_port=None, host='cloudfront.amazonaws.com', debug=0)
DefaultHost = 'cloudfront.amazonaws.com'
Version = '2010-11-01'
create_distribution(origin, enabled, caller_reference='', cnames=None, comment='')
create_invalidation_request(distribution_id, paths, caller_reference=None)

Creates a new invalidation request. See: http://goo.gl/8vECq
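For example, reusing the connection c and distribution distro from the crash course above (the object paths are hypothetical):

>>> paths = ['/index.html', '/images/logo.png']   # hypothetical object paths to invalidate
>>> inval_req = c.create_invalidation_request(distro.id, paths)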

create_origin_access_identity(caller_reference='', comment='')
create_streaming_distribution(origin, enabled, caller_reference='', cnames=None, comment='')
delete_distribution(distribution_id, etag)
delete_origin_access_identity(access_id, etag)
delete_streaming_distribution(distribution_id, etag)
get_all_distributions()
get_all_origin_access_identity()
get_all_streaming_distributions()
get_distribution_config(distribution_id)
get_distribution_info(distribution_id)
get_etag(response)
get_origin_access_identity_config(access_id)
get_origin_access_identity_info(access_id)
get_streaming_distribution_config(distribution_id)
get_streaming_distribution_info(distribution_id)
invalidation_request_status(distribution_id, request_id, caller_reference=None)
set_distribution_config(distribution_id, etag, config)
set_origin_access_identity_config(access_id, etag, config)
set_streaming_distribution_config(distribution_id, etag, config)
boto.cloudfront.distribution
class boto.cloudfront.distribution.Distribution(connection=None, config=None, domain_name='', id='', last_modified_time=None, status='')
add_object(name, content, headers=None, replace=True)

Adds a new content object to the Distribution. The content for the object will be copied to a new Key in the S3 Bucket and the permissions will be set appropriately for the type of Distribution.

Parameters:
  • name (str or unicode) – The name or key of the new object.
  • content (file-like object) – A file-like object that contains the content for the new object.
  • headers (dict) – A dictionary containing additional headers you would like associated with the new object in S3.
Return type:

boto.cloudfront.object.Object

Returns:

The newly created object.
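A minimal sketch, using the distro object from the crash course above and a hypothetical local file:

>>> fp = open('index.html', 'rb')              # hypothetical local file to serve
>>> obj = distro.add_object('index.html', fp)  # copies the content into the origin bucket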

delete()

Delete this CloudFront Distribution. The content associated with the Distribution is not deleted from the underlying Origin bucket in S3.

disable()

Deactivate the Distribution. A convenience wrapper around the update method.

enable()

Activate the Distribution. A convenience wrapper around the update method.

endElement(name, value, connection)
get_objects()

Return a list of all content objects in this distribution.

Return type:list of boto.cloudfront.object.Object
Returns:The content objects
set_permissions(object, replace=False)

Sets the S3 ACL grants for the given object to the appropriate value based on the type of Distribution. If the Distribution is serving private content the ACL will be set to include the Origin Access Identity associated with the Distribution. If the Distribution is serving public content the content will be set up with “public-read”.

Parameters:
  • object – The Object whose ACL is being set
  • replace (bool) – If False, the Origin Access Identity will be appended to the existing ACL for the object. If True, the ACL for the object will be completely replaced with one that grants READ permission to the Origin Access Identity.
set_permissions_all(replace=False)

Sets the S3 ACL grants for all objects in the Distribution to the appropriate value based on the type of Distribution.

Parameters:replace (bool) – If False, the Origin Access Identity will be appended to the existing ACL for the object. If True, the ACL for the object will be completely replaced with one that grants READ permission to the Origin Access Identity.
startElement(name, attrs, connection)
update(enabled=None, cnames=None, comment=None)

Update the configuration of the Distribution. The only values of the DistributionConfig that can be updated are:

  • CNAMES
  • Comment
  • Whether the Distribution is enabled or not
Parameters:
  • enabled (bool) – Whether the Distribution is active or not.
  • cnames (list of str) – The DNS CNAME’s associated with this Distribution. Maximum of 10 values.
  • comment (str or unicode) – The comment associated with the Distribution.
class boto.cloudfront.distribution.DistributionConfig(connection=None, origin=None, enabled=False, caller_reference='', cnames=None, comment='', trusted_signers=None, default_root_object=None, logging=None)
Parameters:
  • origin (boto.cloudfront.origin.S3Origin or boto.cloudfront.origin.CustomOrigin) – Origin information to associate with the distribution. If your distribution will use an Amazon S3 origin, then this should be an S3Origin object. If your distribution will use a custom origin (non Amazon S3), then this should be a CustomOrigin object.
  • enabled (bool) – Whether the distribution is enabled to accept end user requests for content.
  • caller_reference – A unique number that ensures the request can’t be replayed. If no caller_reference is provided, boto will generate a type 4 UUID for use as the caller reference.
  • cnames – A CNAME alias you want to associate with this distribution. You can have up to 10 CNAME aliases per distribution.
  • comment (str) – Any comments you want to include about the distribution.
  • trusted_signers (:class`boto.cloudfront.signers.TrustedSigners`) – Specifies any AWS accounts you want to permit to create signed URLs for private content. If you want the distribution to use signed URLs, this should contain a TrustedSigners object; if you want the distribution to use basic URLs, leave this None.
  • default_root_object – Designates a default root object. Only include a DefaultRootObject value if you are going to assign a default root object for the distribution.
  • logging (:class`boto.cloudfront.logging.LoggingInfo`) – Controls whether access logs are written for the distribution. If you want to turn on access logs, this should contain a LoggingInfo object; otherwise it should contain None.
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
class boto.cloudfront.distribution.DistributionSummary(connection=None, domain_name='', id='', last_modified_time=None, status='', origin=None, cname='', comment='', enabled=False)
endElement(name, value, connection)
get_distribution()
startElement(name, attrs, connection)
class boto.cloudfront.distribution.StreamingDistribution(connection=None, config=None, domain_name='', id='', last_modified_time=None, status='')
delete()
startElement(name, attrs, connection)
update(enabled=None, cnames=None, comment=None)

Update the configuration of the StreamingDistribution. The only values of the StreamingDistributionConfig that can be updated are:

  • CNAMES
  • Comment
  • Whether the Distribution is enabled or not
Parameters:
  • enabled (bool) – Whether the StreamingDistribution is active or not.
  • cnames (list of str) – The DNS CNAME’s associated with this Distribution. Maximum of 10 values.
  • comment (str or unicode) – The comment associated with the Distribution.
class boto.cloudfront.distribution.StreamingDistributionConfig(connection=None, origin='', enabled=False, caller_reference='', cnames=None, comment='', trusted_signers=None, logging=None)
to_xml()
class boto.cloudfront.distribution.StreamingDistributionSummary(connection=None, domain_name='', id='', last_modified_time=None, status='', origin=None, cname='', comment='', enabled=False)
get_distribution()
boto.cloudfront.exception
exception boto.cloudfront.exception.CloudFrontServerError(status, reason, body=None, *args)

contrib

boto.contrib
boto.contrib.m2helpers

Note

This module requires installation of M2Crypto in your Python path.

boto.contrib.ymlmessage

This module was contributed by Chris Moyer. It provides a subclass of the SQS Message class that supports YAML as the body of the message.

This module requires the yaml module.

class boto.contrib.ymlmessage.YAMLMessage(queue=None, body='', xml_attrs=None)

The YAMLMessage class provides a YAML compatible message. Encoding and decoding are handled automatically.

Access this message data like such:

m.data = [1, 2, 3]
m.data[0]  # Returns 1

This depends on the PyYAML package.

get_body()
set_body(body)

EC2

boto.ec2

This module provides an interface to the Elastic Compute Cloud (EC2) service from AWS.

boto.ec2.connect_to_region(region_name, **kw_params)

Given a valid region name, return a boto.ec2.connection.EC2Connection.

Parameters:region_name (str) – The name of the region to connect to.
Return type:boto.ec2.connection.EC2Connection or None
Returns:A connection to the given region, or None if an invalid region name is given
boto.ec2.get_region(region_name, **kw_params)

Find and return a boto.ec2.regioninfo.RegionInfo object given a region name.

Parameters:region_name (str) – The name of the region.
Return type:boto.ec2.regioninfo.RegionInfo
Returns:The RegionInfo object for the given region or None if an invalid region name is provided.
boto.ec2.regions(**kw_params)

Get all available regions for the EC2 service. You may pass any of the arguments accepted by the EC2Connection object’s constructor as keyword arguments and they will be passed along to the EC2Connection object.

Return type:list
Returns:A list of boto.ec2.regioninfo.RegionInfo
boto.ec2.address

Represents an EC2 Elastic IP Address

class boto.ec2.address.Address(connection=None, public_ip=None, instance_id=None)
associate(instance_id)
delete()
disassociate()
endElement(name, value, connection)
release()
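A short sketch of the typical life cycle of an Address. The instance id is hypothetical; allocate_address is the EC2Connection method that returns a new Address:

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> addr = ec2.allocate_address()   # allocate a new Elastic IP
>>> addr.associate('i-e573e68c')    # hypothetical instance id
>>> addr.disassociate()
>>> addr.release()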
boto.ec2.autoscale

This module provides an interface to the Elastic Compute Cloud (EC2) Auto Scaling service.

class boto.ec2.autoscale.AutoScaleConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=1, https_connection_factory=None, region=None, path='/')

Init method to create a new connection to the AutoScaling service.

Note: The host argument is overridden by the host specified in the boto configuration file.
APIVersion = '2010-08-01'
DefaultRegionEndpoint = 'autoscaling.amazonaws.com'
DefaultRegionName = 'us-east-1'
build_list_params(params, items, label)
items is a list of dictionaries or strings:

[{'Protocol': 'HTTP', 'LoadBalancerPort': '80', 'InstancePort': '80'}, ...]

or

['us-east-1b', ...]
create_auto_scaling_group(as_group)

Create auto scaling group.

create_launch_configuration(launch_config)

Creates a new Launch Configuration.

Parameters:launch_config (boto.ec2.autoscale.launchconfig.LaunchConfiguration) – LaunchConfiguration object.
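Taken together with create_auto_scaling_group above, a minimal sketch looks like this. All names, the AMI id, and the availability zone are hypothetical placeholders:

>>> import boto
>>> from boto.ec2.autoscale.launchconfig import LaunchConfiguration
>>> from boto.ec2.autoscale.group import AutoScalingGroup
>>> conn = boto.connect_autoscale()
>>> lc = LaunchConfiguration(name='my-launch-config',   # hypothetical name
...                          image_id='ami-xxxxxxxx',   # hypothetical AMI id
...                          key_name='my-keypair',
...                          instance_type='m1.small')
>>> conn.create_launch_configuration(lc)
>>> ag = AutoScalingGroup(name='my-group',              # hypothetical group name
...                       availability_zones=['us-east-1b'],
...                       launch_config=lc,
...                       min_size=1, max_size=4)
>>> conn.create_auto_scaling_group(ag)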
create_scaling_policy(scaling_policy)

Creates a new Scaling Policy.

Parameters:scaling_policy (boto.ec2.autoscale.policy.ScalingPolicy) – ScalingPolicy object.
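For example (the policy and group names are hypothetical; see the ScalingPolicy class below for the accepted keyword arguments):

>>> from boto.ec2.autoscale.policy import ScalingPolicy
>>> policy = ScalingPolicy(name='scale-up',              # hypothetical policy name
...                        adjustment_type='ChangeInCapacity',
...                        as_name='my-group',           # hypothetical group name
...                        scaling_adjustment=1,         # add one instance when triggered
...                        cooldown=300)
>>> conn.create_scaling_policy(policy)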
create_scheduled_group_action(as_group, name, time, desired_capacity=None, min_size=None, max_size=None)

Creates a scheduled scaling action for an Auto Scaling group. If you leave a parameter unspecified, the corresponding value remains unchanged in the affected Auto Scaling group.

Parameters:
  • as_group (string) – The auto scaling group to get activities on.
  • name (string) – Scheduled action name.
  • time (datetime.datetime) – The time for this action to start.
  • desired_capacity (int) – The number of EC2 instances that should be running in this group.
  • min_size (int) – The minimum size for the new auto scaling group.
  • max_size (int) – The maximum size for the new auto scaling group.
delete_auto_scaling_group(name)

Deletes the specified auto scaling group if the group has no instances and no scaling activities in progress.

delete_launch_configuration(launch_config_name)

Deletes the specified LaunchConfiguration.

The specified launch configuration must not be attached to an Auto Scaling group. Once this call completes, the launch configuration is no longer available for use.

delete_policy(policy_name, autoscale_group=None)
delete_scheduled_action(scheduled_action_name, autoscale_group=None)
disable_metrics_collection(as_group, metrics=None)

Disables monitoring of group metrics for the Auto Scaling group specified in AutoScalingGroupName. You can specify the list of affected metrics with the Metrics parameter.

enable_metrics_collection(as_group, granularity, metrics=None)

Enables monitoring of group metrics for the Auto Scaling group specified in AutoScalingGroupName. You can specify the list of enabled metrics with the Metrics parameter.

Auto scaling metrics collection can be turned on only if the InstanceMonitoring.Enabled flag, in the Auto Scaling group’s launch configuration, is set to true.

Parameters:
  • autoscale_group (string) – The auto scaling group to get activities on.
  • granularity (string) – The granularity to associate with the metrics to collect. Currently, the only legal granularity is “1Minute”.
  • metrics (string list) – The list of metrics to collect. If no metrics are specified, all metrics are enabled.
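For example (the group name is hypothetical; ‘1Minute’ is currently the only legal granularity):

>>> conn.enable_metrics_collection('my-group', '1Minute')   # hypothetical group name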
execute_policy(policy_name, as_group=None, honor_cooldown=None)
get_all_activities(autoscale_group, activity_ids=None, max_records=None, next_token=None)

Get all activities for the given autoscaling group.

This action supports pagination by returning a token if there are more pages to retrieve. To get the next page, call this action again with the returned token as the NextToken parameter.

Parameters:
Return type:

list

Returns:

List of boto.ec2.autoscale.activity.Activity instances.

get_all_adjustment_types()
get_all_autoscaling_instances(instance_ids=None, max_records=None, next_token=None)

Returns a description of each Auto Scaling instance in the instance_ids list. If a list is not provided, the service returns the full details of all instances up to a maximum of fifty.

This action supports pagination by returning a token if there are more pages to retrieve. To get the next page, call this action again with the returned token as the NextToken parameter.

Parameters:
  • instance_ids (list) – List of Autoscaling Instance IDs which should be searched for.
  • max_records (int) – Maximum number of results to return.
Return type:

list

Returns:

List of boto.ec2.autoscale.instance.Instance instances.

get_all_groups(names=None, max_records=None, next_token=None)

Returns a full description of each Auto Scaling group in the given list. This includes all Amazon EC2 instances that are members of the group. If a list of names is not provided, the service returns the full details of all Auto Scaling groups.

This action supports pagination by returning a token if there are more pages to retrieve. To get the next page, call this action again with the returned token as the NextToken parameter.

Parameters:
  • names (list) – List of group names which should be searched for.
  • max_records (int) – Maximum amount of groups to return.
Return type:

list

Returns:

List of boto.ec2.autoscale.group.AutoScalingGroup instances.

get_all_launch_configurations(**kwargs)

Returns a full description of the launch configurations given the specified names.

If no names are specified, then the full details of all launch configurations are returned.

Parameters:
  • names (list) – List of configuration names which should be searched for.
  • max_records (int) – Maximum amount of configurations to return.
  • next_token (str) – If you have more results than can be returned at once, pass in this parameter to page through all results.
Return type:

list

Returns:

List of boto.ec2.autoscale.launchconfig.LaunchConfiguration instances.

get_all_metric_collection_types()

Returns a list of metrics and a corresponding list of granularities for each metric.

get_all_policies(as_group=None, policy_names=None, max_records=None, next_token=None)

Returns descriptions of what each policy does. This action supports pagination. If the response includes a token, there are more records available. To get the additional records, repeat the request with the response token as the NextToken parameter.

If no group name or list of policy names are provided, all available policies are returned.

Parameters:
get_all_scaling_process_types()

Returns scaling process types for use in the ResumeProcesses and SuspendProcesses actions.

get_all_scheduled_actions(as_group=None, start_time=None, end_time=None, scheduled_actions=None, max_records=None, next_token=None)
resume_processes(as_group, scaling_processes=None)

Resumes Auto Scaling processes for an Auto Scaling group.

Parameters:
  • as_group (string) – The auto scaling group to resume processes on.
  • scaling_processes (list) – Processes you want to resume. If omitted, all processes will be resumed.
set_instance_health(instance_id, health_status, should_respect_grace_period=True)

Explicitly set the health status of an instance.

Parameters:
  • instance_id (str) – The identifier of the EC2 instance.
  • health_status (str) – The health status of the instance. “Healthy” means that the instance is healthy and should remain in service. “Unhealthy” means that the instance is unhealthy. Auto Scaling should terminate and replace it.
  • should_respect_grace_period (bool) – If True, this call should respect the grace period associated with the group.
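For example, to mark a hypothetical instance as unhealthy so Auto Scaling terminates and replaces it:

>>> conn.set_instance_health('i-e573e68c', 'Unhealthy')   # hypothetical instance id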
suspend_processes(as_group, scaling_processes=None)

Suspends Auto Scaling processes for an Auto Scaling group.

Parameters:
  • as_group (string) – The auto scaling group to suspend processes on.
  • scaling_processes (list) – Processes you want to suspend. If omitted, all processes will be suspended.
terminate_instance(instance_id, decrement_capacity=True)
boto.ec2.autoscale.connect_to_region(region_name, **kw_params)

Given a valid region name, return a boto.ec2.autoscale.AutoScaleConnection.

Parameters:region_name (str) – The name of the region to connect to.
Return type:boto.ec2.autoscale.AutoScaleConnection or None
Returns:A connection to the given region, or None if an invalid region name is given
boto.ec2.autoscale.regions()

Get all available regions for the Auto Scaling service.

Return type:list
Returns:A list of boto.RegionInfo instances
boto.ec2.autoscale.activity
class boto.ec2.autoscale.activity.Activity(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.autoscale.group
class boto.ec2.autoscale.group.AutoScalingGroup(connection=None, name=None, launch_config=None, availability_zones=None, load_balancers=None, default_cooldown=None, health_check_type=None, health_check_period=None, placement_group=None, vpc_zone_identifier=None, desired_capacity=None, min_size=None, max_size=None, **kwargs)

Creates a new AutoScalingGroup with the specified name.

You must not have already used up your entire quota of AutoScalingGroups in order for this call to be successful. Once the creation request is completed, the AutoScalingGroup is ready to be used in other calls.

Parameters:
  • name (str) – Name of autoscaling group (required).
  • availability_zones (list) – List of availability zones (required).
  • default_cooldown (int) – Number of seconds after a Scaling Activity completes before any further scaling activities can start.
  • desired_capacity (int) – The desired capacity for the group.
  • health_check_period (str) – Length of time in seconds after a new EC2 instance comes into service that Auto Scaling starts checking its health.
  • health_check_type (str) – The service you want the health status from, Amazon EC2 or Elastic Load Balancer.
  • launch_config (str or LaunchConfiguration) – Name of launch configuration (required).
  • load_balancers (list) – List of load balancers.
  • max_size (int) – Maximum size of group (required).
  • min_size (int) – Minimum size of group (required).
  • placement_group (str) – Physical location of your cluster placement group created in Amazon EC2.
  • vpc_zone_identifier (str) – The subnet identifier of the Virtual Private Cloud.
Return type:

boto.ec2.autoscale.group.AutoScalingGroup

Returns:

An autoscale group.

cooldown
delete()

Delete this auto-scaling group if no instances attached or no scaling activities in progress.

endElement(name, value, connection)
get_activities(activity_ids=None, max_records=50)

Get all activities for this group.

resume_processes(scaling_processes=None)

Resumes Auto Scaling processes for an Auto Scaling group.

set_capacity(capacity)

Set the desired capacity for the group.

shutdown_instances()

Convenience method which shuts down all instances associated with this group.

startElement(name, attrs, connection)
suspend_processes(scaling_processes=None)

Suspends Auto Scaling processes for an Auto Scaling group.

update()

Sync local changes with AutoScaling group.

class boto.ec2.autoscale.group.AutoScalingGroupMetric(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.autoscale.group.EnabledMetric(connection=None, metric=None, granularity=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.autoscale.group.ProcessType(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.autoscale.group.SuspendedProcess(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.autoscale.instance
class boto.ec2.autoscale.instance.Instance(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.autoscale.launchconfig
class boto.ec2.autoscale.launchconfig.BlockDeviceMapping(connection=None, device_name=None, virtual_name=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.autoscale.launchconfig.Ebs(connection=None, snapshot_id=None, volume_size=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.autoscale.launchconfig.InstanceMonitoring(connection=None, enabled='false')
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.autoscale.launchconfig.LaunchConfiguration(connection=None, name=None, image_id=None, key_name=None, security_groups=None, user_data=None, instance_type='m1.small', kernel_id=None, ramdisk_id=None, block_device_mappings=None, instance_monitoring=False)

A launch configuration.

Parameters:
  • name (str) – Name of the launch configuration to create.
  • image_id (str) – Unique ID of the Amazon Machine Image (AMI) which was assigned during registration.
  • key_name (str) – The name of the EC2 key pair.
  • security_groups (list) – Names of the security groups with which to associate the EC2 instances.
  • user_data (str) – The user data available to launched EC2 instances.
  • instance_type (str) – The instance type
  • kernel_id (str) – Kernel id for instance
  • ramdisk_id (str) – RAM disk id for instance
  • block_device_mappings (list) – Specifies how block devices are exposed for instances
  • instance_monitoring (bool) – Whether instances in group are launched with detailed monitoring.
delete()

Delete this launch configuration.

endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.autoscale.policy
class boto.ec2.autoscale.policy.AdjustmentType(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.autoscale.policy.Alarm(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.autoscale.policy.MetricCollectionTypes(connection=None)
class BaseType(connection)
arg = ''
endElement(name, value, connection)
startElement(name, attrs, connection)
class MetricCollectionTypes.Granularity(connection)
arg = 'Granularity'
class MetricCollectionTypes.Metric(connection)
arg = 'Metric'
MetricCollectionTypes.endElement(name, value, connection)
MetricCollectionTypes.startElement(name, attrs, connection)
class boto.ec2.autoscale.policy.ScalingPolicy(connection=None, **kwargs)

Scaling Policy

Parameters:
  • name (str) – Name of scaling policy.
  • adjustment_type (str) – Specifies the type of adjustment. Valid values are ChangeInCapacity, ExactCapacity and PercentChangeInCapacity.
  • as_name (str or int) – Name or ARN of the Auto Scaling Group.
  • scaling_adjustment (int) – Value of adjustment (type specified in adjustment_type).
  • cooldown (int) – Time (in seconds) before Alarm related Scaling Activities can start after the previous Scaling Activity ends.
delete()
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.autoscale.request
class boto.ec2.autoscale.request.Request(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.autoscale.scheduled
class boto.ec2.autoscale.scheduled.ScheduledUpdateGroupAction(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.buyreservation
class boto.ec2.buyreservation.BuyReservation
get(params)
get_instance_type(params)
get_quantity(params)
get_region(params)
get_zone(params)
boto.ec2.cloudwatch

This module provides an interface to the Elastic Compute Cloud (EC2) CloudWatch service from AWS.

The 5 Minute How-To Guide

First, make sure you have something to monitor. You can either create a LoadBalancer or enable monitoring on an existing EC2 instance. To enable monitoring, you can either call the monitor_instance method on the EC2Connection object or call the monitor method on the Instance object.

It takes a while for the monitoring data to start accumulating but once it does, you can do this:

>>> import boto
>>> c = boto.connect_cloudwatch()
>>> metrics = c.list_metrics()
>>> metrics
[Metric:NetworkIn,
 Metric:NetworkOut,
 Metric:NetworkOut(InstanceType,m1.small),
 Metric:NetworkIn(InstanceId,i-e573e68c),
 Metric:CPUUtilization(InstanceId,i-e573e68c),
 Metric:DiskWriteBytes(InstanceType,m1.small),
 Metric:DiskWriteBytes(ImageId,ami-a1ffb63),
 Metric:NetworkOut(ImageId,ami-a1ffb63),
 Metric:DiskWriteOps(InstanceType,m1.small),
 Metric:DiskReadBytes(InstanceType,m1.small),
 Metric:DiskReadOps(ImageId,ami-a1ffb63),
 Metric:CPUUtilization(InstanceType,m1.small),
 Metric:NetworkIn(ImageId,ami-a1ffb63),
 Metric:DiskReadOps(InstanceType,m1.small),
 Metric:DiskReadBytes,
 Metric:CPUUtilization,
 Metric:DiskWriteBytes(InstanceId,i-e573e68c),
 Metric:DiskWriteOps(InstanceId,i-e573e68c),
 Metric:DiskWriteOps,
 Metric:DiskReadOps,
 Metric:CPUUtilization(ImageId,ami-a1ffb63),
 Metric:DiskReadOps(InstanceId,i-e573e68c),
 Metric:NetworkOut(InstanceId,i-e573e68c),
 Metric:DiskReadBytes(ImageId,ami-a1ffb63),
 Metric:DiskReadBytes(InstanceId,i-e573e68c),
 Metric:DiskWriteBytes,
 Metric:NetworkIn(InstanceType,m1.small),
 Metric:DiskWriteOps(ImageId,ami-a1ffb63)]

The list_metrics call will return a list of all of the available metrics that you can query against. Each entry in the list is a Metric object. As you can see from the list above, some of the metrics are generic metrics and some have Dimensions associated with them (e.g. InstanceType=m1.small). The Dimension can be used to refine your query. So, for example, I could query the metric Metric:CPUUtilization which would create the desired statistic by aggregating cpu utilization data across all sources of information available or I could refine that by querying the metric Metric:CPUUtilization(InstanceId,i-e573e68c) which would use only the data associated with the instance identified by the instance ID i-e573e68c.

Because I’m only monitoring a single instance in this example, the set of metrics available to me is fairly limited. If I were monitoring many instances, using many different instance types and AMIs, and also several load balancers, the list of available metrics would grow considerably.

Once you have the list of available metrics, you can actually query the CloudWatch system for that metric. Let’s choose the CPU utilization metric for our instance.

>>> m = metrics[5]
>>> m
Metric:CPUUtilization(InstanceId,i-e573e68c)

The Metric object has a query method that lets us actually perform the query against the collected data in CloudWatch. To call that, we need a start time and end time to control the time span of data that we are interested in. For this example, let’s say we want the data for the previous hour:

>>> import datetime
>>> end = datetime.datetime.now()
>>> start = end - datetime.timedelta(hours=1)

We also need to supply the Statistic that we want reported and the Units to use for the results. The Statistic can be one of these values:

[‘Minimum’, ‘Maximum’, ‘Sum’, ‘Average’, ‘SampleCount’]

And Units must be one of the following:

[‘Seconds’, ‘Percent’, ‘Bytes’, ‘Bits’, ‘Count’, ‘Bytes/Second’, ‘Bits/Second’, ‘Count/Second’]

The query method also takes an optional parameter, period. This parameter controls the granularity (in seconds) of the data returned. The smallest period is 60 seconds and the value must be a multiple of 60 seconds. So, let’s ask for the average as a percent:

>>> datapoints = m.query(start, end, 'Average', 'Percent')
>>> len(datapoints)
60

Our period was 60 seconds and our duration was one hour, so we should get 60 data points back, and we can see that we did. Each element in the datapoints list is a Datapoint object, which is a simple subclass of a Python dict object. Each Datapoint object contains all of the information available about that particular data point.

>>> d = datapoints[0]
>>> d
{u'Average': 0.0,
 u'SampleCount': 1.0,
 u'Timestamp': u'2009-05-21T19:55:00Z',
 u'Unit': u'Percent'}

My server obviously isn’t very busy right now!

class boto.ec2.cloudwatch.CloudWatchConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/')

Init method to create a new connection to EC2 Monitoring Service.

Note: The host argument is overridden by the host specified in the boto configuration file.

APIVersion = '2010-08-01'
DefaultRegionEndpoint = 'monitoring.amazonaws.com'
DefaultRegionName = 'us-east-1'
build_list_params(params, items, label)
create_alarm(alarm)

Creates or updates an alarm and associates it with the specified Amazon CloudWatch metric. Optionally, this operation can associate one or more Amazon Simple Notification Service resources with the alarm.

When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA. The alarm is evaluated and its StateValue is set appropriately. Any actions associated with the StateValue are then executed.

When updating an existing alarm, its StateValue is left unchanged.

Parameters:alarm (boto.ec2.cloudwatch.alarm.MetricAlarm) – MetricAlarm object.
delete_alarms(alarms)

Deletes all specified alarms. In the event of an error, no alarms are deleted.

Parameters:alarms (list) – List of alarm names.
describe_alarm_history(alarm_name=None, start_date=None, end_date=None, max_records=None, history_item_type=None, next_token=None)

Retrieves history for the specified alarm. Filter alarms by date range or item type. If an alarm name is not specified, Amazon CloudWatch returns histories for all of the owner’s alarms.

Amazon CloudWatch retains the history of deleted alarms for a period of six weeks. If an alarm has been deleted, its history can still be queried.

Parameters:
  • alarm_name (string) – The name of the alarm.
  • start_date (datetime) – The starting date to retrieve alarm history.
  • end_date (datetime) – The ending date to retrieve alarm history.
  • history_item_type (string) – The type of alarm histories to retrieve (ConfigurationUpdate | StateUpdate | Action)
  • max_records (int) – The maximum number of alarm descriptions to retrieve.
  • next_token (string) – The token returned by a previous call to indicate that there is more data.

Return type:list

describe_alarms(action_prefix=None, alarm_name_prefix=None, alarm_names=None, max_records=None, state_value=None, next_token=None)

Retrieves alarms with the specified names. If no name is specified, all alarms for the user are returned. Alarms can be retrieved by using only a prefix for the alarm name, the alarm state, or a prefix for any action.

Parameters:
  • action_prefix – The action name prefix.
  • alarm_name_prefix (string) – The alarm name prefix. AlarmNames cannot be specified if this parameter is specified.
  • alarm_names (list) – A list of alarm names to retrieve information for.
  • max_records (int) – The maximum number of alarm descriptions to retrieve.
  • state_value (string) – The state value to be used in matching alarms.
  • next_token (string) – The token returned by a previous call to indicate that there is more data.

Return type:list

describe_alarms_for_metric(metric_name, namespace, period=None, statistic=None, dimensions=None, unit=None)

Retrieves all alarms for a single metric. Specify a statistic, period, or unit to filter the set of alarms further.

Parameters:
  • metric_name (string) – The name of the metric
  • namespace (string) – The namespace of the metric.
  • period (int) – The period in seconds over which the statistic is applied.
  • statistic (string) – The statistic for the metric.

Return type:list

disable_alarm_actions(alarm_names)

Disables actions for the specified alarms.

Parameters:alarms (list) – List of alarm names.
enable_alarm_actions(alarm_names)

Enables actions for the specified alarms.

Parameters:alarms (list) – List of alarm names.
get_metric_statistics(period, start_time, end_time, metric_name, namespace, statistics, dimensions=None, unit=None)

Get time-series data for one or more statistics of a given metric.

Parameters:
  • period (integer) – The granularity, in seconds, of the returned datapoints. Period must be at least 60 seconds and must be a multiple of 60. The default value is 60.
  • start_time (datetime) – The time stamp to use for determining the first datapoint to return. The value specified is inclusive; results include datapoints with the time stamp specified.
  • end_time (datetime) – The time stamp to use for determining the last datapoint to return. The value specified is exclusive; results will include datapoints up to the time stamp specified.
  • metric_name (string) – The metric name.
Return type:

list

list_metrics(next_token=None)

Returns a list of the valid metrics for which there is recorded data available.

Parameters:next_token (string) – A maximum of 500 metrics will be returned at one time. If more results are available, the ResultSet returned will contain a non-Null next_token attribute. Passing that token as a parameter to list_metrics will retrieve the next page of metrics.
put_metric_alarm(alarm)

Creates or updates an alarm and associates it with the specified Amazon CloudWatch metric. Optionally, this operation can associate one or more Amazon Simple Notification Service resources with the alarm.

When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA. The alarm is evaluated and its StateValue is set appropriately. Any actions associated with the StateValue are then executed.

When updating an existing alarm, its StateValue is left unchanged.

Parameters:alarm (boto.ec2.cloudwatch.alarm.MetricAlarm) – MetricAlarm object.
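
A minimal sketch of building and storing an alarm, assuming the MetricAlarm constructor accepts keyword arguments such as name, metric, namespace, statistic, comparison, threshold, period, evaluation_periods and dimensions (the instance id is hypothetical):

>>> import boto
>>> from boto.ec2.cloudwatch.alarm import MetricAlarm
>>> cw = boto.connect_cloudwatch()
>>> # Alarm when average CPU over two 5-minute periods exceeds 80%
>>> alarm = MetricAlarm(name='cpu-high', metric='CPUUtilization',
...     namespace='AWS/EC2', statistic='Average', comparison='>',
...     threshold=80.0, period=300, evaluation_periods=2,
...     dimensions={'InstanceId': 'i-12345678'})
>>> cw.put_metric_alarm(alarm)
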
put_metric_data(namespace, name, value=None, timestamp=None, unit=None, dimensions=None, statistics=None)

Publishes metric data points to Amazon CloudWatch. Amazon Cloudwatch associates the data points with the specified metric. If the specified metric does not exist, Amazon CloudWatch creates the metric.

Parameters:
  • namespace (string) – The namespace of the metric.
  • name (string) – The name of the metric.
  • value (int) – The value for the metric.
  • timestamp (datetime) – The time stamp used for the metric. If not specified, the default value is set to the time the metric data was received.
  • unit (string) – The unit of the metric. Valid Values: Seconds | Microseconds | Milliseconds | Bytes | Kilobytes | Megabytes | Gigabytes | Terabytes | Bits | Kilobits | Megabits | Gigabits | Terabits | Percent | Count | Bytes/Second | Kilobytes/Second | Megabytes/Second | Gigabytes/Second | Terabytes/Second | Bits/Second | Kilobits/Second | Megabits/Second | Gigabits/Second | Terabits/Second | Count/Second | None
  • dimensions (dict) – Add extra name value pairs to associate with the metric, i.e.: {‘name1’: value1, ‘name2’: value2}
  • statistics (dict) –

    Use a statistic set instead of a value, for example {‘maximum’: 30, ‘minimum’: 1,

    ‘samplecount’: 100, ‘sum’: 10000}
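
A short sketch of publishing a custom metric, once with a single value and once with a pre-aggregated statistic set (the namespace and metric name are hypothetical):

>>> import boto
>>> cw = boto.connect_cloudwatch()
>>> # Single datapoint
>>> cw.put_metric_data(namespace='MyApp', name='RequestLatency',
...     value=42, unit='Milliseconds')
>>> # Pre-aggregated statistic set instead of a value
>>> cw.put_metric_data(namespace='MyApp', name='RequestLatency',
...     unit='Milliseconds',
...     statistics={'maximum': 120, 'minimum': 5,
...                 'samplecount': 100, 'sum': 4200})
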
set_alarm_state(alarm_name, state_reason, state_value, state_reason_data=None)

Temporarily sets the state of an alarm. When the updated StateValue differs from the previous value, the action configured for the appropriate state is invoked. This is not a permanent change. The next periodic alarm check (in about a minute) will set the alarm to its actual state.

Parameters:
  • alarm_name (string) – Descriptive name for alarm.
  • state_reason (string) – Human readable reason.
  • state_value (string) – OK | ALARM | INSUFFICIENT_DATA
  • state_reason_data (string) – Reason string (will be jsonified).
update_alarm(alarm)

Creates or updates an alarm and associates it with the specified Amazon CloudWatch metric. Optionally, this operation can associate one or more Amazon Simple Notification Service resources with the alarm.

When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA. The alarm is evaluated and its StateValue is set appropriately. Any actions associated with the StateValue are then executed.

When updating an existing alarm, its StateValue is left unchanged.

Parameters:alarm (boto.ec2.cloudwatch.alarm.MetricAlarm) – MetricAlarm object.
boto.ec2.cloudwatch.connect_to_region(region_name, **kw_params)

Given a valid region name, return a boto.ec2.cloudwatch.CloudWatchConnection.

Parameters:region_name (str) – The name of the region to connect to.
Return type:boto.ec2.cloudwatch.CloudWatchConnection or None
Returns:A connection to the given region, or None if an invalid region name is given
boto.ec2.cloudwatch.regions()

Get all available regions for the CloudWatch service.

Return type:list
Returns:A list of boto.RegionInfo instances
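
For example, to list the available CloudWatch regions and open a connection to one of them (a sketch; 'us-west-1' is used purely as an illustration):

>>> from boto.ec2 import cloudwatch
>>> for region in cloudwatch.regions():
...     print region.name
>>> cw = cloudwatch.connect_to_region('us-west-1')
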
boto.ec2.cloudwatch.datapoint
class boto.ec2.cloudwatch.datapoint.Datapoint(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.cloudwatch.metric
class boto.ec2.cloudwatch.metric.Dimensions
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.cloudwatch.metric.Metric(connection=None)
Statistics = ['Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount']
Units = ['Seconds', 'Percent', 'Bytes', 'Bits', 'Count', 'Bytes/Second', 'Bits/Second', 'Count/Second']
describe_alarms(period=None, statistic=None, dimensions=None, unit=None)
endElement(name, value, connection)
query(start_time, end_time, statistic, unit=None, period=60)
startElement(name, attrs, connection)
boto.ec2.connection

Represents a connection to the EC2 service.

class boto.ec2.connection.EC2Connection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, host=None, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', api_version=None)

Init method to create a new connection to EC2.

Note: The host argument is overridden by the host specified in the boto configuration file.
APIVersion = '2011-01-01'
DefaultRegionEndpoint = 'ec2.amazonaws.com'
DefaultRegionName = 'us-east-1'
ResponseError

alias of EC2ResponseError

allocate_address()

Allocate a new Elastic IP address and associate it with your account.

Return type:boto.ec2.address.Address
Returns:The newly allocated Address
associate_address(instance_id, public_ip)

Associate an Elastic IP address with a currently running instance.

Parameters:
  • instance_id (string) – The ID of the instance
  • public_ip (string) – The public IP address
Return type:

bool

Returns:

True if successful
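
A minimal sketch of allocating an Elastic IP and attaching it to a running instance (the instance id is hypothetical; boto.connect_ec2() is assumed for the connection):

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> address = ec2.allocate_address()
>>> print address.public_ip
>>> ec2.associate_address('i-12345678', address.public_ip)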

attach_volume(volume_id, instance_id, device)

Attach an EBS volume to an EC2 instance.

Parameters:
  • volume_id (str) – The ID of the EBS volume to be attached.
  • instance_id (str) – The ID of the EC2 instance to which it will be attached.
  • device (str) – The device on the instance through which the volume will be exposed (e.g. /dev/sdh)
Return type:

bool

Returns:

True if successful

authorize_security_group(group_name, src_security_group_name=None, src_security_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None)

Add a new rule to an existing security group. You need to pass in either src_security_group_name and src_security_group_owner_id OR ip_protocol, from_port, to_port, and cidr_ip. In other words, either you are authorizing another group or you are authorizing some ip-based rule.

Parameters:
  • group_name (string) – The name of the security group you are adding the rule to.
  • src_security_group_name (string) – The name of the security group you are granting access to.
  • src_security_group_owner_id (string) – The ID of the owner of the security group you are granting access to.
  • ip_protocol (string) – Either tcp | udp | icmp
  • from_port (int) – The beginning port number you are enabling
  • to_port (int) – The ending port number you are enabling
  • cidr_ip (string) – The CIDR block you are providing access to. See http://goo.gl/Yj5QC
Return type:

bool

Returns:

True if successful.
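
As a sketch, authorizing SSH from a single CIDR block into a hypothetical security group named 'web' (the CIDR block is a placeholder):

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> ec2.authorize_security_group('web', ip_protocol='tcp',
...     from_port=22, to_port=22, cidr_ip='203.0.113.0/24')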

authorize_security_group_deprecated(group_name, src_security_group_name=None, src_security_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None)
NOTE: This method uses the old-style request parameters
that did not allow a port to be specified when authorizing a group.
Parameters:
  • group_name (string) – The name of the security group you are adding the rule to.
  • src_security_group_name (string) – The name of the security group you are granting access to.
  • src_security_group_owner_id (string) – The ID of the owner of the security group you are granting access to.
  • ip_protocol (string) – Either tcp | udp | icmp
  • from_port (int) – The beginning port number you are enabling
  • to_port (int) – The ending port number you are enabling
  • cidr_ip (string) – The CIDR block you are providing access to. See http://goo.gl/Yj5QC
Return type:

bool

Returns:

True if successful.

build_filter_params(params, filters)
build_tag_param_list(params, tags)
bundle_instance(instance_id, s3_bucket, s3_prefix, s3_upload_policy)

Bundle Windows instance.

Parameters:
  • instance_id (string) – The instance id
  • s3_bucket (string) – The bucket in which the AMI should be stored.
  • s3_prefix (string) – The beginning of the file name for the AMI.
  • s3_upload_policy (string) – Base64 encoded policy that specifies condition and permissions for Amazon EC2 to upload the user’s image into Amazon S3.
cancel_bundle_task(bundle_id)

Cancel a previously submitted bundle task

Parameters:bundle_id (string) – The identifier of the bundle task to cancel.
cancel_spot_instance_requests(request_ids)

Cancel the specified Spot Instance Requests.

Parameters:request_ids (list) – A list of strings of the Request IDs to terminate
Return type:list
Returns:A list of the instances terminated
confirm_product_instance(product_code, instance_id)
create_image(instance_id, name, description=None, no_reboot=False)

Will create an AMI from the instance in the running or stopped state.

Parameters:
  • instance_id (string) – the ID of the instance to image.
  • name (string) – The name of the new image
  • description (string) – An optional human-readable string describing the contents and purpose of the AMI.
  • no_reboot (bool) – An optional flag indicating that the bundling process should not attempt to shutdown the instance before bundling. If this flag is True, the responsibility of maintaining file system integrity is left to the owner of the instance.
Return type:

string

Returns:

The new image id

create_key_pair(key_name)

Create a new key pair for your account. This will create the key pair within the region you are currently connected to.

Parameters:key_name (string) – The name of the new keypair
Return type:boto.ec2.keypair.KeyPair
Returns:The newly created boto.ec2.keypair.KeyPair. The material attribute of the new KeyPair object will contain the unencrypted PEM encoded RSA private key.
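
A quick sketch of creating a key pair and writing the private key material to disk; the key name and directory are hypothetical, and KeyPair.save() is assumed to write a .pem file into the given directory:

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> key = ec2.create_key_pair('mykey')
>>> key.save('/path/to/keydir')
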
create_placement_group(name, strategy='cluster')

Create a new placement group for your account. This will create the placement group within the region you are currently connected to.

Parameters:
  • name (string) – The name of the new placement group
  • strategy (string) – The placement strategy of the new placement group. Currently, the only acceptable value is “cluster”.
Return type:

boto.ec2.placementgroup.PlacementGroup

Returns:

The newly created boto.ec2.placementgroup.PlacementGroup.

create_security_group(name, description)

Create a new security group for your account. This will create the security group within the region you are currently connected to.

Parameters:
  • name (string) – The name of the new security group
  • description (string) – The description of the new security group
Return type:

boto.ec2.securitygroup.SecurityGroup

Returns:

The newly created boto.ec2.securitygroup.SecurityGroup.

create_snapshot(volume_id, description=None)

Create a snapshot of an existing EBS Volume.

Parameters:
  • volume_id (str) – The ID of the volume to snapshot.
  • description (str) – A description of the snapshot. Limited to 255 characters.
Return type:

bool

Returns:

True if successful

create_spot_datafeed_subscription(bucket, prefix)

Create a spot instance datafeed subscription for this account.

Parameters:
  • bucket (str or unicode) – The name of the bucket where spot instance data will be written. The account issuing this request must have FULL_CONTROL access to the bucket specified in the request.
  • prefix (str or unicode) – An optional prefix that will be pre-pended to all data files written to the bucket.
Return type:

boto.ec2.spotdatafeedsubscription.SpotDatafeedSubscription

Returns:

The datafeed subscription object or None

create_tags(resource_ids, tags)

Create new metadata tags for the specified resource ids.

Parameters:
  • resource_ids (list) – List of strings
  • tags (dict) – A dictionary containing the name/value pairs
create_volume(size, zone, snapshot=None)

Create a new EBS Volume.

Parameters:
  • size (int) – The size of the new volume, in GiB
  • zone (string or boto.ec2.zone.Zone) – The availability zone in which the Volume will be created.
  • snapshot (string or boto.ec2.snapshot.Snapshot) – The snapshot from which the new Volume will be created.
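
Putting a few of the volume calls together, a sketch of creating, attaching and snapshotting an EBS volume (the availability zone, instance id and device are hypothetical):

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> volume = ec2.create_volume(10, 'us-east-1d')
>>> ec2.attach_volume(volume.id, 'i-12345678', '/dev/sdh')
>>> ec2.create_snapshot(volume.id, 'backup of /dev/sdh')
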
delete_key_pair(key_name)

Delete a key pair from your account.

Parameters:key_name (string) – The name of the keypair to delete
delete_placement_group(name)

Delete a placement group from your account.

Parameters:name (string) – The name of the placement group to delete
delete_security_group(name)

Delete a security group from your account.

Parameters:name (string) – The name of the security group to delete
delete_snapshot(snapshot_id)
delete_spot_datafeed_subscription()

Delete the current spot instance data feed subscription associated with this account

Return type:bool
Returns:True if successful
delete_tags(resource_ids, tags)

Delete metadata tags for the specified resource ids.

Parameters:
  • resource_ids (list) – List of strings
  • tags (dict or list) – Either a dictionary containing name/value pairs or a list containing just tag names. If you pass in a dictionary, the values must match the actual tag values or the tag will not be deleted.
delete_volume(volume_id)

Delete an EBS volume.

Parameters:volume_id (str) – The ID of the volume to be deleted.
Return type:bool
Returns:True if successful
deregister_image(image_id, delete_snapshot=False)

Unregister an AMI.

Parameters:
  • image_id (string) – the ID of the Image to unregister
  • delete_snapshot (bool) – Set to True if we should delete the snapshot associated with an EBS volume mounted at /dev/sda1
Return type:

bool

Returns:

True if successful

detach_volume(volume_id, instance_id=None, device=None, force=False)

Detach an EBS volume from an EC2 instance.

Parameters:
  • volume_id (str) – The ID of the EBS volume to be detached.
  • instance_id (str) – The ID of the EC2 instance from which it will be detached.
  • device (str) – The device on the instance through which the volume is exposed (e.g. /dev/sdh)
  • force (bool) – Forces detachment if the previous detachment attempt did not occur cleanly. This option can lead to data loss or a corrupted file system. Use this option only as a last resort to detach a volume from a failed instance. The instance will not have an opportunity to flush file system caches nor file system meta data. If you use this option, you must perform file system check and repair procedures.
Return type:

bool

Returns:

True if successful

disassociate_address(public_ip)

Disassociate an Elastic IP address from a currently running instance.

Parameters:public_ip (string) – The public IP address
Return type:bool
Returns:True if successful
get_all_addresses(addresses=None, filters=None)

Get all EIPs associated with the current credentials.

Parameters:
  • addresses (list) – Optional list of addresses. If this list is present, only the Addresses associated with these addresses will be returned.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list of boto.ec2.address.Address

Returns:

The requested Address objects

get_all_bundle_tasks(bundle_ids=None, filters=None)

Retrieve current bundling tasks. If no bundle id is specified, all tasks are retrieved.

Parameters:
  • bundle_ids (list) – A list of strings containing identifiers for previously created bundling tasks.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
get_all_images(image_ids=None, owners=None, executable_by=None, filters=None)

Retrieve all the EC2 images available on your account.

Parameters:
  • image_ids (list) – A list of strings with the image IDs wanted
  • owners (list) – A list of owner IDs
  • executable_by (list) – Returns AMIs for which the specified user ID has explicit launch permissions
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list

Returns:

A list of boto.ec2.image.Image

get_all_instances(instance_ids=None, filters=None)

Retrieve all the instances associated with your account.

Parameters:
  • instance_ids (list) – A list of strings of instance IDs
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list

Returns:

A list of boto.ec2.instance.Reservation
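
A sketch of iterating over the reservations returned by get_all_instances, here filtered down to running instances (the filter name follows the EC2 API's instance-state-name filter):

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> reservations = ec2.get_all_instances(
...     filters={'instance-state-name': 'running'})
>>> for reservation in reservations:
...     for instance in reservation.instances:
...         print instance.id, instance.state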

get_all_kernels(kernel_ids=None, owners=None)

Retrieve all the EC2 kernels available on your account. Constructs a filter to allow the processing to happen server side.

Parameters:
  • kernel_ids (list) – A list of strings with the kernel IDs wanted
  • owners (list) – A list of owner IDs
Return type:

list

Returns:

A list of boto.ec2.image.Image

get_all_key_pairs(keynames=None, filters=None)

Get all key pairs associated with your account.

Parameters:
  • keynames (list) – A list of the names of keypairs to retrieve. If not provided, all key pairs will be returned.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list

Returns:

A list of boto.ec2.keypair.KeyPair

get_all_placement_groups(groupnames=None, filters=None)

Get all placement groups associated with your account in a region.

Parameters:
  • groupnames (list) – A list of the names of placement groups to retrieve. If not provided, all placement groups will be returned.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list

Returns:

A list of boto.ec2.placementgroup.PlacementGroup

get_all_ramdisks(ramdisk_ids=None, owners=None)

Retrieve all the EC2 ramdisks available on your account. Constructs a filter to allow the processing to happen server side.

Parameters:
  • ramdisk_ids (list) – A list of strings with the ramdisk IDs wanted
  • owners (list) – A list of owner IDs
Return type:

list

Returns:

A list of boto.ec2.image.Image

get_all_regions(region_names=None, filters=None)

Get all available regions for the EC2 service.

Parameters:
  • region_names (list of str) – Names of regions to limit output
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list

Returns:

A list of boto.ec2.regioninfo.RegionInfo

get_all_reserved_instances(reserved_instances_id=None, filters=None)

Describes the Reserved Instances that you have purchased.

Parameters:
  • reserved_instances_id (list) – A list of the reserved instance ids that will be returned. If not provided, all reserved instances will be returned.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list

Returns:

A list of boto.ec2.reservedinstance.ReservedInstance

get_all_reserved_instances_offerings(reserved_instances_id=None, instance_type=None, availability_zone=None, product_description=None, filters=None)

Describes Reserved Instance offerings that are available for purchase.

Parameters:
  • reserved_instances_id (str) – Displays Reserved Instances with the specified offering IDs.
  • instance_type (str) – Displays Reserved Instances of the specified instance type.
  • availability_zone (str) – Displays Reserved Instances within the specified Availability Zone.
  • product_description (str) – Displays Reserved Instances with the specified product description.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list

Returns:

A list of boto.ec2.reservedinstance.ReservedInstancesOffering

get_all_security_groups(groupnames=None, filters=None)

Get all security groups associated with your account in a region.

Parameters:
  • groupnames (list) – A list of the names of security groups to retrieve. If not provided, all security groups will be returned.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list

Returns:

A list of boto.ec2.securitygroup.SecurityGroup

get_all_snapshots(snapshot_ids=None, owner=None, restorable_by=None, filters=None)

Get all EBS Snapshots associated with the current credentials.

Parameters:
  • snapshot_ids (list) – Optional list of snapshot ids. If this list is present, only the Snapshots associated with these snapshot ids will be returned.
  • owner (str) –

    If present, only the snapshots owned by the specified user will be returned. Valid values are:

    • self
    • amazon
    • AWS Account ID
  • restorable_by (str) – If present, only the snapshots that are restorable by the specified account id will be returned.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list of boto.ec2.snapshot.Snapshot

Returns:

The requested Snapshot objects

get_all_spot_instance_requests(request_ids=None, filters=None)

Retrieve all the spot instances requests associated with your account.

Parameters:
  • request_ids (list) – A list of strings of spot instance request IDs
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list

Returns:

A list of boto.ec2.spotinstancerequest.SpotInstanceRequest

get_all_tags(filters=None)

Retrieve all the metadata tags associated with your account.

Parameters:filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:dict
Returns:A dictionary containing metadata tags
get_all_volumes(volume_ids=None, filters=None)

Get all Volumes associated with the current credentials.

Parameters:
  • volume_ids (list) – Optional list of volume ids. If this list is present, only the volumes associated with these volume ids will be returned.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list of boto.ec2.volume.Volume

Returns:

The requested Volume objects

get_all_zones(zones=None, filters=None)

Get all Availability Zones associated with the current region.

Parameters:
  • zones (list) – Optional list of zones. If this list is present, only the Zones associated with these zone names will be returned.
  • filters (dict) – Optional filters that can be used to limit the results returned. Filters are provided in the form of a dictionary consisting of filter names as the key and filter values as the value. The set of allowable filter names/values is dependent on the request being performed. Check the EC2 API guide for details.
Return type:

list of boto.ec2.zone.Zone

Returns:

The requested Zone objects

get_console_output(instance_id)

Retrieves the console output for the specified instance.

Parameters:instance_id (string) – The instance ID of a running instance on the cloud.
Return type:boto.ec2.instance.ConsoleOutput
Returns:The console output as a ConsoleOutput object
get_image(image_id)

Shortcut method to retrieve a specific image (AMI).

Parameters:image_id (string) – the ID of the Image to retrieve
Return type:boto.ec2.image.Image
Returns:The EC2 Image specified or None if the image is not found
get_image_attribute(image_id, attribute='launchPermission')

Gets an attribute from an image.

Parameters:
  • image_id (string) – The Amazon image id for which you want info about
  • attribute (string) – The attribute you need information about. Valid choices are: launchPermission | productCodes | blockDeviceMapping
Return type:

boto.ec2.image.ImageAttribute

Returns:

An ImageAttribute object representing the value of the attribute requested

get_instance_attribute(instance_id, attribute)

Gets an attribute from an instance.

Parameters:
  • instance_id (string) – The Amazon id of the instance
  • attribute (string) –

    The attribute you need information about. Valid choices are:

    • instanceType
    • kernel
    • ramdisk
    • userData
    • disableApiTermination
    • instanceInitiatedShutdownBehavior
    • rootDeviceName
    • blockDeviceMapping
Return type:

boto.ec2.image.InstanceAttribute

Returns:

An InstanceAttribute object representing the value of the attribute requested

get_key_pair(keyname)

Convenience method to retrieve a specific keypair (KeyPair).

Parameters:keyname (string) – The name of the keypair to retrieve
Return type:boto.ec2.keypair.KeyPair
Returns:The KeyPair specified or None if it is not found
get_params()

Returns a dictionary containing the value of all of the keyword arguments passed when constructing this connection.

get_password_data(instance_id)

Get encrypted administrator password for a Windows instance.

Parameters:instance_id (string) – The identifier of the instance to retrieve the password for.
get_snapshot_attribute(snapshot_id, attribute='createVolumePermission')

Get information about an attribute of a snapshot. Only one attribute can be specified per call.

Parameters:
  • snapshot_id (str) – The ID of the snapshot.
  • attribute (str) –

    The requested attribute. Valid values are:

    • createVolumePermission
Return type:

list of boto.ec2.snapshotattribute.SnapshotAttribute

Returns:

The requested Snapshot attribute

get_spot_datafeed_subscription()

Return the current spot instance data feed subscription associated with this account, if any.

Return type:boto.ec2.spotdatafeedsubscription.SpotDatafeedSubscription
Returns:The datafeed subscription object or None
get_spot_price_history(start_time=None, end_time=None, instance_type=None, product_description=None)

Retrieve the recent history of spot instances pricing.

Parameters:
  • start_time (str) – An indication of how far back to provide price changes for. An ISO8601 DateTime string.
  • end_time (str) – An indication of how far forward to provide price changes for. An ISO8601 DateTime string.
  • instance_type (str) – Filter responses to a particular instance type.
  • product_description (str) – Filter responses to a particular platform. Valid values are currently: Linux
Return type:

list

Returns:

A list of tuples containing price and timestamp.

import_key_pair(key_name, public_key_material)

Imports the public key from an RSA key pair that you created with a third-party tool.

Supported formats:

  • OpenSSH public key format (e.g., the format in ~/.ssh/authorized_keys)
  • Base64 encoded DER format
  • SSH public key file format as specified in RFC4716

DSA keys are not supported. Make sure your key generator is set up to create RSA keys.

Supported lengths: 1024, 2048, and 4096.

Parameters:
  • key_name (string) – The name of the new keypair
  • public_key_material (string) – The public key. You must base64 encode the public key material before sending it to AWS.
Return type:

boto.ec2.keypair.KeyPair

Returns:

The newly created boto.ec2.keypair.KeyPair. The material attribute of the new KeyPair object will contain the unencrypted PEM encoded RSA private key.

modify_image_attribute(image_id, attribute='launchPermission', operation='add', user_ids=None, groups=None, product_codes=None)

Changes an attribute of an image.

Parameters:
  • image_id (string) – The image id you wish to change
  • attribute (string) – The attribute you wish to change
  • operation (string) – Either add or remove (this is required for changing launchPermissions)
  • user_ids (list) – The Amazon IDs of users to add/remove attributes
  • groups (list) – The groups to add/remove attributes
  • product_codes (list) – Amazon DevPay product code. Currently only one product code can be associated with an AMI. Once set, the product code cannot be changed or reset.
modify_instance_attribute(instance_id, attribute, value)

Changes an attribute of an instance

Parameters:
  • instance_id (string) – The instance id you wish to change
  • attribute (string) –

    The attribute you wish to change.

    • AttributeName - Expected value (default)
    • instanceType - A valid instance type (m1.small)
    • kernel - Kernel ID (None)
    • ramdisk - Ramdisk ID (None)
    • userData - Base64 encoded String (None)
    • disableApiTermination - Boolean (true)
    • instanceInitiatedShutdownBehavior - stop|terminate
    • rootDeviceName - device name (None)
  • value (string) – The new value for the attribute
Return type:

bool

Returns:

Whether the operation succeeded or not

modify_snapshot_attribute(snapshot_id, attribute='createVolumePermission', operation='add', user_ids=None, groups=None)

Changes an attribute of a snapshot.

Parameters:
  • snapshot_id (string) – The snapshot id you wish to change
  • attribute (string) – The attribute you wish to change. Valid values are: createVolumePermission
  • operation (string) – Either add or remove (this is required for changing snapshot permissions)
  • user_ids (list) – The Amazon IDs of users to add/remove attributes
  • groups (list) – The groups to add/remove attributes. The only valid value at this time is ‘all’.
monitor_instance(instance_id)

Deprecated Version, maintained for backward compatibility. Enable CloudWatch monitoring for the supplied instance.

Parameters:instance_id (string) – The instance id
Return type:list
Returns:A list of boto.ec2.instanceinfo.InstanceInfo
monitor_instances(instance_ids)

Enable CloudWatch monitoring for the supplied instances.

Parameters:instance_ids (list of strings) – The instance ids
Return type:list
Returns:A list of boto.ec2.instanceinfo.InstanceInfo
purchase_reserved_instance_offering(reserved_instances_offering_id, instance_count=1)

Purchase a Reserved Instance for use with your account. ** CAUTION ** This request can result in large amounts of money being charged to your AWS account. Use with caution!

Parameters:
  • reserved_instances_offering_id (string) – The offering ID of the Reserved Instance to purchase
  • instance_count (int) – The number of Reserved Instances to purchase. Default value is 1.
Return type:

boto.ec2.reservedinstance.ReservedInstance

Returns:

The newly created Reserved Instance

reboot_instances(instance_ids=None)

Reboot the specified instances.

Parameters:instance_ids (list) – A list of strings of the Instance IDs to reboot
register_image(name=None, description=None, image_location=None, architecture=None, kernel_id=None, ramdisk_id=None, root_device_name=None, block_device_map=None)

Register an image.

Parameters:
  • name (string) – The name of the AMI. Valid only for EBS-based images.
  • description (string) – The description of the AMI.
  • image_location (string) – Full path to your AMI manifest in Amazon S3 storage. Only used for S3-based AMI’s.
  • architecture (string) – The architecture of the AMI. Valid choices are: i386 | x86_64
  • kernel_id (string) – The ID of the kernel with which to launch the instances
  • root_device_name (string) – The root device name (e.g. /dev/sdh)
  • block_device_map (boto.ec2.blockdevicemapping.BlockDeviceMapping) – A BlockDeviceMapping data structure describing the EBS volumes associated with the Image.
Return type:

string

Returns:

The new image id

release_address(public_ip)

Free up an Elastic IP address

Parameters:public_ip (string) – The public IP address
Return type:bool
Returns:True if successful
request_spot_instances(price, image_id, count=1, type='one-time', valid_from=None, valid_until=None, launch_group=None, availability_zone_group=None, key_name=None, security_groups=None, user_data=None, addressing_type=None, instance_type='m1.small', placement=None, kernel_id=None, ramdisk_id=None, monitoring_enabled=False, subnet_id=None, block_device_map=None)

Request instances on the spot market at a particular price.

Parameters:
  • price (str) – The maximum price of your bid
  • image_id (string) – The ID of the image to run
  • count (int) – The number of instances requested
  • type (str) – Type of request. Can be ‘one-time’ or ‘persistent’. Default is one-time.
  • valid_from (str) – Start date of the request. An ISO8601 time string.
  • valid_until (str) – End date of the request. An ISO8601 time string.
  • launch_group (str) – If supplied, all requests will be fulfilled as a group.
  • availability_zone_group (str) – If supplied, all requests will be fulfilled within a single availability zone.
  • key_name (string) – The name of the key pair with which to launch instances
  • security_groups (list of strings) – The names of the security groups with which to associate instances
  • user_data (string) – The user data passed to the launched instances
  • instance_type (string) –

    The type of instance to run:

    • m1.small
    • m1.large
    • m1.xlarge
    • c1.medium
    • c1.xlarge
    • m2.xlarge
    • m2.2xlarge
    • m2.4xlarge
    • cc1.4xlarge
    • t1.micro
  • placement (string) – The availability zone in which to launch the instances
  • kernel_id (string) – The ID of the kernel with which to launch the instances
  • ramdisk_id (string) – The ID of the RAM disk with which to launch the instances
  • monitoring_enabled (bool) – Enable CloudWatch monitoring on the instance.
  • subnet_id (string) – The subnet ID within which to launch the instances for VPC.
  • block_device_map (boto.ec2.blockdevicemapping.BlockDeviceMapping) – A BlockDeviceMapping data structure describing the EBS volumes associated with the Image.
Return type:

list

Returns:

A list of boto.ec2.spotinstancerequest.SpotInstanceRequest objects associated with the request for machines
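
A minimal sketch of placing a one-time spot request (the AMI id, key name and bid price are hypothetical):

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> requests = ec2.request_spot_instances(price='0.05',
...     image_id='ami-12345678', count=2, type='one-time',
...     key_name='mykey', instance_type='m1.small')
>>> for request in requests:
...     print request.id, request.state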

reset_image_attribute(image_id, attribute='launchPermission')

Resets an attribute of an AMI to its default value.

Parameters:
  • image_id (string) – ID of the AMI for which an attribute will be described
  • attribute (string) – The attribute to reset
Return type:

bool

Returns:

Whether the operation succeeded or not

reset_instance_attribute(instance_id, attribute)

Resets an attribute of an instance to its default value.

Parameters:
  • instance_id (string) – ID of the instance
  • attribute (string) – The attribute to reset. Valid values are: kernel|ramdisk
Return type:

bool

Returns:

Whether the operation succeeded or not

reset_snapshot_attribute(snapshot_id, attribute='createVolumePermission')

Resets an attribute of a snapshot to its default value.

Parameters:
  • snapshot_id (string) – ID of the snapshot
  • attribute (string) – The attribute to reset
Return type:

bool

Returns:

Whether the operation succeeded or not

revoke_security_group(group_name, src_security_group_name=None, src_security_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None)

Remove an existing rule from an existing security group. You need to pass in either src_security_group_name and src_security_group_owner_id OR ip_protocol, from_port, to_port, and cidr_ip. In other words, either you are revoking another group or you are revoking some ip-based rule.

Parameters:
  • group_name (string) – The name of the security group you are removing the rule from.
  • src_security_group_name (string) – The name of the security group you are revoking access to.
  • src_security_group_owner_id (string) – The ID of the owner of the security group you are revoking access to.
  • ip_protocol (string) – Either tcp | udp | icmp
  • from_port (int) – The beginning port number you are disabling
  • to_port (int) – The ending port number you are disabling
  • cidr_ip (string) – The CIDR block you are revoking access to. See http://goo.gl/Yj5QC
Return type:

bool

Returns:

True if successful.

revoke_security_group_deprecated(group_name, src_security_group_name=None, src_security_group_owner_id=None, ip_protocol=None, from_port=None, to_port=None, cidr_ip=None)
NOTE: This method uses the old-style request parameters
that did not allow a port to be specified when authorizing a group.

Remove an existing rule from an existing security group. You need to pass in either src_security_group_name and src_security_group_owner_id OR ip_protocol, from_port, to_port, and cidr_ip. In other words, either you are revoking another group or you are revoking some ip-based rule.

Parameters:
  • group_name (string) – The name of the security group you are removing the rule from.
  • src_security_group_name (string) – The name of the security group you are revoking access to.
  • src_security_group_owner_id (string) – The ID of the owner of the security group you are revoking access to.
  • ip_protocol (string) – Either tcp | udp | icmp
  • from_port (int) – The beginning port number you are disabling
  • to_port (int) – The ending port number you are disabling
  • cidr_ip (string) – The CIDR block you are revoking access to. See http://goo.gl/Yj5QC
Return type:

bool

Returns:

True if successful.

run_instances(image_id, min_count=1, max_count=1, key_name=None, security_groups=None, user_data=None, addressing_type=None, instance_type='m1.small', placement=None, kernel_id=None, ramdisk_id=None, monitoring_enabled=False, subnet_id=None, block_device_map=None, disable_api_termination=False, instance_initiated_shutdown_behavior=None, private_ip_address=None, placement_group=None, client_token=None, security_group_ids=None)

Runs an image on EC2.

Parameters:
  • image_id (string) – The ID of the image to run
  • min_count (int) – The minimum number of instances to launch
  • max_count (int) – The maximum number of instances to launch
  • key_name (string) – The name of the key pair with which to launch instances
  • security_groups (list of strings) – The names of the security groups with which to associate instances
  • user_data (string) – The user data passed to the launched instances
  • instance_type (string) –

    The type of instance to run:

    • m1.small
    • m1.large
    • m1.xlarge
    • c1.medium
    • c1.xlarge
    • m2.xlarge
    • m2.2xlarge
    • m2.4xlarge
    • cc1.4xlarge
    • t1.micro
  • placement (string) – The availability zone in which to launch the instances
  • kernel_id (string) – The ID of the kernel with which to launch the instances
  • ramdisk_id (string) – The ID of the RAM disk with which to launch the instances
  • monitoring_enabled (bool) – Enable CloudWatch monitoring on the instance.
  • subnet_id (string) – The subnet ID within which to launch the instances for VPC.
  • private_ip_address (string) – If you’re using VPC, you can optionally use this parameter to assign the instance a specific available IP address from the subnet (e.g., 10.0.0.25).
  • block_device_map (boto.ec2.blockdevicemapping.BlockDeviceMapping) – A BlockDeviceMapping data structure describing the EBS volumes associated with the Image.
  • disable_api_termination (bool) – If True, the instances will be locked and will not be able to be terminated via the API.
  • instance_initiated_shutdown_behavior (string) –

    Specifies whether the instance stops or terminates on instance-initiated shutdown. Valid values are:

    • stop
    • terminate
  • placement_group (string) – If specified, this is the name of the placement group in which the instance(s) will be launched.
  • client_token (string) – Unique, case-sensitive identifier you provide to ensure idempotency of the request. Maximum 64 ASCII characters
  • security_group_ids (list of strings) – The ID of the VPC security groups with which to associate instances
Return type:

Reservation

Returns:

The boto.ec2.instance.Reservation associated with the request for machines
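
A sketch of the most common call, launching a single instance from an AMI (the AMI id, key name and security group name are hypothetical):

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> reservation = ec2.run_instances('ami-12345678',
...     key_name='mykey', instance_type='m1.small',
...     security_groups=['web'])
>>> instance = reservation.instances[0]
>>> print instance.id, instance.state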

start_instances(instance_ids=None)

Start the instances specified

Parameters:instance_ids (list) – A list of strings of the Instance IDs to start
Return type:list
Returns:A list of the instances started
stop_instances(instance_ids=None, force=False)

Stop the instances specified

Parameters:
  • instance_ids (list) – A list of strings of the Instance IDs to stop
  • force (bool) – Forces the instance to stop
Return type:

list

Returns:

A list of the instances stopped

terminate_instances(instance_ids=None)

Terminate the instances specified

Parameters:instance_ids (list) – A list of strings of the Instance IDs to terminate
Return type:list
Returns:A list of the instances terminated
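
A sketch of the basic lifecycle calls against a hypothetical instance id:

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> ec2.stop_instances(['i-12345678'])
>>> ec2.start_instances(['i-12345678'])
>>> ec2.terminate_instances(['i-12345678'])
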
trim_snapshots(hourly_backups=8, daily_backups=7, weekly_backups=4)

Trim excess snapshots, based on when they were taken. More current snapshots are retained, with the number retained decreasing as you move back in time.

If EBS volumes have a ‘Name’ tag with a value, their snapshots will be assigned the same tag when they are created. The values of the ‘Name’ tags for snapshots are used by this function to group snapshots taken from the same volume (or from a series of like-named volumes over time) for trimming.

For every group of like-named snapshots, this function retains the newest and oldest snapshots, as well as, by default, the first snapshots taken in each of the last eight hours, the first snapshots taken in each of the last seven days, the first snapshots taken in the last 4 weeks (counting Midnight Sunday morning as the start of the week), and the first snapshot from the first Sunday of each month forever.

Parameters:
  • hourly_backups (int) – How many recent hourly backups should be saved.
  • daily_backups (int) – How many recent daily backups should be saved.
  • weekly_backups (int) – How many recent weekly backups should be saved.
unmonitor_instance(instance_id)

Deprecated Version, maintained for backward compatibility. Disable CloudWatch monitoring for the supplied instance.

Parameters:instance_id (string) – The instance id
Return type:list
Returns:A list of boto.ec2.instanceinfo.InstanceInfo
unmonitor_instances(instance_ids)

Disable CloudWatch monitoring for the supplied instances.

Parameters:instance_ids (list of strings) – The instance ids
Return type:list
Returns:A list of boto.ec2.instanceinfo.InstanceInfo
boto.ec2.ec2object

Represents an EC2 Object

class boto.ec2.ec2object.EC2Object(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.ec2object.TaggedEC2Object(connection=None)

Any EC2 resource that can be tagged should be represented by a Python object that subclasses this class. This class has the mechanism in place to handle the tagSet element in the Describe* responses. If tags are found, it will create a TagSet object and allow it to parse and collect the tags into a dict that is stored in the “tags” attribute of the object.

add_tag(key, value=None)

Add a tag to this object. Tags are stored by AWS and can be used to organize and filter resources. Adding a tag involves a round-trip to the EC2 service.

Parameters:
  • key (str) – The key or name of the tag being stored.
  • value (str) – An optional value that can be stored with the tag.
remove_tag(key, value=None)

Remove a tag from this object. Removing a tag involves a round-trip to the EC2 service.

Parameters:
  • key (str) – The key or name of the tag being stored.
  • value (str) – An optional value that can be stored with the tag. If a value is provided, it must match the value currently stored in EC2. If not, the tag will not be removed.
startElement(name, attrs, connection)
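
A sketch of tagging a resource through this interface, using an instance looked up by a hypothetical id:

>>> import boto
>>> ec2 = boto.connect_ec2()
>>> reservations = ec2.get_all_instances(['i-12345678'])
>>> instance = reservations[0].instances[0]
>>> instance.add_tag('Name', 'web-01')
>>> print instance.tags
>>> instance.remove_tag('Name', 'web-01')
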
boto.ec2.elb

This module provides an interface to the Elastic Compute Cloud (EC2) load balancing service from AWS.

class boto.ec2.elb.ELBConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=False, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/')

Init method to create a new connection to EC2 Load Balancing Service.

Note: The region argument is overridden by the region specified in the boto configuration file.

APIVersion = '2011-04-05'
DefaultRegionEndpoint = 'elasticloadbalancing.amazonaws.com'
DefaultRegionName = 'us-east-1'
build_list_params(params, items, label)
configure_health_check(name, health_check)

Define a health check for the EndPoints.

Parameters:
  • name (string) – The mnemonic name associated with the load balancer.
  • health_check (boto.ec2.elb.healthcheck.HealthCheck) – A HealthCheck object populated with the desired values.
Return type:

boto.ec2.elb.healthcheck.HealthCheck

Returns:

The updated boto.ec2.elb.healthcheck.HealthCheck
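
A sketch of defining a health check and applying it to a hypothetical load balancer named 'my-lb' (the HTTP target path is also hypothetical):

>>> import boto
>>> from boto.ec2.elb.healthcheck import HealthCheck
>>> elb = boto.connect_elb()
>>> hc = HealthCheck(interval=20, healthy_threshold=3,
...     unhealthy_threshold=5, target='HTTP:80/health')
>>> elb.configure_health_check('my-lb', hc)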

create_app_cookie_stickiness_policy(name, lb_name, policy_name)

Generates a stickiness policy with sticky session lifetimes that follow that of an application-generated cookie. This policy can only be associated with HTTP listeners.

This policy is similar to the policy created by CreateLBCookieStickinessPolicy, except that the lifetime of the special Elastic Load Balancing cookie follows the lifetime of the application-generated cookie specified in the policy configuration. The load balancer only inserts a new stickiness cookie when the application response includes a new application cookie.

If the application cookie is explicitly removed or expires, the session stops being sticky until a new application cookie is issued.

create_lb_cookie_stickiness_policy(cookie_expiration_period, lb_name, policy_name)

Generates a stickiness policy with sticky session lifetimes controlled by the lifetime of the browser (user-agent) or a specified expiration period. This policy can be associated only with HTTP listeners.

When a load balancer implements this policy, the load balancer uses a special cookie to track the backend server instance for each request. When the load balancer receives a request, it first checks to see if this cookie is present in the request. If so, the load balancer sends the request to the application server specified in the cookie. If not, the load balancer sends the request to a server that is chosen based on the existing load balancing algorithm.

A cookie is inserted into the response for binding subsequent requests from the same user to that server. The validity of the cookie is based on the cookie expiration time, which is specified in the policy configuration.

create_load_balancer(name, zones, listeners)

Create a new load balancer for your account.

Parameters:
  • name (string) – The mnemonic name associated with the new load balancer
  • zones (List of strings) – The names of the availability zone(s) to add.
  • listeners (List of tuples) – Each tuple contains three or four values, (LoadBalancerPortNumber, InstancePortNumber, Protocol, [SSLCertificateId]) where LoadBalancerPortNumber and InstancePortNumber are integer values between 1 and 65535, Protocol is a string containing either ‘TCP’, ‘HTTP’ or ‘HTTPS’; SSLCertificateId is the ARN of an AWS IAM certificate, and must be specified when doing HTTPS.
Return type:

boto.ec2.elb.loadbalancer.LoadBalancer

Returns:

The newly created boto.ec2.elb.loadbalancer.LoadBalancer
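
A sketch of creating a simple load balancer with one HTTP listener and registering an instance with it (the name, zones and instance id are hypothetical):

>>> import boto
>>> elb = boto.connect_elb()
>>> lb = elb.create_load_balancer('my-lb',
...     ['us-east-1a', 'us-east-1b'], [(80, 8080, 'HTTP')])
>>> lb.register_instances(['i-12345678'])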

create_load_balancer_listeners(name, listeners)

Creates a Listener (or group of listeners) for an existing Load Balancer

Parameters:
  • name (string) – The name of the load balancer to create the listeners for
  • listeners (List of tuples) – Each tuple contains three or four values, (LoadBalancerPortNumber, InstancePortNumber, Protocol, [SSLCertificateId]) where LoadBalancerPortNumber and InstancePortNumber are integer values between 1 and 65535, Protocol is a string containing either ‘TCP’, ‘HTTP’ or ‘HTTPS’; SSLCertificateId is the ARN of an AWS IAM certificate, and must be specified when doing HTTPS.
Returns:

The status of the request

delete_lb_policy(lb_name, policy_name)

Deletes a policy from the LoadBalancer. The specified policy must not be enabled for any listeners.

delete_load_balancer(name)

Delete a Load Balancer from your account.

Parameters:name (string) – The name of the Load Balancer to delete
delete_load_balancer_listeners(name, ports)

Deletes a load balancer listener (or group of listeners)

Parameters:
  • name (string) – The name of the load balancer to delete the listeners from
  • ports (List int) – Each int represents the port on the ELB to be removed
Returns:

The status of the request

deregister_instances(load_balancer_name, instances)

Remove Instances from an existing Load Balancer.

Parameters:
  • load_balancer_name (string) – The name of the Load Balancer
  • instances (List of strings) – The instance IDs of the EC2 instances to remove.
Return type:

List of strings

Returns:

An updated list of instances for this Load Balancer.

describe_instance_health(load_balancer_name, instances=None)

Get current state of all Instances registered to a Load Balancer.

Parameters:
  • load_balancer_name (string) – The name of the Load Balancer
  • instances (List of strings) – The instance IDs of the EC2 instances to return status for. If not provided, the state of all instances will be returned.
Return type:

List of boto.ec2.elb.instancestate.InstanceState

Returns:

list of state info for instances in this Load Balancer.

disable_availability_zones(load_balancer_name, zones_to_remove)

Remove availability zones from an existing Load Balancer. All zones must be in the same region as the Load Balancer. Removing zones that are not registered with the Load Balancer has no effect. You cannot remove all zones from a Load Balancer.

Parameters:
  • load_balancer_name (string) – The name of the Load Balancer
  • zones (List of strings) – The name of the zone(s) to remove.
Return type:

List of strings

Returns:

An updated list of zones for this Load Balancer.

enable_availability_zones(load_balancer_name, zones_to_add)

Add availability zones to an existing Load Balancer. All zones must be in the same region as the Load Balancer. Adding zones that are already registered with the Load Balancer has no effect.

Parameters:
  • load_balancer_name (string) – The name of the Load Balancer
  • zones (List of strings) – The name of the zone(s) to add.
Return type:

List of strings

Returns:

An updated list of zones for this Load Balancer.

get_all_load_balancers(load_balancer_names=None)

Retrieve all load balancers associated with your account.

Parameters:load_balancer_names (list) – An optional list of load balancer names
Return type:list
Returns:A list of boto.ec2.elb.loadbalancer.LoadBalancer
register_instances(load_balancer_name, instances)

Add new Instances to an existing Load Balancer.

Parameters:
  • load_balancer_name (string) – The name of the Load Balancer
  • instances (List of strings) – The instance IDs of the EC2 instances to add.
Return type:

List of strings

Returns:

An updated list of instances for this Load Balancer.

set_lb_listener_SSL_certificate(lb_name, lb_port, ssl_certificate_id)

Sets the certificate that terminates the specified listener’s SSL connections. The specified certificate replaces any prior certificate that was used on the same LoadBalancer and port.

set_lb_policies_of_listener(lb_name, lb_port, policies)

Associates, updates, or disables a policy with a listener on the load balancer. Currently only zero (0) or one (1) policy can be associated with a listener.

boto.ec2.elb.connect_to_region(region_name, **kw_params)

Given a valid region name, return a boto.ec2.elb.ELBConnection.

Parameters:region_name (str) – The name of the region to connect to.
Return type:boto.ec2.elb.ELBConnection or None
Returns:A connection to the given region, or None if an invalid region name is given
boto.ec2.elb.regions()

Get all available regions for the ELB service.

Return type:list
Returns:A list of boto.RegionInfo instances
boto.ec2.elb.healthcheck
class boto.ec2.elb.healthcheck.HealthCheck(access_point=None, interval=30, target=None, healthy_threshold=3, timeout=5, unhealthy_threshold=5)

Represents an EC2 Access Point Health Check

endElement(name, value, connection)
startElement(name, attrs, connection)
update()
boto.ec2.elb.instancestate
class boto.ec2.elb.instancestate.InstanceState(load_balancer=None, description=None, state=None, instance_id=None, reason_code=None)

Represents the state of an EC2 Load Balancer Instance

endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.elb.listelement
class boto.ec2.elb.listelement.ListElement
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.elb.listener
class boto.ec2.elb.listener.Listener(load_balancer=None, load_balancer_port=0, instance_port=0, protocol='', ssl_certificate_id=None)

Represents an EC2 Load Balancer Listener tuple

endElement(name, value, connection)
get_tuple()
startElement(name, attrs, connection)
boto.ec2.elb.loadbalancer
class boto.ec2.elb.loadbalancer.LoadBalancer(connection=None, name=None, endpoints=None)

Represents an EC2 Load Balancer

configure_health_check(health_check)
create_listener(inPort, outPort=None, proto='tcp')
create_listeners(listeners)
delete()

Delete this load balancer

delete_listener(inPort, outPort=None, proto='tcp')
delete_listeners(listeners)
delete_policy(policy_name)

Deletes a policy from the LoadBalancer. The specified policy must not be enabled for any listeners.

deregister_instances(instances)

Remove instances from this Load Balancer. Removing instances that are not registered with the Load Balancer has no effect.

Parameters:instances (string or list of instance IDs) – The instance ID(s) of the instances to remove.
disable_zones(zones)

Disable availability zones from this Access Point.

Parameters:zones (string or List of strings) – The name of the zone(s) to remove.
enable_zones(zones)

Enable availability zones to this Access Point. All zones must be in the same region as the Access Point.

Parameters:zones (string or List of strings) – The name of the zone(s) to add.
endElement(name, value, connection)
get_instance_health(instances=None)
register_instances(instances)

Add instances to this Load Balancer. All instances must be in the same region as the Load Balancer. Adding endpoints that are already registered with the Load Balancer has no effect.

Parameters:instances (string or list of instance IDs) – The instance ID(s) of the instances to add.
set_listener_SSL_certificate(lb_port, ssl_certificate_id)
set_policies_of_listener(lb_port, policies)
startElement(name, attrs, connection)
boto.ec2.image
class boto.ec2.image.Image(connection=None)

Represents an EC2 Image

deregister(delete_snapshot=False)
endElement(name, value, connection)
get_kernel()
get_launch_permissions()
get_ramdisk()
remove_launch_permissions(user_ids=None, group_names=None)
reset_launch_attributes()
run(min_count=1, max_count=1, key_name=None, security_groups=None, user_data=None, addressing_type=None, instance_type='m1.small', placement=None, kernel_id=None, ramdisk_id=None, monitoring_enabled=False, subnet_id=None, block_device_map=None, disable_api_termination=False, instance_initiated_shutdown_behavior=None, private_ip_address=None, placement_group=None, security_group_ids=None)

Runs this instance.

Parameters:
  • min_count (int) – The minimum number of instances to start
  • max_count (int) – The maximum number of instances to start
  • key_name (string) – The name of the keypair to run this instance with.
  • security_groups (list of strings) – The names of the security groups with which to associate the instance(s).
  • user_data (string) – The user data passed to the launched instance(s).
  • addressing_type
  • instance_type (string) – The type of instance to run. Current choices are: m1.small | m1.large | m1.xlarge | c1.medium | c1.xlarge | m2.xlarge | m2.2xlarge | m2.4xlarge | cc1.4xlarge
  • placement (string) – The availability zone in which to launch the instances
  • kernel_id (string) – The ID of the kernel with which to launch the instances
  • ramdisk_id (string) – The ID of the RAM disk with which to launch the instances
  • monitoring_enabled (bool) – Enable CloudWatch monitoring on the instance.
  • subnet_id (string) – The subnet ID within which to launch the instances for VPC.
  • private_ip_address (string) – If you’re using VPC, you can optionally use this parameter to assign the instance a specific available IP address from the subnet (e.g., 10.0.0.25).
  • block_device_map (boto.ec2.blockdevicemapping.BlockDeviceMapping) – A BlockDeviceMapping data structure describing the EBS volumes associated with the Image.
  • disable_api_termination (bool) – If True, the instances will be locked and will not be able to be terminated via the API.
  • instance_initiated_shutdown_behavior (string) – Specifies whether the instance stops or terminates on instance-initiated shutdown. Valid values are: stop | terminate
  • placement_group (string) – If specified, this is the name of the placement group in which the instance(s) will be launched.
  • security_group_ids (list of strings) – The IDs of the VPC security groups with which to associate the instance(s).
Return type:

Reservation

Returns:

The boto.ec2.instance.Reservation associated with the request for machines

set_launch_permissions(user_ids=None, group_names=None)
startElement(name, attrs, connection)
update(validate=False)

Update the image’s state information by making a call to fetch the current image attributes from the service.

Parameters:validate (bool) – By default, if EC2 returns no data about the image the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2.
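For example, to launch an instance from an existing image (a sketch; the AMI id and key name are placeholders, and credentials are assumed to be available in the environment):

>>> import boto
>>> conn = boto.connect_ec2()
>>> image = conn.get_image('ami-12345678')
>>> reservation = image.run(key_name='mykey', instance_type='m1.small')
>>> instance = reservation.instances[0]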
class boto.ec2.image.ImageAttribute(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.image.ProductCodes
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.instance

Represents an EC2 Instance

class boto.ec2.instance.ConsoleOutput(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.instance.Group(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.instance.Instance(connection=None)
confirm_product(product_code)
endElement(name, value, connection)
get_attribute(attribute)

Gets an attribute from this instance.

Parameters:attribute (string) – The attribute you need information about. Valid choices are: instanceType | kernel | ramdisk | userData | disableApiTermination | instanceInitiatedShutdownBehavior | rootDeviceName | blockDeviceMapping
Return type:boto.ec2.instance.InstanceAttribute
Returns:An InstanceAttribute object representing the value of the attribute requested
get_console_output()

Retrieves the console output for the instance.

Return type:boto.ec2.instance.ConsoleOutput
Returns:The console output as a ConsoleOutput object
modify_attribute(attribute, value)

Changes an attribute of this instance

Parameters:
  • attribute (string) – The attribute you wish to change. Valid attributes and their expected values (defaults in parentheses):
    instanceType - A valid instance type (m1.small)
    kernel - Kernel ID (None)
    ramdisk - Ramdisk ID (None)
    userData - Base64 encoded String (None)
    disableApiTermination - Boolean (true)
    instanceInitiatedShutdownBehavior - stop | terminate
    rootDeviceName - device name (None)
  • value (string) – The new value for the attribute
Return type:

bool

Returns:

Whether the operation succeeded or not

monitor()
reboot()
reset_attribute(attribute)

Resets an attribute of this instance to its default value.

Parameters:attribute (string) – The attribute to reset. Valid values are: kernel|ramdisk
Return type:bool
Returns:Whether the operation succeeded or not
start()

Start the instance.

startElement(name, attrs, connection)
stop(force=False)

Stop the instance

Parameters:force (bool) – Forces the instance to stop
Return type:list
Returns:A list of the instances stopped
terminate()

Terminate the instance

unmonitor()
update(validate=False)

Update the instance’s state information by making a call to fetch the current instance attributes from the service.

Parameters:validate (bool) – By default, if EC2 returns no data about the instance the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2.
use_ip(ip_address)
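Continuing the Image example above (assuming instance is a boto.ec2.instance.Instance obtained from a Reservation), the lifecycle methods can be combined like this:

>>> instance.update()                    # refresh state information from EC2
>>> print instance.state                 # e.g. pending, running, stopped
>>> instance.reboot()
>>> instance.stop()
>>> instance.terminate()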
class boto.ec2.instance.InstanceAttribute(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.instance.Reservation(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
stop_all()
class boto.ec2.instance.StateReason(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.instanceinfo
class boto.ec2.instanceinfo.InstanceInfo(connection=None, id=None, state=None)

Represents an EC2 Instance status response from CloudWatch

endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.keypair

Represents an EC2 Keypair

class boto.ec2.keypair.KeyPair(connection=None)
copy_to_region(region)

Create a new key pair of the same name in another region. Note that the new key pair will use a different ssh cert than this key pair. After doing the copy, you will need to save the material associated with the new key pair (use the save method) to a local file.

Parameters:region (boto.ec2.regioninfo.RegionInfo) – The region to which this key pair will be copied.
Return type:boto.ec2.keypair.KeyPair
Returns:The new key pair
delete()

Delete the KeyPair.

Return type:bool
Returns:True if successful, otherwise False.
endElement(name, value, connection)
save(directory_path)

Save the material (the unencrypted PEM encoded RSA private key) of a newly created KeyPair to a local file.

Parameters:directory_path (string) – The fully qualified path to the directory in which the keypair will be saved. The keypair file will be named using the name of the keypair as the base name and .pem for the file extension. If a file of that name already exists in the directory, an exception will be raised and the old file will not be overwritten.
Return type:bool
Returns:True if successful.
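For example (a sketch; the key pair name and directory are placeholders):

>>> import boto
>>> conn = boto.connect_ec2()
>>> kp = conn.create_key_pair('mykey')
>>> kp.save('/home/user/.ssh')           # writes /home/user/.ssh/mykey.pem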
boto.ec2.regioninfo
class boto.ec2.regioninfo.EC2RegionInfo(connection=None, name=None, endpoint=None)

Represents an EC2 Region

boto.ec2.reservedinstance
class boto.ec2.reservedinstance.ReservedInstance(connection=None, id=None, instance_type=None, availability_zone=None, duration=None, fixed_price=None, usage_price=None, description=None, instance_count=None, state=None)
endElement(name, value, connection)
class boto.ec2.reservedinstance.ReservedInstancesOffering(connection=None, id=None, instance_type=None, availability_zone=None, duration=None, fixed_price=None, usage_price=None, description=None)
describe()
endElement(name, value, connection)
purchase(instance_count=1)
startElement(name, attrs, connection)
boto.ec2.securitygroup

Represents an EC2 Security Group

class boto.ec2.securitygroup.GroupOrCIDR(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.securitygroup.IPPermissions(parent=None)
add_grant(name=None, owner_id=None, cidr_ip=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.securitygroup.SecurityGroup(connection=None, owner_id=None, name=None, description=None, id=None)
add_rule(ip_protocol, from_port, to_port, src_group_name, src_group_owner_id, cidr_ip)

Add a rule to the SecurityGroup object. Note that this method only changes the local version of the object. No information is sent to EC2.

authorize(ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, src_group=None)

Add a new rule to this security group. You need to pass in either src_group OR ip_protocol, from_port, to_port, and cidr_ip. In other words, either you are authorizing another group or you are authorizing some ip-based rule.

Parameters:
  • ip_protocol (string) – Either tcp | udp | icmp
  • from_port (int) – The beginning port number you are enabling
  • to_port (int) – The ending port number you are enabling
  • cidr_ip (string) – The CIDR block you are providing access to.
  • src_group (boto.ec2.securitygroup.SecurityGroup or boto.ec2.securitygroup.GroupOrCIDR) – The SecurityGroup you are granting access to.
Return type:

bool

Returns:

True if successful.

copy_to_region(region, name=None)

Create a copy of this security group in another region. Note that the new security group will be a separate entity and will not stay in sync automatically after the copy operation.

Parameters:
  • region (boto.ec2.regioninfo.RegionInfo) – The region to which this security group will be copied.
  • name (string) – The name of the copy. If not supplied, the copy will have the same name as this security group.
Return type:

boto.ec2.securitygroup.SecurityGroup

Returns:

The new security group.

delete()
endElement(name, value, connection)
instances()

Find all of the current instances that are running within this security group.

Return type:list of boto.ec2.instance.Instance
Returns:A list of Instance objects
remove_rule(ip_protocol, from_port, to_port, src_group_name, src_group_owner_id, cidr_ip)

Remove a rule from the SecurityGroup object. Note that this method only changes the local version of the object. No information is sent to EC2.

revoke(ip_protocol=None, from_port=None, to_port=None, cidr_ip=None, src_group=None)
startElement(name, attrs, connection)
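For example (a sketch; the group name, ports and CIDR block are placeholders):

>>> import boto
>>> conn = boto.connect_ec2()
>>> web = conn.create_security_group('webservers', 'our web server group')
>>> web.authorize('tcp', 80, 80, '0.0.0.0/0')      # open port 80 to the world
>>> web.authorize(src_group=web)                   # allow traffic between group members
>>> web.revoke('tcp', 80, 80, '0.0.0.0/0')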
boto.ec2.snapshot

Represents an EC2 Elastic Block Storage Snapshot

class boto.ec2.snapshot.Snapshot(connection=None)
delete()
endElement(name, value, connection)
get_permissions()
reset_permissions()
share(user_ids=None, groups=None)
unshare(user_ids=None, groups=None)
update(validate=False)

Update the data associated with this snapshot by querying EC2.

Parameters:validate (bool) – By default, if EC2 returns no data about the snapshot the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2.
class boto.ec2.snapshot.SnapshotAttribute(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.ec2.volume

Represents an EC2 Elastic Block Storage Volume

class boto.ec2.volume.AttachmentSet
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.ec2.volume.Volume(connection=None)
attach(instance_id, device)

Attach this EBS volume to an EC2 instance.

Parameters:
  • instance_id (str) – The ID of the EC2 instance to which it will be attached.
  • device (str) – The device on the instance through which the volume will be exposed (e.g. /dev/sdh)
Return type:

bool

Returns:

True if successful

attachment_state()

Get the attachment state.

create_snapshot(description=None)

Create a snapshot of this EBS Volume.

Parameters:description (str) – A description of the snapshot. Limited to 256 characters.
Return type:bool
Returns:True if successful
delete()

Delete this EBS volume.

Return type:bool
Returns:True if successful
detach(force=False)

Detach this EBS volume from an EC2 instance.

Parameters:force (bool) – Forces detachment if the previous detachment attempt did not occur cleanly. This option can lead to data loss or a corrupted file system. Use this option only as a last resort to detach a volume from a failed instance. The instance will not have an opportunity to flush file system caches nor file system meta data. If you use this option, you must perform file system check and repair procedures.
Return type:bool
Returns:True if successful
endElement(name, value, connection)
snapshots(owner=None, restorable_by=None)

Get all snapshots related to this volume. Note that this requires that all available snapshots for the account be retrieved from EC2 first and then the list is filtered client-side to contain only those for this volume.

Parameters:
  • owner (str) – If present, only the snapshots owned by the specified user will be returned. Valid values are: self | amazon | AWS Account ID
  • restorable_by (str) – If present, only the snapshots that are restorable by the specified account id will be returned.
Return type:

list of boto.ec2.snapshot.Snapshot

Returns:

The requested Snapshot objects

startElement(name, attrs, connection)
update(validate=False)

Update the data associated with this volume by querying EC2.

Parameters:validate (bool) – By default, if EC2 returns no data about the volume the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from EC2.
volume_state()

Returns the state of the volume. Same value as the status attribute.
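For example (a sketch; the volume size, zone, instance id and device name are placeholders):

>>> import boto
>>> conn = boto.connect_ec2()
>>> vol = conn.create_volume(10, 'us-east-1a')     # 10 GiB volume
>>> vol.attach('i-12345678', '/dev/sdh')
>>> vol.create_snapshot('nightly backup')
>>> vol.update()
>>> print vol.volume_state(), vol.attachment_state()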

boto.ec2.zone

Represents an EC2 Availability Zone

class boto.ec2.zone.Zone(connection=None)
endElement(name, value, connection)

ECS

boto.ecs
class boto.ecs.ECSConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host='ecs.amazonaws.com', debug=0, https_connection_factory=None, path='/')

ECommerce Connection

For more information on how to use this module see:

http://blog.coredumped.org/2010/09/search-for-books-on-amazon-using-boto.html

APIVersion = '2010-11-01'
get_response(action, params, page=0, itemSet=None)

Utility method to handle calls to ECS and parsing of responses.

Returns items that satisfy the search criteria, including one or more search indices.

For a full list of search terms, see: http://docs.amazonwebservices.com/AWSECommerceService/2010-09-01/DG/index.html?ItemSearch.html

boto.ecs.item
class boto.ecs.item.Item(connection=None)

A single Item

Initialize this Item

class boto.ecs.item.ItemSet(connection, action, params, page=0)

A special ResponseGroup that has built-in paging, and only creates new Items on the “Item” tag

endElement(name, value, connection)
next()

Special paging functionality

startElement(name, attrs, connection)
to_xml()

Override to first fetch everything

class boto.ecs.item.ResponseGroup(connection=None, nodename=None)

A Generic “Response Group”, which can be anything from the entire list of Items to specific response elements within an item

Initialize this Item

endElement(name, value, connection)
get(name)
set(name, value)
startElement(name, attrs, connection)
to_xml()
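A minimal sketch of a search using the documented get_response method (the search parameters are illustrative only, and the ECommerce service may require additional parameters such as an AssociateTag):

>>> from boto.ecs import ECSConnection
>>> conn = ECSConnection('<aws access key>', '<aws secret key>')
>>> items = conn.get_response('ItemSearch', {'SearchIndex': 'Books', 'Keywords': 'python'})
>>> print items.to_xml()        # fetches all pages and returns the raw XML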

EMR

boto.emr

This module provides an interface to the Elastic MapReduce (EMR) service from AWS.

boto.emr.connection

Represents a connection to the EMR service

class boto.emr.connection.EmrConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/')
APIVersion = '2009-03-31'
DebuggingArgs = 's3n://us-east-1.elasticmapreduce/libs/state-pusher/0.1/fetch'
DebuggingJar = 's3n://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar'
DefaultRegionEndpoint = 'elasticmapreduce.amazonaws.com'
DefaultRegionName = 'us-east-1'
ResponseError

alias of EmrResponseError

add_jobflow_steps(jobflow_id, steps)

Adds steps to a jobflow

Parameters:
  • jobflow_id (str) – The job flow id
  • steps (list(boto.emr.Step)) – A list of steps to add to the job
describe_jobflow(jobflow_id)

Describes a single Elastic MapReduce job flow

Parameters:jobflow_id (str) – The job flow id of interest
describe_jobflows(states=None, jobflow_ids=None, created_after=None, created_before=None)

Retrieve all the Elastic MapReduce job flows on your account

Parameters:
  • states (list) – A list of strings with job flow states wanted
  • jobflow_ids (list) – A list of job flow IDs
  • created_after (datetime) – Bound on job flow creation time
  • created_before (datetime) – Bound on job flow creation time
run_jobflow(name, log_uri, ec2_keyname=None, availability_zone=None, master_instance_type='m1.small', slave_instance_type='m1.small', num_instances=1, action_on_failure='TERMINATE_JOB_FLOW', keep_alive=False, enable_debugging=False, hadoop_version='0.18', steps=[], bootstrap_actions=[])

Runs a job flow

Parameters:
  • name (str) – Name of the job flow
  • log_uri (str) – URI of the S3 bucket to place logs
  • ec2_keyname (str) – EC2 key used for the instances
  • availability_zone (str) – EC2 availability zone of the cluster
  • master_instance_type (str) – EC2 instance type of the master
  • slave_instance_type (str) – EC2 instance type of the slave nodes
  • num_instances (int) – Number of instances in the Hadoop cluster
  • action_on_failure (str) – Action to take if a step terminates
  • keep_alive (bool) – Denotes whether the cluster should stay alive upon completion
  • enable_debugging (bool) – Denotes whether AWS console debugging should be enabled.
  • steps (list(boto.emr.Step)) – List of steps to add with the job
Return type:

str

Returns:

The jobflow id

terminate_jobflow(jobflow_id)

Terminate an Elastic MapReduce job flow

Parameters:jobflow_id (str) – A jobflow id
terminate_jobflows(jobflow_ids)

Terminate an Elastic MapReduce job flow

Parameters:jobflow_ids (list) – A list of job flow IDs
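For example (a sketch; the job flow id is a placeholder):

>>> from boto.emr.connection import EmrConnection
>>> conn = EmrConnection('<aws access key>', '<aws secret key>')
>>> status = conn.describe_jobflow('j-ABC123EXAMPLE')
>>> conn.terminate_jobflow('j-ABC123EXAMPLE')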
boto.emr.step
class boto.emr.step.JarStep(name, jar, main_class=None, action_on_failure='TERMINATE_JOB_FLOW', step_args=None)

Custom jar step

An Elastic MapReduce step that executes a jar

Parameters:
  • name (str) – The name of the step
  • jar (str) – S3 URI to the Jar file
  • main_class (str) – The class to execute in the jar
  • action_on_failure (str) – An action, defined in the EMR docs to take on failure.
  • step_args (list(str)) – A list of arguments to pass to the step
args()
jar()
main_class()
class boto.emr.step.Step

Jobflow Step base class

args()
Return type:list(str)
Returns:List of arguments for the step
jar()
Return type:str
Returns:URI to the jar
main_class()
Return type:str
Returns:The main class name
class boto.emr.step.StreamingStep(name, mapper, reducer=None, action_on_failure='TERMINATE_JOB_FLOW', cache_files=None, cache_archives=None, step_args=None, input=None, output=None, jar='/home/hadoop/contrib/streaming/hadoop-streaming.jar')

Hadoop streaming step

A Hadoop streaming Elastic MapReduce step

Parameters:
  • name (str) – The name of the step
  • mapper (str) – The mapper URI
  • reducer (str) – The reducer URI
  • action_on_failure (str) – An action, defined in the EMR docs to take on failure.
  • cache_files (list(str)) – A list of cache files to be bundled with the job
  • cache_archives (list(str)) – A list of jar archives to be bundled with the job
  • step_args (list(str)) – A list of arguments to pass to the step
  • input (str or a list of str) – The input uri
  • output (str) – The output uri
  • jar (str) – The hadoop streaming jar. This can be either a local path on the master node, or an s3:// URI.
args()
jar()
main_class()
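Putting the step classes together with run_jobflow (a sketch; the S3 bucket paths are placeholders, and the reducer uses Hadoop streaming's built-in aggregate reducer):

>>> from boto.emr.connection import EmrConnection
>>> from boto.emr.step import StreamingStep
>>> conn = EmrConnection('<aws access key>', '<aws secret key>')
>>> step = StreamingStep(name='my wordcount', mapper='s3n://mybucket/wordcount-mapper.py', reducer='aggregate', input='s3n://mybucket/input/', output='s3n://mybucket/output/')
>>> jobflow_id = conn.run_jobflow(name='my jobflow', log_uri='s3n://mybucket/logs', steps=[step])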
boto.emr.emrobject

This module contains EMR response objects

class boto.emr.emrobject.AddInstanceGroupsResponse(connection=None)
Fields = set(['InstanceGroupIds', 'JobFlowId'])
class boto.emr.emrobject.Arg(connection=None)
endElement(name, value, connection)
class boto.emr.emrobject.BootstrapAction(connection=None)
Fields = set(['Path', 'Args', 'Name'])
startElement(name, attrs, connection)
class boto.emr.emrobject.EmrObject(connection=None)
Fields = set([])
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.emr.emrobject.InstanceGroup(connection=None)
Fields = set(['ReadyDateTime', 'InstanceType', 'InstanceRole', 'EndDateTime', 'InstanceRunningCount', 'State', 'BidPrice', 'Market', 'StartDateTime', 'Name', 'InstanceGroupId', 'CreationDateTime', 'InstanceRequestCount', 'LastStateChangeReason', 'LaunchGroup'])
class boto.emr.emrobject.JobFlow(connection=None)
Fields = set(['TerminationProtected', 'MasterInstanceId', 'State', 'HadoopVersion', 'LogUri', 'Ec2KeyName', 'ReadyDateTime', 'Type', 'JobFlowId', 'CreationDateTime', 'LastStateChangeReason', 'Name', 'EndDateTime', 'Value', 'InstanceCount', 'RequestId', 'StartDateTime', 'SlaveInstanceType', 'AvailabilityZone', 'MasterPublicDnsName', 'NormalizedInstanceHours', 'MasterInstanceType', 'KeepJobFlowAliveWhenNoSteps', 'Id'])
startElement(name, attrs, connection)
class boto.emr.emrobject.KeyValue(connection=None)
Fields = set(['Value', 'Key'])
class boto.emr.emrobject.ModifyInstanceGroupsResponse(connection=None)
Fields = set(['RequestId'])
class boto.emr.emrobject.RunJobFlowResponse(connection=None)
Fields = set(['JobFlowId'])
class boto.emr.emrobject.Step(connection=None)
Fields = set(['Name', 'EndDateTime', 'Jar', 'ActionOnFailure', 'State', 'MainClass', 'StartDateTime', 'CreationDateTime', 'LastStateChangeReason'])
startElement(name, attrs, connection)

file

boto.file.bucket
class boto.file.bucket.Bucket(name, contained_key)

Instantiate an anonymous file-based Bucket around a single key.

delete_key(key_name, headers=None, version_id=None, mfa_token=None)

Deletes a key from the bucket.

Parameters:
  • key_name (string) – The key name to delete
  • version_id (string) – Unused in this subclass.
  • mfa_token (tuple or list of strings) – Unused in this subclass.
get_all_keys(headers=None, **params)

This method returns the single key around which this anonymous Bucket was instantiated.

Return type:SimpleResultSet
Returns:The result from file system listing the keys requested
get_key(key_name, headers=None, version_id=None)

Check to see if a particular key exists within the bucket. Returns: An instance of a Key object or None

Parameters:
  • key_name (string) – The name of the key to retrieve
  • version_id (string) – Unused in this subclass.
Return type:

boto.file.key.Key

Returns:

A Key object from this bucket.

new_key(key_name=None)

Creates a new key

Parameters:key_name (string) – The name of the key to create
Return type:boto.file.key.Key
Returns:An instance of the newly created key object
boto.file.simpleresultset
class boto.file.simpleresultset.SimpleResultSet(input_list)

ResultSet facade built from a simple list, rather than via XML parsing.

boto.file.connection
class boto.file.connection.FileConnection(file_storage_uri)
get_bucket(bucket_name, validate=True, headers=None)
boto.file.key
class boto.file.key.Key(bucket, name, fp=None)
get_contents_as_string(headers=None, cb=None, num_cb=10, torrent=False)

Retrieve file data from the Key, and return contents as a string.

Parameters:
  • headers (dict) – ignored in this subclass.
  • cb (int) – ignored in this subclass.
  • num_cb – ignored in this subclass.
  • torrent (bool) – ignored in this subclass.
Return type:

string

Returns:

The contents of the file as a string

get_file(fp, headers=None, cb=None, num_cb=10, torrent=False)

Retrieves a file from a Key

Parameters:
  • fp (file) – File pointer to put the data into
  • cb (int) – ignored in this subclass.
  • num_cb – ignored in this subclass.
Param:

ignored in this subclass.

set_contents_from_file(fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None)

Store an object in a file, using the name of the Key object as the file URI and the contents of the file pointed to by ‘fp’ as the contents.

Parameters:
  • fp (file) – the file whose contents to upload
  • headers (dict) – ignored in this subclass.
  • replace (bool) – If this parameter is False, the method will first check to see if an object exists in the bucket with the same key. If it does, it won’t overwrite it. The default value is True which will overwrite the object.
  • cb (int) – ignored in this subclass.
  • num_cb – ignored in this subclass.
  • policy (boto.s3.acl.CannedACLStrings) – ignored in this subclass.
  • md5 (A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method.) – ignored in this subclass.

fps

boto.fps
boto.fps.connection
class boto.fps.connection.FPSConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host='fps.sandbox.amazonaws.com', debug=0, https_connection_factory=None, path='/')
APIVersion = '2007-01-08'
cancel(transactionId, description=None)

Cancels a reserved or pending transaction.

get_recipient_verification_status(recipientTokenId)

Test that the intended recipient has a verified Amazon Payments account.

get_token_by_caller_reference(callerReference)

Returns details about the token specified by ‘callerReference’.

get_token_by_caller_token(tokenId)

Returns details about the token specified by ‘tokenId’.

get_transaction_status(transactionId)

Returns the status of a given transaction.

install_caller_instruction(token_type='Unrestricted', transaction_id=None)

Set us up as a caller. This will install a new caller_token into the FPS section. This should really only be called to regenerate the caller token.

install_payment_instruction(instruction, token_type='Unrestricted', transaction_id=None)

InstallPaymentInstruction

instruction: The PaymentInstruction to send, for example:

  MyRole == 'Caller' or Say 'Roles do not match';

token_type: Defaults to 'Unrestricted'
transaction_id: Defaults to a new ID

install_recipient_instruction(token_type='Unrestricted', transaction_id=None)

Set us up as a Recipient. This will install a new recipient_token into the FPS section. This should really only be called to regenerate the recipient token.

make_marketplace_registration_url(returnURL, pipelineName, maxFixedFee=0.0, maxVariableFee=0.0, recipientPaysFee=True, **params)

Generate the URL with the signature required for signing up a recipient

make_url(returnURL, paymentReason, pipelineName, transactionAmount, **params)

Generate the URL with the signature required for a transaction

pay(transactionAmount, senderTokenId, recipientTokenId=None, callerTokenId=None, chargeFeeTo='Recipient', callerReference=None, senderReference=None, recipientReference=None, senderDescription=None, recipientDescription=None, callerDescription=None, metadata=None, transactionDate=None, reserve=False)

Make a payment transaction. You must specify the amount. This can also perform a Reserve request if ‘reserve’ is set to True.

refund(callerReference, transactionId, refundAmount=None, callerDescription=None)

Refund a transaction. This refunds the full amount by default unless ‘refundAmount’ is specified.

settle(reserveTransactionId, transactionAmount=None)

Charges for a reserved payment.

verify_signature(end_point_url, http_parameters)
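For example, to check on a transaction against the default sandbox endpoint (a sketch; the transaction id is a placeholder):

>>> from boto.fps.connection import FPSConnection
>>> conn = FPSConnection('<aws access key>', '<aws secret key>')   # defaults to the sandbox host
>>> status = conn.get_transaction_status('<transaction id>')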

GS

boto.gs.acl
class boto.gs.acl.ACL(parent=None)
acl
add_email_grant(permission, email_address)
add_group_email_grant(permission, email_address)
add_group_grant(permission, group_id)
add_user_grant(permission, user_id)
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
class boto.gs.acl.Entries(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
class boto.gs.acl.Entry(scope=None, type=None, id=None, name=None, email_address=None, domain=None, permission=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
class boto.gs.acl.Scope(parent, type=None, id=None, name=None, email_address=None, domain=None)
ALLOWED_SCOPE_TYPE_SUB_ELEMS = {'GroupByDomain': ['Domain'], 'UserByEmail': ['EmailAddress', 'Name'], 'UserById': ['ID', 'Name'], 'AllUsers': [], 'GroupByEmail': ['EmailAddress', 'Name'], 'AllAuthenticatedUsers': [], 'GroupById': ['ID', 'Name']}
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
boto.gs.bucket
class boto.gs.bucket.Bucket(connection=None, name=None, key_class=<class 'boto.gs.key.Key'>)
add_email_grant(permission, email_address, recursive=False, headers=None)

Convenience method that provides a quick way to add an email grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT’s the new ACL back to GS.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ, WRITE, FULL_CONTROL).
  • email_address (string) – The email address associated with the GS account you are granting the permission to.
  • recursive (boolean) – A boolean value that controls whether the call will apply the grant to all keys within the bucket or not. The default value is False. By passing a True value, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time!
add_group_email_grant(permission, email_address, recursive=False, headers=None)

Convenience method that provides a quick way to add an email group grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT’s the new ACL back to GS.

Parameters:
  • permission (string) – The permission being granted. Should be one of: READ|WRITE|FULL_CONTROL See http://code.google.com/apis/storage/docs/developer-guide.html#authorization for more details on permissions.
  • email_address (string) – The email address associated with the Google Group to which you are granting the permission.
  • recursive (bool) – A boolean value that controls whether the call will apply the grant to all keys within the bucket or not. The default value is False. By passing a True value, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time!
add_user_grant(permission, user_id, recursive=False, headers=None)

Convenience method that provides a quick way to add a canonical user grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUTs the new ACL back to GS.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ|WRITE|FULL_CONTROL)
  • user_id (string) – The canonical user id associated with the GS account you are granting the permission to.
  • recursive (bool) – A boolean value that controls whether the call will apply the grant to all keys within the bucket or not. The default value is False. By passing a True value, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time!
get_acl(key_name='', headers=None, version_id=None)
list_grants(headers=None)
set_acl(acl_or_str, key_name='', headers=None, version_id=None)
set_canned_acl(acl_str, key_name='', headers=None, version_id=None)
boto.gs.connection
class boto.gs.connection.GSConnection(gs_access_key_id=None, gs_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host='commondatastorage.googleapis.com', debug=0, https_connection_factory=None, calling_format=<boto.s3.connection.SubdomainCallingFormat instance>, path='/')
DefaultHost = 'commondatastorage.googleapis.com'
QueryString = 'Signature=%s&Expires=%d&AWSAccessKeyId=%s'
create_bucket(bucket_name, headers=None, location='', policy=None)

Creates a new bucket. By default it’s located in the USA. You can pass Location.EU to create a European bucket. You can also pass a LocationConstraint, which (in addition to locating the bucket in the specified location) informs Google that Google services must not copy data out of that location.

Parameters:
  • bucket_name (string) – The name of the new bucket
  • headers (dict) – Additional headers to pass along with the request to GS.
  • location (boto.gs.connection.Location) – The location of the new bucket
  • policy (boto.s3.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new bucket in GS.
class boto.gs.connection.Location
DEFAULT = ''
EU = 'EU'
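For example (a sketch; the bucket names are placeholders and Google Storage credentials are assumed):

>>> from boto.gs.connection import GSConnection, Location
>>> conn = GSConnection('<gs access key>', '<gs secret key>')
>>> bucket = conn.create_bucket('mybucket')
>>> eu_bucket = conn.create_bucket('mybucket-eu', location=Location.EU)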
boto.gs.key
class boto.gs.key.Key(bucket=None, name=None)
add_email_grant(permission, email_address)

Convenience method that provides a quick way to add an email grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT’s the new ACL back to GS.

Parameters:
  • permission (string) – The permission being granted.
  • email_address (string) – The email address associated with the GS account you are granting the permission to.
add_group_email_grant(permission, email_address, headers=None)

Convenience method that provides a quick way to add an email group grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT’s the new ACL back to GS.

Parameters:
  • permission (string) – The permission being granted.
  • email_address (string) – The email address associated with the Google Group to which you are granting the permission.
add_group_grant(permission, group_id)

Convenience method that provides a quick way to add a canonical group grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT’s the new ACL back to GS.

Parameters:
  • permission (string) – The permission being granted.
  • group_id (string) – The canonical id of the Google Group to which you are granting the permission.
add_user_grant(permission, user_id)

Convenience method that provides a quick way to add a canonical user grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT’s the new ACL back to GS.

Parameters:
  • permission (string) – The permission being granted.
  • user_id (string) – The canonical user id associated with the GS account you are granting the permission to.
set_contents_from_file(fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, res_upload_handler=None)

Store an object in GS using the name of the Key object as the key in GS and the contents of the file pointed to by ‘fp’ as the contents.

Parameters:
  • fp (file) – the file whose contents are to be uploaded
  • headers (dict) – additional HTTP headers to be sent with the PUT request.
  • replace (bool) – If this parameter is False, the method will first check to see if an object exists in the bucket with the same key. If it does, it won’t overwrite it. The default value is True which will overwrite the object.
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to GS and the second representing the total number of bytes that need to be transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • policy (boto.gs.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new key in GS.
  • md5 (A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method.) – If you need to compute the MD5 for any reason prior to upload, it’s silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed.
  • res_upload_handler (ResumableUploadHandler) – If provided, this handler will perform the upload.

TODO: At some point we should refactor the Bucket and Key classes, to move functionality common to all providers into a parent class, and provider-specific functionality into subclasses (rather than just overriding/sharing code the way it currently works).

set_contents_from_filename(filename, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=None, res_upload_handler=None)

Store an object in GS using the name of the Key object as the key in GS and the contents of the file named by ‘filename’. See set_contents_from_file method for details about the parameters.

Parameters:
  • filename (string) – The name of the file that you want to put onto GS
  • headers (dict) – Additional headers to pass along with the request to GS.
  • replace (bool) – If True, replaces the contents of the file if it already exists.
  • cb (function) – (optional) a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to GS and the second representing the total number of bytes that need to be transmitted.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • policy (boto.gs.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new key in GS.
  • md5 (A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method.) – If you need to compute the MD5 for any reason prior to upload, it’s silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed.
  • res_upload_handler (ResumableUploadHandler) – If provided, this handler will perform the upload.
boto.gs.user
class boto.gs.user.User(parent=None, id='', name='')
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml(element_name='Owner')
boto.gs.resumable_upload_handler
class boto.gs.resumable_upload_handler.ResumableUploadHandler(tracker_file_name=None, num_retries=None)

Constructor. Instantiate once for each uploaded file.

Parameters:
  • tracker_file_name (string) – optional file name to save tracker URI. If supplied and the current process fails the upload, it can be retried in a new process. If called with an existing file containing a valid tracker URI, we’ll resume the upload from this URI; else we’ll start a new resumable upload (and write the URI to this tracker file).
  • num_retries (int) – the number of times we’ll re-try a resumable upload making no progress. (Count resets every time we get progress, so upload can span many more than this number of retries.)
BUFFER_SIZE = 8192
RETRYABLE_EXCEPTIONS = (<class 'httplib.HTTPException'>, <type 'exceptions.IOError'>, <class 'socket.error'>, <class 'socket.gaierror'>)
SERVER_HAS_NOTHING = (0, -1)
get_tracker_uri()

Returns upload tracker URI, or None if the upload has not yet started.

send_file(key, fp, headers, cb=None, num_cb=10)

Upload a file to a key into a bucket on GS, using GS resumable upload protocol.

Parameters:
  • key (boto.s3.key.Key or subclass) – The Key object to which data is to be uploaded
  • fp (file-like object) – The file pointer to upload
  • headers (dict) – The headers to pass along with the PUT request
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to GS, and the second representing the total number of bytes that need to be transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter, this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. Providing a negative integer will cause your callback to be called with each buffer read.

Raises ResumableUploadException if a problem occurs during the transfer.
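A sketch of a resumable upload (the bucket name, object name and local paths are placeholders):

>>> from boto.gs.connection import GSConnection
>>> from boto.gs.resumable_upload_handler import ResumableUploadHandler
>>> conn = GSConnection('<gs access key>', '<gs secret key>')
>>> bucket = conn.get_bucket('mybucket')
>>> key = bucket.new_key('backup.tar.gz')
>>> handler = ResumableUploadHandler(tracker_file_name='/tmp/backup.tracker', num_retries=6)
>>> key.set_contents_from_filename('/local/path/backup.tar.gz', res_upload_handler=handler)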

IAM

boto.iam
boto.iam.connection
class boto.iam.connection.IAMConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host='iam.amazonaws.com', debug=0, https_connection_factory=None, path='/')
APIVersion = '2010-05-08'
add_user_to_group(group_name, user_name)

Add a user to a group

Parameters:
  • group_name (string) – The name of the group
  • user_name (string) – The name of the user to be added to the group.
create_access_key(user_name=None)

Create a new AWS Secret Access Key and corresponding AWS Access Key ID for the specified user. The default status for new keys is Active

If the user_name is not specified, the user_name is determined implicitly based on the AWS Access Key ID used to sign the request.

Parameters:user_name (string) – The username of the user
create_account_alias(alias)

Creates a new alias for the AWS account.

For more information on account id aliases, please see http://goo.gl/ToB7G

Parameters:alias (string) – The alias to attach to the account.
create_group(group_name, path='/')

Create a group.

Parameters:
  • group_name (string) – The name of the new group
  • path (string) – The path to the group (Optional). Defaults to /.
create_login_profile(user_name, password)

Creates a login profile for the specified user, giving the user the ability to access AWS services and the AWS Management Console.

Parameters:
  • user_name (string) – The name of the user
  • password (string) – The new password for the user
create_user(user_name, path='/')

Create a user.

Parameters:
  • user_name (string) – The name of the new user
  • path (string) – The path in which the user will be created. Defaults to /.
deactivate_mfa_device(user_name, serial_number)

Deactivates the specified MFA device and removes it from association with the user.

Parameters:
  • user_name (string) – The username of the user
  • serial_number (string) – The serial number which uniquely identifies the MFA device.
delete_access_key(access_key_id, user_name=None)

Delete an access key associated with a user.

If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request.

Parameters:
  • access_key_id (string) – The ID of the access key to be deleted.
  • user_name (string) – The username of the user
delete_account_alias(alias)

Deletes an alias for the AWS account.

For more information on account id aliases, please see http://goo.gl/ToB7G

Parameters:alias (string) – The alias to remove from the account.
delete_group(group_name)

Delete a group. The group must not contain any Users or have any attached policies

Parameters:group_name (string) – The name of the group to delete.
delete_group_policy(group_name, policy_name)

Deletes the specified policy document for the specified group.

Parameters:
  • group_name (string) – The name of the group the policy is associated with.
  • policy_name (string) – The policy document to delete.
delete_login_profile(user_name)

Deletes the login profile associated with the specified user.

Parameters:user_name (string) – The name of the user whose login profile will be deleted.
delete_server_cert(cert_name)

Delete the specified server certificate.

Parameters:cert_name (string) – The name of the server certificate you want to delete.
delete_signing_cert(cert_id, user_name=None)

Delete a signing certificate associated with a user.

If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request.

Parameters:
  • user_name (string) – The username of the user
  • cert_id (string) – The ID of the certificate.
delete_user(user_name)

Delete a user including the user’s path, GUID and ARN.

If the user_name is not specified, the user_name is determined implicitly based on the AWS Access Key ID used to sign the request.

Parameters:user_name (string) – The name of the user to delete.
delete_user_policy(user_name, policy_name)

Deletes the specified policy document for the specified user.

Parameters:
  • user_name (string) – The name of the user the policy is associated with.
  • policy_name (string) – The policy document to delete.
enable_mfa_device(user_name, serial_number, auth_code_1, auth_code_2)

Enables the specified MFA device and associates it with the specified user.

Parameters:
  • user_name (string) – The username of the user
  • serial_number (string) – The serial number which uniquely identifies the MFA device.
  • auth_code_1 (string) – An authentication code emitted by the device.
  • auth_code_2 (string) – A subsequent authentication code emitted by the device.
get_account_alias()

Get the alias for the current account.

This is referred to in the docs as list_account_aliases, but it seems you can only have one account alias currently.

For more information on account id aliases, please see http://goo.gl/ToB7G

get_all_access_keys(user_name, marker=None, max_items=None)

Get all access keys associated with an account.

Parameters:
  • user_name (string) – The username of the user
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of keys you want in the response.
get_all_group_policies(group_name, marker=None, max_items=None)

List the names of the policies associated with the specified group.

Parameters:
  • group_name (string) – The name of the group the policy is associated with.
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of policies you want in the response.
get_all_groups(path_prefix='/', marker=None, max_items=None)

List the groups that have the specified path prefix.

Parameters:
  • path_prefix (string) – If provided, only groups whose paths match the provided prefix will be returned.
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of groups you want in the response.
get_all_mfa_devices(user_name, marker=None, max_items=None)

Get all MFA devices associated with an account.

Parameters:
  • user_name (string) – The username of the user
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of devices you want in the response.
get_all_server_certs(path_prefix='/', marker=None, max_items=None)

Lists the server certificates that have the specified path prefix. If none exist, the action returns an empty list.

Parameters:
  • path_prefix (string) – If provided, only certificates whose paths match the provided prefix will be returned.
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of certificates you want in the response.
get_all_signing_certs(marker=None, max_items=None, user_name=None)

Get all signing certificates associated with an account.

If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request.

Parameters:
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of certificates you want in the response.
  • user_name (string) – The username of the user
get_all_user_policies(user_name, marker=None, max_items=None)

List the names of the policies associated with the specified user.

Parameters:
  • user_name (string) – The name of the user the policy is associated with.
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of policies you want in the response.
get_all_users(path_prefix='/', marker=None, max_items=None)

List the users that have the specified path prefix.

Parameters:
  • path_prefix (string) – If provided, only users whose paths match the provided prefix will be returned.
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of users you want in the response.
get_group(group_name, marker=None, max_items=None)

Return a list of users that are in the specified group.

Parameters:
  • group_name (string) – The name of the group whose information should be returned.
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of users you want in the response.
get_group_policy(group_name, policy_name)

Retrieves the specified policy document for the specified group.

Parameters:
  • group_name (string) – The name of the group the policy is associated with.
  • policy_name (string) – The policy document to get.
get_groups_for_user(user_name, marker=None, max_items=None)

List the groups that a specified user belongs to.

Parameters:
  • user_name (string) – The name of the user to list groups for.
  • marker (string) – Use this only when paginating results and only in follow-up request after you’ve received a response where the results are truncated. Set this to the value of the Marker element in the response you just received.
  • max_items (int) – Use this only when paginating results to indicate the maximum number of groups you want in the response.
get_login_profiles(user_name)

Retrieves the login profile for the specified user.

Parameters:user_name (string) – The username of the user
get_response(action, params, path='/', parent=None, verb='GET', list_marker='Set')

Utility method to handle calls to IAM and parsing of responses.

get_server_certificate(cert_name)

Retrieves information about the specified server certificate.

Parameters:cert_name (string) – The name of the server certificate you want to retrieve information about.
get_signin_url(service='ec2')

Get the URL where IAM users can use their login profile to sign in to this account’s console.

Parameters:service (string) – Default service to go to in the console.
get_user(user_name=None)

Retrieve information about the specified user.

If the user_name is not specified, the user_name is determined implicitly based on the AWS Access Key ID used to sign the request.

Parameters:user_name (string) – The name of the user to retrieve. If not specified, defaults to the user making the request.
get_user_policy(user_name, policy_name)

Retrieves the specified policy document for the specified user.

Parameters:
  • user_name (string) – The name of the user the policy is associated with.
  • policy_name (string) – The policy document to get.
put_group_policy(group_name, policy_name, policy_json)

Adds or updates the specified policy document for the specified group.

Parameters:
  • group_name (string) – The name of the group the policy is associated with.
  • policy_name (string) – The name of the policy document.
  • policy_json (string) – The policy document.
put_user_policy(user_name, policy_name, policy_json)

Adds or updates the specified policy document for the specified user.

Parameters:
  • user_name (string) – The name of the user the policy is associated with.
  • policy_name (string) – The name of the policy document.
  • policy_json (string) – The policy document.
remove_user_from_group(group_name, user_name)

Remove a user from a group.

Parameters:
  • group_name (string) – The name of the group
  • user_name (string) – The user to remove from the group.
resync_mfa_device(user_name, serial_number, auth_code_1, auth_code_2)

Synchronizes the specified MFA device with the AWS servers.

Parameters:
  • user_name (string) – The username of the user
  • serial_number (string) – The serial number which uniquely identifies the MFA device.
  • auth_code_1 (string) – An authentication code emitted by the device.
  • auth_code_2 (string) – A subsequent authentication code emitted by the device.
update_access_key(access_key_id, status, user_name=None)

Changes the status of the specified access key from Active to Inactive or vice versa. This action can be used to disable a user’s key as part of a key rotation workflow.

If the user_name is not specified, the user_name is determined implicitly based on the AWS Access Key ID used to sign the request.

Parameters:
  • access_key_id (string) – The ID of the access key.
  • status (string) – Either Active or Inactive.
  • user_name (string) – The username of the user (optional).
update_group(group_name, new_group_name=None, new_path=None)

Updates name and/or path of the specified group.

Parameters:
  • group_name (string) – The name of the group to update.
  • new_group_name (string) – If provided, the name of the group will be changed to this name.
  • new_path (string) – If provided, the path of the group will be changed to this path.
update_login_profile(user_name, password)

Resets the password associated with the user’s login profile.

Parameters:
  • user_name (string) – The name of the user
  • password (string) – The new password for the user
update_server_cert(cert_name, new_cert_name=None, new_path=None)

Updates the name and/or the path of the specified server certificate.

Parameters:
  • cert_name (string) – The name of the server certificate that you want to update.
  • new_cert_name (string) – The new name for the server certificate. Include this only if you are updating the server certificate’s name.
  • new_path (string) – If provided, the path of the certificate will be changed to this path.
update_signing_cert(cert_id, status, user_name=None)

Change the status of the specified signing certificate from Active to Inactive or vice versa.

If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request.

Parameters:
  • cert_id (string) – The ID of the signing certificate
  • status (string) – Either Active or Inactive.
  • user_name (string) – The username of the user
update_user(user_name, new_user_name=None, new_path=None)

Updates name and/or path of the specified user.

Parameters:
  • user_name (string) – The name of the user
  • new_user_name (string) – If provided, the username of the user will be changed to this username.
  • new_path (string) – If provided, the path of the user will be changed to this path.
upload_server_cert(cert_name, cert_body, private_key, cert_chain=None, path=None)

Uploads a server certificate entity for the AWS Account. The server certificate entity includes a public key certificate, a private key, and an optional certificate chain, which should all be PEM-encoded.

Parameters:
  • cert_name (string) – The name for the server certificate. Do not include the path in this value.
  • cert_body (string) – The contents of the public key certificate in PEM-encoded format.
  • private_key (string) – The contents of the private key in PEM-encoded format.
  • cert_chain (string) – The contents of the certificate chain. This is typically a concatenation of the PEM-encoded public key certificates of the chain.
  • path (string) – The path for the server certificate.
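
For example, a certificate and key read from local PEM files might be uploaded like this (the file and certificate names are hypothetical):

import boto

iam = boto.connect_iam()

cert_body = open('mycert.pem').read()
private_key = open('mycert.key').read()
cert_chain = open('mychain.pem').read()

iam.upload_server_cert('mycert', cert_body, private_key,
                       cert_chain=cert_chain, path='/prod/')
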
upload_signing_cert(cert_body, user_name=None)

Uploads an X.509 signing certificate and associates it with the specified user.

If the user_name is not specified, it is determined implicitly based on the AWS Access Key ID used to sign the request.

Parameters:
  • cert_body (string) – The body of the signing certificate.
  • user_name (string) – The username of the user
boto.iam.response

manage

boto.manage
boto.manage.cmdshell
boto.manage.propget
boto.manage.propget.get(prop, choices=None)
boto.manage.server

High-level abstraction of an EC2 server

class boto.manage.server.Bundler(server, uname='root')
bundle(bucket=None, prefix=None, key_file=None, cert_file=None, size=None, ssh_key=None, fp=None, clear_history=True)
bundle_image(prefix, size, ssh_key)
copy_x509(key_file, cert_file)
upload_bundle(bucket, prefix, ssh_key)
class boto.manage.server.CommandLineGetter
get(cls, params)
get_ami_id(params)
get_ami_list()
get_description(params)
get_group(params)
get_instance_type(params)
get_key(params)
get_name(params)
get_quantity(params)
get_region(params)
get_zone(params)
class boto.manage.server.Server(id=None, **kw)
classmethod add_credentials(cfg, aws_access_key_id, aws_secret_access_key)
ami_id = None
console_output = None
classmethod create(config_file=None, logical_volume=None, cfg=None, **params)

Create a new instance based on the specified configuration file or the specified configuration and the passed in parameters.

If the config_file argument is not None, the configuration is read from there. Otherwise, the cfg argument is used.

The config file may include other config files with a #import reference. The included config files must reside in the same directory as the specified file.

The logical_volume argument, if supplied, will be used to get the current physical volume ID and use that as an override of the value specified in the config file. This may be useful for debugging purposes when you want to debug with a production config file but a test Volume.

Any additional keyword parameters may be used to override EC2 configuration values in the config file.
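
A minimal sketch of how this might be invoked, assuming a boto.manage configuration file named webserver.cfg in the current directory (the file name and the keyword override are hypothetical):

from boto.manage.server import Server

# Launch from the config file; keyword parameters override EC2 values
# from the config, as described above. The return value is assumed to be
# the newly created Server object(s).
servers = Server.create(config_file='webserver.cfg', instance_type='m1.small')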

classmethod create_from_current_instances()
classmethod create_from_instance_id(instance_id, name, description='')
delete()
description = None
elastic_ip = None
get_bundler(uname='root')
get_cmdshell()
get_ssh_client(uname='root', ssh_pwd=None)
get_ssh_key_file()
groups = None
hostname = None
install(pkg)
instance_id = None
instance_type = None
key_name = None
launch_time = None
name = None
packages = []
plugins = []
private_hostname = None
production = None
put()
reboot()
region_name = None
reset_cmdshell()
run(command)
security_group = None
status = None
stop()
terminate()
wait()
zone = None
boto.manage.task
class boto.manage.task.Task(id=None, **kw)

A scheduled, repeating task that can be executed by any participating servers. The scheduling is similar to cron jobs. Each task has an hour attribute. The allowable values for hour are [0-23|*].

To keep the operation reasonably efficient and not cause excessive polling, the minimum granularity of a Task is hourly. Some examples:

hour='*' - the task would be executed each hour
hour='3' - the task would be executed at 3AM GMT each day.
check()

Determine how long until the next scheduled time for a Task. Returns the number of seconds until the next scheduled time or zero if the task needs to be run immediately. If it’s an hourly task and it’s never been run, run it now. If it’s a daily task and it’s never been run and the hour is right, run it now.

command = None
hour = None
last_executed = None
last_output = None
last_status = None
message_id = None
name = None
run(msg, vtimeout=60)
start(queue_name)
classmethod start_all(queue_name)
class boto.manage.task.TaskPoller(queue_name)
poll(wait=60, vtimeout=60)
boto.manage.task.check_hour(val)
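
A rough sketch of scheduling and polling a Task, assuming Task is an SDB-backed object that can be persisted with put() (like Server above) and that 'task_queue' is a hypothetical SQS queue name:

from boto.manage.task import Task, TaskPoller

# Define an hourly task.
task = Task()
task.name = 'roll-logs'
task.hour = '*'                      # run every hour; '3' would mean 3AM GMT daily
task.command = '/usr/local/bin/roll_logs.sh'
task.put()                           # persist the task (assumed Model behaviour)
task.start('task_queue')             # enqueue it on the named SQS queue

# On a worker machine, poll the queue and run tasks as they come due.
poller = TaskPoller('task_queue')
poller.poll(wait=60, vtimeout=60)
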
boto.manage.volume

mashups

boto.mashups
boto.mashups.interactive
boto.mashups.interactive.interactive_shell(chan)
boto.mashups.interactive.posix_shell(chan)
boto.mashups.interactive.windows_shell(chan)
boto.mashups.iobject
class boto.mashups.iobject.IObject
choose_from_list(item_list, search_str='', prompt='Enter Selection')
get_filename(prompt)
get_int(prompt)
get_string(prompt, validation_fn=None)
boto.mashups.iobject.int_val_fn(v)
boto.mashups.order
boto.mashups.server

High-level abstraction of an EC2 server

class boto.mashups.server.Server(id=None, **kw)
classmethod Inventory()

Returns a list of Server instances, one for each Server object persisted in the db

classmethod Register(name, instance_id, description='')
ami

The AMI for the server

ami_id = None
attach_volume(volume, device='/dev/sdp')

Attach an EBS volume to this server

Parameters:
  • volume (boto.ec2.volume.Volume) – EBS Volume to attach
  • device (str) – The device path the volume will be attached to (defaults to /dev/sdp)
bundle_image(prefix, key_file, cert_file, size)
config

The instance data for this server

config_uri = None
console_output

Retrieve the console output for server

create_image(bucket=None, prefix=None, key_file=None, cert_file=None, size=None)
description = None
detach_volume(volume)

Detach an EBS volume from this server

Parameters:volume (boto.ec2.volume.Volume) – EBS Volume to detach
ec2
elastic_ip = None
getAMI()
getConfig()
getConsoleOutput()
getGroups()
getHostname()
getInstance()
getLaunchTime()
getPrivateHostname()
getStatus()
get_file(remotepath, localpath)
get_ssh_client(key_file=None, host_key_file='~/.ssh/known_hosts', uname='root')
groups

The Security Groups controlling access to this server

hostname

The public DNS name of the server

install_package(package_name)
instance

The Instance for the server

instance_id = None
instance_type = None
key_name = None
launch_time

The time the Server was started

listdir(remotepath)
load_config()
log = None
name = None
private_hostname

The private DNS name of the server

put_file(localpath, remotepath)
reboot()
security_group = None
setConfig(config)
setReadOnly(value)
set_config(config)

Set SDB based config

shell(key_file=None)
start()
status

The status of the server

stop()
upload_bundle(bucket, prefix)
zone = None
class boto.mashups.server.ServerSet
map(*args)

mturk

boto.mturk
boto.mturk.connection
class boto.mturk.connection.Assignment(connection)

Class to extract an Assignment structure from a response (used in ResultSet)

Will have attributes named as per the Developer Guide, e.g. AssignmentId, WorkerId, HITId, Answer, etc

endElement(name, value, connection)
class boto.mturk.connection.BaseAutoResultElement(connection)

Base class to automatically add attributes when parsing XML

endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.mturk.connection.HIT(connection)

Class to extract a HIT structure from a response (used in ResultSet)

Will have attributes named as per the Developer Guide, e.g. HITId, HITTypeId, CreationTime

expired

Has this HIT expired yet?

class boto.mturk.connection.MTurkConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=False, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host=None, debug=0, https_connection_factory=None)
APIVersion = '2008-08-02'
approve_assignment(assignment_id, feedback=None)
assign_qualification(qualification_type_id, worker_id, value=1, send_notification=True)
block_worker(worker_id, reason)

Block a worker from working on my tasks.

change_hit_type_of_hit(hit_id, hit_type)

Change the HIT type of an existing HIT. Note that the reward associated with the new HIT type must match the reward of the current HIT type in order for the operation to be valid.

create_hit(hit_type=None, question=None, lifetime=datetime.timedelta(7), max_assignments=1, title=None, description=None, keywords=None, reward=None, duration=datetime.timedelta(7), approval_delay=None, annotation=None, questions=None, qualifications=None, response_groups=None)

Creates a new HIT. Returns a ResultSet. See: http://docs.amazonwebservices.com/AWSMechanicalTurkRequester/2006-10-31/ApiReference_CreateHITOperation.html
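
A hedged sketch of creating a simple free-text HIT with the question classes from boto.mturk.question (the sandbox host, title, reward and other values are illustrative):

from boto.mturk.connection import MTurkConnection
from boto.mturk.question import (Question, QuestionContent,
                                 AnswerSpecification, FreeTextAnswer)

conn = MTurkConnection('<aws access key>', '<aws secret key>',
                       host='mechanicalturk.sandbox.amazonaws.com')

# Build a single free-text question.
content = QuestionContent()
content.append_field('Title', 'Describe this product in one sentence')
question = Question(identifier='description',
                    content=content,
                    answer_spec=AnswerSpecification(FreeTextAnswer()),
                    is_required=True)

# Create the HIT; the float reward is converted via get_price_as_price.
result = conn.create_hit(questions=[question],
                         max_assignments=3,
                         title='Describe a product',
                         description='A short writing task',
                         keywords='writing, description',
                         reward=0.05,
                         duration=60 * 10)
hit = result[0]   # ResultSet containing the new HIT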

create_qualification_type(name, description, status, keywords=None, retry_delay=None, test=None, answer_key=None, answer_key_xml=None, test_duration=None, auto_granted=False, auto_granted_value=1)

Create a new Qualification Type.

Parameters:
  • name – This will be visible to workers and must be unique for a given requester.
  • description – The description shown to workers. Max 2000 characters.
  • status – 'Active' or 'Inactive'.
  • keywords – A list of keyword strings or a comma separated string. Max length of 1000 characters when concatenated with commas.
  • retry_delay – The number of seconds after requesting a qualification that the worker must wait before they can ask again. If not specified, workers can only request this qualification once.
  • test – A QuestionForm.
  • answer_key – An XML string of your answer key, for automatically scored qualification tests. (Consider implementing an AnswerKey class for this to support.)
  • test_duration – The number of seconds a worker has to complete the test.
  • auto_granted – If True, requests for the Qualification are granted immediately. Can't coexist with a test.
  • auto_granted_value – auto_granted qualifications are given this value.

disable_hit(hit_id, response_groups=None)

Removes a HIT from the Mechanical Turk marketplace, approves all submitted assignments that have not already been approved or rejected, and disposes of the HIT and all assignment data.

Assignments for the HIT that have already been submitted, but not yet approved or rejected, will be automatically approved. Assignments in progress at the time of the call to DisableHIT will be approved once the assignments are submitted. You will be charged for approval of these assignments. DisableHIT completely disposes of the HIT and all submitted assignment data. Assignment results data cannot be retrieved for a HIT that has been disposed.

It is not possible to re-enable a HIT once it has been disabled. To make the work from a disabled HIT available again, create a new HIT.

dispose_hit(hit_id)

Dispose of a HIT that is no longer needed.

Only HITs in the “reviewable” state, with all submitted assignments approved or rejected, can be disposed. A Requester can call GetReviewableHITs to determine which HITs are reviewable, then call GetAssignmentsForHIT to retrieve the assignments. Disposing of a HIT removes the HIT from the results of a call to GetReviewableHITs.

dispose_qualification_type(qualification_type_id)

TODO: Document.

static duration_as_seconds(duration)
expire_hit(hit_id)

Expire a HIT that is no longer needed.

The effect is identical to the HIT expiring on its own. The HIT no longer appears on the Mechanical Turk web site, and no new Workers are allowed to accept the HIT. Workers who have accepted the HIT prior to expiration are allowed to complete it or return it, or allow the assignment duration to elapse (abandon the HIT). Once all remaining assignments have been submitted, the expired HIT becomes “reviewable” and will be returned by a call to GetReviewableHITs.

extend_hit(hit_id, assignments_increment=None, expiration_increment=None)

Increase the maximum number of assignments, or extend the expiration date, of an existing HIT.

NOTE: If a HIT has a status of Reviewable and the HIT is extended to make it Available, the HIT will not be returned by GetReviewableHITs, and its submitted assignments will not be returned by GetAssignmentsForHIT, until the HIT is Reviewable again. Assignment auto-approval will still happen on its original schedule, even if the HIT has been extended. Be sure to retrieve and approve (or reject) submitted assignments before extending the HIT, if so desired.

get_account_balance()
get_all_hits()

Return all of a Requester’s HITs

Despite what its documentation says, search_hits does not return all HITs; it returns a single page of HITs. This method instead pulls the HITs from the server 100 at a time and yields the results iteratively, so subsequent requests are made on demand.
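
For example, collecting the id of every HIT in the account (conn is created here with the boto.connect_mturk shortcut):

import boto

conn = boto.connect_mturk()

# get_all_hits() fetches 100 HITs per request but yields them lazily.
all_hit_ids = [hit.HITId for hit in conn.get_all_hits()]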

get_assignments(hit_id, status=None, sort_by='SubmitTime', sort_direction='Ascending', page_size=10, page_number=1, response_groups=None)

Retrieves completed assignments for a HIT. Use this operation to retrieve the results for a HIT.

The returned ResultSet will have the following attributes:

NumResults
The number of assignments on the page in the filtered results list, equivalent to the number of assignments being returned by this call. A non-negative integer
PageNumber
The number of the page in the filtered results list being returned. A positive integer
TotalNumResults
The total number of HITs in the filtered results list based on this call. A non-negative integer

The ResultSet will contain zero or more Assignment objects
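
A hedged sketch of reviewing the work submitted for a HIT and approving each assignment (the HIT id is hypothetical; 'Submitted' is the MTurk AssignmentStatus value for work awaiting review):

import boto

conn = boto.connect_mturk()

hit_id = '123EXAMPLEHITID'   # hypothetical id from an earlier create_hit call
assignments = conn.get_assignments(hit_id, status='Submitted', page_size=100)

total = assignments.TotalNumResults
for assignment in assignments:
    conn.approve_assignment(assignment.AssignmentId, feedback='Thanks!')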

get_help(about, help_type='Operation')

Return information about the Mechanical Turk Service operations and response groups. NOTE: this is basically useless, as it just returns the URL of the documentation.

help_type: either ‘Operation’ or ‘ResponseGroup’

get_hit(hit_id, response_groups=None)
static get_keywords_as_string(keywords)

Returns a comma+space-separated string of keywords from either a list or a string

static get_price_as_price(reward)

Returns a Price data structure from either a float or a Price

get_qualification_requests(qualification_type_id, sort_by='Expiration', sort_direction='Ascending', page_size=10, page_number=1)

TODO: Document.

get_qualification_score(qualification_type_id, worker_id)

TODO: Document.

get_qualification_type(qualification_type_id)
get_qualifications_for_qualification_type(qualification_type_id)
get_reviewable_hits(hit_type=None, status='Reviewable', sort_by='Expiration', sort_direction='Ascending', page_size=10, page_number=1)

Retrieve the HITs that have a status of Reviewable, or HITs that have a status of Reviewing, and that belong to the Requester calling the operation.

grant_bonus(worker_id, assignment_id, bonus_price, reason)

Issues a payment of money from your account to a Worker. To be eligible for a bonus, the Worker must have submitted results for one of your HITs, and have had those results approved or rejected. This payment happens separately from the reward you pay to the Worker when you approve the Worker’s assignment. The Bonus must be passed in as an instance of the Price object.
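
For example (the worker and assignment ids are hypothetical; Price is described under boto.mturk.price below):

import boto
from boto.mturk.price import Price

conn = boto.connect_mturk()

# Pay a $0.25 bonus on top of the normal reward for one assignment.
conn.grant_bonus('A1EXAMPLEWORKERID', '2EXAMPLEASSIGNMENTID',
                 Price(0.25), 'Bonus for an especially detailed answer')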

grant_qualification(qualification_request_id, integer_value=1)

TODO: Document.

notify_workers(worker_ids, subject, message_text)

Send a text message to workers.

register_hit_type(title, description, reward, duration, keywords=None, approval_delay=None, qual_req=None)

Register a new HIT type. title and description are strings, reward is a Price object, and duration can be a timedelta or an object castable to an int.

reject_assignment(assignment_id, feedback=None)
revoke_qualification(subject_id, qualification_type_id, reason=None)

TODO: Document.

search_hits(sort_by='CreationTime', sort_direction='Ascending', page_size=10, page_number=1, response_groups=None)

Return a page of a Requester’s HITs, on behalf of the Requester. The operation returns HITs of any status, except for HITs that have been disposed with the DisposeHIT operation. Note: The SearchHITs operation does not accept any search parameters that filter the results.

search_qualification_types(query=None, sort_by='Name', sort_direction='Ascending', page_size=10, page_number=1, must_be_requestable=True, must_be_owned_by_caller=True)

TODO: Document.

set_email_notification(hit_type, email, event_types=None)

Performs a SetHITTypeNotification operation to set email notification for a specified HIT type

set_rest_notification(hit_type, url, event_types=None)

Performs a SetHITTypeNotification operation to set REST notification for a specified HIT type

set_reviewing(hit_id, revert=None)

Update a HIT with a status of Reviewable to have a status of Reviewing, or reverts a Reviewing HIT back to the Reviewable status.

Only HITs with a status of Reviewable can be updated with a status of Reviewing. Similarly, only Reviewing HITs can be reverted back to a status of Reviewable.

unblock_worker(worker_id, reason)

Unblock a worker from working on my tasks.

update_qualification_score(qualification_type_id, worker_id, value)

TODO: Document.

update_qualification_type(qualification_type_id, description=None, status=None, retry_delay=None, test=None, answer_key=None, test_duration=None, auto_granted=None, auto_granted_value=None)
exception boto.mturk.connection.MTurkRequestError(status, reason, body=None)

Error for MTurk Requests

class boto.mturk.connection.Qualification(connection)

Class to extract a Qualification structure from a response (used in ResultSet)

Will have attributes named as per the Developer Guide such as QualificationTypeId, IntegerValue. Does not seem to contain GrantTime.

class boto.mturk.connection.QualificationRequest(connection)

Class to extract a QualificationRequest structure from a response (used in ResultSet)

Will have attributes named as per the Developer Guide, e.g. QualificationRequestId, QualificationTypeId, SubjectId, etc

TODO: Ensure that the Test and Answer attributes are treated properly if the qualification requires a test. These attributes are XML-encoded.
class boto.mturk.connection.QualificationType(connection)

Class to extract a QualificationType structure from a response (used in ResultSet)

Will have attributes named as per the Developer Guide, e.g. QualificationTypeId, CreationTime, Name, etc

class boto.mturk.connection.QuestionFormAnswer(connection)

Class to extract Answers from inside the embedded XML QuestionFormAnswers element inside the Answer element which is part of the Assignment structure

A QuestionFormAnswers element contains an Answer element for each question in the HIT or Qualification test for which the Worker provided an answer. Each Answer contains a QuestionIdentifier element whose value corresponds to the QuestionIdentifier of a Question in the QuestionForm. See the QuestionForm data structure for more information about questions and answer specifications.

If the question expects a free-text answer, the Answer element contains a FreeText element. This element contains the Worker’s answer

NOTE - currently really only supports free-text and selection answers

endElement(name, value, connection)
boto.mturk.notification

Provides NotificationMessage and Event classes, with utility methods, for implementations of the Mechanical Turk Notification API.

class boto.mturk.notification.Event(d)
class boto.mturk.notification.NotificationMessage(d)

Constructor; expects parameter d to be a dict of string parameters from a REST transport notification message

EVENT_PATTERN = 'Event\\.(?P<n>\\d+)\\.(?P<param>\\w+)'
EVENT_RE = <_sre.SRE_Pattern object>
NOTIFICATION_VERSION = '2006-05-05'
NOTIFICATION_WSDL = 'http://mechanicalturk.amazonaws.com/AWSMechanicalTurk/2006-05-05/AWSMechanicalTurkRequesterNotification.wsdl'
OPERATION_NAME = 'Notify'
SERVICE_NAME = 'AWSMechanicalTurkRequesterNotification'
verify(secret_key)

Verifies the authenticity of a notification message.

TODO: This is doing a form of authentication and
this functionality should really be merged with the pluggable authentication mechanism at some point.
boto.mturk.price
class boto.mturk.price.Price(amount=0.0, currency_code='USD')
endElement(name, value, connection)
get_as_params(label, ord=1)
startElement(name, attrs, connection)
boto.mturk.qualification
class boto.mturk.qualification.AdultRequirement(comparator, integer_value, required_to_preview=False)

Requires workers to acknowledge that they are over 18 and that they agree to work on potentially offensive content. The value type is boolean, 1 (required), 0 (not required, the default).

class boto.mturk.qualification.LocaleRequirement(comparator, locale, required_to_preview=False)

A Qualification requirement based on the Worker’s location. The Worker’s location is specified by the Worker to Mechanical Turk when the Worker creates his account.

get_as_params()
class boto.mturk.qualification.NumberHitsApprovedRequirement(comparator, integer_value, required_to_preview=False)

Specifies the total number of HITs submitted by a Worker that have been approved. The value is an integer greater than or equal to 0.

class boto.mturk.qualification.PercentAssignmentsAbandonedRequirement(comparator, integer_value, required_to_preview=False)

The percentage of assignments the Worker has abandoned (allowed the deadline to elapse), over all assignments the Worker has accepted. The value is an integer between 0 and 100.

class boto.mturk.qualification.PercentAssignmentsApprovedRequirement(comparator, integer_value, required_to_preview=False)

The percentage of assignments the Worker has submitted that were subsequently approved by the Requester, over all assignments the Worker has submitted. The value is an integer between 0 and 100.

class boto.mturk.qualification.PercentAssignmentsRejectedRequirement(comparator, integer_value, required_to_preview=False)

The percentage of assignments the Worker has submitted that were subsequently rejected by the Requester, over all assignments the Worker has submitted. The value is an integer between 0 and 100.

class boto.mturk.qualification.PercentAssignmentsReturnedRequirement(comparator, integer_value, required_to_preview=False)

The percentage of assignments the Worker has returned, over all assignments the Worker has accepted. The value is an integer between 0 and 100.

class boto.mturk.qualification.PercentAssignmentsSubmittedRequirement(comparator, integer_value, required_to_preview=False)

The percentage of assignments the Worker has submitted, over all assignments the Worker has accepted. The value is an integer between 0 and 100.

class boto.mturk.qualification.Qualifications(requirements=None)
add(req)
get_as_params()
class boto.mturk.qualification.Requirement(qualification_type_id, comparator, integer_value=None, required_to_preview=False)

Representation of a single requirement

get_as_params()
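
A hedged sketch of combining requirements and attaching them to a new HIT type (the comparator strings follow the MTurk API, e.g. 'EqualTo' and 'GreaterThan'; the title and reward are illustrative):

import boto
from boto.mturk.price import Price
from boto.mturk.qualification import (Qualifications, LocaleRequirement,
                                      PercentAssignmentsApprovedRequirement)

conn = boto.connect_mturk()

quals = Qualifications()
quals.add(LocaleRequirement('EqualTo', 'US'))
quals.add(PercentAssignmentsApprovedRequirement('GreaterThan', 95))

# Attach the requirements when registering a HIT type (see register_hit_type above).
conn.register_hit_type('Describe a product', 'A short writing task',
                       Price(0.05), 60 * 10,
                       keywords='writing, description',
                       qual_req=quals)
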
boto.mturk.question
class boto.mturk.question.AnswerSpecification(spec)
get_as_xml()
template = '<AnswerSpecification>%(spec)s</AnswerSpecification>'
class boto.mturk.question.Application(width, height, **parameters)
get_as_xml()
get_inner_content(content)
parameter_template = '<Name>%(name)s</Name><Value>%(value)s</Value>'
template = '<Application><%(class_)s>%(content)s</%(class_)s></Application>'
class boto.mturk.question.Binary(type, subtype, url, alt_text)
template = '<Binary><MimeType><Type>%(type)s</Type><SubType>%(subtype)s</SubType></MimeType><DataURL>%(url)s</DataURL><AltText>%(alt_text)s</AltText></Binary>'
class boto.mturk.question.Constraint
get_as_xml()
get_attributes()
class boto.mturk.question.Constraints
get_as_xml()
template = '<Constraints>%(content)s</Constraints>'
class boto.mturk.question.ExternalQuestion(external_url, frame_height)

An object for constructing an External Question.

get_as_params(label='ExternalQuestion')
get_as_xml()
schema_url = 'http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd'
template = '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd"><ExternalURL>%(external_url)s</ExternalURL><FrameHeight>%(frame_height)s</FrameHeight></ExternalQuestion>'
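
For instance, an ExternalQuestion can be passed directly as the question argument of create_hit (the URL and HIT details are illustrative):

import boto
from boto.mturk.question import ExternalQuestion

conn = boto.connect_mturk()

# The task itself is served from your own web server inside an iframe.
eq = ExternalQuestion(external_url='https://example.com/mturk/task?item=42',
                      frame_height=600)
conn.create_hit(question=eq, max_assignments=1,
                title='Categorize an item',
                description='Pick the best category for the item shown',
                keywords='categorization',
                reward=0.02, duration=60 * 5)
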
class boto.mturk.question.FileUploadAnswer(min_bytes, max_bytes)
get_as_xml()
template = '<FileUploadAnswer><MinFileSizeInBytes>%(min_bytes)d</MinFileSizeInBytes><MaxFileSizeInBytes>%(max_bytes)d</MaxFileSizeInBytes></FileUploadAnswer>'
class boto.mturk.question.Flash(url, *args, **kwargs)
get_inner_content(content)
class boto.mturk.question.FormattedContent(content)
schema_url = 'http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/FormattedContentXHTMLSubset.xsd'
template = '<FormattedContent><![CDATA[%(content)s]]></FormattedContent>'
class boto.mturk.question.FreeTextAnswer(default=None, constraints=None, num_lines=None)
get_as_xml()
template = '<FreeTextAnswer>%(items)s</FreeTextAnswer>'
class boto.mturk.question.JavaApplet(path, filename, *args, **kwargs)
get_inner_content(content)
class boto.mturk.question.LengthConstraint(min_length=None, max_length=None)
attribute_names = ('minLength', 'maxLength')
template = '<Length %(attrs)s />'
class boto.mturk.question.List

A bulleted list suitable for OrderedContent or Overview content

get_as_xml()
class boto.mturk.question.NumberOfLinesSuggestion(num_lines=1)
get_as_xml()
template = '<NumberOfLinesSuggestion>%(num_lines)s</NumberOfLinesSuggestion>'
class boto.mturk.question.NumericConstraint(min_value=None, max_value=None)
attribute_names = ('minValue', 'maxValue')
template = '<IsNumeric %(attrs)s />'
class boto.mturk.question.OrderedContent
append_field(field, value)
get_as_xml()
class boto.mturk.question.Overview
get_as_params(label='Overview')
get_as_xml()
template = '<Overview>%(content)s</Overview>'
class boto.mturk.question.Question(identifier, content, answer_spec, is_required=False, display_name=None)
get_as_params(label='Question')
get_as_xml()
template = '<Question>%(items)s</Question>'
class boto.mturk.question.QuestionContent
get_as_xml()
template = '<QuestionContent>%(content)s</QuestionContent>'
class boto.mturk.question.QuestionForm

From the AMT API docs:

The top-most element of the QuestionForm data structure is a QuestionForm element. This element contains optional Overview elements and one or more Question elements. There can be any number of these two element types listed in any order. The following example structure has an Overview element and a Question element followed by a second Overview element and Question element–all within the same QuestionForm.

<QuestionForm xmlns="[the QuestionForm schema URL]">
    <Overview>
        [...]
    </Overview>
    <Question>
        [...]
    </Question>
    <Overview>
        [...]
    </Overview>
    <Question>
        [...]
    </Question>
    [...]
</QuestionForm>

QuestionForm is implemented as a list, so to construct a QuestionForm, simply append Questions and Overviews (with at least one Question).

get_as_xml()
is_valid()
schema_url = 'http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd'
xml_template = '<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">%(items)s</QuestionForm>'
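
A short sketch of the list behaviour described above, assuming Overview, like QuestionContent, supports the OrderedContent append helpers:

from boto.mturk.question import (QuestionForm, Overview, Question,
                                 QuestionContent, AnswerSpecification,
                                 FreeTextAnswer)

overview = Overview()
overview.append_field('Title', 'Instructions')
overview.append_field('Text', 'Answer the question below in one sentence.')

content = QuestionContent()
content.append_field('Title', 'What is your favourite colour?')
question = Question(identifier='colour', content=content,
                    answer_spec=AnswerSpecification(FreeTextAnswer()))

form = QuestionForm()
form.append(overview)    # QuestionForm is a list, so append works directly
form.append(question)
xml = form.get_as_xml()  # the form can also be passed to create_hit
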
class boto.mturk.question.RegExConstraint(pattern, error_text=None, flags=None)
attribute_names = ('regex', 'errorText', 'flags')
template = '<AnswerFormatRegex %(attrs)s />'
class boto.mturk.question.SelectionAnswer(min=1, max=1, style=None, selections=None, type='text', other=False)

A class to generate SelectionAnswer XML data structures. Does not yet implement Binary selection options.

ACCEPTED_STYLES = ['radiobutton', 'dropdown', 'checkbox', 'list', 'combobox', 'multichooser']
MAX_SELECTION_COUNT_XML_TEMPLATE = '<MaxSelectionCount>%s</MaxSelectionCount>'
MIN_SELECTION_COUNT_XML_TEMPLATE = '<MinSelectionCount>%s</MinSelectionCount>'
OTHER_SELECTION_ELEMENT_NAME = 'OtherSelection'
SELECTIONANSWER_XML_TEMPLATE = '<SelectionAnswer>%s%s<Selections>%s</Selections></SelectionAnswer>'
SELECTION_VALUE_XML_TEMPLATE = '<%s>%s</%s>'
SELECTION_XML_TEMPLATE = '<Selection><SelectionIdentifier>%s</SelectionIdentifier>%s</Selection>'
STYLE_XML_TEMPLATE = '<StyleSuggestion>%s</StyleSuggestion>'
get_as_xml()
class boto.mturk.question.SimpleField(field, value)

A Simple name/value pair that can be easily rendered as XML.

>>> SimpleField('Text', 'A text string').get_as_xml()
'<Text>A text string</Text>'
template = '<%(field)s>%(value)s</%(field)s>'
class boto.mturk.question.ValidatingXML
validate()
class boto.mturk.question.XMLTemplate
get_as_xml()

pyami

boto.pyami
boto.pyami.bootstrap
class boto.pyami.bootstrap.Bootstrap

The Bootstrap class is instantiated and run as part of the PyAMI instance initialization process. The methods in this class will be run from the rc.local script of the instance, as the root user.

The main purpose of this class is to make sure the boto distribution on the instance is the one required.

create_working_dir()
fetch_s3_file(s3_file)
load_boto()
load_packages()
main()
write_metadata()
boto.pyami.config
class boto.pyami.config.Config(path=None, fp=None, do_load=True)
dump()
dump_safe(fp=None)
dump_to_sdb(domain_name, item_name)
get(section, name, default=None)
get_instance(name, default=None)
get_user(name, default=None)
get_value(section, name, default=None)
getbool(section, name, default=False)
getfloat(section, name, default=0.0)
getint(section, name, default=0)
getint_user(name, default=0)
load_credential_file(path)

Load a credential file that is set up like the one used by the Java utilities

load_from_path(path)
load_from_sdb(domain_name, item_name)
save_option(path, section, option, value)

Write the specified Section.Option to the config file specified by path. Replace any previous value. If the path doesn't exist, create it. Also add the option to the in-memory config.

save_system_option(section, option, value)
save_user_option(section, option, value)
setbool(section, name, value)
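
A small sketch of reading values from the boto config (the section and option names shown are the standard Boto ones):

from boto.pyami.config import Config

# With no arguments the standard boto config locations are loaded
# (e.g. /etc/boto.cfg and ~/.boto).
cfg = Config()

debug_level = cfg.getint('Boto', 'debug', default=0)
use_ssl = cfg.getbool('Boto', 'is_secure', default=True)
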
boto.pyami.copybot
class boto.pyami.copybot.CopyBot
copy_bucket_acl()
copy_key_acl(src, dst)
copy_keys()
copy_log()
main()
boto.pyami.installers
boto.pyami.installers.ubuntu
boto.pyami.installers.ubuntu.apache
boto.pyami.installers.ubuntu.ebs
boto.pyami.installers.ubuntu.installer
boto.pyami.installers.ubuntu.mysql
boto.pyami.installers.ubuntu.trac
boto.pyami.launch_ami
boto.pyami.launch_ami.main()
boto.pyami.launch_ami.usage()
boto.pyami.scriptbase
class boto.pyami.scriptbase.ScriptBase(config_file=None)
main()
mkdir(path)
notify(subject, body='')
run(command, notify=True, exit_on_error=False, cwd=None)
umount(path)
boto.pyami.startup
class boto.pyami.startup.Startup(config_file=None)
main()
run_scripts()

RDS

boto.rds
class boto.rds.RDSConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/')
APIVersion = '2009-10-16'
DefaultRegionEndpoint = 'rds.amazonaws.com'
DefaultRegionName = 'us-east-1'
authorize_dbsecurity_group(group_name, cidr_ip=None, ec2_security_group_name=None, ec2_security_group_owner_id=None)

Add a new rule to an existing security group. You need to pass in either ec2_security_group_name and ec2_security_group_owner_id OR a CIDR block, but not both.

Parameters:
  • group_name (string) – The name of the security group you are adding the rule to.
  • ec2_security_group_name (string) – The name of the EC2 security group you are granting access to.
  • ec2_security_group_owner_id (string) – The ID of the owner of the EC2 security group you are granting access to.
  • cidr_ip (string) – The CIDR block you are providing access to. See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
Return type:

bool

Returns:

True if successful.

create_dbinstance(id, allocated_storage, instance_class, master_username, master_password, port=3306, engine='MySQL5.1', db_name=None, param_group=None, security_groups=None, availability_zone=None, preferred_maintenance_window=None, backup_retention_period=None, preferred_backup_window=None, multi_az=False, engine_version=None, auto_minor_version_upgrade=True)

Create a new DBInstance.

Parameters:
  • id (str) – Unique identifier for the new instance. Must contain 1-63 alphanumeric characters. First character must be a letter. May not end with a hyphen or contain two consecutive hyphens
  • allocated_storage (int) – Initially allocated storage size, in GBs. Valid values are [5-1024]
  • instance_class (str) –

    The compute and memory capacity of the DBInstance.

    Valid values are:

    • db.m1.small
    • db.m1.large
    • db.m1.xlarge
    • db.m2.xlarge
    • db.m2.2xlarge
    • db.m2.4xlarge
  • engine (str) – Name of database engine. Must be MySQL5.1 for now.
  • master_username (str) – Name of master user for the DBInstance. Must be 1-15 alphanumeric characters, first must be a letter.
  • master_password (str) – Password of master user for the DBInstance. Must be 4-16 alphanumeric characters.
  • port (int) – Port number on which database accepts connections. Valid values [1115-65535]. Defaults to 3306.
  • db_name (str) – Name of a database to create when the DBInstance is created. Default is to create no databases.
  • param_group (str) – Name of DBParameterGroup to associate with this DBInstance. If no groups are specified no parameter groups will be used.
  • security_groups (list of str or list of DBSecurityGroup objects) – List of names of DBSecurityGroup to authorize on this DBInstance.
  • availability_zone (str) – Name of the availability zone to place DBInstance into.
  • preferred_maintenance_window (str) – The weekly time range (in UTC) during which maintenance can occur. Default is Sun:05:00-Sun:09:00
  • backup_retention_period (int) – The number of days for which automated backups are retained. Setting this to zero disables automated backups.
  • preferred_backup_window (str) – The daily time range during which automated backups are created (if enabled). Must be in h24:mi-hh24:mi format (UTC).
  • multi_az (bool) – If True, specifies the DB Instance will be deployed in multiple availability zones.
  • engine_version (str) – Version number of the database engine to use.
  • auto_minor_version_upgrade (bool) – Indicates that minor engine upgrades will be applied automatically to the DBInstance during the maintenance window. Default is True.
Return type:

boto.rds.dbinstance.DBInstance

Returns:

The new db instance.
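
For example (a hedged sketch; the identifier, credentials and sizes are illustrative):

import boto

conn = boto.connect_rds()

# Launch a small MySQL instance with 10GB of storage.
db = conn.create_dbinstance(id='mydbinstance',
                            allocated_storage=10,
                            instance_class='db.m1.small',
                            master_username='dbadmin',
                            master_password='myS3cretPw',
                            db_name='mydb')

# The instance takes a while to become available; refresh its status.
db.update()
current_status = db.status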

create_dbinstance_read_replica(id, source_id, instance_class=None, port=3306, availability_zone=None, auto_minor_version_upgrade=None)

Create a new DBInstance Read Replica.

Parameters:
  • id (str) – Unique identifier for the new instance. Must contain 1-63 alphanumeric characters. First character must be a letter. May not end with a hyphen or contain two consecutive hyphens
  • source_id (str) – Unique identifier for the DB Instance for which this DB Instance will act as a Read Replica.
  • instance_class (str) –

    The compute and memory capacity of the DBInstance. Default is to inherit from the source DB Instance.

    Valid values are:

    • db.m1.small
    • db.m1.large
    • db.m1.xlarge
    • db.m2.xlarge
    • db.m2.2xlarge
    • db.m2.4xlarge
  • port (int) – Port number on which database accepts connections. Default is to inherit from source DB Instance. Valid values [1115-65535]. Defaults to 3306.
  • availability_zone (str) – Name of the availability zone to place DBInstance into.
  • auto_minor_version_upgrade (bool) – Indicates that minor engine upgrades will be applied automatically to the Read Replica during the maintenance window. Default is to inherit this value from the source DB Instance.
Return type:

boto.rds.dbinstance.DBInstance

Returns:

The new db instance.

create_dbsecurity_group(name, description=None)

Create a new security group for your account. This will create the security group within the region you are currently connected to.

Parameters:
  • name (string) – The name of the new security group
  • description (string) – The description of the new security group
Return type:

boto.rds.dbsecuritygroup.DBSecurityGroup

Returns:

The newly created DBSecurityGroup
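
For example, creating a group and authorizing a CIDR block in one go (names and addresses are illustrative):

import boto

conn = boto.connect_rds()

sg = conn.create_dbsecurity_group('webdbs', 'Database access for the web tier')

# Allow connections from the web tier's address range.
conn.authorize_dbsecurity_group('webdbs', cidr_ip='10.0.1.0/24')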

create_dbsnapshot(snapshot_id, dbinstance_id)

Create a new DB snapshot.

Parameters:
  • snapshot_id (string) – The identifier for the DBSnapshot
  • dbinstance_id (string) – The source identifier for the RDS instance from which the snapshot is created.
Return type:

boto.rds.dbsnapshot.DBSnapshot

Returns:

The newly created DBSnapshot

create_parameter_group(name, engine='MySQL5.1', description='')

Create a new dbparameter group for your account.

Parameters:
  • name (string) – The name of the new dbparameter group
  • engine (str) – Name of database engine. Must be MySQL5.1 for now.
  • description (string) – The description of the new dbparameter group
Return type:

boto.rds.parametergroup.ParameterGroup

Returns:

The newly created ParameterGroup

delete_dbinstance(id, skip_final_snapshot=False, final_snapshot_id='')

Delete an existing DBInstance.

Parameters:
  • id (str) – Unique identifier for the instance to be deleted.
  • skip_final_snapshot (bool) – This parameter determines whether a final db snapshot is created before the instance is deleted. If True, no snapshot is created. If False, a snapshot is created before deleting the instance.
  • final_snapshot_id (str) – If a final snapshot is requested, this is the identifier used for that snapshot.
Return type:

boto.rds.dbinstance.DBInstance

Returns:

The deleted db instance.

delete_dbsecurity_group(name)

Delete a DBSecurityGroup from your account.

Parameters:name (string) – The name of the DBSecurityGroup to delete
delete_dbsnapshot(identifier)

Delete a DBSnapshot

Parameters:identifier (string) – The identifier of the DBSnapshot to delete
delete_parameter_group(name)

Delete a DBParameterGroup from your account.

Parameters:name (string) – The name of the DBParameterGroup to delete
get_all_dbinstances(instance_id=None, max_records=None, marker=None)

Retrieve all the DBInstances in your account.

Parameters:
  • instance_id (str) – DB Instance identifier. If supplied, only information about this instance will be returned. Otherwise, info about all DB Instances will be returned.
  • max_records (int) – The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100.
  • marker (str) – The marker provided by a previous request.
Return type:

list

Returns:

A list of boto.rds.dbinstance.DBInstance

get_all_dbparameter_groups(groupname=None, max_records=None, marker=None)

Get all parameter groups associated with your account in a region.

Parameters:
  • groupname (str) – The name of the DBParameter group to retrieve. If not provided, all DBParameter groups will be returned.
  • max_records (int) – The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100.
  • marker (str) – The marker provided by a previous request.
Return type:

list

Returns:

A list of boto.rds.parametergroup.ParameterGroup

get_all_dbparameters(groupname, source=None, max_records=None, marker=None)

Get all parameters associated with a ParameterGroup

Parameters:
  • groupname (str) – The name of the DBParameter group to retrieve.
  • source (str) – Specifies which parameters to return. If not specified, all parameters will be returned. Valid values are: user|system|engine-default
  • max_records (int) – The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100.
  • marker (str) – The marker provided by a previous request.
Return type:

boto.rds.parametergroup.ParameterGroup

Returns:

The ParameterGroup

get_all_dbsecurity_groups(groupname=None, max_records=None, marker=None)

Get all security groups associated with your account in a region.

Parameters:
  • groupname (str) – The name of the DBSecurityGroup to retrieve. If not provided, all security groups will be returned.
  • max_records (int) – The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100.
  • marker (str) – The marker provided by a previous request.
Return type:

list

Returns:

A list of boto.rds.dbsecuritygroup.DBSecurityGroup

get_all_dbsnapshots(snapshot_id=None, instance_id=None, max_records=None, marker=None)

Get information about DB Snapshots.

Parameters:
  • snapshot_id (str) – The unique identifier of an RDS snapshot. If not provided, all RDS snapshots will be returned.
  • instance_id (str) – The identifier of a DBInstance. If provided, only the DBSnapshots related to that instance will be returned. If not provided, all RDS snapshots will be returned.
  • max_records (int) – The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100.
  • marker (str) – The marker provided by a previous request.
Return type:

list

Returns:

A list of boto.rds.dbsnapshot.DBSnapshot

get_all_events(source_identifier=None, source_type=None, start_time=None, end_time=None, max_records=None, marker=None)

Get information about events related to your DBInstances, DBSecurityGroups and DBParameterGroups.

Parameters:
  • source_identifier (str) – If supplied, the events returned will be limited to those that apply to the identified source. The value of this parameter depends on the value of source_type. If neither parameter is specified, all events in the time span will be returned.
  • source_type (str) – Specifies how the source_identifier should be interpreted. Valid values are: db-instance | db-security-group | db-parameter-group | db-snapshot
  • start_time (datetime) – The beginning of the time interval for events. If not supplied, all available events will be returned.
  • end_time (datetime) – The ending of the time interval for events. If not supplied, all available events will be returned.
  • max_records (int) – The maximum number of records to be returned. If more results are available, a MoreToken will be returned in the response that can be used to retrieve additional records. Default is 100.
  • marker (str) – The marker provided by a previous request.
Return type:

list

Returns:

A list of boto.rds.event.Event

modify_dbinstance(id, param_group=None, security_groups=None, preferred_maintenance_window=None, master_password=None, allocated_storage=None, instance_class=None, backup_retention_period=None, preferred_backup_window=None, multi_az=False, apply_immediately=False)

Modify an existing DBInstance.

Parameters:
  • id (str) – Unique identifier for the instance to be modified.
  • security_groups (list of str or list of DBSecurityGroup objects) – List of names of DBSecurityGroup to authorize on this DBInstance.
  • preferred_maintenance_window (str) – The weekly time range (in UTC) during which maintenance can occur. Default is Sun:05:00-Sun:09:00
  • master_password (str) – Password of master user for the DBInstance. Must be 4-15 alphanumeric characters.
  • allocated_storage (int) – The new allocated storage size, in GBs. Valid values are [5-1024]
  • instance_class (str) –

    The compute and memory capacity of the DBInstance. Changes will be applied at next maintenance window unless apply_immediately is True.

    Valid values are:

    • db.m1.small
    • db.m1.large
    • db.m1.xlarge
    • db.m2.xlarge
    • db.m2.2xlarge
    • db.m2.4xlarge
  • apply_immediately (bool) – If true, the modifications will be applied as soon as possible rather than waiting for the next preferred maintenance window.
  • backup_retention_period (int) – The number of days for which automated backups are retained. Setting this to zero disables automated backups.
  • preferred_backup_window (str) – The daily time range during which automated backups are created (if enabled). Must be in h24:mi-hh24:mi format (UTC).
  • multi_az (bool) – If True, specifies the DB Instance will be deployed in multiple availability zones.
Return type:

boto.rds.dbinstance.DBInstance

Returns:

The modified db instance.
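
For instance, to grow an instance's storage right away instead of waiting for the next maintenance window (identifier and size are illustrative):

import boto

conn = boto.connect_rds()

conn.modify_dbinstance('mydbinstance', allocated_storage=20,
                       apply_immediately=True)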

modify_parameter_group(name, parameters=None)

Modify a parameter group for your account.

Parameters:
  • name (string) – The name of the parameter group to modify
  • parameters (list of boto.rds.parametergroup.Parameter) – The parameters to modify
Return type:

boto.rds.parametergroup.ParameterGroup

Returns:

The newly created ParameterGroup

reboot_dbinstance(id)

Reboot DBInstance.

Parameters:id (str) – Unique identifier of the instance.
Return type:boto.rds.dbinstance.DBInstance
Returns:The rebooting db instance.
reset_parameter_group(name, reset_all_params=False, parameters=None)

Resets some or all of the parameters of a ParameterGroup to the default value

Parameters:
  • name (string) – The name of the parameter group to reset
  • reset_all_params (bool) – If True, all parameters in the group will be reset to their default values
  • parameters (list of boto.rds.parametergroup.Parameter) – The specific parameters to reset
restore_dbinstance_from_dbsnapshot(identifier, instance_id, instance_class, port=None, availability_zone=None)

Create a new DBInstance from a DB snapshot.

Parameters:
  • identifier (string) – The identifier for the DBSnapshot
  • instance_id (string) – The source identifier for the RDS instance from which the snapshot is created.
  • instance_class (str) – The compute and memory capacity of the DBInstance. Valid values are: db.m1.small | db.m1.large | db.m1.xlarge | db.m2.2xlarge | db.m2.4xlarge
  • port (int) – Port number on which database accepts connections. Valid values [1115-65535]. Defaults to 3306.
  • availability_zone (str) – Name of the availability zone to place DBInstance into.
Return type:

boto.rds.dbinstance.DBInstance

Returns:

The newly created DBInstance

restore_dbinstance_from_point_in_time(source_instance_id, target_instance_id, use_latest=False, restore_time=None, dbinstance_class=None, port=None, availability_zone=None)

Create a new DBInstance from a point in time.

Parameters:
  • source_instance_id (string) – The identifier for the source DBInstance.
  • target_instance_id (string) – The identifier of the new DBInstance.
  • use_latest (bool) – If True, the latest snapshot available will be used.
  • restore_time (datetime) – The date and time to restore from. Only used if use_latest is False.
  • dbinstance_class (str) – The compute and memory capacity of the DBInstance. Valid values are: db.m1.small | db.m1.large | db.m1.xlarge | db.m2.2xlarge | db.m2.4xlarge
  • port (int) – Port number on which database accepts connections. Valid values [1115-65535]. Defaults to 3306.
  • availability_zone (str) – Name of the availability zone to place DBInstance into.
Return type:

boto.rds.dbinstance.DBInstance

Returns:

The newly created DBInstance

revoke_dbsecurity_group(group_name, ec2_security_group_name=None, ec2_security_group_owner_id=None, cidr_ip=None)

Remove an existing rule from an existing security group. You need to pass in either ec2_security_group_name and ec2_security_group_owner_id OR a CIDR block.

Parameters:
  • group_name (string) – The name of the security group you are removing the rule from.
  • ec2_security_group_name (string) – The name of the EC2 security group from which you are removing access.
  • ec2_security_group_owner_id (string) – The ID of the owner of the EC2 security group from which you are removing access.
  • cidr_ip (string) – The CIDR block from which you are removing access. See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
Return type:

bool

Returns:

True if successful.

revoke_security_group(group_name, ec2_security_group_name=None, ec2_security_group_owner_id=None, cidr_ip=None)

Remove an existing rule from an existing security group. You need to pass in either ec2_security_group_name and ec2_security_group_owner_id OR a CIDR block.

Parameters:
  • group_name (string) – The name of the security group you are removing the rule from.
  • ec2_security_group_name (string) – The name of the EC2 security group from which you are removing access.
  • ec2_security_group_owner_id (string) – The ID of the owner of the EC2 security group from which you are removing access.
  • cidr_ip (string) – The CIDR block from which you are removing access. See http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
Return type:

bool

Returns:

True if successful.

boto.rds.connect_to_region(region_name)
boto.rds.regions()

Get all available regions for the RDS service.

Return type:list
Returns:A list of boto.rds.regioninfo.RDSRegionInfo
boto.rds.dbinstance
class boto.rds.dbinstance.DBInstance(connection=None, id=None)

Represents an RDS DBInstance

endElement(name, value, connection)
modify(param_group=None, security_groups=None, preferred_maintenance_window=None, master_password=None, allocated_storage=None, instance_class=None, backup_retention_period=None, preferred_backup_window=None, multi_az=False, apply_immediately=False)

Modify this DBInstance.

Parameters:
  • security_groups (list of str or list of DBSecurityGroup objects) – List of names of DBSecurityGroup to authorize on this DBInstance.
  • preferred_maintenance_window (str) – The weekly time range (in UTC) during which maintenance can occur. Default is Sun:05:00-Sun:09:00
  • master_password (str) – Password of master user for the DBInstance. Must be 4-15 alphanumeric characters.
  • allocated_storage (int) – The new allocated storage size, in GBs. Valid values are [5-1024]
  • instance_class (str) –

    The compute and memory capacity of the DBInstance. Changes will be applied at next maintenance window unless apply_immediately is True.

    Valid values are:

    • db.m1.small
    • db.m1.large
    • db.m1.xlarge
    • db.m2.xlarge
    • db.m2.2xlarge
    • db.m2.4xlarge
  • apply_immediately (bool) – If true, the modifications will be applied as soon as possible rather than waiting for the next preferred maintenance window.
  • backup_retention_period (int) – The number of days for which automated backups are retained. Setting this to zero disables automated backups.
  • preferred_backup_window (str) – The daily time range during which automated backups are created (if enabled). Must be in h24:mi-hh24:mi format (UTC).
  • multi_az (bool) – If True, specifies the DB Instance will be deployed in multiple availability zones.
Return type:

boto.rds.dbinstance.DBInstance

Returns:

The modified db instance.

reboot()

Reboot this DBInstance

Return type:boto.rds.dbinstance.DBInstance
Returns:The rebooting db instance.
snapshot(snapshot_id)

Create a new DB snapshot of this DBInstance.

Parameters:snapshot_id (string) – The identifier for the new DBSnapshot
Return type:boto.rds.dbsnapshot.DBSnapshot
Returns:The newly created DBSnapshot
startElement(name, attrs, connection)
stop(skip_final_snapshot=False, final_snapshot_id='')

Delete this DBInstance.

Parameters:
  • skip_final_snapshot (bool) – This parameter determines whether a final db snapshot is created before the instance is deleted. If True, no snapshot is created. If False, a snapshot is created before deleting the instance.
  • final_snapshot_id (str) – If a final snapshot is requested, this is the identifier used for that snapshot.
Return type:

boto.rds.dbinstance.DBInstance

Returns:

The deleted db instance.

update(validate=False)

Update the DB instance’s status information by making a call to fetch the current instance attributes from the service.

Parameters:validate (bool) – By default, if RDS returns no data about the instance the update method returns quietly. If the validate param is True, however, it will raise a ValueError exception if no data is returned from RDS.
class boto.rds.dbinstance.PendingModifiedValues
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.rds.dbsecuritygroup

Represents a DBSecurityGroup

class boto.rds.dbsecuritygroup.DBSecurityGroup(connection=None, owner_id=None, name=None, description=None)
authorize(cidr_ip=None, ec2_group=None)

Add a new rule to this DBSecurity group. You need to pass in either a CIDR block to authorize or an EC2 SecurityGroup.

Parameters:
  • cidr_ip (string) – A valid CIDR IP range to authorize
  • ec2_group (boto.ec2.securitygroup.SecurityGroup) – The EC2 SecurityGroup to authorize
Return type:

bool

Returns:

True if successful.

delete()
endElement(name, value, connection)
revoke(cidr_ip=None, ec2_group=None)

Revoke access to a CIDR range or EC2 SecurityGroup. You need to pass in either a CIDR block or an EC2 SecurityGroup from which to revoke access.

Parameters:
  • cidr_ip (string) – A valid CIDR IP range to revoke
  • ec2_group (boto.ec2.securitygroup.SecurityGroup) – The EC2 SecurityGroup from which to revoke access
Return type:

bool

Returns:

True if successful.

startElement(name, attrs, connection)
class boto.rds.dbsecuritygroup.EC2SecurityGroup(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.rds.dbsecuritygroup.IPRange(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.rds.dbsnapshot
class boto.rds.dbsnapshot.DBSnapshot(connection=None, id=None)

Represents an RDS DB Snapshot

endElement(name, value, connection)
startElement(name, attrs, connection)
boto.rds.event
class boto.rds.event.Event(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.rds.parametergroup
class boto.rds.parametergroup.Parameter(group=None, name=None)

Represents an RDS Parameter

ValidApplyMethods = ['immediate', 'pending-reboot']
ValidApplyTypes = ['static', 'dynamic']
ValidSources = ['user', 'system', 'engine-default']
ValidTypes = {'integer': <type 'int'>, 'boolean': <type 'bool'>, 'string': <type 'str'>}
apply(immediate=False)
endElement(name, value, connection)
get_value()
merge(d, i)
set_value(value)
startElement(name, attrs, connection)
value
class boto.rds.parametergroup.ParameterGroup(connection=None)
add_param(name, value, apply_method)
endElement(name, value, connection)
get_params()
modifiable()
startElement(name, attrs, connection)

route53

boto.route53
boto.route53.connection

class boto.route53.hostedzone.HostedZone(id=None, name=None, owner=None, version=None, caller_reference=None, config=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.route53.exception
exception boto.route53.exception.DNSServerError(status, reason, body=None, *args)

S3

boto.s3.acl
class boto.s3.acl.ACL(policy=None)
add_email_grant(permission, email_address)
add_grant(grant)
add_user_grant(permission, user_id, display_name=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
class boto.s3.acl.Grant(permission=None, type=None, id=None, display_name=None, uri=None, email_address=None)
NameSpace = 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"'
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
class boto.s3.acl.Policy(parent=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml()
boto.s3.bucket
class boto.s3.bucket.Bucket(connection=None, name=None, key_class=<class 'boto.s3.key.Key'>)
BucketLoggingBody = '<?xml version="1.0" encoding="UTF-8"?>\n <BucketLoggingStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n <LoggingEnabled>\n <TargetBucket>%s</TargetBucket>\n <TargetPrefix>%s</TargetPrefix>\n </LoggingEnabled>\n </BucketLoggingStatus>'
BucketPaymentBody = '<?xml version="1.0" encoding="UTF-8"?>\n <RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n <Payer>%s</Payer>\n </RequestPaymentConfiguration>'
EmptyBucketLoggingBody = '<?xml version="1.0" encoding="UTF-8"?>\n <BucketLoggingStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n </BucketLoggingStatus>'
LoggingGroup = 'http://acs.amazonaws.com/groups/s3/LogDelivery'
MFADeleteRE = '<MfaDelete>([A-Za-z]+)</MfaDelete>'
VersionRE = '<Status>([A-Za-z]+)</Status>'
VersioningBody = '<?xml version="1.0" encoding="UTF-8"?>\n <VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n <Status>%s</Status>\n <MfaDelete>%s</MfaDelete>\n </VersioningConfiguration>'
WebsiteBody = '<?xml version="1.0" encoding="UTF-8"?>\n <WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\n <IndexDocument><Suffix>%s</Suffix></IndexDocument>\n %s\n </WebsiteConfiguration>'
WebsiteErrorFragment = '<ErrorDocument><Key>%s</Key></ErrorDocument>'
add_email_grant(permission, email_address, recursive=False, headers=None)

Convenience method that provides a quick way to add an email grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT’s the new ACL back to S3.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
  • email_address (string) – The email address associated with the AWS account you are granting the permission to.
  • recursive (boolean) – A boolean value that controls whether the command will apply the grant to all keys within the bucket or not. The default value is False. By passing a True value, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time!
add_user_grant(permission, user_id, recursive=False, headers=None, display_name=None)

Convenience method that provides a quick way to add a canonical user grant to a bucket. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUT’s the new ACL back to S3.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
  • user_id (string) – The canonical user id associated with the AWS account you are granting the permission to.
  • recursive (boolean) – A boolean value that controls whether the command will apply the grant to all keys within the bucket or not. The default value is False. By passing a True value, the call will iterate through all keys in the bucket and apply the same grant to each key. CAUTION: If you have a lot of keys, this could take a long time!
  • display_name (string) – An optional string containing the user’s Display Name. Only required on Walrus.
cancel_multipart_upload(key_name, upload_id, headers=None)
complete_multipart_upload(key_name, upload_id, xml_body, headers=None)

Complete a multipart upload operation.

configure_versioning(versioning, mfa_delete=False, mfa_token=None, headers=None)

Configure versioning for this bucket.

Note

This feature is currently in beta release and is available only in the Northern California region.

Parameters:
  • versioning (bool) – A boolean indicating whether versioning is enabled (True) or disabled (False).
  • mfa_delete (bool) – A boolean indicating whether the Multi-Factor Authentication Delete feature is enabled (True) or disabled (False). If mfa_delete is enabled then all Delete operations will require the token from your MFA device to be passed in the request.
  • mfa_token (tuple or list of strings) – A tuple or list consisting of the serial number from the MFA device and the current value of the six-digit token associated with the device. This value is required when you are changing the status of the MfaDelete property of the bucket.
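
A minimal sketch, assuming a Bucket object b (e.g. obtained via boto.connect_s3().get_bucket('mybucket')), that turns versioning on and then inspects the result:

b.configure_versioning(True)
status = b.get_versioning_status()   # e.g. a dict such as {'Versioning': 'Enabled'}
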
configure_website(suffix, error_key='', headers=None)

Configure this bucket to act as a website

Parameters:
  • suffix (str) – Suffix that is appended to a request that is for a “directory” on the website endpoint (e.g. if the suffix is index.html and you make a request to samplebucket/images/ the data that is returned will be for the object with the key name images/index.html). The suffix must not be empty and must not include a slash character.
  • error_key (str) – The object key name to use when a 4XX class error occurs. This is optional.
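
A minimal sketch, assuming a Bucket object b; the suffix and error document names are just examples:

b.configure_website('index.html', 'error.html')
endpoint = b.get_website_endpoint()   # hostname for website-style access to the bucket
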
copy_key(new_key_name, src_bucket_name, src_key_name, metadata=None, src_version_id=None, storage_class='STANDARD', preserve_acl=False)

Create a new key in the bucket by copying another existing key.

Parameters:
  • new_key_name (string) – The name of the new key
  • src_bucket_name (string) – The name of the source bucket
  • src_key_name (string) – The name of the source key
  • src_version_id (string) – The version id for the key. This param is optional. If not specified, the newest version of the key will be copied.
  • metadata (dict) – Metadata to be associated with new key. If metadata is supplied, it will replace the metadata of the source key being copied. If no metadata is supplied, the source key’s metadata will be copied to the new key.
  • storage_class (string) – The storage class of the new key. By default, the new key will use the standard storage class. Possible values are: STANDARD | REDUCED_REDUNDANCY
  • preserve_acl (bool) – If True, the ACL from the source key will be copied to the destination key. If False, the destination key will have the default ACL. Note that preserving the ACL in the new key object will require two additional API calls to S3, one to retrieve the current ACL and one to set that ACL on the new object. If you don’t care about the ACL, a value of False will be significantly more efficient.
Return type:

boto.s3.key.Key or subclass

Returns:

An instance of the newly created key object
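
A minimal sketch, assuming a Bucket object b and hypothetical bucket and key names, that copies a key into this bucket while preserving its ACL:

b.copy_key('backups/photo.jpg', 'source-bucket', 'photo.jpg',
           preserve_acl=True)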

delete(headers=None)
delete_key(key_name, headers=None, version_id=None, mfa_token=None)

Deletes a key from the bucket. If a version_id is provided, only that version of the key will be deleted.

Parameters:
  • key_name (string) – The key name to delete
  • version_id (string) – The version ID (optional)
  • mfa_token (tuple or list of strings) – A tuple or list consisting of the serial number from the MFA device and the current value of the six-digit token associated with the device. This value is required anytime you are deleting versioned objects from a bucket that has the MFADelete option on the bucket.
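
A minimal sketch, assuming a Bucket object b and a hypothetical key name:

b.delete_key('photo.jpg')
b.delete_key('photo.jpg', version_id='<version id>')   # delete one specific version
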
delete_website_configuration(headers=None)

Removes all website configuration from the bucket.

disable_logging(headers=None)
enable_logging(target_bucket, target_prefix='', headers=None)
endElement(name, value, connection)
generate_url(expires_in, method='GET', headers=None, force_http=False, response_headers=None)
get_acl(key_name='', headers=None, version_id=None)
get_all_keys(headers=None, **params)

A lower-level method for listing contents of a bucket. This closely models the actual S3 API and requires you to manually handle the paging of results. For a higher-level method that handles the details of paging for you, you can use the list method.

Parameters:
  • max_keys (int) – The maximum number of keys to retrieve
  • prefix (string) – The prefix of the keys you want to retrieve
  • marker (string) – The “marker” of where you are in the result set
  • delimiter (string) – If this optional, Unicode string parameter is included with your request, then keys that contain the same string between the prefix and the first occurrence of the delimiter will be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response.
Return type:

ResultSet

Returns:

The result from S3 listing the keys requested
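
A minimal manual-paging sketch, assuming a Bucket object b; in practice the higher-level list method shown below handles this for you:

keys = []
rs = b.get_all_keys(max_keys=100)
keys.extend(rs)
while rs.is_truncated:                 # more results remain on the server
    rs = b.get_all_keys(max_keys=100, marker=keys[-1].name)
    keys.extend(rs)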

get_all_multipart_uploads(headers=None, **params)

A lower-level, version-aware method for listing active MultiPart uploads for a bucket. This closely models the actual S3 API and requires you to manually handle the paging of results. For a higher-level method that handles the details of paging for you, you can use the list method.

Parameters:
  • max_uploads (int) – The maximum number of uploads to retrieve. Default value is 1000.
  • key_marker (string) –

    Together with upload_id_marker, this parameter specifies the multipart upload after which listing should begin. If upload_id_marker is not specified, only the keys lexicographically greater than the specified key_marker will be included in the list.

    If upload_id_marker is specified, any multipart uploads for a key equal to the key_marker might also be included, provided those multipart uploads have upload IDs lexicographically greater than the specified upload_id_marker.

  • upload_id_marker (string) – Together with key-marker, specifies the multipart upload after which listing should begin. If key_marker is not specified, the upload_id_marker parameter is ignored. Otherwise, any multipart uploads for a key equal to the key_marker might be included in the list only if they have an upload ID lexicographically greater than the specified upload_id_marker.
Return type:

ResultSet

Returns:

The result from S3 listing the uploads requested

get_all_versions(headers=None, **params)

A lower-level, version-aware method for listing contents of a bucket. This closely models the actual S3 API and requires you to manually handle the paging of results. For a higher-level method that handles the details of paging for you, you can use the list method.

Parameters:
  • max_keys (int) – The maximum number of keys to retrieve
  • prefix (string) – The prefix of the keys you want to retrieve
  • key_marker (string) – The “marker” of where you are in the result set with respect to keys.
  • version_id_marker (string) – The “marker” of where you are in the result set with respect to version-id’s.
  • delimiter (string) – If this optional, Unicode string parameter is included with your request, then keys that contain the same string between the prefix and the first occurrence of the delimiter will be rolled up into a single result element in the CommonPrefixes collection. These rolled-up keys are not returned elsewhere in the response.
Return type:

ResultSet

Returns:

The result from S3 listing the keys requested

get_key(key_name, headers=None, version_id=None)

Check to see if a particular key exists within the bucket. This method uses a HEAD request to check for the existence of the key. Returns: An instance of a Key object or None

Parameters:key_name (string) – The name of the key to retrieve
Return type:boto.s3.key.Key
Returns:A Key object from this bucket.
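
A minimal sketch, assuming a Bucket object b and a hypothetical key name:

k = b.get_key('photo.jpg')     # returns None if the key does not exist
if k is not None:
    size_in_bytes = k.size     # populated from the HEAD response, along with last_modified, etag, etc.
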
get_location()

Returns the LocationConstraint for the bucket.

Return type:str
Returns:The LocationConstraint for the bucket or the empty string if no constraint was specified when bucket was created.
get_logging_status(headers=None)
get_policy(headers=None)
get_request_payment(headers=None)
get_versioning_status(headers=None)

Returns the current status of versioning on the bucket.

Return type:dict
Returns:A dictionary containing a key named ‘Versioning’ that can have a value of either Enabled, Disabled, or Suspended. Also, if MFADelete has ever been enabled on the bucket, the dictionary will contain a key named ‘MFADelete’ which will have a value of either Enabled or Suspended.
get_website_configuration(headers=None)

Returns the current status of website configuration on the bucket.

Return type:dict
Returns:
A dictionary containing a Python representation of the XML response from S3. The overall structure is:
  • WebsiteConfiguration
    • IndexDocument
      • Suffix : suffix that is appended to a request that is for a “directory” on the website endpoint
    • ErrorDocument
      • Key : name of object to serve when an error occurs
get_website_endpoint()

Returns the fully qualified hostname to use if you want to access this bucket as a website. This doesn’t validate whether the bucket has been correctly configured as a website or not.

get_xml_acl(key_name='', headers=None, version_id=None)
initiate_multipart_upload(key_name, headers=None, reduced_redundancy=False, metadata=None)

Start a multipart upload operation.

Parameters:
  • key_name (string) – The name of the key that will ultimately result from this multipart upload operation. This will be exactly as the key appears in the bucket after the upload process has been completed.
  • headers (dict) – Additional HTTP headers to send and store with the resulting key in S3.
  • reduced_redundancy (boolean) – In multipart uploads, the storage class is specified when initiating the upload, not when uploading individual parts. So if you want the resulting key to use the reduced redundancy storage class set this flag when you initiate the upload.
  • metadata (dict) – Any metadata that you would like to set on the key that results from the multipart upload.
list(prefix='', delimiter='', marker='', headers=None)

List key objects within a bucket. This returns an instance of a BucketListResultSet that automatically handles all of the result paging, etc. from S3. You just need to keep iterating until there are no more results.

Called with no arguments, this will return an iterator object across all keys within the bucket.

The Key objects returned by the iterator are obtained by parsing the results of a GET on the bucket, also known as the List Objects request. The XML returned by this request contains only a subset of the information about each key. Certain metadata fields such as Content-Type and user metadata are not available in the XML. Therefore, if you want these additional metadata fields you will have to do a HEAD request on the Key in the bucket.

Parameters:
  • prefix (string) – allows you to limit the listing to a particular prefix. For example, if you call the method with prefix=’/foo/’ then the iterator will only cycle through the keys that begin with the string ‘/foo/’.
  • delimiter (string) – can be used in conjunction with the prefix to allow you to organize and browse your keys hierarchically. See: http://docs.amazonwebservices.com/AmazonS3/2006-03-01/ for more details.
  • marker (string) – The “marker” of where you are in the result set
Return type:

boto.s3.bucketlistresultset.BucketListResultSet

Returns:

an instance of a BucketListResultSet that handles paging, etc
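
A minimal sketch, assuming a Bucket object b and a hypothetical prefix, that collects the names of all keys under that prefix:

names = [k.name for k in b.list(prefix='photos/2011/')]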

list_grants(headers=None)
list_multipart_uploads(key_marker='', upload_id_marker='', headers=None)

List multipart upload objects within a bucket. This returns an instance of a MultiPartUploadListResultSet that automatically handles all of the result paging, etc. from S3. You just need to keep iterating until there are no more results.

Parameters:marker (string) – The “marker” of where you are in the result set
Return type:boto.s3.bucketlistresultset.MultiPartUploadListResultSet
Returns:an instance of a MultiPartUploadListResultSet that handles paging, etc
list_versions(prefix='', delimiter='', key_marker='', version_id_marker='', headers=None)

List version objects within a bucket. This returns an instance of a VersionedBucketListResultSet that automatically handles all of the result paging, etc. from S3. You just need to keep iterating until there are no more results. Called with no arguments, this will return an iterator object across all keys within the bucket.

Parameters:
  • prefix (string) – allows you to limit the listing to a particular prefix. For example, if you call the method with prefix=’/foo/’ then the iterator will only cycle through the keys that begin with the string ‘/foo/’.
  • delimiter (string) – can be used in conjunction with the prefix to allow you to organize and browse your keys hierarchically. See: http://docs.amazonwebservices.com/AmazonS3/2006-03-01/ for more details.
  • marker (string) – The “marker” of where you are in the result set
Return type:

boto.s3.bucketlistresultset.VersionedBucketListResultSet

Returns:

an instance of a VersionedBucketListResultSet that handles paging, etc

lookup(key_name, headers=None)

Deprecated: Please use get_key method.

Parameters:key_name (string) – The name of the key to retrieve
Return type:boto.s3.key.Key
Returns:A Key object from this bucket.
make_public(recursive=False, headers=None)
new_key(key_name=None)

Creates a new key

Parameters:key_name (string) – The name of the key to create
Return type:boto.s3.key.Key or subclass
Returns:An instance of the newly created key object
set_acl(acl_or_str, key_name='', headers=None, version_id=None)
set_as_logging_target(headers=None)
set_canned_acl(acl_str, key_name='', headers=None, version_id=None)
set_key_class(key_class)

Set the Key class associated with this bucket. By default, this would be the boto.s3.key.Key class, but if you want to subclass that for some reason, this allows you to associate your new class with a bucket so that when you call bucket.new_key() or get a listing of keys in the bucket, you will get instances of your key class rather than the default.

Parameters:key_class (class) – A subclass of Key that can be more specific
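
A minimal sketch of associating a hypothetical Key subclass with a bucket b:

from boto.s3.key import Key

class AuditedKey(Key):                  # hypothetical subclass
    def describe(self):
        return '%s (%s bytes)' % (self.name, self.size)

b.set_key_class(AuditedKey)
k = b.new_key('report.csv')             # k is now an AuditedKey instance
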
set_policy(policy, headers=None)
set_request_payment(payer='BucketOwner', headers=None)
set_xml_acl(acl_str, key_name='', headers=None, version_id=None)
startElement(name, attrs, connection)
class boto.s3.bucket.S3WebsiteEndpointTranslate
trans_region = defaultdict(<function <lambda> at 0x7fa8fc768758>, {'EU': 's3-website-eu-west-1', 'ap-northeast-1': 's3-website-ap-northeast-1', 'us-west-1': 's3-website-us-west-1', 'ap-southeast-1': 's3-website-ap-southeast-1'})
classmethod translate_region(reg)
boto.s3.bucketlistresultset
class boto.s3.bucketlistresultset.BucketListResultSet(bucket=None, prefix='', delimiter='', marker='', headers=None)

A resultset for listing keys within a bucket. Uses the bucket_lister generator function and implements the iterator interface. This transparently handles the results paging from S3 so even if you have many thousands of keys within the bucket you can iterate over all keys in a reasonably efficient manner.

class boto.s3.bucketlistresultset.MultiPartUploadListResultSet(bucket=None, key_marker='', upload_id_marker='', headers=None)

A resultset for listing multipart uploads within a bucket. Uses the multipart_upload_lister generator function and implements the iterator interface. This transparently handles the results paging from S3 so even if you have many thousands of uploads within the bucket you can iterate over all keys in a reasonably efficient manner.

class boto.s3.bucketlistresultset.VersionedBucketListResultSet(bucket=None, prefix='', delimiter='', key_marker='', version_id_marker='', headers=None)

A resultset for listing versions within a bucket. Uses the bucket_lister generator function and implements the iterator interface. This transparently handles the results paging from S3 so even if you have many thousands of keys within the bucket you can iterate over all keys in a reasonably efficient manner.

boto.s3.bucketlistresultset.bucket_lister(bucket, prefix='', delimiter='', marker='', headers=None)

A generator function for listing keys in a bucket.

boto.s3.bucketlistresultset.multipart_upload_lister(bucket, key_marker='', upload_id_marker='', headers=None)

A generator function for listing multipart uploads in a bucket.

boto.s3.bucketlistresultset.versioned_bucket_lister(bucket, prefix='', delimiter='', key_marker='', version_id_marker='', headers=None)

A generator function for listing versions in a bucket.

boto.s3.connection
class boto.s3.connection.Location
APNortheast = 'ap-northeast-1'
APSoutheast = 'ap-southeast-1'
DEFAULT = ''
EU = 'EU'
USWest = 'us-west-1'
class boto.s3.connection.OrdinaryCallingFormat
build_path_base(bucket, key='')
get_bucket_server(server, bucket)
class boto.s3.connection.ProtocolIndependentOrdinaryCallingFormat
build_url_base(connection, protocol, server, bucket, key='')
class boto.s3.connection.S3Connection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, host='s3.amazonaws.com', debug=0, https_connection_factory=None, calling_format=<boto.s3.connection.SubdomainCallingFormat instance>, path='/', provider='aws', bucket_class=<class 'boto.s3.bucket.Bucket'>)
DefaultHost = 's3.amazonaws.com'
QueryString = 'Signature=%s&Expires=%d&AWSAccessKeyId=%s'
build_post_form_args(bucket_name, key, expires_in=6000, acl=None, success_action_redirect=None, max_content_length=None, http_method='http', fields=None, conditions=None)

Taken from the AWS book Python examples and modified for use with boto. This only returns the arguments required for the post form, not the actual form. This does not return the file input field, which also needs to be added.

Parameters:
  • bucket_name (string) – Bucket to submit to
  • key (string) – Key name, optionally add ${filename} to the end to attach the submitted filename
  • expires_in (integer) – Time (in seconds) before this expires, defaults to 6000
  • acl (boto.s3.acl.ACL) – ACL rule to use, if any
  • success_action_redirect (string) – URL to redirect to on success
  • max_content_length (integer) – Maximum size for this file
  • http_method (string) – HTTP Method to use, “http” or “https”
Return type:

dict

Returns:

A dictionary containing field names/values as well as a url to POST to

{
    "action": action_url_to_post_to, 
    "fields": [ 
        {
            "name": field_name, 
            "value":  field_value
        }, 
        {
            "name": field_name2, 
            "value": field_value2
        } 
    ] 
}

build_post_policy(expiration_time, conditions)

Taken from the AWS book Python examples and modified for use with boto

create_bucket(bucket_name, headers=None, location='', policy=None)

Creates a new bucket in the given location. By default the bucket is created in the US Standard region. You can pass Location.EU to create a European bucket.

Parameters:
  • bucket_name (string) – The name of the new bucket
  • headers (dict) – Additional headers to pass along with the request to AWS.
  • location (boto.s3.connection.Location) – The location of the new bucket
  • policy (boto.s3.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new bucket in S3.
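
A minimal sketch (the bucket name is hypothetical) of creating a bucket in the EU location:

import boto
from boto.s3.connection import Location

c = boto.connect_s3()
b = c.create_bucket('mybucket-eu', location=Location.EU)
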
delete_bucket(bucket, headers=None)
generate_url(expires_in, method, bucket='', key='', headers=None, query_auth=True, force_http=False, response_headers=None)
get_all_buckets(headers=None)
get_bucket(bucket_name, validate=True, headers=None)
get_canonical_user_id(headers=None)

Convenience method that returns the “CanonicalUserID” of the user whose credentials are associated with the connection. The only way to get this value is to do a GET request on the service which returns all buckets associated with the account. As part of that response, the canonical userid is returned. This method simply does all of that and then returns just the user id.

Return type:string
Returns:A string containing the canonical user id.
lookup(bucket_name, validate=True, headers=None)
make_request(method, bucket='', key='', headers=None, data='', query_args=None, sender=None, override_num_retries=None)
set_bucket_class(bucket_class)

Set the Bucket class associated with this connection. By default, this would be the boto.s3.bucket.Bucket class, but if you want to subclass that for some reason, this allows you to associate your new class with this connection.

Parameters:bucket_class (class) – A subclass of Bucket that can be more specific
class boto.s3.connection.SubdomainCallingFormat
get_bucket_server(*args, **kwargs)
class boto.s3.connection.VHostCallingFormat
get_bucket_server(*args, **kwargs)
boto.s3.connection.assert_case_insensitive(f)
boto.s3.connection.check_lowercase_bucketname(n)

Bucket names must not contain uppercase characters. We check for this by appending a lowercase character and testing with islower(). Note this also covers cases like numeric bucket names with dashes.

>>> check_lowercase_bucketname("Aaaa")
Traceback (most recent call last):
...
BotoClientError: S3Error: Bucket names cannot contain upper-case
characters when using either the sub-domain or virtual hosting calling
format.
>>> check_lowercase_bucketname("1234-5678-9123")
True
>>> check_lowercase_bucketname("abcdefg1234")
True
boto.s3.key
class boto.s3.key.Key(bucket=None, name=None)
BufferSize = 8192
DefaultContentType = 'application/octet-stream'
add_email_grant(permission, email_address, headers=None)

Convenience method that provides a quick way to add an email grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUTs the new ACL back to S3.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
  • email_address (string) – The email address associated with the AWS account you are granting the permission to.
add_user_grant(permission, user_id, headers=None, display_name=None)

Convenience method that provides a quick way to add a canonical user grant to a key. This method retrieves the current ACL, creates a new grant based on the parameters passed in, adds that grant to the ACL and then PUTs the new ACL back to S3.

Parameters:
  • permission (string) – The permission being granted. Should be one of: (READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL).
  • user_id (string) – The canonical user id associated with the AWS account you are granting the permission to.
  • display_name (string) – An optional string containing the user’s Display Name. Only required on Walrus.
change_storage_class(new_storage_class, dst_bucket=None)

Change the storage class of an existing key. Depending on whether a different destination bucket is supplied or not, this will either move the item within the bucket, preserving all metadata and ACL info but changing the storage class, or it will copy the item to the provided destination bucket, also preserving metadata and ACL info.

Parameters:
  • new_storage_class (string) – The new storage class for the Key. Possible values are: STANDARD | REDUCED_REDUNDANCY
  • dst_bucket (string) – The name of a destination bucket. If not provided the current bucket of the key will be used.
close()
closed = False
compute_md5(fp)
Parameters:fp (file) – File pointer to the file to MD5 hash. The file pointer will be reset to the beginning of the file before the method returns.
Return type:tuple
Returns:A tuple containing the hex digest version of the MD5 hash as the first element and the base64 encoded version of the plain digest as the second element.
copy(dst_bucket, dst_key, metadata=None, reduced_redundancy=False, preserve_acl=False)

Copy this Key to another bucket.

Parameters:
  • dst_bucket (string) – The name of the destination bucket
  • dst_key (string) – The name of the destination key
  • metadata (dict) – Metadata to be associated with new key. If metadata is supplied, it will replace the metadata of the source key being copied. If no metadata is supplied, the source key’s metadata will be copied to the new key.
  • reduced_redundancy (bool) – If True, this will force the storage class of the new Key to be REDUCED_REDUNDANCY regardless of the storage class of the key being copied. The Reduced Redundancy Storage (RRS) feature of S3 provides lower redundancy at lower storage cost.
  • preserve_acl (bool) – If True, the ACL from the source key will be copied to the destination key. If False, the destination key will have the default ACL. Note that preserving the ACL in the new key object will require two additional API calls to S3, one to retrieve the current ACL and one to set that ACL on the new object. If you don’t care about the ACL, a value of False will be significantly more efficient.
Return type:

boto.s3.key.Key or subclass

Returns:

An instance of the newly created key object

delete()

Delete this key from S3

endElement(name, value, connection)
exists()

Returns True if the key exists

Return type:bool
Returns:Whether the key exists on S3
generate_url(expires_in, method='GET', headers=None, query_auth=True, force_http=False, response_headers=None)

Generate a URL to access this key.

Parameters:
  • expires_in (int) – How long the url is valid for, in seconds
  • method (string) – The method to use for retrieving the file (default is GET)
  • headers (dict) – Any headers to pass along in the request
  • query_auth (bool) –
Return type:

string

Returns:

The URL to access the key
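
A minimal sketch, assuming a Key object k (e.g. from bucket.get_key or bucket.new_key):

url = k.generate_url(3600)                       # signed URL valid for one hour
anon_url = k.generate_url(0, query_auth=False)   # unsigned URL, useful for publicly readable keys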

get_acl(headers=None)
get_contents_as_string(headers=None, cb=None, num_cb=10, torrent=False, version_id=None, response_headers=None)

Retrieve an object from S3 using the name of the Key object as the key in S3. Return the contents of the object as a string. See get_contents_to_file method for details about the parameters.

Parameters:
  • headers (dict) – Any additional headers to send in the request
  • cb (function) – a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from S3 and the second representing the total size of the object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – If True, returns the contents of a torrent file as a string.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
Return type:

string

Returns:

The contents of the file as a string

get_contents_to_file(fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None)

Retrieve an object from S3 using the name of the Key object as the key in S3. Write the contents of the object to the file pointed to by ‘fp’.

Parameters:
  • fp (File -like object) –
  • headers (dict) – additional HTTP headers that will be sent with the GET request.
  • cb (function) – a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from S3 and the second representing the total size of the object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – If True, returns the contents of a torrent file as a string.
  • res_download_handler – If provided, this handler will perform the download.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
get_contents_to_filename(filename, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, res_download_handler=None, response_headers=None)

Retrieve an object from S3 using the name of the Key object as the key in S3. Store contents of the object to a file named by ‘filename’. See get_contents_to_file method for details about the parameters.

Parameters:
  • filename (string) – The filename of where to put the file contents
  • headers (dict) – Any additional headers to send in the request
  • cb (function) – a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from S3 and the second representing the total size of the object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – If True, returns the contents of a torrent file as a string.
  • res_download_handler – If provided, this handler will perform the download.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
get_file(fp, headers=None, cb=None, num_cb=10, torrent=False, version_id=None, override_num_retries=None, response_headers=None)

Retrieves a file from an S3 Key

Parameters:
  • fp (file) – File pointer to put the data into
  • cb (function) – a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from S3 and the second representing the total size of the object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – Flag for whether to get a torrent for the file
  • override_num_retries (int) – If not None will override configured num_retries parameter for underlying GET.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
Param:

headers to send when retrieving the files

get_md5_from_hexdigest(md5_hexdigest)

A utility function to create the 2-tuple (md5hexdigest, base64md5) from just having a precalculated md5_hexdigest.

get_metadata(name)
get_torrent_file(fp, headers=None, cb=None, num_cb=10)

Get a torrent file (see also get_file)

Parameters:
  • fp (file) – The file pointer of where to put the torrent
  • headers (dict) – Headers to be passed
  • cb (function) – a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from S3 and the second representing the total size of the object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
get_xml_acl(headers=None)
handle_version_headers(resp, force=False)
make_public(headers=None)
next()

By providing a next method, the key object supports use as an iterator. For example, you can now say:

for bytes in key:
    fp.write(bytes)   # e.g. write the bytes to a file object fp, or whatever

All of the HTTP connection stuff is handled for you.

open(mode='r', headers=None, query_args=None, override_num_retries=None)
open_read(headers=None, query_args=None, override_num_retries=None, response_headers=None)

Open this key for reading

Parameters:
  • headers (dict) – Headers to pass in the web request
  • query_args (string) – Arguments to pass in the query string (ie, ‘torrent’)
  • override_num_retries (int) – If not None will override configured num_retries parameter for underlying GET.
  • response_headers (dict) – A dictionary containing HTTP headers/values that will override any headers associated with the stored object in the response. See http://goo.gl/EWOPb for details.
open_write(headers=None, override_num_retries=None)

Open this key for writing. Not yet implemented

Parameters:
  • headers (dict) – Headers to pass in the write request
  • override_num_retries (int) – If not None will override configured num_retries parameter for underlying PUT.
provider
read(size=0)
send_file(fp, headers=None, cb=None, num_cb=10, query_args=None)

Upload a file to a key into a bucket on S3.

Parameters:
  • fp (file) – The file pointer to upload
  • headers (dict) – The headers to pass along with the PUT request
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer. Providing a negative integer will cause your callback to be called with each buffer read.
set_acl(acl_str, headers=None)
set_canned_acl(acl_str, headers=None)
set_contents_from_file(fp, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False, query_args=None)

Store an object in S3 using the name of the Key object as the key in S3 and the contents of the file pointed to by ‘fp’ as the contents.

Parameters:
  • fp (file) – the file whose contents to upload
  • headers (dict) – Additional HTTP headers that will be sent with the PUT request.
  • replace (bool) – If this parameter is False, the method will first check to see if an object exists in the bucket with the same key. If it does, it won’t overwrite it. The default value is True which will overwrite the object.
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • policy (boto.s3.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new key in S3.
  • md5 (A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method.) – If you need to compute the MD5 for any reason prior to upload, it’s silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed.
  • reduced_redundancy (bool) – If True, this will set the storage class of the new Key to be REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3 provides lower redundancy at lower storage cost.
set_contents_from_filename(filename, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False)

Store an object in S3 using the name of the Key object as the key in S3 and the contents of the file named by ‘filename’. See set_contents_from_file method for details about the parameters.

Parameters:
  • filename (string) – The name of the file that you want to put onto S3
  • headers (dict) – Additional headers to pass along with the request to AWS.
  • replace (bool) – If True, replaces the contents of the file if it already exists.
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • policy (boto.s3.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new key in S3.
  • md5 (A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method.) – If you need to compute the MD5 for any reason prior to upload, it’s silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed.
  • reduced_redundancy (bool) – If True, this will set the storage class of the new Key to be REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3 provides lower redundancy at lower storage cost.
set_contents_from_string(s, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None, reduced_redundancy=False)

Store an object in S3 using the name of the Key object as the key in S3 and the string ‘s’ as the contents. See set_contents_from_file method for details about the parameters.

Parameters:
  • headers (dict) – Additional headers to pass along with the request to AWS.
  • replace (bool) – If True, replaces the contents of the file if it already exists.
  • cb (function) – a callback function that will be called to report progress on the upload. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object.
  • num_cb – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • policy (boto.s3.acl.CannedACLStrings) – A canned ACL policy that will be applied to the new key in S3.
  • md5 (A tuple containing the hexdigest version of the MD5 checksum of the file as the first element and the Base64-encoded version of the plain checksum as the second element. This is the same format returned by the compute_md5 method.) – If you need to compute the MD5 for any reason prior to upload, it’s silly to have to do it twice so this param, if present, will be used as the MD5 values of the file. Otherwise, the checksum will be computed.
  • reduced_redundancy (bool) – If True, this will set the storage class of the new Key to be REDUCED_REDUNDANCY. The Reduced Redundancy Storage (RRS) feature of S3 provides lower redundancy at lower storage cost.
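
A minimal round-trip sketch, assuming a Bucket object b and a hypothetical key name:

k = b.new_key('hello.txt')
k.set_contents_from_string('Hello, S3!')
contents = k.get_contents_as_string()    # 'Hello, S3!'
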
set_metadata(name, value)
set_xml_acl(acl_str, headers=None)
startElement(name, attrs, connection)
update_metadata(d)
boto.s3.prefix
class boto.s3.prefix.Prefix(bucket=None, name=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.s3.user
class boto.s3.user.User(parent=None, id='', display_name='')
endElement(name, value, connection)
startElement(name, attrs, connection)
to_xml(element_name='Owner')
boto.s3.multipart
class boto.s3.multipart.CompleteMultiPartUpload(bucket=None)

Represents a completed MultiPart Upload. Contains the following useful attributes:

  • location - The URI of the completed upload
  • bucket_name - The name of the bucket in which the upload is contained
  • key_name - The name of the new, completed key
  • etag - The MD5 hash of the completed, combined upload

endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.s3.multipart.MultiPartUpload(bucket=None)

Represents a MultiPart Upload operation.

cancel_upload()

Cancels a MultiPart Upload operation. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts.

complete_upload()

Complete the MultiPart Upload operation. This method should be called when all parts of the file have been successfully uploaded to S3.

Return type:boto.s3.multipart.CompleteMultiPartUpload
Returns:An object representing the completed upload.
endElement(name, value, connection)
get_all_parts(max_parts=None, part_number_marker=None)

Return the uploaded parts of this MultiPart Upload. This is a lower-level method that requires you to manually page through results. To simplify this process, you can just use the object itself as an iterator and it will automatically handle all of the paging with S3.

startElement(name, attrs, connection)
to_xml()
upload_part_from_file(fp, part_num, headers=None, replace=True, cb=None, num_cb=10, policy=None, md5=None)

Upload another part of this MultiPart Upload.

Parameters:
  • fp (file) – The file object you want to upload.
  • part_num (int) – The number of this part.

The other parameters are exactly as defined for the boto.s3.key.Key set_contents_from_file method.
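
A minimal end-to-end sketch, assuming a Bucket object b and two hypothetical local part files (every part except the last must be at least 5 MB):

mp = b.initiate_multipart_upload('bigfile.bin')   # hypothetical key name
for i, path in enumerate(['part1', 'part2']):
    fp = open(path, 'rb')
    mp.upload_part_from_file(fp, i + 1)           # part numbers start at 1
    fp.close()
mp.complete_upload()                              # or mp.cancel_upload() to abort and free storage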

class boto.s3.multipart.Part(bucket=None)

Represents a single part in a MultiPart upload. Attributes include:

  • part_number - The integer part number
  • last_modified - The last modified date of this part
  • etag - The MD5 hash of this part
  • size - The size, in bytes, of this part
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.s3.multipart.part_lister(mpupload, part_number_marker=None)

A generator function for listing parts of a multipart upload.

boto.s3.resumable_download_handler
class boto.s3.resumable_download_handler.ByteTranslatingCallbackHandler(proxied_cb, download_start_point)

Proxy class that translates progress callbacks made by boto.s3.Key.get_file(), taking into account that we’re resuming a download.

call(total_bytes_uploaded, total_size)
class boto.s3.resumable_download_handler.ResumableDownloadHandler(tracker_file_name=None, num_retries=None)

Handler for resumable downloads.

Constructor. Instantiate once for each downloaded file.

Parameters:
  • tracker_file_name (string) – optional file name to save tracking info about this download. If supplied and the current process fails the download, it can be retried in a new process. If called with an existing file containing an unexpired timestamp, we’ll resume the transfer for this file; else we’ll start a new resumable download.
  • num_retries (int) – the number of times we’ll re-try a resumable download making no progress. (Count resets every time we get progress, so download can span many more than this number of retries.)
ETAG_REGEX = '([a-z0-9]{32})\n'
RETRYABLE_EXCEPTIONS = (<class 'httplib.HTTPException'>, <type 'exceptions.IOError'>, <class 'socket.error'>, <class 'socket.gaierror'>)
get_file(key, fp, headers, cb=None, num_cb=10, torrent=False, version_id=None)

Retrieves a file from a Key.

Parameters:
  • key (boto.s3.key.Key or subclass) – The Key object from which the contents are to be downloaded
  • fp (file) – File pointer into which data should be downloaded
  • cb (function) – (optional) a callback function that will be called to report progress on the download. The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted from the storage service and the second representing the total number of bytes that need to be transmitted.
  • num_cb (int) – (optional) If a callback is specified with the cb parameter this parameter determines the granularity of the callback by defining the maximum number of times the callback will be called during the file transfer.
  • torrent (bool) – Flag for whether to get a torrent for the file
  • version_id (string) – The version ID (optional)
Param:

headers to send when retrieving the files

Raises ResumableDownloadException if a problem occurs during the transfer.
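
A minimal sketch, assuming a Key object k and hypothetical local paths, of wiring the handler into boto.s3.key.Key.get_contents_to_file:

from boto.s3.resumable_download_handler import ResumableDownloadHandler

handler = ResumableDownloadHandler(tracker_file_name='/tmp/bigfile.tracker',
                                   num_retries=6)
fp = open('/tmp/bigfile.bin', 'ab')   # append mode so an existing partial download can be resumed
k.get_contents_to_file(fp, res_download_handler=handler)
fp.close()
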
boto.s3.resumable_download_handler.get_cur_file_size(fp, position_to_eof=False)

Returns size of file, optionally leaving fp positioned at EOF.

boto.s3.deletemarker
class boto.s3.deletemarker.DeleteMarker(bucket=None, name=None)
endElement(name, value, connection)
startElement(name, attrs, connection)

sdb

boto.sdb
boto.sdb.connect_to_region(region_name, **kw_params)

Given a valid region name, return a boto.sdb.connection.SDBConnection.

Type:str
Parameters:region_name – The name of the region to connect to.
Return type:boto.sdb.connection.SDBConnection or None
Returns:A connection to the given region, or None if an invalid region name is given
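
A minimal sketch (the region name is just an example; credentials are taken from the environment or boto config):

import boto.sdb

conn = boto.sdb.connect_to_region('us-west-1')
domains = conn.get_all_domains()
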
boto.sdb.get_region(region_name, **kw_params)

Find and return a boto.sdb.regioninfo.RegionInfo object given a region name.

Type:str
Param:The name of the region.
Return type:boto.sdb.regioninfo.RegionInfo
Returns:The RegionInfo object for the given region or None if an invalid region name is provided.
boto.sdb.regions()

Get all available regions for the SDB service.

Return type:list
Returns:A list of boto.sdb.regioninfo.RegionInfo instances
boto.sdb.connection
class boto.sdb.connection.ItemThread(name, domain_name, item_names)

A threaded Item retriever utility class. Retrieved Item objects are stored in the items instance variable after run() is called.

Tip

The item retrieval will not start until the run() method is called.

Parameters:
  • name (str) – A thread name. Used for identification.
  • domain_name (str) – The name of a SimpleDB Domain
  • item_names (string or list of strings) – The name(s) of the items to retrieve from the specified Domain.
Variables:

items (list) – A list of items retrieved. Starts as empty list.

run()

Start the threaded retrieval of items. Populates the items list with Item objects.

class boto.sdb.connection.SDBConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', converter=None)

This class serves as a gateway to your SimpleDB region (defaults to us-east-1). Methods within allow access to SimpleDB Domain objects and their associated Item objects.

Tip

While you may instantiate this class directly, it may be easier to go through boto.connect_sdb().

For any keywords that aren’t documented, refer to the parent class, boto.connection.AWSAuthConnection. You can avoid having to worry about these keyword arguments by instantiating these objects via boto.connect_sdb().

Parameters:region (boto.sdb.regioninfo.SDBRegionInfo) – Explicitly specify a region. Defaults to us-east-1 if not specified.
APIVersion = '2009-04-15'
DefaultRegionEndpoint = 'sdb.amazonaws.com'
DefaultRegionName = 'us-east-1'
ResponseError

alias of SDBResponseError

batch_delete_attributes(domain_or_name, items)

Delete multiple items in a domain.

Parameters:
  • domain_or_name (string or boto.sdb.domain.Domain object.) – Either the name of a domain or a Domain object
  • items (dict or dict-like object) –

    A dictionary-like object. The keys of the dictionary are the item names and the values are either:

    • dictionaries of attribute names/values, exactly the same as the attribute_names parameter of the scalar put_attributes call. The attribute name/value pairs will only be deleted if they match the name/value pairs passed in.
    • None which means that all attributes associated with the item should be deleted.
Returns:

True if successful

batch_put_attributes(domain_or_name, items, replace=True)

Store attributes for multiple items in a domain.

Parameters:
  • domain_or_name (string or boto.sdb.domain.Domain object.) – Either the name of a domain or a Domain object
  • items (dict or dict-like object) – A dictionary-like object. The keys of the dictionary are the item names and the values are themselves dictionaries of attribute names/values, exactly the same as the attribute_names parameter of the scalar put_attributes call.
  • replace (bool) – Whether the attribute values passed in will replace existing values or will be added as additional values. Defaults to True.
Return type:

bool

Returns:

True if successful
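
A minimal sketch, assuming an SDBConnection conn as above and a hypothetical domain name:

dom = conn.create_domain('my_domain')
conn.batch_put_attributes(dom, {
    'item1': {'color': 'blue', 'size': 'small'},
    'item2': {'color': 'red'}})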

create_domain(domain_name)

Create a SimpleDB domain.

Parameters:domain_name (string) – The name of the new domain
Return type:boto.sdb.domain.Domain object
Returns:The newly created domain
delete_attributes(domain_or_name, item_name, attr_names=None, expected_value=None)

Delete attributes from a given item in a domain.

Parameters:
  • domain_or_name (string or boto.sdb.domain.Domain object.) – Either the name of a domain or a Domain object
  • item_name (string) – The name of the item whose attributes are being deleted.
  • attr_names (list, dict or boto.sdb.item.Item) – Either a list containing attribute names which will cause all values associated with that attribute name to be deleted or a dict or Item containing the attribute names and keys and list of values to delete as the value. If no value is supplied, all attribute name/values for the item will be deleted.
  • expected_value (list) –

    If supplied, this is a list or tuple consisting of a single attribute name and expected value. The list can be of the form:

    • [‘name’, ‘value’]

    In which case the call will first verify that the attribute “name” of this item has a value of “value”. If it does, the delete will proceed, otherwise a ConditionalCheckFailed error will be returned. The list can also be of the form:

    • [‘name’, True|False]

    which will simply check for the existence (True) or non-existence (False) of the attribute.

Return type:

bool

Returns:

True if successful

delete_domain(domain_or_name)

Delete a SimpleDB domain.

Caution

This will delete the domain and all items within the domain.

Parameters:domain_or_name (string or boto.sdb.domain.Domain object.) – Either the name of a domain or a Domain object
Return type:bool
Returns:True if successful
domain_metadata(domain_or_name)

Get the Metadata for a SimpleDB domain.

Parameters:domain_or_name (string or boto.sdb.domain.Domain object.) – Either the name of a domain or a Domain object
Return type:boto.sdb.domain.DomainMetaData object
Returns:The newly created domain metadata object
get_all_domains(max_domains=None, next_token=None)

Returns a boto.resultset.ResultSet containing all boto.sdb.domain.Domain objects associated with this connection’s Access Key ID.

Parameters:
  • max_domains (int) – Limit the returned ResultSet to the specified number of members.
  • next_token (str) – A token string that was returned in an earlier call to this method as the next_token attribute on the returned ResultSet object. This attribute is set if there are more Domains than the value specified in the max_domains keyword. Pass the next_token value from your earlier query in this keyword to get the next ‘page’ of domains.
get_attributes(domain_or_name, item_name, attribute_names=None, consistent_read=False, item=None)

Retrieve attributes for a given item in a domain.

Parameters:
  • domain_or_name (string or boto.sdb.domain.Domain object.) – Either the name of a domain or a Domain object
  • item_name (string) – The name of the item whose attributes are being retrieved.
  • attribute_names (string or list of strings) – An attribute name or list of attribute names. This parameter is optional. If not supplied, all attributes will be retrieved for the item.
  • consistent_read (bool) – When set to true, ensures that the most recent data is returned.
  • item (boto.sdb.item.Item) – Instead of instantiating a new Item object, you may specify one to update.
Return type:

boto.sdb.item.Item

Returns:

An Item with the requested attribute name/values set on it

get_domain(domain_name, validate=True)

Retrieves a boto.sdb.domain.Domain object whose name matches domain_name.

Parameters:
  • domain_name (str) – The name of the domain to retrieve
  • validate (bool) – When True, check to see if the domain actually exists. If False, blindly return a Domain object with the specified name set.
Raises:

boto.exception.SDBResponseError if validate is True and no match could be found.

Return type:

boto.sdb.domain.Domain

Returns:

The requested domain

get_domain_and_name(domain_or_name)

Given a str or boto.sdb.domain.Domain, return a tuple with the following members (in order): the Domain object and the domain name.

Parameters:domain_or_name (str or boto.sdb.domain.Domain) – The domain or domain name to get the domain and name for.
Raises:boto.exception.SDBResponseError when an invalid domain name is specified.
Return type:tuple
Returns:A tuple with contents outlined as per above.
get_usage()

Returns the BoxUsage (in USD) accumulated on this specific SDBConnection instance.

Tip

This can be out of date, and should only be treated as a rough estimate. Also note that this estimate only applies to the requests made on this specific connection instance. It is by no means an account-wide estimate.

Return type:float
Returns:The accumulated BoxUsage of all requests made on the connection.
lookup(domain_name, validate=True)

Lookup an existing SimpleDB domain. This differs from get_domain() in that None is returned if validate is True and no match was found (instead of raising an exception).

Parameters:
  • domain_name (str) – The name of the domain to retrieve
  • validate (bool) – If True, a None value will be returned if the specified domain can’t be found. If False, a Domain object will be dumbly returned, regardless of whether it actually exists.
Return type:

boto.sdb.domain.Domain object or None

Returns:

The Domain object or None if the domain does not exist.

print_usage()

Print the BoxUsage and approximate costs of all requests made on this specific SDBConnection instance.

Tip

This can be out of date, and should only be treated as a rough estimate. Also note that this estimate only applies to the requests made on this specific connection instance. It is by no means an account-wide estimate.

put_attributes(domain_or_name, item_name, attributes, replace=True, expected_value=None)

Store attributes for a given item in a domain.

Parameters:
  • domain_or_name (string or boto.sdb.domain.Domain object.) – Either the name of a domain or a Domain object
  • item_name (string) – The name of the item whose attributes are being stored.
  • attributes (dict or dict-like object) – The name/value pairs to store as attributes
  • expected_value (list) –

    If supplied, this is a list or tuple consisting of a single attribute name and expected value. The list can be of the form:

    • [‘name’, ‘value’]

In which case the call will first verify that the attribute “name” of this item has a value of “value”. If it does, the put will proceed, otherwise a ConditionalCheckFailed error will be returned. The list can also be of the form:

    • [‘name’, True|False]

    which will simply check for the existence (True) or non-existence (False) of the attribute.

  • replace (bool) – Whether the attribute values passed in will replace existing values or will be added as additional values. Defaults to True.
Return type:

bool

Returns:

True if successful
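
A minimal conditional-put sketch, assuming an SDBConnection conn and the hypothetical domain, item and attribute names used earlier:

conn.put_attributes('my_domain', 'item1', {'status': 'sold'},
                    expected_value=['status', 'available'])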

select(domain_or_name, query='', next_token=None, consistent_read=False)

Returns a set of Attributes for item names within domain_name that match the query. The query must be expressed using the SELECT style syntax rather than the original SimpleDB query language. Even though the select request does not require a domain object, a domain object must be passed into this method so the Item objects returned can point to the appropriate domain.

Parameters:
  • domain_or_name (string or boto.sdb.domain.Domain object) – Either the name of a domain or a Domain object
  • query (string) – The SimpleDB query to be performed.
  • consistent_read (bool) – When set to true, ensures that the most recent data is returned.
Return type:

ResultSet

Returns:

An iterator containing the results.
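
For example, a simple select against a hypothetical domain named ‘mydomain’ might look like this (a sketch; the domain and attribute names are placeholders):

>>> import boto
>>> sdb = boto.connect_sdb()
>>> domain = sdb.get_domain('mydomain')
>>> rs = sdb.select(domain, "select * from `mydomain` where color = 'blue'")
>>> for item in rs:
...     print item.name, item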

set_item_cls(cls)

While the default item class is boto.sdb.item.Item, this default may be overridden. Use this method to change a connection’s item class.

Parameters:cls (object) – The new class to set as this connection’s item class. See the default item class for inspiration as to what your replacement should/could look like.
boto.sdb.db
boto.sdb.db.blob
class boto.sdb.db.blob.Blob(value=None, file=None, id=None)

Blob object

file
next()
read()
readline()
size
boto.sdb.db.key
class boto.sdb.db.key.Key(encoded=None, obj=None)
app()
classmethod from_path(*args, **kwds)
has_id_or_name()
id()
id_or_name()
kind()
name()
parent()
boto.sdb.db.manager
boto.sdb.db.manager.get_manager(cls)

Returns the appropriate Manager class for a given Model class. It does this by looking in the boto config for a section like this:

[DB]
db_type = SimpleDB
db_user = <aws access key id>
db_passwd = <aws secret access key>
db_name = my_domain
[DB_TestBasic]
db_type = SimpleDB
db_user = <another aws access key id>
db_passwd = <another aws secret access key>
db_name = basic_domain
db_port = 1111

The values in the DB section are “generic values” that will be used if nothing more specific is found. You can also create a section for a specific Model class that gives the db info for that class. In the example above, TestBasic is a Model subclass.
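
For illustration, a Model subclass matching the DB_TestBasic section above might look like the following sketch (the property is purely illustrative; boto resolves the manager for the class via get_manager, falling back to the generic [DB] values for anything not set in [DB_TestBasic]):

from boto.sdb.db.model import Model
from boto.sdb.db.property import StringProperty

class TestBasic(Model):
    name = StringProperty()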

boto.sdb.db.manager.pgmanager

Note

This module requires psycopg2 to be installed in the Python path.

boto.sdb.db.manager.sdbmanager
class boto.sdb.db.manager.sdbmanager.SDBConverter(manager)

Responsible for converting base Python types to a format compatible with the underlying database. For SimpleDB, that means everything needs to be converted to a string when stored in SimpleDB and back from a string when retrieved.

To convert a value, pass it to the encode or decode method. The encode method will take a Python native value and convert it to DB format. The decode method will take a DB format value and convert it to Python native format. To find the appropriate method to call, the generic encode/decode methods look for a type-specific method named “encode_<type name>” or “decode_<type name>”.
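
For illustration only, the name-based dispatch described above behaves roughly like this sketch (the real implementation may differ in detail):

def encode(converter, item_type, value):
    # e.g. item_type=int looks for a method named encode_int
    method = getattr(converter, 'encode_%s' % item_type.__name__, None)
    if method:
        return method(value)
    return value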

decode(item_type, value)
decode_blob(value)
decode_bool(value)
decode_date(value)
decode_datetime(value)
decode_float(value)
decode_int(value)
decode_list(prop, value)
decode_long(value)
decode_map(prop, value)
decode_map_element(item_type, value)

Decode a single element for a map

decode_prop(prop, value)
decode_reference(value)
decode_string(value)

Decoding a string is a no-op; the value is returned as-is

decode_time(value)

Converts strings in the form HH:MM:SS.mmmmmm (created by datetime.time.isoformat()) to datetime.time objects.

Timezone-aware strings (“HH:MM:SS.mmmmmm+HH:MM”) are not currently handled and will raise TimeDecodeError.

encode(item_type, value)
encode_blob(value)
encode_bool(value)
encode_date(value)
encode_datetime(value)
encode_float(value)

See http://tools.ietf.org/html/draft-wood-ldapext-float-00.

encode_int(value)
encode_list(prop, value)
encode_long(value)
encode_map(prop, value)
encode_prop(prop, value)
encode_reference(value)
encode_string(value)

Convert ASCII, Latin-1 or UTF-8 to pure Unicode

encode_time(value)
class boto.sdb.db.manager.sdbmanager.SDBManager(cls, db_name, db_user, db_passwd, db_host, db_port, db_table, ddl_dir, enable_ssl, consistent=None)
count(cls, filters, quick=True, sort_by=None, select=None)

Get the number of results that would be returned in this query

decode_value(prop, value)
delete_key_value(obj, name)
delete_object(obj)
domain
encode_value(prop, value)
get_blob_bucket(bucket_name=None)
get_key_value(obj, name)
get_object(cls, id, a=None)
get_object_from_id(id)
get_property(prop, obj, name)
get_raw_item(obj)
get_s3_connection()
load_object(obj)
query(query)
query_gql(query_string, *args, **kwds)
save_object(obj, expected_value=None)
sdb
set_key_value(obj, name, value)
set_property(prop, obj, name, value)
exception boto.sdb.db.manager.sdbmanager.TimeDecodeError
boto.sdb.db.manager.xmlmanager
class boto.sdb.db.manager.xmlmanager.XMLConverter(manager)

Responsible for converting base Python types to a format compatible with the underlying database. For SimpleDB, that means everything needs to be converted to a string when stored in SimpleDB and back from a string when retrieved.

To convert a value, pass it to the encode or decode method. The encode method will take a Python native value and convert it to DB format. The decode method will take a DB format value and convert it to Python native format. To find the appropriate method to call, the generic encode/decode methods look for a type-specific method named “encode_<type name>” or “decode_<type name>”.

decode(item_type, value)
decode_bool(value)
decode_datetime(value)
decode_int(value)
decode_long(value)
decode_password(value)
decode_prop(prop, value)
decode_reference(value)
encode(item_type, value)
encode_bool(value)
encode_datetime(value)
encode_int(value)
encode_long(value)
encode_password(value)
encode_prop(prop, value)
encode_reference(value)
get_text_value(parent_node)
class boto.sdb.db.manager.xmlmanager.XMLManager(cls, db_name, db_user, db_passwd, db_host, db_port, db_table, ddl_dir, enable_ssl)
decode_value(prop, value)
delete_key_value(obj, name)
delete_object(obj)
encode_value(prop, value)
get_doc()
get_key_value(obj, name)
get_list(prop_node, item_type)
get_object(cls, id)
get_object_from_doc(cls, id, doc)
get_property(prop, obj, name)
get_props_from_doc(cls, id, doc)

Pull out the properties from this document. Returns the class, the properties in a hash, and the id (if provided) as a tuple: (cls, props, id)

get_raw_item(obj)
get_s3_connection()
load_object(obj)
marshal_object(obj, doc=None)
new_doc()
query(cls, filters, limit=None, order_by=None)
query_gql(query_string, *args, **kwds)
reset()
save_list(doc, items, prop_node)
save_object(obj, expected_value=None)

Marshal the object and do a PUT

set_key_value(obj, name, value)
set_property(prop, obj, name, value)
unmarshal_object(fp, cls=None, id=None)
unmarshal_props(fp, cls=None, id=None)

Same as unmarshalling an object, except it returns the result of get_props_from_doc

boto.sdb.db.model
class boto.sdb.db.model.Expando(id=None, **kw)
class boto.sdb.db.model.Model(id=None, **kw)
classmethod all(limit=None, next_token=None)
delete()
classmethod find(limit=None, next_token=None, **params)
classmethod find_property(prop_name)
classmethod find_subclass(name)

Find a subclass with a given name

classmethod from_xml(fp)
classmethod get_by_id(ids=None, parent=None)
classmethod get_by_ids(ids=None, parent=None)
classmethod get_by_key_name(key_names, parent=None)
classmethod get_lineage()
classmethod get_or_insert(key_name, **kw)
classmethod get_xmlmanager()
id = None
key()
classmethod kind()
load()
classmethod properties(hidden=True)
put(expected_value=None)
reload()
save(expected_value=None)
set_manager(manager)
to_dict()
to_xml(doc=None)
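
As a quick illustration of the Model API documented above, a hypothetical model definition and round trip might look like this (the class and property names, and the SimpleDB domain configured in your boto config, are assumptions):

from boto.sdb.db.model import Model
from boto.sdb.db.property import StringProperty, IntegerProperty

class Note(Model):
    title = StringProperty(required=True)
    rating = IntegerProperty(default=0)

note = Note()
note.title = 'hello'
note.put()                          # persist to the configured domain

same_note = Note.get_by_id(note.id) # fetch it back by id
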
class boto.sdb.db.model.ModelMeta(name, bases, dict)

Metaclass for all Models

boto.sdb.db.property
class boto.sdb.db.property.BlobProperty(verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, unique=False)
data_type

alias of Blob

type_name = 'blob'
class boto.sdb.db.property.BooleanProperty(verbose_name=None, name=None, default=False, required=False, validator=None, choices=None, unique=False)
data_type

alias of bool

empty(value)
type_name = 'Boolean'
class boto.sdb.db.property.CalculatedProperty(verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, calculated_type=<type 'int'>, unique=False, use_method=False)
get_value_for_datastore(model_instance)
class boto.sdb.db.property.DateProperty(verbose_name=None, auto_now=False, auto_now_add=False, name=None, default=None, required=False, validator=None, choices=None, unique=False)
data_type

alias of date

default_value()
get_value_for_datastore(model_instance)
now()
type_name = 'Date'
validate(value)
class boto.sdb.db.property.DateTimeProperty(verbose_name=None, auto_now=False, auto_now_add=False, name=None, default=None, required=False, validator=None, choices=None, unique=False)
data_type

alias of datetime

default_value()
get_value_for_datastore(model_instance)
now()
type_name = 'DateTime'
validate(value)
class boto.sdb.db.property.FloatProperty(verbose_name=None, name=None, default=0.0, required=False, validator=None, choices=None, unique=False)
data_type

alias of float

empty(value)
type_name = 'Float'
validate(value)
class boto.sdb.db.property.IntegerProperty(verbose_name=None, name=None, default=0, required=False, validator=None, choices=None, unique=False, max=2147483647, min=-2147483648)
data_type

alias of int

empty(value)
type_name = 'Integer'
validate(value)
class boto.sdb.db.property.ListProperty(item_type, verbose_name=None, name=None, default=None, **kwds)
data_type

alias of list

default_value()
empty(value)
type_name = 'List'
validate(value)
class boto.sdb.db.property.LongProperty(verbose_name=None, name=None, default=0, required=False, validator=None, choices=None, unique=False)
data_type

alias of long

empty(value)
type_name = 'Long'
validate(value)
class boto.sdb.db.property.MapProperty(item_type=<type 'str'>, verbose_name=None, name=None, default=None, **kwds)
data_type

alias of dict

default_value()
empty(value)
type_name = 'Map'
validate(value)
class boto.sdb.db.property.PasswordProperty(verbose_name=None, name=None, default='', required=False, validator=None, choices=None, unique=False, hashfunc=None)

Hashed property whose original value cannot be retrieved, but can still be compared.

Works by storing a hash of the original value instead of the original value. Once that’s done all that can be retrieved is the hash.

The comparison

obj.password == ‘foo’

generates a hash of ‘foo’ and compares it to the stored hash.

Underlying data type for hashing, storing, and comparing is boto.utils.Password. The default hash function is defined there (currently sha512 in most cases, md5 where sha512 is not available).

It’s unlikely you’ll ever need to use a different hash function, but if you do, you can control the behavior in one of two ways:

  1. Specifying hashfunc in the PasswordProperty constructor:

    import hashlib

    class MyModel(Model):
        password = PasswordProperty(hashfunc=hashlib.sha224)

  2. Subclassing Password and PasswordProperty:

    class SHA224Password(Password):
        hashfunc = hashlib.sha224

    class SHA224PasswordProperty(PasswordProperty):
        data_type = SHA224Password
        type_name = 'SHA224Password'

    class MyModel(Model):
        password = SHA224PasswordProperty()

The hashfunc parameter overrides the default hashfunc in boto.utils.Password.

The remaining parameters are passed through to StringProperty.__init__

data_type

alias of Password

get_value_for_datastore(model_instance)
make_value_from_datastore(value)
type_name = 'Password'
validate(value)
class boto.sdb.db.property.Property(verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, unique=False)
data_type

alias of str

default_validator(value)
default_value()
empty(value)
get_choices()
get_value_for_datastore(model_instance)
make_value_from_datastore(value)
name = ''
type_name = ''
validate(value)
verbose_name = ''
class boto.sdb.db.property.ReferenceProperty(reference_class=None, collection_name=None, verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, unique=False)
check_instance(value)
check_uuid(value)
data_type

alias of Key

type_name = 'Reference'
validate(value)
class boto.sdb.db.property.S3KeyProperty(verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, unique=False)
data_type

alias of Key

get_value_for_datastore(model_instance)
type_name = 'S3Key'
validate(value)
validate_regex = '^s3:\\/\\/([^\\/]*)\\/(.*)$'
class boto.sdb.db.property.StringProperty(verbose_name=None, name=None, default='', required=False, validator=<function validate_string>, choices=None, unique=False)
type_name = 'String'
class boto.sdb.db.property.TextProperty(verbose_name=None, name=None, default='', required=False, validator=None, choices=None, unique=False, max_length=None)
type_name = 'Text'
validate(value)
class boto.sdb.db.property.TimeProperty(verbose_name=None, name=None, default=None, required=False, validator=None, choices=None, unique=False)
data_type

alias of time

type_name = 'Time'
validate(value)
boto.sdb.db.property.validate_string(value)
boto.sdb.db.query
class boto.sdb.db.query.Query(model_class, limit=None, next_token=None, manager=None)
count(quick=True)
fetch(limit, offset=0)

Not currently fully supported, but provided so that a limit can be set via a chainable method

filter(property_operator, value)
get_next_token()
get_query()
next()
next_token
order(key)
set_next_token(token)
to_xml(doc=None)
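
For example, the hypothetical Note model from the earlier sketch could be queried like this (a sketch; iterating the Query fetches results lazily):

from boto.sdb.db.query import Query

query = Query(Note)
query.filter('rating >', 3)
query.order('-rating')
for note in query:
    print note.title, note.rating
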
boto.sdb.domain

Represents an SDB Domain

class boto.sdb.domain.Domain(connection=None, name=None)
batch_delete_attributes(items)

Delete multiple items in this domain.

Parameters:items (dict or dict-like object) –

A dictionary-like object. The keys of the dictionary are the item names and the values are either:

  • dictionaries of attribute names/values, exactly the same as the attribute_names parameter of the scalar put_attributes call. The attribute name/value pairs will only be deleted if they match the name/value pairs passed in.
  • None which means that all attributes associated with the item should be deleted.
Return type:bool
Returns:True if successful
batch_put_attributes(items, replace=True)

Store attributes for multiple items.

Parameters:
  • items (dict or dict-like object) – A dictionary-like object. The keys of the dictionary are the item names and the values are themselves dictionaries of attribute names/values, exactly the same as the attribute_names parameter of the scalar put_attributes call.
  • replace (bool) – Whether the attribute values passed in will replace existing values or will be added as additional values. Defaults to True.
Return type:

bool

Returns:

True if successful
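
For example (a sketch; the domain, item and attribute names are placeholders):

>>> import boto
>>> sdb = boto.connect_sdb()
>>> domain = sdb.get_domain('mydomain')
>>> domain.batch_put_attributes({
...     'item1': {'color': 'blue', 'size': 'small'},
...     'item2': {'color': 'red'},
... })
True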

delete()

Delete this domain, and all items under it

delete_attributes(item_name, attributes=None, expected_values=None)

Delete attributes from a given item.

Parameters:
  • item_name (string) – The name of the item whose attributes are being deleted.
  • attributes (dict, list or boto.sdb.item.Item) – Either a list of attribute names (in which case all values associated with each named attribute will be deleted) or a dict or Item whose keys are attribute names and whose values are lists of the values to delete. If no value is supplied, all attribute name/values for the item will be deleted.
  • expected_values (list) –

    If supplied, this is a list or tuple consisting of a single attribute name and expected value. The list can be of the form:

    • [‘name’, ‘value’]

    In which case the call will first verify that the attribute “name” of this item has a value of “value”. If it does, the delete will proceed, otherwise a ConditionalCheckFailed error will be returned. The list can also be of the form:

    • [‘name’, True|False]

    which will simply check for the existence (True) or non-existence (False) of the attribute.

Return type:

bool

Returns:

True if successful

delete_item(item)
endElement(name, value, connection)
from_xml(doc)

Load this domain based on an XML document

get_attributes(item_name, attribute_name=None, consistent_read=False, item=None)

Retrieve attributes for a given item.

Parameters:
  • item_name (string) – The name of the item whose attributes are being retrieved.
  • attribute_names (string or list of strings) – An attribute name or list of attribute names. This parameter is optional. If not supplied, all attributes will be retrieved for the item.
Return type:

boto.sdb.item.Item

Returns:

An Item mapping type containing the requested attribute name/values

get_item(item_name, consistent_read=False)

Retrieves an item from the domain, along with all of its attributes.

Parameters:
  • item_name (string) – The name of the item to retrieve.
  • consistent_read (bool) – When set to true, ensures that the most recent data is returned.
Return type:

boto.sdb.item.Item or None

Returns:

The requested item, or None if there was no match found

get_metadata()
new_item(item_name)
put_attributes(item_name, attributes, replace=True, expected_value=None)

Store attributes for a given item.

Parameters:
  • item_name (string) – The name of the item whose attributes are being stored.
  • attributes (dict or dict-like object) – The name/value pairs to store as attributes
  • expected_value (list) –

    If supplied, this is a list or tuple consisting of a single attribute name and expected value. The list can be of the form:

    • [‘name’, ‘value’]

    In which case the call will first verify that the attribute “name” of this item has a value of “value”. If it does, the put will proceed; otherwise a ConditionalCheckFailed error will be returned. The list can also be of the form:

    • [‘name’, True|False]

    which will simply check for the existence (True) or non-existence (False) of the attribute.

  • replace (bool) – Whether the attribute values passed in will replace existing values or will be added as additional values. Defaults to True.
Return type:

bool

Returns:

True if successful

select(query='', next_token=None, consistent_read=False, max_items=None)

Returns a set of Attributes for item names within this domain that match the query. The query must be expressed using the SELECT-style syntax rather than the original SimpleDB query language.

Parameters:query (string) – The SimpleDB query to be performed.
Return type:iter
Returns:An iterator containing the results. This is actually a generator function that will iterate across all search results, not just the first page.
startElement(name, attrs, connection)
to_xml(f=None)

Get this domain as an XML DOM Document.

Parameters:f (File or Stream) – Optional file to dump directly to
Return type:file
Returns:File object where the XML has been dumped to
class boto.sdb.domain.DomainDumpParser(domain)

SAX parser for a domain that has been dumped

characters(ch)
endElement(name)
startElement(name, attrs)
class boto.sdb.domain.DomainMetaData(domain=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.sdb.domain.UploaderThread(domain)

Uploader Thread

run()
boto.sdb.item
class boto.sdb.item.Item(domain, name='', active=False)

A dict sub-class that serves as an object representation of a SimpleDB item. An item in SDB is similar to a row in a relational database. Items belong to a Domain, which is similar to a table in a relational database.

The keys on instances of this object correspond to attributes that are stored on the SDB item.

Tip

While it is possible to instantiate this class directly, you may want to use the convenience methods on boto.sdb.domain.Domain for that purpose. For example, boto.sdb.domain.Domain.get_item().

Parameters:
add_value(key, value)

Helps set or add to attributes on this item. If you are adding a new attribute that has yet to be set, it will simply create an attribute named key with your given value as its value. If you are adding a value to an existing attribute, this method will convert the attribute to a list (if it isn’t already) and append your new value to said list.

For clarification, consider the following interactive session:

>>> item = some_domain.get_item('some_item')
>>> item.has_key('some_attr')
False
>>> item.add_value('some_attr', 1)
>>> item['some_attr']
1
>>> item.add_value('some_attr', 2)
>>> item['some_attr']
[1, 2]
Parameters:
  • key (str) – The attribute to add a value to.
  • value (object) – The value to set or append to the attribute.
decode_value(value)
delete()

Deletes this item in SDB.

Note

This local Python object remains in its current state after deletion; only the remote item in SDB is deleted.

endElement(name, value, connection)
load()

Loads or re-loads this item’s attributes from SDB.

Warning

If you have changed attribute values on an Item instance, this method will overwrite them with the values stored in SDB if they differ. Local attributes that don’t yet exist in SDB are left untouched.

save(replace=True)

Saves this item to SDB.

Parameters:replace (bool) – If True, delete any attributes on the remote SDB item that have a None value on this object.
startElement(name, attrs, connection)
boto.sdb.persist
boto.sdb.persist.checker
boto.sdb.persist.object
boto.sdb.persist.property
boto.sdb.queryresultset
class boto.sdb.queryresultset.QueryResultSet(domain=None, query='', max_items=None, attr_names=None)
class boto.sdb.queryresultset.SelectResultSet(domain=None, query='', max_items=None, next_token=None, consistent_read=False)
next()
boto.sdb.queryresultset.query_lister(domain, query='', max_items=None, attr_names=None)
boto.sdb.queryresultset.select_lister(domain, query='', max_items=None)

services

boto.services
boto.services.bs
class boto.services.bs.BS
Commands = {'reset': 'Clear input queue and output bucket', 'status': 'Report on the status of the service buckets and queues', 'batches': 'List all batches stored in current output_domain', 'retrieve': 'Retrieve output generated by a batch', 'submit': 'Submit local files to the service', 'start': 'Start the service'}
Usage = 'usage: %prog [options] config_file command'
do_batches()
do_reset()
do_retrieve()
do_start()
do_status()
do_submit()
main()
print_command_help()
boto.services.message
class boto.services.message.ServiceMessage(queue=None, body=None, xml_attrs=None)
for_key(key, params=None, bucket_name=None)
boto.services.result
class boto.services.result.ResultProcessor(batch_name, sd, mimetype_files=None)
LogFileName = 'log.csv'
calculate_stats(msg)
get_results(path, get_file=True, delete_msg=True)
get_results_from_bucket(path)
get_results_from_domain(path, get_file=True)
get_results_from_queue(path, get_file=True, delete_msg=True)
log_message(msg, path)
process_record(record, path, get_file=True)
boto.services.service
class boto.services.service.Service(config_file=None, mimetype_files=None)
ProcessingTime = 60
cleanup()
delete_message(message)
get_file(message)
main(notify=False)
process_file(in_file_name, msg)
put_file(bucket_name, file_path, key_name=None)
read_message()
save_results(results, input_message, output_message)
shutdown()
split_key(key)
write_message(message)
boto.services.servicedef
class boto.services.servicedef.ServiceDef(config_file, aws_access_key_id=None, aws_secret_access_key=None)
get(name, default=None)
get_obj(name)

Returns the AWS object associated with a given option.

The heuristics used are a bit lame. If the option name contains the word ‘bucket’ it is assumed to be an S3 bucket, if the name contains the word ‘queue’ it is assumed to be an SQS queue and if it contains the word ‘domain’ it is assumed to be a SimpleDB domain. If the option name specified does not exist in the config file or if the AWS object cannot be retrieved this returns None.

getbool(option, default=False)
getint(option, default=0)
has_option(option)
boto.services.sonofmmm
class boto.services.sonofmmm.SonOfMMM(config_file=None)
process_file(in_file_name, msg)
queue_files()
shutdown()
boto.services.submit
class boto.services.submit.Submitter(sd)
get_key_name(fullpath, prefix)
submit_file(path, metadata=None, cb=None, num_cb=0, prefix='/')
submit_path(path, tags=None, ignore_dirs=None, cb=None, num_cb=0, status=False, prefix='/')
write_message(key, metadata)

SES

boto.ses
boto.ses.connection
class boto.ses.connection.SESConnection(aws_access_key_id=None, aws_secret_access_key=None, port=None, proxy=None, proxy_port=None, host='email.us-east-1.amazonaws.com', debug=0)
APIVersion = '2010-12-01'
DefaultHost = 'email.us-east-1.amazonaws.com'
ResponseError

alias of BotoServerError

delete_verified_email_address(email_address)

Deletes the specified email address from the list of verified addresses.

Parameters:email_address – The email address to be removed from the list of verified addresses.
Return type:dict
Returns:A DeleteVerifiedEmailAddressResponse structure. Note that keys must be unicode strings.
get_send_quota()

Fetches the user’s current activity limits.

Return type:dict
Returns:A GetSendQuotaResponse structure. Note that keys must be unicode strings.
get_send_statistics()

Fetches the user’s sending statistics. The result is a list of data points, representing the last two weeks of sending activity.

Each data point in the list contains statistics for a 15-minute interval.

Return type:dict
Returns:A GetSendStatisticsResponse structure. Note that keys must be unicode strings.
list_verified_email_addresses()

Fetch a list of the email addresses that have been verified.

Return type:dict
Returns:A ListVerifiedEmailAddressesResponse structure. Note that keys must be unicode strings.
send_email(source, subject, body, to_addresses, cc_addresses=None, bcc_addresses=None, format='text', reply_addresses=None, return_path=None, text_body=None, html_body=None)

Composes an email message based on input data, and then immediately queues the message for sending.

Parameters:
  • source (string) – The sender’s email address.
  • subject (string) – The subject of the message: A short summary of the content, which will appear in the recipient’s inbox.
  • body (string) – The message body.
  • to_addresses (list of strings or string) – The To: field(s) of the message.
  • cc_addresses (list of strings or string) – The CC: field(s) of the message.
  • bcc_addresses (list of strings or string) – The BCC: field(s) of the message.
  • format (string) – The format of the message’s body, must be either “text” or “html”.
  • reply_addresses (list of strings or string) – The reply-to email address(es) for the message. If the recipient replies to the message, each reply-to address will receive the reply.
  • return_path (string) – The email address to which bounce notifications are to be forwarded. If the message cannot be delivered to the recipient, then an error message will be returned from the recipient’s ISP; this message will then be forwarded to the email address specified by the ReturnPath parameter.
  • text_body (string) – The text body to send with this email.
  • html_body (string) – The html body to send with this email.
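
A minimal sketch of sending a simple text message (the addresses are placeholders; with SES the source address must first be verified, for example via verify_email_address):

from boto.ses.connection import SESConnection

conn = SESConnection('<aws access key>', '<aws secret key>')
conn.send_email(source='sender@example.com',
                subject='Test message',
                body='Hello from boto.',
                to_addresses=['recipient@example.com'])
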
send_raw_email(raw_message, source=None, destinations=None)

Sends an email message, with header and content specified by the client. The SendRawEmail action is useful for sending multipart MIME emails, with attachments or inline content. The raw text of the message must comply with Internet email standards; otherwise, the message cannot be sent.

Parameters:
  • source (string) –

    The sender’s email address. Amazon’s docs say:

    If you specify the Source parameter, then bounce notifications and complaints will be sent to this email address. This takes precedence over any Return-Path header that you might include in the raw text of the message.

  • raw_message (string) –

    The raw text of the message. The client is responsible for ensuring the following:

    • Message must contain a header and a body, separated by a blank line.
    • All required header fields must be present.
    • Each part of a multipart MIME message must be formatted properly.
    • MIME content types must be among those supported by Amazon SES. Refer to the Amazon SES Developer Guide for more details.
    • Content must be base64-encoded, if MIME requires it.
  • destinations (list of strings or string) – A list of destinations for the message.
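
A minimal sketch of building a MIME message with the standard library and sending it as a raw message (the addresses are placeholders; the source must be a verified address):

from email.mime.text import MIMEText
from boto.ses.connection import SESConnection

conn = SESConnection('<aws access key>', '<aws secret key>')

msg = MIMEText('Hello from boto.')
msg['Subject'] = 'Test message'
msg['From'] = 'sender@example.com'
msg['To'] = 'recipient@example.com'

conn.send_raw_email(msg.as_string(),
                    source='sender@example.com',
                    destinations=['recipient@example.com'])
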
verify_email_address(email_address)

Verifies an email address. This action causes a confirmation email message to be sent to the specified address.

Parameters:email_address – The email address to be verified.
Return type:dict
Returns:A VerifyEmailAddressResponse structure. Note that keys must be unicode strings.

SNS

boto.sns
class boto.sns.SNSConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', converter=None)
APIVersion = '2010-03-31'
DefaultRegionEndpoint = 'sns.us-east-1.amazonaws.com'
DefaultRegionName = 'us-east-1'
add_permission(topic, label, account_ids, actions)

Adds a statement to a topic’s access control policy, granting access for the specified AWS accounts to the specified actions.

Parameters:
  • topic (string) – The ARN of the topic.
  • label (string) – A unique identifier for the new policy statement.
  • account_ids (list of strings) – The AWS account ids of the users who will be given access to the specified actions.
  • actions (list of strings) – The actions you want to allow for each of the specified principal(s).
confirm_subscription(topic, token, authenticate_on_unsubscribe=False)

Confirm a pending subscription to a topic using the token sent to the endpoint during the Subscribe operation.

Parameters:
  • topic (string) – The ARN of the topic.
  • token (string) – Short-lived token sent to an endpoint during the Subscribe operation.
  • authenticate_on_unsubscribe (bool) – Optional parameter indicating that you wish to disable unauthenticated unsubscription of the subscription.
create_topic(topic)

Create a new Topic.

Parameters:topic (string) – The name of the new topic.
delete_topic(topic)

Delete an existing topic

Parameters:topic (string) – The ARN of the topic
get_all_subscriptions(next_token=None)

Get list of all subscriptions.

Parameters:next_token (string) – Token returned by the previous call to this method.
get_all_subscriptions_by_topic(topic, next_token=None)

Get list of all subscriptions to a specific topic.

Parameters:
  • topic (string) – The ARN of the topic for which you wish to find subscriptions.
  • next_token (string) – Token returned by the previous call to this method.
get_all_topics(next_token=None)
Parameters:next_token (string) – Token returned by the previous call to this method.
get_topic_attributes(topic)

Get attributes of a Topic

Parameters:topic (string) – The ARN of the topic.
publish(topic, message, subject=None)

Publish a message to a topic.

Parameters:
  • topic (string) – The ARN of the topic to publish to.
  • message (string) – The message you want to send to the topic. Messages must be UTF-8 encoded strings and be at most 4KB in size.
  • subject (string) – Optional parameter to be used as the “Subject” line of the email notifications.
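
A minimal sketch of creating a topic and publishing to it (the topic ARN below is a placeholder; the real ARN is returned in the CreateTopic response):

from boto.sns import SNSConnection

sns = SNSConnection('<aws access key>', '<aws secret key>')
sns.create_topic('mytopic')

topic_arn = 'arn:aws:sns:us-east-1:123456789012:mytopic'  # placeholder
sns.publish(topic_arn, 'Hello from boto.', subject='Test')
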
remove_permission(topic, label)

Removes a statement from a topic’s access control policy.

Parameters:
  • topic (string) – The ARN of the topic.
  • label (string) – A unique identifier for the policy statement to be removed.
subscribe(topic, protocol, endpoint)

Subscribe to a Topic.

Parameters:
  • topic (string) – The ARN of the topic to subscribe to.
  • protocol (string) – The protocol used to communicate with the subscriber. Current choices are: email|email-json|http|https|sqs
  • endpoint (string) – The location of the endpoint for the subscriber:
    • For email, this would be a valid email address
    • For email-json, this would be a valid email address
    • For http, this would be a URL beginning with http
    • For https, this would be a URL beginning with https
    • For sqs, this would be the ARN of an SQS Queue
Return type:

dict

Returns:

The response returned by the Subscribe request

subscribe_sqs_queue(topic, queue)

Subscribe an SQS queue to a topic.

This is a convenience method that handles most of the complexity involved in using an SQS queue as an endpoint for an SNS topic. To achieve this the following operations are performed:

  • The correct ARN is constructed for the SQS queue and that ARN is then subscribed to the topic.
  • A JSON policy document is constructed that grants permission to the SNS topic to send messages to the SQS queue.
  • This JSON policy is then associated with the SQS queue using the queue’s set_attribute method. If the queue already has a policy associated with it, this process will add a Statement to that policy. If no policy exists, a new policy will be created.
Parameters:
  • topic (string) – The ARN of the topic.
  • queue (A boto Queue object) – The queue you wish to subscribe to the SNS Topic.
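
A minimal sketch of wiring an SQS queue to an existing topic (the topic ARN and queue name are placeholders):

import boto
from boto.sns import SNSConnection

sns = SNSConnection('<aws access key>', '<aws secret key>')
sqs = boto.connect_sqs()

queue = sqs.create_queue('mytopic-queue')
topic_arn = 'arn:aws:sns:us-east-1:123456789012:mytopic'  # placeholder
sns.subscribe_sqs_queue(topic_arn, queue)
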
unsubscribe(subscription)

Allows endpoint owner to delete subscription. Confirmation message will be delivered.

Parameters:subscription (string) – The ARN of the subscription to be deleted.

SQS

boto.sqs
boto.sqs.connect_to_region(region_name, **kw_params)
boto.sqs.regions()

Get all available regions for the SQS service.

Return type:list
Returns:A list of boto.ec2.regioninfo.RegionInfo
boto.sqs.attributes

Represents an SQS Attribute Name/Value set

class boto.sqs.attributes.Attributes(parent)
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.sqs.connection
class boto.sqs.connection.SQSConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/')

A Connection to the SQS Service.

APIVersion = '2009-02-01'
DefaultContentType = 'text/plain'
DefaultRegionEndpoint = 'queue.amazonaws.com'
DefaultRegionName = 'us-east-1'
ResponseError

alias of SQSError

add_permission(queue, label, aws_account_id, action_name)

Add a permission to a queue.

Parameters:
  • queue (boto.sqs.queue.Queue) – The queue object
  • label (str or unicode) – A unique identification of the permission you are setting. Maximum of 80 characters [0-9a-zA-Z_-] Example, AliceSendMessage
  • aws_account_id (str or unicode) – The AWS account number of the principal who will be given permission. The principal must have an AWS account, but does not need to be signed up for Amazon SQS.
  • action_name (str or unicode) – The action to allow. Valid choices are: * | SendMessage | ReceiveMessage | DeleteMessage | ChangeMessageVisibility | GetQueueAttributes
Return type:

bool

Returns:

True if successful, False otherwise.

change_message_visibility(queue, receipt_handle, visibility_timeout)

Extends the read lock timeout for the specified message from the specified queue to the specified value.

Parameters:
  • queue (A boto.sqs.queue.Queue object) – The Queue from which messages are read.
  • receipt_handle (str) – The receipt handle associated with the message whose visibility timeout will be changed.
  • visibility_timeout (int) – The new value of the message’s visibility timeout in seconds.
create_queue(queue_name, visibility_timeout=None)

Create an SQS Queue.

Parameters:
  • queue_name (str or unicode) – The name of the new queue. Names are scoped to an account and need to be unique within that account. Calling this method on an existing queue name will not return an error from SQS unless the value for visibility_timeout is different than the value of the existing queue of that name. This is still an expensive operation, though, and not the preferred way to check for the existence of a queue. See the boto.sqs.connection.SQSConnection.lookup() method.
  • visibility_timeout (int) – The default visibility timeout for all messages written to the queue. This can be overridden on a per-message basis.
Return type:

boto.sqs.queue.Queue

Returns:

The newly created queue.

delete_message(queue, message)

Delete a message from a queue.

Parameters:
  • queue (A boto.sqs.queue.Queue object) – The Queue from which the message was read.
  • message (A boto.sqs.message.Message object) – The Message to be deleted.
Return type:

bool

Returns:

True if successful, False otherwise.

delete_message_from_handle(queue, receipt_handle)

Delete a message from a queue, given a receipt handle.

Parameters:
  • queue (A boto.sqs.queue.Queue object) – The Queue from which messages are read.
  • receipt_handle (str) – The receipt handle for the message
Return type:

bool

Returns:

True if successful, False otherwise.

delete_queue(queue, force_deletion=False)

Delete an SQS Queue.

Parameters:
  • queue (A Queue object) – The SQS queue to be deleted
  • force_deletion (Boolean) – Normally, SQS will not delete a queue that contains messages. However, if the force_deletion argument is True, the queue will be deleted regardless of whether there are messages in the queue or not. USE WITH CAUTION. This will delete all messages in the queue as well.
Return type:

bool

Returns:

True if the command succeeded, False otherwise

get_all_queues(prefix='')
get_queue(queue_name)
get_queue_attributes(queue, attribute='All')

Gets one or all attributes of a Queue

Parameters:
  • queue (A Queue object) – The SQS queue whose attributes are being retrieved
  • attribute (str) – The name of the attribute to retrieve, or ‘All’ (the default) to retrieve all attributes
Return type:boto.sqs.attributes.Attributes
Returns:An Attributes object containing request value(s).
lookup(queue_name)
receive_message(queue, number_messages=1, visibility_timeout=None, attributes=None)

Read messages from an SQS Queue.

Parameters:
  • queue (A Queue object) – The Queue from which messages are read.
  • number_messages (int) – The maximum number of messages to read (default=1)
  • visibility_timeout (int) – The number of seconds the message should remain invisible to other queue readers (default=None which uses the Queues default)
  • attributes (str) – The name of an additional attribute to return with the response, or All if you want all attributes. The default is to return no additional attributes. Valid values: All | SenderId | SentTimestamp | ApproximateReceiveCount | ApproximateFirstReceiveTimestamp
Return type:

list

Returns:

A list of boto.sqs.message.Message objects.
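
For example, reading up to five messages and deleting each one after processing (a sketch; the queue name is a placeholder):

>>> import boto
>>> conn = boto.connect_sqs()
>>> q = conn.get_queue('myqueue')
>>> for m in conn.receive_message(q, number_messages=5):
...     print m.get_body()
...     conn.delete_message(q, m)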

remove_permission(queue, label)

Remove a permission from a queue.

Parameters:
  • queue (boto.sqs.queue.Queue) – The queue object
  • label (str or unicode) – The unique label associated with the permission being removed.
Return type:

bool

Returns:

True if successful, False otherwise.

send_message(queue, message_content)
set_queue_attribute(queue, attribute, value)
boto.sqs.jsonmessage
class boto.sqs.jsonmessage.JSONMessage(queue=None, body=None, xml_attrs=None)

Acts like a dictionary but encodes its data as a Base64-encoded JSON payload.

decode(value)
encode(value)
boto.sqs.message

SQS Message

A Message represents the data stored in an SQS queue. The rules for what is allowed within an SQS Message are described in the SQS documentation.

So, at its simplest level a Message just needs to allow a developer to store bytes in it and get the bytes back out. However, to allow messages to have richer semantics, the Message class must support the following interfaces:

The constructor for the Message class must accept a keyword parameter “queue” which is an instance of a boto Queue object and represents the queue that the message will be stored in. The default value for this parameter is None.

The constructor for the Message class must accept a keyword parameter “body” which represents the content or body of the message. The format of this parameter will depend on the behavior of the particular Message subclass. For example, if the Message subclass provides dictionary-like behavior to the user the body passed to the constructor should be a dict-like object that can be used to populate the initial state of the message.

The Message class must provide an encode method that accepts a value of the same type as the body parameter of the constructor and returns a string of characters that are able to be stored in an SQS message body (see rules above).

The Message class must provide a decode method that accepts a string of characters that can be stored (and probably were stored!) in an SQS message and return an object of a type that is consistent with the “body” parameter accepted on the class constructor.

The Message class must provide a __len__ method that will return the size of the encoded message that would be stored in SQS based on the current state of the Message object.

The Message class must provide a get_body method that will return the body of the message in the same format accepted in the constructor of the class.

The Message class must provide a set_body method that accepts a message body in the same format accepted by the constructor of the class. This method should alter the internal state of the Message object to reflect the state represented in the message body parameter.

The Message class must provide a get_body_encoded method that returns the current body of the message in the format in which it would be stored in SQS.
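
As an illustration of that contract, here is a sketch of a hypothetical message class that compresses its body by overriding encode and decode on RawMessage (the class name and encoding scheme are purely illustrative):

import base64
import zlib

from boto.sqs.message import RawMessage

class CompressedMessage(RawMessage):
    def encode(self, value):
        # body -> string that is safe to store in SQS
        return base64.b64encode(zlib.compress(value))

    def decode(self, value):
        # stored string -> original body
        if not value:
            return value
        return zlib.decompress(base64.b64decode(value))

A queue can then be told to use such a class via its set_message_class method (see boto.sqs.queue.Queue below).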

class boto.sqs.message.EncodedMHMessage(queue=None, body=None, xml_attrs=None)

The EncodedMHMessage class provides a message with RFC822-like headers like this:

HeaderName: HeaderValue

This variation encodes/decodes the body of the message in base64 automatically. The message instance can be treated like a mapping object, i.e. m[‘HeaderName’] would return ‘HeaderValue’.

decode(value)
encode(value)
class boto.sqs.message.MHMessage(queue=None, body=None, xml_attrs=None)

The MHMessage class provides a message with RFC822-like headers like this:

HeaderName: HeaderValue

The encoding/decoding of this is handled automatically and after the message body has been read, the message instance can be treated like a mapping object, i.e. m[‘HeaderName’] would return ‘HeaderValue’.

decode(value)
encode(value)
get(key, default=None)
has_key(key)
items()
keys()
update(d)
values()
class boto.sqs.message.Message(queue=None, body='')

The default Message class used for SQS queues. This class automatically encodes/decodes the message body using Base64 encoding to avoid any illegal characters in the message body. See:

http://developer.amazonwebservices.com/connect/thread.jspa?messageID=49680

for details on why this is a good idea. The encode/decode is meant to be transparent to the end-user.

decode(value)
encode(value)
class boto.sqs.message.RawMessage(queue=None, body='')

Base class for SQS messages. RawMessage does not encode the message in any way. Whatever you store in the body of the message is what will be written to SQS and whatever is returned from SQS is stored directly into the body of the message.

change_visibility(visibility_timeout)
decode(value)

Transform a serialized byte array into any object.

delete()
encode(value)

Transform body object into serialized byte array format.

endElement(name, value, connection)
get_body()
get_body_encoded()

This method is really a semi-private method used by the Queue.write method when writing the contents of the message to SQS. You probably shouldn’t need to call this method in the normal course of events.

set_body(body)

Override the current body for this object, using decoded format.

startElement(name, attrs, connection)
boto.sqs.queue

Represents an SQS Queue

class boto.sqs.queue.Queue(connection=None, url=None, message_class=<class boto.sqs.message.Message>)
add_permission(label, aws_account_id, action_name)

Add a permission to a queue.

Parameters:
  • label (str or unicode) – A unique identification of the permission you are setting. Maximum of 80 characters [0-9a-zA-Z_-] Example, AliceSendMessage
  • aws_account_id (str or unicode) – The AWS account number of the principal who will be given permission. The principal must have an AWS account, but does not need to be signed up for Amazon SQS.
  • action_name (str or unicode) – The action to allow. Valid choices are: * | SendMessage | ReceiveMessage | DeleteMessage | ChangeMessageVisibility | GetQueueAttributes
Return type:

bool

Returns:

True if successful, False otherwise.

clear(page_size=10, vtimeout=10)

Utility function to remove all messages from a queue

count(page_size=10, vtimeout=10)

Utility function to count the number of messages in a queue. Note: This function now calls GetQueueAttributes to obtain an ‘approximate’ count of the number of messages in a queue.

count_slow(page_size=10, vtimeout=10)

Deprecated. This is the old ‘count’ method that actually counts the messages by reading them all. This gives an accurate count but is very slow for queues with a non-trivial number of messages. Instead, use get_attributes(‘ApproximateNumberOfMessages’) to take advantage of the new SQS capability. This is retained only for the unit tests.

delete()

Delete the queue.

delete_message(message)

Delete a message from the queue.

Parameters:message (boto.sqs.message.Message) – The boto.sqs.message.Message object to delete.
Return type:bool
Returns:True if successful, False otherwise
dump(file_name, page_size=10, vtimeout=10, sep='\n')

Utility function to dump the messages in a queue to a file. NOTE: Page size must be < 10 or SQS will return an error.

endElement(name, value, connection)
get_attributes(attributes='All')

Retrieves attributes about this queue object and returns them in an Attributes instance (a subclass of dict).

Parameters:attributes (string) – String containing one of: ApproximateNumberOfMessages, ApproximateNumberOfMessagesNotVisible, VisibilityTimeout, CreatedTimestamp, LastModifiedTimestamp, Policy
Return type:Attribute object
Returns:An Attribute object which is a mapping type holding the requested name/value pairs
get_messages(num_messages=1, visibility_timeout=None, attributes=None)

Get a variable number of messages.

Parameters:
  • num_messages (int) – The maximum number of messages to read from the queue.
  • visibility_timeout (int) – The VisibilityTimeout for the messages read.
  • attributes (str) – The name of an additional attribute to return with the response, or All if you want all attributes. The default is to return no additional attributes. Valid values: All | SenderId | SentTimestamp | ApproximateReceiveCount | ApproximateFirstReceiveTimestamp
Return type:

list

Returns:

A list of boto.sqs.message.Message objects.

get_timeout()

Get the visibility timeout for the queue.

Return type:int
Returns:The number of seconds as an integer.
id
load(file_name, sep='\n')

Utility function to load messages from a local filename to a queue

load_from_file(fp, sep='\n')

Utility function to load messages from a file-like object to a queue

load_from_filename(file_name, sep='\n')

Utility function to load messages from a local filename to a queue

load_from_s3(bucket, prefix=None)

Load messages previously saved to S3.

name
new_message(body='')

Create new message of appropriate class.

Parameters:body (message body) – The body of the newly created message (optional).
Return type:boto.sqs.message.Message
Returns:A new Message object
read(visibility_timeout=None)

Read a single message from the queue.

Parameters:visibility_timeout (int) – The timeout for this message in seconds
Return type:boto.sqs.message.Message
Returns:A single message or None if queue is empty
remove_permission(label)

Remove a permission from a queue.

Parameters:label (str or unicode) – The unique label associated with the permission being removed.
Return type:bool
Returns:True if successful, False otherwise.
save(file_name, sep='\n')

Read all messages from the queue and persist them to local file. Messages are written to the file and the ‘sep’ string is written in between messages. Messages are deleted from the queue after being written to the file. Returns the number of messages saved.

save_to_file(fp, sep='\n')

Read all messages from the queue and persist them to file-like object. Messages are written to the file and the ‘sep’ string is written in between messages. Messages are deleted from the queue after being written to the file. Returns the number of messages saved.

save_to_filename(file_name, sep='\n')

Read all messages from the queue and persist them to local file. Messages are written to the file and the ‘sep’ string is written in between messages. Messages are deleted from the queue after being written to the file. Returns the number of messages saved.

save_to_s3(bucket)

Read all messages from the queue and persist them to S3. Messages are stored in the S3 bucket using a naming scheme of:

<queue_id>/<message_id>

Messages are deleted from the queue after being saved to S3. Returns the number of messages saved.

set_attribute(attribute, value)

Set a new value for an attribute of the Queue.

Parameters:
  • attribute (String) – The name of the attribute you want to set. The only valid value at this time is: VisibilityTimeout
  • value (int) – The new value for the attribute. For VisibilityTimeout the value must be an integer number of seconds from 0 to 86400.
Return type:

bool

Returns:

True if successful, otherwise False.

set_message_class(message_class)

Set the message class that should be used when instantiating messages read from the queue. By default, the class boto.sqs.message.Message is used but this can be overridden with any class that behaves like a message.

Parameters:message_class (Message-like class) – The new Message class
set_timeout(visibility_timeout)

Set the visibility timeout for the queue.

Parameters:visibility_timeout (int) – The desired timeout in seconds
startElement(name, attrs, connection)
write(message)

Add a single message to the queue.

Parameters:message (Message) – The message to be written to the queue
Return type:boto.sqs.message.Message
Returns:The boto.sqs.message.Message object that was written.
boto.sqs.regioninfo
class boto.sqs.regioninfo.SQSRegionInfo(connection=None, name=None, endpoint=None)

VPC

boto.vpc

Represents a connection to the EC2 service.

class boto.vpc.VPCConnection(aws_access_key_id=None, aws_secret_access_key=None, is_secure=True, host=None, port=None, proxy=None, proxy_port=None, proxy_user=None, proxy_pass=None, debug=0, https_connection_factory=None, region=None, path='/', api_version=None)

Init method to create a new connection to EC2.

Note: The host argument is overridden by the host specified in the boto configuration file.
associate_dhcp_options(dhcp_options_id, vpc_id)

Associate a set of Dhcp Options with a VPC.

Parameters:
  • dhcp_options_id (str) – The ID of the Dhcp Options
  • vpc_id (str) – The ID of the VPC.
Return type:

bool

Returns:

True if successful

attach_vpn_gateway(vpn_gateway_id, vpc_id)

Attaches a VPN gateway to a VPC.

Parameters:
  • vpn_gateway_id (str) – The ID of the vpn_gateway to attach
  • vpc_id (str) – The ID of the VPC you want to attach the gateway to.
Return type:

An attachment

Returns:

a boto.vpc.vpngateway.Attachment

create_customer_gateway(type, ip_address, bgp_asn)

Create a new Customer Gateway

Parameters:
  • type (str) – Type of VPN Connection. Currently the only valid value is ‘ipsec.1’
  • ip_address (str) – Internet-routable IP address for customer’s gateway. Must be a static address.
  • bgp_asn (str) – Customer gateway’s Border Gateway Protocol (BGP) Autonomous System Number (ASN)
Return type:

The newly created CustomerGateway

Returns:

A boto.vpc.customergateway.CustomerGateway object

create_dhcp_options(vpc_id, cidr_block, availability_zone=None)

Create a new DhcpOption

Parameters:
  • vpc_id (str) – The ID of the VPC where you want to create the subnet.
  • cidr_block (str) – The CIDR block you want the subnet to cover.
  • availability_zone (str) – The AZ you want the subnet in
Return type:

The newly created DhcpOption

Returns:

A boto.vpc.dhcpoptions.DhcpOptions object

create_subnet(vpc_id, cidr_block, availability_zone=None)

Create a new Subnet

Parameters:
  • vpc_id (str) – The ID of the VPC where you want to create the subnet.
  • cidr_block (str) – The CIDR block you want the subnet to cover.
  • availability_zone (str) – The AZ you want the subnet in
Return type:

The newly created Subnet

Returns:

A boto.vpc.subnet.Subnet object
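
A minimal sketch of creating a VPC and a subnet inside it (the CIDR blocks are placeholders, and the example assumes the returned VPC object exposes its ID as vpc.id):

from boto.vpc import VPCConnection

c = VPCConnection('<aws access key>', '<aws secret key>')
vpc = c.create_vpc('10.0.0.0/16')
subnet = c.create_subnet(vpc.id, '10.0.0.0/24')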

create_vpc(cidr_block)

Create a new Virtual Private Cloud.

Parameters:cidr_block (str) – A valid CIDR block
Return type:The newly created VPC
Returns:A boto.vpc.vpc.VPC object
create_vpn_connection(type, customer_gateway_id, vpn_gateway_id)

Create a new VPN Connection.

Parameters:
  • type (str) – The type of VPN Connection. Currently only ‘ipsec.1’ is supported
  • customer_gateway_id (str) – The ID of the customer gateway.
  • vpn_gateway_id (str) – The ID of the VPN gateway.
Return type:

The newly created VpnConnection

Returns:

A boto.vpc.vpnconnection.VpnConnection object

create_vpn_gateway(type, availability_zone=None)

Create a new Vpn Gateway

Parameters:
  • type (str) – Type of VPN Connection. Currently the only valid value is ‘ipsec.1’
  • availability_zone (str) – The Availability Zone where you want the VPN gateway.
Return type:

The newly created VpnGateway

Returns:

A boto.vpc.vpngateway.VpnGateway object
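
A sketch of the overall VPN setup flow using the calls above (the IP address, ASN and VPC ID are placeholders, and the example assumes the returned objects expose their IDs as .id):

from boto.vpc import VPCConnection

c = VPCConnection('<aws access key>', '<aws secret key>')
cgw = c.create_customer_gateway('ipsec.1', '12.1.2.3', '65534')
vgw = c.create_vpn_gateway('ipsec.1')
c.attach_vpn_gateway(vgw.id, 'vpc-xxxxxxxx')   # ID of an existing VPC
vpn = c.create_vpn_connection('ipsec.1', cgw.id, vgw.id)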

delete_customer_gateway(customer_gateway_id)

Delete a Customer Gateway.

Parameters:customer_gateway_id (str) – The ID of the customer_gateway to be deleted.
Return type:bool
Returns:True if successful
delete_dhcp_options(dhcp_options_id)

Delete a DHCP Options

Parameters:dhcp_options_id (str) – The ID of the DHCP Options to be deleted.
Return type:bool
Returns:True if successful
delete_subnet(subnet_id)

Delete a subnet.

Parameters:subnet_id (str) – The ID of the subnet to be deleted.
Return type:bool
Returns:True if successful
delete_vpc(vpc_id)

Delete a Virtual Private Cloud.

Parameters:vpc_id (str) – The ID of the vpc to be deleted.
Return type:bool
Returns:True if successful
delete_vpn_connection(vpn_connection_id)

Delete a VPN Connection.

Parameters:vpn_connection_id (str) – The ID of the vpn_connection to be deleted.
Return type:bool
Returns:True if successful
delete_vpn_gateway(vpn_gateway_id)

Delete a Vpn Gateway.

Parameters:vpn_gateway_id (str) – The ID of the vpn_gateway to be deleted.
Return type:bool
Returns:True if successful
get_all_customer_gateways(customer_gateway_ids=None, filters=None)

Retrieve information about your CustomerGateways. You can filter results to return information only about those CustomerGateways that match your search parameters. Otherwise, all CustomerGateways associated with your account are returned.

Parameters:
  • customer_gateway_ids (list) – A list of strings with the desired CustomerGateway ID’s
  • filters (list of tuples) –

    A list of tuples containing filters. Each tuple consists of a filter key and a filter value. Possible filter keys are:

    • state, the state of the CustomerGateway (pending,available,deleting,deleted)
    • type, the type of customer gateway (ipsec.1)
    • ipAddress, the IP address of the customer gateway’s internet-routable external interface
Return type:

list

Returns:

A list of boto.vpc.customergateway.CustomerGateway

get_all_dhcp_options(dhcp_options_ids=None)

Retrieve information about your DhcpOptions.

Parameters:dhcp_options_ids (list) – A list of strings with the desired DhcpOption ID’s
Return type:list
Returns:A list of boto.vpc.dhcpoptions.DhcpOptions
get_all_subnets(subnet_ids=None, filters=None)

Retrieve information about your Subnets. You can filter results to return information only about those Subnets that match your search parameters. Otherwise, all Subnets associated with your account are returned.

Parameters:
  • subnet_ids (list) – A list of strings with the desired Subnet ID’s
  • filters (list of tuples) –

    A list of tuples containing filters. Each tuple consists of a filter key and a filter value. Possible filter keys are:

    • state, the state of the Subnet (pending,available)
    • vpcId, the ID of the VPC the subnet is in.
    • cidrBlock, CIDR block of the subnet
    • availabilityZone, the Availability Zone the subnet is in.
Return type:

list

Returns:

A list of boto.vpc.subnet.Subnet

get_all_vpcs(vpc_ids=None, filters=None)

Retrieve information about your VPCs. You can filter results to return information only about those VPCs that match your search parameters. Otherwise, all VPCs associated with your account are returned.

Parameters:
  • vpc_ids (list) – A list of strings with the desired VPC ID’s
  • filters (list of tuples) –

    A list of tuples containing filters. Each tuple consists of a filter key and a filter value. Possible filter keys are:

    • state, the state of the VPC (pending or available)
    • cidrBlock, CIDR block of the VPC
    • dhcpOptionsId, the ID of a set of DHCP options
Return type:

list

Returns:

A list of boto.vpc.vpc.VPC
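
For example, restricting the results with filters might look like this (a sketch; the filter values are placeholders and the example assumes VPC objects expose id and state attributes):

from boto.vpc import VPCConnection

c = VPCConnection('<aws access key>', '<aws secret key>')
vpcs = c.get_all_vpcs(filters=[('state', 'available'),
                               ('cidrBlock', '10.0.0.0/16')])
for vpc in vpcs:
    print vpc.id, vpc.state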

get_all_vpn_connections(vpn_connection_ids=None, filters=None)

Retrieve information about your VPN_CONNECTIONs. You can filter results to return information only about those VPN_CONNECTIONs that match your search parameters. Otherwise, all VPN_CONNECTIONs associated with your account are returned.

Parameters:
  • vpn_connection_ids (list) – A list of strings with the desired VpnConnection IDs
  • filters (list of tuples) –

    A list of tuples containing filters. Each tuple consists of a filter key and a filter value. Possible filter keys are:

    • state, the state of the VpnConnection (pending, available, deleting, deleted)
    • type, the type of connection, currently ‘ipsec.1’
    • customerGatewayId, the ID of the customer gateway associated with the VPN
    • vpnGatewayId, the ID of the VPN gateway associated with the VPN connection
Return type:list
Returns:A list of boto.vpc.vpnconnection.VpnConnection

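For example, to find the VPN connections associated with a particular VPN gateway (conn as in the earlier sketch; the gateway ID is a placeholder):

>>> vpns = conn.get_all_vpn_connections(filters=[('vpnGatewayId', 'vgw-xxxxxxxx')])
>>> for vpn in vpns:
...     print(vpn.id)       # each item is a boto.vpc.vpnconnection.VpnConnection
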
get_all_vpn_gateways(vpn_gateway_ids=None, filters=None)

Retrieve information about your VpnGateways. You can filter results to return information only about those VpnGateways that match your search parameters. Otherwise, all VpnGateways associated with your account are returned.

Parameters:
  • vpn_gateway_ids (list) – A list of strings with the desired VpnGateway IDs
  • filters (list of tuples) –

    A list of tuples containing filters. Each tuple consists of a filter key and a filter value. Possible filter keys are:

    • state, the state of the VpnGateway (pending, available, deleting, deleted)
    • type, the type of VPN gateway (ipsec.1)
    • availabilityZone, the Availability Zone the VPN gateway is in.
Return type:list
Returns:A list of boto.vpc.vpngateway.VpnGateway

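For example, to list only the available VpnGateways (conn as in the earlier sketch):

>>> gateways = conn.get_all_vpn_gateways(filters=[('state', 'available')])
>>> for vgw in gateways:
...     print(vgw.id)       # each item is a boto.vpc.vpngateway.VpnGateway
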
boto.vpc.customergateway

Represents a Customer Gateway

class boto.vpc.customergateway.CustomerGateway(connection=None)
endElement(name, value, connection)
boto.vpc.dhcpoptions

Represents a DHCP Options set

class boto.vpc.dhcpoptions.DhcpConfigSet
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.vpc.dhcpoptions.DhcpOptions(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.vpc.dhcpoptions.DhcpValueSet
endElement(name, value, connection)
startElement(name, attrs, connection)
boto.vpc.subnet

Represents a Subnet

class boto.vpc.subnet.Subnet(connection=None)
endElement(name, value, connection)
boto.vpc.vpc

Represents a Virtual Private Cloud.

class boto.vpc.vpc.VPC(connection=None)
delete()
endElement(name, value, connection)
boto.vpc.vpnconnection

Represents a VPN Connection

class boto.vpc.vpnconnection.VpnConnection(connection=None)
delete()
endElement(name, value, connection)
boto.vpc.vpngateway

Represents a VPN Gateway

class boto.vpc.vpngateway.Attachment(connection=None)
endElement(name, value, connection)
startElement(name, attrs, connection)
class boto.vpc.vpngateway.VpnGateway(connection=None)
attach(vpc_id)
endElement(name, value, connection)
startElement(name, attrs, connection)
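
For example, a sketch of looking up a VPN gateway and attaching it to a VPC, assuming conn is the VPCConnection from the earlier sketch and both IDs are placeholders:

>>> vgw = conn.get_all_vpn_gateways(['vgw-xxxxxxxx'])[0]
>>> vgw.attach('vpc-xxxxxxxx')     # attach this gateway to the given VPC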

About the Documentation

boto’s documentation uses the Sphinx documentation system, which in turn is based on docutils. The basic idea is that lightly-formatted plain-text documentation is transformed into HTML, PDF, and any other output format.

To actually build the documentation locally, you’ll currently need to install Sphinx – easy_install Sphinx should do the trick.

Then, building the HTML is easy; just run make html from the docs directory.

To get started contributing, you’ll want to read the ReStructuredText Primer. After that, you’ll want to read about the Sphinx-specific markup that’s used to manage metadata, indexing, and cross-references.

The main thing to keep in mind as you write and edit docs is that the more semantic markup you can add, the better. So:

Import ``boto`` to your script...

Isn’t nearly as helpful as:

Add :mod:`boto` to your script...

This is because Sphinx will generate a proper link for the latter, which greatly helps readers. There’s basically no limit to the amount of useful markup you can add.

The fabfile

There is a Fabric file that can be used to build and deploy the documentation to a webserver that you have ssh access to.

To build and deploy:

cd docs/
fab deploy:remote_path='/var/www/folder/whatever' --hosts=user@host

This will get the latest code from subversion, add the revision number to the docs conf.py file, call make html to build the documentation, then tarball it up, scp it to the host you specified, and untarball it in the folder you specified, creating a symbolic link from the untarballed versioned folder to {remote_path}/boto-docs.

Indices and tables