Update BOTO to revision 3968

git-svn-id: svn://svn.chromium.org/boto@5 4f2e627c-b00b-48dd-b1fb-2c643665b734
diff --git a/MANIFEST.in b/MANIFEST.in
index fceffb7..d5e4f61 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1 +1,12 @@
 include boto/cacerts/cacerts.txt
+include README.rst
+include Changelog.rst
+include boto/file/README
+include .gitignore
+include pylintrc
+include boto/pyami/copybot.cfg
+include boto/services/sonofmmm.cfg
+include boto/mturk/test/*.doctest
+include boto/mturk/test/.gitignore
+recursive-include tests *.py *.txt
+recursive-include docs *
diff --git a/README.chromium b/README.chromium
index 14982ac..054e7e4 100644
--- a/README.chromium
+++ b/README.chromium
@@ -1,22 +1,5 @@
 URL: http://github.com/boto/boto
-Version: 2.1.1
+Version: 2.6.0
 License: MIT License
 
-This is a forked copy of boto v2.1.1.
-
-
-Fix checksum support to be compatible with Windows.
-See http://bugs.python.org/issue1735418 for more info.
-
-index 5492e14..d7d2aa0 100644
---- a/boto/s3/resumable_download_handler.py
-+++ b/boto/s3/resumable_download_handler.py
-@@ -220,7 +220,7 @@ class ResumableDownloadHandler(object):
-         gsutil runs), and the user could change some of the file and not
-         realize they have inconsistent data.
-         """
--        fp = open(file_name, 'r')
-+        fp = open(file_name, 'rb')
-         if key.bucket.connection.debug >= 1:
-             print 'Checking md5 against etag.'
-         hex_md5 = key.compute_md5(fp)[0]
+This is a forked copy of boto at revision 3968.
diff --git a/README.markdown b/README.markdown
deleted file mode 100644
index 95d32f0..0000000
--- a/README.markdown
+++ /dev/null
@@ -1,72 +0,0 @@
-# boto
-boto 2.1.1
-31-Oct-2011
-
-## Introduction
-
-Boto is a Python package that provides interfaces to Amazon Web Services.
-At the moment, boto supports:
-
- * Simple Storage Service (S3)
- * SimpleQueue Service (SQS)
- * Elastic Compute Cloud (EC2)
- * Mechanical Turk
- * SimpleDB
- * CloudFront
- * CloudWatch
- * AutoScale
- * Elastic Load Balancer (ELB)
- * Virtual Private Cloud (VPC)
- * Elastic Map Reduce (EMR)
- * Relational Data Service (RDS) 
- * Simple Notification Server (SNS)
- * Google Storage
- * Identity and Access Management (IAM)
- * Route53 DNS Service (route53)
- * Simple Email Service (SES)
- * Flexible Payment Service (FPS)
- * CloudFormation
-
-The goal of boto is to support the full breadth and depth of Amazon
-Web Services.  In addition, boto provides support for other public
-services such as Google Storage in addition to private cloud systems
-like Eucalyptus, OpenStack and Open Nebula.
-
-Boto is developed mainly using Python 2.6.6 and Python 2.7.1 on Mac OSX
-and Ubuntu Maverick.  It is known to work on other Linux distributions
-and on Windows.  Boto requires no additional libraries or packages
-other than those that are distributed with Python.  Efforts are made
-to keep boto compatible with Python 2.5.x but no guarantees are made.
-
-## Finding Out More About Boto
-
-The main source code repository for boto can be found on
-[github.com](http://github.com/boto/boto)
-
-[Online documentation](http://readthedocs.org/docs/boto/) is also
-available.  The online documentation includes full API documentation
-as well as Getting Started Guides for many of the boto modules.
-
-Boto releases can be found on the [Google Project
-page](http://code.google.com/p/boto/downloads/list) or on the [Python
-Cheese Shop](http://pypi.python.org/).
-
-Join our `IRC channel`_ (#boto on FreeNode).
-    IRC channel: http://webchat.freenode.net/?channels=boto
-
-## Getting Started with Boto
-
-Your credentials can be passed into the methods that create 
-connections.  Alternatively, boto will check for the existance of the
-following environment variables to ascertain your credentials:
-
-AWS_ACCESS_KEY_ID - Your AWS Access Key ID
-AWS_SECRET_ACCESS_KEY - Your AWS Secret Access Key
-
-Credentials and other boto-related settings can also be stored in a
-boto config file.  See
-[this](http://code.google.com/p/boto/wiki/BotoConfig) for details.
-
-Copyright (c) 2006-2011 Mitch Garnaat <mitch@garnaat.com>
-Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
-All rights reserved.
diff --git a/README.rst b/README.rst
new file mode 100644
index 0000000..3499ad4
--- /dev/null
+++ b/README.rst
@@ -0,0 +1,143 @@
+####
+boto
+####
+boto 2.6.0
+19-Sep-2012
+
+.. image:: https://secure.travis-ci.org/boto/boto.png?branch=develop
+        :target: https://secure.travis-ci.org/boto/boto
+
+************
+Introduction
+************
+
+Boto is a Python package that provides interfaces to Amazon Web Services.
+At the moment, boto supports:
+
+* Compute
+  * Amazon Elastic Compute Cloud (EC2)
+  * Amazon Elastic Map Reduce (EMR)
+  * AutoScaling
+  * Elastic Load Balancing (ELB)
+* Content Delivery
+  * Amazon CloudFront
+* Database
+  * Amazon Relational Data Service (RDS)
+  * Amazon DynamoDB
+  * Amazon SimpleDB
+* Deployment and Management
+  * AWS Identity and Access Management (IAM)
+  * Amazon CloudWatch
+  * AWS Elastic Beanstalk
+  * AWS CloudFormation
+* Application Services
+  * Amazon CloudSearch
+  * Amazon Simple Workflow Service (SWF)
+  * Amazon Simple Queue Service (SQS)
+  * Amazon Simple Notification Server (SNS)
+  * Amazon Simple Email Service (SES)
+* Networking
+  * Amazon Route53
+  * Amazon Virtual Private Cloud (VPC)
+* Payments and Billing
+  * Amazon Flexible Payment Service (FPS)
+* Storage
+  * Amazon Simple Storage Service (S3)
+  * Amazon Glacier
+  * Amazon Elastic Block Store (EBS)
+  * Google Cloud Storage
+* Workforce
+  * Amazon Mechanical Turk
+* Other
+  * Marketplace Web Services
+
+The goal of boto is to support the full breadth and depth of Amazon
+Web Services.  In addition, boto provides support for other public
+services such as Google Storage, as well as private cloud systems
+like Eucalyptus, OpenStack and Open Nebula.
+
+Boto is developed mainly using Python 2.6.6 and Python 2.7.1 on Mac OSX
+and Ubuntu Maverick.  It is known to work on other Linux distributions
+and on Windows.  Boto requires no additional libraries or packages
+other than those that are distributed with Python.  Efforts are made
+to keep boto compatible with Python 2.5.x but no guarantees are made.
+
+************
+Installation
+************
+
+Install via `pip`_:
+
+::
+
+	$ pip install boto
+
+Install from source:
+
+::
+
+	$ git clone git://github.com/boto/boto.git
+	$ cd boto
+	$ python setup.py install
+
+**********
+ChangeLogs
+**********
+
+To see what has changed over time in boto, you can check out the
+`release notes`_ in the wiki.
+
+*********************************
+Special Note for Python 3.x Users
+*********************************
+
+If you are interested in trying out boto with Python 3.x, check out the
+`neo`_ branch.  This is under active development and the goal is a version
+of boto that works in Python 2.6, 2.7, and 3.x.  Not everything is working
+just yet but many things are and it's worth a look if you are an active
+Python 3.x user.
+
+***************************
+Finding Out More About Boto
+***************************
+
+The main source code repository for boto can be found on `github.com`_.
+The boto project uses the `gitflow`_ model for branching.
+
+`Online documentation`_ is also available. The online documentation includes
+full API documentation as well as Getting Started Guides for many of the boto
+modules.
+
+Boto releases can be found on the `Python Cheese Shop`_.
+
+Join our IRC channel `#boto` on FreeNode.
+Webchat IRC channel: http://webchat.freenode.net/?channels=boto
+
+*************************
+Getting Started with Boto
+*************************
+
+Your credentials can be passed into the methods that create
+connections.  Alternatively, boto will check for the existence of the
+following environment variables to ascertain your credentials:
+
+**AWS_ACCESS_KEY_ID** - Your AWS Access Key ID
+
+**AWS_SECRET_ACCESS_KEY** - Your AWS Secret Access Key
+
+Credentials and other boto-related settings can also be stored in a
+boto config file.  See `this`_ for details.
+
+Copyright (c) 2006-2012 Mitch Garnaat <mitch@garnaat.com>
+Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
+Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+All rights reserved.
+
+.. _pip: http://www.pip-installer.org/
+.. _release notes: https://github.com/boto/boto/wiki
+.. _github.com: http://github.com/boto/boto
+.. _Online documentation: http://docs.pythonboto.org
+.. _Python Cheese Shop: http://pypi.python.org/pypi/boto
+.. _this: http://code.google.com/p/boto/wiki/BotoConfig
+.. _gitflow: http://nvie.com/posts/a-successful-git-branching-model/
+.. _neo: https://github.com/boto/boto/tree/neo
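
The credential flow the new README describes is short enough to show inline.
A minimal sketch, assuming S3 as the example service (the key values are
placeholders): explicit credentials win, otherwise boto falls back to the
environment variables or a boto config file.

    import boto

    # Explicit credentials take precedence; omit them and boto looks at
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY or the boto config file.
    s3 = boto.connect_s3(aws_access_key_id='AKIAEXAMPLEKEY',
                         aws_secret_access_key='example-secret-key')
    for bucket in s3.get_all_buckets():
        print bucket.name
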
diff --git a/bin/cq b/bin/cq
index dd9b914..242d0d2 100755
--- a/bin/cq
+++ b/bin/cq
@@ -31,8 +31,8 @@
 def main():
     try:
         opts, args = getopt.getopt(sys.argv[1:], 'hcq:o:t:r:',
-                                   ['help', 'clear', 'queue',
-                                    'output', 'timeout', 'region'])
+                                   ['help', 'clear', 'queue=',
+                                    'output=', 'timeout=', 'region='])
     except:
         usage()
         sys.exit(2)
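
The bin/cq fix above is purely about getopt syntax: a long option only
consumes a value when its name ends in '='. A standalone illustration (not
part of the patch):

    import getopt

    argv = ['--queue', 'myqueue']
    # With the trailing '=' the option captures its value:
    opts, rest = getopt.getopt(argv, 'q:', ['queue='])
    print opts, rest   # [('--queue', 'myqueue')] []
    # Without it, --queue is treated as a flag and the value is left behind:
    opts, rest = getopt.getopt(argv, 'q:', ['queue'])
    print opts, rest   # [('--queue', '')] ['myqueue']
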
diff --git a/bin/elbadmin b/bin/elbadmin
index a5ec6bb..6c8a8c7 100755
--- a/bin/elbadmin
+++ b/bin/elbadmin
@@ -15,136 +15,172 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 
 #
 # Elastic Load Balancer Tool
 #
-VERSION="0.1"
+VERSION = "0.2"
 usage = """%prog [options] [command]
 Commands:
-    list|ls                           List all Elastic Load Balancers
-    delete    <name>                  Delete ELB <name>
-    get       <name>                  Get all instances associated with <name>
-    create    <name>                  Create an ELB
-    add       <name> <instance>       Add <instance> in ELB <name>
-    remove|rm <name> <instance>       Remove <instance> from ELB <name>
-    enable|en <name> <zone>           Enable Zone <zone> for ELB <name>
-    disable   <name> <zone>           Disable Zone <zone> for ELB <name>
-    addl      <name>                  Add listeners (specified by -l) to the ELB <name>
-    rml       <name> <port>           Remove Listener(s) specified by the port on the ELB
+    list|ls                       List all Elastic Load Balancers
+    delete    <name>              Delete ELB <name>
+    get       <name>              Get all instances associated with <name>
+    create    <name>              Create an ELB; -z and -l are required
+    add       <name> <instance>   Add <instance> in ELB <name>
+    remove|rm <name> <instance>   Remove <instance> from ELB <name>
+    reap      <name>              Remove terminated instances from ELB <name>
+    enable|en <name> <zone>       Enable Zone <zone> for ELB <name>
+    disable   <name> <zone>       Disable Zone <zone> for ELB <name>
+    addl      <name>              Add listeners (specified by -l) to the ELB
+                                      <name>
+    rml       <name> <port>       Remove Listener(s) specified by the port on
+                                      the ELB <name>
 """
 
+
+def find_elb(elb, name):
+    try:
+        elbs = elb.get_all_load_balancers(name)
+    except boto.exception.BotoServerError as se:
+        if se.code == 'LoadBalancerNotFound':
+            elbs = []
+        else:
+            raise
+
+    if len(elbs) < 1:
+        print "No load balancer by the name of %s found" % name
+        return None
+    elif len(elbs) > 1:
+        print "More than one elb matches %s?" % name
+        return None
+
+    # Should not happen
+    if name not in elbs[0].name:
+        print "No load balancer by the name of %s found" % name
+        return None
+
+    return elbs[0]
+
+
 def list(elb):
     """List all ELBs"""
-    print "%-20s %s" %  ("Name", "DNS Name")
-    print "-"*80
+    print "%-20s %s" % ("Name", "DNS Name")
+    print "-" * 80
     for b in elb.get_all_load_balancers():
         print "%-20s %s" % (b.name, b.dns_name)
 
+
 def get(elb, name):
     """Get details about ELB <name>"""
-    elbs = elb.get_all_load_balancers(name)
-    if len(elbs) < 1:
-        print "No load balancer by the name of %s found" % name
-        return
-    for b in elbs:
-        if name in b.name:
-            print "="*80
-            print "Name: %s" % b.name
-            print "DNS Name: %s" % b.dns_name
 
-            print
+    b = find_elb(elb, name)
+    if b:
+        print "=" * 80
+        print "Name: %s" % b.name
+        print "DNS Name: %s" % b.dns_name
+        if b.canonical_hosted_zone_name:
+            chzn = b.canonical_hosted_zone_name
+            print "Canonical hosted zone name: %s" % chzn
+        if b.canonical_hosted_zone_name_id:
+            chznid = b.canonical_hosted_zone_name_id
+            print "Canonical hosted zone name id: %s" % chznid
+        print
 
-            print "Listeners"
-            print "---------"
-            print "%-8s %-8s %s" % ("IN", "OUT", "PROTO")
-            for l in b.listeners:
-                print "%-8s %-8s %s" % (l[0], l[1], l[2])
+        print "Health Check: %s" % b.health_check
+        print
 
-            print
+        print "Listeners"
+        print "---------"
+        print "%-8s %-8s %s" % ("IN", "OUT", "PROTO")
+        for l in b.listeners:
+            print "%-8s %-8s %s" % (l[0], l[1], l[2])
 
-            print "  Zones  "
-            print "---------"
-            for z in b.availability_zones:
-                print z
+        print
 
-            print
+        print "  Zones  "
+        print "---------"
+        for z in b.availability_zones:
+            print z
 
-            print "Instances"
-            print "---------"
-            for i in b.instances:
-                print i.id
+        print
 
-            print
+        print "Instances"
+        print "---------"
+        print "%-12s %-15s %s" % ("ID", "STATE", "DESCRIPTION")
+        for state in b.get_instance_health():
+            print "%-12s %-15s %s" % (state.instance_id, state.state,
+                                      state.description)
+
+        print
+
 
 def create(elb, name, zones, listeners):
     """Create an ELB named <name>"""
     l_list = []
     for l in listeners:
         l = l.split(",")
-        if l[2]=='HTTPS':
+        if l[2] == 'HTTPS':
             l_list.append((int(l[0]), int(l[1]), l[2], l[3]))
-        else : l_list.append((int(l[0]), int(l[1]), l[2]))
-    
+        else:
+            l_list.append((int(l[0]), int(l[1]), l[2]))
+
     b = elb.create_load_balancer(name, zones, l_list)
     return get(elb, name)
 
+
 def delete(elb, name):
     """Delete this ELB"""
-    b = elb.get_all_load_balancers(name)
-    if len(b) < 1:
-        print "No load balancer by the name of %s found" % name
-        return
-    for i in b:
-        if name in i.name:
-            i.delete()
-    print "Load Balancer %s deleted" % name
+    b = find_elb(elb, name)
+    if b:
+        b.delete()
+        print "Load Balancer %s deleted" % name
+
 
 def add_instance(elb, name, instance):
     """Add <instance> to ELB <name>"""
-    b = elb.get_all_load_balancers(name)
-    if len(b) < 1:
-        print "No load balancer by the name of %s found" % name
-        return
-    for i in b:
-        if name in i.name:
-            i.register_instances([instance])
-    return get(elb, name)
+    b = find_elb(elb, name)
+    if b:
+        b.register_instances([instance])
+        return get(elb, name)
 
 
 def remove_instance(elb, name, instance):
     """Remove instance from elb <name>"""
-    b = elb.get_all_load_balancers(name)
-    if len(b) < 1:
-        print "No load balancer by the name of %s found" % name
-        return
-    for i in b:
-        if name in i.name:
-            i.deregister_instances([instance])
-    return get(elb, name)
+    b = find_elb(elb, name)
+    if b:
+        b.deregister_instances([instance])
+        return get(elb, name)
+
+
+def reap_instances(elb, name):
+    """Remove terminated instances from elb <name>"""
+    b = find_elb(elb, name)
+    if b:
+        for state in b.get_instance_health():
+            if (state.state == 'OutOfService' and
+                state.description == 'Instance is in terminated state.'):
+                b.deregister_instances([state.instance_id])
+        return get(elb, name)
+
 
 def enable_zone(elb, name, zone):
     """Enable <zone> for elb"""
-    b = elb.get_all_load_balancers(name)
-    if len(b) < 1:
-        print "No load balancer by the name of %s found" % name
-        return
-    b = b[0]
-    b.enable_zones([zone])
-    return get(elb, name)
+    b = find_elb(elb, name)
+    if b:
+        b.enable_zones([zone])
+        return get(elb, name)
+
 
 def disable_zone(elb, name, zone):
     """Disable <zone> for elb"""
-    b = elb.get_all_load_balancers(name)
-    if len(b) < 1:
-        print "No load balancer by the name of %s found" % name
-        return
-    b = b[0]
-    b.disable_zones([zone])
-    return get(elb, name)
+    b = find_elb(elb, name)
+    if b:
+        b.disable_zones([zone])
+        return get(elb, name)
+
 
 def add_listener(elb, name, listeners):
     """Add listeners to a given load balancer"""
@@ -152,25 +188,18 @@
     for l in listeners:
         l = l.split(",")
         l_list.append((int(l[0]), int(l[1]), l[2]))
-    b = elb.get_all_load_balancers(name)
-    if len(b) < 1:
-        print "No load balancer by the name of %s found" % name
-        return
-    b = b[0]
-    b.create_listeners(l_list)
-    return get(elb, name)
+    b = find_elb(elb, name)
+    if b:
+        b.create_listeners(l_list)
+        return get(elb, name)
+
 
 def rm_listener(elb, name, ports):
     """Remove listeners from a given load balancer"""
-    b = elb.get_all_load_balancers(name)
-    if len(b) < 1:
-        print "No load balancer by the name of %s found" % name
-        return
-    b = b[0]
-    b.delete_listeners(ports)
-    return get(elb, name)
-
-
+    b = find_elb(elb, name)
+    if b:
+        b.delete_listeners(ports)
+        return get(elb, name)
 
 
 if __name__ == "__main__":
@@ -183,8 +212,12 @@
     from optparse import OptionParser
     from boto.mashups.iobject import IObject
     parser = OptionParser(version=VERSION, usage=usage)
-    parser.add_option("-z", "--zone", help="Operate on zone", action="append", default=[], dest="zones")
-    parser.add_option("-l", "--listener", help="Specify Listener in,out,proto", action="append", default=[], dest="listeners")
+    parser.add_option("-z", "--zone",
+                      help="Operate on zone",
+                      action="append", default=[], dest="zones")
+    parser.add_option("-l", "--listener",
+                      help="Specify Listener in,out,proto",
+                      action="append", default=[], dest="listeners")
 
     (options, args) = parser.parse_args()
 
@@ -202,6 +235,12 @@
     elif command == "get":
         get(elb, args[1])
     elif command == "create":
+        if not options.listeners:
+            print "-l option required for command create"
+            sys.exit(1)
+        if not options.zones:
+            print "-z option required for command create"
+            sys.exit(1)
         create(elb, args[1], options.zones, options.listeners)
     elif command == "delete":
         delete(elb, args[1])
@@ -209,11 +248,19 @@
         add_instance(elb, args[1], args[2])
     elif command in ("rm", "remove"):
         remove_instance(elb, args[1], args[2])
+    elif command == "reap":
+        reap_instances(elb, args[1])
     elif command in ("en", "enable"):
         enable_zone(elb, args[1], args[2])
     elif command == "disable":
         disable_zone(elb, args[1], args[2])
     elif command == "addl":
+        if not options.listeners:
+            print "-l option required for command addl"
+            sys.exit(1)
         add_listener(elb, args[1], options.listeners)
     elif command == "rml":
+        if not args[2:]:
+            print "port required"
+            sys.exit(2)
         rm_listener(elb, args[1], args[2:])
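
The new reap command reduces to a few ELB API calls; a condensed sketch using
the same boto calls (the load balancer name here is hypothetical):

    import boto

    elb = boto.connect_elb()
    lb = elb.get_all_load_balancers(['my-elb'])[0]
    # Deregister anything the health check reports as terminated, mirroring
    # what 'elbadmin reap my-elb' does.
    for state in lb.get_instance_health():
        if (state.state == 'OutOfService' and
                state.description == 'Instance is in terminated state.'):
            lb.deregister_instances([state.instance_id])
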
diff --git a/bin/glacier b/bin/glacier
new file mode 100755
index 0000000..aad1e8b
--- /dev/null
+++ b/bin/glacier
@@ -0,0 +1,154 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright (c) 2012 Miguel Olivares http://moliware.com/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+"""
+  glacier
+  ~~~~~~~
+
+    Amazon Glacier tool built on top of boto. Look at the usage method to see
+    how to use it.
+
+    Author: Miguel Olivares <miguel@moliware.com>
+"""
+import sys
+
+from boto.glacier import connect_to_region
+from getopt import getopt, GetoptError
+from os.path import isfile
+
+
+COMMANDS = ('vaults', 'jobs', 'upload')
+
+
+def usage():
+    print """
+glacier <command> [args]
+
+    Commands
+        vaults    - Operations with vaults
+        jobs      - Operations with jobs
+        upload    - Upload files to a vault. If the vault doesn't exist, it is
+                    created
+
+    Common args:
+        access_key - Your AWS Access Key ID.  If not supplied, boto will
+                     use the value of the environment variable
+                     AWS_ACCESS_KEY_ID
+        secret_key - Your AWS Secret Access Key.  If not supplied, boto
+                     will use the value of the environment variable
+                     AWS_SECRET_ACCESS_KEY
+        region     - AWS region to use. Possible values: us-east-1, us-west-1,
+                     us-west-2, ap-northeast-1, eu-west-1.
+                     Default: us-east-1
+
+    Vaults operations:
+
+        List vaults:
+            glacier vaults 
+
+    Jobs operations:
+
+        List jobs:
+            glacier jobs <vault name>
+
+    Uploading files:
+
+        glacier upload <vault name> <files>
+
+        Examples : 
+            glacier upload pics *.jpg
+            glacier upload pics a.jpg b.jpg
+"""
+    sys.exit()
+
+
+def connect(region, access_key=None, secret_key=None, debug_level=0):
+    """ Connect to a specific region """
+    return connect_to_region(region,
+                             aws_access_key_id=access_key, 
+                             aws_secret_access_key=secret_key,
+                             debug=debug_level)
+
+
+def list_vaults(region, access_key=None, secret_key=None):
+    layer2 = connect(region, access_key, secret_key)
+    for vault in layer2.list_vaults():
+        print vault.arn
+
+
+def list_jobs(vault_name, region, access_key=None, secret_key=None):
+    layer2 = connect(region, access_key, secret_key)
+    print layer2.layer1.list_jobs(vault_name)
+
+
+def upload_files(vault_name, filenames, region, access_key=None, secret_key=None):
+    layer2 = connect(region, access_key, secret_key)
+    layer2.create_vault(vault_name)
+    glacier_vault = layer2.get_vault(vault_name)
+    for filename in filenames:
+        if isfile(filename):
+            print 'Uploading %s to %s' % (filename, vault_name)
+            glacier_vault.upload_archive(filename)
+
+
+def main():
+    if len(sys.argv) < 2:
+        usage()
+    
+    command = sys.argv[1]
+    if command not in COMMANDS:
+        usage()
+
+    argv = sys.argv[2:]
+    options = 'a:s:r:'
+    long_options = ['access_key=', 'secret_key=', 'region=']
+    try:
+        opts, args = getopt(argv, options, long_options)
+    except GetoptError, e:
+        usage()
+
+    # Parse arguments
+    access_key = secret_key = None
+    region = 'us-east-1'
+    for option, value in opts:
+        if option in ('a', '--access_key'):
+            access_key = value
+        elif option in ('s', '--secret_key'):
+            secret_key = value
+        elif option in ('r', '--region'):
+            region = value
+    # handle each command
+    if command == 'vaults':
+        list_vaults(region, access_key, secret_key)
+    elif command == 'jobs':
+        if len(args) != 1:
+            usage()
+        list_jobs(args[0], region, access_key, secret_key)
+    elif command == 'upload':
+        if len(args) < 2:
+            usage()
+        upload_files(args[0], args[1:], region, access_key, secret_key)
+
+
+if __name__ == '__main__':
+    main()
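
For reference, the Layer2 calls the new glacier script wraps look roughly
like this (vault and file names are hypothetical):

    from boto.glacier import connect_to_region

    layer2 = connect_to_region('us-east-1')
    layer2.create_vault('photos')        # no-op if the vault already exists
    vault = layer2.get_vault('photos')
    archive_id = vault.upload_archive('backup.tar')
    print 'archive id:', archive_id
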
diff --git a/bin/instance_events b/bin/instance_events
new file mode 100755
index 0000000..b36a480
--- /dev/null
+++ b/bin/instance_events
@@ -0,0 +1,145 @@
+#!/usr/bin/env python
+# Copyright (c) 2011 Jim Browne http://www.42lines.net
+# Borrows heavily from boto/bin/list_instances which has no attribution
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+
+VERSION="0.1"
+usage = """%prog [options]
+Options:
+  -h, --help            show help message (including options list) and exit
+"""
+
+from operator import itemgetter
+
+HEADERS = {
+    'ID': {'get': itemgetter('id'), 'length':14},
+    'Zone': {'get': itemgetter('zone'), 'length':14},
+    'Hostname': {'get': itemgetter('dns'), 'length':20},
+    'Code': {'get': itemgetter('code'), 'length':18},
+    'Description': {'get': itemgetter('description'), 'length':30},
+    'NotBefore': {'get': itemgetter('not_before'), 'length':25},
+    'NotAfter': {'get': itemgetter('not_after'), 'length':25},
+    'T:': {'length': 30},
+}
+
+def get_column(name, event=None):
+    if name.startswith('T:'):
+        return event[name]
+    return HEADERS[name]['get'](event)
+
+def list(region, headers, order, completed):
+    """List status events for all instances in a given region"""
+
+    import re
+
+    ec2 = boto.connect_ec2(region=region)
+
+    reservations = ec2.get_all_instances()
+
+    instanceinfo = {}
+    events = {}
+    
+    displaytags = [ x for x in headers if x.startswith('T:') ]
+
+    # Collect the tag for every possible instance
+    for res in reservations:
+        for instance in res.instances:
+            iid = instance.id
+            instanceinfo[iid] = {}
+            for tagname in displaytags:
+                _, tag = tagname.split(':', 1)
+                instanceinfo[iid][tagname] = instance.tags.get(tag,'')
+            instanceinfo[iid]['dns'] = instance.public_dns_name
+        
+    stats = ec2.get_all_instance_status()
+
+    for stat in stats:
+        if stat.events:
+            for event in stat.events:
+                events[stat.id] = {}
+                events[stat.id]['id'] = stat.id
+                events[stat.id]['dns'] = instanceinfo[stat.id]['dns']
+                events[stat.id]['zone'] = stat.zone
+                for tag in displaytags:
+                    events[stat.id][tag] = instanceinfo[stat.id][tag]
+                events[stat.id]['code'] = event.code
+                events[stat.id]['description'] = event.description
+                events[stat.id]['not_before'] = event.not_before
+                events[stat.id]['not_after'] = event.not_after
+                if completed and re.match('^\[Completed\]',event.description):
+                    events[stat.id]['not_before'] = 'Completed'
+                    events[stat.id]['not_after'] = 'Completed'
+
+    # Create format string
+    format_string = ""
+    for h in headers:
+        if h.startswith('T:'):
+            format_string += "%%-%ds" % HEADERS['T:']['length']
+        else:
+            format_string += "%%-%ds" % HEADERS[h]['length']
+
+
+    print format_string % headers
+    print "-" * len(format_string % headers)
+                    
+    for instance in sorted(events,
+                           key=lambda ev: get_column(order, events[ev])):
+        e = events[instance]
+        print format_string % tuple(get_column(h, e) for h in headers)
+
+if __name__ == "__main__":
+    import boto
+    from optparse import OptionParser
+    from boto.ec2 import regions
+
+    parser = OptionParser(version=VERSION, usage=usage)
+    parser.add_option("-a", "--all", help="check all regions", dest="all", default=False,action="store_true")
+    parser.add_option("-r", "--region", help="region to check (default us-east-1)", dest="region", default="us-east-1")
+    parser.add_option("-H", "--headers", help="Set headers (use 'T:tagname' for including tags)", default=None, action="store", dest="headers", metavar="ID,Zone,Hostname,Code,Description,NotBefore,NotAfter,T:Name")
+    parser.add_option("-S", "--sort", help="Header for sort order", default=None, action="store", dest="order",metavar="HeaderName")
+    parser.add_option("-c", "--completed", help="List time fields as \"Completed\" for completed events (Default: false)", default=False, action="store_true", dest="completed")
+
+    (options, args) = parser.parse_args()
+
+    if options.headers:
+        headers = tuple(options.headers.split(','))
+    else:
+        headers = ('ID', 'Zone', 'Hostname', 'Code', 'NotBefore', 'NotAfter')
+
+    if options.order:
+        order = options.order
+    else:
+        order = 'ID'
+
+    if options.all:
+        for r in regions():
+            print "Region %s" % r.name
+            list(r, headers, order, options.completed)
+    else:
+        # Connect the region
+        for r in regions():
+            if r.name == options.region:
+                region = r
+                break
+        else:
+            print "Region %s not found." % options.region
+            sys.exit(1)
+
+        list(r, headers, order, options.completed)
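
The report above is driven by the EC2 instance-status API; stripped of the
formatting, the essential calls are just (standalone sketch):

    import boto

    ec2 = boto.connect_ec2()
    for status in ec2.get_all_instance_status():
        for event in status.events or []:
            print status.id, event.code, event.not_before, event.not_after
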
diff --git a/bin/launch_instance b/bin/launch_instance
index 53032ad..77a5419 100755
--- a/bin/launch_instance
+++ b/bin/launch_instance
@@ -133,6 +133,7 @@
     parser.add_option("-d", "--dns", help="Returns public and private DNS (implicates --wait)", default=False, action="store_true", dest="dns")
     parser.add_option("-T", "--tag", help="Set tag", default=None, action="append", dest="tags", metavar="key:value")
     parser.add_option("-s", "--scripts", help="Pass in a script or a folder containing scripts to be run when the instance starts up, assumes cloud-init. Specify scripts in a list specified by commas. If multiple scripts are specified, they are run lexically (A good way to ensure they run in the order is to prefix filenames with numbers)", type='string', action="callback", callback=scripts_callback)
+    parser.add_option("--role", help="IAM Role to use, this implies --no-add-cred", dest="role")
 
     (options, args) = parser.parse_args()
 
@@ -152,7 +153,7 @@
         print "Region %s not found." % options.region
         sys.exit(1)
     ec2 = boto.connect_ec2(region=region)
-    if not options.nocred:
+    if not options.nocred and not options.role:
         cfg.add_creds(ec2)
 
     iobj = IObject()
@@ -214,10 +215,15 @@
     if options.save_ebs:
         shutdown_proc = "save"
 
+    instance_profile_name = None
+    if options.role:
+        instance_profile_name = options.role
+
     r = ami.run(min_count=int(options.min_count), max_count=int(options.max_count),
             key_name=key_name, user_data=user_data,
             security_groups=groups, instance_type=options.type,
-            placement=options.zone, instance_initiated_shutdown_behavior=shutdown_proc)
+            placement=options.zone, instance_initiated_shutdown_behavior=shutdown_proc,
+            instance_profile_name=instance_profile_name)
 
     instance = r.instances[0]
 
diff --git a/bin/list_instances b/bin/list_instances
index 5abe9b6..4da5596 100755
--- a/bin/list_instances
+++ b/bin/list_instances
@@ -13,6 +13,7 @@
     'Zone': {'get': attrgetter('placement'), 'length':15},
     'Groups': {'get': attrgetter('groups'), 'length':30},
     'Hostname': {'get': attrgetter('public_dns_name'), 'length':50},
+    'PrivateHostname': {'get': attrgetter('private_dns_name'), 'length':50},
     'State': {'get': attrgetter('state'), 'length':15},
     'Image': {'get': attrgetter('image_id'), 'length':15},
     'Type': {'get': attrgetter('instance_type'), 'length':15},
@@ -33,6 +34,7 @@
     parser = OptionParser()
     parser.add_option("-r", "--region", help="Region (default us-east-1)", dest="region", default="us-east-1")
     parser.add_option("-H", "--headers", help="Set headers (use 'T:tagname' for including tags)", default=None, action="store", dest="headers", metavar="ID,Zone,Groups,Hostname,State,T:Name")
+    parser.add_option("-t", "--tab", help="Tab delimited, skip header - useful in shell scripts", action="store_true", default=False)
     (options, args) = parser.parse_args()
 
     # Connect the region
@@ -61,13 +63,20 @@
 
 
     # List and print
-    print format_string % headers
-    print "-" * len(format_string % headers)
+
+    if not options.tab:
+        print format_string % headers
+        print "-" * len(format_string % headers)
+
     for r in ec2.get_all_instances():
-        groups = [g.id for g in r.groups]
+        groups = [g.name for g in r.groups]
         for i in r.instances:
             i.groups = ','.join(groups)
-            print format_string % tuple(get_column(h, i) for h in headers)
+            if options.tab: 
+                print "\t".join(tuple(get_column(h, i) for h in headers))
+            else:
+                print format_string % tuple(get_column(h, i) for h in headers)
+ 
 
 if __name__ == "__main__":
     main()
diff --git a/bin/lss3 b/bin/lss3
index 377a5a5..497d084 100755
--- a/bin/lss3
+++ b/bin/lss3
@@ -64,7 +64,7 @@
                 pairs.append([name, None])
             if pairs[-1][0].lower() != pairs[-1][0]:
                 mixedCase = True
-
+    
     if mixedCase:
         s3 = boto.connect_s3(calling_format=OrdinaryCallingFormat())
     else:
diff --git a/bin/route53 b/bin/route53
index c2f2cb4..488a9ca 100755
--- a/bin/route53
+++ b/bin/route53
@@ -3,6 +3,34 @@
 #
 # route53 is similar to sdbadmin for Route53, it's a simple
 # console utility to perform the most frequent tasks with Route53
+#
+# Example usage.  Use route53 get after each command to see how the
+# zone changes.
+#
+# Add a non-weighted record, change its value, then delete.  Default TTL:
+#
+# route53 add_record ZPO9LGHZ43QB9 rr.example.com A 4.3.2.1
+# route53 change_record ZPO9LGHZ43QB9 rr.example.com A 9.8.7.6
+# route53 del_record ZPO9LGHZ43QB9 rr.example.com A 9.8.7.6
+#
+# Add a weighted record with two different weights.  Note that the TTL
+# must be specified as route53 uses positional parameters rather than
+# option flags:
+#
+# route53 add_record ZPO9LGHZ43QB9 wrr.example.com A 1.2.3.4 600 foo9 10
+# route53 add_record ZPO9LGHZ43QB9 wrr.example.com A 4.3.2.1 600 foo8 10
+#
+# route53 change_record ZPO9LGHZ43QB9 wrr.example.com A 9.9.9.9 600 foo8 10
+#
+# route53 del_record ZPO9LGHZ43QB9 wrr.example.com A 1.2.3.4 600 foo9 10
+# route53 del_record ZPO9LGHZ43QB9 wrr.example.com A 9.9.9.9 600 foo8 10
+#
+# Add a non-weighted alias, change its value, then delete.  Aliases inherit
+# their TTLs from the backing ELB:
+#
+# route53 add_alias ZPO9LGHZ43QB9 alias.example.com A Z3DZXE0Q79N41H lb-1218761514.us-east-1.elb.amazonaws.com.
+# route53 change_alias ZPO9LGHZ43QB9 alias.example.com. A Z3DZXE0Q79N41H lb2-1218761514.us-east-1.elb.amazonaws.com.
+# route53 delete_alias ZPO9LGHZ43QB9 alias.example.com. A Z3DZXE0Q79N41H lb2-1218761514.us-east-1.elb.amazonaws.com.
 
 def _print_zone_info(zoneinfo):
     print "="*80
@@ -12,7 +40,7 @@
     print "="*80
     print zoneinfo['Config']
     print
-    
+
 
 def create(conn, hostname, caller_reference=None, comment=''):
     """Create a hosted zone, returning the nameservers"""
@@ -44,63 +72,88 @@
     for record in response:
         print '%-40s %-5s %-20s %s' % (record.name, record.type, record.ttl, record.to_print())
 
-
-def add_record(conn, hosted_zone_id, name, type, values, ttl=600, comment=""):
-    """Add a new record to a zone"""
+def _add_del(conn, hosted_zone_id, change, name, type, identifier, weight, values, ttl, comment):
     from boto.route53.record import ResourceRecordSets
     changes = ResourceRecordSets(conn, hosted_zone_id, comment)
-    change = changes.add_change("CREATE", name, type, ttl)
+    change = changes.add_change(change, name, type, ttl,
+                                identifier=identifier, weight=weight)
     for value in values.split(','):
         change.add_value(value)
     print changes.commit()
 
-def del_record(conn, hosted_zone_id, name, type, values, ttl=600, comment=""):
-    """Delete a record from a zone"""
+def _add_del_alias(conn, hosted_zone_id, change, name, type, identifier, weight, alias_hosted_zone_id, alias_dns_name, comment):
     from boto.route53.record import ResourceRecordSets
     changes = ResourceRecordSets(conn, hosted_zone_id, comment)
-    change = changes.add_change("DELETE", name, type, ttl)
-    for value in values.split(','):
-        change.add_value(value)
-    print changes.commit()
-
-def add_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id, alias_dns_name, comment=""):
-    """Add a new alias to a zone"""
-    from boto.route53.record import ResourceRecordSets
-    changes = ResourceRecordSets(conn, hosted_zone_id, comment)
-    change = changes.add_change("CREATE", name, type)
+    change = changes.add_change(change, name, type,
+                                identifier=identifier, weight=weight)
     change.set_alias(alias_hosted_zone_id, alias_dns_name)
     print changes.commit()
 
-def del_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id, alias_dns_name, comment=""):
-    """Delete an alias from a zone"""
-    from boto.route53.record import ResourceRecordSets
-    changes = ResourceRecordSets(conn, hosted_zone_id, comment)
-    change = changes.add_change("DELETE", name, type)
-    change.set_alias(alias_hosted_zone_id, alias_dns_name)
-    print changes.commit()
+def add_record(conn, hosted_zone_id, name, type, values, ttl=600,
+               identifier=None, weight=None, comment=""):
+    """Add a new record to a zone.  identifier and weight are optional."""
+    _add_del(conn, hosted_zone_id, "CREATE", name, type, identifier,
+             weight, values, ttl, comment)
 
-def change_record(conn, hosted_zone_id, name, type, values, ttl=600, comment=""):
-    """Delete and then add a record to a zone"""
+def del_record(conn, hosted_zone_id, name, type, values, ttl=600,
+               identifier=None, weight=None, comment=""):
+    """Delete a record from a zone: name, type, ttl, identifier, and weight must match."""
+    _add_del(conn, hosted_zone_id, "DELETE", name, type, identifier,
+             weight, values, ttl, comment)
+
+def add_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id,
+              alias_dns_name, identifier=None, weight=None, comment=""):
+    """Add a new alias to a zone.  identifier and weight are optional."""
+    _add_del_alias(conn, hosted_zone_id, "CREATE", name, type, identifier,
+                   weight, alias_hosted_zone_id, alias_dns_name, comment)
+
+def del_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id,
+              alias_dns_name, identifier=None, weight=None, comment=""):
+    """Delete an alias from a zone: name, type, alias_hosted_zone_id, alias_dns_name, weight and identifier must match."""
+    _add_del_alias(conn, hosted_zone_id, "DELETE", name, type, identifier,
+                   weight, alias_hosted_zone_id, alias_dns_name, comment)
+
+def change_record(conn, hosted_zone_id, name, type, newvalues, ttl=600,
+                   identifier=None, weight=None, comment=""):
+    """Delete and then add a record to a zone.  identifier and weight are optional."""
     from boto.route53.record import ResourceRecordSets
     changes = ResourceRecordSets(conn, hosted_zone_id, comment)
-    response = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=1)[0]
-    change1 = changes.add_change("DELETE", name, type, response.ttl)
-    for old_value in response.resource_records:
-        change1.add_value(old_value)
-    change2 = changes.add_change("CREATE", name, type, ttl)
-    for new_value in values.split(','):
+    # Assume there are not more than 10 WRRs for a given (name, type)
+    responses = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=10)
+    for response in responses:
+        if response.name != name or response.type != type:
+            continue
+        if response.identifier != identifier or response.weight != weight:
+            continue
+        change1 = changes.add_change("DELETE", name, type, response.ttl,
+                                     identifier=response.identifier,
+                                     weight=response.weight)
+        for old_value in response.resource_records:
+            change1.add_value(old_value)
+
+    change2 = changes.add_change("CREATE", name, type, ttl,
+            identifier=identifier, weight=weight)
+    for new_value in newvalues.split(','):
         change2.add_value(new_value)
     print changes.commit()
 
-def change_alias(conn, hosted_zone_id, name, type, alias_hosted_zone_id, alias_dns_name, comment=""):
-    """Delete and then add an alias to a zone"""
+def change_alias(conn, hosted_zone_id, name, type, new_alias_hosted_zone_id, new_alias_dns_name, identifier=None, weight=None, comment=""):
+    """Delete and then add an alias to a zone.  identifier and weight are optional."""
     from boto.route53.record import ResourceRecordSets
     changes = ResourceRecordSets(conn, hosted_zone_id, comment)
-    response = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=1)[0]
-    change1 = changes.add_change("DELETE", name, type)
-    change1.set_alias(response.alias_hosted_zone_id, response.alias_dns_name)
-    change2 = changes.add_change("CREATE", name, type)
-    change2.set_alias(alias_hosted_zone_id, alias_dns_name)
+    # Assume there are not more than 10 WRRs for a given (name, type)
+    responses = conn.get_all_rrsets(hosted_zone_id, type, name, maxitems=10)
+    for response in responses:
+        if response.name != name or response.type != type:
+            continue
+        if response.identifier != identifier or response.weight != weight:
+            continue
+        change1 = changes.add_change("DELETE", name, type, 
+                                     identifier=response.identifier,
+                                     weight=response.weight)
+        change1.set_alias(response.alias_hosted_zone_id, response.alias_dns_name)
+    change2 = changes.add_change("CREATE", name, type, identifier=identifier, weight=weight)
+    change2.set_alias(new_alias_hosted_zone_id, new_alias_dns_name)
     print changes.commit()
 
 def help(conn, fnc=None):
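
The helpers above now thread identifier/weight through to ResourceRecordSets;
in library terms a weighted CREATE looks like this sketch (zone id reused
from the comment examples, record name and value hypothetical):

    from boto.route53.connection import Route53Connection
    from boto.route53.record import ResourceRecordSets

    conn = Route53Connection()
    changes = ResourceRecordSets(conn, 'ZPO9LGHZ43QB9')
    change = changes.add_change('CREATE', 'wrr.example.com', 'A', ttl=600,
                                identifier='foo9', weight=10)
    change.add_value('1.2.3.4')
    print changes.commit()
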
diff --git a/bin/s3multiput b/bin/s3multiput
index df6e9fe..7631174 100755
--- a/bin/s3multiput
+++ b/bin/s3multiput
@@ -41,8 +41,8 @@
     s3put [-a/--access_key <access_key>] [-s/--secret_key <secret_key>]
           -b/--bucket <bucket_name> [-c/--callback <num_cb>]
           [-d/--debug <debug_level>] [-i/--ignore <ignore_dirs>]
-          [-n/--no_op] [-p/--prefix <prefix>] [-q/--quiet]
-          [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced] path
+          [-n/--no_op] [-p/--prefix <prefix>] [-k/--key_prefix <key_prefix>] 
+          [-q/--quiet] [-g/--grant grant] [-w/--no_overwrite] [-r/--reduced] path
 
     Where
         access_key - Your AWS Access Key ID.  If not supplied, boto will
@@ -76,6 +76,9 @@
                      /bar/fie.baz
                  The prefix must end in a trailing separator and if it
                  does not then one will be added.
+        key_prefix - A prefix to be added to the S3 key name, after any 
+                     stripping of the file path is done based on the 
+                     "-p/--prefix" option.
         reduced - Use Reduced Redundancy storage
         grant - A canned ACL policy that will be granted on each file
                 transferred to S3.  The value of provided must be one
@@ -98,10 +101,10 @@
 def submit_cb(bytes_so_far, total_bytes):
     print '%d bytes transferred / %d bytes total' % (bytes_so_far, total_bytes)
 
-def get_key_name(fullpath, prefix):
+def get_key_name(fullpath, prefix, key_prefix):
     key_name = fullpath[len(prefix):]
     l = key_name.split(os.sep)
-    return '/'.join(l)
+    return key_prefix + '/'.join(l)
 
 def _upload_part(bucketname, aws_key, aws_secret, multipart_id, part_num,
     source_path, offset, bytes, debug, cb, num_cb, amount_of_retries=10):
@@ -189,15 +192,16 @@
     quiet  = False
     no_op  = False
     prefix = '/'
+    key_prefix = ''
     grant  = None
     no_overwrite = False
     reduced = False
 
     try:
-        opts, args = getopt.getopt(sys.argv[1:], 'a:b:c::d:g:hi:np:qs:wr',
-                                   ['access_key', 'bucket', 'callback', 'debug', 'help', 'grant',
-                                    'ignore', 'no_op', 'prefix', 'quiet', 'secret_key', 'no_overwrite',
-                                    'reduced'])
+        opts, args = getopt.getopt(sys.argv[1:], 'a:b:c::d:g:hi:k:np:qs:wr',
+                                   ['access_key=', 'bucket=', 'callback=', 'debug=', 'help', 'grant=',
+                                    'ignore=', 'key_prefix=', 'no_op', 'prefix=', 'quiet', 'secret_key=', 
+                                    'no_overwrite', 'reduced'])
     except:
         usage()
 
@@ -226,6 +230,8 @@
             prefix = a
             if prefix[-1] != os.sep:
                 prefix = prefix + os.sep
+        if o in ('-k', '--key_prefix'):
+            key_prefix = a
         if o in ('-q', '--quiet'):
             quiet = True
         if o in ('-s', '--secret_key'):
@@ -256,7 +262,7 @@
             if not quiet:
                 print 'Getting list of existing keys to check against'
             keys = []
-            for key in b.list():
+            for key in b.list(get_key_name(path, prefix, key_prefix)):
                 keys.append(key.name)
         for root, dirs, files in os.walk(path):
             for ignore in ignore_dirs:
@@ -264,7 +270,7 @@
                     dirs.remove(ignore)
             for file in files:
                 fullpath = os.path.join(root, file)
-                key_name = get_key_name(fullpath, prefix)
+                key_name = get_key_name(fullpath, prefix, key_prefix)
                 copy_file = True
                 if no_overwrite:
                     if key_name in keys:
@@ -285,12 +291,12 @@
                         else:
                             upload(bucket_name, aws_access_key_id,
                                    aws_secret_access_key, fullpath, key_name,
-                                   reduced, debug, cb, num_cb)
+                                   reduced, debug, cb, num_cb, grant or 'private')
                 total += 1
 
     # upload a single file
     elif os.path.isfile(path):
-        key_name = get_key_name(os.path.abspath(path), prefix)
+        key_name = get_key_name(os.path.abspath(path), prefix, key_prefix)
         copy_file = True
         if no_overwrite:
             if b.get_key(key_name):
@@ -311,7 +317,7 @@
                 else:
                     upload(bucket_name, aws_access_key_id,
                            aws_secret_access_key, path, key_name,
-                           reduced, debug, cb, num_cb)
+                           reduced, debug, cb, num_cb, grant or 'private')
 
 if __name__ == "__main__":
-    main()
+    main()
\ No newline at end of file
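
The key-naming change is easiest to see in isolation; this standalone sketch
reproduces the new get_key_name with hypothetical paths:

    import os

    def get_key_name(fullpath, prefix, key_prefix):
        # Strip the local prefix, then prepend the S3 key prefix.
        key_name = fullpath[len(prefix):]
        return key_prefix + '/'.join(key_name.split(os.sep))

    print get_key_name('/home/foo/bar/fie.baz', '/home/foo/', 'backups/')
    # -> backups/bar/fie.baz
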
diff --git a/bin/s3put b/bin/s3put
index a748ec3..9e5c5f2 100755
--- a/bin/s3put
+++ b/bin/s3put
@@ -96,9 +96,9 @@
     try:
         opts, args = getopt.getopt(
                 sys.argv[1:], 'a:b:c::d:g:hi:np:qs:vwr',
-                ['access_key', 'bucket', 'callback', 'debug', 'help', 'grant',
-                 'ignore', 'no_op', 'prefix', 'quiet', 'secret_key',
-                 'no_overwrite', 'reduced']
+                ['access_key=', 'bucket=', 'callback=', 'debug=', 'help',
+                 'grant=', 'ignore=', 'no_op', 'prefix=', 'quiet',
+                 'secret_key=', 'no_overwrite', 'reduced', "header="]
                 )
     except:
         usage()
@@ -116,6 +116,7 @@
     grant = None
     no_overwrite = False
     reduced = False
+    headers = {}
     for o, a in opts:
         if o in ('-h', '--help'):
             usage()
@@ -147,6 +148,9 @@
             quiet = True
         if o in ('-s', '--secret_key'):
             aws_secret_access_key = a
+        if o in ('--header',):
+            (k, v) = a.split("=", 1)
+            headers[k] = v
     if len(args) != 1:
         print usage()
     path = os.path.expanduser(args[0])
@@ -162,13 +166,15 @@
                 if not quiet:
                     print 'Getting list of existing keys to check against'
                 keys = []
-                for key in b.list():
+                for key in b.list(get_key_name(path, prefix)):
                     keys.append(key.name)
             for root, dirs, files in os.walk(path):
                 for ignore in ignore_dirs:
                     if ignore in dirs:
                         dirs.remove(ignore)
                 for file in files:
+                    if file.startswith("."):
+                        continue
                     fullpath = os.path.join(root, file)
                     key_name = get_key_name(fullpath, prefix)
                     copy_file = True
@@ -185,10 +191,11 @@
                             k.set_contents_from_filename(
                                     fullpath, cb=cb, num_cb=num_cb,
                                     policy=grant, reduced_redundancy=reduced,
+                                    headers=headers
                                     )
                     total += 1
         elif os.path.isfile(path):
-            key_name = os.path.split(path)[1]
+            key_name = get_key_name(path, prefix)
             copy_file = True
             if no_overwrite:
                 if b.get_key(key_name):
@@ -199,7 +206,7 @@
                 k = b.new_key(key_name)
                 k.set_contents_from_filename(path, cb=cb, num_cb=num_cb,
                                              policy=grant,
-                                             reduced_redundancy=reduced)
+                                             reduced_redundancy=reduced, headers=headers)
     else:
         print usage()
 
diff --git a/bin/sdbadmin b/bin/sdbadmin
index e8ff9b5..7e87c7b 100755
--- a/bin/sdbadmin
+++ b/bin/sdbadmin
@@ -27,6 +27,15 @@
 import time
 from boto import sdb
 
+# Allow support for JSON
+try:
+    import simplejson as json
+except:
+    try:
+        import json
+    except:
+        json = False
+
 def choice_input(options, default=None, title=None):
     """
     Choice input
@@ -50,11 +59,17 @@
     return choice and len(choice) > 0 and choice[0].lower() == "y"
 
 
-def dump_db(domain, file_name):
+def dump_db(domain, file_name, use_json=False):
     """
     Dump SDB domain to file
     """
-    doc = domain.to_xml(open(file_name, "w"))
+    f = open(file_name, "w")
+    if use_json:
+        for item in domain:
+            data = {"name": item.name, "attributes": item}
+            print >> f, json.dumps(data)
+    else:
+        doc = domain.to_xml(f)
 
 def empty_db(domain):
     """
@@ -63,7 +78,7 @@
     for item in domain:
         item.delete()
 
-def load_db(domain, file):
+def load_db(domain, file, use_json=False):
     """
     Load a domain from a file, this doesn't overwrite any existing
     data in the file so if you want to do a full recovery and restore
@@ -72,7 +87,16 @@
     :param domain: The SDB Domain object to load to
     :param file: The File to load the DB from
     """
-    domain.from_xml(file)
+    if use_json:
+        for line in file.readlines():
+            if line:
+                data = json.loads(line)
+                item = domain.new_item(data['name'])
+                item.update(data['attributes'])
+                item.save()
+                
+    else:
+        domain.from_xml(file)
 
 def create_db(domain_name, region_name):
     """Create a new DB
@@ -95,6 +119,8 @@
     parser.add_option("-c", "--create", help="Create domain", dest="create", default=False, action="store_true")
 
     parser.add_option("-a", "--all-domains", help="Operate on all domains", action="store_true", default=False, dest="all_domains")
+    if json:
+        parser.add_option("-j", "--use-json", help="Load/Store as JSON instead of XML", action="store_true", default=False, dest="json")
     parser.add_option("-d", "--domain", help="Do functions on domain (may be more then one)", action="append", dest="domains")
     parser.add_option("-f", "--file", help="Input/Output file we're operating on", dest="file_name")
     parser.add_option("-r", "--region", help="Region (e.g. us-east-1[default] or eu-west-1)", default="us-east-1", dest="region_name")
@@ -152,7 +178,7 @@
                 file_name = options.file_name
             else:
                 file_name = "%s.db" % domain.name
-            dump_db(domain, file_name)
+            dump_db(domain, file_name, options.json)
 
     if options.load:
         for domain in domains:
@@ -161,7 +187,7 @@
                 file_name = options.file_name
             else:
                 file_name = "%s.db" % domain.name
-            load_db(domain, open(file_name, "rb"))
+            load_db(domain, open(file_name, "rb"), options.json)
 
 
     total_time = round(time.time() - stime, 2)
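
With -j, the dump is one JSON object per line instead of a single XML
document; a minimal sketch of that record format (item name and attribute
values hypothetical):

    import json

    line = json.dumps({"name": "item-1",
                       "attributes": {"color": "red", "size": "10"}})
    # Each line round-trips back into an item name plus its attribute dict.
    data = json.loads(line)
    print data['name'], data['attributes']
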
diff --git a/boto/__init__.py b/boto/__init__.py
index 00e2fc8..b0eb6bd 100644
--- a/boto/__init__.py
+++ b/boto/__init__.py
@@ -1,6 +1,7 @@
-# Copyright (c) 2006-2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010-2011, Eucalyptus Systems, Inc.
 # Copyright (c) 2011, Nexenta Systems Inc.
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
 # All rights reserved.
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
@@ -25,17 +26,22 @@
 from boto.pyami.config import Config, BotoConfigLocations
 from boto.storage_uri import BucketStorageUri, FileStorageUri
 import boto.plugin
-import os, re, sys
+import os
+import platform
+import re
+import sys
 import logging
 import logging.config
+import urlparse
 from boto.exception import InvalidUriError
 
-__version__ = '2.1.1'
-Version = __version__ # for backware compatibility
+__version__ = '2.6.0-dev'
+Version = __version__  # for backward compatibility
 
 UserAgent = 'Boto/%s (%s)' % (__version__, sys.platform)
 config = Config()
 
+
 def init_logging():
     for file in BotoConfigLocations:
         try:
@@ -43,15 +49,20 @@
         except:
             pass
 
+
 class NullHandler(logging.Handler):
     def emit(self, record):
         pass
 
 log = logging.getLogger('boto')
+perflog = logging.getLogger('boto.perf')
 log.addHandler(NullHandler())
+perflog.addHandler(NullHandler())
 init_logging()
 
 # convenience function to set logging to a particular file
+
+
 def set_file_logger(name, filepath, level=logging.INFO, format_string=None):
     global log
     if not format_string:
@@ -65,6 +76,7 @@
     logger.addHandler(fh)
     log = logger
 
+
 def set_stream_logger(name, level=logging.DEBUG, format_string=None):
     global log
     if not format_string:
@@ -78,6 +90,7 @@
     logger.addHandler(fh)
     log = logger
 
+
 def connect_sqs(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -92,6 +105,7 @@
     from boto.sqs.connection import SQSConnection
     return SQSConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+
 def connect_s3(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -106,6 +120,7 @@
     from boto.s3.connection import S3Connection
     return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+
 def connect_gs(gs_access_key_id=None, gs_secret_access_key=None, **kwargs):
     """
     @type gs_access_key_id: string
@@ -120,6 +135,7 @@
     from boto.gs.connection import GSConnection
     return GSConnection(gs_access_key_id, gs_secret_access_key, **kwargs)
 
+
 def connect_ec2(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -134,6 +150,7 @@
     from boto.ec2.connection import EC2Connection
     return EC2Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+
 def connect_elb(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -148,7 +165,9 @@
     from boto.ec2.elb import ELBConnection
     return ELBConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
-def connect_autoscale(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
+
+def connect_autoscale(aws_access_key_id=None, aws_secret_access_key=None,
+                      **kwargs):
     """
     :type aws_access_key_id: string
     :param aws_access_key_id: Your AWS Access Key ID
@@ -160,9 +179,12 @@
     :return: A connection to Amazon's Auto Scaling Service
     """
     from boto.ec2.autoscale import AutoScaleConnection
-    return AutoScaleConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
+    return AutoScaleConnection(aws_access_key_id, aws_secret_access_key,
+                               **kwargs)
 
-def connect_cloudwatch(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
+
+def connect_cloudwatch(aws_access_key_id=None, aws_secret_access_key=None,
+                       **kwargs):
     """
     :type aws_access_key_id: string
     :param aws_access_key_id: Your AWS Access Key ID
@@ -174,7 +196,9 @@
     :return: A connection to Amazon's EC2 Monitoring service
     """
     from boto.ec2.cloudwatch import CloudWatchConnection
-    return CloudWatchConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
+    return CloudWatchConnection(aws_access_key_id, aws_secret_access_key,
+                                **kwargs)
+
 
 def connect_sdb(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
@@ -190,6 +214,7 @@
     from boto.sdb.connection import SDBConnection
     return SDBConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+
 def connect_fps(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -204,7 +229,9 @@
     from boto.fps.connection import FPSConnection
     return FPSConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
-def connect_mturk(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
+
+def connect_mturk(aws_access_key_id=None, aws_secret_access_key=None,
+                  **kwargs):
     """
     :type aws_access_key_id: string
     :param aws_access_key_id: Your AWS Access Key ID
@@ -218,7 +245,9 @@
     from boto.mturk.connection import MTurkConnection
     return MTurkConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
-def connect_cloudfront(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
+
+def connect_cloudfront(aws_access_key_id=None, aws_secret_access_key=None,
+                       **kwargs):
     """
     :type aws_access_key_id: string
     :param aws_access_key_id: Your AWS Access Key ID
@@ -230,7 +259,9 @@
     :return: A connection to FPS
     """
     from boto.cloudfront import CloudFrontConnection
-    return CloudFrontConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
+    return CloudFrontConnection(aws_access_key_id, aws_secret_access_key,
+                                **kwargs)
+
 
 def connect_vpc(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
@@ -246,6 +277,7 @@
     from boto.vpc import VPCConnection
     return VPCConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+
 def connect_rds(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -260,6 +292,7 @@
     from boto.rds import RDSConnection
     return RDSConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+
 def connect_emr(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -274,6 +307,7 @@
     from boto.emr import EmrConnection
     return EmrConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+
 def connect_sns(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -303,7 +337,9 @@
     from boto.iam import IAMConnection
     return IAMConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
-def connect_route53(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
+
+def connect_route53(aws_access_key_id=None, aws_secret_access_key=None,
+                    **kwargs):
     """
     :type aws_access_key_id: string
     :param aws_access_key_id: Your AWS Access Key ID
@@ -315,7 +351,26 @@
     :return: A connection to Amazon's Route53 DNS Service
     """
     from boto.route53 import Route53Connection
-    return Route53Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
+    return Route53Connection(aws_access_key_id, aws_secret_access_key,
+                             **kwargs)
+
+
+def connect_cloudformation(aws_access_key_id=None, aws_secret_access_key=None,
+                           **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.cloudformation.CloudFormationConnection`
+    :return: A connection to Amazon's CloudFormation Service
+    """
+    from boto.cloudformation import CloudFormationConnection
+    return CloudFormationConnection(aws_access_key_id, aws_secret_access_key,
+                                    **kwargs)
+
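A hedged usage sketch for the new connect_cloudformation helper; it assumes credentials are supplied via the environment or the boto config file, and describe_stacks is just one example of what the returned connection exposes.

    import boto

    cfn = boto.connect_cloudformation()
    for stack in cfn.describe_stacks():
        print stack.stack_name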
 
 def connect_euca(host=None, aws_access_key_id=None, aws_secret_access_key=None,
                  port=8773, path='/services/Eucalyptus', is_secure=False,
@@ -355,7 +410,62 @@
                          region=reg, port=port, path=path,
                          is_secure=is_secure, **kwargs)
 
-def connect_walrus(host=None, aws_access_key_id=None, aws_secret_access_key=None,
+
+def connect_glacier(aws_access_key_id=None, aws_secret_access_key=None,
+                    **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.glacier.layer2.Layer2`
+    :return: A connection to Amazon's Glacier Service
+    """
+    from boto.glacier.layer2 import Layer2
+    return Layer2(aws_access_key_id, aws_secret_access_key,
+                  **kwargs)
+
+
+def connect_ec2_endpoint(url, aws_access_key_id=None,
+                         aws_secret_access_key=None,
+                         **kwargs):
+    """
+    Connect to an EC2 API endpoint.  Additional arguments are passed
+    through to connect_ec2.
+
+    :type url: string
+    :param url: A url for the ec2 api endpoint to connect to
+
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.ec2.connection.EC2Connection`
+    :return: A connection to the EC2 API endpoint at the given URL
+    """
+    from boto.ec2.regioninfo import RegionInfo
+
+    purl = urlparse.urlparse(url)
+    kwargs['port'] = purl.port
+    kwargs['host'] = purl.hostname
+    kwargs['path'] = purl.path
+    if not 'is_secure' in kwargs:
+        kwargs['is_secure'] = (purl.scheme == "https")
+
+    kwargs['region'] = RegionInfo(name=purl.hostname,
+                                  endpoint=purl.hostname)
+    kwargs['aws_access_key_id'] = aws_access_key_id
+    kwargs['aws_secret_access_key'] = aws_secret_access_key
+
+    return(connect_ec2(**kwargs))
+
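To make the parameter mapping above concrete, here is how urlparse decomposes a hypothetical endpoint URL into the host, port, path and is_secure values that connect_ec2_endpoint forwards to connect_ec2.

    import urlparse

    purl = urlparse.urlparse('https://ec2.example.internal:8773/services/Eucalyptus')
    print purl.hostname            # 'ec2.example.internal' -> host / RegionInfo
    print purl.port                # 8773                   -> port
    print purl.path                # '/services/Eucalyptus' -> path
    print purl.scheme == 'https'   # True                   -> is_secure default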
+
+def connect_walrus(host=None, aws_access_key_id=None,
+                   aws_secret_access_key=None,
                    port=8773, path='/services/Walrus', is_secure=False,
                    **kwargs):
     """
@@ -387,12 +497,13 @@
                                            None)
     if not host:
         host = config.get('Boto', 'walrus_host', None)
-        
+
     return S3Connection(aws_access_key_id, aws_secret_access_key,
                         host=host, port=port, path=path,
                         calling_format=OrdinaryCallingFormat(),
                         is_secure=is_secure, **kwargs)
 
+
 def connect_ses(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -407,6 +518,7 @@
     from boto.ses import SESConnection
     return SESConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+
 def connect_sts(aws_access_key_id=None, aws_secret_access_key=None, **kwargs):
     """
     :type aws_access_key_id: string
@@ -421,21 +533,21 @@
     from boto.sts import STSConnection
     return STSConnection(aws_access_key_id, aws_secret_access_key, **kwargs)
 
+
 def connect_ia(ia_access_key_id=None, ia_secret_access_key=None,
                is_secure=False, **kwargs):
     """
     Connect to the Internet Archive via their S3-like API.
 
     :type ia_access_key_id: string
-    :param ia_access_key_id: Your IA Access Key ID.  This will also look in your
-                             boto config file for an entry in the Credentials
-                             section called "ia_access_key_id"
+    :param ia_access_key_id: Your IA Access Key ID.  This will also look
+        in your boto config file for an entry in the Credentials
+        section called "ia_access_key_id"
 
     :type ia_secret_access_key: string
     :param ia_secret_access_key: Your IA Secret Access Key.  This will also
-                                 look in your boto config file for an entry
-                                 in the Credentials section called
-                                 "ia_secret_access_key"
+        look in your boto config file for an entry in the Credentials
+        section called "ia_secret_access_key"
 
     :rtype: :class:`boto.s3.connection.S3Connection`
     :return: A connection to the Internet Archive
@@ -453,45 +565,79 @@
                         calling_format=OrdinaryCallingFormat(),
                         is_secure=is_secure, **kwargs)
 
-def check_extensions(module_name, module_path):
+
+def connect_dynamodb(aws_access_key_id=None,
+                     aws_secret_access_key=None,
+                     **kwargs):
     """
-    This function checks for extensions to boto modules.  It should be called in the
-    __init__.py file of all boto modules.  See:
-    http://code.google.com/p/boto/wiki/ExtendModules
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
 
-    for details.
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.dynamodb.layer2.Layer2`
+    :return: A connection to the Layer2 interface for DynamoDB.
     """
-    option_name = '%s_extend' % module_name
-    version = config.get('Boto', option_name, None)
-    if version:
-        dirname = module_path[0]
-        path = os.path.join(dirname, version)
-        if os.path.isdir(path):
-            log.info('extending module %s with: %s' % (module_name, path))
-            module_path.insert(0, path)
+    from boto.dynamodb.layer2 import Layer2
+    return Layer2(aws_access_key_id, aws_secret_access_key, **kwargs)
 
-_aws_cache = {}
 
-def _get_aws_conn(service):
-    global _aws_cache
-    conn = _aws_cache.get(service)
-    if not conn:
-        meth = getattr(sys.modules[__name__], 'connect_' + service)
-        conn = meth()
-        _aws_cache[service] = conn
-    return conn
+def connect_swf(aws_access_key_id=None,
+                aws_secret_access_key=None,
+                **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
 
-def lookup(service, name):
-    global _aws_cache
-    conn = _get_aws_conn(service)
-    obj = _aws_cache.get('.'.join((service, name)), None)
-    if not obj:
-        obj = conn.lookup(name)
-        _aws_cache['.'.join((service, name))] = obj
-    return obj
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.swf.layer1.Layer1`
+    :return: A connection to the Layer1 interface for SWF.
+    """
+    from boto.swf.layer1 import Layer1
+    return Layer1(aws_access_key_id, aws_secret_access_key, **kwargs)
+
+
+def connect_cloudsearch(aws_access_key_id=None,
+                        aws_secret_access_key=None,
+                        **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.cloudsearch.layer2.Layer2`
+    :return: A connection to Amazon's CloudSearch service
+    """
+    from boto.cloudsearch.layer2 import Layer2
+    return Layer2(aws_access_key_id, aws_secret_access_key,
+                  **kwargs)
+
+
+def connect_beanstalk(aws_access_key_id=None,
+                      aws_secret_access_key=None,
+                      **kwargs):
+    """
+    :type aws_access_key_id: string
+    :param aws_access_key_id: Your AWS Access Key ID
+
+    :type aws_secret_access_key: string
+    :param aws_secret_access_key: Your AWS Secret Access Key
+
+    :rtype: :class:`boto.beanstalk.layer1.Layer1`
+    :return: A connection to Amazon's Elastic Beanstalk service
+    """
+    from boto.beanstalk.layer1 import Layer1
+    return Layer1(aws_access_key_id, aws_secret_access_key, **kwargs)
+
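The new layer-oriented helpers (connect_dynamodb, connect_swf, connect_cloudsearch, connect_beanstalk, connect_glacier) all follow the same pattern; below is a minimal DynamoDB sketch, with a hypothetical table name and credentials assumed to come from the environment or boto config.

    import boto

    dynamo = boto.connect_dynamodb()           # boto.dynamodb.layer2.Layer2
    table = dynamo.get_table('my-table')       # hypothetical table
    item = table.new_item(hash_key='id-1', attrs={'colour': 'blue'})
    item.put()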
 
 def storage_uri(uri_str, default_scheme='file', debug=0, validate=True,
-                bucket_storage_uri_class=BucketStorageUri):
+                bucket_storage_uri_class=BucketStorageUri,
+                suppress_consec_slashes=True):
     """
     Instantiate a StorageUri from a URI string.
 
@@ -505,6 +651,8 @@
     :param validate: whether to check for bucket name validity.
     :type bucket_storage_uri_class: BucketStorageUri interface.
     :param bucket_storage_uri_class: Allows mocking for unit tests.
+    :param suppress_consec_slashes: If provided, controls whether
+        consecutive slashes will be suppressed in key paths.
 
     We allow validate to be disabled to allow caller
     to implement bucket-level wildcarding (outside the boto library;
@@ -519,7 +667,8 @@
     * s3://bucket/name
     * gs://bucket
     * s3://bucket
-    * filename
+    * filename (which could be a Unix path like /a/b/c or a Windows path like
+      C:\a\b\c)
 
     The last example uses the default scheme ('file', unless overridden)
     """
@@ -532,8 +681,14 @@
         # Check for common error: user specifies gs:bucket instead
         # of gs://bucket. Some URI parsers allow this, but it can cause
         # confusion for callers, so we don't.
-        if uri_str.find(':') != -1:
-            raise InvalidUriError('"%s" contains ":" instead of "://"' % uri_str)
+        colon_pos = uri_str.find(':')
+        if colon_pos != -1:
+            # Allow Windows path names including drive letter (C: etc.)
+            drive_char = uri_str[0].lower()
+            if not (platform.system().lower().startswith('windows')
+                    and colon_pos == 1
+                    and drive_char >= 'a' and drive_char <= 'z'):
+                raise InvalidUriError(
+                    '"%s" contains ":" instead of "://"' % uri_str)
         scheme = default_scheme.lower()
         path = uri_str
     else:
@@ -566,7 +721,10 @@
         object_name = ''
         if len(path_parts) > 1:
             object_name = path_parts[1]
-        return bucket_storage_uri_class(scheme, bucket_name, object_name, debug)
+        return bucket_storage_uri_class(
+            scheme, bucket_name, object_name, debug,
+            suppress_consec_slashes=suppress_consec_slashes)
+
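A short, hedged illustration of the updated storage_uri behaviour: the new suppress_consec_slashes argument is forwarded to BucketStorageUri, and on Windows a drive-letter path no longer trips the ':' vs '://' check (names below are hypothetical).

    import boto

    uri = boto.storage_uri('s3://mybucket/a/b', validate=False,
                           suppress_consec_slashes=False)
    print uri.bucket_name   # 'mybucket'
    print uri.object_name   # 'a/b'

    # On Windows only; elsewhere the ':' check still raises InvalidUriError.
    local = boto.storage_uri(r'C:\data\file.txt')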
 
 def storage_uri_for_key(key):
     """Returns a StorageUri for the given key.
diff --git a/boto/auth.py b/boto/auth.py
index 084dde9..29f9ac5 100644
--- a/boto/auth.py
+++ b/boto/auth.py
@@ -35,6 +35,9 @@
 import hmac
 import sys
 import urllib
+import time
+import datetime
+import copy
 from email.utils import formatdate
 
 from boto.auth_handler import AuthHandler
@@ -68,12 +71,17 @@
     import sha
     sha256 = None
 
+
 class HmacKeys(object):
     """Key based Auth handler helper."""
 
     def __init__(self, host, config, provider):
         if provider.access_key is None or provider.secret_key is None:
             raise boto.auth_handler.NotReadyToAuthenticate()
+        self.host = host
+        self.update_provider(provider)
+
+    def update_provider(self, provider):
         self._provider = provider
         self._hmac = hmac.new(self._provider.secret_key, digestmod=sha)
         if sha256:
@@ -88,57 +96,97 @@
         else:
             return 'HmacSHA1'
 
-    def sign_string(self, string_to_sign):
-        boto.log.debug('Canonical: %s' % string_to_sign)
+    def _get_hmac(self):
         if self._hmac_256:
-            hmac = self._hmac_256.copy()
+            digestmod = sha256
         else:
-            hmac = self._hmac.copy()
-        hmac.update(string_to_sign)
-        return base64.encodestring(hmac.digest()).strip()
+            digestmod = sha
+        return hmac.new(self._provider.secret_key,
+                        digestmod=digestmod)
+
+    def sign_string(self, string_to_sign):
+        new_hmac = self._get_hmac()
+        new_hmac.update(string_to_sign)
+        return base64.encodestring(new_hmac.digest()).strip()
+
+    def __getstate__(self):
+        pickled_dict = copy.copy(self.__dict__)
+        del pickled_dict['_hmac']
+        del pickled_dict['_hmac_256']
+        return pickled_dict
+
+    def __setstate__(self, dct):
+        self.__dict__ = dct
+        self.update_provider(self._provider)
+
+
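The __getstate__/__setstate__ pair above exists so that handlers built on HmacKeys can be pickled even though hmac objects cannot; a minimal sketch, under the assumption that a Provider built from plain placeholder strings is itself picklable.

    import pickle
    from boto.auth import HmacAuthV1Handler
    from boto.provider import Provider

    provider = Provider('aws', 'access-key', 'secret-key')   # placeholder keys
    handler = HmacAuthV1Handler('s3.amazonaws.com', None, provider)
    restored = pickle.loads(pickle.dumps(handler))   # _hmac/_hmac_256 rebuilt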
+class AnonAuthHandler(AuthHandler, HmacKeys):
+    """
+    Implements Anonymous requests.
+    """
+
+    capability = ['anon']
+
+    def __init__(self, host, config, provider):
+        AuthHandler.__init__(self, host, config, provider)
+
+    def add_auth(self, http_request, **kwargs):
+        pass
+
 
 class HmacAuthV1Handler(AuthHandler, HmacKeys):
     """    Implements the HMAC request signing used by S3 and GS."""
-    
+
     capability = ['hmac-v1', 's3']
-    
+
     def __init__(self, host, config, provider):
         AuthHandler.__init__(self, host, config, provider)
         HmacKeys.__init__(self, host, config, provider)
         self._hmac_256 = None
-        
+
+    def update_provider(self, provider):
+        super(HmacAuthV1Handler, self).update_provider(provider)
+        self._hmac_256 = None
+
     def add_auth(self, http_request, **kwargs):
         headers = http_request.headers
         method = http_request.method
         auth_path = http_request.auth_path
-        if not headers.has_key('Date'):
+        if 'Date' not in headers:
             headers['Date'] = formatdate(usegmt=True)
 
         if self._provider.security_token:
             key = self._provider.security_token_header
             headers[key] = self._provider.security_token
-        c_string = boto.utils.canonical_string(method, auth_path, headers,
-                                               None, self._provider)
-        b64_hmac = self.sign_string(c_string)
+        string_to_sign = boto.utils.canonical_string(method, auth_path,
+                                                     headers, None,
+                                                     self._provider)
+        boto.log.debug('StringToSign:\n%s' % string_to_sign)
+        b64_hmac = self.sign_string(string_to_sign)
         auth_hdr = self._provider.auth_header
         headers['Authorization'] = ("%s %s:%s" %
                                     (auth_hdr,
                                      self._provider.access_key, b64_hmac))
 
+
 class HmacAuthV2Handler(AuthHandler, HmacKeys):
     """
     Implements the simplified HMAC authorization used by CloudFront.
     """
     capability = ['hmac-v2', 'cloudfront']
-    
+
     def __init__(self, host, config, provider):
         AuthHandler.__init__(self, host, config, provider)
         HmacKeys.__init__(self, host, config, provider)
         self._hmac_256 = None
-        
+
+    def update_provider(self, provider):
+        super(HmacAuthV2Handler, self).update_provider(provider)
+        self._hmac_256 = None
+
     def add_auth(self, http_request, **kwargs):
         headers = http_request.headers
-        if not headers.has_key('Date'):
+        if 'Date' not in headers:
             headers['Date'] = formatdate(usegmt=True)
 
         b64_hmac = self.sign_string(headers['Date'])
@@ -146,28 +194,275 @@
         headers['Authorization'] = ("%s %s:%s" %
                                     (auth_hdr,
                                      self._provider.access_key, b64_hmac))
-        
+
+
 class HmacAuthV3Handler(AuthHandler, HmacKeys):
     """Implements the new Version 3 HMAC authorization used by Route53."""
-    
+
     capability = ['hmac-v3', 'route53', 'ses']
-    
+
     def __init__(self, host, config, provider):
         AuthHandler.__init__(self, host, config, provider)
         HmacKeys.__init__(self, host, config, provider)
-        
+
     def add_auth(self, http_request, **kwargs):
         headers = http_request.headers
-        if not headers.has_key('Date'):
+        if 'Date' not in headers:
             headers['Date'] = formatdate(usegmt=True)
 
+        if self._provider.security_token:
+            key = self._provider.security_token_header
+            headers[key] = self._provider.security_token
+
         b64_hmac = self.sign_string(headers['Date'])
         s = "AWS3-HTTPS AWSAccessKeyId=%s," % self._provider.access_key
         s += "Algorithm=%s,Signature=%s" % (self.algorithm(), b64_hmac)
         headers['X-Amzn-Authorization'] = s
 
+
+class HmacAuthV3HTTPHandler(AuthHandler, HmacKeys):
+    """
+    Implements the new Version 3 HMAC authorization used by DynamoDB.
+    """
+
+    capability = ['hmac-v3-http']
+
+    def __init__(self, host, config, provider):
+        AuthHandler.__init__(self, host, config, provider)
+        HmacKeys.__init__(self, host, config, provider)
+
+    def headers_to_sign(self, http_request):
+        """
+        Select the headers from the request that need to be included
+        in the StringToSign.
+        """
+        headers_to_sign = {'Host': self.host}
+        for name, value in http_request.headers.items():
+            lname = name.lower()
+            if lname.startswith('x-amz'):
+                headers_to_sign[name] = value
+        return headers_to_sign
+
+    def canonical_headers(self, headers_to_sign):
+        """
+        Return the headers that need to be included in the StringToSign
+        in their canonical form by converting all header keys to lower
+        case, sorting them in alphabetical order and then joining
+        them into a string, separated by newlines.
+        """
+        l = sorted(['%s:%s' % (n.lower().strip(),
+                    headers_to_sign[n].strip()) for n in headers_to_sign])
+        return '\n'.join(l)
+
+    def string_to_sign(self, http_request):
+        """
+        Return the canonical StringToSign as well as a dict
+        containing the original version of all headers that
+        were included in the StringToSign.
+        """
+        headers_to_sign = self.headers_to_sign(http_request)
+        canonical_headers = self.canonical_headers(headers_to_sign)
+        string_to_sign = '\n'.join([http_request.method,
+                                    http_request.path,
+                                    '',
+                                    canonical_headers,
+                                    '',
+                                    http_request.body])
+        return string_to_sign, headers_to_sign
+
+    def add_auth(self, req, **kwargs):
+        """
+        Add AWS3 authentication to a request.
+
+        :type req: :class`boto.connection.HTTPRequest`
+        :param req: The HTTPRequest object.
+        """
+        # This could be a retry.  Make sure the previous
+        # authorization header is removed first.
+        if 'X-Amzn-Authorization' in req.headers:
+            del req.headers['X-Amzn-Authorization']
+        req.headers['X-Amz-Date'] = formatdate(usegmt=True)
+        if self._provider.security_token:
+            req.headers['X-Amz-Security-Token'] = self._provider.security_token
+        string_to_sign, headers_to_sign = self.string_to_sign(req)
+        boto.log.debug('StringToSign:\n%s' % string_to_sign)
+        hash_value = sha256(string_to_sign).digest()
+        b64_hmac = self.sign_string(hash_value)
+        s = "AWS3 AWSAccessKeyId=%s," % self._provider.access_key
+        s += "Algorithm=%s," % self.algorithm()
+        s += "SignedHeaders=%s," % ';'.join(headers_to_sign)
+        s += "Signature=%s" % b64_hmac
+        req.headers['X-Amzn-Authorization'] = s
+
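For clarity, a standalone sketch of the canonical-header step used by HmacAuthV3HTTPHandler above: lower-case the header names, sort, and join with newlines (header values here are illustrative).

    headers_to_sign = {
        'Host': 'dynamodb.us-east-1.amazonaws.com',
        'X-Amz-Date': 'Mon, 01 Oct 2012 00:00:00 GMT',
        'X-Amz-Target': 'DynamoDB_20111205.ListTables',
    }
    canonical = '\n'.join(sorted('%s:%s' % (name.lower().strip(), value.strip())
                                 for name, value in headers_to_sign.items()))
    print canonical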
+
+class HmacAuthV4Handler(AuthHandler, HmacKeys):
+    """
+    Implements the new Version 4 HMAC authorization.
+    """
+
+    capability = ['hmac-v4']
+
+    def __init__(self, host, config, provider):
+        AuthHandler.__init__(self, host, config, provider)
+        HmacKeys.__init__(self, host, config, provider)
+
+    def _sign(self, key, msg, hex=False):
+        if hex:
+            sig = hmac.new(key, msg.encode('utf-8'), sha256).hexdigest()
+        else:
+            sig = hmac.new(key, msg.encode('utf-8'), sha256).digest()
+        return sig
+
+    def headers_to_sign(self, http_request):
+        """
+        Select the headers from the request that need to be included
+        in the StringToSign.
+        """
+        headers_to_sign = {'Host': self.host}
+        for name, value in http_request.headers.items():
+            lname = name.lower()
+            if lname.startswith('x-amz'):
+                headers_to_sign[name] = value
+        return headers_to_sign
+
+    def query_string(self, http_request):
+        parameter_names = sorted(http_request.params.keys())
+        pairs = []
+        for pname in parameter_names:
+            pval = str(http_request.params[pname]).encode('utf-8')
+            pairs.append(urllib.quote(pname, safe='') + '=' +
+                         urllib.quote(pval, safe='-_~'))
+        return '&'.join(pairs)
+
+    def canonical_query_string(self, http_request):
+        l = []
+        for param in http_request.params:
+            value = str(http_request.params[param])
+            l.append('%s=%s' % (urllib.quote(param, safe='-_.~'),
+                                urllib.quote(value, safe='-_.~')))
+        l = sorted(l)
+        return '&'.join(l)
+
+    def canonical_headers(self, headers_to_sign):
+        """
+        Return the headers that need to be included in the StringToSign
+        in their canonical form by converting all header keys to lower
+        case, sorting them in alphabetical order and then joining
+        them into a string, separated by newlines.
+        """
+        l = ['%s:%s' % (n.lower().strip(),
+                      headers_to_sign[n].strip()) for n in headers_to_sign]
+        l = sorted(l)
+        return '\n'.join(l)
+
+    def signed_headers(self, headers_to_sign):
+        l = ['%s' % n.lower().strip() for n in headers_to_sign]
+        l = sorted(l)
+        return ';'.join(l)
+
+    def canonical_uri(self, http_request):
+        return http_request.path
+
+    def payload(self, http_request):
+        body = http_request.body
+        # If the body is a file like object, we can use
+        # boto.utils.compute_hash, which will avoid reading
+        # the entire body into memory.
+        if hasattr(body, 'seek') and hasattr(body, 'read'):
+            return boto.utils.compute_hash(body, hash_algorithm=sha256)[0]
+        return sha256(http_request.body).hexdigest()
+
+    def canonical_request(self, http_request):
+        cr = [http_request.method.upper()]
+        cr.append(self.canonical_uri(http_request))
+        cr.append(self.canonical_query_string(http_request))
+        headers_to_sign = self.headers_to_sign(http_request)
+        cr.append(self.canonical_headers(headers_to_sign) + '\n')
+        cr.append(self.signed_headers(headers_to_sign))
+        cr.append(self.payload(http_request))
+        return '\n'.join(cr)
+
+    def scope(self, http_request):
+        scope = [self._provider.access_key]
+        scope.append(http_request.timestamp)
+        scope.append(http_request.region_name)
+        scope.append(http_request.service_name)
+        scope.append('aws4_request')
+        return '/'.join(scope)
+
+    def credential_scope(self, http_request):
+        scope = []
+        http_request.timestamp = http_request.headers['X-Amz-Date'][0:8]
+        scope.append(http_request.timestamp)
+        parts = http_request.host.split('.')
+        if len(parts) == 3:
+            http_request.region_name = 'us-east-1'
+        else:
+            http_request.region_name = parts[1]
+        scope.append(http_request.region_name)
+        http_request.service_name = parts[0]
+        scope.append(http_request.service_name)
+        scope.append('aws4_request')
+        return '/'.join(scope)
+
+    def string_to_sign(self, http_request, canonical_request):
+        """
+        Return the canonical StringToSign as well as a dict
+        containing the original version of all headers that
+        were included in the StringToSign.
+        """
+        sts = ['AWS4-HMAC-SHA256']
+        sts.append(http_request.headers['X-Amz-Date'])
+        sts.append(self.credential_scope(http_request))
+        sts.append(sha256(canonical_request).hexdigest())
+        return '\n'.join(sts)
+
+    def signature(self, http_request, string_to_sign):
+        key = self._provider.secret_key
+        k_date = self._sign(('AWS4' + key).encode('utf-8'),
+                              http_request.timestamp)
+        k_region = self._sign(k_date, http_request.region_name)
+        k_service = self._sign(k_region, http_request.service_name)
+        k_signing = self._sign(k_service, 'aws4_request')
+        return self._sign(k_signing, string_to_sign, hex=True)
+
+    def add_auth(self, req, **kwargs):
+        """
+        Add AWS4 authentication to a request.
+
+        :type req: :class`boto.connection.HTTPRequest`
+        :param req: The HTTPRequest object.
+        """
+        # This could be a retry.  Make sure the previous
+        # authorization header is removed first.
+        if 'X-Amzn-Authorization' in req.headers:
+            del req.headers['X-Amzn-Authorization']
+        now = datetime.datetime.utcnow()
+        req.headers['X-Amz-Date'] = now.strftime('%Y%m%dT%H%M%SZ')
+        if self._provider.security_token:
+            req.headers['X-Amz-Security-Token'] = self._provider.security_token
+        canonical_request = self.canonical_request(req)
+        boto.log.debug('CanonicalRequest:\n%s' % canonical_request)
+        string_to_sign = self.string_to_sign(req, canonical_request)
+        boto.log.debug('StringToSign:\n%s' % string_to_sign)
+        signature = self.signature(req, string_to_sign)
+        boto.log.debug('Signature:\n%s' % signature)
+        headers_to_sign = self.headers_to_sign(req)
+        l = ['AWS4-HMAC-SHA256 Credential=%s' % self.scope(req)]
+        l.append('SignedHeaders=%s' % self.signed_headers(headers_to_sign))
+        l.append('Signature=%s' % signature)
+        req.headers['Authorization'] = ','.join(l)
+        qs = self.query_string(req)
+        if qs:
+            req.path = req.path.split('?')[0]
+            req.path = req.path + '?' + qs
+
+
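A standalone sketch of the AWS4 signing-key derivation performed by HmacAuthV4Handler.signature above; the secret, date, region, service and string-to-sign are placeholders.

    import hmac
    from hashlib import sha256

    def sign(key, msg):
        return hmac.new(key, msg.encode('utf-8'), sha256).digest()

    secret = 'example-secret-key'
    k_date = sign(('AWS4' + secret).encode('utf-8'), '20120929')
    k_region = sign(k_date, 'us-east-1')
    k_service = sign(k_region, 'dynamodb')
    k_signing = sign(k_service, 'aws4_request')
    print hmac.new(k_signing, 'string-to-sign', sha256).hexdigest()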
 class QuerySignatureHelper(HmacKeys):
-    """Helper for Query signature based Auth handler.
+    """
+    Helper for Query signature based Auth handler.
 
     Concrete subclasses need to implement the _calc_signature method.
     """
@@ -184,7 +479,7 @@
         boto.log.debug('query_string: %s Signature: %s' % (qs, signature))
         if http_request.method == 'POST':
             headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'
-            http_request.body = qs + '&Signature=' + urllib.quote(signature)
+            http_request.body = qs + '&Signature=' + urllib.quote_plus(signature)
             http_request.headers['Content-Length'] = str(len(http_request.body))
         else:
             http_request.body = ''
@@ -192,7 +487,8 @@
             # already be there, we need to get rid of that and rebuild it
             http_request.path = http_request.path.split('?')[0]
             http_request.path = (http_request.path + '?' + qs +
-                                 '&Signature=' + urllib.quote(signature))
+                                 '&Signature=' + urllib.quote_plus(signature))
+
 
 class QuerySignatureV0AuthHandler(QuerySignatureHelper, AuthHandler):
     """Provides Signature V0 Signing"""
@@ -202,11 +498,11 @@
 
     def _calc_signature(self, params, *args):
         boto.log.debug('using _calc_signature_0')
-        hmac = self._hmac.copy()
+        hmac = self._get_hmac()
         s = params['Action'] + params['Timestamp']
         hmac.update(s)
         keys = params.keys()
-        keys.sort(cmp = lambda x, y: cmp(x.lower(), y.lower()))
+        keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower()))
         pairs = []
         for key in keys:
             val = boto.utils.get_utf8_value(params[key])
@@ -214,6 +510,7 @@
         qs = '&'.join(pairs)
         return (qs, base64.b64encode(hmac.digest()))
 
+
 class QuerySignatureV1AuthHandler(QuerySignatureHelper, AuthHandler):
     """
     Provides Query Signature V1 Authentication.
@@ -224,9 +521,9 @@
 
     def _calc_signature(self, params, *args):
         boto.log.debug('using _calc_signature_1')
-        hmac = self._hmac.copy()
+        hmac = self._get_hmac()
         keys = params.keys()
-        keys.sort(cmp = lambda x, y: cmp(x.lower(), y.lower()))
+        keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower()))
         pairs = []
         for key in keys:
             hmac.update(key)
@@ -236,6 +533,7 @@
         qs = '&'.join(pairs)
         return (qs, base64.b64encode(hmac.digest()))
 
+
 class QuerySignatureV2AuthHandler(QuerySignatureHelper, AuthHandler):
     """Provides Query Signature V2 Authentication."""
 
@@ -246,16 +544,11 @@
     def _calc_signature(self, params, verb, path, server_name):
         boto.log.debug('using _calc_signature_2')
         string_to_sign = '%s\n%s\n%s\n' % (verb, server_name.lower(), path)
-        if self._hmac_256:
-            hmac = self._hmac_256.copy()
-            params['SignatureMethod'] = 'HmacSHA256'
-        else:
-            hmac = self._hmac.copy()
-            params['SignatureMethod'] = 'HmacSHA1'
+        hmac = self._get_hmac()
+        params['SignatureMethod'] = self.algorithm()
         if self._provider.security_token:
             params['SecurityToken'] = self._provider.security_token
-        keys = params.keys()
-        keys.sort()
+        keys = sorted(params.keys())
         pairs = []
         for key in keys:
             val = boto.utils.get_utf8_value(params[key])
@@ -272,6 +565,34 @@
         return (qs, b64)
 
 
+class POSTPathQSV2AuthHandler(QuerySignatureV2AuthHandler, AuthHandler):
+    """
+    Provides Query Signature V2 authentication, relocating the signed
+    query string into the request path and allowing POST requests with
+    arbitrary Content-Types.
+    """
+
+    capability = ['mws']
+
+    def add_auth(self, req, **kwargs):
+        req.params['AWSAccessKeyId'] = self._provider.access_key
+        req.params['SignatureVersion'] = self.SignatureVersion
+        req.params['Timestamp'] = boto.utils.get_ts()
+        qs, signature = self._calc_signature(req.params, req.method,
+                                             req.auth_path, req.host)
+        boto.log.debug('query_string: %s Signature: %s' % (qs, signature))
+        if req.method == 'POST':
+            req.headers['Content-Length'] = str(len(req.body))
+            req.headers['Content-Type'] = req.headers.get('Content-Type',
+                                                          'text/plain')
+        else:
+            req.body = ''
+        # if this is a retried req, the qs from the previous try will
+        # already be there, we need to get rid of that and rebuild it
+        req.path = req.path.split('?')[0]
+        req.path = (req.path + '?' + qs +
+                             '&Signature=' + urllib.quote_plus(signature))
+
+
 def get_auth_handler(host, config, provider, requested_capability=None):
     """Finds an AuthHandler that is ready to authenticate.
 
@@ -281,7 +602,7 @@
     :type host: string
     :param host: The name of the host
 
-    :type config: 
+    :type config:
     :param config:
 
     :type provider:
@@ -302,13 +623,13 @@
             ready_handlers.append(handler(host, config, provider))
         except boto.auth_handler.NotReadyToAuthenticate:
             pass
- 
+
     if not ready_handlers:
         checked_handlers = auth_handlers
         names = [handler.__name__ for handler in checked_handlers]
         raise boto.exception.NoAuthHandlerFound(
               'No handler was ready to authenticate. %d handlers were checked.'
-              ' %s ' 
+              ' %s '
               'Check your credentials' % (len(names), str(names)))
 
     if len(ready_handlers) > 1:
diff --git a/boto/fps/test/__init__.py b/boto/beanstalk/__init__.py
similarity index 100%
rename from boto/fps/test/__init__.py
rename to boto/beanstalk/__init__.py
diff --git a/boto/beanstalk/exception.py b/boto/beanstalk/exception.py
new file mode 100644
index 0000000..c209cef
--- /dev/null
+++ b/boto/beanstalk/exception.py
@@ -0,0 +1,64 @@
+import sys
+import json
+from boto.exception import BotoServerError
+
+
+def simple(e):
+    err = json.loads(e.error_message)
+    code = err['Error']['Code']
+
+    try:
+        # Dynamically get the error class.
+        simple_e = getattr(sys.modules[__name__], code)(e, err)
+    except AttributeError:
+        # Return original exception on failure.
+        return e
+
+    return simple_e
+
+
+class SimpleException(BotoServerError):
+    def __init__(self, e, err):
+        super(SimpleException, self).__init__(e.status, e.reason, e.body)
+        self.body = e.error_message
+        self.request_id = err['RequestId']
+        self.error_code = err['Error']['Code']
+        self.error_message = err['Error']['Message']
+
+    def __repr__(self):
+        return self.__class__.__name__ + ': ' + self.error_message
+
+    def __str__(self):
+        return self.__class__.__name__ + ': ' + self.error_message
+
+
+class ValidationError(SimpleException): pass
+
+# Common beanstalk exceptions.
+class IncompleteSignature(SimpleException): pass
+class InternalFailure(SimpleException): pass
+class InvalidAction(SimpleException): pass
+class InvalidClientTokenId(SimpleException): pass
+class InvalidParameterCombination(SimpleException): pass
+class InvalidParameterValue(SimpleException): pass
+class InvalidQueryParameter(SimpleException): pass
+class MalformedQueryString(SimpleException): pass
+class MissingAction(SimpleException): pass
+class MissingAuthenticationToken(SimpleException): pass
+class MissingParameter(SimpleException): pass
+class OptInRequired(SimpleException): pass
+class RequestExpired(SimpleException): pass
+class ServiceUnavailable(SimpleException): pass
+class Throttling(SimpleException): pass
+
+
+# Action specific exceptions.
+class TooManyApplications(SimpleException): pass
+class InsufficientPrivileges(SimpleException): pass
+class S3LocationNotInServiceRegion(SimpleException): pass
+class TooManyApplicationVersions(SimpleException): pass
+class TooManyConfigurationTemplates(SimpleException): pass
+class TooManyEnvironments(SimpleException): pass
+class S3SubscriptionRequired(SimpleException): pass
+class TooManyBuckets(SimpleException): pass
+class OperationInProgress(SimpleException): pass
+class SourceBundleDeletion(SimpleException): pass
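A hedged sketch of how simple() above is intended to be used: catch the generic BotoServerError raised by a Layer1 call and re-raise the mapped subclass (the connection and application name are hypothetical).

    import boto
    import boto.beanstalk.exception as beanstalk_exception
    from boto.exception import BotoServerError

    beanstalk = boto.connect_beanstalk()
    try:
        beanstalk.create_application('already-exists')   # hypothetical name
    except BotoServerError, e:
        raise beanstalk_exception.simple(e)   # e.g. TooManyApplications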
diff --git a/boto/beanstalk/layer1.py b/boto/beanstalk/layer1.py
new file mode 100644
index 0000000..5e994e1
--- /dev/null
+++ b/boto/beanstalk/layer1.py
@@ -0,0 +1,1167 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import json
+
+import boto
+import boto.jsonresponse
+from boto.regioninfo import RegionInfo
+from boto.connection import AWSQueryConnection
+
+
+class Layer1(AWSQueryConnection):
+
+    APIVersion = '2010-12-01'
+    DefaultRegionName = 'us-east-1'
+    DefaultRegionEndpoint = 'elasticbeanstalk.us-east-1.amazonaws.com'
+
+    def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
+                 is_secure=True, port=None,
+                 proxy=None, proxy_port=None,
+                 proxy_user=None, proxy_pass=None, debug=0,
+                 https_connection_factory=None, region=None, path='/',
+                 api_version=None, security_token=None):
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        self.region = region
+        AWSQueryConnection.__init__(self, aws_access_key_id,
+                                    aws_secret_access_key,
+                                    is_secure, port, proxy, proxy_port,
+                                    proxy_user, proxy_pass,
+                                    self.region.endpoint, debug,
+                                    https_connection_factory, path,
+                                    security_token)
+
+    def _required_auth_capability(self):
+        return ['sign-v2']
+
+    def _encode_bool(self, v):
+        v = bool(v)
+        return {True: "true", False: "false"}[v]
+
+    def _get_response(self, action, params, path='/', verb='GET'):
+        params['ContentType'] = 'JSON'
+        response = self.make_request(action, params, path, verb)
+        body = response.read()
+        boto.log.debug(body)
+        if response.status == 200:
+            return json.loads(body)
+        else:
+            raise self.ResponseError(response.status, response.reason, body)
+
+    def check_dns_availability(self, cname_prefix):
+        """Checks if the specified CNAME is available.
+
+        :type cname_prefix: string
+        :param cname_prefix: The prefix used when this CNAME is
+        reserved.
+        """
+        params = {'CNAMEPrefix': cname_prefix}
+        return self._get_response('CheckDNSAvailability', params)
+
+    def create_application(self, application_name, description=None):
+        """
+        Creates an application that has one configuration template
+        named default and no application versions.
+
+        :type application_name: string
+        :param application_name: The name of the application.
+        Constraint: This name must be unique within your account. If the
+        specified name already exists, the action returns an
+        InvalidParameterValue error.
+
+        :type description: string
+        :param description: Describes the application.
+
+        :raises: TooManyApplicationsException
+        """
+        params = {'ApplicationName': application_name}
+        if description:
+            params['Description'] = description
+        return self._get_response('CreateApplication', params)
+
+    def create_application_version(self, application_name, version_label,
+                                   description=None, s3_bucket=None,
+                                   s3_key=None, auto_create_application=None):
+        """Creates an application version for the specified application.
+
+        :type application_name: string
+        :param application_name: The name of the application. If no
+        application is found with this name, and AutoCreateApplication
+        is false, returns an InvalidParameterValue error.
+
+        :type version_label: string
+        :param version_label: A label identifying this
+        version. Constraint: Must be unique per application. If an
+        application version already exists with this label for the
+        specified application, AWS Elastic Beanstalk returns an
+        InvalidParameterValue error.
+
+        :type description: string
+        :param description: Describes this version.
+
+        :type s3_bucket: string
+        :param s3_bucket: The Amazon S3 bucket where the data is
+        located.
+
+        :type s3_key: string
+        :param s3_key: The Amazon S3 key where the data is located.
+        Both s3_bucket and s3_key must be specified in order to use
+        a specific source bundle.  If both of these values are not specified
+        the sample application will be used.
+
+        :type auto_create_application: boolean
+        :param auto_create_application: Determines how the system
+        behaves if the specified application for this version does not
+        already exist:  true: Automatically creates the specified
+        application for this version if it does not already exist.
+        false: Returns an InvalidParameterValue if the specified
+        application for this version does not already exist.  Default:
+        false  Valid Values: true | false
+
+        :raises: TooManyApplicationsException,
+                 TooManyApplicationVersionsException,
+                 InsufficientPrivilegesException,
+                 S3LocationNotInServiceRegionException
+
+        """
+        params = {'ApplicationName': application_name,
+                  'VersionLabel': version_label}
+        if description:
+            params['Description'] = description
+        if s3_bucket and s3_key:
+            params['SourceBundle.S3Bucket'] = s3_bucket
+            params['SourceBundle.S3Key'] = s3_key
+        if auto_create_application:
+            params['AutoCreateApplication'] = self._encode_bool(
+                auto_create_application)
+        return self._get_response('CreateApplicationVersion', params)
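A minimal, hedged call sketch for create_application_version, deploying a source bundle already uploaded to S3; all names are hypothetical.

    import boto

    beanstalk = boto.connect_beanstalk()
    beanstalk.create_application_version(
        application_name='my-app',
        version_label='v1',
        description='first cut',
        s3_bucket='my-deploy-bucket',
        s3_key='my-app-v1.zip',
        auto_create_application=True)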
+
+    def create_configuration_template(self, application_name, template_name,
+                                      solution_stack_name=None,
+                                      source_configuration_application_name=None,
+                                      source_configuration_template_name=None,
+                                      environment_id=None, description=None,
+                                      option_settings=None):
+        """Creates a configuration template.
+
+        Templates are associated with a specific application and are used to
+        deploy different versions of the application with the same
+        configuration settings.
+
+        :type application_name: string
+        :param application_name: The name of the application to
+        associate with this configuration template. If no application is
+        found with this name, AWS Elastic Beanstalk returns an
+        InvalidParameterValue error.
+
+        :type template_name: string
+        :param template_name: The name of the configuration
+        template. Constraint: This name must be unique per application.
+        Default: If a configuration template already exists with this
+        name, AWS Elastic Beanstalk returns an InvalidParameterValue
+        error.
+
+        :type solution_stack_name: string
+        :param solution_stack_name: The name of the solution stack used
+        by this configuration. The solution stack specifies the
+        operating system, architecture, and application server for a
+        configuration template. It determines the set of configuration
+        options as well as the possible and default values.  Use
+        ListAvailableSolutionStacks to obtain a list of available
+        solution stacks.  Default: If the SolutionStackName is not
+        specified and the source configuration parameter is blank, AWS
+        Elastic Beanstalk uses the default solution stack. If not
+        specified and the source configuration parameter is specified,
+        AWS Elastic Beanstalk uses the same solution stack as the source
+        configuration template.
+
+        :type source_configuration_application_name: string
+        :param source_configuration_application_name: The name of the
+        application associated with the configuration.
+
+        :type source_configuration_template_name: string
+        :param source_configuration_template_name: The name of the
+        configuration template.
+
+        :type environment_id: string
+        :param environment_id: The ID of the environment used with this
+        configuration template.
+
+        :type description: string
+        :param description: Describes this configuration.
+
+        :type option_settings: list
+        :param option_settings: If specified, AWS Elastic Beanstalk sets
+        the specified configuration option to the requested value. The
+        new value overrides the value obtained from the solution stack
+        or the source configuration template.
+
+        :raises: InsufficientPrivilegesException,
+                 TooManyConfigurationTemplatesException
+        """
+        params = {'ApplicationName': application_name,
+                  'TemplateName': template_name}
+        if solution_stack_name:
+            params['SolutionStackName'] = solution_stack_name
+        if source_configuration_application_name:
+            params['ApplicationName'] = source_configuration_application_name
+        if source_configuration_template_name:
+            params['TemplateName'] = source_configuration_template_name
+        if environment_id:
+            params['EnvironmentId'] = environment_id
+        if description:
+            params['Description'] = description
+        if option_settings:
+            self._build_list_params(params, option_settings,
+                                   'OptionSettings.member',
+                                   ('Namespace', 'OptionName', 'Value'))
+        return self._get_response('CreateConfigurationTemplate', params)
+
+    def create_environment(self, application_name, environment_name,
+                           version_label=None, template_name=None,
+                           solution_stack_name=None, cname_prefix=None,
+                           description=None, option_settings=None,
+                           options_to_remove=None):
+        """Launches an environment for the application using a configuration.
+
+        :type application_name: string
+        :param application_name: The name of the application that
+        contains the version to be deployed.  If no application is found
+        with this name, CreateEnvironment returns an
+        InvalidParameterValue error.
+
+        :type version_label: string
+        :param version_label: The name of the application version to
+        deploy. If the specified application has no associated
+        application versions, AWS Elastic Beanstalk UpdateEnvironment
+        returns an InvalidParameterValue error.  Default: If not
+        specified, AWS Elastic Beanstalk attempts to launch the most
+        recently created application version.
+
+        :type environment_name: string
+        :param environment_name: A unique name for the deployment
+        environment. Used in the application URL. Constraint: Must be
+        from 4 to 23 characters in length. The name can contain only
+        letters, numbers, and hyphens. It cannot start or end with a
+        hyphen. This name must be unique in your account. If the
+        specified name already exists, AWS Elastic Beanstalk returns an
+        InvalidParameterValue error. Default: If the CNAME parameter is
+        not specified, the environment name becomes part of the CNAME,
+        and therefore part of the visible URL for your application.
+
+        :type template_name: string
+        :param template_name: The name of the configuration template to
+        use in deployment. If no configuration template is found with
+        this name, AWS Elastic Beanstalk returns an
+        InvalidParameterValue error.  Condition: You must specify either
+        this parameter or a SolutionStackName, but not both. If you
+        specify both, AWS Elastic Beanstalk returns an
+        InvalidParameterCombination error. If you do not specify either,
+        AWS Elastic Beanstalk returns a MissingRequiredParameter error.
+
+        :type solution_stack_name: string
+        :param solution_stack_name: This is an alternative to specifying
+        a configuration name. If specified, AWS Elastic Beanstalk sets
+        the configuration values to the default values associated with
+        the specified solution stack.  Condition: You must specify
+        either this or a TemplateName, but not both. If you specify
+        both, AWS Elastic Beanstalk returns an
+        InvalidParameterCombination error. If you do not specify either,
+        AWS Elastic Beanstalk returns a MissingRequiredParameter error.
+
+        :type cname_prefix: string
+        :param cname_prefix: If specified, the environment attempts to
+        use this value as the prefix for the CNAME. If not specified,
+        the environment uses the environment name.
+
+        :type description: string
+        :param description: Describes this environment.
+
+        :type option_settings: list
+        :param option_settings: If specified, AWS Elastic Beanstalk sets
+        the specified configuration options to the requested value in
+        the configuration set for the new environment. These override
+        the values obtained from the solution stack or the configuration
+        template.  Each element in the list is a tuple of (Namespace,
+        OptionName, Value), for example::
+
+            [('aws:autoscaling:launchconfiguration',
+              'Ec2KeyName', 'mykeypair')]
+
+        :type options_to_remove: list
+        :param options_to_remove: A list of custom user-defined
+        configuration options to remove from the configuration set for
+        this new environment.
+
+        :raises: TooManyEnvironmentsException, InsufficientPrivilegesException
+
+        """
+        params = {'ApplicationName': application_name,
+                  'EnvironmentName': environment_name}
+        if version_label:
+            params['VersionLabel'] = version_label
+        if template_name:
+            params['TemplateName'] = template_name
+        if solution_stack_name:
+            params['SolutionStackName'] = solution_stack_name
+        if cname_prefix:
+            params['CNAMEPrefix'] = cname_prefix
+        if description:
+            params['Description'] = description
+        if option_settings:
+            self._build_list_params(params, option_settings,
+                                   'OptionSettings.member',
+                                   ('Namespace', 'OptionName', 'Value'))
+        if options_to_remove:
+            self.build_list_params(params, options_to_remove,
+                                   'OptionsToRemove.member')
+        return self._get_response('CreateEnvironment', params)
+
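+    # Usage sketch (illustrative, not part of boto): ``beanstalk`` stands for
+    # a connected layer1 client, and every name below ('myapp', 'myapp-prod',
+    # 'v1', the solution stack string, 'mykeypair') is a placeholder.  Note
+    # that exactly one of solution_stack_name or template_name is given.
+    #
+    #     beanstalk.create_environment(
+    #         application_name='myapp',
+    #         environment_name='myapp-prod',
+    #         version_label='v1',
+    #         solution_stack_name='64bit Amazon Linux running Python',
+    #         option_settings=[('aws:autoscaling:launchconfiguration',
+    #                           'Ec2KeyName', 'mykeypair')])
+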
+    def create_storage_location(self):
+        """
+        Creates the Amazon S3 storage location for the account.  This
+        location is used to store user log files.
+
+        :raises: TooManyBucketsException,
+                 S3SubscriptionRequiredException,
+                 InsufficientPrivilegesException
+
+        """
+        return self._get_response('CreateStorageLocation', params={})
+
+    def delete_application(self, application_name,
+                           terminate_env_by_force=None):
+        """
+        Deletes the specified application along with all associated
+        versions and configurations. The application versions will not
+        be deleted from your Amazon S3 bucket.
+
+        :type application_name: string
+        :param application_name: The name of the application to delete.
+
+        :type terminate_env_by_force: boolean
+        :param terminate_env_by_force: When set to true, running
+        environments will be terminated before deleting the application.
+
+        :raises: OperationInProgressException
+
+        """
+        params = {'ApplicationName': application_name}
+        if terminate_env_by_force:
+            params['TerminateEnvByForce'] = self._encode_bool(
+                terminate_env_by_force)
+        return self._get_response('DeleteApplication', params)
+
+    def delete_application_version(self, application_name, version_label,
+                                   delete_source_bundle=None):
+        """Deletes the specified version from the specified application.
+
+        :type application_name: string
+        :param application_name: The name of the application to delete
+        releases from.
+
+        :type version_label: string
+        :param version_label: The label of the version to delete.
+
+        :type delete_source_bundle: boolean
+        :param delete_source_bundle: Indicates whether to delete the
+        associated source bundle from Amazon S3.  Valid Values: true | false
+
+        :raises: SourceBundleDeletionException,
+                 InsufficientPrivilegesException,
+                 OperationInProgressException,
+                 S3LocationNotInServiceRegionException
+        """
+        params = {'ApplicationName': application_name,
+                  'VersionLabel': version_label}
+        if delete_source_bundle:
+            params['DeleteSourceBundle'] = self._encode_bool(
+                delete_source_bundle)
+        return self._get_response('DeleteApplicationVersion', params)
+
+    def delete_configuration_template(self, application_name, template_name):
+        """Deletes the specified configuration template.
+
+        :type application_name: string
+        :param application_name: The name of the application to delete
+        the configuration template from.
+
+        :type template_name: string
+        :param template_name: The name of the configuration template to
+        delete.
+
+        :raises: OperationInProgressException
+
+        """
+        params = {'ApplicationName': application_name,
+                  'TemplateName': template_name}
+        return self._get_response('DeleteConfigurationTemplate', params)
+
+    def delete_environment_configuration(self, application_name,
+                                         environment_name):
+        """
+        Deletes the draft configuration associated with the running
+        environment.  Updating a running environment with any
+        configuration changes creates a draft configuration set. You can
+        get the draft configuration using DescribeConfigurationSettings
+        while the update is in progress or if the update fails. The
+        DeploymentStatus for the draft configuration indicates whether
+        the deployment is in process or has failed. The draft
+        configuration remains in existence until it is deleted with this
+        action.
+
+        :type application_name: string
+        :param application_name: The name of the application the
+        environment is associated with.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment to delete
+        the draft configuration from.
+
+        """
+        params = {'ApplicationName': application_name,
+                  'EnvironmentName': environment_name}
+        return self._get_response('DeleteEnvironmentConfiguration', params)
+
+    def describe_application_versions(self, application_name=None,
+                                      version_labels=None):
+        """Returns descriptions for existing application versions.
+
+        :type application_name: string
+        :param application_name: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to only include ones that
+        are associated with the specified application.
+
+        :type version_labels: list
+        :param version_labels: If specified, restricts the returned
+        descriptions to only include ones that have the specified
+        version labels.
+
+        """
+        params = {}
+        if application_name:
+            params['ApplicationName'] = application_name
+        if version_labels:
+            self.build_list_params(params, version_labels,
+                                   'VersionLabels.member')
+        return self._get_response('DescribeApplicationVersions', params)
+
+    def describe_applications(self, application_names=None):
+        """Returns the descriptions of existing applications.
+
+        :type application_names: list
+        :param application_names: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to only include those with
+        the specified names.
+
+        """
+        params = {}
+        if application_names:
+            self.build_list_params(params, application_names,
+                                   'ApplicationNames.member')
+        return self._get_response('DescribeApplications', params)
+
+    def describe_configuration_options(self, application_name=None,
+                                       template_name=None,
+                                       environment_name=None,
+                                       solution_stack_name=None, options=None):
+        """Describes configuration options used in a template or environment.
+
+        Describes the configuration options that are used in a
+        particular configuration template or environment, or that a
+        specified solution stack defines. The description includes the
+        values of the options, their default values, and an indication of
+        the required action on a running environment if an option value
+        is changed.
+
+        :type application_name: string
+        :param application_name: The name of the application associated
+        with the configuration template or environment. Only needed if
+        you want to describe the configuration options associated with
+        either the configuration template or environment.
+
+        :type template_name: string
+        :param template_name: The name of the configuration template
+        whose configuration options you want to describe.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment whose
+        configuration options you want to describe.
+
+        :type solution_stack_name: string
+        :param solution_stack_name: The name of the solution stack whose
+        configuration options you want to describe.
+
+        :type options: list
+        :param options: If specified, restricts the descriptions to only
+        the specified options.
+        """
+        params = {}
+        if application_name:
+            params['ApplicationName'] = application_name
+        if template_name:
+            params['TemplateName'] = template_name
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        if solution_stack_name:
+            params['SolutionStackName'] = solution_stack_name
+        if options:
+            self.build_list_params(params, options, 'Options.member')
+        return self._get_response('DescribeConfigurationOptions', params)
+
+    def describe_configuration_settings(self, application_name,
+                                        template_name=None,
+                                        environment_name=None):
+        """
+        Returns a description of the settings for the specified
+        configuration set, that is, either a configuration template or
+        the configuration set associated with a running environment.
+        When describing the settings for the configuration set
+        associated with a running environment, it is possible to receive
+        two sets of setting descriptions. One is the deployed
+        configuration set, and the other is a draft configuration of an
+        environment that is either in the process of deployment or that
+        failed to deploy.
+
+        :type application_name: string
+        :param application_name: The application for the environment or
+        configuration template.
+
+        :type template_name: string
+        :param template_name: The name of the configuration template to
+        describe.  Conditional: You must specify either this parameter
+        or an EnvironmentName, but not both. If you specify both, AWS
+        Elastic Beanstalk returns an InvalidParameterCombination error.
+        If you do not specify either, AWS Elastic Beanstalk returns a
+        MissingRequiredParameter error.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment to
+        describe.  Condition: You must specify either this or a
+        TemplateName, but not both. If you specify both, AWS Elastic
+        Beanstalk returns an InvalidParameterCombination error. If you
+        do not specify either, AWS Elastic Beanstalk returns
+        MissingRequiredParameter error.
+        """
+        params = {'ApplicationName': application_name}
+        if template_name:
+            params['TemplateName'] = template_name
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        return self._get_response('DescribeConfigurationSettings', params)
+
+    def describe_environment_resources(self, environment_id=None,
+                                       environment_name=None):
+        """Returns AWS resources for this environment.
+
+        :type environment_id: string
+        :param environment_id: The ID of the environment to retrieve AWS
+        resource usage data.  Condition: You must specify either this or
+        an EnvironmentName, or both. If you do not specify either, AWS
+        Elastic Beanstalk returns MissingRequiredParameter error.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment to retrieve
+        AWS resource usage data.  Condition: You must specify either
+        this or an EnvironmentId, or both. If you do not specify either,
+        AWS Elastic Beanstalk returns MissingRequiredParameter error.
+
+        :raises: InsufficientPrivilegesException
+        """
+        params = {}
+        if environment_id:
+            params['EnvironmentId'] = environment_id
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        return self._get_response('DescribeEnvironmentResources', params)
+
+    def describe_environments(self, application_name=None, version_label=None,
+                              environment_ids=None, environment_names=None,
+                              include_deleted=None,
+                              included_deleted_back_to=None):
+        """Returns descriptions for existing environments.
+
+        :type application_name: string
+        :param application_name: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to include only those that
+        are associated with this application.
+
+        :type version_label: string
+        :param version_label: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to include only those that
+        are associated with this application version.
+
+        :type environment_ids: list
+        :param environment_ids: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to include only those that
+        have the specified IDs.
+
+        :type environment_names: list
+        :param environment_names: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to include only those that
+        have the specified names.
+
+        :type include_deleted: boolean
+        :param include_deleted: Indicates whether to include deleted
+        environments:  true: Environments that have been deleted after
+        IncludedDeletedBackTo are displayed.  false: Do not include
+        deleted environments.
+
+        :type included_deleted_back_to: timestamp
+        :param included_deleted_back_to: If specified when
+        IncludeDeleted is set to true, then environments deleted after
+        this date are displayed.
+        """
+        params = {}
+        if application_name:
+            params['ApplicationName'] = application_name
+        if version_label:
+            params['VersionLabel'] = version_label
+        if environment_ids:
+            self.build_list_params(params, environment_ids,
+                                   'EnvironmentIds.member')
+        if environment_names:
+            self.build_list_params(params, environment_names,
+                                   'EnvironmentNames.member')
+        if include_deleted:
+            params['IncludeDeleted'] = self._encode_bool(include_deleted)
+        if included_deleted_back_to:
+            params['IncludedDeletedBackTo'] = included_deleted_back_to
+        return self._get_response('DescribeEnvironments', params)
+
+    def describe_events(self, application_name=None, version_label=None,
+                        template_name=None, environment_id=None,
+                        environment_name=None, request_id=None, severity=None,
+                        start_time=None, end_time=None, max_records=None,
+                        next_token=None):
+        """Returns event descriptions matching criteria up to the last 6 weeks.
+
+        :type application_name: string
+        :param application_name: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to include only those
+        associated with this application.
+
+        :type version_label: string
+        :param version_label: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to those associated with
+        this application version.
+
+        :type template_name: string
+        :param template_name: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to those that are associated
+        with this environment configuration.
+
+        :type environment_id: string
+        :param environment_id: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to those associated with
+        this environment.
+
+        :type environment_name: string
+        :param environment_name: If specified, AWS Elastic Beanstalk
+        restricts the returned descriptions to those associated with
+        this environment.
+
+        :type request_id: string
+        :param request_id: If specified, AWS Elastic Beanstalk restricts
+        the described events to include only those associated with this
+        request ID.
+
+        :type severity: string
+        :param severity: If specified, limits the events returned from
+        this call to include only those with the specified severity or
+        higher.
+
+        :type start_time: timestamp
+        :param start_time: If specified, AWS Elastic Beanstalk restricts
+        the returned descriptions to those that occur on or after this
+        time.
+
+        :type end_time: timestamp
+        :param end_time: If specified, AWS Elastic Beanstalk restricts
+        the returned descriptions to those that occur up to, but not
+        including, the EndTime.
+
+        :type max_records: integer
+        :param max_records: Specifies the maximum number of events that
+        can be returned, beginning with the most recent event.
+
+        :type next_token: string
+        :param next_token: Pagination token. If specified, the call
+        returns the next batch of events.
+        """
+        params = {}
+        if application_name:
+            params['ApplicationName'] = application_name
+        if version_label:
+            params['VersionLabel'] = version_label
+        if template_name:
+            params['TemplateName'] = template_name
+        if environment_id:
+            params['EnvironmentId'] = environment_id
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        if request_id:
+            params['RequestId'] = request_id
+        if severity:
+            params['Severity'] = severity
+        if start_time:
+            params['StartTime'] = start_time
+        if end_time:
+            params['EndTime'] = end_time
+        if max_records:
+            params['MaxRecords'] = max_records
+        if next_token:
+            params['NextToken'] = next_token
+        return self._get_response('DescribeEvents', params)
+
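+    # Pagination sketch (illustrative): the response-dict keys below follow
+    # the <Action>Response/<Action>Result unwrapping pattern used in
+    # boto/beanstalk/response.py and are an assumption, as are the
+    # ``beanstalk`` client and 'myapp' names.
+    #
+    #     page = beanstalk.describe_events(application_name='myapp',
+    #                                      max_records=25)
+    #     result = page['DescribeEventsResponse']['DescribeEventsResult']
+    #     token = result.get('NextToken')
+    #     if token:
+    #         page = beanstalk.describe_events(application_name='myapp',
+    #                                          next_token=token)
+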
+    def list_available_solution_stacks(self):
+        """Returns a list of the available solution stack names."""
+        return self._get_response('ListAvailableSolutionStacks', params={})
+
+    def rebuild_environment(self, environment_id=None, environment_name=None):
+        """
+        Deletes and recreates all of the AWS resources (for example:
+        the Auto Scaling group, load balancer, etc.) for a specified
+        environment and forces a restart.
+
+        :type environment_id: string
+        :param environment_id: The ID of the environment to rebuild.
+        Condition: You must specify either this or an EnvironmentName,
+        or both. If you do not specify either, AWS Elastic Beanstalk
+        returns MissingRequiredParameter error.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment to rebuild.
+        Condition: You must specify either this or an EnvironmentId, or
+        both. If you do not specify either, AWS Elastic Beanstalk
+        returns MissingRequiredParameter error.
+
+        :raises: InsufficientPrivilegesException
+        """
+        params = {}
+        if environment_id:
+            params['EnvironmentId'] = environment_id
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        return self._get_response('RebuildEnvironment', params)
+
+    def request_environment_info(self, info_type='tail', environment_id=None,
+                                 environment_name=None):
+        """
+        Initiates a request to compile the specified type of
+        information of the deployed environment.  Setting the InfoType
+        to tail compiles the last lines from the application server log
+        files of every Amazon EC2 instance in your environment. Use
+        RetrieveEnvironmentInfo to access the compiled information.
+
+        :type info_type: string
+        :param info_type: The type of information to request.
+
+        :type environment_id: string
+        :param environment_id: The ID of the environment of the
+        requested data. If no such environment is found,
+        RequestEnvironmentInfo returns an InvalidParameterValue error.
+        Condition: You must specify either this or an EnvironmentName,
+        or both. If you do not specify either, AWS Elastic Beanstalk
+        returns MissingRequiredParameter error.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment of the
+        requested data. If no such environment is found,
+        RequestEnvironmentInfo returns an InvalidParameterValue error.
+        Condition: You must specify either this or an EnvironmentId, or
+        both. If you do not specify either, AWS Elastic Beanstalk
+        returns MissingRequiredParameter error.
+        """
+        params = {'InfoType': info_type}
+        if environment_id:
+            params['EnvironmentId'] = environment_id
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        return self._get_response('RequestEnvironmentInfo', params)
+
+    def restart_app_server(self, environment_id=None, environment_name=None):
+        """
+        Causes the environment to restart the application container
+        server running on each Amazon EC2 instance.
+
+        :type environment_id: string
+        :param environment_id: The ID of the environment to restart the
+        server for.  Condition: You must specify either this or an
+        EnvironmentName, or both. If you do not specify either, AWS
+        Elastic Beanstalk returns MissingRequiredParameter error.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment to restart
+        the server for.  Condition: You must specify either this or an
+        EnvironmentId, or both. If you do not specify either, AWS
+        Elastic Beanstalk returns MissingRequiredParameter error.
+        """
+        params = {}
+        if environment_id:
+            params['EnvironmentId'] = environment_id
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        return self._get_response('RestartAppServer', params)
+
+    def retrieve_environment_info(self, info_type='tail', environment_id=None,
+                                  environment_name=None):
+        """
+        Retrieves the compiled information from a RequestEnvironmentInfo
+        request.
+
+        :type info_type: string
+        :param info_type: The type of information to retrieve.
+
+        :type environment_id: string
+        :param environment_id: The ID of the data's environment. If no
+        such environment is found, returns an InvalidParameterValue
+        error.  Condition: You must specify either this or an
+        EnvironmentName, or both. If you do not specify either, AWS
+        Elastic Beanstalk returns MissingRequiredParameter error.
+
+        :type environment_name: string
+        :param environment_name: The name of the data's environment. If
+        no such environment is found, returns an InvalidParameterValue
+        error.  Condition: You must specify either this or an
+        EnvironmentId, or both. If you do not specify either, AWS
+        Elastic Beanstalk returns MissingRequiredParameter error.
+        """
+        params = {'InfoType': info_type}
+        if environment_id:
+            params['EnvironmentId'] = environment_id
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        return self._get_response('RetrieveEnvironmentInfo', params)
+
+    def swap_environment_cnames(self, source_environment_id=None,
+                                 source_environment_name=None,
+                                 destination_environment_id=None,
+                                 destination_environment_name=None):
+        """Swaps the CNAMEs of two environments.
+
+        :type source_environment_id: string
+        :param source_environment_id: The ID of the source environment.
+        Condition: You must specify at least the SourceEnvironmentID or
+        the SourceEnvironmentName. You may also specify both. If you
+        specify the SourceEnvironmentId, you must specify the
+        DestinationEnvironmentId.
+
+        :type source_environment_name: string
+        :param source_environment_name: The name of the source
+        environment.  Condition: You must specify at least the
+        SourceEnvironmentID or the SourceEnvironmentName. You may also
+        specify both. If you specify the SourceEnvironmentName, you must
+        specify the DestinationEnvironmentName.
+
+        :type destination_environment_id: string
+        :param destination_environment_id: The ID of the destination
+        environment.  Condition: You must specify at least the
+        DestinationEnvironmentID or the DestinationEnvironmentName. You
+        may also specify both. You must specify the SourceEnvironmentId
+        with the DestinationEnvironmentId.
+
+        :type destination_environment_name: string
+        :param destination_environment_name: The name of the destination
+        environment.  Condition: You must specify at least the
+        DestinationEnvironmentID or the DestinationEnvironmentName. You
+        may also specify both. You must specify the
+        SourceEnvironmentName with the DestinationEnvironmentName.
+        """
+        params = {}
+        if source_environment_id:
+            params['SourceEnvironmentId'] = source_environment_id
+        if source_environment_name:
+            params['SourceEnvironmentName'] = source_environment_name
+        if destination_environment_id:
+            params['DestinationEnvironmentId'] = destination_environment_id
+        if destination_environment_name:
+            params['DestinationEnvironmentName'] = destination_environment_name
+        return self._get_response('SwapEnvironmentCNAMEs', params)
+
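+    # Illustrative call (placeholder environment names): per the conditions
+    # above, pair IDs with IDs and names with names when swapping.
+    #
+    #     beanstalk.swap_environment_cnames(
+    #         source_environment_name='myapp-staging',
+    #         destination_environment_name='myapp-prod')
+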
+    def terminate_environment(self, environment_id=None, environment_name=None,
+                              terminate_resources=None):
+        """Terminates the specified environment.
+
+        :type environment_id: string
+        :param environment_id: The ID of the environment to terminate.
+        Condition: You must specify either this or an EnvironmentName,
+        or both. If you do not specify either, AWS Elastic Beanstalk
+        returns MissingRequiredParameter error.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment to
+        terminate. Condition: You must specify either this or an
+        EnvironmentId, or both. If you do not specify either, AWS
+        Elastic Beanstalk returns MissingRequiredParameter error.
+
+        :type terminate_resources: boolean
+        :param terminate_resources: Indicates whether the associated AWS
+        resources should shut down when the environment is terminated:
+        true: (default) The user AWS resources (for example, the Auto
+        Scaling group, LoadBalancer, etc.) are terminated along with the
+        environment.  false: The environment is removed from AWS
+        Elastic Beanstalk, but the AWS resources continue to operate.
+        For more information, see the AWS Elastic Beanstalk User Guide.
+        Default: true  Valid Values: true | false
+
+        :raises: InsufficientPrivilegesException
+        """
+        params = {}
+        if environment_id:
+            params['EnvironmentId'] = environment_id
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        if terminate_resources:
+            params['TerminateResources'] = self._encode_bool(
+                terminate_resources)
+        return self._get_response('TerminateEnvironment', params)
+
+    def update_application(self, application_name, description=None):
+        """
+        Updates the specified application to have the specified
+        properties.
+
+        :type application_name: string
+        :param application_name: The name of the application to update.
+        If no such application is found, UpdateApplication returns an
+        InvalidParameterValue error.
+
+        :type description: string
+        :param description: A new description for the application.
+        Default: If not specified, AWS Elastic Beanstalk does not update
+        the description.
+        """
+        params = {'ApplicationName': application_name}
+        if description:
+            params['Description'] = description
+        return self._get_response('UpdateApplication', params)
+
+    def update_application_version(self, application_name, version_label,
+                                   description=None):
+        """Updates the application version to have the properties.
+
+        :type application_name: string
+        :param application_name: The name of the application associated
+        with this version.  If no application is found with this name,
+        UpdateApplication returns an InvalidParameterValue error.
+
+        :type version_label: string
+        :param version_label: The name of the version to update. If no
+        application version is found with this label, UpdateApplication
+        returns an InvalidParameterValue error.
+
+        :type description: string
+        :param description: A new description for this release.
+        """
+        params = {'ApplicationName': application_name,
+                  'VersionLabel': version_label}
+        if description:
+            params['Description'] = description
+        return self._get_response('UpdateApplicationVersion', params)
+
+    def update_configuration_template(self, application_name, template_name,
+                                      description=None, option_settings=None,
+                                      options_to_remove=None):
+        """
+        Updates the specified configuration template to have the
+        specified properties or configuration option values.
+
+        :type application_name: string
+        :param application_name: The name of the application associated
+        with the configuration template to update. If no application is
+        found with this name, UpdateConfigurationTemplate returns an
+        InvalidParameterValue error.
+
+        :type template_name: string
+        :param template_name: The name of the configuration template to
+        update. If no configuration template is found with this name,
+        UpdateConfigurationTemplate returns an InvalidParameterValue
+        error.
+
+        :type description: string
+        :param description: A new description for the configuration.
+
+        :type option_settings: list
+        :param option_settings: A list of configuration option settings
+        to update with the new specified option value.
+
+        :type options_to_remove: list
+        :param options_to_remove: A list of configuration options to
+        remove from the configuration set.  Constraint: You can remove
+        only UserDefined configuration options.
+
+        :raises: InsufficientPrivilegesException
+        """
+        params = {'ApplicationName': application_name,
+                  'TemplateName': template_name}
+        if description:
+            params['Description'] = description
+        if option_settings:
+            self._build_list_params(params, option_settings,
+                                   'OptionSettings.member',
+                                   ('Namespace', 'OptionName', 'Value'))
+        if options_to_remove:
+            self.build_list_params(params, options_to_remove,
+                                   'OptionsToRemove.member')
+        return self._get_response('UpdateConfigurationTemplate', params)
+
+    def update_environment(self, environment_id=None, environment_name=None,
+                           version_label=None, template_name=None,
+                           description=None, option_settings=None,
+                           options_to_remove=None):
+        """
+        Updates the environment description, deploys a new application
+        version, updates the configuration settings to an entirely new
+        configuration template, or updates select configuration option
+        values in the running environment.  Attempting to update both
+        the release and configuration is not allowed and AWS Elastic
+        Beanstalk returns an InvalidParameterCombination error.  When
+        updating the configuration settings to a new template or
+        individual settings, a draft configuration is created and
+        DescribeConfigurationSettings for this environment returns two
+        setting descriptions with different DeploymentStatus values.
+
+        :type environment_id: string
+        :param environment_id: The ID of the environment to update. If
+        no environment with this ID exists, AWS Elastic Beanstalk
+        returns an InvalidParameterValue error.  Condition: You must
+        specify either this or an EnvironmentName, or both. If you do
+        not specify either, AWS Elastic Beanstalk returns
+        MissingRequiredParameter error.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment to update.
+        If no environment with this name exists, AWS Elastic Beanstalk
+        returns an InvalidParameterValue error.  Condition: You must
+        specify either this or an EnvironmentId, or both. If you do not
+        specify either, AWS Elastic Beanstalk returns
+        MissingRequiredParameter error.
+
+        :type version_label: string
+        :param version_label: If this parameter is specified, AWS
+        Elastic Beanstalk deploys the named application version to the
+        environment. If no such application version is found, returns an
+        InvalidParameterValue error.
+
+        :type template_name: string
+        :param template_name: If this parameter is specified, AWS
+        Elastic Beanstalk deploys this configuration template to the
+        environment. If no such configuration template is found, AWS
+        Elastic Beanstalk returns an InvalidParameterValue error.
+
+        :type description: string
+        :param description: If this parameter is specified, AWS Elastic
+        Beanstalk updates the description of this environment.
+
+        :type option_settings: list
+        :param option_settings: If specified, AWS Elastic Beanstalk
+        updates the configuration set associated with the running
+        environment and sets the specified configuration options to the
+        requested value.
+
+        :type options_to_remove: list
+        :param options_to_remove: A list of custom user-defined
+        configuration options to remove from the configuration set for
+        this environment.
+
+        :raises: InsufficientPrivilegesException
+        """
+        params = {}
+        if environment_id:
+            params['EnvironmentId'] = environment_id
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        if version_label:
+            params['VersionLabel'] = version_label
+        if template_name:
+            params['TemplateName'] = template_name
+        if description:
+            params['Description'] = description
+        if option_settings:
+            self._build_list_params(params, option_settings,
+                                   'OptionSettings.member',
+                                   ('Namespace', 'OptionName', 'Value'))
+        if options_to_remove:
+            self.build_list_params(params, options_to_remove,
+                                   'OptionsToRemove.member')
+        return self._get_response('UpdateEnvironment', params)
+
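+    # Illustrative call (placeholder names): deploy a new application version
+    # *or* switch configuration, but not both in the same call, since that
+    # returns an InvalidParameterCombination error (see the docstring above).
+    #
+    #     beanstalk.update_environment(environment_name='myapp-prod',
+    #                                  version_label='v2')
+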
+    def validate_configuration_settings(self, application_name,
+                                        option_settings, template_name=None,
+                                        environment_name=None):
+        """
+        Takes a set of configuration settings and either a
+        configuration template or environment, and determines whether
+        those values are valid.  This action returns a list of messages
+        indicating any errors or warnings associated with the selection
+        of option values.
+
+        :type application_name: string
+        :param application_name: The name of the application that the
+        configuration template or environment belongs to.
+
+        :type template_name: string
+        :param template_name: The name of the configuration template to
+        validate the settings against.  Condition: You cannot specify
+        both this and an environment name.
+
+        :type environment_name: string
+        :param environment_name: The name of the environment to validate
+        the settings against.  Condition: You cannot specify both this
+        and a configuration template name.
+
+        :type option_settings: list
+        :param option_settings: A list of the options and desired values
+        to evaluate.
+
+        :raises: InsufficientPrivilegesException
+        """
+        params = {'ApplicationName': application_name}
+        self._build_list_params(params, option_settings,
+                               'OptionSettings.member',
+                               ('Namespace', 'OptionName', 'Value'))
+        if template_name:
+            params['TemplateName'] = template_name
+        if environment_name:
+            params['EnvironmentName'] = environment_name
+        return self._get_response('ValidateConfigurationSettings', params)
+
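+    # Illustrative call (placeholder names): validate candidate option
+    # settings against a running environment before applying them with
+    # update_environment.
+    #
+    #     beanstalk.validate_configuration_settings(
+    #         'myapp',
+    #         [('aws:autoscaling:launchconfiguration',
+    #           'Ec2KeyName', 'mykeypair')],
+    #         environment_name='myapp-prod')
+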
+    def _build_list_params(self, params, user_values, prefix, tuple_names):
+        # For params such as the ConfigurationOptionSettings,
+        # they can specify a list of tuples where each tuple maps to a specific
+        # arg.  For example:
+        # user_values = [('foo', 'bar', 'baz')]
+        # prefix=MyOption.member
+        # tuple_names=('One', 'Two', 'Three')
+        # would result in:
+        # MyOption.member.1.One = foo
+        # MyOption.member.1.Two = bar
+        # MyOption.member.1.Three = baz
+        for i, user_value in enumerate(user_values, 1):
+            current_prefix = '%s.%s' % (prefix, i)
+            for key, value in zip(tuple_names, user_value):
+                full_key = '%s.%s' % (current_prefix, key)
+                params[full_key] = value
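+    # Concretely, the OptionSettings example from create_environment expands
+    # to query parameters of the form:
+    #   OptionSettings.member.1.Namespace  = aws:autoscaling:launchconfiguration
+    #   OptionSettings.member.1.OptionName = Ec2KeyName
+    #   OptionSettings.member.1.Value      = mykeypair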
diff --git a/boto/beanstalk/response.py b/boto/beanstalk/response.py
new file mode 100644
index 0000000..22bc102
--- /dev/null
+++ b/boto/beanstalk/response.py
@@ -0,0 +1,703 @@
+"""Classify responses from layer1 and strict type values."""
+from datetime import datetime
+
+
+class BaseObject(object):
+
+    def __repr__(self):
+        result = self.__class__.__name__ + '{ '
+        counter = 0
+        for key, value in self.__dict__.iteritems():
+            # Separate entries with a comma after the first one.
+            counter += 1
+            if counter > 1:
+                result += ', '
+            result += key + ': '
+            result += self._repr_by_type(value)
+        result += ' }'
+        return result
+
+    def _repr_by_type(self, value):
+        # Everything is either a 'Response', 'list', or 'None/str/int/bool'.
+        result = ''
+        if isinstance(value, Response):
+            result += value.__repr__()
+        elif isinstance(value, list):
+            result += self._repr_list(value)
+        else:
+            result += str(value)
+        return result
+
+    def _repr_list(self, array):
+        result = '['
+        for value in array:
+            result += ' ' + self._repr_by_type(value) + ','
+        # Replace the trailing comma (present when the list is non-empty) with a space.
+        if len(result) > 1:
+            result = result[:-1] + ' '
+        result += ']'
+        return result
+
+
+class Response(BaseObject):
+    def __init__(self, response):
+        super(Response, self).__init__()
+
+        if response['ResponseMetadata']:
+            self.response_metadata = ResponseMetadata(response['ResponseMetadata'])
+        else:
+            self.response_metadata = None
+
+
+class ResponseMetadata(BaseObject):
+    def __init__(self, response):
+        super(ResponseMetadata, self).__init__()
+
+        self.request_id = str(response['RequestId'])
+
+
+class ApplicationDescription(BaseObject):
+    def __init__(self, response):
+        super(ApplicationDescription, self).__init__()
+
+        self.application_name = str(response['ApplicationName'])
+        self.configuration_templates = []
+        if response['ConfigurationTemplates']:
+            for member in response['ConfigurationTemplates']:
+                configuration_template = str(member)
+                self.configuration_templates.append(configuration_template)
+        self.date_created = datetime.fromtimestamp(response['DateCreated'])
+        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
+        self.description = str(response['Description'])
+        self.versions = []
+        if response['Versions']:
+            for member in response['Versions']:
+                version = str(member)
+                self.versions.append(version)
+
+
+class ApplicationVersionDescription(BaseObject):
+    def __init__(self, response):
+        super(ApplicationVersionDescription, self).__init__()
+
+        self.application_name = str(response['ApplicationName'])
+        self.date_created = datetime.fromtimestamp(response['DateCreated'])
+        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
+        self.description = str(response['Description'])
+        if response['SourceBundle']:
+            self.source_bundle = S3Location(response['SourceBundle'])
+        else:
+            self.source_bundle = None
+        self.version_label = str(response['VersionLabel'])
+
+
+class AutoScalingGroup(BaseObject):
+    def __init__(self, response):
+        super(AutoScalingGroup, self).__init__()
+
+        self.name = str(response['Name'])
+
+
+class ConfigurationOptionDescription(BaseObject):
+    def __init__(self, response):
+        super(ConfigurationOptionDescription, self).__init__()
+
+        self.change_severity = str(response['ChangeSeverity'])
+        self.default_value = str(response['DefaultValue'])
+        self.max_length = int(response['MaxLength']) if response['MaxLength'] else None
+        self.max_value = int(response['MaxValue']) if response['MaxValue'] else None
+        self.min_value = int(response['MinValue']) if response['MinValue'] else None
+        self.name = str(response['Name'])
+        self.namespace = str(response['Namespace'])
+        if response['Regex']:
+            self.regex = OptionRestrictionRegex(response['Regex'])
+        else:
+            self.regex = None
+        self.user_defined = str(response['UserDefined'])
+        self.value_options = []
+        if response['ValueOptions']:
+            for member in response['ValueOptions']:
+                value_option = str(member)
+                self.value_options.append(value_option)
+        self.value_type = str(response['ValueType'])
+
+
+class ConfigurationOptionSetting(BaseObject):
+    def __init__(self, response):
+        super(ConfigurationOptionSetting, self).__init__()
+
+        self.namespace = str(response['Namespace'])
+        self.option_name = str(response['OptionName'])
+        self.value = str(response['Value'])
+
+
+class ConfigurationSettingsDescription(BaseObject):
+    def __init__(self, response):
+        super(ConfigurationSettingsDescription, self).__init__()
+
+        self.application_name = str(response['ApplicationName'])
+        self.date_created = datetime.fromtimestamp(response['DateCreated'])
+        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
+        self.deployment_status = str(response['DeploymentStatus'])
+        self.description = str(response['Description'])
+        self.environment_name = str(response['EnvironmentName'])
+        self.option_settings = []
+        if response['OptionSettings']:
+            for member in response['OptionSettings']:
+                option_setting = ConfigurationOptionSetting(member)
+                self.option_settings.append(option_setting)
+        self.solution_stack_name = str(response['SolutionStackName'])
+        self.template_name = str(response['TemplateName'])
+
+
+class EnvironmentDescription(BaseObject):
+    def __init__(self, response):
+        super(EnvironmentDescription, self).__init__()
+
+        self.application_name = str(response['ApplicationName'])
+        self.cname = str(response['CNAME'])
+        self.date_created = datetime.fromtimestamp(response['DateCreated'])
+        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
+        self.description = str(response['Description'])
+        self.endpoint_url = str(response['EndpointURL'])
+        self.environment_id = str(response['EnvironmentId'])
+        self.environment_name = str(response['EnvironmentName'])
+        self.health = str(response['Health'])
+        if response['Resources']:
+            self.resources = EnvironmentResourcesDescription(response['Resources'])
+        else:
+            self.resources = None
+        self.solution_stack_name = str(response['SolutionStackName'])
+        self.status = str(response['Status'])
+        self.template_name = str(response['TemplateName'])
+        self.version_label = str(response['VersionLabel'])
+
+
+class EnvironmentInfoDescription(BaseObject):
+    def __init__(self, response):
+        super(EnvironmentInfoDescription, self).__init__()
+
+        self.ec2_instance_id = str(response['Ec2InstanceId'])
+        self.info_type = str(response['InfoType'])
+        self.message = str(response['Message'])
+        self.sample_timestamp = datetime.fromtimestamp(response['SampleTimestamp'])
+
+
+class EnvironmentResourceDescription(BaseObject):
+    def __init__(self, response):
+        super(EnvironmentResourceDescription, self).__init__()
+
+        self.auto_scaling_groups = []
+        if response['AutoScalingGroups']:
+            for member in response['AutoScalingGroups']:
+                auto_scaling_group = AutoScalingGroup(member)
+                self.auto_scaling_groups.append(auto_scaling_group)
+        self.environment_name = str(response['EnvironmentName'])
+        self.instances = []
+        if response['Instances']:
+            for member in response['Instances']:
+                instance = Instance(member)
+                self.instances.append(instance)
+        self.launch_configurations = []
+        if response['LaunchConfigurations']:
+            for member in response['LaunchConfigurations']:
+                launch_configuration = LaunchConfiguration(member)
+                self.launch_configurations.append(launch_configuration)
+        self.load_balancers = []
+        if response['LoadBalancers']:
+            for member in response['LoadBalancers']:
+                load_balancer = LoadBalancer(member)
+                self.load_balancers.append(load_balancer)
+        self.triggers = []
+        if response['Triggers']:
+            for member in response['Triggers']:
+                trigger = Trigger(member)
+                self.triggers.append(trigger)
+
+
+class EnvironmentResourcesDescription(BaseObject):
+    def __init__(self, response):
+        super(EnvironmentResourcesDescription, self).__init__()
+
+        if response['LoadBalancer']:
+            self.load_balancer = LoadBalancerDescription(response['LoadBalancer'])
+        else:
+            self.load_balancer = None
+
+
+class EventDescription(BaseObject):
+    def __init__(self, response):
+        super(EventDescription, self).__init__()
+
+        self.application_name = str(response['ApplicationName'])
+        self.environment_name = str(response['EnvironmentName'])
+        self.event_date = datetime.fromtimestamp(response['EventDate'])
+        self.message = str(response['Message'])
+        self.request_id = str(response['RequestId'])
+        self.severity = str(response['Severity'])
+        self.template_name = str(response['TemplateName'])
+        self.version_label = str(response['VersionLabel'])
+
+
+class Instance(BaseObject):
+    def __init__(self, response):
+        super(Instance, self).__init__()
+
+        self.id = str(response['Id'])
+
+
+class LaunchConfiguration(BaseObject):
+    def __init__(self, response):
+        super(LaunchConfiguration, self).__init__()
+
+        self.name = str(response['Name'])
+
+
+class Listener(BaseObject):
+    def __init__(self, response):
+        super(Listener, self).__init__()
+
+        self.port = int(response['Port']) if response['Port'] else None
+        self.protocol = str(response['Protocol'])
+
+
+class LoadBalancer(BaseObject):
+    def __init__(self, response):
+        super(LoadBalancer, self).__init__()
+
+        self.name = str(response['Name'])
+
+
+class LoadBalancerDescription(BaseObject):
+    def __init__(self, response):
+        super(LoadBalancerDescription, self).__init__()
+
+        self.domain = str(response['Domain'])
+        self.listeners = []
+        if response['Listeners']:
+            for member in response['Listeners']:
+                listener = Listener(member)
+                self.listeners.append(listener)
+        self.load_balancer_name = str(response['LoadBalancerName'])
+
+
+class OptionRestrictionRegex(BaseObject):
+    def __init__(self, response):
+        super(OptionRestrictionRegex, self).__init__()
+
+        self.label = response['Label']
+        self.pattern = response['Pattern']
+
+
+class SolutionStackDescription(BaseObject):
+    def __init__(self, response):
+        super(SolutionStackDescription, self).__init__()
+
+        self.permitted_file_types = []
+        if response['PermittedFileTypes']:
+            for member in response['PermittedFileTypes']:
+                permitted_file_type = str(member)
+                self.permitted_file_types.append(permitted_file_type)
+        self.solution_stack_name = str(response['SolutionStackName'])
+
+
+class S3Location(BaseObject):
+    def __init__(self, response):
+        super(S3Location, self).__init__()
+
+        self.s3_bucket = str(response['S3Bucket'])
+        self.s3_key = str(response['S3Key'])
+
+
+class Trigger(BaseObject):
+    def __init__(self, response):
+        super(Trigger, self).__init__()
+
+        self.name = str(response['Name'])
+
+
+class ValidationMessage(BaseObject):
+    def __init__(self, response):
+        super(ValidationMessage, self).__init__()
+
+        self.message = str(response['Message'])
+        self.namespace = str(response['Namespace'])
+        self.option_name = str(response['OptionName'])
+        self.severity = str(response['Severity'])
+
+
+# These are the response objects layer2 uses, one for each layer1 api call.
+class CheckDNSAvailabilityResponse(Response):
+    def __init__(self, response):
+        response = response['CheckDNSAvailabilityResponse']
+        super(CheckDNSAvailabilityResponse, self).__init__(response)
+
+        response = response['CheckDNSAvailabilityResult']
+        self.fully_qualified_cname = str(response['FullyQualifiedCNAME'])
+        self.available = bool(response['Available'])
+
+
+# Our naming convention produces this class name, but the API names it with
+# more capitals.
+class CheckDnsAvailabilityResponse(CheckDNSAvailabilityResponse): pass
+
+
+class CreateApplicationResponse(Response):
+    def __init__(self, response):
+        response = response['CreateApplicationResponse']
+        super(CreateApplicationResponse, self).__init__(response)
+
+        response = response['CreateApplicationResult']
+        if response['Application']:
+            self.application = ApplicationDescription(response['Application'])
+        else:
+            self.application = None
+
+
+class CreateApplicationVersionResponse(Response):
+    def __init__(self, response):
+        response = response['CreateApplicationVersionResponse']
+        super(CreateApplicationVersionResponse, self).__init__(response)
+
+        response = response['CreateApplicationVersionResult']
+        if response['ApplicationVersion']:
+            self.application_version = ApplicationVersionDescription(response['ApplicationVersion'])
+        else:
+            self.application_version = None
+
+
+class CreateConfigurationTemplateResponse(Response):
+    def __init__(self, response):
+        response = response['CreateConfigurationTemplateResponse']
+        super(CreateConfigurationTemplateResponse, self).__init__(response)
+
+        response = response['CreateConfigurationTemplateResult']
+        self.application_name = str(response['ApplicationName'])
+        self.date_created = datetime.fromtimestamp(response['DateCreated'])
+        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
+        self.deployment_status = str(response['DeploymentStatus'])
+        self.description = str(response['Description'])
+        self.environment_name = str(response['EnvironmentName'])
+        self.option_settings = []
+        if response['OptionSettings']:
+            for member in response['OptionSettings']:
+                option_setting = ConfigurationOptionSetting(member)
+                self.option_settings.append(option_setting)
+        self.solution_stack_name = str(response['SolutionStackName'])
+        self.template_name = str(response['TemplateName'])
+
+
+class CreateEnvironmentResponse(Response):
+    def __init__(self, response):
+        response = response['CreateEnvironmentResponse']
+        super(CreateEnvironmentResponse, self).__init__(response)
+
+        response = response['CreateEnvironmentResult']
+        self.application_name = str(response['ApplicationName'])
+        self.cname = str(response['CNAME'])
+        self.date_created = datetime.fromtimestamp(response['DateCreated'])
+        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
+        self.description = str(response['Description'])
+        self.endpoint_url = str(response['EndpointURL'])
+        self.environment_id = str(response['EnvironmentId'])
+        self.environment_name = str(response['EnvironmentName'])
+        self.health = str(response['Health'])
+        if response['Resources']:
+            self.resources = EnvironmentResourcesDescription(response['Resources'])
+        else:
+            self.resources = None
+        self.solution_stack_name = str(response['SolutionStackName'])
+        self.status = str(response['Status'])
+        self.template_name = str(response['TemplateName'])
+        self.version_label = str(response['VersionLabel'])
+
+
+class CreateStorageLocationResponse(Response):
+    def __init__(self, response):
+        response = response['CreateStorageLocationResponse']
+        super(CreateStorageLocationResponse, self).__init__(response)
+
+        response = response['CreateStorageLocationResult']
+        self.s3_bucket = str(response['S3Bucket'])
+
+
+class DeleteApplicationResponse(Response):
+    def __init__(self, response):
+        response = response['DeleteApplicationResponse']
+        super(DeleteApplicationResponse, self).__init__(response)
+
+
+class DeleteApplicationVersionResponse(Response):
+    def __init__(self, response):
+        response = response['DeleteApplicationVersionResponse']
+        super(DeleteApplicationVersionResponse, self).__init__(response)
+
+
+class DeleteConfigurationTemplateResponse(Response):
+    def __init__(self, response):
+        response = response['DeleteConfigurationTemplateResponse']
+        super(DeleteConfigurationTemplateResponse, self).__init__(response)
+
+
+class DeleteEnvironmentConfigurationResponse(Response):
+    def __init__(self, response):
+        response = response['DeleteEnvironmentConfigurationResponse']
+        super(DeleteEnvironmentConfigurationResponse, self).__init__(response)
+
+
+class DescribeApplicationVersionsResponse(Response):
+    def __init__(self, response):
+        response = response['DescribeApplicationVersionsResponse']
+        super(DescribeApplicationVersionsResponse, self).__init__(response)
+
+        response = response['DescribeApplicationVersionsResult']
+        self.application_versions = []
+        if response['ApplicationVersions']:
+            for member in response['ApplicationVersions']:
+                application_version = ApplicationVersionDescription(member)
+                self.application_versions.append(application_version)
+
+
+class DescribeApplicationsResponse(Response):
+    def __init__(self, response):
+        response = response['DescribeApplicationsResponse']
+        super(DescribeApplicationsResponse, self).__init__(response)
+
+        response = response['DescribeApplicationsResult']
+        self.applications = []
+        if response['Applications']:
+            for member in response['Applications']:
+                application = ApplicationDescription(member)
+                self.applications.append(application)
+
+
+class DescribeConfigurationOptionsResponse(Response):
+    def __init__(self, response):
+        response = response['DescribeConfigurationOptionsResponse']
+        super(DescribeConfigurationOptionsResponse, self).__init__(response)
+
+        response = response['DescribeConfigurationOptionsResult']
+        self.options = []
+        if response['Options']:
+            for member in response['Options']:
+                option = ConfigurationOptionDescription(member)
+                self.options.append(option)
+        self.solution_stack_name = str(response['SolutionStackName'])
+
+
+class DescribeConfigurationSettingsResponse(Response):
+    def __init__(self, response):
+        response = response['DescribeConfigurationSettingsResponse']
+        super(DescribeConfigurationSettingsResponse, self).__init__(response)
+
+        response = response['DescribeConfigurationSettingsResult']
+        self.configuration_settings = []
+        if response['ConfigurationSettings']:
+            for member in response['ConfigurationSettings']:
+                configuration_setting = ConfigurationSettingsDescription(member)
+                self.configuration_settings.append(configuration_setting)
+
+
+class DescribeEnvironmentResourcesResponse(Response):
+    def __init__(self, response):
+        response = response['DescribeEnvironmentResourcesResponse']
+        super(DescribeEnvironmentResourcesResponse, self).__init__(response)
+
+        response = response['DescribeEnvironmentResourcesResult']
+        if response['EnvironmentResources']:
+            self.environment_resources = EnvironmentResourceDescription(response['EnvironmentResources'])
+        else:
+            self.environment_resources = None
+
+
+class DescribeEnvironmentsResponse(Response):
+    def __init__(self, response):
+        response = response['DescribeEnvironmentsResponse']
+        super(DescribeEnvironmentsResponse, self).__init__(response)
+
+        response = response['DescribeEnvironmentsResult']
+        self.environments = []
+        if response['Environments']:
+            for member in response['Environments']:
+                environment = EnvironmentDescription(member)
+                self.environments.append(environment)
+
+
+class DescribeEventsResponse(Response):
+    def __init__(self, response):
+        response = response['DescribeEventsResponse']
+        super(DescribeEventsResponse, self).__init__(response)
+
+        response = response['DescribeEventsResult']
+        self.events = []
+        if response['Events']:
+            for member in response['Events']:
+                event = EventDescription(member)
+                self.events.append(event)
+        self.next_token = str(response['NextToken'])
+
+
+class ListAvailableSolutionStacksResponse(Response):
+    def __init__(self, response):
+        response = response['ListAvailableSolutionStacksResponse']
+        super(ListAvailableSolutionStacksResponse, self).__init__(response)
+
+        response = response['ListAvailableSolutionStacksResult']
+        self.solution_stack_details = []
+        if response['SolutionStackDetails']:
+            for member in response['SolutionStackDetails']:
+                solution_stack_detail = SolutionStackDescription(member)
+                self.solution_stack_details.append(solution_stack_detail)
+        self.solution_stacks = []
+        if response['SolutionStacks']:
+            for member in response['SolutionStacks']:
+                solution_stack = str(member)
+                self.solution_stacks.append(solution_stack)
+
+
+class RebuildEnvironmentResponse(Response):
+    def __init__(self, response):
+        response = response['RebuildEnvironmentResponse']
+        super(RebuildEnvironmentResponse, self).__init__(response)
+
+
+class RequestEnvironmentInfoResponse(Response):
+    def __init__(self, response):
+        response = response['RequestEnvironmentInfoResponse']
+        super(RequestEnvironmentInfoResponse, self).__init__(response)
+
+
+class RestartAppServerResponse(Response):
+    def __init__(self, response):
+        response = response['RestartAppServerResponse']
+        super(RestartAppServerResponse, self).__init__(response)
+
+
+class RetrieveEnvironmentInfoResponse(Response):
+    def __init__(self, response):
+        response = response['RetrieveEnvironmentInfoResponse']
+        super(RetrieveEnvironmentInfoResponse, self).__init__(response)
+
+        response = response['RetrieveEnvironmentInfoResult']
+        self.environment_info = []
+        if response['EnvironmentInfo']:
+            for member in response['EnvironmentInfo']:
+                environment_info = EnvironmentInfoDescription(member)
+                self.environment_info.append(environment_info)
+
+
+class SwapEnvironmentCNAMEsResponse(Response):
+    def __init__(self, response):
+        response = response['SwapEnvironmentCNAMEsResponse']
+        super(SwapEnvironmentCNAMEsResponse, self).__init__(response)
+
+
+class SwapEnvironmentCnamesResponse(SwapEnvironmentCNAMEsResponse): pass
+
+
+class TerminateEnvironmentResponse(Response):
+    def __init__(self, response):
+        response = response['TerminateEnvironmentResponse']
+        super(TerminateEnvironmentResponse, self).__init__(response)
+
+        response = response['TerminateEnvironmentResult']
+        self.application_name = str(response['ApplicationName'])
+        self.cname = str(response['CNAME'])
+        self.date_created = datetime.fromtimestamp(response['DateCreated'])
+        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
+        self.description = str(response['Description'])
+        self.endpoint_url = str(response['EndpointURL'])
+        self.environment_id = str(response['EnvironmentId'])
+        self.environment_name = str(response['EnvironmentName'])
+        self.health = str(response['Health'])
+        if response['Resources']:
+            self.resources = EnvironmentResourcesDescription(response['Resources'])
+        else:
+            self.resources = None
+        self.solution_stack_name = str(response['SolutionStackName'])
+        self.status = str(response['Status'])
+        self.template_name = str(response['TemplateName'])
+        self.version_label = str(response['VersionLabel'])
+
+
+class UpdateApplicationResponse(Response):
+    def __init__(self, response):
+        response = response['UpdateApplicationResponse']
+        super(UpdateApplicationResponse, self).__init__(response)
+
+        response = response['UpdateApplicationResult']
+        if response['Application']:
+            self.application = ApplicationDescription(response['Application'])
+        else:
+            self.application = None
+
+
+class UpdateApplicationVersionResponse(Response):
+    def __init__(self, response):
+        response = response['UpdateApplicationVersionResponse']
+        super(UpdateApplicationVersionResponse, self).__init__(response)
+
+        response = response['UpdateApplicationVersionResult']
+        if response['ApplicationVersion']:
+            self.application_version = ApplicationVersionDescription(response['ApplicationVersion'])
+        else:
+            self.application_version = None
+
+
+class UpdateConfigurationTemplateResponse(Response):
+    def __init__(self, response):
+        response = response['UpdateConfigurationTemplateResponse']
+        super(UpdateConfigurationTemplateResponse, self).__init__(response)
+
+        response = response['UpdateConfigurationTemplateResult']
+        self.application_name = str(response['ApplicationName'])
+        self.date_created = datetime.fromtimestamp(response['DateCreated'])
+        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
+        self.deployment_status = str(response['DeploymentStatus'])
+        self.description = str(response['Description'])
+        self.environment_name = str(response['EnvironmentName'])
+        self.option_settings = []
+        if response['OptionSettings']:
+            for member in response['OptionSettings']:
+                option_setting = ConfigurationOptionSetting(member)
+                self.option_settings.append(option_setting)
+        self.solution_stack_name = str(response['SolutionStackName'])
+        self.template_name = str(response['TemplateName'])
+
+
+class UpdateEnvironmentResponse(Response):
+    def __init__(self, response):
+        response = response['UpdateEnvironmentResponse']
+        super(UpdateEnvironmentResponse, self).__init__(response)
+
+        response = response['UpdateEnvironmentResult']
+        self.application_name = str(response['ApplicationName'])
+        self.cname = str(response['CNAME'])
+        self.date_created = datetime.fromtimestamp(response['DateCreated'])
+        self.date_updated = datetime.fromtimestamp(response['DateUpdated'])
+        self.description = str(response['Description'])
+        self.endpoint_url = str(response['EndpointURL'])
+        self.environment_id = str(response['EnvironmentId'])
+        self.environment_name = str(response['EnvironmentName'])
+        self.health = str(response['Health'])
+        if response['Resources']:
+            self.resources = EnvironmentResourcesDescription(response['Resources'])
+        else:
+            self.resources = None
+        self.solution_stack_name = str(response['SolutionStackName'])
+        self.status = str(response['Status'])
+        self.template_name = str(response['TemplateName'])
+        self.version_label = str(response['VersionLabel'])
+
+
+class ValidateConfigurationSettingsResponse(Response):
+    def __init__(self, response):
+        response = response['ValidateConfigurationSettingsResponse']
+        super(ValidateConfigurationSettingsResponse, self).__init__(response)
+
+        response = response['ValidateConfigurationSettingsResult']
+        self.messages = []
+        if response['Messages']:
+            for member in response['Messages']:
+                message = ValidationMessage(member)
+                self.messages.append(message)
diff --git a/boto/beanstalk/wrapper.py b/boto/beanstalk/wrapper.py
new file mode 100644
index 0000000..aa9a7d2
--- /dev/null
+++ b/boto/beanstalk/wrapper.py
@@ -0,0 +1,29 @@
+"""Wraps layer1 api methods and converts layer1 dict responses to objects."""
+from boto.beanstalk.layer1 import Layer1
+import boto.beanstalk.response
+from boto.exception import BotoServerError
+import boto.beanstalk.exception as exception
+
+
+def beanstalk_wrapper(func, name):
+    def _wrapped_low_level_api(*args, **kwargs):
+        try:
+            response = func(*args, **kwargs)
+        except BotoServerError, e:
+            raise exception.simple(e)
+        # Turn 'this_is_a_function_name' into 'ThisIsAFunctionNameResponse'.
+        cls_name = ''.join([part.capitalize() for part in name.split('_')]) + 'Response'
+        cls = getattr(boto.beanstalk.response, cls_name)
+        return cls(response)
+    return _wrapped_low_level_api
+
+
+class Layer1Wrapper(object):
+    def __init__(self, *args, **kwargs):
+        self.api = Layer1(*args, **kwargs)
+
+    def __getattr__(self, name):
+        try:
+            return beanstalk_wrapper(getattr(self.api, name), name)
+        except AttributeError:
+            raise AttributeError("%s has no attribute %r" % (self, name))
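
As a minimal usage sketch (editorial, not part of the patch): Layer1Wrapper above builds the response class name from the wrapped method name, so a call such as describe_applications() comes back as a DescribeApplicationsResponse object instead of a raw dict. Credentials below are placeholders.

    from boto.beanstalk.wrapper import Layer1Wrapper

    # Layer1Wrapper passes its arguments straight through to Layer1.
    beanstalk = Layer1Wrapper(aws_access_key_id='...',
                              aws_secret_access_key='...')

    # 'describe_applications' -> DescribeApplicationsResponse
    resp = beanstalk.describe_applications()
    for app in resp.applications:
        print app.application_name
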
diff --git a/boto/cloudformation/__init__.py b/boto/cloudformation/__init__.py
index 4f8e090..53a02e5 100644
--- a/boto/cloudformation/__init__.py
+++ b/boto/cloudformation/__init__.py
@@ -15,11 +15,52 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-# this is here for backward compatibility
-# originally, the SNSConnection class was defined here
 from connection import CloudFormationConnection
+from boto.regioninfo import RegionInfo
+
+RegionData = {
+    'us-east-1': 'cloudformation.us-east-1.amazonaws.com',
+    'us-west-1': 'cloudformation.us-west-1.amazonaws.com',
+    'us-west-2': 'cloudformation.us-west-2.amazonaws.com',
+    'sa-east-1': 'cloudformation.sa-east-1.amazonaws.com',
+    'eu-west-1': 'cloudformation.eu-west-1.amazonaws.com',
+    'ap-northeast-1': 'cloudformation.ap-northeast-1.amazonaws.com',
+    'ap-southeast-1': 'cloudformation.ap-southeast-1.amazonaws.com'}
+
+
+def regions():
+    """
+    Get all available regions for the CloudFormation service.
+
+    :rtype: list
+    :return: A list of :class:`boto.RegionInfo` instances
+    """
+    regions = []
+    for region_name in RegionData:
+        region = RegionInfo(name=region_name,
+                            endpoint=RegionData[region_name],
+                            connection_cls=CloudFormationConnection)
+        regions.append(region)
+    return regions
+
+
+def connect_to_region(region_name, **kw_params):
+    """
+    Given a valid region name, return a
+    :class:`boto.cloudformation.CloudFormationConnection`.
+
+    :param str region_name: The name of the region to connect to.
+
+    :rtype: :class:`boto.cloudformation.CloudFormationConnection` or ``None``
+    :return: A connection to the given region, or None if an invalid region
+        name is given
+    """
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
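
A short usage sketch for the new region helpers (illustrative only; credentials resolve from the usual boto config or environment):

    import boto.cloudformation

    # connect_to_region() returns None for an unknown region name.
    conn = boto.cloudformation.connect_to_region('us-west-2')
    if conn is None:
        raise ValueError('unknown CloudFormation region')

    for stack in conn.describe_stacks():
        print stack.stack_name, stack.stack_status
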
diff --git a/boto/cloudformation/connection.py b/boto/cloudformation/connection.py
index 59640bd..816066c 100644
--- a/boto/cloudformation/connection.py
+++ b/boto/cloudformation/connection.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -31,14 +31,16 @@
 from boto.connection import AWSQueryConnection
 from boto.regioninfo import RegionInfo
 
+
 class CloudFormationConnection(AWSQueryConnection):
 
     """
     A Connection to the CloudFormation Service.
     """
-    DefaultRegionName = 'us-east-1'
-    DefaultRegionEndpoint = 'cloudformation.us-east-1.amazonaws.com'
-    APIVersion = '2010-05-15'
+    APIVersion = boto.config.get('Boto', 'cfn_version', '2010-05-15')
+    DefaultRegionName = boto.config.get('Boto', 'cfn_region_name', 'us-east-1')
+    DefaultRegionEndpoint = boto.config.get('Boto', 'cfn_region_endpoint',
+                                            'cloudformation.us-east-1.amazonaws.com')
 
     valid_states = ("CREATE_IN_PROGRESS", "CREATE_FAILED", "CREATE_COMPLETE",
             "ROLLBACK_IN_PROGRESS", "ROLLBACK_FAILED", "ROLLBACK_COMPLETE",
@@ -47,27 +49,35 @@
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, debug=0,
-                 https_connection_factory=None, region=None, path='/', converter=None):
+                 https_connection_factory=None, region=None, path='/',
+                 converter=None, security_token=None, validate_certs=True):
         if not region:
             region = RegionInfo(self, self.DefaultRegionName,
                 self.DefaultRegionEndpoint, CloudFormationConnection)
         self.region = region
-        AWSQueryConnection.__init__(self, aws_access_key_id, aws_secret_access_key,
-                                    is_secure, port, proxy, proxy_port, proxy_user, proxy_pass,
-                                    self.region.endpoint, debug, https_connection_factory, path)
+        AWSQueryConnection.__init__(self, aws_access_key_id,
+                                    aws_secret_access_key,
+                                    is_secure, port, proxy, proxy_port,
+                                    proxy_user, proxy_pass,
+                                    self.region.endpoint, debug,
+                                    https_connection_factory, path,
+                                    security_token,
+                                    validate_certs=validate_certs)
 
     def _required_auth_capability(self):
-        return ['cloudformation']
+        return ['hmac-v4']
 
     def encode_bool(self, v):
         v = bool(v)
         return {True: "true", False: "false"}[v]
 
-    def create_stack(self, stack_name, template_body=None, template_url=None,
-            parameters=[], notification_arns=[], disable_rollback=False,
-            timeout_in_minutes=None):
+    def _build_create_or_update_params(self, stack_name, template_body,
+                                       template_url, parameters,
+                                       notification_arns, disable_rollback,
+                                       timeout_in_minutes, capabilities, tags):
         """
-        Creates a CloudFormation Stack as specified by the template.
+        Helper that creates JSON parameters needed by a Stack Create or
+        Stack Update call.
 
         :type stack_name: string
         :param stack_name: The name of the Stack, must be unique amoung running
@@ -78,28 +88,37 @@
 
         :type template_url: string
         :param template_url: An S3 URL of a stored template JSON document. If
-                            both the template_body and template_url are
-                            specified, the template_body takes precedence
+            both the template_body and template_url are
+            specified, the template_body takes precedence
 
         :type parameters: list of tuples
         :param parameters: A list of (key, value) pairs for template input
-                            parameters.
+            parameters.
 
         :type notification_arns: list of strings
         :param notification_arns: A list of SNS topics to send Stack event
-                            notifications to
+            notifications to.
 
         :type disable_rollback: bool
         :param disable_rollback: Indicates whether or not to rollback on
-                            failure
+            failure.
 
         :type timeout_in_minutes: int
         :param timeout_in_minutes: Maximum amount of time to let the Stack
-                            spend creating itself. If this timeout is exceeded,
-                            the Stack will enter the CREATE_FAILED state
+            spend creating itself. If this timeout is exceeded,
+            the Stack will enter the CREATE_FAILED state.
 
-        :rtype: string
-        :return: The unique Stack ID
+        :type capabilities: list
+        :param capabilities: The list of capabilities you want to allow in
+            the stack.  Currently, the only valid capability is
+            'CAPABILITY_IAM'.
+
+        :type tags: dict
+        :param tags: A dictionary of (key, value) pairs of tags to
+            associate with this stack.
+
+        :rtype: dict
+        :return: JSON parameters represented as a Python dict.
         """
         params = {'ContentType': "JSON", 'StackName': stack_name,
                 'DisableRollback': self.encode_bool(disable_rollback)}
@@ -112,13 +131,72 @@
                 " specified, only TemplateBody will be honored by the API")
         if len(parameters) > 0:
             for i, (key, value) in enumerate(parameters):
-                params['Parameters.member.%d.ParameterKey' % (i+1)] = key
-                params['Parameters.member.%d.ParameterValue' % (i+1)] = value
+                params['Parameters.member.%d.ParameterKey' % (i + 1)] = key
+                params['Parameters.member.%d.ParameterValue' % (i + 1)] = value
+        if capabilities:
+            for i, value in enumerate(capabilities):
+                params['Capabilities.member.%d' % (i + 1)] = value
+        if tags:
+            for i, (key, value) in enumerate(tags.items()):
+                params['Tags.member.%d.Key' % (i + 1)] = key
+                params['Tags.member.%d.Value' % (i + 1)] = value
         if len(notification_arns) > 0:
-            self.build_list_params(params, notification_arns, "NotificationARNs.member")
+            self.build_list_params(params, notification_arns,
+                                   "NotificationARNs.member")
         if timeout_in_minutes:
             params['TimeoutInMinutes'] = int(timeout_in_minutes)
+        return params
 
+    def create_stack(self, stack_name, template_body=None, template_url=None,
+            parameters=[], notification_arns=[], disable_rollback=False,
+            timeout_in_minutes=None, capabilities=None, tags=None):
+        """
+        Creates a CloudFormation Stack as specified by the template.
+
+        :type stack_name: string
+        :param stack_name: The name of the Stack, must be unique among running
+                            Stacks
+
+        :type template_body: string
+        :param template_body: The template body (JSON string)
+
+        :type template_url: string
+        :param template_url: An S3 URL of a stored template JSON document. If
+            both the template_body and template_url are
+            specified, the template_body takes precedence
+
+        :type parameters: list of tuples
+        :param parameters: A list of (key, value) pairs for template input
+            parameters.
+
+        :type notification_arns: list of strings
+        :param notification_arns: A list of SNS topics to send Stack event
+            notifications to.
+
+        :type disable_rollback: bool
+        :param disable_rollback: Indicates whether or not to rollback on
+            failure.
+
+        :type timeout_in_minutes: int
+        :param timeout_in_minutes: Maximum amount of time to let the Stack
+            spend creating itself. If this timeout is exceeded,
+            the Stack will enter the CREATE_FAILED state.
+
+        :type capabilities: list
+        :param capabilities: The list of capabilities you want to allow in
+            the stack.  Currently, the only valid capability is
+            'CAPABILITY_IAM'.
+
+        :type tags: dict
+        :param tags: A dictionary of (key, value) pairs of tags to
+            associate with this stack.
+
+        :rtype: string
+        :return: The unique Stack ID.
+        """
+        params = self._build_create_or_update_params(stack_name,
+            template_body, template_url, parameters, notification_arns,
+            disable_rollback, timeout_in_minutes, capabilities, tags)
         response = self.make_request('CreateStack', params, '/', 'POST')
         body = response.read()
         if response.status == 200:
@@ -129,6 +207,66 @@
             boto.log.error('%s' % body)
             raise self.ResponseError(response.status, response.reason, body)
 
+    def update_stack(self, stack_name, template_body=None, template_url=None,
+            parameters=[], notification_arns=[], disable_rollback=False,
+            timeout_in_minutes=None, capabilities=None, tags=None):
+        """
+        Updates a CloudFormation Stack as specified by the template.
+
+        :type stack_name: string
+        :param stack_name: The name of the Stack, must be unique among running
+            Stacks.
+
+        :type template_body: string
+        :param template_body: The template body (JSON string)
+
+        :type template_url: string
+        :param template_url: An S3 URL of a stored template JSON document. If
+            both the template_body and template_url are
+            specified, the template_body takes precedence.
+
+        :type parameters: list of tuples
+        :param parameters: A list of (key, value) pairs for template input
+            parameters.
+
+        :type notification_arns: list of strings
+        :param notification_arns: A list of SNS topics to send Stack event
+            notifications to.
+
+        :type disable_rollback: bool
+        :param disable_rollback: Indicates whether or not to rollback on
+            failure.
+
+        :type timeout_in_minutes: int
+        :param timeout_in_minutes: Maximum amount of time to let the Stack
+            spend creating itself. If this timeout is exceeded,
+            the Stack will enter the CREATE_FAILED state.
+
+        :type capabilities: list
+        :param capabilities: The list of capabilities you want to allow in
+            the stack.  Currently, the only valid capability is
+            'CAPABILITY_IAM'.
+
+        :type tags: dict
+        :param tags: A dictionary of (key, value) pairs of tags to
+            associate with this stack.
+
+        :rtype: string
+        :return: The unique Stack ID.
+        """
+        params = self._build_create_or_update_params(stack_name,
+            template_body, template_url, parameters, notification_arns,
+            disable_rollback, timeout_in_minutes, capabilities, tags)
+        response = self.make_request('UpdateStack', params, '/', 'POST')
+        body = response.read()
+        if response.status == 200:
+            body = json.loads(body)
+            return body['UpdateStackResponse']['UpdateStackResult']['StackId']
+        else:
+            boto.log.error('%s %s' % (response.status, response.reason))
+            boto.log.error('%s' % body)
+            raise self.ResponseError(response.status, response.reason, body)
+
     def delete_stack(self, stack_name_or_id):
         params = {'ContentType': "JSON", 'StackName': stack_name_or_id}
         # TODO: change this to get_status ?
@@ -153,7 +291,8 @@
     def describe_stack_resource(self, stack_name_or_id, logical_resource_id):
         params = {'ContentType': "JSON", 'StackName': stack_name_or_id,
                 'LogicalResourceId': logical_resource_id}
-        response = self.make_request('DescribeStackResource', params, '/', 'GET')
+        response = self.make_request('DescribeStackResource', params,
+                                     '/', 'GET')
         body = response.read()
         if response.status == 200:
             return json.loads(body)
@@ -172,8 +311,8 @@
             params['LogicalResourceId'] = logical_resource_id
         if physical_resource_id:
             params['PhysicalResourceId'] = physical_resource_id
-        return self.get_list('DescribeStackResources', params, [('member',
-            StackResource)])
+        return self.get_list('DescribeStackResources', params,
+                             [('member', StackResource)])
 
     def describe_stacks(self, stack_name_or_id=None):
         params = {}
@@ -196,8 +335,8 @@
         params = {'StackName': stack_name_or_id}
         if next_token:
             params['NextToken'] = next_token
-        return self.get_list('ListStackResources', params, [('member',
-            StackResourceSummary)])
+        return self.get_list('ListStackResources', params,
+                             [('member', StackResourceSummary)])
 
     def list_stacks(self, stack_status_filters=[], next_token=None):
         params = {}
@@ -207,15 +346,15 @@
             self.build_list_params(params, stack_status_filters,
                 "StackStatusFilter.member")
 
-        return self.get_list('ListStacks', params, [('member',
-            StackSummary)])
+        return self.get_list('ListStacks', params,
+                             [('member', StackSummary)])
 
     def validate_template(self, template_body=None, template_url=None):
         params = {}
         if template_body:
             params['TemplateBody'] = template_body
         if template_url:
-            params['TemplateUrl'] = template_url
+            params['TemplateURL'] = template_url
         if template_body and template_url:
             boto.log.warning("If both TemplateBody and TemplateURL are"
                 " specified, only TemplateBody will be honored by the API")
diff --git a/boto/cloudformation/stack.py b/boto/cloudformation/stack.py
index 8b9e115..9a9d63b 100644
--- a/boto/cloudformation/stack.py
+++ b/boto/cloudformation/stack.py
@@ -2,7 +2,8 @@
 
 from boto.resultset import ResultSet
 
-class Stack:
+
+class Stack(object):
     def __init__(self, connection=None):
         self.connection = connection
         self.creation_time = None
@@ -11,6 +12,8 @@
         self.notification_arns = []
         self.outputs = []
         self.parameters = []
+        self.capabilities = []
+        self.tags = []
         self.stack_id = None
         self.stack_status = None
         self.stack_name = None
@@ -24,6 +27,15 @@
         elif name == "Outputs":
             self.outputs = ResultSet([('member', Output)])
             return self.outputs
+        elif name == "Capabilities":
+            self.capabilities = ResultSet([('member', Capability)])
+            return self.capabilities
+        elif name == "Tags":
+            self.tags = Tag()
+            return self.tags
+        elif name == 'NotificationARNs':
+            self.notification_arns = ResultSet([('member', NotificationARN)])
+            return self.notification_arns
         else:
             return None
 
@@ -34,8 +46,6 @@
             self.description = value
         elif name == "DisableRollback":
             self.disable_rollback = bool(value)
-        elif name == "NotificationARNs":
-            self.notification_arns = value
         elif name == 'StackId':
             self.stack_id = value
         elif name == 'StackName':
@@ -91,7 +101,8 @@
     def get_template(self):
         return self.connection.get_template(stack_name_or_id=self.stack_id)
 
-class StackSummary:
+
+class StackSummary(object):
     def __init__(self, connection=None):
         self.connection = connection
         self.stack_id = None
@@ -122,7 +133,8 @@
         else:
             setattr(self, name, value)
 
-class Parameter:
+
+class Parameter(object):
     def __init__(self, connection=None):
         self.connection = None
         self.key = None
@@ -142,7 +154,8 @@
     def __repr__(self):
         return "Parameter:\"%s\"=\"%s\"" % (self.key, self.value)
 
-class Output:
+
+class Output(object):
     def __init__(self, connection=None):
         self.connection = connection
         self.description = None
@@ -165,7 +178,57 @@
     def __repr__(self):
         return "Output:\"%s\"=\"%s\"" % (self.key, self.value)
 
-class StackResource:
+
+class Capability(object):
+    def __init__(self, connection=None):
+        self.connection = None
+        self.value = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        self.value = value
+
+    def __repr__(self):
+        return "Capability:\"%s\"" % (self.value)
+
+
+class Tag(dict):
+
+    def __init__(self, connection=None):
+        dict.__init__(self)
+        self.connection = connection
+        self._current_key = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == "Key":
+            self._current_key = value
+        elif name == "Value":
+            self[self._current_key] = value
+        else:
+            setattr(self, name, value)
+
+
+class NotificationARN(object):
+    def __init__(self, connection=None):
+        self.connection = None
+        self.value = None
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        self.value = value
+
+    def __repr__(self):
+        return "NotificationARN:\"%s\"" % (self.value)
+
+
+class StackResource(object):
     def __init__(self, connection=None):
         self.connection = connection
         self.description = None
@@ -207,7 +270,8 @@
         return "StackResource:%s (%s)" % (self.logical_resource_id,
                 self.resource_type)
 
-class StackResourceSummary:
+
+class StackResourceSummary(object):
     def __init__(self, connection=None):
         self.connection = connection
         self.last_updated_timestamp = None
@@ -222,7 +286,7 @@
 
     def endElement(self, name, value, connection):
         if name == "LastUpdatedTimestamp":
-            self.last_updated_timestampe = datetime.strptime(value,
+            self.last_updated_timestamp = datetime.strptime(value,
                 '%Y-%m-%dT%H:%M:%SZ')
         elif name == "LogicalResourceId":
             self.logical_resource_id = value
@@ -241,7 +305,8 @@
         return "StackResourceSummary:%s (%s)" % (self.logical_resource_id,
                 self.resource_type)
 
-class StackEvent:
+
+class StackEvent(object):
     valid_states = ("CREATE_IN_PROGRESS", "CREATE_FAILED", "CREATE_COMPLETE",
             "DELETE_IN_PROGRESS", "DELETE_FAILED", "DELETE_COMPLETE")
     def __init__(self, connection=None):
diff --git a/boto/cloudfront/__init__.py b/boto/cloudfront/__init__.py
index 7f98b70..9888f50 100644
--- a/boto/cloudfront/__init__.py
+++ b/boto/cloudfront/__init__.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -30,10 +30,11 @@
 from boto.cloudfront.identity import OriginAccessIdentity
 from boto.cloudfront.identity import OriginAccessIdentitySummary
 from boto.cloudfront.identity import OriginAccessIdentityConfig
-from boto.cloudfront.invalidation import InvalidationBatch
+from boto.cloudfront.invalidation import InvalidationBatch, InvalidationSummary, InvalidationListResultSet
 from boto.resultset import ResultSet
 from boto.cloudfront.exception import CloudFrontServerError
 
+
 class CloudFrontConnection(AWSAuthConnection):
 
     DefaultHost = 'cloudfront.amazonaws.com'
@@ -41,10 +42,13 @@
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  port=None, proxy=None, proxy_port=None,
-                 host=DefaultHost, debug=0):
+                 host=DefaultHost, debug=0, security_token=None,
+                 validate_certs=True):
         AWSAuthConnection.__init__(self, host,
-                aws_access_key_id, aws_secret_access_key,
-                True, port, proxy, proxy_port, debug=debug)
+                                   aws_access_key_id, aws_secret_access_key,
+                                   True, port, proxy, proxy_port, debug=debug,
+                                   security_token=security_token,
+                                   validate_certs=validate_certs)
 
     def get_etag(self, response):
         response_headers = response.msg
@@ -57,16 +61,20 @@
         return ['cloudfront']
 
     # Generics
-    
-    def _get_all_objects(self, resource, tags):
+
+    def _get_all_objects(self, resource, tags, result_set_class=None,
+                         result_set_kwargs=None):
         if not tags:
-            tags=[('DistributionSummary', DistributionSummary)]
-        response = self.make_request('GET', '/%s/%s' % (self.Version, resource))
+            tags = [('DistributionSummary', DistributionSummary)]
+        response = self.make_request('GET', '/%s/%s' % (self.Version,
+                                                        resource))
         body = response.read()
         boto.log.debug(body)
         if response.status >= 300:
             raise CloudFrontServerError(response.status, response.reason, body)
-        rs = ResultSet(tags)
+        rs_class = result_set_class or ResultSet
+        rs_kwargs = result_set_kwargs or dict()
+        rs = rs_class(tags, **rs_kwargs)
         h = handler.XmlHandler(rs, self)
         xml.sax.parseString(body, h)
         return rs
@@ -99,24 +107,26 @@
         h = handler.XmlHandler(d, self)
         xml.sax.parseString(body, h)
         return d
-    
+
     def _set_config(self, distribution_id, etag, config):
         if isinstance(config, StreamingDistributionConfig):
             resource = 'streaming-distribution'
         else:
             resource = 'distribution'
         uri = '/%s/%s/%s/config' % (self.Version, resource, distribution_id)
-        headers = {'If-Match' : etag, 'Content-Type' : 'text/xml'}
+        headers = {'If-Match': etag, 'Content-Type': 'text/xml'}
         response = self.make_request('PUT', uri, headers, config.to_xml())
         body = response.read()
         boto.log.debug(body)
         if response.status != 200:
             raise CloudFrontServerError(response.status, response.reason, body)
         return self.get_etag(response)
-    
+
     def _create_object(self, config, resource, dist_class):
-        response = self.make_request('POST', '/%s/%s' % (self.Version, resource),
-                                     {'Content-Type' : 'text/xml'}, data=config.to_xml())
+        response = self.make_request('POST', '/%s/%s' % (self.Version,
+                                                         resource),
+                                     {'Content-Type': 'text/xml'},
+                                     data=config.to_xml())
         body = response.read()
         boto.log.debug(body)
         if response.status == 201:
@@ -127,19 +137,19 @@
             return d
         else:
             raise CloudFrontServerError(response.status, response.reason, body)
-        
+
     def _delete_object(self, id, etag, resource):
         uri = '/%s/%s/%s' % (self.Version, resource, id)
-        response = self.make_request('DELETE', uri, {'If-Match' : etag})
+        response = self.make_request('DELETE', uri, {'If-Match': etag})
         body = response.read()
         boto.log.debug(body)
         if response.status != 204:
             raise CloudFrontServerError(response.status, response.reason, body)
 
     # Distributions
-        
+
     def get_all_distributions(self):
-        tags=[('DistributionSummary', DistributionSummary)]
+        tags = [('DistributionSummary', DistributionSummary)]
         return self._get_all_objects('distribution', tags)
 
     def get_distribution_info(self, distribution_id):
@@ -148,10 +158,10 @@
     def get_distribution_config(self, distribution_id):
         return self._get_config(distribution_id, 'distribution',
                                 DistributionConfig)
-    
+
     def set_distribution_config(self, distribution_id, etag, config):
         return self._set_config(distribution_id, etag, config)
-    
+
     def create_distribution(self, origin, enabled, caller_reference='',
                             cnames=None, comment='', trusted_signers=None):
         config = DistributionConfig(origin=origin, enabled=enabled,
@@ -159,14 +169,14 @@
                                     cnames=cnames, comment=comment,
                                     trusted_signers=trusted_signers)
         return self._create_object(config, 'distribution', Distribution)
-        
+
     def delete_distribution(self, distribution_id, etag):
         return self._delete_object(distribution_id, etag, 'distribution')
 
     # Streaming Distributions
-        
+
     def get_all_streaming_distributions(self):
-        tags=[('StreamingDistributionSummary', StreamingDistributionSummary)]
+        tags = [('StreamingDistributionSummary', StreamingDistributionSummary)]
         return self._get_all_objects('streaming-distribution', tags)
 
     def get_streaming_distribution_info(self, distribution_id):
@@ -176,10 +186,10 @@
     def get_streaming_distribution_config(self, distribution_id):
         return self._get_config(distribution_id, 'streaming-distribution',
                                 StreamingDistributionConfig)
-    
+
     def set_streaming_distribution_config(self, distribution_id, etag, config):
         return self._set_config(distribution_id, etag, config)
-    
+
     def create_streaming_distribution(self, origin, enabled,
                                       caller_reference='',
                                       cnames=None, comment='',
@@ -190,14 +200,15 @@
                                              trusted_signers=trusted_signers)
         return self._create_object(config, 'streaming-distribution',
                                    StreamingDistribution)
-        
+
     def delete_streaming_distribution(self, distribution_id, etag):
-        return self._delete_object(distribution_id, etag, 'streaming-distribution')
+        return self._delete_object(distribution_id, etag,
+                                   'streaming-distribution')
 
     # Origin Access Identity
 
     def get_all_origin_access_identity(self):
-        tags=[('CloudFrontOriginAccessIdentitySummary',
+        tags = [('CloudFrontOriginAccessIdentitySummary',
                OriginAccessIdentitySummary)]
         return self._get_all_objects('origin-access-identity/cloudfront', tags)
 
@@ -209,23 +220,23 @@
         return self._get_config(access_id,
                                 'origin-access-identity/cloudfront',
                                 OriginAccessIdentityConfig)
-    
+
     def set_origin_access_identity_config(self, access_id,
                                           etag, config):
         return self._set_config(access_id, etag, config)
-    
+
     def create_origin_access_identity(self, caller_reference='', comment=''):
         config = OriginAccessIdentityConfig(caller_reference=caller_reference,
                                             comment=comment)
         return self._create_object(config, 'origin-access-identity/cloudfront',
                                    OriginAccessIdentity)
-        
+
     def delete_origin_access_identity(self, access_id, etag):
         return self._delete_object(access_id, etag,
                                    'origin-access-identity/cloudfront')
 
     # Object Invalidation
-    
+
     def create_invalidation_request(self, distribution_id, paths,
                                     caller_reference=None):
         """Creates a new invalidation request
@@ -239,7 +250,7 @@
         uri = '/%s/distribution/%s/invalidation' % (self.Version,
                                                     distribution_id)
         response = self.make_request('POST', uri,
-                                     {'Content-Type' : 'text/xml'},
+                                     {'Content-Type': 'text/xml'},
                                      data=paths.to_xml())
         body = response.read()
         if response.status == 201:
@@ -249,9 +260,12 @@
         else:
             raise CloudFrontServerError(response.status, response.reason, body)
 
-    def invalidation_request_status (self, distribution_id, request_id, caller_reference=None):
-        uri = '/%s/distribution/%s/invalidation/%s' % (self.Version, distribution_id, request_id )
-        response = self.make_request('GET', uri, {'Content-Type' : 'text/xml'})
+    def invalidation_request_status(self, distribution_id,
+                                     request_id, caller_reference=None):
+        uri = '/%s/distribution/%s/invalidation/%s' % (self.Version,
+                                                       distribution_id,
+                                                       request_id)
+        response = self.make_request('GET', uri, {'Content-Type': 'text/xml'})
         body = response.read()
         if response.status == 200:
             paths = InvalidationBatch([])
@@ -261,4 +275,50 @@
         else:
             raise CloudFrontServerError(response.status, response.reason, body)
 
+    def get_invalidation_requests(self, distribution_id, marker=None,
+                                  max_items=None):
+        """
+        Get all invalidation requests for a given CloudFront distribution.
+        This returns an instance of an InvalidationListResultSet that
+        automatically handles all of the result paging, etc. from CF - you just
+        need to keep iterating until there are no more results.
 
+        :type distribution_id: string
+        :param distribution_id: The id of the CloudFront distribution
+
+        :type marker: string
+        :param marker: Use this only when paginating results and only in
+                       follow-up request after you've received a response where
+                       the results are truncated. Set this to the value of the
+                       Marker element in the response you just received.
+
+        :type max_items: int
+        :param max_items: Use this only when paginating results and only in a
+                          follow-up request to indicate the maximum number of
+                          invalidation requests you want in the response. You
+                          will need to pass the next_marker property from the
+                          previous InvalidationListResultSet response in the
+                          follow-up request in order to get the next 'page' of
+                          results.
+
+        :rtype: :class:`boto.cloudfront.invalidation.InvalidationListResultSet`
+        :returns: An InvalidationListResultSet iterator that lists invalidation
+                  requests for a given CloudFront distribution. Automatically
+                  handles paging the results.
+        """
+        uri = 'distribution/%s/invalidation' % distribution_id
+        params = dict()
+        if marker:
+            params['Marker'] = marker
+        if max_items:
+            params['MaxItems'] = max_items
+        if params:
+            uri += '?%s=%s' % params.popitem()
+            for k, v in params.items():
+                uri += '&%s=%s' % (k, v)
+        tags = [('InvalidationSummary', InvalidationSummary)]
+        rs_class = InvalidationListResultSet
+        rs_kwargs = dict(connection=self, distribution_id=distribution_id,
+                         max_items=max_items, marker=marker)
+        return self._get_all_objects(uri, tags, result_set_class=rs_class,
+                                     result_set_kwargs=rs_kwargs)
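
A usage sketch for the new invalidation listing call (distribution id and credentials are placeholders). With max_items left unset, the returned result set pages through CloudFront transparently:

    from boto.cloudfront import CloudFrontConnection

    cf = CloudFrontConnection(aws_access_key_id='...',
                              aws_secret_access_key='...')
    for summary in cf.get_invalidation_requests('EDFDVBD6EXAMPLE'):
        print summary.id, summary.status
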
diff --git a/boto/cloudfront/distribution.py b/boto/cloudfront/distribution.py
index 01ceed4..718f2c2 100644
--- a/boto/cloudfront/distribution.py
+++ b/boto/cloudfront/distribution.py
@@ -21,7 +21,11 @@
 
 import uuid
 import base64
-import json
+import time
+try:
+    import simplejson as json
+except ImportError:
+    import json
 from boto.cloudfront.identity import OriginAccessIdentity
 from boto.cloudfront.object import Object, StreamingObject
 from boto.cloudfront.signers import ActiveTrustedSigners, TrustedSigners
@@ -508,46 +512,44 @@
 
         :type keypair_id: str
         :param keypair_id: The keypair ID of the Amazon KeyPair used to sign
-                           theURL.  This ID MUST correspond to the private key
-                           specified with private_key_file or
-                           private_key_string.
+            the URL.  This ID MUST correspond to the private key
+            specified with private_key_file or private_key_string.
 
         :type expire_time: int
         :param expire_time: The expiry time of the URL. If provided, the URL
-                            will expire after the time has passed. If not
-                            provided the URL will never expire. Format is a
-                            unix epoch. Use time.time() + duration_in_sec.
+            will expire after the time has passed. If not provided the URL will
+            never expire. Format is a unix epoch.
+            Use time.time() + duration_in_sec.
 
         :type valid_after_time: int
         :param valid_after_time: If provided, the URL will not be valid until
-                                 after valid_after_time. Format is a unix
-                                 epoch. Use time.time() + secs_until_valid.
+            after valid_after_time. Format is a unix epoch.
+            Use time.time() + secs_until_valid.
 
         :type ip_address: str
         :param ip_address: If provided, only allows access from the specified
-                           IP address.  Use '192.168.0.10' for a single IP or
-                           use '192.168.0.0/24' CIDR notation for a subnet.
+            IP address.  Use '192.168.0.10' for a single IP or
+            use '192.168.0.0/24' CIDR notation for a subnet.
 
         :type policy_url: str
         :param policy_url: If provided, allows the signature to contain
-                           wildcard globs in the URL.  For example, you could
-                           provide: 'http://example.com/media/*' and the policy
-                           and signature would allow access to all contents of
-                           the media subdirectory.  If not specified, only
-                           allow access to the exact url provided in 'url'.
+            wildcard globs in the URL.  For example, you could
+            provide: 'http://example.com/media/\*' and the policy
+            and signature would allow access to all contents of
+            the media subdirectory. If not specified, only
+            allow access to the exact url provided in 'url'.
 
         :type private_key_file: str or file object.
         :param private_key_file: If provided, contains the filename of the
-                                 private key file used for signing or an open
-                                 file object containing the private key
-                                 contents.  Only one of private_key_file or
-                                 private_key_string can be provided.
+            private key file used for signing or an open
+            file object containing the private key
+            contents.  Only one of private_key_file or
+            private_key_string can be provided.
 
         :type private_key_string: str
         :param private_key_string: If provided, contains the private key string
-                                   used for signing. Only one of
-                                   private_key_file or private_key_string can
-                                   be provided.
+            used for signing. Only one of private_key_file or
+            private_key_string can be provided.
 
         :rtype: str
         :return: The signed URL.
@@ -591,9 +593,10 @@
             if policy_url is None:
                 policy_url = url
             # Can't use canned policy
-            policy = self._custom_policy(policy_url, expires=None,
-                                         valid_after=None,
-                                         ip_address=None)
+            policy = self._custom_policy(policy_url, expires=expire_time,
+                                         valid_after=valid_after_time,
+                                         ip_address=ip_address)
+
             encoded_policy = self._url_base64_encode(policy)
             params["Policy"] = encoded_policy
         #sign the policy
@@ -620,8 +623,12 @@
         Creates a custom policy string based on the supplied parameters.
         """
         condition = {}
-        if expires:
-            condition["DateLessThan"] = {"AWS:EpochTime": expires}
+        # SEE: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/RestrictingAccessPrivateContent.html#CustomPolicy
+        # The 'DateLessThan' property is required.
+        if not expires:
+            # Defaults to ONE day
+            expires = int(time.time()) + 86400
+        condition["DateLessThan"] = {"AWS:EpochTime": expires}
         if valid_after:
             condition["DateGreaterThan"] = {"AWS:EpochTime": valid_after}
         if ip_address:
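
An illustrative sketch of create_signed_url() with the parameters documented above (distribution id, key pair id, URL and key path are placeholders). If a custom policy is used without an expire_time, the patched _custom_policy() now defaults DateLessThan to one day out:

    import time
    from boto.cloudfront import CloudFrontConnection

    cf = CloudFrontConnection(aws_access_key_id='...',
                              aws_secret_access_key='...')
    dist = cf.get_distribution_info('EDFDVBD6EXAMPLE')
    url = dist.create_signed_url(
        'http://d1234567890.cloudfront.net/media/video.mp4',
        keypair_id='PK12345EXAMPLE',
        expire_time=int(time.time()) + 3600,
        private_key_file='/path/to/pk-PK12345EXAMPLE.pem')
    print url
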
diff --git a/boto/cloudfront/invalidation.py b/boto/cloudfront/invalidation.py
index b213e65..dcc3c4c 100644
--- a/boto/cloudfront/invalidation.py
+++ b/boto/cloudfront/invalidation.py
@@ -22,6 +22,9 @@
 import uuid
 import urllib
 
+from boto.resultset import ResultSet
+
+
 class InvalidationBatch(object):
     """A simple invalidation request.
         :see: http://docs.amazonwebservices.com/AmazonCloudFront/2010-08-01/APIReference/index.html?InvalidationBatchDatatype.html
@@ -40,10 +43,13 @@
         # If we passed in a distribution,
         # then we use that as the connection object
         if distribution:
-            self.connection = connection
+            self.connection = distribution
         else:
             self.connection = connection
 
+    def __repr__(self):
+        return '<InvalidationBatch: %s>' % self.id
+
     def add(self, path):
         """Add another path to this invalidation request"""
         return self.paths.append(path)
@@ -95,3 +101,116 @@
         elif name == "CallerReference":
             self.caller_reference = value
         return None
+
+
+class InvalidationListResultSet(object):
+    """
+    A resultset for listing invalidations on a given CloudFront distribution.
+    Implements the iterator interface and transparently handles paging results
+    from CF so even if you have many thousands of invalidations on the
+    distribution you can iterate over all invalidations in a reasonably
+    efficient manner.
+    """
+    def __init__(self, markers=None, connection=None, distribution_id=None,
+                 invalidations=None, marker='', next_marker=None,
+                 max_items=None, is_truncated=False):
+        self.markers = markers or []
+        self.connection = connection
+        self.distribution_id = distribution_id
+        self.marker = marker
+        self.next_marker = next_marker
+        self.max_items = max_items
+        self.auto_paginate = max_items is None
+        self.is_truncated = is_truncated
+        self._inval_cache = invalidations or []
+
+    def __iter__(self):
+        """
+        A generator function for listing invalidation requests for a given
+        CloudFront distribution.
+        """
+        conn = self.connection
+        distribution_id = self.distribution_id
+        result_set = self
+        for inval in result_set._inval_cache:
+            yield inval
+        if not self.auto_paginate:
+            return
+        while result_set.is_truncated:
+            result_set = conn.get_invalidation_requests(distribution_id,
+                                                        marker=result_set.next_marker,
+                                                        max_items=result_set.max_items)
+            for i in result_set._inval_cache:
+                yield i
+
+    def startElement(self, name, attrs, connection):
+        for root_elem, handler in self.markers:
+            if name == root_elem:
+                obj = handler(connection, distribution_id=self.distribution_id)
+                self._inval_cache.append(obj)
+                return obj
+
+    def endElement(self, name, value, connection):
+        if name == 'IsTruncated':
+            self.is_truncated = self.to_boolean(value)
+        elif name == 'Marker':
+            self.marker = value
+        elif name == 'NextMarker':
+            self.next_marker = value
+        elif name == 'MaxItems':
+            self.max_items = int(value)
+
+    def to_boolean(self, value, true_value='true'):
+        if value == true_value:
+            return True
+        else:
+            return False
+
+class InvalidationSummary(object):
+    """
+    Represents InvalidationSummary complex type in CloudFront API that lists
+    the id and status of a given invalidation request.
+    """
+    def __init__(self, connection=None, distribution_id=None, id='',
+                 status=''):
+        self.connection = connection
+        self.distribution_id = distribution_id
+        self.id = id
+        self.status = status
+
+    def __repr__(self):
+        return '<InvalidationSummary: %s>' % self.id
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'Id':
+            self.id = value
+        elif name == 'Status':
+            self.status = value
+
+    def get_distribution(self):
+        """
+        Returns a Distribution object representing the parent CloudFront
+        distribution of the invalidation request listed in the
+        InvalidationSummary.
+
+        :rtype: :class:`boto.cloudfront.distribution.Distribution`
+        :returns: A Distribution object representing the parent CloudFront
+                  distribution  of the invalidation request listed in the
+                  InvalidationSummary
+        """
+        return self.connection.get_distribution_info(self.distribution_id)
+
+    def get_invalidation_request(self):
+        """
+        Returns an InvalidationBatch object representing the invalidation
+        request referred to in the InvalidationSummary.
+
+        :rtype: :class:`boto.cloudfront.invalidation.InvalidationBatch`
+        :returns: An InvalidationBatch object representing the invalidation
+                  request referred to by the InvalidationSummary
+        """
+        return self.connection.invalidation_request_status(
+            self.distribution_id, self.id)
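A hypothetical usage sketch for the listing classes above; it assumes the CloudFront connection exposes get_invalidation_requests() returning an InvalidationListResultSet (as the paging loop above expects), and the distribution id is a placeholder:

import boto.cloudfront

conn = boto.cloudfront.CloudFrontConnection('<access key>', '<secret key>')
# Iterate every invalidation on the distribution; paging is handled for us.
for summary in conn.get_invalidation_requests('EDFDVBD6EXAMPLE'):
    print summary.id, summary.status
    if summary.status == 'Completed':
        # Pull the full InvalidationBatch to inspect the paths it covered.
        batch = summary.get_invalidation_request()
        print batch.paths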
diff --git a/boto/cloudsearch/__init__.py b/boto/cloudsearch/__init__.py
new file mode 100644
index 0000000..9c8157a
--- /dev/null
+++ b/boto/cloudsearch/__init__.py
@@ -0,0 +1,45 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from boto.ec2.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the Amazon CloudSearch service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    import boto.cloudsearch.layer1
+    return [RegionInfo(name='us-east-1',
+                       endpoint='cloudsearch.us-east-1.amazonaws.com',
+                       connection_cls=boto.cloudsearch.layer1.Layer1),
+            ]
+
+
+def connect_to_region(region_name, **kw_params):
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
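A short usage sketch for the helpers above; credentials are assumed to come from the usual boto configuration:

import boto.cloudsearch

# Only us-east-1 is registered at this revision.
for region in boto.cloudsearch.regions():
    print region.name, region.endpoint

# Returns a Layer1 connection, or None for an unknown region name.
conn = boto.cloudsearch.connect_to_region('us-east-1')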
diff --git a/boto/cloudsearch/document.py b/boto/cloudsearch/document.py
new file mode 100644
index 0000000..64a11e0
--- /dev/null
+++ b/boto/cloudsearch/document.py
@@ -0,0 +1,150 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+try:
+    import simplejson as json
+except ImportError:
+    import json
+
+import boto.exception
+import requests
+import boto
+
+class SearchServiceException(Exception):
+    pass
+
+
+class CommitMismatchError(Exception):
+    pass
+
+
+class DocumentServiceConnection(object):
+
+    def __init__(self, domain=None, endpoint=None):
+        self.domain = domain
+        self.endpoint = endpoint
+        if not self.endpoint:
+            self.endpoint = domain.doc_service_endpoint
+        self.documents_batch = []
+        self._sdf = None
+
+    def add(self, _id, version, fields, lang='en'):
+        d = {'type': 'add', 'id': _id, 'version': version, 'lang': lang,
+            'fields': fields}
+        self.documents_batch.append(d)
+
+    def delete(self, _id, version):
+        d = {'type': 'delete', 'id': _id, 'version': version}
+        self.documents_batch.append(d)
+
+    def get_sdf(self):
+        return self._sdf if self._sdf else json.dumps(self.documents_batch)
+
+    def clear_sdf(self):
+        self._sdf = None
+        self.documents_batch = []
+
+    def add_sdf_from_s3(self, key_obj):
+        """@todo (lucas) would be nice if this could just take an s3://uri..."""
+        self._sdf = key_obj.get_contents_as_string()
+
+    def commit(self):
+        sdf = self.get_sdf()
+
+        if ': null' in sdf:
+            boto.log.error('null value in sdf detected.  This will probably '
+                'raise a 500 error.')
+            index = sdf.index(': null')
+            boto.log.error(sdf[index - 100:index + 100])
+
+        url = "http://%s/2011-02-01/documents/batch" % (self.endpoint)
+
+        request_config = {
+            'pool_connections': 20,
+            'keep_alive': True,
+            'max_retries': 5,
+            'pool_maxsize': 50
+        }
+
+        r = requests.post(url, data=sdf, config=request_config,
+            headers={'Content-Type': 'application/json'})
+
+        return CommitResponse(r, self, sdf)
+
+
+class CommitResponse(object):
+    """Wrapper for response to Cloudsearch document batch commit.
+
+    :type response: :class:`requests.models.Response`
+    :param response: Response from Cloudsearch /documents/batch API
+
+    :type doc_service: :class:`boto.cloudsearch.document.DocumentServiceConnection`
+    :param doc_service: Object containing the documents posted and methods to
+        retry
+
+    :raises: :class:`boto.exception.BotoServerError`
+    :raises: :class:`boto.cloudsearch.document.SearchServiceException`
+    """
+    def __init__(self, response, doc_service, sdf):
+        self.response = response
+        self.doc_service = doc_service
+        self.sdf = sdf
+
+        try:
+            self.content = json.loads(response.content)
+        except:
+            boto.log.error('Error indexing documents.\nResponse Content:\n{}\n\n'
+                'SDF:\n{}'.format(response.content, self.sdf))
+            raise boto.exception.BotoServerError(self.response.status_code, '',
+                body=response.content)
+
+        self.status = self.content['status']
+        if self.status == 'error':
+            self.errors = [e.get('message') for e in self.content.get('errors',
+                [])]
+        else:
+            self.errors = []
+
+        self.adds = self.content['adds']
+        self.deletes = self.content['deletes']
+        self._check_num_ops('add', self.adds)
+        self._check_num_ops('delete', self.deletes)
+
+    def _check_num_ops(self, type_, response_num):
+        """Raise exception if number of ops in response doesn't match commit
+
+        :type type_: str
+        :param type_: Type of commit operation: 'add' or 'delete'
+
+        :type response_num: int
+        :param response_num: Number of adds or deletes in the response.
+
+        :raises: :class:`boto.cloudsearch.document.CommitMismatchError`
+        """
+        commit_num = len([d for d in self.doc_service.documents_batch
+            if d['type'] == type_])
+
+        if response_num != commit_num:
+            raise CommitMismatchError(
+                'Incorrect number of {}s returned. Commit: {} Response: {}'\
+                .format(type_, commit_num, response_num))
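A hypothetical usage sketch for the document service above; the endpoint, document ids and fields are placeholders:

from boto.cloudsearch.document import DocumentServiceConnection

doc_service = DocumentServiceConnection(
    endpoint='doc-movies-example.us-east-1.cloudsearch.amazonaws.com')
doc_service.add('tt0000001', 1, {'title': 'Example', 'genre': 'documentary'})
doc_service.delete('tt0000002', 2)

response = doc_service.commit()   # POSTs the SDF batch and wraps the reply
print response.status, response.adds, response.deletes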
diff --git a/boto/cloudsearch/domain.py b/boto/cloudsearch/domain.py
new file mode 100644
index 0000000..43fcac8
--- /dev/null
+++ b/boto/cloudsearch/domain.py
@@ -0,0 +1,397 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import boto
+try:
+    import simplejson as json
+except ImportError:
+    import json
+from .optionstatus import OptionStatus
+from .optionstatus import IndexFieldStatus
+from .optionstatus import ServicePoliciesStatus
+from .optionstatus import RankExpressionStatus
+from .document import DocumentServiceConnection
+from .search import SearchConnection
+
+def handle_bool(value):
+    if value in [True, 'true', 'True', 'TRUE', 1]:
+        return True
+    return False
+
+            
+class Domain(object):
+    """
+    A Cloudsearch domain.
+
+    :ivar name: The name of the domain.
+
+    :ivar id: The internally generated unique identifier for the domain.
+
+    :ivar created: A boolean which is True if the domain is
+        created. It can take several minutes to initialize a domain
+        when CreateDomain is called. Newly created search domains are
+        returned with a False value for Created until domain creation
+        is complete
+
+    :ivar deleted: A boolean which is True if the search domain has
+        been deleted. The system must clean up resources dedicated to
+        the search domain when delete is called. Newly deleted
+        search domains are returned from list_domains with a True
+        value for deleted for several minutes until resource cleanup
+        is complete.
+
+    :ivar processing: True if processing is being done to activate the
+        current domain configuration.
+
+    :ivar num_searchable_docs: The number of documents that have been
+        submitted to the domain and indexed.
+
+    :ivar requires_index_document: True if index_documents needs to be
+        called to activate the current domain configuration.
+
+    :ivar search_instance_count: The number of search instances that are
+        available to process search requests.
+
+    :ivar search_instance_type: The instance type that is being used to
+        process search requests.
+
+    :ivar search_partition_count: The number of partitions across which
+        the search index is spread.
+    """
+
+    def __init__(self, layer1, data):
+        self.layer1 = layer1
+        self.update_from_data(data)
+
+    def update_from_data(self, data):
+        self.created = data['created']
+        self.deleted = data['deleted']
+        self.processing = data['processing']
+        self.requires_index_documents = data['requires_index_documents']
+        self.domain_id = data['domain_id']
+        self.domain_name = data['domain_name']
+        self.num_searchable_docs = data['num_searchable_docs']
+        self.search_instance_count = data['search_instance_count']
+        self.search_instance_type = data.get('search_instance_type', None)
+        self.search_partition_count = data['search_partition_count']
+        self._doc_service = data['doc_service']
+        self._search_service = data['search_service']
+
+    @property
+    def doc_service_arn(self):
+        return self._doc_service['arn']
+
+    @property
+    def doc_service_endpoint(self):
+        return self._doc_service['endpoint']
+
+    @property
+    def search_service_arn(self):
+        return self._search_service['arn']
+
+    @property
+    def search_service_endpoint(self):
+        return self._search_service['endpoint']
+
+    @property
+    def created(self):
+        return self._created
+
+    @created.setter
+    def created(self, value):
+        self._created = handle_bool(value)
+            
+    @property
+    def deleted(self):
+        return self._deleted
+
+    @deleted.setter
+    def deleted(self, value):
+        self._deleted = handle_bool(value)
+            
+    @property
+    def processing(self):
+        return self._processing
+
+    @processing.setter
+    def processing(self, value):
+        self._processing = handle_bool(value)
+            
+    @property
+    def requires_index_documents(self):
+        return self._requires_index_documents
+
+    @requires_index_documents.setter
+    def requires_index_documents(self, value):
+        self._requires_index_documents = handle_bool(value)
+            
+    @property
+    def search_partition_count(self):
+        return self._search_partition_count
+
+    @search_partition_count.setter
+    def search_partition_count(self, value):
+        self._search_partition_count = int(value)
+            
+    @property
+    def search_instance_count(self):
+        return self._search_instance_count
+
+    @search_instance_count.setter
+    def search_instance_count(self, value):
+        self._search_instance_count = int(value)
+            
+    @property
+    def num_searchable_docs(self):
+        return self._num_searchable_docs
+
+    @num_searchable_docs.setter
+    def num_searchable_docs(self, value):
+        self._num_searchable_docs = int(value)
+            
+    @property
+    def name(self):
+        return self.domain_name
+
+    @property
+    def id(self):
+        return self.domain_id
+
+    def delete(self):
+        """
+        Delete this domain and all index data associated with it.
+        """
+        return self.layer1.delete_domain(self.name)
+
+    def get_stemming(self):
+        """
+        Return a :class:`boto.cloudsearch.option.OptionStatus` object
+        representing the currently defined stemming options for
+        the domain.
+        """
+        return OptionStatus(self, None,
+                            self.layer1.describe_stemming_options,
+                            self.layer1.update_stemming_options)
+
+    def get_stopwords(self):
+        """
+        Return a :class:`boto.cloudsearch.option.OptionStatus` object
+        representing the currently defined stopword options for
+        the domain.
+        """
+        return OptionStatus(self, None,
+                            self.layer1.describe_stopword_options,
+                            self.layer1.update_stopword_options)
+
+    def get_synonyms(self):
+        """
+        Return a :class:`boto.cloudsearch.option.OptionStatus` object
+        representing the currently defined synonym options for
+        the domain.
+        """
+        return OptionStatus(self, None,
+                            self.layer1.describe_synonym_options,
+                            self.layer1.update_synonym_options)
+
+    def get_access_policies(self):
+        """
+        Return a :class:`boto.cloudsearch.option.OptionStatus` object
+        representing the currently defined access policies for
+        the domain.
+        """
+        return ServicePoliciesStatus(self, None,
+                                     self.layer1.describe_service_access_policies,
+                                     self.layer1.update_service_access_policies)
+
+    def index_documents(self):
+        """
+        Tells the search domain to start indexing its documents using
+        the latest text processing options and IndexFields. This
+        operation must be invoked to make options whose OptionStatus
+        has OptionState of RequiresIndexDocuments visible in search
+        results.
+        """
+        self.layer1.index_documents(self.name)
+
+    def get_index_fields(self, field_names=None):
+        """
+        Return a list of index fields defined for this domain.
+        """
+        data = self.layer1.describe_index_fields(self.name, field_names)
+        return [IndexFieldStatus(self, d) for d in data]
+
+    def create_index_field(self, field_name, field_type,
+        default='', facet=False, result=False, searchable=False,
+        source_attributes=[]):
+        """
+        Defines an ``IndexField``, either replacing an existing
+        definition or creating a new one.
+
+        :type field_name: string
+        :param field_name: The name of a field in the search index.
+
+        :type field_type: string
+        :param field_type: The type of field.  Valid values are
+            uint | literal | text
+
+        :type default: string or int
+        :param default: The default value for the field.  If the
+            field is of type ``uint`` this should be an integer value.
+            Otherwise, it's a string.
+
+        :type facet: bool
+        :param facet: A boolean to indicate whether facets
+            are enabled for this field or not.  Does not apply to
+            fields of type ``uint``.
+
+        :type result: bool
+        :param result: A boolean to indicate whether values
+            of this field can be returned in search results or
+            used in ranking.  Does not apply to fields of type ``uint``.
+
+        :type searchable: bool
+        :param searchable: A boolean to indicate whether search
+            is enabled for this field or not.  Applies only to fields
+            of type ``literal``.
+
+        :type source_attributes: list of dicts
+        :param source_attributes: An optional list of dicts that
+            provide information about attributes for this index field.
+            A maximum of 20 source attributes can be configured for
+            each index field.
+
+            Each item in the list is a dict with the following keys:
+
+            * data_copy - The value is a dict with the following keys:
+                * default - Optional default value if the source attribute
+                    is not specified in a document.
+                * name - The name of the document source field to add
+                    to this ``IndexField``.
+            * data_function - Identifies the transformation to apply
+                when copying data from a source attribute.
+            * data_map - The value is a dict with the following keys:
+                * cases - A dict that translates source field values
+                    to custom values.
+                * default - An optional default value to use if the
+                    source attribute is not specified in a document.
+                * name - the name of the document source field to add
+                    to this ``IndexField``
+            * data_trim_title - Trims common title words from a source
+                document attribute when populating an ``IndexField``.
+                This can be used to create an ``IndexField`` you can
+                use for sorting.  The value is a dict with the following
+                fields:
+                * default - An optional default value.
+                * language - an IETF RFC 4646 language code.
+                * separator - The separator that follows the text to trim.
+                * name - The name of the document source field to add.
+
+        :raises: BaseException, InternalException, LimitExceededException,
+            InvalidTypeException, ResourceNotFoundException
+        """
+        data = self.layer1.define_index_field(self.name, field_name,
+                                              field_type, default=default,
+                                              facet=facet, result=result,
+                                              searchable=searchable,
+                                              source_attributes=source_attributes)
+        return IndexFieldStatus(self, data,
+                                self.layer1.describe_index_fields)
+
+    def get_rank_expressions(self, rank_names=None):
+        """
+        Return a list of rank expressions defined for this domain.
+        """
+        fn = self.layer1.describe_rank_expressions
+        data = fn(self.name, rank_names)
+        return [RankExpressionStatus(self, d, fn) for d in data]
+
+    def create_rank_expression(self, name, expression):
+        """
+        Create a new rank expression.
+        
+        :type name: string
+        :param name: The name of an expression computed for ranking
+            while processing a search request.
+
+        :type expression: string
+        :param expression: The expression to evaluate for ranking
+            or thresholding while processing a search request. The
+            RankExpression syntax is based on JavaScript expressions
+            and supports:
+
+            * Integer, floating point, hex and octal literals
+            * Shortcut evaluation of logical operators such that an
+                expression a || b evaluates to the value a if a is
+                true without evaluating b at all
+            * JavaScript order of precedence for operators
+            * Arithmetic operators: + - * / %
+            * Boolean operators (including the ternary operator)
+            * Bitwise operators
+            * Comparison operators
+            * Common mathematical functions: abs ceil erf exp floor
+                lgamma ln log2 log10 max min sqrt pow
+            * Trigonometric library functions: acosh acos asinh asin
+                atanh atan cosh cos sinh sin tanh tan
+            * Random generation of a number between 0 and 1: rand
+            * Current time in epoch: time
+            * The min max functions that operate on a variable argument list
+
+            Intermediate results are calculated as double precision
+            floating point values. The final return value of a
+            RankExpression is automatically converted from floating
+            point to a 32-bit unsigned integer by rounding to the
+            nearest integer, with a natural floor of 0 and a ceiling
+            of max(uint32_t), 4294967295. Mathematical errors such as
+            dividing by 0 will fail during evaluation and return a
+            value of 0.
+
+            The source data for a RankExpression can be the name of an
+            IndexField of type uint, another RankExpression or the
+            reserved name text_relevance. The text_relevance source is
+            defined to return an integer from 0 to 1000 (inclusive) to
+            indicate how relevant a document is to the search request,
+            taking into account repetition of search terms in the
+            document and proximity of search terms to each other in
+            each matching IndexField in the document.
+
+            For more information about using rank expressions to
+            customize ranking, see the Amazon CloudSearch Developer
+            Guide.
+
+        :raises: BaseException, InternalException, LimitExceededException,
+            InvalidTypeException, ResourceNotFoundException
+        """
+        data = self.layer1.define_rank_expression(self.name, name, expression)
+        return RankExpressionStatus(self, data,
+                                    self.layer1.describe_rank_expressions)
+
+    def get_document_service(self):
+        return DocumentServiceConnection(domain=self)
+
+    def get_search_service(self):
+        return SearchConnection(domain=self)
+
+    def __repr__(self):
+        return '<Domain: %s>' % self.domain_name
+
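A hypothetical end-to-end sketch of the Domain wrapper; it assumes Layer1.create_domain() returns the domain-status dict that update_from_data() expects, and the domain and field names are placeholders:

from boto.cloudsearch.layer1 import Layer1
from boto.cloudsearch.domain import Domain

layer1 = Layer1()                              # credentials from boto config
domain = Domain(layer1, layer1.create_domain('movies'))

domain.create_index_field('title', 'text', result=True)
domain.create_index_field('year', 'uint', default=0)
domain.index_documents()                       # activate the new fields

doc_service = domain.get_document_service()
search_service = domain.get_search_service()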
diff --git a/boto/cloudsearch/layer1.py b/boto/cloudsearch/layer1.py
new file mode 100644
index 0000000..054fc32
--- /dev/null
+++ b/boto/cloudsearch/layer1.py
@@ -0,0 +1,738 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+import boto
+import boto.jsonresponse
+from boto.connection import AWSQueryConnection
+from boto.regioninfo import RegionInfo
+
+#boto.set_stream_logger('cloudsearch')
+
+
+def do_bool(val):
+    return 'true' if val in [True, 1, '1', 'true'] else 'false'
+
+
+class Layer1(AWSQueryConnection):
+
+    APIVersion = '2011-02-01'
+    DefaultRegionName = boto.config.get('Boto', 'cs_region_name', 'us-east-1')
+    DefaultRegionEndpoint = boto.config.get('Boto', 'cs_region_endpoint',
+                                            'cloudsearch.us-east-1.amazonaws.com')
+
+    def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
+                 is_secure=True, host=None, port=None,
+                 proxy=None, proxy_port=None,
+                 proxy_user=None, proxy_pass=None, debug=0,
+                 https_connection_factory=None, region=None, path='/',
+                 api_version=None, security_token=None,
+                 validate_certs=True):
+        if not region:
+            region = RegionInfo(self, self.DefaultRegionName,
+                                self.DefaultRegionEndpoint)
+        self.region = region
+        AWSQueryConnection.__init__(self, aws_access_key_id,
+                                    aws_secret_access_key,
+                                    is_secure, port, proxy, proxy_port,
+                                    proxy_user, proxy_pass,
+                                    self.region.endpoint, debug,
+                                    https_connection_factory, path,
+                                    security_token,
+                                    validate_certs=validate_certs)
+
+    def _required_auth_capability(self):
+        return ['sign-v2']
+
+    def get_response(self, doc_path, action, params, path='/',
+                     parent=None, verb='GET', list_marker=None):
+        if not parent:
+            parent = self
+        response = self.make_request(action, params, path, verb)
+        body = response.read()
+        boto.log.debug(body)
+        if response.status == 200:
+            e = boto.jsonresponse.Element(
+                list_marker=list_marker if list_marker else 'Set',
+                pythonize_name=True)
+            h = boto.jsonresponse.XmlHandler(e, parent)
+            h.parse(body)
+            inner = e
+            for p in doc_path:
+                inner = inner.get(p)
+            if not inner:
+                return None if list_marker is None else []
+            if isinstance(inner, list):
+                return [dict(**i) for i in inner]
+            else:
+                return dict(**inner)
+        else:
+            raise self.ResponseError(response.status, response.reason, body)
+
+    def create_domain(self, domain_name):
+        """
+        Create a new search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :raises: BaseException, InternalException, LimitExceededException
+        """
+        doc_path = ('create_domain_response',
+                    'create_domain_result',
+                    'domain_status')
+        params = {'DomainName': domain_name}
+        return self.get_response(doc_path, 'CreateDomain',
+                                 params, verb='POST')
+
+    def define_index_field(self, domain_name, field_name, field_type,
+                           default='', facet=False, result=False,
+                           searchable=False, source_attributes=None):
+        """
+        Defines an ``IndexField``, either replacing an existing
+        definition or creating a new one.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type field_name: string
+        :param field_name: The name of a field in the search index.
+
+        :type field_type: string
+        :param field_type: The type of field.  Valid values are
+            uint | literal | text
+
+        :type default: string or int
+        :param default: The default value for the field.  If the
+            field is of type ``uint`` this should be an integer value.
+            Otherwise, it's a string.
+
+        :type facet: bool
+        :param facet: A boolean to indicate whether facets
+            are enabled for this field or not.  Does not apply to
+            fields of type ``uint``.
+
+        :type result: bool
+        :param result: A boolean to indicate whether values
+            of this field can be returned in search results or
+            used in ranking.  Does not apply to fields of type ``uint``.
+
+        :type searchable: bool
+        :param searchable: A boolean to indicate whether search
+            is enabled for this field or not.  Applies only to fields
+            of type ``literal``.
+
+        :type source_attributes: list of dicts
+        :param source_attributes: An optional list of dicts that
+            provide information about attributes for this index field.
+            A maximum of 20 source attributes can be configured for
+            each index field.
+
+            Each item in the list is a dict with the following keys:
+
+            * data_copy - The value is a dict with the following keys:
+                * default - Optional default value if the source attribute
+                    is not specified in a document.
+                * name - The name of the document source field to add
+                    to this ``IndexField``.
+            * data_function - Identifies the transformation to apply
+                when copying data from a source attribute.
+            * data_map - The value is a dict with the following keys:
+                * cases - A dict that translates source field values
+                    to custom values.
+                * default - An optional default value to use if the
+                    source attribute is not specified in a document.
+                * name - the name of the document source field to add
+                    to this ``IndexField``
+            * data_trim_title - Trims common title words from a source
+                document attribute when populating an ``IndexField``.
+                This can be used to create an ``IndexField`` you can
+                use for sorting.  The value is a dict with the following
+                fields:
+                * default - An optional default value.
+                * language - an IETF RFC 4646 language code.
+                * separator - The separator that follows the text to trim.
+                * name - The name of the document source field to add.
+
+        :raises: BaseException, InternalException, LimitExceededException,
+            InvalidTypeException, ResourceNotFoundException
+        """
+        doc_path = ('define_index_field_response',
+                    'define_index_field_result',
+                    'index_field')
+        params = {'DomainName': domain_name,
+                  'IndexField.IndexFieldName': field_name,
+                  'IndexField.IndexFieldType': field_type}
+        if field_type == 'literal':
+            params['IndexField.LiteralOptions.DefaultValue'] = default
+            params['IndexField.LiteralOptions.FacetEnabled'] = do_bool(facet)
+            params['IndexField.LiteralOptions.ResultEnabled'] = do_bool(result)
+            params['IndexField.LiteralOptions.SearchEnabled'] = do_bool(searchable)
+        elif field_type == 'uint':
+            params['IndexField.UIntOptions.DefaultValue'] = default
+        elif field_type == 'text':
+            params['IndexField.TextOptions.DefaultValue'] = default
+            params['IndexField.TextOptions.FacetEnabled'] = do_bool(facet)
+            params['IndexField.TextOptions.ResultEnabled'] = do_bool(result)
+
+        return self.get_response(doc_path, 'DefineIndexField',
+                                 params, verb='POST')
+
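    # Worked example (illustrative only, not part of the patch): a call such as
    #   define_index_field('movies', 'genre', 'literal',
    #                      facet=True, result=True, searchable=True)
    # makes the 'literal' branch above build these query parameters:
    #   {'DomainName': 'movies',
    #    'IndexField.IndexFieldName': 'genre',
    #    'IndexField.IndexFieldType': 'literal',
    #    'IndexField.LiteralOptions.DefaultValue': '',
    #    'IndexField.LiteralOptions.FacetEnabled': 'true',
    #    'IndexField.LiteralOptions.ResultEnabled': 'true',
    #    'IndexField.LiteralOptions.SearchEnabled': 'true'}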
+    def define_rank_expression(self, domain_name, rank_name, rank_expression):
+        """
+        Defines a RankExpression, either replacing an existing
+        definition or creating a new one.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type rank_name: string
+        :param rank_name: The name of an expression computed for ranking
+            while processing a search request.
+
+        :type rank_expression: string
+        :param rank_expression: The expression to evaluate for ranking
+            or thresholding while processing a search request. The
+            RankExpression syntax is based on JavaScript expressions
+            and supports:
+
+            * Integer, floating point, hex and octal literals
+            * Shortcut evaluation of logical operators such that an
+                expression a || b evaluates to the value a if a is
+                true without evaluating b at all
+            * JavaScript order of precedence for operators
+            * Arithmetic operators: + - * / %
+            * Boolean operators (including the ternary operator)
+            * Bitwise operators
+            * Comparison operators
+            * Common mathematical functions: abs ceil erf exp floor
+                lgamma ln log2 log10 max min sqrt pow
+            * Trigonometric library functions: acosh acos asinh asin
+                atanh atan cosh cos sinh sin tanh tan
+            * Random generation of a number between 0 and 1: rand
+            * Current time in epoch: time
+            * The min max functions that operate on a variable argument list
+
+            Intermediate results are calculated as double precision
+            floating point values. The final return value of a
+            RankExpression is automatically converted from floating
+            point to a 32-bit unsigned integer by rounding to the
+            nearest integer, with a natural floor of 0 and a ceiling
+            of max(uint32_t), 4294967295. Mathematical errors such as
+            dividing by 0 will fail during evaluation and return a
+            value of 0.
+
+            The source data for a RankExpression can be the name of an
+            IndexField of type uint, another RankExpression or the
+            reserved name text_relevance. The text_relevance source is
+            defined to return an integer from 0 to 1000 (inclusive) to
+            indicate how relevant a document is to the search request,
+            taking into account repetition of search terms in the
+            document and proximity of search terms to each other in
+            each matching IndexField in the document.
+
+            For more information about using rank expressions to
+            customize ranking, see the Amazon CloudSearch Developer
+            Guide.
+
+        :raises: BaseException, InternalException, LimitExceededException,
+            InvalidTypeException, ResourceNotFoundException
+        """
+        doc_path = ('define_rank_expression_response',
+                    'define_rank_expression_result',
+                    'rank_expression')
+        params = {'DomainName': domain_name,
+                  'RankExpression.RankExpression': rank_expression,
+                  'RankExpression.RankName': rank_name}
+        return self.get_response(doc_path, 'DefineRankExpression',
+                                 params, verb='POST')
+
+    def delete_domain(self, domain_name):
+        """
+        Delete a search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :raises: BaseException, InternalException
+        """
+        doc_path = ('delete_domain_response',
+                    'delete_domain_result',
+                    'domain_status')
+        params = {'DomainName': domain_name}
+        return self.get_response(doc_path, 'DeleteDomain',
+                                 params, verb='POST')
+
+    def delete_index_field(self, domain_name, field_name):
+        """
+        Deletes an existing ``IndexField`` from the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type field_name: string
+        :param field_name: A string that represents the name of
+            an index field. Field names must begin with a letter and
+            can contain the following characters: a-z (lowercase),
+            0-9, and _ (underscore). Uppercase letters and hyphens are
+            not allowed. The names "body", "docid", and
+            "text_relevance" are reserved and cannot be specified as
+            field or rank expression names.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('delete_index_field_response',
+                    'delete_index_field_result',
+                    'index_field')
+        params = {'DomainName': domain_name,
+                  'IndexFieldName': field_name}
+        return self.get_response(doc_path, 'DeleteIndexField',
+                                 params, verb='POST')
+
+    def delete_rank_expression(self, domain_name, rank_name):
+        """
+        Deletes an existing ``RankExpression`` from the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type rank_name: string
+        :param rank_name: Name of the ``RankExpression`` to delete.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('delete_rank_expression_response',
+                    'delete_rank_expression_result',
+                    'rank_expression')
+        params = {'DomainName': domain_name, 'RankName': rank_name}
+        return self.get_response(doc_path, 'DeleteRankExpression',
+                                 params, verb='POST')
+
+    def describe_default_search_field(self, domain_name):
+        """
+        Describes options defining the default search field used by
+        indexing for the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('describe_default_search_field_response',
+                    'describe_default_search_field_result',
+                    'default_search_field')
+        params = {'DomainName': domain_name}
+        return self.get_response(doc_path, 'DescribeDefaultSearchField',
+                                 params, verb='POST')
+
+    def describe_domains(self, domain_names=None):
+        """
+        Describes the domains (optionally limited to one or more
+        domains by name) owned by this account.
+
+        :type domain_names: list
+        :param domain_names: Limits the response to the specified domains.
+
+        :raises: BaseException, InternalException
+        """
+        doc_path = ('describe_domains_response',
+                    'describe_domains_result',
+                    'domain_status_list')
+        params = {}
+        if domain_names:
+            for i, domain_name in enumerate(domain_names, 1):
+                params['DomainNames.member.%d' % i] = domain_name
+        return self.get_response(doc_path, 'DescribeDomains',
+                                 params, verb='POST',
+                                 list_marker='DomainStatusList')
+
+    def describe_index_fields(self, domain_name, field_names=None):
+        """
+        Describes index fields in the search domain, optionally
+        limited to a single ``IndexField``.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type field_names: list
+        :param field_names: Limits the response to the specified fields.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('describe_index_fields_response',
+                    'describe_index_fields_result',
+                    'index_fields')
+        params = {'DomainName': domain_name}
+        if field_names:
+            for i, field_name in enumerate(field_names, 1):
+                params['FieldNames.member.%d' % i] = field_name
+        return self.get_response(doc_path, 'DescribeIndexFields',
+                                 params, verb='POST',
+                                 list_marker='IndexFields')
+
+    def describe_rank_expressions(self, domain_name, rank_names=None):
+        """
+        Describes RankExpressions in the search domain, optionally
+        limited to a single expression.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type rank_names: list
+        :param rank_names: Limit response to the specified rank names.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('describe_rank_expressions_response',
+                    'describe_rank_expressions_result',
+                    'rank_expressions')
+        params = {'DomainName': domain_name}
+        if rank_names:
+            for i, rank_name in enumerate(rank_names, 1):
+                params['RankNames.member.%d' % i] = rank_name
+        return self.get_response(doc_path, 'DescribeRankExpressions',
+                                 params, verb='POST',
+                                 list_marker='RankExpressions')
+
+    def describe_service_access_policies(self, domain_name):
+        """
+        Describes the resource-based policies controlling access to
+        the services in this search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('describe_service_access_policies_response',
+                    'describe_service_access_policies_result',
+                    'access_policies')
+        params = {'DomainName': domain_name}
+        return self.get_response(doc_path, 'DescribeServiceAccessPolicies',
+                                 params, verb='POST')
+
+    def describe_stemming_options(self, domain_name):
+        """
+        Describes stemming options used by indexing for the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('describe_stemming_options_response',
+                    'describe_stemming_options_result',
+                    'stems')
+        params = {'DomainName': domain_name}
+        return self.get_response(doc_path, 'DescribeStemmingOptions',
+                                 params, verb='POST')
+
+    def describe_stopword_options(self, domain_name):
+        """
+        Describes stopword options used by indexing for the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('describe_stopword_options_response',
+                    'describe_stopword_options_result',
+                    'stopwords')
+        params = {'DomainName': domain_name}
+        return self.get_response(doc_path, 'DescribeStopwordOptions',
+                                 params, verb='POST')
+
+    def describe_synonym_options(self, domain_name):
+        """
+        Describes synonym options used by indexing for the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('describe_synonym_options_response',
+                    'describe_synonym_options_result',
+                    'synonyms')
+        params = {'DomainName': domain_name}
+        return self.get_response(doc_path, 'DescribeSynonymOptions',
+                                 params, verb='POST')
+
+    def index_documents(self, domain_name):
+        """
+        Tells the search domain to start scanning its documents using
+        the latest text processing options and ``IndexFields``.  This
+        operation must be invoked to make visible in searches any
+        options whose ``OptionStatus`` has ``OptionState`` of
+        ``RequiresIndexDocuments``.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :raises: BaseException, InternalException, ResourceNotFoundException
+        """
+        doc_path = ('index_documents_response',
+                    'index_documents_result',
+                    'field_names')
+        params = {'DomainName': domain_name}
+        return self.get_response(doc_path, 'IndexDocuments', params,
+                                 verb='POST', list_marker='FieldNames')
+
+    def update_default_search_field(self, domain_name, default_search_field):
+        """
+        Updates options defining the default search field used by
+        indexing for the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type default_search_field: string
+        :param default_search_field: The IndexField to use for search
+            requests issued with the q parameter. The default is an
+            empty string, which automatically searches all text
+            fields.
+
+        :raises: BaseException, InternalException, InvalidTypeException,
+            ResourceNotFoundException
+        """
+        doc_path = ('update_default_search_field_response',
+                    'update_default_search_field_result',
+                    'default_search_field')
+        params = {'DomainName': domain_name,
+                  'DefaultSearchField': default_search_field}
+        return self.get_response(doc_path, 'UpdateDefaultSearchField',
+                                 params, verb='POST')
+
+    def update_service_access_policies(self, domain_name, access_policies):
+        """
+        Updates the policies controlling access to the services in
+        this search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type access_policies: string
+        :param access_policies: An IAM access policy as described in
+            The Access Policy Language in Using AWS Identity and
+            Access Management. The maximum size of an access policy
+            document is 100KB.
+
+        :raises: BaseException, InternalException, LimitExceededException,
+            ResourceNotFoundException, InvalidTypeException
+        """
+        doc_path = ('update_service_access_policies_response',
+                    'update_service_access_policies_result',
+                    'access_policies')
+        params = {'AccessPolicies': access_policies,
+                  'DomainName': domain_name}
+        return self.get_response(doc_path, 'UpdateServiceAccessPolicies',
+                                 params, verb='POST')
+
+    def update_stemming_options(self, domain_name, stems):
+        """
+        Updates stemming options used by indexing for the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type stems: string
+        :param stems: Maps terms to their stems.  The JSON object
+            has a single key called "stems" whose value is a
+            dict mapping terms to their stems. The maximum size
+            of a stemming document is 500KB.
+            Example: {"stems":{"people": "person", "walking":"walk"}}
+
+        :raises: BaseException, InternalException, InvalidTypeException,
+            LimitExceededException, ResourceNotFoundException
+        """
+        doc_path = ('update_stemming_options_response',
+                    'update_stemming_options_result',
+                    'stems')
+        params = {'DomainName': domain_name,
+                  'Stems': stems}
+        return self.get_response(doc_path, 'UpdateStemmingOptions',
+                                 params, verb='POST')
+
+    def update_stopword_options(self, domain_name, stopwords):
+        """
+        Updates stopword options used by indexing for the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type stopwords: string
+        :param stopwords: Lists stopwords in a JSON object. The object has a
+            single key called "stopwords" whose value is an array of strings.
+            The maximum size of a stopwords document is 10KB. Example:
+            {"stopwords": ["a", "an", "the", "of"]}
+
+        :raises: BaseException, InternalException, InvalidTypeException,
+            LimitExceededException, ResourceNotFoundException
+        """
+        doc_path = ('update_stopword_options_response',
+                    'update_stopword_options_result',
+                    'stopwords')
+        params = {'DomainName': domain_name,
+                  'Stopwords': stopwords}
+        return self.get_response(doc_path, 'UpdateStopwordOptions',
+                                 params, verb='POST')
+
+    def update_synonym_options(self, domain_name, synonyms):
+        """
+        Updates synonym options used by indexing for the search domain.
+
+        :type domain_name: string
+        :param domain_name: A string that represents the name of a
+            domain. Domain names must be unique across the domains
+            owned by an account within an AWS region. Domain names
+            must start with a letter or number and can contain the
+            following characters: a-z (lowercase), 0-9, and -
+            (hyphen). Uppercase letters and underscores are not
+            allowed.
+
+        :type synonyms: string
+        :param synonyms: Maps terms to their synonyms.  The JSON object
+            has a single key "synonyms" whose value is a dict mapping terms
+            to their synonyms. Each synonym is a simple string or an
+            array of strings. The maximum size of a synonyms document
+            is 100KB. Example:
+            {"synonyms": {"cat": ["feline", "kitten"], "puppy": "dog"}}
+
+        :raises: BaseException, InternalException, InvalidTypeException,
+            LimitExceededException, ResourceNotFoundException
+        """
+        doc_path = ('update_synonym_options_response',
+                    'update_synonym_options_result',
+                    'synonyms')
+        params = {'DomainName': domain_name,
+                  'Synonyms': synonyms}
+        return self.get_response(doc_path, 'UpdateSynonymOptions',
+                                 params, verb='POST')
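
The option-update methods above all take a domain name plus the option value serialized as a JSON string. As a loose usage sketch (not part of this patch; it assumes a domain named 'demo' already exists and that AWS credentials are resolved by the usual boto mechanisms):

    import json
    from boto.cloudsearch.layer2 import Layer2

    cs = Layer2()          # Layer2 wraps a Layer1 connection
    layer1 = cs.layer1

    # Stopwords and stems are passed as JSON strings, as described above.
    layer1.update_stopword_options(
        'demo', json.dumps({'stopwords': ['a', 'an', 'the', 'of']}))
    layer1.update_stemming_options(
        'demo', json.dumps({'stems': {'people': 'person', 'walking': 'walk'}}))

    # New options only affect search results after re-indexing.
    layer1.index_documents('demo')
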
diff --git a/boto/cloudsearch/layer2.py b/boto/cloudsearch/layer2.py
new file mode 100644
index 0000000..af5c4d1
--- /dev/null
+++ b/boto/cloudsearch/layer2.py
@@ -0,0 +1,67 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from .layer1 import Layer1
+from .domain import Domain
+
+
+class Layer2(object):
+
+    def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
+                 is_secure=True, port=None, proxy=None, proxy_port=None,
+                 host=None, debug=0, session_token=None, region=None,
+                 validate_certs=True):
+        self.layer1 = Layer1(aws_access_key_id, aws_secret_access_key,
+                             is_secure, port, proxy, proxy_port,
+                             host, debug, session_token, region,
+                             validate_certs=validate_certs)
+
+    def list_domains(self, domain_names=None):
+        """
+        Return a list of :class:`boto.cloudsearch.domain.Domain`
+        objects for each domain defined in the current account.
+        """
+        domain_data = self.layer1.describe_domains(domain_names)
+        return [Domain(self.layer1, data) for data in domain_data]
+
+    def create_domain(self, domain_name):
+        """
+        Create a new CloudSearch domain and return the corresponding
+        :class:`boto.cloudsearch.domain.Domain` object.
+        """
+        data = self.layer1.create_domain(domain_name)
+        return Domain(self.layer1, data)
+
+    def lookup(self, domain_name):
+        """
+        Lookup a single domain.
+
+        :type domain_name: str
+        :param domain_name: The name of the domain to look up
+        :rtype: :class:`boto.cloudsearch.domain.Domain`
+        :return: The Domain object, or None if the domain isn't found
+        """
+        domains = self.list_domains(domain_names=[domain_name])
+        if len(domains) > 0:
+            return domains[0]
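
Layer2 is a thin convenience wrapper over Layer1 that deals in Domain objects. A minimal sketch of how it might be used (the domain name is illustrative and credentials are assumed to come from the environment or boto config):

    from boto.cloudsearch.layer2 import Layer2

    cs = Layer2()
    domain = cs.lookup('demo')        # returns None if the domain doesn't exist
    if domain is None:
        domain = cs.create_domain('demo')
    for d in cs.list_domains():
        print(d.name)
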
diff --git a/boto/cloudsearch/optionstatus.py b/boto/cloudsearch/optionstatus.py
new file mode 100644
index 0000000..869d82f
--- /dev/null
+++ b/boto/cloudsearch/optionstatus.py
@@ -0,0 +1,249 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import time
+
+try:
+    import simplejson as json
+except ImportError:
+    import json
+
+class OptionStatus(dict):
+    """
+    Presents a combination of status fields (defined below), which are
+    accessed as attributes, and option values, which are stored in the
+    native Python dictionary.  In this class, the option values are
+    merged from a JSON object that is stored as the Option part of
+    the object.
+
+    :ivar domain_name: The name of the domain this option is associated with.
+    :ivar creation_date: A timestamp for when this option was created.
+    :ivar state: The state of processing a change to an option.
+        Possible values:
+
+        * RequiresIndexDocuments: the option's latest value will not
+          be visible in searches until IndexDocuments has been called
+          and indexing is complete.
+        * Processing: the option's latest value is not yet visible in
+          all searches but is in the process of being activated.
+        * Active: the option's latest value is completely visible.
+
+    :ivar update_date: A timestamp for when this option was updated.
+    :ivar update_version: A unique integer that indicates when this
+        option was last updated.
+    """
+
+    def __init__(self, domain, data=None, refresh_fn=None, save_fn=None):
+        self.domain = domain
+        self.refresh_fn = refresh_fn
+        self.save_fn = save_fn
+        self.refresh(data)
+
+    def _update_status(self, status):
+        self.creation_date = status['creation_date']
+        self.state = status['state']
+        self.update_date = status['update_date']
+        self.update_version = int(status['update_version'])
+
+    def _update_options(self, options):
+        if options:
+            self.update(json.loads(options))
+
+    def refresh(self, data=None):
+        """
+        Refresh the local state of the object.  You can either pass
+        new state data in as the parameter ``data`` or, if that parameter
+        is omitted, the state data will be retrieved from CloudSearch.
+        """
+        if not data:
+            if self.refresh_fn:
+                data = self.refresh_fn(self.domain.name)
+        if data:
+            self._update_status(data['status'])
+            self._update_options(data['options'])
+
+    def to_json(self):
+        """
+        Return the JSON representation of the options as a string.
+        """
+        return json.dumps(self)
+
+    def startElement(self, name, attrs, connection):
+        return None
+
+    def endElement(self, name, value, connection):
+        if name == 'CreationDate':
+            self.created = value
+        elif name == 'State':
+            self.state = value
+        elif name == 'UpdateDate':
+            self.updated = value
+        elif name == 'UpdateVersion':
+            self.update_version = int(value)
+        elif name == 'Options':
+            self._update_options(value)
+        else:
+            setattr(self, name, value)
+
+    def save(self):
+        """
+        Write the current state of the local object back to the
+        CloudSearch service.
+        """
+        if self.save_fn:
+            data = self.save_fn(self.domain.name, self.to_json())
+            self.refresh(data)
+
+    def wait_for_state(self, state):
+        """
+        Performs polling of CloudSearch to wait for the ``state``
+        of this object to change to the provided state.
+        """
+        while self.state != state:
+            time.sleep(5)
+            self.refresh()
+
+
+class IndexFieldStatus(OptionStatus):
+
+    def _update_options(self, options):
+        self.update(options)
+
+    def save(self):
+        pass
+
+
+class RankExpressionStatus(IndexFieldStatus):
+
+    pass
+
+class ServicePoliciesStatus(OptionStatus):
+
+    def new_statement(self, arn, ip):
+        """
+        Returns a new policy statement that will allow
+        access to the service described by ``arn`` by the
+        ip specified in ``ip``.
+
+        :type arn: string
+        :param arn: The Amazon Resource Name (ARN) of the
+            service you wish to provide access to.  This would be
+            either the search service or the document service.
+
+        :type ip: string
+        :param ip: An IP address or CIDR block you wish to grant access
+            to.
+        """
+        return {
+                    "Effect":"Allow",
+                    "Action":"*",  # Docs say use GET, but denies unless *
+                    "Resource": arn,
+                    "Condition": {
+                        "IpAddress": {
+                            "aws:SourceIp": [ip]
+                            }
+                        }
+                    }
+
+    def _allow_ip(self, arn, ip):
+        if 'Statement' not in self:
+            s = self.new_statement(arn, ip)
+            self['Statement'] = [s]
+            self.save()
+        else:
+            add_statement = True
+            for statement in self['Statement']:
+                if statement['Resource'] == arn:
+                    for condition_name in statement['Condition']:
+                        if condition_name == 'IpAddress':
+                            add_statement = False
+                            condition = statement['Condition'][condition_name]
+                            if ip not in condition['aws:SourceIp']:
+                                condition['aws:SourceIp'].append(ip)
+
+            if add_statement:
+                s = self.new_statement(arn, ip)
+                self['Statement'].append(s)
+            self.save()
+
+    def allow_search_ip(self, ip):
+        """
+        Add the provided ip address or CIDR block to the list of
+        allowable addresses for the search service.
+
+        :type ip: string
+        :param ip: An IP address or CIDR block you wish to grant access
+            to.
+        """
+        arn = self.domain.search_service_arn
+        self._allow_ip(arn, ip)
+
+    def allow_doc_ip(self, ip):
+        """
+        Add the provided ip address or CIDR block to the list of
+        allowable addresses for the document service.
+
+        :type ip: string
+        :param ip: An IP address or CIDR block you wish to grant access
+            to.
+        """
+        arn = self.domain.doc_service_arn
+        self._allow_ip(arn, ip)
+
+    def _disallow_ip(self, arn, ip):
+        if 'Statement' not in self:
+            return
+        need_update = False
+        for statement in self['Statement']:
+            if statement['Resource'] == arn:
+                for condition_name in statement['Condition']:
+                    if condition_name == 'IpAddress':
+                        condition = statement['Condition'][condition_name]
+                        if ip in condition['aws:SourceIp']:
+                            condition['aws:SourceIp'].remove(ip)
+                            need_update = True
+        if need_update:
+            self.save()
+
+    def disallow_search_ip(self, ip):
+        """
+        Remove the provided ip address or CIDR block from the list of
+        allowable addresses for the search service.
+
+        :type ip: string
+        :param ip: An IP address or CIDR block whose access you wish
+            to revoke.
+        """
+        arn = self.domain.search_service_arn
+        self._disallow_ip(arn, ip)
+
+    def disallow_doc_ip(self, ip):
+        """
+        Remove the provided ip address or CIDR block from the list of
+        allowable addresses for the document service.
+
+        :type ip: string
+        :param ip: An IP address or CIDR block whose access you wish
+            to revoke.
+        """
+        arn = self.domain.doc_service_arn
+        self._disallow_ip(arn, ip)
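
Because OptionStatus subclasses dict, the access-policy helpers above can be exercised without talking to AWS by stubbing out the pieces they need. The FakeDomain below is a hypothetical stand-in (not part of the patch); it only supplies the attributes ServicePoliciesStatus reads, and the ARNs are illustrative values:

    from boto.cloudsearch.optionstatus import ServicePoliciesStatus

    class FakeDomain(object):
        name = 'demo'
        # Illustrative ARNs; real values come from the domain description.
        search_service_arn = 'arn:aws:cs:us-east-1:123456789012:search/demo'
        doc_service_arn = 'arn:aws:cs:us-east-1:123456789012:doc/demo'

    status = ServicePoliciesStatus(FakeDomain(), data=None,
                                   refresh_fn=None, save_fn=None)
    status.allow_search_ip('192.0.2.0/24')   # builds an Allow statement in place
    print(status.to_json())                  # JSON policy document, ready for save()
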
diff --git a/boto/cloudsearch/search.py b/boto/cloudsearch/search.py
new file mode 100644
index 0000000..f1b16e4
--- /dev/null
+++ b/boto/cloudsearch/search.py
@@ -0,0 +1,298 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+from math import ceil
+import time
+import json
+import boto
+import requests
+
+
+class SearchServiceException(Exception):
+    pass
+
+
+class CommitMismatchError(Exception):
+    pass
+
+
+class SearchResults(object):
+    
+    def __init__(self, **attrs):
+        self.rid = attrs['info']['rid']
+        # self.doc_coverage_pct = attrs['info']['doc-coverage-pct']
+        self.cpu_time_ms = attrs['info']['cpu-time-ms']
+        self.time_ms = attrs['info']['time-ms']
+        self.hits = attrs['hits']['found']
+        self.docs = attrs['hits']['hit']
+        self.start = attrs['hits']['start']
+        self.rank = attrs['rank']
+        self.match_expression = attrs['match-expr']
+        self.query = attrs['query']
+        self.search_service = attrs['search_service']
+
+        self.num_pages_needed = ceil(self.hits / float(self.query.real_size))
+
+    def __len__(self):
+        return len(self.docs)
+
+    def __iter__(self):
+        return iter(self.docs)
+
+    def next_page(self):
+        """Call Cloudsearch to get the next page of search results
+
+        :rtype: :class:`boto.cloudsearch.search.SearchResults`
+        :return: A cloudsearch SearchResults object
+        """
+        if self.query.page <= self.num_pages_needed:
+            self.query.start += self.query.real_size
+            self.query.page += 1
+            return self.search_service(self.query)
+        else:
+            raise StopIteration
+
+
+class Query(object):
+    
+    RESULTS_PER_PAGE = 500
+
+    def __init__(self, q=None, bq=None, rank=None,
+                 return_fields=None, size=10,
+                 start=0, facet=None, facet_constraints=None,
+                 facet_sort=None, facet_top_n=None, t=None):
+
+        self.q = q
+        self.bq = bq
+        self.rank = rank or []
+        self.return_fields = return_fields or []
+        self.start = start
+        self.facet = facet or []
+        self.facet_constraints = facet_constraints or {}
+        self.facet_sort = facet_sort or {}
+        self.facet_top_n = facet_top_n or {}
+        self.t = t or {}
+        self.page = 0
+        self.update_size(size)
+
+    def update_size(self, new_size):
+        self.size = new_size
+        self.real_size = Query.RESULTS_PER_PAGE if (self.size >
+            Query.RESULTS_PER_PAGE or self.size == 0) else self.size
+
+    def to_params(self):
+        """Transform search parameters from instance properties to a dictionary
+
+        :rtype: dict
+        :return: search parameters
+        """
+        params = {'start': self.start, 'size': self.real_size}
+
+        if self.q:
+            params['q'] = self.q
+
+        if self.bq:
+            params['bq'] = self.bq
+
+        if self.rank:
+            params['rank'] = ','.join(self.rank)
+
+        if self.return_fields:
+            params['return-fields'] = ','.join(self.return_fields)
+
+        if self.facet:
+            params['facet'] = ','.join(self.facet)
+
+        if self.facet_constraints:
+            for k, v in self.facet_constraints.iteritems():
+                params['facet-%s-constraints' % k] = v
+
+        if self.facet_sort:
+            for k, v in self.facet_sort.iteritems():
+                params['facet-%s-sort' % k] = v
+
+        if self.facet_top_n:
+            for k, v in self.facet_top_n.iteritems():
+                params['facet-%s-top-n' % k] = v
+
+        if self.t:
+            for k, v in self.t.iteritems():
+                params['t-%s' % k] = v
+        return params
+
+
+class SearchConnection(object):
+    
+    def __init__(self, domain=None, endpoint=None):
+        self.domain = domain
+        self.endpoint = endpoint
+        if not endpoint:
+            self.endpoint = domain.search_service_endpoint
+
+    def build_query(self, q=None, bq=None, rank=None, return_fields=None,
+                    size=10, start=0, facet=None, facet_constraints=None,
+                    facet_sort=None, facet_top_n=None, t=None):
+        return Query(q=q, bq=bq, rank=rank, return_fields=return_fields,
+                     size=size, start=start, facet=facet,
+                     facet_constraints=facet_constraints,
+                     facet_sort=facet_sort, facet_top_n=facet_top_n, t=t)
+
+    def search(self, q=None, bq=None, rank=None, return_fields=None,
+               size=10, start=0, facet=None, facet_constraints=None,
+               facet_sort=None, facet_top_n=None, t=None):
+        """
+        Query Cloudsearch
+
+        :type q: string
+        :param q: A free-text search query string.  By default all
+            text fields are searched.
+
+        :type bq: string
+        :param bq: A structured boolean query that can match against
+            specific fields.
+
+        :type rank: list
+        :param rank: Field names or rank expressions used to order the
+            results.  Prefix a name with ``-`` for descending order.
+
+        :type return_fields: list
+        :param return_fields: Names of the fields to include with each hit.
+
+        :type size: int
+        :param size: Maximum number of hits to return per page.
+
+        :type start: int
+        :param start: Zero-based offset of the first hit to return.
+
+        :type facet: list
+        :param facet: Names of the fields for which to compute facets.
+
+        :type facet_constraints: dict
+        :param facet_constraints: Maps a facet field name to the
+            constraints to count for that field.
+
+        :type facet_sort: dict
+        :param facet_sort: Maps a facet field name to the rule used to
+            sort its facet values.
+
+        :type facet_top_n: dict
+        :param facet_top_n: Maps a facet field name to the maximum
+            number of facet values to return for that field.
+
+        :type t: dict
+        :param t: Maps a field name to a value range used to filter
+            matches on that field (sent as ``t-FIELD`` parameters).
+
+        :rtype: :class:`boto.cloudsearch.search.SearchResults`
+        :return: A cloudsearch SearchResults object
+        """
+
+        query = self.build_query(q=q, bq=bq, rank=rank,
+                                 return_fields=return_fields,
+                                 size=size, start=start, facet=facet,
+                                 facet_constraints=facet_constraints,
+                                 facet_sort=facet_sort,
+                                 facet_top_n=facet_top_n, t=t)
+        return self(query)
+
+    def __call__(self, query):
+        """Make a call to CloudSearch
+
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A fully specified Query instance
+
+        :rtype: :class:`boto.cloudsearch.search.SearchResults`
+        :return: A cloudsearch SearchResults object
+        """
+        url = "http://%s/2011-02-01/search" % (self.endpoint)
+        params = query.to_params()
+
+        r = requests.get(url, params=params)
+        data = json.loads(r.content)
+        data['query'] = query
+        data['search_service'] = self
+
+        if 'messages' in data and 'error' in data:
+            for m in data['messages']:
+                if m['severity'] == 'fatal':
+                    raise SearchServiceException("Error processing search %s "
+                        "=> %s" % (params, m['message']), query)
+        elif 'error' in data:
+            raise SearchServiceException("Unknown error processing search %s"
+                % (params), query)
+
+        return SearchResults(**data)
+
+    def get_all_paged(self, query, per_page):
+        """Get a generator to iterate over all pages of search results
+
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A fully specified Query instance
+
+        :type per_page: int
+        :param per_page: Number of docs in each SearchResults object.
+
+        :rtype: generator
+        :return: Generator containing :class:`boto.cloudsearch.search.SearchResults`
+        """
+        query.update_size(per_page)
+        page = 0
+        num_pages_needed = 0
+        while page <= num_pages_needed:
+            results = self(query)
+            num_pages_needed = results.num_pages_needed
+            yield results
+            query.start += query.real_size
+            page += 1
+
+    def get_all_hits(self, query):
+        """Get a generator to iterate over all search results
+
+        Transparently handles the results paging from Cloudsearch
+        search results so even if you have many thousands of results
+        you can iterate over all results in a reasonably efficient
+        manner.
+
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A fully specified Query instance
+
+        :rtype: generator
+        :return: All docs matching query
+        """
+        page = 0
+        num_pages_needed = 0
+        while page <= num_pages_needed:
+            results = self(query)
+            num_pages_needed = results.num_pages_needed
+            for doc in results:
+                yield doc
+            query.start += query.real_size
+            page += 1
+
+    def get_num_hits(self, query):
+        """Return the total number of hits for query
+
+        :type query: :class:`boto.cloudsearch.search.Query`
+        :param query: A fully specified Query instance
+
+        :rtype: int
+        :return: Total number of hits for query
+        """
+        query.update_size(1)
+        return self(query).hits
+
+
+
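
SearchConnection talks directly to a domain's search endpoint over HTTP via the requests library. A hedged sketch of issuing a query and walking all hits (the endpoint string is a placeholder; normally it comes from Domain.search_service_endpoint, and the query text is illustrative):

    from boto.cloudsearch.search import SearchConnection

    endpoint = 'search-demo-xxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com'
    search = SearchConnection(endpoint=endpoint)

    results = search.search(q='star wars', return_fields=['title'], size=10)
    print('%d total hits' % results.hits)

    # get_all_hits transparently pages through the full result set.
    query = search.build_query(q='star wars', return_fields=['title'])
    for doc in search.get_all_hits(query):
        print(doc)   # each doc is a dict as returned by the search service
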
diff --git a/boto/cloudsearch/sourceattribute.py b/boto/cloudsearch/sourceattribute.py
new file mode 100644
index 0000000..c343507
--- /dev/null
+++ b/boto/cloudsearch/sourceattribute.py
@@ -0,0 +1,75 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+class SourceAttribute(object):
+    """
+    Provide information about attributes for an index field.
+    A maximum of 20 source attributes can be configured for
+    each index field.
+
+    :ivar default: Optional default value if the source attribute
+        is not specified in a document.
+        
+    :ivar name: The name of the document source field to add
+        to this ``IndexField``.
+
+    :ivar data_function: Identifies the transformation to apply
+        when copying data from a source attribute.
+        
+    :ivar data_map: The value is a dict with the following keys:
+        * cases - A dict that translates source field values
+          to custom values.
+        * default - An optional default value to use if the
+          source attribute is not specified in a document.
+        * name - The name of the document source field to add
+          to this ``IndexField``.
+    :ivar data_trim_title: Trims common title words from a source
+        document attribute when populating an ``IndexField``.
+        This can be used to create an ``IndexField`` you can
+        use for sorting.  The value is a dict with the following
+        fields:
+        * default - An optional default value.
+        * language - An IETF RFC 4646 language code.
+        * separator - The separator that follows the text to trim.
+        * name - The name of the document source field to add.
+    """
+
+    ValidDataFunctions = ('Copy', 'TrimTitle', 'Map')
+
+    def __init__(self):
+        self.data_copy = {}
+        self._data_function = self.ValidDataFunctions[0]
+        self.data_map = {}
+        self.data_trim_title = {}
+
+    @property
+    def data_function(self):
+        return self._data_function
+
+    @data_function.setter
+    def data_function(self, value):
+        if value not in self.ValidDataFunctions:
+            valid = '|'.join(self.ValidDataFunctions)
+            raise ValueError('data_function must be one of: %s' % valid)
+        self._data_function = value
+
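
SourceAttribute is a plain value object with a validating data_function property. A small illustration (the field names and option values are hypothetical):

    from boto.cloudsearch.sourceattribute import SourceAttribute

    attr = SourceAttribute()
    attr.name = 'headline'                # document source field to copy from
    attr.data_function = 'TrimTitle'      # must be one of Copy | TrimTitle | Map
    attr.data_trim_title = {'language': 'en', 'separator': ':'}

    try:
        attr.data_function = 'Reverse'    # rejected by the property setter
    except ValueError as e:
        print(e)
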
diff --git a/boto/connection.py b/boto/connection.py
index 3c9f237..080ff5e 100644
--- a/boto/connection.py
+++ b/boto/connection.py
@@ -1,4 +1,5 @@
-# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
 # Copyright (c) 2010 Google
 # Copyright (c) 2008 rPath, Inc.
 # Copyright (c) 2009 The Echo Nest Corporation
@@ -53,8 +54,10 @@
 import socket
 import sys
 import time
-import urllib, urlparse
+import urllib
+import urlparse
 import xml.sax
+import copy
 
 import auth
 import auth_handler
@@ -64,7 +67,8 @@
 import boto.cacerts
 
 from boto import config, UserAgent
-from boto.exception import AWSConnectionError, BotoClientError, BotoServerError
+from boto.exception import AWSConnectionError, BotoClientError
+from boto.exception import BotoServerError
 from boto.provider import Provider
 from boto.resultset import ResultSet
 
@@ -86,10 +90,11 @@
 ON_APP_ENGINE = all(key in os.environ for key in (
     'USER_IS_ADMIN', 'CURRENT_VERSION_ID', 'APPLICATION_ID'))
 
-PORTS_BY_SECURITY = { True: 443, False: 80 }
+PORTS_BY_SECURITY = {True: 443,
+                     False: 80}
 
-DEFAULT_CA_CERTS_FILE = os.path.join(
-        os.path.dirname(os.path.abspath(boto.cacerts.__file__ )), "cacerts.txt")
+DEFAULT_CA_CERTS_FILE = os.path.join(os.path.dirname(os.path.abspath(boto.cacerts.__file__)), "cacerts.txt")
+
 
 class HostConnectionPool(object):
 
@@ -127,7 +132,7 @@
         ready to be returned by get().
         """
         return len(self.queue)
-    
+
     def put(self, conn):
         """
         Adds a connection to the pool, along with the time it was
@@ -169,13 +174,13 @@
         state we care about isn't available in any public methods.
         """
         if ON_APP_ENGINE:
-            # Google App Engine implementation of HTTPConnection doesn't contain
+            # Google AppEngine implementation of HTTPConnection doesn't contain
             # _HTTPConnection__response attribute. Moreover, it's not possible
             # to determine if given connection is ready. Reusing connections
             # simply doesn't make sense with App Engine urlfetch service.
             return False
         else:
-            response = conn._HTTPConnection__response
+            response = getattr(conn, '_HTTPConnection__response', None)
             return (response is None) or response.isclosed()
 
     def clean(self):
@@ -196,6 +201,7 @@
         now = time.time()
         return return_time + ConnectionPool.STALE_DURATION < now
 
+
 class ConnectionPool(object):
 
     """
@@ -209,7 +215,7 @@
     #
     # The amout of time between calls to clean.
     #
-    
+
     CLEAN_INTERVAL = 5.0
 
     #
@@ -232,6 +238,18 @@
         # The last time the pool was cleaned.
         self.last_clean_time = 0.0
         self.mutex = threading.Lock()
+        ConnectionPool.STALE_DURATION = \
+            config.getfloat('Boto', 'connection_stale_duration',
+                            ConnectionPool.STALE_DURATION)
+
+    def __getstate__(self):
+        pickled_dict = copy.copy(self.__dict__)
+        pickled_dict['host_to_pool'] = {}
+        del pickled_dict['mutex']
+        return pickled_dict
+
+    def __setstate__(self, dct):
+        self.__init__()
 
     def size(self):
         """
@@ -242,7 +260,9 @@
     def get_http_connection(self, host, is_secure):
         """
         Gets a connection from the pool for the named host.  Returns
-        None if there is no connection that can be reused.
+        None if there is no connection that can be reused. It's the caller's
+        responsibility to call close() on the connection when it's no longer
+        needed.
         """
         self.clean()
         with self.mutex:
@@ -268,7 +288,7 @@
         get rid of empty pools.  Pools clean themselves every time a
         connection is fetched; this cleaning takes care of pools that
         aren't being used any more, so nothing is being gotten from
-        them. 
+        them.
         """
         with self.mutex:
             now = time.time()
@@ -282,6 +302,7 @@
                     del self.host_to_pool[host]
                 self.last_clean_time = now
 
+
 class HTTPRequest(object):
 
     def __init__(self, method, protocol, host, port, path, auth_path,
@@ -299,26 +320,26 @@
 
         :type port: int
         :param port: port on which the request is being sent. Zero means unset,
-                     in which case default port will be chosen.
+            in which case default port will be chosen.
 
         :type path: string
-        :param path: URL path that is bein accessed.
+        :param path: URL path that is being accessed.
 
         :type auth_path: string
         :param path: The part of the URL path used when creating the
-                     authentication string.
+            authentication string.
 
         :type params: dict
-        :param params: HTTP url query parameters, with key as name of the param,
-                       and value as value of param.
+        :param params: HTTP url query parameters, with key as name of
+            the param, and value as value of param.
 
         :type headers: dict
         :param headers: HTTP headers, with key as name of the header and value
-                        as value of header.
+            as value of header.
 
         :type body: string
         :param body: Body of the HTTP request. If not present, will be None or
-                     empty string ('').
+            empty string ('').
         """
         self.method = method
         self.protocol = protocol
@@ -356,17 +377,50 @@
         self.headers['User-Agent'] = UserAgent
         # I'm not sure if this is still needed, now that add_auth is
         # setting the content-length for POST requests.
-        if not self.headers.has_key('Content-Length'):
-            if not self.headers.has_key('Transfer-Encoding') or \
+        if 'Content-Length' not in self.headers:
+            if 'Transfer-Encoding' not in self.headers or \
                     self.headers['Transfer-Encoding'] != 'chunked':
                 self.headers['Content-Length'] = str(len(self.body))
 
+
+class HTTPResponse(httplib.HTTPResponse):
+
+    def __init__(self, *args, **kwargs):
+        httplib.HTTPResponse.__init__(self, *args, **kwargs)
+        self._cached_response = ''
+
+    def read(self, amt=None):
+        """Read the response.
+
+        This method does not have the same behavior as
+        httplib.HTTPResponse.read.  Instead, if this method is called with
+        no ``amt`` arg, then the response body will be cached.  Subsequent
+        calls to ``read()`` with no args **will return the cached response**.
+
+        """
+        if amt is None:
+            # The reason for doing this is that many places in boto call
+            # response.read() and expect to get the response body that they
+            # can then process.  To make sure this always works as they expect
+            # we're caching the response so that multiple calls to read()
+            # will return the full body.  Note that this behavior only
+            # happens if the amt arg is not specified.
+            if not self._cached_response:
+                self._cached_response = httplib.HTTPResponse.read(self)
+            return self._cached_response
+        else:
+            return httplib.HTTPResponse.read(self, amt)
+
+
 class AWSAuthConnection(object):
-    def __init__(self, host, aws_access_key_id=None, aws_secret_access_key=None,
+    def __init__(self, host, aws_access_key_id=None,
+                 aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, debug=0,
                  https_connection_factory=None, path='/',
-                 provider='aws', security_token=None):
+                 provider='aws', security_token=None,
+                 suppress_consec_slashes=True,
+                 validate_certs=True):
         """
         :type host: str
         :param host: The host to make the connection to
@@ -383,9 +437,8 @@
 
         :type https_connection_factory: list or tuple
         :param https_connection_factory: A pair of an HTTP connection
-                                         factory and the exceptions to catch.
-                                         The factory should have a similar
-                                         interface to L{httplib.HTTPSConnection}.
+            factory and the exceptions to catch.  The factory should have
+            a similar interface to L{httplib.HTTPSConnection}.
 
         :param str proxy: Address/hostname for a proxy server
 
@@ -400,16 +453,28 @@
 
         :type port: int
         :param port: The port to use to connect
+
+        :type suppress_consec_slashes: bool
+        :param suppress_consec_slashes: If provided, controls whether
+            consecutive slashes will be suppressed in key paths.
+
+        :type validate_certs: bool
+        :param validate_certs: Controls whether SSL certificates
+            will be validated or not.  Defaults to True.
         """
+        self.suppress_consec_slashes = suppress_consec_slashes
         self.num_retries = 6
         # Override passed-in is_secure setting if value was defined in config.
         if config.has_option('Boto', 'is_secure'):
             is_secure = config.getboolean('Boto', 'is_secure')
         self.is_secure = is_secure
-        # Whether or not to validate server certificates.  At some point in the
-        # future, the default should be flipped to true.
+        # Whether or not to validate server certificates.
+        # The default is now to validate certificates.  This can be
+        # overridden in the boto config file or by passing an
+        # explicit validate_certs parameter to the class constructor.
         self.https_validate_certificates = config.getbool(
-                'Boto', 'https_validate_certificates', False)
+            'Boto', 'https_validate_certificates',
+            validate_certs)
         if self.https_validate_certificates and not HAVE_HTTPS_CONNECTION:
             raise BotoClientError(
                     "SSL server certificate validation is enabled in boto "
@@ -426,7 +491,6 @@
         # define subclasses of the above that are not retryable.
         self.http_unretryable_exceptions = []
         if HAVE_HTTPS_CONNECTION:
-            self.http_unretryable_exceptions.append(ssl.SSLError)
             self.http_unretryable_exceptions.append(
                     https_connection.InvalidCertificateException)
 
@@ -443,10 +507,10 @@
             self.protocol = 'http'
         self.host = host
         self.path = path
-        if debug:
-            self.debug = debug
-        else:
-            self.debug = config.getint('Boto', 'debug', debug)
+        # If the value passed in for debug is not an int, treat it as 0
+        # (the config file setting, if present, still takes precedence).
+        if not isinstance(debug, (int, long)):
+            debug = 0
+        self.debug = config.getint('Boto', 'debug', debug)
         if port:
             self.port = port
         else:
@@ -463,10 +527,15 @@
                 timeout = config.getint('Boto', 'http_socket_timeout')
                 self.http_connection_kwargs['timeout'] = timeout
 
-        self.provider = Provider(provider,
-                                 aws_access_key_id,
-                                 aws_secret_access_key,
-                                 security_token)
+        if isinstance(provider, Provider):
+            # Allow overriding Provider
+            self.provider = provider
+        else:
+            self._provider_type = provider
+            self.provider = Provider(self._provider_type,
+                                     aws_access_key_id,
+                                     aws_secret_access_key,
+                                     security_token)
 
         # allow config file to override default host
         if self.provider.host:
@@ -501,6 +570,12 @@
     secret_key = aws_secret_access_key
 
     def get_path(self, path='/'):
+        # The default behavior is to suppress consecutive slashes for reasons
+        # discussed at
+        # https://groups.google.com/forum/#!topic/boto-dev/-ft0XPUy0y8
+        # You can override that behavior with the suppress_consec_slashes param.
+        if not self.suppress_consec_slashes:
+            return self.path + re.sub('^/*', "", path)
         pos = path.find('?')
         if pos >= 0:
             params = path[pos:]
@@ -546,7 +621,7 @@
         self.proxy_port = proxy_port
         self.proxy_user = proxy_user
         self.proxy_pass = proxy_pass
-        if os.environ.has_key('http_proxy') and not self.proxy:
+        if 'http_proxy' in os.environ and not self.proxy:
             pattern = re.compile(
                 '(?:http://)?' \
                 '(?:(?P<user>\w+):(?P<pass>.*)@)?' \
@@ -605,7 +680,13 @@
         else:
             boto.log.debug('establishing HTTP connection: kwargs=%s' %
                     self.http_connection_kwargs)
-            connection = httplib.HTTPConnection(host,
+            if self.https_connection_factory:
+                # Even though the factory is nominally for https connections,
+                # it is handy to let it override plain http connections too.
+                connection = self.https_connection_factory(host,
+                    **self.http_connection_kwargs)
+            else:
+                connection = httplib.HTTPConnection(host,
                     **self.http_connection_kwargs)
         if self.debug > 1:
             connection.set_debuglevel(self.debug)
@@ -614,6 +695,9 @@
         # set a private variable which will enable that
         if host.split(':')[0] == self.host and is_secure == self.is_secure:
             self._connection = (host, is_secure)
+        # Set the response class of the http connection to use our custom
+        # class.
+        connection.response_class = HTTPResponse
         return connection
 
     def put_http_connection(self, host, is_secure, connection):
@@ -632,7 +716,12 @@
         if self.proxy_user and self.proxy_pass:
             for k, v in self.get_proxy_auth_header().items():
                 sock.sendall("%s: %s\r\n" % (k, v))
-        sock.sendall("\r\n")
+            # See discussion about this config option at
+            # https://groups.google.com/forum/?fromgroups#!topic/boto-dev/teenFvOq2Cc
+            if config.getbool('Boto', 'send_crlf_after_proxy_auth_headers', False):
+                sock.sendall("\r\n")
+        else:
+            sock.sendall("\r\n")
         resp = httplib.HTTPResponse(sock, strict=True, debuglevel=self.debug)
         resp.begin()
 
@@ -641,7 +730,8 @@
             # been generated by the socket library
             raise socket.error(-71,
                                "Error talking to HTTP proxy %s:%s: %s (%s)" %
-                               (self.proxy, self.proxy_port, resp.status, resp.reason))
+                               (self.proxy, self.proxy_port,
+                                resp.status, resp.reason))
 
         # We can safely close the response, it duped the original socket
         resp.close()
@@ -683,7 +773,8 @@
         auth = base64.encodestring(self.proxy_user + ':' + self.proxy_pass)
         return {'Proxy-Authorization': 'Basic %s' % auth}
 
-    def _mexe(self, request, sender=None, override_num_retries=None):
+    def _mexe(self, request, sender=None, override_num_retries=None,
+              retry_handler=None):
         """
         mexe - Multi-execute inside a loop, retrying multiple times to handle
                transient Internet errors by simply trying again.
@@ -691,6 +782,7 @@
 
         This code was inspired by the S3Utils classes posted to the boto-users
         Google group by Larry Bates.  Thanks!
+
         """
         boto.log.debug('Method: %s' % request.method)
         boto.log.debug('Path: %s' % request.path)
@@ -711,35 +803,51 @@
             next_sleep = random.random() * (2 ** i)
             try:
                 # we now re-sign each request before it is retried
+                boto.log.debug('Token: %s' % self.provider.security_token)
                 request.authorize(connection=self)
                 if callable(sender):
                     response = sender(connection, request.method, request.path,
                                       request.body, request.headers)
                 else:
-                    connection.request(request.method, request.path, request.body,
-                                       request.headers)
+                    connection.request(request.method, request.path,
+                                       request.body, request.headers)
                     response = connection.getresponse()
                 location = response.getheader('location')
                 # -- gross hack --
                 # httplib gets confused with chunked responses to HEAD requests
                 # so I have to fake it out
-                if request.method == 'HEAD' and getattr(response, 'chunked', False):
+                if request.method == 'HEAD' and getattr(response,
+                                                        'chunked', False):
                     response.chunked = 0
+                if callable(retry_handler):
+                    status = retry_handler(response, i, next_sleep)
+                    if status:
+                        msg, i, next_sleep = status
+                        if msg:
+                            boto.log.debug(msg)
+                        time.sleep(next_sleep)
+                        continue
                 if response.status == 500 or response.status == 503:
-                    boto.log.debug('received %d response, retrying in %3.1f seconds' %
-                                   (response.status, next_sleep))
+                    msg = 'Received %d response.  ' % response.status
+                    msg += 'Retrying in %3.1f seconds' % next_sleep
+                    boto.log.debug(msg)
                     body = response.read()
                 elif response.status < 300 or response.status >= 400 or \
                         not location:
-                    self.put_http_connection(request.host, self.is_secure, connection)
+                    self.put_http_connection(request.host, self.is_secure,
+                                             connection)
                     return response
                 else:
-                    scheme, request.host, request.path, params, query, fragment = \
-                            urlparse.urlparse(location)
+                    scheme, request.host, request.path, \
+                        params, query, fragment = urlparse.urlparse(location)
                     if query:
                         request.path += '?' + query
-                    boto.log.debug('Redirecting: %s' % scheme + '://' + request.host + request.path)
-                    connection = self.get_http_connection(request.host, scheme == 'https')
+                    msg = 'Redirecting: %s' % scheme + '://'
+                    msg += request.host + request.path
+                    boto.log.debug(msg)
+                    connection = self.get_http_connection(request.host,
+                                                          scheme == 'https')
+                    response = None
                     continue
             except self.http_exceptions, e:
                 for unretryable in self.http_unretryable_exceptions:
@@ -750,18 +858,21 @@
                         raise e
                 boto.log.debug('encountered %s exception, reconnecting' % \
                                   e.__class__.__name__)
-                connection = self.new_http_connection(request.host, self.is_secure)
+                connection = self.new_http_connection(request.host,
+                                                      self.is_secure)
             time.sleep(next_sleep)
             i += 1
-        # If we made it here, it's because we have exhausted our retries and stil haven't
-        # succeeded.  So, if we have a response object, use it to raise an exception.
-        # Otherwise, raise the exception that must have already happened.
+        # If we made it here, it's because we have exhausted our retries
+        # and still haven't succeeded.  So, if we have a response object,
+        # use it to raise an exception.
+        # Otherwise, raise the exception that must have already happened.
         if response:
             raise BotoServerError(response.status, response.reason, body)
         elif e:
             raise e
         else:
-            raise BotoClientError('Please report this exception as a Boto Issue!')
+            msg = 'Please report this exception as a Boto Issue!'
+            raise BotoClientError(msg)
 
     def build_base_http_request(self, method, path, auth_path,
                                 params=None, headers=None, data='', host=None):
@@ -789,10 +900,13 @@
                            path, auth_path, params, headers, data)
 
     def make_request(self, method, path, headers=None, data='', host=None,
-                     auth_path=None, sender=None, override_num_retries=None):
+                     auth_path=None, sender=None, override_num_retries=None,
+                     params=None):
         """Makes a request to the server, with stock multiple-retry logic."""
+        if params is None:
+            params = {}
         http_request = self.build_base_http_request(method, path, auth_path,
-                                                    {}, headers, data, host)
+                                                    params, headers, data, host)
         return self._mexe(http_request, sender, override_num_retries)
 
     def close(self):
@@ -800,7 +914,8 @@
         and making a new request will open a connection again."""
 
         boto.log.debug('closing all HTTP connections')
-        self.connection = None  # compat field
+        self._connection = None  # compat field
+
 
 class AWSQueryConnection(AWSAuthConnection):
 
@@ -810,13 +925,15 @@
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, host=None, debug=0,
-                 https_connection_factory=None, path='/', security_token=None):
+                 https_connection_factory=None, path='/', security_token=None,
+                 validate_certs=True):
         AWSAuthConnection.__init__(self, host, aws_access_key_id,
                                    aws_secret_access_key,
                                    is_secure, port, proxy,
                                    proxy_port, proxy_user, proxy_pass,
                                    debug, https_connection_factory, path,
-                                   security_token=security_token)
+                                   security_token=security_token,
+                                   validate_certs=validate_certs)
 
     def _required_auth_capability(self):
         return []
@@ -830,7 +947,8 @@
                                                     self.server_name())
         if action:
             http_request.params['Action'] = action
-        http_request.params['Version'] = self.APIVersion
+        if self.APIVersion:
+            http_request.params['Version'] = self.APIVersion
         return self._mexe(http_request)
 
     def build_list_params(self, params, items, label):
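
The new HTTPResponse subclass introduced above caches the body on the first argument-less read(), so code that calls read() more than once still sees the full response. A quick sketch of that behavior, wiring the class onto a raw httplib connection the same way new_http_connection now does (the host is just an example):

    import httplib
    from boto.connection import HTTPResponse

    conn = httplib.HTTPConnection('www.example.com')
    conn.response_class = HTTPResponse   # same wiring new_http_connection performs
    conn.request('GET', '/')
    resp = conn.getresponse()
    first = resp.read()    # reads and caches the full body
    again = resp.read()    # returns the cached body instead of ''
    assert first == again
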
diff --git a/boto/core/README b/boto/core/README
new file mode 100644
index 0000000..9c3f217
--- /dev/null
+++ b/boto/core/README
@@ -0,0 +1,58 @@
+What's This All About?
+======================
+
+This directory contains the beginnings of what is hoped will be the
+new core of boto.  We want to move from using httplib to using
+requests.  We also want to offer full support for Python 2.6, 2.7, and
+3.x.  This is a pretty big change and will require some time to roll
+out but this module provides a starting point.
+
+What you will find in this module:
+
+* auth.py provides a SigV2 authentication package as an args hook for requests.
+* credentials.py provides a way of finding AWS credentials (see below).
+* dictresponse.py provides a generic response handler that parses XML responses
+  and returns them as nested Python data structures.
+* service.py provides a simple example of a service that actually makes an EC2
+  request and returns a response.
+
+Credentials
+===========
+
+Credentials are being handled a bit differently here.  The following
+describes the order of search for credentials:
+
+1. If your local environment has ACCESS_KEY and SECRET_KEY variables
+   defined, these will be used.
+
+2. If your local environment has AWS_CREDENTIAL_FILE defined, it is assumed
+   that it will be a config file with entries like this:
+
+   [default]
+   access_key = xxxxxxxxxxxxxxxx
+   secret_key = xxxxxxxxxxxxxxxxxx
+
+   [test]
+   access_key = yyyyyyyyyyyyyy
+   secret_key = yyyyyyyyyyyyyyyyyy
+
+   Each section in the config file is called a persona and you can reference
+   a particular persona by name when instantiating a Service class.
+
+3. If a standard boto config file is found that contains credentials, those
+   will be used.
+
+4. If temporary credentials for an IAM Role are found in the instance
+   metadata of an EC2 instance, these credentials will be used.
+
+Trying Things Out
+=================
+To try this code out, cd to the directory containing the core module.
+
+    >>> import core.service
+    >>> s = core.service.Service()
+    >>> s.describe_instances()
+
+This code should return a Python data structure containing information
+about your currently running EC2 instances.  This example should run in
+Python 2.6.x, 2.7.x and Python 3.x.
\ No newline at end of file
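
The credential search order described in the README can be pictured with a rough sketch of its first two steps. This is illustrative only; boto.core.credentials is the actual implementation and may differ in detail:

    import os
    import ConfigParser

    def find_credentials(persona='default'):
        # Step 1: explicit environment variables win.
        if 'ACCESS_KEY' in os.environ and 'SECRET_KEY' in os.environ:
            return os.environ['ACCESS_KEY'], os.environ['SECRET_KEY']
        # Step 2: a persona section in the AWS_CREDENTIAL_FILE config file.
        path = os.environ.get('AWS_CREDENTIAL_FILE')
        if path:
            cfg = ConfigParser.SafeConfigParser()
            cfg.read(os.path.expanduser(path))
            return (cfg.get(persona, 'access_key'),
                    cfg.get(persona, 'secret_key'))
        return None, None
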
diff --git a/boto/core/__init__.py b/boto/core/__init__.py
new file mode 100644
index 0000000..e27666d
--- /dev/null
+++ b/boto/core/__init__.py
@@ -0,0 +1,23 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
diff --git a/boto/core/auth.py b/boto/core/auth.py
new file mode 100644
index 0000000..890faa5
--- /dev/null
+++ b/boto/core/auth.py
@@ -0,0 +1,78 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import requests.packages.urllib3
+import hmac
+import base64
+from hashlib import sha256
+import sys
+import datetime
+
+try:
+    from urllib.parse import quote
+except ImportError:
+    from urllib import quote
+
+
+class SigV2Auth(object):
+    """
+    Sign a Query Signature V2 request.
+    """
+    def __init__(self, credentials, api_version=''):
+        self.credentials = credentials
+        self.api_version = api_version
+        self.hmac = hmac.new(self.credentials.secret_key.encode('utf-8'),
+                             digestmod=sha256)
+
+    def calc_signature(self, args):
+        scheme, host, port = requests.packages.urllib3.get_host(args['url'])
+        string_to_sign = '%s\n%s\n%s\n' % (args['method'], host, '/')
+        hmac = self.hmac.copy()
+        args['params']['SignatureMethod'] = 'HmacSHA256'
+        if self.credentials.token:
+            args['params']['SecurityToken'] = self.credentials.token
+        sorted_params = sorted(args['params'])
+        pairs = []
+        for key in sorted_params:
+            value = args['params'][key]
+            pairs.append(quote(key, safe='') + '=' +
+                         quote(value, safe='-_~'))
+        qs = '&'.join(pairs)
+        string_to_sign += qs
+        print('string_to_sign')
+        print(string_to_sign)
+        hmac.update(string_to_sign.encode('utf-8'))
+        b64 = base64.b64encode(hmac.digest()).strip().decode('utf-8')
+        return (qs, b64)
+
+    def add_auth(self, args):
+        args['params']['Action'] = 'DescribeInstances'
+        args['params']['AWSAccessKeyId'] = self.credentials.access_key
+        args['params']['SignatureVersion'] = '2'
+        args['params']['Timestamp'] = datetime.datetime.utcnow().isoformat()
+        args['params']['Version'] = self.api_version
+        qs, signature = self.calc_signature(args)
+        args['params']['Signature'] = signature
+        if args['method'] == 'POST':
+            args['data'] = args['params']
+            args['params'] = {}
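+
+
+# Usage sketch: the signer is meant to be attached as a requests 'args'
+# hook (this mirrors how boto.core.service.Service wires it up), so that
+# add_auth can rewrite the request parameters before the request is sent:
+#
+#     auth = SigV2Auth(credentials, api_version='2012-03-01')
+#     requests.post('https://ec2.us-east-1.amazonaws.com', params=params,
+#                   hooks={'args': auth.add_auth})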
diff --git a/boto/core/credentials.py b/boto/core/credentials.py
new file mode 100644
index 0000000..1f315a3
--- /dev/null
+++ b/boto/core/credentials.py
@@ -0,0 +1,154 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import os
+from six.moves import configparser
+import requests
+import json
+
+
+class Credentials(object):
+    """
+    Holds the credentials needed to authenticate requests.  In addition,
+    the Credentials object knows how to search for credentials and how
+    to choose the right credentials when multiple credentials are found.
+    """
+
+    def __init__(self, access_key=None, secret_key=None, token=None):
+        self.access_key = access_key
+        self.secret_key = secret_key
+        self.token = token
+
+
+def _search_md(url='http://169.254.169.254/latest/meta-data/iam/'):
+    d = {}
+    try:
+        r = requests.get(url, timeout=.1)
+        if r.content:
+            fields = r.content.split('\n')
+            for field in fields:
+                if field.endswith('/'):
+                    d[field[0:-1]] = get_iam_role(url + field)
+                else:
+                    val = requests.get(url + field).content
+                    if val[0] == '{':
+                        val = json.loads(val)
+                    else:
+                        p = val.find('\n')
+                        if p > 0:
+                            val = r.content.split('\n')
+                    d[field] = val
+    except (requests.Timeout, requests.ConnectionError):
+        pass
+    return d
+
+
+def search_metadata(**kwargs):
+    credentials = None
+    metadata = _search_md()
+    # Assuming there's only one role on the instance profile.
+    if metadata:
+        metadata = metadata['iam']['security-credentials'].values()[0]
+        credentials = Credentials(metadata['AccessKeyId'],
+                                  metadata['SecretAccessKey'],
+                                  metadata['Token'])
+    return credentials
+
+
+def search_environment(**kwargs):
+    """
+    Search for credentials in explicit environment variables.
+    """
+    credentials = None
+    access_key = os.environ.get(kwargs['access_key_name'].upper(), None)
+    secret_key = os.environ.get(kwargs['secret_key_name'].upper(), None)
+    if access_key and secret_key:
+        credentials = Credentials(access_key, secret_key)
+    return credentials
+
+
+def search_file(**kwargs):
+    """
+    If the 'AWS_CREDENTIAL_FILE' environment variable exists, parse that
+    file for credentials.
+    """
+    credentials = None
+    if 'AWS_CREDENTIAL_FILE' in os.environ:
+        persona = kwargs.get('persona', 'default')
+        access_key_name = kwargs['access_key_name']
+        secret_key_name = kwargs['secret_key_name']
+        access_key = secret_key = None
+        path = os.getenv('AWS_CREDENTIAL_FILE')
+        path = os.path.expandvars(path)
+        path = os.path.expanduser(path)
+        cp = configparser.RawConfigParser()
+        cp.read(path)
+        if not cp.has_section(persona):
+            raise ValueError('Persona: %s not found' % persona)
+        if cp.has_option(persona, access_key_name):
+            access_key = cp.get(persona, access_key_name)
+        else:
+            access_key = None
+        if cp.has_option(persona, secret_key_name):
+            secret_key = cp.get(persona, secret_key_name)
+        else:
+            secret_key = None
+        if access_key and secret_key:
+            credentials = Credentials(access_key, secret_key)
+    return credentials
+
+
+def search_boto_config(**kwargs):
+    """
+    Look for credentials in boto config file.
+    """
+    credentials = access_key = secret_key = None
+    if 'BOTO_CONFIG' in os.environ:
+        paths = [os.environ['BOTO_CONFIG']]
+    else:
+        paths = ['/etc/boto.cfg', '~/.boto']
+    paths = [os.path.expandvars(p) for p in paths]
+    paths = [os.path.expanduser(p) for p in paths]
+    cp = configparser.RawConfigParser()
+    cp.read(paths)
+    if cp.has_section('Credentials'):
+        access_key = cp.get('Credentials', 'aws_access_key_id')
+        secret_key = cp.get('Credentials', 'aws_secret_access_key')
+    if access_key and secret_key:
+        credentials = Credentials(access_key, secret_key)
+    return credentials
+
+AllCredentialFunctions = [search_environment,
+                          search_file,
+                          search_boto_config,
+                          search_metadata]
+
+
+def get_credentials(persona='default'):
+    for cred_fn in AllCredentialFunctions:
+        credentials = cred_fn(persona=persona,
+                              access_key_name='access_key',
+                              secret_key_name='secret_key')
+        if credentials:
+            break
+    return credentials
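+
+
+# Usage sketch: resolve credentials via the search order implemented above
+# (environment, AWS_CREDENTIAL_FILE, boto config, instance metadata),
+# assuming the module is importable as core.credentials per the README:
+#
+#     from core.credentials import get_credentials
+#     creds = get_credentials(persona='default')
+#     print(creds.access_key, creds.secret_key)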
diff --git a/boto/core/dictresponse.py b/boto/core/dictresponse.py
new file mode 100644
index 0000000..3518834
--- /dev/null
+++ b/boto/core/dictresponse.py
@@ -0,0 +1,178 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import xml.sax
+
+
+def pythonize_name(name, sep='_'):
+    s = ''
+    if name[0].isupper():
+        s = name[0].lower()
+    for c in name[1:]:
+        if c.isupper():
+            s += sep + c.lower()
+        else:
+            s += c
+    return s
+
+
+class XmlHandler(xml.sax.ContentHandler):
+
+    def __init__(self, root_node, connection):
+        self.connection = connection
+        self.nodes = [('root', root_node)]
+        self.current_text = ''
+
+    def startElement(self, name, attrs):
+        self.current_text = ''
+        t = self.nodes[-1][1].startElement(name, attrs, self.connection)
+        if t is not None:
+            if isinstance(t, tuple):
+                self.nodes.append(t)
+            else:
+                self.nodes.append((name, t))
+
+    def endElement(self, name):
+        self.nodes[-1][1].endElement(name, self.current_text, self.connection)
+        if self.nodes[-1][0] == name:
+            self.nodes.pop()
+        self.current_text = ''
+
+    def characters(self, content):
+        self.current_text += content
+
+    def parse(self, s):
+        xml.sax.parseString(s, self)
+
+
+class Element(dict):
+
+    def __init__(self, connection=None, element_name=None,
+                 stack=None, parent=None, list_marker=None,
+                 item_marker=None, pythonize_name=False):
+        dict.__init__(self)
+        self.connection = connection
+        self.element_name = element_name
+        self.list_marker = list_marker or ['Set']
+        self.item_marker = item_marker or ['member', 'item']
+        if stack is None:
+            self.stack = []
+        else:
+            self.stack = stack
+        self.pythonize_name = pythonize_name
+        self.parent = parent
+
+    def __getattr__(self, key):
+        if key in self:
+            return self[key]
+        for k in self:
+            e = self[k]
+            if isinstance(e, Element):
+                try:
+                    return getattr(e, key)
+                except AttributeError:
+                    pass
+        raise AttributeError
+
+    def get_name(self, name):
+        if self.pythonize_name:
+            name = pythonize_name(name)
+        return name
+
+    def startElement(self, name, attrs, connection):
+        self.stack.append(name)
+        for lm in self.list_marker:
+            if name.endswith(lm):
+                l = ListElement(self.connection, name, self.list_marker,
+                                self.item_marker, self.pythonize_name)
+                self[self.get_name(name)] = l
+                return l
+        if len(self.stack) > 0:
+            element_name = self.stack[-1]
+            e = Element(self.connection, element_name, self.stack, self,
+                        self.list_marker, self.item_marker,
+                        self.pythonize_name)
+            self[self.get_name(element_name)] = e
+            return (element_name, e)
+        else:
+            return None
+
+    def endElement(self, name, value, connection):
+        if len(self.stack) > 0:
+            self.stack.pop()
+        value = value.strip()
+        if value:
+            if isinstance(self.parent, Element):
+                self.parent[self.get_name(name)] = value
+            elif isinstance(self.parent, ListElement):
+                self.parent.append(value)
+
+
+class ListElement(list):
+
+    def __init__(self, connection=None, element_name=None,
+                 list_marker=['Set'], item_marker=('member', 'item'),
+                 pythonize_name=False):
+        list.__init__(self)
+        self.connection = connection
+        self.element_name = element_name
+        self.list_marker = list_marker
+        self.item_marker = item_marker
+        self.pythonize_name = pythonize_name
+
+    def get_name(self, name):
+        if self.pythonize_name:
+            name = pythonize_name(name)
+        return name
+
+    def startElement(self, name, attrs, connection):
+        for lm in self.list_marker:
+            if name.endswith(lm):
+                l = ListElement(self.connection, name,
+                                self.list_marker, self.item_marker,
+                                self.pythonize_name)
+                setattr(self, self.get_name(name), l)
+                return l
+        if name in self.item_marker:
+            e = Element(self.connection, name, parent=self,
+                        list_marker=self.list_marker,
+                        item_marker=self.item_marker,
+                        pythonize_name=self.pythonize_name)
+            self.append(e)
+            return e
+        else:
+            return None
+
+    def endElement(self, name, value, connection):
+        if name == self.element_name:
+            if len(self) > 0:
+                empty = []
+                for e in self:
+                    if isinstance(e, Element):
+                        if len(e) == 0:
+                            empty.append(e)
+                for e in empty:
+                    self.remove(e)
+        else:
+            setattr(self, self.get_name(name), value)
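+
+
+# Usage sketch (mirrors Service.get_response): parse an XML response body
+# into a nested Element/ListElement structure:
+#
+#     root = Element(list_marker=['Set'], pythonize_name=True)
+#     XmlHandler(root, None).parse(xml_body)
+#
+# 'root' then behaves like a dict-of-dicts; attribute access falls through
+# to nested Elements via __getattr__.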
diff --git a/boto/core/service.py b/boto/core/service.py
new file mode 100644
index 0000000..53c53c5
--- /dev/null
+++ b/boto/core/service.py
@@ -0,0 +1,67 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import requests
+from .auth import SigV2Auth
+from .credentials import get_credentials
+from .dictresponse import Element, XmlHandler
+
+
+class Service(object):
+    """
+    This is a simple example service that connects to the EC2 endpoint
+    and supports a single request (DescribeInstances) to show how to
+    use the requests-based code rather than the standard boto code which
+    is based on httplib.  At the moment, the only auth mechanism
+    supported is SigV2.
+    """
+
+    def __init__(self, host='https://ec2.us-east-1.amazonaws.com',
+                 path='/', api_version='2012-03-01', persona=None):
+        self.credentials = get_credentials(persona)
+        self.auth = SigV2Auth(self.credentials, api_version=api_version)
+        self.host = host
+        self.path = path
+
+    def get_response(self, params, list_marker=None):
+        r = requests.post(self.host, params=params,
+                          hooks={'args': self.auth.add_auth})
+        r.encoding = 'utf-8'
+        body = r.text.encode('utf-8')
+        e = Element(list_marker=list_marker, pythonize_name=True)
+        h = XmlHandler(e, self)
+        h.parse(body)
+        return e
+
+    def build_list_params(self, params, items, label):
+        if isinstance(items, str):
+            items = [items]
+        for i in range(1, len(items) + 1):
+            params['%s.%d' % (label, i)] = items[i - 1]
+
+    def describe_instances(self, instance_ids=None):
+        params = {}
+        if instance_ids:
+            self.build_list_params(params, instance_ids, 'InstanceId')
+        return self.get_response(params)
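+
+
+# Usage sketch: describe a specific instance (the instance id below is
+# hypothetical); credentials are resolved through core.credentials:
+#
+#     s = Service()
+#     result = s.describe_instances(instance_ids=['i-12345678'])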
diff --git a/boto/dynamodb/__init__.py b/boto/dynamodb/__init__.py
new file mode 100644
index 0000000..c60b5c3
--- /dev/null
+++ b/boto/dynamodb/__init__.py
@@ -0,0 +1,60 @@
+# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from boto.regioninfo import RegionInfo
+
+
+def regions():
+    """
+    Get all available regions for the Amazon DynamoDB service.
+
+    :rtype: list
+    :return: A list of :class:`boto.regioninfo.RegionInfo`
+    """
+    import boto.dynamodb.layer2
+    return [RegionInfo(name='us-east-1',
+                       endpoint='dynamodb.us-east-1.amazonaws.com',
+                       connection_cls=boto.dynamodb.layer2.Layer2),
+            RegionInfo(name='us-west-1',
+                       endpoint='dynamodb.us-west-1.amazonaws.com',
+                       connection_cls=boto.dynamodb.layer2.Layer2),
+            RegionInfo(name='us-west-2',
+                       endpoint='dynamodb.us-west-2.amazonaws.com',
+                       connection_cls=boto.dynamodb.layer2.Layer2),
+            RegionInfo(name='ap-northeast-1',
+                       endpoint='dynamodb.ap-northeast-1.amazonaws.com',
+                       connection_cls=boto.dynamodb.layer2.Layer2),
+            RegionInfo(name='ap-southeast-1',
+                       endpoint='dynamodb.ap-southeast-1.amazonaws.com',
+                       connection_cls=boto.dynamodb.layer2.Layer2),
+            RegionInfo(name='eu-west-1',
+                       endpoint='dynamodb.eu-west-1.amazonaws.com',
+                       connection_cls=boto.dynamodb.layer2.Layer2),
+            ]
+
+
+def connect_to_region(region_name, **kw_params):
+    for region in regions():
+        if region.name == region_name:
+            return region.connect(**kw_params)
+    return None
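+
+
+# Usage sketch: connect_to_region returns a Layer2 connection for the
+# matching region, or None if the name is not recognized:
+#
+#     import boto.dynamodb
+#     conn = boto.dynamodb.connect_to_region('us-east-1')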
diff --git a/boto/dynamodb/batch.py b/boto/dynamodb/batch.py
new file mode 100644
index 0000000..87c84fc
--- /dev/null
+++ b/boto/dynamodb/batch.py
@@ -0,0 +1,249 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+
+class Batch(object):
+    """
+    Used to construct a BatchGet request.
+
+    :ivar table: The Table object from which the item is retrieved.
+
+    :ivar keys: A list of scalar or tuple values.  Each element in the
+        list represents one Item to retrieve.  If the schema for the
+        table has both a HashKey and a RangeKey, each element in the
+        list should be a tuple consisting of (hash_key, range_key).  If
+        the schema for the table contains only a HashKey, each element
+        in the list should be a scalar value of the appropriate type
+        for the table schema. NOTE: The maximum number of items that
+        can be retrieved for a single operation is 100. Also, the
+        number of items retrieved is constrained by a 1 MB size limit.
+
+    :ivar attributes_to_get: A list of attribute names.
+        If supplied, only the specified attribute names will
+        be returned.  Otherwise, all attributes will be returned.
+    """
+
+    def __init__(self, table, keys, attributes_to_get=None):
+        self.table = table
+        self.keys = keys
+        self.attributes_to_get = attributes_to_get
+
+    def to_dict(self):
+        """
+        Convert the Batch object into the format required for Layer1.
+        """
+        batch_dict = {}
+        key_list = []
+        for key in self.keys:
+            if isinstance(key, tuple):
+                hash_key, range_key = key
+            else:
+                hash_key = key
+                range_key = None
+            k = self.table.layer2.build_key_from_values(self.table.schema,
+                                                        hash_key, range_key)
+            key_list.append(k)
+        batch_dict['Keys'] = key_list
+        if self.attributes_to_get:
+            batch_dict['AttributesToGet'] = self.attributes_to_get
+        return batch_dict
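+
+    # Sketch of the structure produced by to_dict() for a hash+range
+    # table; the key encoding comes from layer2.build_key_from_values and
+    # is shown here only as an illustration of the 2011-12-05 API format:
+    #
+    #     {'Keys': [{'HashKeyElement': {'S': 'mykey'},
+    #                'RangeKeyElement': {'N': '1'}}],
+    #      'AttributesToGet': ['foo', 'bar']}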
+
+class BatchWrite(object):
+    """
+    Used to construct a BatchWrite request.  Each BatchWrite object
+    represents a collection of PutItem and DeleteItem requests for
+    a single Table.
+
+    :ivar table: The Table object from which the item is retrieved.
+
+    :ivar puts: A list of :class:`boto.dynamodb.item.Item` objects
+        that you want to write to DynamoDB.
+
+    :ivar deletes: A list of scalar or tuple values.  Each element in the
+        list represents one Item to delete.  If the schema for the
+        table has both a HashKey and a RangeKey, each element in the
+        list should be a tuple consisting of (hash_key, range_key).  If
+        the schema for the table contains only a HashKey, each element
+        in the list should be a scalar value of the appropriate type
+        for the table schema.
+    """
+
+    def __init__(self, table, puts=None, deletes=None):
+        self.table = table
+        self.puts = puts or []
+        self.deletes = deletes or []
+
+    def to_dict(self):
+        """
+        Convert the Batch object into the format required for Layer1.
+        """
+        op_list = []
+        for item in self.puts:
+            d = {'Item': self.table.layer2.dynamize_item(item)}
+            d = {'PutRequest': d}
+            op_list.append(d)
+        for key in self.deletes:
+            if isinstance(key, tuple):
+                hash_key, range_key = key
+            else:
+                hash_key = key
+                range_key = None
+            k = self.table.layer2.build_key_from_values(self.table.schema,
+                                                        hash_key, range_key)
+            d = {'Key': k}
+            op_list.append({'DeleteRequest': d})
+        return (self.table.name, op_list)
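+
+    # Sketch of the op_list returned above: a mix of put and delete
+    # requests, e.g.
+    #
+    #     [{'PutRequest': {'Item': {...}}},
+    #      {'DeleteRequest': {'Key': {...}}}]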
+
+
+class BatchList(list):
+    """
+    A subclass of a list object that contains a collection of
+    :class:`boto.dynamodb.batch.Batch` objects.
+    """
+
+    def __init__(self, layer2):
+        list.__init__(self)
+        self.unprocessed = None
+        self.layer2 = layer2
+
+    def add_batch(self, table, keys, attributes_to_get=None):
+        """
+        Add a Batch to this BatchList.
+
+        :type table: :class:`boto.dynamodb.table.Table`
+        :param table: The Table object in which the items are contained.
+
+        :type keys: list
+        :param keys: A list of scalar or tuple values.  Each element in the
+            list represents one Item to retrieve.  If the schema for the
+            table has both a HashKey and a RangeKey, each element in the
+            list should be a tuple consisting of (hash_key, range_key).  If
+            the schema for the table contains only a HashKey, each element
+            in the list should be a scalar value of the appropriate type
+            for the table schema. NOTE: The maximum number of items that
+            can be retrieved for a single operation is 100. Also, the
+            number of items retrieved is constrained by a 1 MB size limit.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+        """
+        self.append(Batch(table, keys, attributes_to_get))
+
+    def resubmit(self):
+        """
+        Resubmit the batch to get the next result set. The request object is
+        rebuilt from scratch, meaning that any batches added between ``submit``
+        and ``resubmit`` will be lost.
+
+        Note: This method is experimental and subject to change in future releases.
+        """
+        del self[:]
+
+        if not self.unprocessed:
+            return None
+
+        for table_name, table_req in self.unprocessed.iteritems():
+            table_keys = table_req['Keys']
+            table = self.layer2.get_table(table_name)
+
+            keys = []
+            for key in table_keys:
+                h = key['HashKeyElement']
+                r = None
+                if 'RangeKeyElement' in key:
+                    r = key['RangeKeyElement']
+                keys.append((h, r))
+
+            attributes_to_get = None
+            if 'AttributesToGet' in table_req:
+                attributes_to_get = table_req['AttributesToGet']
+
+            self.add_batch(table, keys, attributes_to_get=attributes_to_get)
+
+        return self.submit()
+
+
+    def submit(self):
+        res = self.layer2.batch_get_item(self)
+        if 'UnprocessedKeys' in res:
+            self.unprocessed = res['UnprocessedKeys']
+        return res
+
+    def to_dict(self):
+        """
+        Convert a BatchList object into the format required for Layer1.
+        """
+        d = {}
+        for batch in self:
+            b = batch.to_dict()
+            if b['Keys']:
+                d[batch.table.name] = b
+        return d
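+
+    # Usage sketch: build a list of get requests and submit them.  The
+    # 'layer2' and 'table' objects below are assumed to already exist.
+    #
+    #     batch_list = BatchList(layer2)
+    #     batch_list.add_batch(table, keys=[('hash1', 'range1')])
+    #     result = batch_list.submit()
+    #     if batch_list.unprocessed:
+    #         result = batch_list.resubmit()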
+
+class BatchWriteList(list):
+    """
+    A subclass of a list object that contains a collection of
+    :class:`boto.dynamodb.batch.BatchWrite` objects.
+    """
+
+    def __init__(self, layer2):
+        list.__init__(self)
+        self.layer2 = layer2
+
+    def add_batch(self, table, puts=None, deletes=None):
+        """
+        Add a BatchWrite to this BatchWriteList.
+
+        :type table: :class:`boto.dynamodb.table.Table`
+        :param table: The Table object in which the items are contained.
+
+        :type puts: list of :class:`boto.dynamodb.item.Item` objects
+        :param puts: A list of items that you want to write to DynamoDB.
+
+        :type deletes: A list
+        :param deletes: A list of scalar or tuple values.  Each element
+            in the list represents one Item to delete.  If the schema
+            for the table has both a HashKey and a RangeKey, each
+            element in the list should be a tuple consisting of
+            (hash_key, range_key).  If the schema for the table
+            contains only a HashKey, each element in the list should
+            be a scalar value of the appropriate type for the table
+            schema.
+        """
+        self.append(BatchWrite(table, puts, deletes))
+
+    def submit(self):
+        return self.layer2.batch_write_item(self)
+
+    def to_dict(self):
+        """
+        Convert a BatchWriteList object into the format required for Layer1.
+        """
+        d = {}
+        for batch in self:
+            table_name, batch_dict = batch.to_dict()
+            d[table_name] = batch_dict
+        return d
+
diff --git a/boto/dynamodb/condition.py b/boto/dynamodb/condition.py
new file mode 100644
index 0000000..0b76790
--- /dev/null
+++ b/boto/dynamodb/condition.py
@@ -0,0 +1,170 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from boto.dynamodb.types import dynamize_value
+
+
+class Condition(object):
+    """
+    Base class for conditions.  Doesn't do a darn thing but allows
+    us to test if something is a Condition instance or not.
+    """
+
+    def __eq__(self, other):
+        if isinstance(other, Condition):
+            return self.to_dict() == other.to_dict()
+
+class ConditionNoArgs(Condition):
+    """
+    Abstract class for Conditions that require no arguments, such
+    as NULL or NOT_NULL.
+    """
+
+    def __repr__(self):
+        return '%s' % self.__class__.__name__
+
+    def to_dict(self):
+        return {'ComparisonOperator': self.__class__.__name__}
+
+
+class ConditionOneArg(Condition):
+    """
+    Abstract class for Conditions that require a single argument
+    such as EQ or NE.
+    """
+
+    def __init__(self, v1):
+        self.v1 = v1
+
+    def __repr__(self):
+        return '%s:%s' % (self.__class__.__name__, self.v1)
+
+    def to_dict(self):
+        return {'AttributeValueList': [dynamize_value(self.v1)],
+                'ComparisonOperator': self.__class__.__name__}
+
+
+class ConditionTwoArgs(Condition):
+    """
+    Abstract class for Conditions that require two arguments.
+    The only example of this currently is BETWEEN.
+    """
+
+    def __init__(self, v1, v2):
+        self.v1 = v1
+        self.v2 = v2
+
+    def __repr__(self):
+        return '%s(%s, %s)' % (self.__class__.__name__, self.v1, self.v2)
+
+    def to_dict(self):
+        values = (self.v1, self.v2)
+        return {'AttributeValueList': [dynamize_value(v) for v in values],
+                'ComparisonOperator': self.__class__.__name__}
+
+
+class ConditionSeveralArgs(Condition):
+    """
+    Abstract class for conditions that require several arguments (e.g. IN).
+    """
+
+    def __init__(self, values):
+        self.values = values
+
+    def __repr__(self):
+        return '{}({})'.format(self.__class__.__name__,
+                               ', '.join(self.values))
+
+    def to_dict(self):
+        return {'AttributeValueList': [dynamize_value(v) for v in self.values],
+                'ComparisonOperator': self.__class__.__name__}
+
+
+class EQ(ConditionOneArg):
+
+    pass
+
+
+class NE(ConditionOneArg):
+
+    pass
+
+
+class LE(ConditionOneArg):
+
+    pass
+
+
+class LT(ConditionOneArg):
+
+    pass
+
+
+class GE(ConditionOneArg):
+
+    pass
+
+
+class GT(ConditionOneArg):
+
+    pass
+
+
+class NULL(ConditionNoArgs):
+
+    pass
+
+
+class NOT_NULL(ConditionNoArgs):
+
+    pass
+
+
+class CONTAINS(ConditionOneArg):
+
+    pass
+
+
+class NOT_CONTAINS(ConditionOneArg):
+
+    pass
+
+
+class BEGINS_WITH(ConditionOneArg):
+
+    pass
+
+
+class IN(ConditionSeveralArgs):
+
+    pass
+
+
+class BETWEEN(ConditionTwoArgs):
+
+    pass
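+
+
+# Usage sketch: a condition serializes itself to the wire format via
+# to_dict(); the typed value shown below ({'S': 'bar'}) comes from
+# dynamize_value and is illustrative only:
+#
+#     BEGINS_WITH('bar').to_dict()
+#     # -> {'AttributeValueList': [{'S': 'bar'}],
+#     #     'ComparisonOperator': 'BEGINS_WITH'}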
diff --git a/boto/dynamodb/exceptions.py b/boto/dynamodb/exceptions.py
new file mode 100644
index 0000000..b60d5aa
--- /dev/null
+++ b/boto/dynamodb/exceptions.py
@@ -0,0 +1,45 @@
+"""
+Exceptions that are specific to the dynamodb module.
+"""
+from boto.exception import BotoServerError, BotoClientError
+from boto.exception import DynamoDBResponseError
+
+class DynamoDBExpiredTokenError(BotoServerError):
+    """
+    Raised when a DynamoDB security token expires. This is generally boto's
+    (or the user's) notice to renew their DynamoDB security tokens.
+    """
+    pass
+
+
+class DynamoDBKeyNotFoundError(BotoClientError):
+    """
+    Raised when attempting to retrieve or interact with an item whose key
+    can't be found.
+    """
+    pass
+
+
+class DynamoDBItemError(BotoClientError):
+    """
+    Raised when invalid parameters are passed when creating a
+    new Item in DynamoDB.
+    """
+    pass
+
+
+class DynamoDBConditionalCheckFailedError(DynamoDBResponseError):
+    """
+    Raised when a ConditionalCheckFailedException response is received.
+    This happens when a conditional check, expressed via the expected_value
+    paramenter, fails.
+    """
+    pass
+
+class DynamoDBValidationError(DynamoDBResponseError):
+    """
+    Raised when a ValidationException response is received. This happens
+    when one or more required parameter values are missing, or if the item
+    has exceeded the 64 KB size limit.
+    """
+    pass
diff --git a/boto/dynamodb/item.py b/boto/dynamodb/item.py
new file mode 100644
index 0000000..4d4abda
--- /dev/null
+++ b/boto/dynamodb/item.py
@@ -0,0 +1,196 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from boto.dynamodb.exceptions import DynamoDBItemError
+
+
+class Item(dict):
+    """
+    An item in Amazon DynamoDB.
+
+    :ivar hash_key: The HashKey of this item.
+    :ivar range_key: The RangeKey of this item or None if no RangeKey
+        is defined.
+    :ivar hash_key_name: The name of the HashKey associated with this item.
+    :ivar range_key_name: The name of the RangeKey associated with this item.
+    :ivar table: The Table this item belongs to.
+    """
+    
+    def __init__(self, table, hash_key=None, range_key=None, attrs=None):
+        self.table = table
+        self._updates = None
+        self._hash_key_name = self.table.schema.hash_key_name
+        self._range_key_name = self.table.schema.range_key_name
+        if attrs is None:
+            attrs = {}
+        if hash_key is None:
+            hash_key = attrs.get(self._hash_key_name, None)
+        self[self._hash_key_name] = hash_key
+        if self._range_key_name:
+            if range_key is None:
+                range_key = attrs.get(self._range_key_name, None)
+            self[self._range_key_name] = range_key
+        for key, value in attrs.items():
+            if key != self._hash_key_name and key != self._range_key_name:
+                self[key] = value
+        self.consumed_units = 0
+        self._updates = {}
+
+    @property
+    def hash_key(self):
+        return self[self._hash_key_name]
+
+    @property
+    def range_key(self):
+        return self.get(self._range_key_name)
+
+    @property
+    def hash_key_name(self):
+        return self._hash_key_name
+
+    @property
+    def range_key_name(self):
+        return self._range_key_name
+
+    def add_attribute(self, attr_name, attr_value):
+        """
+        Queue the addition of an attribute to an item in DynamoDB.
+        This will eventually result in an UpdateItem request being issued
+        with an update action of ADD when the save method is called.
+
+        :type attr_name: str
+        :param attr_name: Name of the attribute you want to alter.
+
+        :type attr_value: int|long|float|set
+        :param attr_value: Value which is to be added to the attribute.
+        """
+        self._updates[attr_name] = ("ADD", attr_value)
+
+    def delete_attribute(self, attr_name, attr_value=None):
+        """
+        Queue the deletion of an attribute from an item in DynamoDB.
+        This call will result in a UpdateItem request being issued
+        with update action of DELETE when the save method is called.
+
+        :type attr_name: str
+        :param attr_name: Name of the attribute you want to alter.
+
+        :type attr_value: set
+        :param attr_value: A set of values to be removed from the attribute.
+            This parameter is optional. If None, the whole attribute is
+            removed from the item.
+        """
+        self._updates[attr_name] = ("DELETE", attr_value)
+
+    def put_attribute(self, attr_name, attr_value):
+        """
+        Queue the putting of an attribute to an item in DynamoDB.
+        This call will result in an UpdateItem request being issued
+        with the update action of PUT when the save method is called.
+
+        :type attr_name: str
+        :param attr_name: Name of the attribute you want to alter.
+
+        :type attr_value: int|long|float|str|set
+        :param attr_value: New value of the attribute.
+        """
+        self._updates[attr_name] = ("PUT", attr_value)
+
+    def save(self, expected_value=None, return_values=None):
+        """
+        Commits pending updates to Amazon DynamoDB.
+
+        :type expected_value: dict
+        :param expected_value: A dictionary of name/value pairs that
+            you expect.  This dictionary should have name/value pairs
+            where the name is the name of the attribute and the value is
+            either the value you are expecting or False if you expect
+            the attribute not to exist.
+
+        :type return_values: str
+        :param return_values: Controls the return of attribute name/value pairs
+            before they were updated. Possible values are: None, 'ALL_OLD',
+            'UPDATED_OLD', 'ALL_NEW' or 'UPDATED_NEW'. If 'ALL_OLD' is
+            specified and the item is overwritten, the content of the old item
+            is returned. If 'ALL_NEW' is specified, then all the attributes of
+            the new version of the item are returned. If 'UPDATED_NEW' is
+            specified, the new versions of only the updated attributes are
+            returned.
+        """
+        return self.table.layer2.update_item(self, expected_value,
+                                             return_values)
+
+    def delete(self, expected_value=None, return_values=None):
+        """
+        Delete the item from DynamoDB.
+
+        :type expected_value: dict
+        :param expected_value: A dictionary of name/value pairs that
+            you expect.  This dictionary should have name/value pairs
+            where the name is the name of the attribute and the value
+            is either the value you are expecting or False if you expect
+            the attribute not to exist.
+
+        :type return_values: str
+        :param return_values: Controls the return of attribute
+            name-value pairs before they were changed.  Possible
+            values are: None or 'ALL_OLD'. If 'ALL_OLD' is
+            specified and the item is overwritten, the content
+            of the old item is returned.
+        """
+        return self.table.layer2.delete_item(self, expected_value,
+                                             return_values)
+
+    def put(self, expected_value=None, return_values=None):
+        """
+        Store a new item or completely replace an existing item
+        in Amazon DynamoDB.
+
+        :type expected_value: dict
+        :param expected_value: A dictionary of name/value pairs that
+            you expect.  This dictionary should have name/value pairs
+            where the name is the name of the attribute and the value
+            is either the value you are expecting or False if you expect
+            the attribute not to exist.
+
+        :type return_values: str
+        :param return_values: Controls the return of attribute
+            name-value pairs before they were changed.  Possible
+            values are: None or 'ALL_OLD'. If 'ALL_OLD' is
+            specified and the item is overwritten, the content
+            of the old item is returned.
+        """
+        return self.table.layer2.put_item(self, expected_value, return_values)
+
+    def __setitem__(self, key, value):
+        """Overrwrite the setter to instead update the _updates
+        method so this can act like a normal dict"""
+        if self._updates is not None:
+            self.put_attribute(key, value)
+        dict.__setitem__(self, key, value)
+
+    def __delitem__(self, key):
+        """Remove this key from the items"""
+        if self._updates is not None:
+            self.delete_attribute(key)
+        dict.__delitem__(self, key)
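+
+
+# Usage sketch: an Item queues attribute changes and flushes them with
+# save().  The 'table' object below is assumed to be an existing
+# boto.dynamodb.table.Table:
+#
+#     item = Item(table, hash_key='mykey', attrs={'views': 0})
+#     item.put()                      # store the new item
+#     item.add_attribute('views', 1)  # queue an ADD update
+#     item['owner'] = 'mitch'         # __setitem__ queues a PUT update
+#     item.save()                     # issues the UpdateItem request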
diff --git a/boto/dynamodb/layer1.py b/boto/dynamodb/layer1.py
new file mode 100644
index 0000000..40dac5c
--- /dev/null
+++ b/boto/dynamodb/layer1.py
@@ -0,0 +1,554 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+import boto
+from boto.connection import AWSAuthConnection
+from boto.exception import DynamoDBResponseError
+from boto.provider import Provider
+from boto.dynamodb import exceptions as dynamodb_exceptions
+
+import time
+try:
+    import simplejson as json
+except ImportError:
+    import json
+
+#
+# To get full debug output, uncomment the following line and set the
+# value of Debug to be 2
+#
+#boto.set_stream_logger('dynamodb')
+Debug = 0
+
+
+class Layer1(AWSAuthConnection):
+    """
+    This is the lowest-level interface to DynamoDB.  Methods at this
+    layer map directly to API requests and parameters to the methods
+    are either simple, scalar values or they are the Python equivalent
+    of the JSON input as defined in the DynamoDB Developer's Guide.
+    All responses are direct decoding of the JSON response bodies to
+    Python data structures via the json or simplejson modules.
+
+    :ivar throughput_exceeded_events: An integer variable that
+        keeps a running total of the number of ThroughputExceeded
+        responses this connection has received from Amazon DynamoDB.
+    """
+
+    DefaultRegionName = 'us-east-1'
+    """The default region name for DynamoDB API."""
+
+    ServiceName = 'DynamoDB'
+    """The name of the Service"""
+
+    Version = '20111205'
+    """DynamoDB API version."""
+
+    ThruputError = "ProvisionedThroughputExceededException"
+    """The error response returned when provisioned throughput is exceeded"""
+
+    SessionExpiredError = 'com.amazon.coral.service#ExpiredTokenException'
+    """The error response returned when session token has expired"""
+
+    ConditionalCheckFailedError = 'ConditionalCheckFailedException'
+    """The error response returned when a conditional check fails"""
+
+    ValidationError = 'ValidationException'
+    """The error response returned when an item is invalid in some way"""
+
+    ResponseError = DynamoDBResponseError
+
+    def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
+                 is_secure=True, port=None, proxy=None, proxy_port=None,
+                 debug=0, security_token=None, region=None,
+                 validate_certs=True):
+        if not region:
+            region_name = boto.config.get('DynamoDB', 'region',
+                                          self.DefaultRegionName)
+            for reg in boto.dynamodb.regions():
+                if reg.name == region_name:
+                    region = reg
+                    break
+
+        self.region = region
+        AWSAuthConnection.__init__(self, self.region.endpoint,
+                                   aws_access_key_id,
+                                   aws_secret_access_key,
+                                   is_secure, port, proxy, proxy_port,
+                                   debug=debug, security_token=security_token,
+                                   validate_certs=validate_certs)
+        self.throughput_exceeded_events = 0
+
+    def _get_session_token(self):
+        self.provider = Provider(self._provider_type)
+        self._auth_handler.update_provider(self.provider)
+
+    def _required_auth_capability(self):
+        return ['hmac-v4']
+
+    def make_request(self, action, body='', object_hook=None):
+        """
+        :raises: ``DynamoDBExpiredTokenError`` if the security token expires.
+        """
+        headers = {'X-Amz-Target': '%s_%s.%s' % (self.ServiceName,
+                                                 self.Version, action),
+                   'Host': self.region.endpoint,
+                   'Content-Type': 'application/x-amz-json-1.0',
+                   'Content-Length': str(len(body))}
+        http_request = self.build_base_http_request('POST', '/', '/',
+                                                    {}, headers, body, None)
+        start = time.time()
+        response = self._mexe(http_request, sender=None,
+                              override_num_retries=10,
+                              retry_handler=self._retry_handler)
+        elapsed = (time.time() - start)*1000
+        request_id = response.getheader('x-amzn-RequestId')
+        boto.log.debug('RequestId: %s' % request_id)
+        boto.perflog.info('%s: id=%s time=%sms',
+                          headers['X-Amz-Target'], request_id, int(elapsed))
+        response_body = response.read()
+        boto.log.debug(response_body)
+        return json.loads(response_body, object_hook=object_hook)
+
+    def _retry_handler(self, response, i, next_sleep):
+        status = None
+        if response.status == 400:
+            response_body = response.read()
+            boto.log.debug(response_body)
+            data = json.loads(response_body)
+            if self.ThruputError in data.get('__type'):
+                self.throughput_exceeded_events += 1
+                msg = "%s, retry attempt %s" % (self.ThruputError, i)
+                if i == 0:
+                    next_sleep = 0
+                else:
+                    next_sleep = 0.05 * (2 ** i)
+                i += 1
+                status = (msg, i, next_sleep)
+            elif self.SessionExpiredError in data.get('__type'):
+                msg = 'Renewing Session Token'
+                self._get_session_token()
+                status = (msg, i + self.num_retries - 1, 0)
+            elif self.ConditionalCheckFailedError in data.get('__type'):
+                raise dynamodb_exceptions.DynamoDBConditionalCheckFailedError(
+                    response.status, response.reason, data)
+            elif self.ValidationError in data.get('__type'):
+                raise dynamodb_exceptions.DynamoDBValidationError(
+                    response.status, response.reason, data)
+            else:
+                raise self.ResponseError(response.status, response.reason,
+                                         data)
+        return status
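+
+    # Backoff sketch: for ProvisionedThroughputExceeded responses the retry
+    # delay grows as 0.05 * 2**i seconds, i.e. 0 on the first retry, then
+    # 0.1, 0.2, 0.4, ... until the retry limit is reached.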
+
+    def list_tables(self, limit=None, start_table=None):
+        """
+        Returns a dictionary of results.  The dictionary contains
+        a **TableNames** key whose value is a list of the table names.
+        The dictionary could also contain a **LastEvaluatedTableName**
+        key whose value would be the last table name returned if
+        the complete list of table names was not returned.  This
+        value would then be passed as the ``start_table`` parameter on
+        a subsequent call to this method.
+
+        :type limit: int
+        :param limit: The maximum number of tables to return.
+
+        :type start_table: str
+        :param start_table: The name of the table that starts the
+            list.  If you ran a previous list_tables and not
+            all results were returned, the response dict would
+            include a LastEvaluatedTableName attribute.  Use
+            that value here to continue the listing.
+        """
+        data = {}
+        if limit:
+            data['Limit'] = limit
+        if start_table:
+            data['ExclusiveStartTableName'] = start_table
+        json_input = json.dumps(data)
+        return self.make_request('ListTables', json_input)
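+
+    # Pagination sketch: follow LastEvaluatedTableName until the complete
+    # table list has been retrieved.
+    #
+    #     tables, start = [], None
+    #     while True:
+    #         result = layer1.list_tables(limit=10, start_table=start)
+    #         tables.extend(result['TableNames'])
+    #         start = result.get('LastEvaluatedTableName')
+    #         if not start:
+    #             break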
+
+    def describe_table(self, table_name):
+        """
+        Returns information about the table including current
+        state of the table, primary key schema and when the
+        table was created.
+
+        :type table_name: str
+        :param table_name: The name of the table to describe.
+        """
+        data = {'TableName': table_name}
+        json_input = json.dumps(data)
+        return self.make_request('DescribeTable', json_input)
+
+    def create_table(self, table_name, schema, provisioned_throughput):
+        """
+        Add a new table to your account.  The table name must be unique
+        among those associated with the account issuing the request.
+        This request triggers an asynchronous workflow to begin creating
+        the table.  When the workflow is complete, the state of the
+        table will be ACTIVE.
+
+        :type table_name: str
+        :param table_name: The name of the table to create.
+
+        :type schema: dict
+        :param schema: A Python version of the KeySchema data structure
+            as defined by DynamoDB
+
+        :type provisioned_throughput: dict
+        :param provisioned_throughput: A Python version of the
+            ProvisionedThroughput data structure defined by
+            DynamoDB.
+        """
+        data = {'TableName': table_name,
+                'KeySchema': schema,
+                'ProvisionedThroughput': provisioned_throughput}
+        json_input = json.dumps(data)
+        response_dict = self.make_request('CreateTable', json_input)
+        return response_dict
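+
+    # Sketch of the schema and provisioned_throughput arguments in the
+    # 2011-12-05 API format; the attribute name and capacity values below
+    # are illustrative only:
+    #
+    #     schema = {'HashKeyElement': {'AttributeName': 'id',
+    #                                  'AttributeType': 'S'}}
+    #     provisioned_throughput = {'ReadCapacityUnits': 5,
+    #                               'WriteCapacityUnits': 5}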
+
+    def update_table(self, table_name, provisioned_throughput):
+        """
+        Updates the provisioned throughput for a given table.
+
+        :type table_name: str
+        :param table_name: The name of the table to update.
+
+        :type provisioned_throughput: dict
+        :param provisioned_throughput: A Python version of the
+            ProvisionedThroughput data structure defined by
+            DynamoDB.
+        """
+        data = {'TableName': table_name,
+                'ProvisionedThroughput': provisioned_throughput}
+        json_input = json.dumps(data)
+        return self.make_request('UpdateTable', json_input)
+
+    def delete_table(self, table_name):
+        """
+        Deletes the table and all of its data.  After this request
+        the table will be in the DELETING state until DynamoDB
+        completes the delete operation.
+
+        :type table_name: str
+        :param table_name: The name of the table to delete.
+        """
+        data = {'TableName': table_name}
+        json_input = json.dumps(data)
+        return self.make_request('DeleteTable', json_input)
+
+    def get_item(self, table_name, key, attributes_to_get=None,
+                 consistent_read=False, object_hook=None):
+        """
+        Return a set of attributes for an item that matches
+        the supplied key.
+
+        :type table_name: str
+        :param table_name: The name of the table containing the item.
+
+        :type key: dict
+        :param key: A Python version of the Key data structure
+            defined by DynamoDB.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :type consistent_read: bool
+        :param consistent_read: If True, a consistent read
+            request is issued.  Otherwise, an eventually consistent
+            request is issued.
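+
+        A hedged sketch of the Key structure (assuming an existing
+        Layer1 connection ``conn`` and a table with a string hash
+        key; all names are illustrative)::
+
+            key = {'HashKeyElement': {'S': 'user-1'}}
+            item = conn.get_item('mytable', key,
+                                 attributes_to_get=['status'],
+                                 consistent_read=True)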
+        """
+        data = {'TableName': table_name,
+                'Key': key}
+        if attributes_to_get:
+            data['AttributesToGet'] = attributes_to_get
+        if consistent_read:
+            data['ConsistentRead'] = True
+        json_input = json.dumps(data)
+        response = self.make_request('GetItem', json_input,
+                                     object_hook=object_hook)
+        if 'Item' not in response:
+            raise dynamodb_exceptions.DynamoDBKeyNotFoundError(
+                "Key does not exist."
+            )
+        return response
+
+    def batch_get_item(self, request_items, object_hook=None):
+        """
+        Return a set of attributes for multiple items in
+        multiple tables using their primary keys.
+
+        :type request_items: dict
+        :param request_items: A Python version of the RequestItems
+            data structure defined by DynamoDB.
+        """
+        # If request_items is empty, return an empty response.
+        if not request_items:
+            return {}
+        data = {'RequestItems': request_items}
+        json_input = json.dumps(data)
+        return self.make_request('BatchGetItem', json_input,
+                                 object_hook=object_hook)
+
+    def batch_write_item(self, request_items, object_hook=None):
+        """
+        This operation enables you to put or delete several items
+        across multiple tables in a single API call.
+
+        :type request_items: dict
+        :param request_items: A Python version of the RequestItems
+            data structure defined by DynamoDB.
+        """
+        data = {'RequestItems': request_items}
+        json_input = json.dumps(data)
+        return self.make_request('BatchWriteItem', json_input,
+                                 object_hook=object_hook)
+
+    def put_item(self, table_name, item,
+                 expected=None, return_values=None,
+                 object_hook=None):
+        """
+        Create a new item or replace an old item with a new
+        item (including all attributes).  If an item already
+        exists in the specified table with the same primary
+        key, the new item will completely replace the old item.
+        You can perform a conditional put by specifying an
+        expected rule.
+
+        :type table_name: str
+        :param table_name: The name of the table in which to put the item.
+
+        :type item: dict
+        :param item: A Python version of the Item data structure
+            defined by DynamoDB.
+
+        :type expected: dict
+        :param expected: A Python version of the Expected
+            data structure defined by DynamoDB.
+
+        :type return_values: str
+        :param return_values: Controls the return of attribute
+            name-value pairs before they were changed.  Possible
+            values are: None or 'ALL_OLD'. If 'ALL_OLD' is
+            specified and the item is overwritten, the content
+            of the old item is returned.
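+
+        A hedged sketch of the Item structure (assuming an existing
+        Layer1 connection ``conn``; the table and attribute names
+        are illustrative and values use DynamoDB type codes)::
+
+            item = {'id': {'S': 'user-1'},
+                    'age': {'N': '25'}}
+            conn.put_item('mytable', item, return_values='ALL_OLD')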
+        """
+        data = {'TableName': table_name,
+                'Item': item}
+        if expected:
+            data['Expected'] = expected
+        if return_values:
+            data['ReturnValues'] = return_values
+        json_input = json.dumps(data)
+        return self.make_request('PutItem', json_input,
+                                 object_hook=object_hook)
+
+    def update_item(self, table_name, key, attribute_updates,
+                    expected=None, return_values=None,
+                    object_hook=None):
+        """
+        Edits an existing item's attributes. You can perform a conditional
+        update (insert a new attribute name-value pair if it doesn't exist,
+        or replace an existing name-value pair if it has certain expected
+        attribute values).
+
+        :type table_name: str
+        :param table_name: The name of the table.
+
+        :type key: dict
+        :param key: A Python version of the Key data structure
+            defined by DynamoDB which identifies the item to be updated.
+
+        :type attribute_updates: dict
+        :param attribute_updates: A Python version of the AttributeUpdates
+            data structure defined by DynamoDB.
+
+        :type expected: dict
+        :param expected: A Python version of the Expected
+            data structure defined by DynamoDB.
+
+        :type return_values: str
+        :param return_values: Controls the return of attribute
+            name-value pairs before they were changed.  Possible
+            values are: None or 'ALL_OLD'. If 'ALL_OLD' is
+            specified and the item is overwritten, the content
+            of the old item is returned.
+        """
+        data = {'TableName': table_name,
+                'Key': key,
+                'AttributeUpdates': attribute_updates}
+        if expected:
+            data['Expected'] = expected
+        if return_values:
+            data['ReturnValues'] = return_values
+        json_input = json.dumps(data)
+        return self.make_request('UpdateItem', json_input,
+                                 object_hook=object_hook)
+
+    def delete_item(self, table_name, key,
+                    expected=None, return_values=None,
+                    object_hook=None):
+        """
+        Delete an item and all of its attributes by primary key.
+        You can perform a conditional delete by specifying an
+        expected rule.
+
+        :type table_name: str
+        :param table_name: The name of the table containing the item.
+
+        :type key: dict
+        :param key: A Python version of the Key data structure
+            defined by DynamoDB.
+
+        :type expected: dict
+        :param expected: A Python version of the Expected
+            data structure defined by DynamoDB.
+
+        :type return_values: str
+        :param return_values: Controls the return of attribute
+            name-value pairs before they were changed.  Possible
+            values are: None or 'ALL_OLD'. If 'ALL_OLD' is
+            specified and the item is overwritten, the content
+            of the old item is returned.
+        """
+        data = {'TableName': table_name,
+                'Key': key}
+        if expected:
+            data['Expected'] = expected
+        if return_values:
+            data['ReturnValues'] = return_values
+        json_input = json.dumps(data)
+        return self.make_request('DeleteItem', json_input,
+                                 object_hook=object_hook)
+
+    def query(self, table_name, hash_key_value, range_key_conditions=None,
+              attributes_to_get=None, limit=None, consistent_read=False,
+              scan_index_forward=True, exclusive_start_key=None,
+              object_hook=None):
+        """
+        Perform a query of DynamoDB.  This is a low-level method that
+        expects its parameters to map directly to the JSON request
+        structure defined by DynamoDB; they are passed through as is.
+
+        :type table_name: str
+        :param table_name: The name of the table to query.
+
+        :type hash_key_value: dict
+        :param hash_key_value: A DynamoDB-style HashKeyValue.
+
+        :type range_key_conditions: dict
+        :param range_key_conditions: A Python version of the
+            RangeKeyConditions data structure.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :type limit: int
+        :param limit: The maximum number of items to return.
+
+        :type consistent_read: bool
+        :param consistent_read: If True, a consistent read
+            request is issued.  Otherwise, an eventually consistent
+            request is issued.
+
+        :type scan_index_forward: bool
+        :param scan_index_forward: Specifies forward or backward
+            traversal of the index.  Default is forward (True).
+
+        :type exclusive_start_key: dict
+        :param exclusive_start_key: Primary key of the item from
+            which to continue an earlier query.  This would be
+            provided as the LastEvaluatedKey in that query.
+        """
+        data = {'TableName': table_name,
+                'HashKeyValue': hash_key_value}
+        if range_key_conditions:
+            data['RangeKeyCondition'] = range_key_conditions
+        if attributes_to_get:
+            data['AttributesToGet'] = attributes_to_get
+        if limit:
+            data['Limit'] = limit
+        if consistent_read:
+            data['ConsistentRead'] = True
+        if scan_index_forward:
+            data['ScanIndexForward'] = True
+        else:
+            data['ScanIndexForward'] = False
+        if exclusive_start_key:
+            data['ExclusiveStartKey'] = exclusive_start_key
+        json_input = json.dumps(data)
+        return self.make_request('Query', json_input,
+                                 object_hook=object_hook)
+
+    def scan(self, table_name, scan_filter=None,
+             attributes_to_get=None, limit=None,
+             count=False, exclusive_start_key=None,
+             object_hook=None):
+        """
+        Perform a scan of DynamoDB.  This is a low-level method that
+        expects its parameters to map directly to the JSON request
+        structure defined by DynamoDB; they are passed through as is.
+
+        :type table_name: str
+        :param table_name: The name of the table to scan.
+
+        :type scan_filter: dict
+        :param scan_filter: A Python version of the
+            ScanFilter data structure.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :type limit: int
+        :param limit: The maximum number of items to return.
+
+        :type count: bool
+        :param count: If True, Amazon DynamoDB returns a total
+            number of items for the Scan operation, even if the
+            operation has no matching items for the assigned filter.
+
+        :type exclusive_start_key: dict
+        :param exclusive_start_key: Primary key of the item from
+            which to continue an earlier query.  This would be
+            provided as the LastEvaluatedKey in that query.
+        """
+        data = {'TableName': table_name}
+        if scan_filter:
+            data['ScanFilter'] = scan_filter
+        if attributes_to_get:
+            data['AttributesToGet'] = attributes_to_get
+        if limit:
+            data['Limit'] = limit
+        if count:
+            data['Count'] = True
+        if exclusive_start_key:
+            data['ExclusiveStartKey'] = exclusive_start_key
+        json_input = json.dumps(data)
+        return self.make_request('Scan', json_input, object_hook=object_hook)
diff --git a/boto/dynamodb/layer2.py b/boto/dynamodb/layer2.py
new file mode 100644
index 0000000..45fd069
--- /dev/null
+++ b/boto/dynamodb/layer2.py
@@ -0,0 +1,726 @@
+# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+import base64
+
+from boto.dynamodb.layer1 import Layer1
+from boto.dynamodb.table import Table
+from boto.dynamodb.schema import Schema
+from boto.dynamodb.item import Item
+from boto.dynamodb.batch import BatchList, BatchWriteList
+from boto.dynamodb.types import get_dynamodb_type, dynamize_value, \
+        convert_num, convert_binary
+
+
+def item_object_hook(dct):
+    """
+    A custom object hook for use when decoding JSON item bodies.
+    This hook will transform Amazon DynamoDB JSON responses to something
+    that maps directly to native Python types.
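+
+    A rough sketch of the effect (assuming the standard ``json``
+    module; the attribute name and value are illustrative)::
+
+        json.loads('{"Age": {"N": "25"}}', object_hook=item_object_hook)
+        # -> {u'Age': 25}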
+    """
+    if len(dct.keys()) > 1:
+        return dct
+    if 'S' in dct:
+        return dct['S']
+    if 'N' in dct:
+        return convert_num(dct['N'])
+    if 'SS' in dct:
+        return set(dct['SS'])
+    if 'NS' in dct:
+        return set(map(convert_num, dct['NS']))
+    if 'B' in dct:
+        return base64.b64decode(dct['B'])
+    if 'BS' in dct:
+        return set(map(convert_binary, dct['BS']))
+    return dct
+
+
+def table_generator(tgen):
+    """
+    A low-level generator used to page through results from
+    query and scan operations.  This is used by
+    :class:`boto.dynamodb.layer2.TableGenerator` and is not intended
+    to be used outside of that context.
+    """
+    response = True
+    n = 0
+    while response:
+        if tgen.max_results and n == tgen.max_results:
+            break
+        if response is True:
+            pass
+        elif 'LastEvaluatedKey' in response:
+            lek = response['LastEvaluatedKey']
+            esk = tgen.table.layer2.dynamize_last_evaluated_key(lek)
+            tgen.kwargs['exclusive_start_key'] = esk
+        else:
+            break
+        response = tgen.callable(**tgen.kwargs)
+        if 'ConsumedCapacityUnits' in response:
+            tgen.consumed_units += response['ConsumedCapacityUnits']
+        for item in response['Items']:
+            if tgen.max_results and n == tgen.max_results:
+                break
+            yield tgen.item_class(tgen.table, attrs=item)
+            n += 1
+
+
+class TableGenerator:
+    """
+    This is an object that wraps up the table_generator function.
+    The only real reason to have this is that we want to be able
+    to accumulate and return the ConsumedCapacityUnits element that
+    is part of each response.
+
+    :ivar consumed_units: An integer that holds the number of
+        ConsumedCapacityUnits accumulated thus far for this
+        generator.
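+
+    A brief usage sketch (``gen`` would be the value returned by
+    :meth:`Layer2.query` or :meth:`Layer2.scan`)::
+
+        items = list(gen)
+        print gen.consumed_units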
+    """
+
+    def __init__(self, table, callable, max_results, item_class, kwargs):
+        self.table = table
+        self.callable = callable
+        self.max_results = max_results
+        self.item_class = item_class
+        self.kwargs = kwargs
+        self.consumed_units = 0
+
+    def __iter__(self):
+        return table_generator(self)
+
+
+class Layer2(object):
+
+    def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
+                 is_secure=True, port=None, proxy=None, proxy_port=None,
+                 debug=0, security_token=None, region=None,
+                 validate_certs=True):
+        self.layer1 = Layer1(aws_access_key_id, aws_secret_access_key,
+                             is_secure, port, proxy, proxy_port,
+                             debug, security_token, region,
+                             validate_certs=validate_certs)
+
+    def dynamize_attribute_updates(self, pending_updates):
+        """
+        Convert a set of pending item updates into the structure
+        required by Layer1.
+        """
+        d = {}
+        for attr_name in pending_updates:
+            action, value = pending_updates[attr_name]
+            if value is None:
+                # DELETE without an attribute value
+                d[attr_name] = {"Action": action}
+            else:
+                d[attr_name] = {"Action": action,
+                                "Value": dynamize_value(value)}
+        return d
+
+    def dynamize_item(self, item):
+        d = {}
+        for attr_name in item:
+            d[attr_name] = dynamize_value(item[attr_name])
+        return d
+
+    def dynamize_range_key_condition(self, range_key_condition):
+        """
+        Convert a layer2 range_key_condition parameter into the
+        structure required by Layer1.
+        """
+        return range_key_condition.to_dict()
+
+    def dynamize_scan_filter(self, scan_filter):
+        """
+        Convert a layer2 scan_filter parameter into the
+        structure required by Layer1.
+        """
+        d = None
+        if scan_filter:
+            d = {}
+            for attr_name in scan_filter:
+                condition = scan_filter[attr_name]
+                d[attr_name] = condition.to_dict()
+        return d
+
+    def dynamize_expected_value(self, expected_value):
+        """
+        Convert an expected_value parameter into the data structure
+        required for Layer1.
+        """
+        d = None
+        if expected_value:
+            d = {}
+            for attr_name in expected_value:
+                attr_value = expected_value[attr_name]
+                if attr_value is True:
+                    attr_value = {'Exists': True}
+                elif attr_value is False:
+                    attr_value = {'Exists': False}
+                else:
+                    val = dynamize_value(expected_value[attr_name])
+                    attr_value = {'Value': val}
+                d[attr_name] = attr_value
+        return d
+
+    def dynamize_last_evaluated_key(self, last_evaluated_key):
+        """
+        Convert a last_evaluated_key parameter into the data structure
+        required for Layer1.
+        """
+        d = None
+        if last_evaluated_key:
+            hash_key = last_evaluated_key['HashKeyElement']
+            d = {'HashKeyElement': dynamize_value(hash_key)}
+            if 'RangeKeyElement' in last_evaluated_key:
+                range_key = last_evaluated_key['RangeKeyElement']
+                d['RangeKeyElement'] = dynamize_value(range_key)
+        return d
+
+    def build_key_from_values(self, schema, hash_key, range_key=None):
+        """
+        Build a Key structure to be used for accessing items
+        in Amazon DynamoDB.  This method takes the supplied hash_key
+        and optional range_key and validates them against the
+        schema.  If there is a mismatch, a TypeError is raised.
+        Otherwise, a Python dict version of an Amazon DynamoDB Key
+        data structure is returned.
+
+        :type hash_key: int, float, str, or unicode
+        :param hash_key: The hash key of the item you are looking for.
+            The type of the hash key should match the type defined in
+            the schema.
+
+        :type range_key: int, float, str or unicode
+        :param range_key: The range key of the item you are looking for.
+            This should be supplied only if the schema requires a
+            range key.  The type of the range key should match the
+            type defined in the schema.
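+
+        A hedged sketch (assuming an existing Layer2 connection
+        ``conn`` and a ``table`` whose schema has a string hash key
+        and a numeric range key; the values are illustrative)::
+
+            key = conn.build_key_from_values(table.schema, 'user-1', 42)
+            # roughly -> {'HashKeyElement': {'S': 'user-1'},
+            #             'RangeKeyElement': {'N': '42'}}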
+        """
+        dynamodb_key = {}
+        dynamodb_value = dynamize_value(hash_key)
+        if dynamodb_value.keys()[0] != schema.hash_key_type:
+            msg = 'Hashkey must be of type: %s' % schema.hash_key_type
+            raise TypeError(msg)
+        dynamodb_key['HashKeyElement'] = dynamodb_value
+        if range_key is not None:
+            dynamodb_value = dynamize_value(range_key)
+            if dynamodb_value.keys()[0] != schema.range_key_type:
+                msg = 'RangeKey must be of type: %s' % schema.range_key_type
+                raise TypeError(msg)
+            dynamodb_key['RangeKeyElement'] = dynamodb_value
+        return dynamodb_key
+
+    def new_batch_list(self):
+        """
+        Return a new, empty :class:`boto.dynamodb.batch.BatchList`
+        object.
+        """
+        return BatchList(self)
+
+    def new_batch_write_list(self):
+        """
+        Return a new, empty :class:`boto.dynamodb.batch.BatchWriteList`
+        object.
+        """
+        return BatchWriteList(self)
+
+    def list_tables(self, limit=None):
+        """
+        Return a list of the names of all tables associated with the
+        current account and region.
+
+        :type limit: int
+        :param limit: The maximum number of tables to return.
+        """
+        tables = []
+        start_table = None
+        while not limit or len(tables) < limit:
+            this_round_limit = None
+            if limit:
+                this_round_limit = limit - len(tables)
+                this_round_limit = min(this_round_limit, 100)
+            result = self.layer1.list_tables(limit=this_round_limit,
+                                             start_table=start_table)
+            tables.extend(result.get('TableNames', []))
+            start_table = result.get('LastEvaluatedTableName', None)
+            if not start_table:
+                break
+        return tables
+
+    def describe_table(self, name):
+        """
+        Retrieve information about an existing table.
+
+        :type name: str
+        :param name: The name of the desired table.
+
+        """
+        return self.layer1.describe_table(name)
+
+    def get_table(self, name):
+        """
+        Retrieve the Table object for an existing table.
+
+        :type name: str
+        :param name: The name of the desired table.
+
+        :rtype: :class:`boto.dynamodb.table.Table`
+        :return: A Table object representing the table.
+        """
+        response = self.layer1.describe_table(name)
+        return Table(self, response)
+
+    lookup = get_table
+
+    def create_table(self, name, schema, read_units, write_units):
+        """
+        Create a new Amazon DynamoDB table.
+
+        :type name: str
+        :param name: The name of the desired table.
+
+        :type schema: :class:`boto.dynamodb.schema.Schema`
+        :param schema: The Schema object that defines the schema used
+            by this table.
+
+        :type read_units: int
+        :param read_units: The value for ReadCapacityUnits.
+
+        :type write_units: int
+        :param write_units: The value for WriteCapacityUnits.
+
+        :rtype: :class:`boto.dynamodb.table.Table`
+        :return: A Table object representing the new Amazon DynamoDB table.
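+
+        A minimal sketch (assuming an existing Layer2 connection
+        ``conn``; the table and key names are illustrative)::
+
+            schema = conn.create_schema(hash_key_name='id',
+                                        hash_key_proto_value=str,
+                                        range_key_name='created',
+                                        range_key_proto_value=int)
+            table = conn.create_table('mytable', schema,
+                                      read_units=5, write_units=5)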
+        """
+        response = self.layer1.create_table(name, schema.dict,
+                                            {'ReadCapacityUnits': read_units,
+                                             'WriteCapacityUnits': write_units})
+        return Table(self, response)
+
+    def update_throughput(self, table, read_units, write_units):
+        """
+        Update the ProvisionedThroughput for the Amazon DynamoDB Table.
+
+        :type table: :class:`boto.dynamodb.table.Table`
+        :param table: The Table object whose throughput is being updated.
+
+        :type read_units: int
+        :param read_units: The new value for ReadCapacityUnits.
+
+        :type write_units: int
+        :param write_units: The new value for WriteCapacityUnits.
+        """
+        response = self.layer1.update_table(table.name,
+                                            {'ReadCapacityUnits': read_units,
+                                             'WriteCapacityUnits': write_units})
+        table.update_from_response(response)
+
+    def delete_table(self, table):
+        """
+        Delete this table and all items in it.  After calling this,
+        the Table object's status attribute will be set to 'DELETING'.
+
+        :type table: :class:`boto.dynamodb.table.Table`
+        :param table: The Table object that is being deleted.
+        """
+        response = self.layer1.delete_table(table.name)
+        table.update_from_response(response)
+
+    def create_schema(self, hash_key_name, hash_key_proto_value,
+                      range_key_name=None, range_key_proto_value=None):
+        """
+        Create a Schema object used when creating a Table.
+
+        :type hash_key_name: str
+        :param hash_key_name: The name of the HashKey for the schema.
+
+        :type hash_key_proto_value: int|long|float|str|unicode
+        :param hash_key_proto_value: A sample or prototype of the type
+            of value you want to use for the HashKey.  Alternatively,
+            you can also just pass in the Python type (e.g. int, float, etc.).
+
+        :type range_key_name: str
+        :param range_key_name: The name of the RangeKey for the schema.
+            This parameter is optional.
+
+        :type range_key_proto_value: int|long|float|str|unicode
+        :param range_key_proto_value: A sample or prototype of the type
+            of value you want to use for the RangeKey.  Alternatively,
+            you can also pass in the Python type (e.g. int, float, etc.)
+            This parameter is optional.
+        """
+        schema = {}
+        hash_key = {}
+        hash_key['AttributeName'] = hash_key_name
+        hash_key_type = get_dynamodb_type(hash_key_proto_value)
+        hash_key['AttributeType'] = hash_key_type
+        schema['HashKeyElement'] = hash_key
+        if range_key_name and range_key_proto_value is not None:
+            range_key = {}
+            range_key['AttributeName'] = range_key_name
+            range_key_type = get_dynamodb_type(range_key_proto_value)
+            range_key['AttributeType'] = range_key_type
+            schema['RangeKeyElement'] = range_key
+        return Schema(schema)
+
+    def get_item(self, table, hash_key, range_key=None,
+                 attributes_to_get=None, consistent_read=False,
+                 item_class=Item):
+        """
+        Retrieve an existing item from the table.
+
+        :type table: :class:`boto.dynamodb.table.Table`
+        :param table: The Table object from which the item is retrieved.
+
+        :type hash_key: int|long|float|str|unicode
+        :param hash_key: The HashKey of the requested item.  The
+            type of the value must match the type defined in the
+            schema for the table.
+
+        :type range_key: int|long|float|str|unicode
+        :param range_key: The optional RangeKey of the requested item.
+            The type of the value must match the type defined in the
+            schema for the table.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :type consistent_read: bool
+        :param consistent_read: If True, a consistent read
+            request is issued.  Otherwise, an eventually consistent
+            request is issued.
+
+        :type item_class: Class
+        :param item_class: Allows you to override the class used
+            to generate the items. This should be a subclass of
+            :class:`boto.dynamodb.item.Item`
+        """
+        key = self.build_key_from_values(table.schema, hash_key, range_key)
+        response = self.layer1.get_item(table.name, key,
+                                        attributes_to_get, consistent_read,
+                                        object_hook=item_object_hook)
+        item = item_class(table, hash_key, range_key, response['Item'])
+        if 'ConsumedCapacityUnits' in response:
+            item.consumed_units = response['ConsumedCapacityUnits']
+        return item
+
+    def batch_get_item(self, batch_list):
+        """
+        Return a set of attributes for multiple items in
+        multiple tables using their primary keys.
+
+        :type batch_list: :class:`boto.dynamodb.batch.BatchList`
+        :param batch_list: A BatchList object which consists of a
+            list of :class:`boto.dynamodb.batch.Batch` objects.
+            Each Batch object contains the information about one
+            batch of objects that you wish to retrieve in this
+            request.
+        """
+        request_items = batch_list.to_dict()
+        return self.layer1.batch_get_item(request_items,
+                                          object_hook=item_object_hook)
+
+    def batch_write_item(self, batch_list):
+        """
+        Performs multiple Puts and Deletes in one batch.
+
+        :type batch_list: :class:`boto.dynamodb.batch.BatchWriteList`
+        :param batch_list: A BatchWriteList object which consists of a
+            list of :class:`boto.dynamodb.batch.BatchWrite` objects.
+            Each Batch object contains the information about one
+            batch of objects that you wish to put or delete.
+        """
+        request_items = batch_list.to_dict()
+        return self.layer1.batch_write_item(request_items,
+                                            object_hook=item_object_hook)
+
+    def put_item(self, item, expected_value=None, return_values=None):
+        """
+        Store a new item or completely replace an existing item
+        in Amazon DynamoDB.
+
+        :type item: :class:`boto.dynamodb.item.Item`
+        :param item: The Item to write to Amazon DynamoDB.
+
+        :type expected_value: dict
+        :param expected_value: A dictionary of name/value pairs that you expect.
+            This dictionary should have name/value pairs where the name
+            is the name of the attribute and the value is either the value
+            you are expecting or False if you expect the attribute not to
+            exist.
+
+        :type return_values: str
+        :param return_values: Controls the return of attribute
+            name-value pairs before they were changed.  Possible
+            values are: None or 'ALL_OLD'. If 'ALL_OLD' is
+            specified and the item is overwritten, the content
+            of the old item is returned.
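+
+        A conditional-put sketch (assuming an existing Layer2
+        connection ``conn`` and a ``table`` from :meth:`get_table`;
+        attribute names are illustrative)::
+
+            item = table.new_item(hash_key='user-1',
+                                  attrs={'status': 'active'})
+            # Only succeed if no 'status' attribute exists yet.
+            conn.put_item(item, expected_value={'status': False})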
+        """
+        expected_value = self.dynamize_expected_value(expected_value)
+        response = self.layer1.put_item(item.table.name,
+                                        self.dynamize_item(item),
+                                        expected_value, return_values,
+                                        object_hook=item_object_hook)
+        if 'ConsumedCapacityUnits' in response:
+            item.consumed_units = response['ConsumedCapacityUnits']
+        return response
+
+    def update_item(self, item, expected_value=None, return_values=None):
+        """
+        Commit pending item updates to Amazon DynamoDB.
+
+        :type item: :class:`boto.dynamodb.item.Item`
+        :param item: The Item to update in Amazon DynamoDB.  It is expected
+            that you would have called the add_attribute, put_attribute
+            and/or delete_attribute methods on this Item prior to calling
+            this method.  Those queued changes are what will be updated.
+
+        :type expected_value: dict
+        :param expected_value: A dictionary of name/value pairs that you
+            expect.  This dictionary should have name/value pairs where the
+            name is the name of the attribute and the value is either the
+            value you are expecting or False if you expect the attribute
+            not to exist.
+
+        :type return_values: str
+        :param return_values: Controls the return of attribute name/value pairs
+            before they were updated. Possible values are: None, 'ALL_OLD',
+            'UPDATED_OLD', 'ALL_NEW' or 'UPDATED_NEW'. If 'ALL_OLD' is
+            specified and the item is overwritten, the content of the old item
+            is returned. If 'ALL_NEW' is specified, then all the attributes of
+            the new version of the item are returned. If 'UPDATED_NEW' is
+            specified, the new versions of only the updated attributes are
+            returned.
+
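+        A minimal sketch (assuming an ``item`` previously retrieved
+        with :meth:`get_item`; the attribute helpers below are the
+        ones named above, with assumed signatures)::
+
+            item.put_attribute('status', 'inactive')
+            item.delete_attribute('temp')
+            conn.update_item(item, return_values='UPDATED_NEW')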
+        """
+        expected_value = self.dynamize_expected_value(expected_value)
+        key = self.build_key_from_values(item.table.schema,
+                                         item.hash_key, item.range_key)
+        attr_updates = self.dynamize_attribute_updates(item._updates)
+
+        response = self.layer1.update_item(item.table.name, key,
+                                           attr_updates,
+                                           expected_value, return_values,
+                                           object_hook=item_object_hook)
+        item._updates.clear()
+        if 'ConsumedCapacityUnits' in response:
+            item.consumed_units = response['ConsumedCapacityUnits']
+        return response
+
+    def delete_item(self, item, expected_value=None, return_values=None):
+        """
+        Delete the item from Amazon DynamoDB.
+
+        :type item: :class:`boto.dynamodb.item.Item`
+        :param item: The Item to delete from Amazon DynamoDB.
+
+        :type expected_value: dict
+        :param expected_value: A dictionary of name/value pairs that you expect.
+            This dictionary should have name/value pairs where the name
+            is the name of the attribute and the value is either the value
+            you are expecting or False if you expect the attribute not to
+            exist.
+
+        :type return_values: str
+        :param return_values: Controls the return of attribute
+            name-value pairs before they were changed.  Possible
+            values are: None or 'ALL_OLD'. If 'ALL_OLD' is
+            specified and the item is overwritten, the content
+            of the old item is returned.
+        """
+        expected_value = self.dynamize_expected_value(expected_value)
+        key = self.build_key_from_values(item.table.schema,
+                                         item.hash_key, item.range_key)
+        return self.layer1.delete_item(item.table.name, key,
+                                       expected=expected_value,
+                                       return_values=return_values,
+                                       object_hook=item_object_hook)
+
+    def query(self, table, hash_key, range_key_condition=None,
+              attributes_to_get=None, request_limit=None,
+              max_results=None, consistent_read=False,
+              scan_index_forward=True, exclusive_start_key=None,
+              item_class=Item):
+        """
+        Perform a query on the table.
+
+        :type table: :class:`boto.dynamodb.table.Table`
+        :param table: The Table object that is being queried.
+
+        :type hash_key: int|long|float|str|unicode
+        :param hash_key: The HashKey of the requested item.  The
+            type of the value must match the type defined in the
+            schema for the table.
+
+        :type range_key_condition: :class:`boto.dynamodb.condition.Condition`
+        :param range_key_condition: A Condition object.
+            The Condition can be one of the following types:
+
+            EQ|LE|LT|GE|GT|BEGINS_WITH|BETWEEN
+
+            The only condition which expects or will accept two
+            values is 'BETWEEN', otherwise a single value should
+            be passed to the Condition constructor.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :type request_limit: int
+        :param request_limit: The maximum number of items to retrieve
+            from Amazon DynamoDB on each request.  You may want to set
+            a specific request_limit based on the provisioned throughput
+            of your table.  The default behavior is to retrieve as many
+            results as possible per request.
+
+        :type max_results: int
+        :param max_results: The maximum number of results that will
+            be retrieved from Amazon DynamoDB in total.  For example,
+            if you only wanted to see the first 100 results from the
+            query, regardless of how many were actually available, you
+            could set max_results to 100 and the generator returned
+            from the query method will yield at most 100 results.
+
+        :type consistent_read: bool
+        :param consistent_read: If True, a consistent read
+            request is issued.  Otherwise, an eventually consistent
+            request is issued.
+
+        :type scan_index_forward: bool
+        :param scan_index_forward: Specifies forward or backward
+            traversal of the index.  Default is forward (True).
+
+        :type exclusive_start_key: list or tuple
+        :param exclusive_start_key: Primary key of the item from
+            which to continue an earlier query.  This would be
+            provided as the LastEvaluatedKey in that query.
+
+        :type item_class: Class
+        :param item_class: Allows you to override the class used
+            to generate the items. This should be a subclass of
+            :class:`boto.dynamodb.item.Item`
+
+        :rtype: :class:`boto.dynamodb.layer2.TableGenerator`
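+
+        A hedged usage sketch (assuming an existing Layer2 connection
+        ``conn``, a ``table`` from :meth:`get_table`, and that the
+        condition classes live in :mod:`boto.dynamodb.condition`)::
+
+            from boto.dynamodb.condition import BEGINS_WITH
+            for item in conn.query(table, hash_key='user-1',
+                                   range_key_condition=BEGINS_WITH('2012'),
+                                   max_results=100):
+                print item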
+        """
+        if range_key_condition:
+            rkc = self.dynamize_range_key_condition(range_key_condition)
+        else:
+            rkc = None
+        if exclusive_start_key:
+            esk = self.build_key_from_values(table.schema,
+                                             *exclusive_start_key)
+        else:
+            esk = None
+        kwargs = {'table_name': table.name,
+                  'hash_key_value': dynamize_value(hash_key),
+                  'range_key_conditions': rkc,
+                  'attributes_to_get': attributes_to_get,
+                  'limit': request_limit,
+                  'consistent_read': consistent_read,
+                  'scan_index_forward': scan_index_forward,
+                  'exclusive_start_key': esk,
+                  'object_hook': item_object_hook}
+        return TableGenerator(table, self.layer1.query,
+                              max_results, item_class, kwargs)
+
+    def scan(self, table, scan_filter=None,
+             attributes_to_get=None, request_limit=None, max_results=None,
+             count=False, exclusive_start_key=None, item_class=Item):
+        """
+        Perform a scan of DynamoDB.
+
+        :type table: :class:`boto.dynamodb.table.Table`
+        :param table: The Table object that is being scanned.
+
+        :type scan_filter: dict
+        :param scan_filter: A dictionary where the key is the
+            attribute name and the value is a
+            :class:`boto.dynamodb.condition.Condition` object.
+            Valid Condition objects include:
+
+             * EQ - equal (1)
+             * NE - not equal (1)
+             * LE - less than or equal (1)
+             * LT - less than (1)
+             * GE - greater than or equal (1)
+             * GT - greater than (1)
+             * NOT_NULL - attribute exists (0, use None)
+             * NULL - attribute does not exist (0, use None)
+             * CONTAINS - substring or value in list (1)
+             * NOT_CONTAINS - absence of substring or value in list (1)
+             * BEGINS_WITH - substring prefix (1)
+             * IN - exact match in list (N)
+             * BETWEEN - >= first value, <= second value (2)
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :type request_limit: int
+        :param request_limit: The maximum number of items to retrieve
+            from Amazon DynamoDB on each request.  You may want to set
+            a specific request_limit based on the provisioned throughput
+            of your table.  The default behavior is to retrieve as many
+            results as possible per request.
+
+        :type max_results: int
+        :param max_results: The maximum number of results that will
+            be retrieved from Amazon DynamoDB in total.  For example,
+            if you only wanted to see the first 100 results from the
+            scan, regardless of how many were actually available, you
+            could set max_results to 100 and the generator returned
+            from the scan method will yield at most 100 results.
+
+        :type count: bool
+        :param count: If True, Amazon DynamoDB returns a total
+            number of items for the Scan operation, even if the
+            operation has no matching items for the assigned filter.
+
+        :type exclusive_start_key: list or tuple
+        :param exclusive_start_key: Primary key of the item from
+            which to continue an earlier query.  This would be
+            provided as the LastEvaluatedKey in that query.
+
+        :type item_class: Class
+        :param item_class: Allows you to override the class used
+            to generate the items. This should be a subclass of
+            :class:`boto.dynamodb.item.Item`
+
+        :rtype: :class:`boto.dynamodb.layer2.TableGenerator`
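+
+        A hedged sketch of a filtered scan (same assumptions as for
+        :meth:`query` above; the attribute name is illustrative)::
+
+            from boto.dynamodb.condition import GT
+            for item in conn.scan(table, scan_filter={'age': GT(21)}):
+                print item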
+        """
+        if exclusive_start_key:
+            esk = self.build_key_from_values(table.schema,
+                                             *exclusive_start_key)
+        else:
+            esk = None
+        kwargs = {'table_name': table.name,
+                  'scan_filter': self.dynamize_scan_filter(scan_filter),
+                  'attributes_to_get': attributes_to_get,
+                  'limit': request_limit,
+                  'count': count,
+                  'exclusive_start_key': esk,
+                  'object_hook': item_object_hook}
+        return TableGenerator(table, self.layer1.scan,
+                              max_results, item_class, kwargs)
diff --git a/boto/dynamodb/schema.py b/boto/dynamodb/schema.py
new file mode 100644
index 0000000..34ff212
--- /dev/null
+++ b/boto/dynamodb/schema.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+
+class Schema(object):
+    """
+    Represents a DynamoDB schema.
+
+    :ivar hash_key_name: The name of the hash key of the schema.
+    :ivar hash_key_type: The DynamoDB type specification for the
+        hash key of the schema.
+    :ivar range_key_name: The name of the range key of the schema
+        or None if no range key is defined.
+    :ivar range_key_type: The DynamoDB type specification for the
+        range key of the schema or None if no range key is defined.
+    :ivar dict: The underlying Python dictionary that needs to be
+        passed to Layer1 methods.
+    """
+
+    def __init__(self, schema_dict):
+        self._dict = schema_dict
+
+    def __repr__(self):
+        if self.range_key_name:
+            s = 'Schema(%s:%s)' % (self.hash_key_name, self.range_key_name)
+        else:
+            s = 'Schema(%s)' % self.hash_key_name
+        return s
+
+    @property
+    def dict(self):
+        return self._dict
+
+    @property
+    def hash_key_name(self):
+        return self._dict['HashKeyElement']['AttributeName']
+
+    @property
+    def hash_key_type(self):
+        return self._dict['HashKeyElement']['AttributeType']
+
+    @property
+    def range_key_name(self):
+        name = None
+        if 'RangeKeyElement' in self._dict:
+            name = self._dict['RangeKeyElement']['AttributeName']
+        return name
+
+    @property
+    def range_key_type(self):
+        type = None
+        if 'RangeKeyElement' in self._dict:
+            type = self._dict['RangeKeyElement']['AttributeType']
+        return type
diff --git a/boto/dynamodb/table.py b/boto/dynamodb/table.py
new file mode 100644
index 0000000..ee73b1a
--- /dev/null
+++ b/boto/dynamodb/table.py
@@ -0,0 +1,490 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+from boto.dynamodb.batch import BatchList
+from boto.dynamodb.schema import Schema
+from boto.dynamodb.item import Item
+from boto.dynamodb import exceptions as dynamodb_exceptions
+import time
+
+
+class TableBatchGenerator(object):
+    """
+    A low-level generator used to page through results from
+    batch_get_item operations.
+
+    :ivar consumed_units: An integer that holds the number of
+        ConsumedCapacityUnits accumulated thus far for this
+        generator.
+    """
+
+    def __init__(self, table, keys, attributes_to_get=None):
+        self.table = table
+        self.keys = keys
+        self.consumed_units = 0
+        self.attributes_to_get = attributes_to_get
+
+    def _queue_unprocessed(self, res):
+        if u'UnprocessedKeys' not in res:
+            return
+        if self.table.name not in res[u'UnprocessedKeys']:
+            return
+
+        keys = res[u'UnprocessedKeys'][self.table.name][u'Keys']
+
+        for key in keys:
+            h = key[u'HashKeyElement']
+            r = key[u'RangeKeyElement'] if u'RangeKeyElement' in key else None
+            self.keys.append((h, r))
+
+    def __iter__(self):
+        while self.keys:
+            # Build the next batch
+            batch = BatchList(self.table.layer2)
+            batch.add_batch(self.table, self.keys[:100], self.attributes_to_get)
+            res = batch.submit()
+
+            # parse the results
+            if self.table.name not in res[u'Responses']:
+                continue
+            self.consumed_units += \
+                res[u'Responses'][self.table.name][u'ConsumedCapacityUnits']
+            for elem in res[u'Responses'][self.table.name][u'Items']:
+                yield elem
+
+            # Re-queue unprocessed keys returned by the service.
+            self.keys = self.keys[100:]
+            self._queue_unprocessed(res)
+
+
+class Table(object):
+    """
+    An Amazon DynamoDB table.
+
+    :ivar name: The name of the table.
+    :ivar create_time: The date and time that the table was created.
+    :ivar status: The current status of the table.  One of:
+        'ACTIVE', 'UPDATING', 'DELETING'.
+    :ivar schema: A :class:`boto.dynamodb.schema.Schema` object representing
+        the schema defined for the table.
+    :ivar item_count: The number of items in the table.  This value is
+        set only when the Table object is created or refreshed and
+        may not reflect the actual count.
+    :ivar size_bytes: Total size of the specified table, in bytes.
+        Amazon DynamoDB updates this value approximately every six hours.
+        Recent changes might not be reflected in this value.
+    :ivar read_units: The ReadCapacityUnits of the table's
+        ProvisionedThroughput.
+    :ivar write_units: The WriteCapacityUnits of the table's
+        ProvisionedThroughput.
+    """
+
+    def __init__(self, layer2, response):
+        self.layer2 = layer2
+        self._dict = {}
+        self.update_from_response(response)
+
+    def __repr__(self):
+        return 'Table(%s)' % self.name
+
+    @property
+    def name(self):
+        return self._dict['TableName']
+
+    @property
+    def create_time(self):
+        return self._dict['CreationDateTime']
+
+    @property
+    def status(self):
+        return self._dict['TableStatus']
+
+    @property
+    def item_count(self):
+        return self._dict.get('ItemCount', 0)
+
+    @property
+    def size_bytes(self):
+        return self._dict.get('TableSizeBytes', 0)
+
+    @property
+    def schema(self):
+        return self._schema
+
+    @property
+    def read_units(self):
+        return self._dict['ProvisionedThroughput']['ReadCapacityUnits']
+
+    @property
+    def write_units(self):
+        return self._dict['ProvisionedThroughput']['WriteCapacityUnits']
+
+    def update_from_response(self, response):
+        """
+        Update the state of the Table object based on the response
+        data received from Amazon DynamoDB.
+        """
+        if 'Table' in response:
+            self._dict.update(response['Table'])
+        elif 'TableDescription' in response:
+            self._dict.update(response['TableDescription'])
+        if 'KeySchema' in self._dict:
+            self._schema = Schema(self._dict['KeySchema'])
+
+    def refresh(self, wait_for_active=False, retry_seconds=5):
+        """
+        Refresh all of the fields of the Table object by calling
+        the underlying DescribeTable request.
+
+        :type wait_for_active: bool
+        :param wait_for_active: If True, this command will not return
+            until the table status, as returned from Amazon DynamoDB, is
+            'ACTIVE'.
+
+        :type retry_seconds: int
+        :param retry_seconds: If wait_for_active is True, this
+            parameter controls the number of seconds of delay between
+            calls to describe_table in Amazon DynamoDB.  Default is 5 seconds.
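+
+        For example, to block until a newly created table becomes
+        usable (a sketch; the retry interval is arbitrary)::
+
+            table.refresh(wait_for_active=True, retry_seconds=10)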
+        """
+        done = False
+        while not done:
+            response = self.layer2.describe_table(self.name)
+            self.update_from_response(response)
+            if wait_for_active:
+                if self.status == 'ACTIVE':
+                    done = True
+                else:
+                    time.sleep(retry_seconds)
+            else:
+                done = True
+
+    def update_throughput(self, read_units, write_units):
+        """
+        Update the ProvisionedThroughput for the Amazon DynamoDB Table.
+
+        :type read_units: int
+        :param read_units: The new value for ReadCapacityUnits.
+
+        :type write_units: int
+        :param write_units: The new value for WriteCapacityUnits.
+        """
+        self.layer2.update_throughput(self, read_units, write_units)
+
+    def delete(self):
+        """
+        Delete this table and all items in it.  After calling this,
+        the Table object's status attribute will be set to 'DELETING'.
+        """
+        self.layer2.delete_table(self)
+
+    def get_item(self, hash_key, range_key=None,
+                 attributes_to_get=None, consistent_read=False,
+                 item_class=Item):
+        """
+        Retrieve an existing item from the table.
+
+        :type hash_key: int|long|float|str|unicode
+        :param hash_key: The HashKey of the requested item.  The
+            type of the value must match the type defined in the
+            schema for the table.
+
+        :type range_key: int|long|float|str|unicode
+        :param range_key: The optional RangeKey of the requested item.
+            The type of the value must match the type defined in the
+            schema for the table.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :type consistent_read: bool
+        :param consistent_read: If True, a consistent read
+            request is issued.  Otherwise, an eventually consistent
+            request is issued.
+
+        :type item_class: Class
+        :param item_class: Allows you to override the class used
+            to generate the items. This should be a subclass of
+            :class:`boto.dynamodb.item.Item`
+        """
+        return self.layer2.get_item(self, hash_key, range_key,
+                                    attributes_to_get, consistent_read,
+                                    item_class)
+    lookup = get_item
+
+    def has_item(self, hash_key, range_key=None, consistent_read=False):
+        """
+        Checks the table to see if the Item with the specified ``hash_key``
+        exists. This may save a tiny bit of time/bandwidth over a
+        straight :py:meth:`get_item` if you have no intention to touch
+        the data that is returned, since this method specifically tells
+        Amazon not to return anything but the Item's key.
+
+        :type hash_key: int|long|float|str|unicode
+        :param hash_key: The HashKey of the requested item.  The
+            type of the value must match the type defined in the
+            schema for the table.
+
+        :type range_key: int|long|float|str|unicode
+        :param range_key: The optional RangeKey of the requested item.
+            The type of the value must match the type defined in the
+            schema for the table.
+
+        :type consistent_read: bool
+        :param consistent_read: If True, a consistent read
+            request is issued.  Otherwise, an eventually consistent
+            request is issued.
+
+        :rtype: bool
+        :returns: ``True`` if the Item exists, ``False`` if not.
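+
+        A brief sketch (the key value is illustrative)::
+
+            if table.has_item('user-1', consistent_read=True):
+                print 'found it'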
+        """
+        try:
+            # Attempt to get the key. If it can't be found, it'll raise
+            # an exception.
+            self.get_item(hash_key, range_key=range_key,
+                          # This minimizes the size of the response body.
+                          attributes_to_get=[hash_key],
+                          consistent_read=consistent_read)
+        except dynamodb_exceptions.DynamoDBKeyNotFoundError:
+            # Key doesn't exist.
+            return False
+        return True
+
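
For example (key values are placeholders):

    if table.has_item(hash_key='mark', range_key=1):
        # Only the item's key was transferred, not its attributes.
        pass
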
+    def new_item(self, hash_key=None, range_key=None, attrs=None,
+                 item_class=Item):
+        """
+        Return a new, unsaved Item which can later be PUT to
+        Amazon DynamoDB.
+
+        This method has explicit (but optional) parameters for
+        the hash_key and range_key values of the item.  You can use
+        these explicit parameters when calling the method, such as::
+
+            >>> my_item = my_table.new_item(hash_key='a', range_key=1,
+            ...     attrs={'key1': 'val1', 'key2': 'val2'})
+            >>> my_item
+            {u'bar': 1, u'foo': 'a', 'key1': 'val1', 'key2': 'val2'}
+
+        Or, if you prefer, you can simply put the hash_key and range_key
+        in the attrs dictionary itself, like this::
+
+            >>> attrs = {'foo': 'a', 'bar': 1, 'key1': 'val1', 'key2': 'val2'}
+            >>> my_item = my_table.new_item(attrs=attrs)
+            >>> my_item
+            {u'bar': 1, u'foo': 'a', 'key1': 'val1', 'key2': 'val2'}
+
+        The effect is the same.
+
+        .. note::
+           The explicit parameters take priority over the values in
+           the attrs dict.  So, if you have a hash_key or range_key
+           in the attrs dict and you also supply either or both using
+           the explicit parameters, the values in the attrs will be
+           ignored.
+
+        :type hash_key: int|long|float|str|unicode
+        :param hash_key: The HashKey of the new item.  The
+            type of the value must match the type defined in the
+            schema for the table.
+
+        :type range_key: int|long|float|str|unicode
+        :param range_key: The optional RangeKey of the new item.
+            The type of the value must match the type defined in the
+            schema for the table.
+
+        :type attrs: dict
+        :param attrs: A dictionary of key value pairs used to
+            populate the new item.
+
+        :type item_class: Class
+        :param item_class: Allows you to override the class used
+            to generate the items. This should be a subclass of
+            :class:`boto.dynamodb.item.Item`
+        """
+        return item_class(self, hash_key, range_key, attrs)
+
+    def query(self, hash_key, range_key_condition=None,
+              attributes_to_get=None, request_limit=None,
+              max_results=None, consistent_read=False,
+              scan_index_forward=True, exclusive_start_key=None,
+              item_class=Item):
+        """
+        Perform a query on the table.
+
+        :type hash_key: int|long|float|str|unicode
+        :param hash_key: The HashKey of the requested item.  The
+            type of the value must match the type defined in the
+            schema for the table.
+
+        :type range_key_condition: :class:`boto.dynamodb.condition.Condition`
+        :param range_key_condition: A Condition object.
+            Condition object can be one of the following types:
+
+            EQ|LE|LT|GE|GT|BEGINS_WITH|BETWEEN
+
+            The only condition which expects or will accept two
+            values is 'BETWEEN', otherwise a single value should
+            be passed to the Condition constructor.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :type request_limit: int
+        :param request_limit: The maximum number of items to retrieve
+            from Amazon DynamoDB on each request.  You may want to set
+            a specific request_limit based on the provisioned throughput
+            of your table.  The default behavior is to retrieve as many
+            results as possible per request.
+
+        :type max_results: int
+        :param max_results: The maximum number of results that will
+            be retrieved from Amazon DynamoDB in total.  For example,
+            if you only wanted to see the first 100 results from the
+            query, regardless of how many were actually available, you
+            could set max_results to 100 and the generator returned
+            from the query method will yield at most 100 results.
+
+        :type consistent_read: bool
+        :param consistent_read: If True, a consistent read
+            request is issued.  Otherwise, an eventually consistent
+            request is issued.
+
+        :type scan_index_forward: bool
+        :param scan_index_forward: Specifies forward or backward
+            traversal of the index.  Default is forward (True).
+
+        :type exclusive_start_key: list or tuple
+        :param exclusive_start_key: Primary key of the item from
+            which to continue an earlier query.  This would be
+            provided as the LastEvaluatedKey in that query.
+
+        :type item_class: Class
+        :param item_class: Allows you to override the class used
+            to generate the items. This should be a subclass of
+            :class:`boto.dynamodb.item.Item`
+        """
+        return self.layer2.query(self, hash_key, range_key_condition,
+                                 attributes_to_get, request_limit,
+                                 max_results, consistent_read,
+                                 scan_index_forward, exclusive_start_key,
+                                 item_class=item_class)
+
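
An illustrative query using one of the Condition classes listed above; the hash key value and prefix are placeholders:

    from boto.dynamodb.condition import BEGINS_WITH

    # Iterate over all items under hash key 'mark' whose range key
    # starts with '2012-'.
    for item in table.query(hash_key='mark',
                            range_key_condition=BEGINS_WITH('2012-'),
                            max_results=100):
        print item
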
+    def scan(self, scan_filter=None,
+             attributes_to_get=None, request_limit=None, max_results=None,
+             count=False, exclusive_start_key=None, item_class=Item):
+        """
+        Scan through this table.  This is a long and expensive
+        operation and should be avoided if at all possible.
+
+        :type scan_filter: A list of tuples
+        :param scan_filter: A list of tuples where each tuple consists
+            of an attribute name, a comparison operator, and either
+            a scalar or tuple consisting of the values to compare
+            the attribute to.  Valid comparison operators are shown below
+            along with the expected number of values that should be supplied.
+
+             * EQ - equal (1)
+             * NE - not equal (1)
+             * LE - less than or equal (1)
+             * LT - less than (1)
+             * GE - greater than or equal (1)
+             * GT - greater than (1)
+             * NOT_NULL - attribute exists (0, use None)
+             * NULL - attribute does not exist (0, use None)
+             * CONTAINS - substring or value in list (1)
+             * NOT_CONTAINS - absence of substring or value in list (1)
+             * BEGINS_WITH - substring prefix (1)
+             * IN - exact match in list (N)
+             * BETWEEN - >= first value, <= second value (2)
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :type request_limit: int
+        :param request_limit: The maximum number of items to retrieve
+            from Amazon DynamoDB on each request.  You may want to set
+            a specific request_limit based on the provisioned throughput
+            of your table.  The default behavior is to retrieve as many
+            results as possible per request.
+
+        :type max_results: int
+        :param max_results: The maximum number of results that will
+            be retrieved from Amazon DynamoDB in total.  For example,
+            if you only wanted to see the first 100 results from the
+            scan, regardless of how many were actually available, you
+            could set max_results to 100 and the generator returned
+            from the scan method will yield at most 100 results.
+
+        :type count: bool
+        :param count: If True, Amazon DynamoDB returns a total
+            number of items for the Scan operation, even if the
+            operation has no matching items for the assigned filter.
+
+        :type exclusive_start_key: list or tuple
+        :param exclusive_start_key: Primary key of the item from
+            which to continue an earlier query.  This would be
+            provided as the LastEvaluatedKey in that query.
+
+        :type item_class: Class
+        :param item_class: Allows you to override the class used
+            to generate the items. This should be a subclass of
+            :class:`boto.dynamodb.item.Item`
+
+        :return: A TableGenerator (generator) object which will iterate over all results
+        :rtype: :class:`boto.dynamodb.layer2.TableGenerator`
+        """
+        return self.layer2.scan(self, scan_filter, attributes_to_get,
+                                request_limit, max_results, count,
+                                exclusive_start_key, item_class=item_class)
+
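
An illustrative scan using the tuple-based scan_filter format described above; attribute names and values are placeholders:

    # Each filter tuple is (attribute name, operator, value or tuple of values).
    for item in table.scan(scan_filter=[('status', 'EQ', 'active'),
                                        ('age', 'BETWEEN', (18, 65))],
                           attributes_to_get=['id', 'status']):
        print item
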
+    def batch_get_item(self, keys, attributes_to_get=None):
+        """
+        Return a set of attributes for multiple items from a single table
+        using their primary keys. This abstraction removes the 100-item
+        per-batch limitation as well as the "UnprocessedKeys" logic.
+
+        :type keys: list
+        :param keys: A list of scalar or tuple values.  Each element in the
+            list represents one Item to retrieve.  If the schema for the
+            table has both a HashKey and a RangeKey, each element in the
+            list should be a tuple consisting of (hash_key, range_key).  If
+            the schema for the table contains only a HashKey, each element
+            in the list should be a scalar value of the appropriate type
+            for the table schema. NOTE: The maximum number of items that
+            can be retrieved for a single operation is 100. Also, the
+            number of items retrieved is constrained by a 1 MB size limit.
+
+        :type attributes_to_get: list
+        :param attributes_to_get: A list of attribute names.
+            If supplied, only the specified attribute names will
+            be returned.  Otherwise, all attributes will be returned.
+
+        :return: A TableBatchGenerator (generator) object which will iterate over all results
+        :rtype: :class:`boto.dynamodb.table.TableBatchGenerator`
+        """
+        return TableBatchGenerator(self, keys, attributes_to_get)
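
An illustrative batch read; the key values are placeholders and the tuples are only needed when the schema has a range key:

    keys = [('mark', 1), ('jeff', 2), ('anne', 3)]
    for item in table.batch_get_item(keys, attributes_to_get=['status']):
        print item
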
diff --git a/boto/dynamodb/types.py b/boto/dynamodb/types.py
new file mode 100644
index 0000000..5b33076
--- /dev/null
+++ b/boto/dynamodb/types.py
@@ -0,0 +1,138 @@
+# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2011 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+"""
+Some utility functions to deal with mapping Amazon DynamoDB types to
+Python types and vice-versa.
+"""
+import base64
+
+
+def is_num(n):
+    types = (int, long, float, bool)
+    return isinstance(n, types) or n in types
+
+
+def is_str(n):
+    return isinstance(n, basestring) or (isinstance(n, type) and
+                                         issubclass(n, basestring))
+
+
+def is_binary(n):
+    return isinstance(n, Binary)
+
+
+def convert_num(s):
+    if '.' in s:
+        n = float(s)
+    else:
+        n = int(s)
+    return n
+
+
+def convert_binary(n):
+    return Binary(base64.b64decode(n))
+
+
+def get_dynamodb_type(val):
+    """
+    Take a scalar Python value and return a string representing
+    the corresponding Amazon DynamoDB type.  If the value passed in is
+    not a supported type, raise a TypeError.
+    """
+    dynamodb_type = None
+    if is_num(val):
+        dynamodb_type = 'N'
+    elif is_str(val):
+        dynamodb_type = 'S'
+    elif isinstance(val, (set, frozenset)):
+        if False not in map(is_num, val):
+            dynamodb_type = 'NS'
+        elif False not in map(is_str, val):
+            dynamodb_type = 'SS'
+        elif False not in map(is_binary, val):
+            dynamodb_type = 'BS'
+    elif isinstance(val, Binary):
+        dynamodb_type = 'B'
+    if dynamodb_type is None:
+        msg = 'Unsupported type "%s" for value "%s"' % (type(val), val)
+        raise TypeError(msg)
+    return dynamodb_type
+
+
+def dynamize_value(val):
+    """
+    Take a scalar Python value and return a dict consisting
+    of the Amazon DynamoDB type specification and the value that
+    needs to be sent to Amazon DynamoDB.  If the type of the value
+    is not supported, raise a TypeError
+    """
+    def _str(val):
+        """
+        DynamoDB stores booleans as numbers. True is 1, False is 0.
+        This function converts Python booleans into DynamoDB friendly
+        representation.
+        """
+        if isinstance(val, bool):
+            return str(int(val))
+        return str(val)
+
+    dynamodb_type = get_dynamodb_type(val)
+    if dynamodb_type == 'N':
+        val = {dynamodb_type: _str(val)}
+    elif dynamodb_type == 'S':
+        val = {dynamodb_type: val}
+    elif dynamodb_type == 'NS':
+        val = {dynamodb_type: [str(n) for n in val]}
+    elif dynamodb_type == 'SS':
+        val = {dynamodb_type: [n for n in val]}
+    elif dynamodb_type == 'B':
+        val = {dynamodb_type: val.encode()}
+    elif dynamodb_type == 'BS':
+        val = {dynamodb_type: [n.encode() for n in val]}
+    return val
+
+
+class Binary(object):
+    def __init__(self, value):
+        self.value = value
+
+    def encode(self):
+        return base64.b64encode(self.value)
+
+    def __eq__(self, other):
+        if isinstance(other, Binary):
+            return self.value == other.value
+        else:
+            return self.value == other
+
+    def __ne__(self, other):
+        return not self.__eq__(other)
+
+    def __repr__(self):
+        return 'Binary(%s)' % self.value
+
+    def __str__(self):
+        return self.value
+
+    def __hash__(self):
+        return hash(self.value)
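
The mapping implemented above, shown on a few sample values (expected results written as comments):

    from boto.dynamodb.types import get_dynamodb_type, dynamize_value, Binary

    get_dynamodb_type(42)                # 'N'
    get_dynamodb_type(set(['a', 'b']))   # 'SS'
    dynamize_value('hello')              # {'S': 'hello'}
    dynamize_value(True)                 # {'N': '1'}  (booleans become numbers)
    dynamize_value(Binary('\x00\x01'))   # {'B': 'AAE='}  (base64-encoded)
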
diff --git a/boto/ec2/__init__.py b/boto/ec2/__init__.py
index ff9422b..963b6d9 100644
--- a/boto/ec2/__init__.py
+++ b/boto/ec2/__init__.py
@@ -14,7 +14,7 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
@@ -25,29 +25,31 @@
 """
 from boto.ec2.connection import EC2Connection
 
+
 def regions(**kw_params):
     """
     Get all available regions for the EC2 service.
     You may pass any of the arguments accepted by the EC2Connection
     object's constructor as keyword arguments and they will be
     passed along to the EC2Connection object.
-        
+
     :rtype: list
     :return: A list of :class:`boto.ec2.regioninfo.RegionInfo`
     """
     c = EC2Connection(**kw_params)
     return c.get_all_regions()
 
+
 def connect_to_region(region_name, **kw_params):
     """
-    Given a valid region name, return a 
+    Given a valid region name, return a
     :class:`boto.ec2.connection.EC2Connection`.
     Any additional parameters after the region_name are passed on to
     the connect method of the region object.
 
     :type: str
     :param region_name: The name of the region to connect to.
-    
+
     :rtype: :class:`boto.ec2.connection.EC2Connection` or ``None``
     :return: A connection to the given region, or None if an invalid region
              name is given
@@ -56,7 +58,8 @@
         if region.name == region_name:
             return region.connect(**kw_params)
     return None
-    
+
+
 def get_region(region_name, **kw_params):
     """
     Find and return a :class:`boto.ec2.regioninfo.RegionInfo` object
@@ -73,4 +76,3 @@
         if region.name == region_name:
             return region
     return None
-    
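
For example (the region name is illustrative; credentials come from the usual boto configuration):

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')
    if conn is None:
        raise ValueError('unknown EC2 region name')
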
diff --git a/boto/ec2/address.py b/boto/ec2/address.py
index 770a904..9eadfaa 100644
--- a/boto/ec2/address.py
+++ b/boto/ec2/address.py
@@ -19,13 +19,22 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-"""
-Represents an EC2 Elastic IP Address
-"""
 
 from boto.ec2.ec2object import EC2Object
 
 class Address(EC2Object):
+    """
+    Represents an EC2 Elastic IP Address
+
+    :ivar public_ip: The Elastic IP address.
+    :ivar instance_id: The instance the address is associated with (if any).
+    :ivar domain: Indicates whether the address is an EC2 address or a VPC address (standard|vpc).
+    :ivar allocation_id: The allocation ID for the address (VPC addresses only).
+    :ivar association_id: The association ID for the address (VPC addresses only).
+    :ivar network_interface_id: The network interface (if any) that the address is associated with (VPC addresses only).
+    :ivar network_interface_owner_id: The owner ID (VPC addresses only).
+    :ivar private_ip_address: The private IP address associated with the Elastic IP address (VPC addresses only).
+    """
 
     def __init__(self, connection=None, public_ip=None, instance_id=None):
         EC2Object.__init__(self, connection)
@@ -35,6 +44,9 @@
         self.domain = None
         self.allocation_id = None
         self.association_id = None
+        self.network_interface_id = None
+        self.network_interface_owner_id = None
+        self.private_ip_address = None
 
     def __repr__(self):
         return 'Address:%s' % self.public_ip
@@ -50,18 +62,42 @@
             self.allocation_id = value
         elif name == 'associationId':
             self.association_id = value
+        elif name == 'networkInterfaceId':
+            self.network_interface_id = value
+        elif name == 'networkInterfaceOwnerId':
+            self.network_interface_owner_id = value
+        elif name == 'privateIpAddress':
+            self.private_ip_address = value
         else:
             setattr(self, name, value)
 
     def release(self):
-        return self.connection.release_address(self.public_ip)
+        """
+        Free up this Elastic IP address.
+        :see: :meth:`boto.ec2.connection.EC2Connection.release_address`
+        """
+        if self.allocation_id:
+            return self.connection.release_address(None, self.allocation_id)
+        else:
+            return self.connection.release_address(self.public_ip)
 
     delete = release
 
     def associate(self, instance_id):
+        """
+        Associate this Elastic IP address with a currently running instance.
+        :see: :meth:`boto.ec2.connection.EC2Connection.associate_address`
+        """
         return self.connection.associate_address(instance_id, self.public_ip)
 
     def disassociate(self):
-        return self.connection.disassociate_address(self.public_ip)
+        """
+        Disassociate this Elastic IP address from a currently running instance.
+        :see: :meth:`boto.ec2.connection.EC2Connection.disassociate_address`
+        """
+        if self.association_id:
+            return self.connection.disassociate_address(None, self.association_id)
+        else:
+            return self.connection.disassociate_address(self.public_ip)
 
 
diff --git a/boto/ec2/autoscale/__init__.py b/boto/ec2/autoscale/__init__.py
index fceac59..80c3c85 100644
--- a/boto/ec2/autoscale/__init__.py
+++ b/boto/ec2/autoscale/__init__.py
@@ -1,5 +1,7 @@
 # Copyright (c) 2009-2011 Reza Lotun http://reza.lotun.name/
 # Copyright (c) 2011 Jann Kleen
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -32,20 +34,25 @@
 from boto.ec2.regioninfo import RegionInfo
 from boto.ec2.autoscale.request import Request
 from boto.ec2.autoscale.launchconfig import LaunchConfiguration
-from boto.ec2.autoscale.group import AutoScalingGroup, ProcessType
+from boto.ec2.autoscale.group import AutoScalingGroup
+from boto.ec2.autoscale.group import ProcessType
 from boto.ec2.autoscale.activity import Activity
-from boto.ec2.autoscale.policy import AdjustmentType, MetricCollectionTypes, ScalingPolicy
+from boto.ec2.autoscale.policy import AdjustmentType
+from boto.ec2.autoscale.policy import MetricCollectionTypes
+from boto.ec2.autoscale.policy import ScalingPolicy
 from boto.ec2.autoscale.instance import Instance
 from boto.ec2.autoscale.scheduled import ScheduledUpdateGroupAction
-
+from boto.ec2.autoscale.tag import Tag
 
 RegionData = {
-    'us-east-1' : 'autoscaling.us-east-1.amazonaws.com',
-    'us-west-1' : 'autoscaling.us-west-1.amazonaws.com',
-    'us-west-2' : 'autoscaling.us-west-2.amazonaws.com',
-    'eu-west-1' : 'autoscaling.eu-west-1.amazonaws.com',
-    'ap-northeast-1' : 'autoscaling.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1' : 'autoscaling.ap-southeast-1.amazonaws.com'}
+    'us-east-1': 'autoscaling.us-east-1.amazonaws.com',
+    'us-west-1': 'autoscaling.us-west-1.amazonaws.com',
+    'us-west-2': 'autoscaling.us-west-2.amazonaws.com',
+    'sa-east-1': 'autoscaling.sa-east-1.amazonaws.com',
+    'eu-west-1': 'autoscaling.eu-west-1.amazonaws.com',
+    'ap-northeast-1': 'autoscaling.ap-northeast-1.amazonaws.com',
+    'ap-southeast-1': 'autoscaling.ap-southeast-1.amazonaws.com'}
+
 
 def regions():
     """
@@ -62,6 +69,7 @@
         regions.append(region)
     return regions
 
+
 def connect_to_region(region_name, **kw_params):
     """
     Given a valid region name, return a
@@ -82,13 +90,15 @@
 class AutoScaleConnection(AWSQueryConnection):
     APIVersion = boto.config.get('Boto', 'autoscale_version', '2011-01-01')
     DefaultRegionEndpoint = boto.config.get('Boto', 'autoscale_endpoint',
-                                            'autoscaling.amazonaws.com')
-    DefaultRegionName =  boto.config.get('Boto', 'autoscale_region_name', 'us-east-1')
+                                            'autoscaling.us-east-1.amazonaws.com')
+    DefaultRegionName = boto.config.get('Boto', 'autoscale_region_name',
+                                        'us-east-1')
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, debug=0,
-                 https_connection_factory=None, region=None, path='/'):
+                 https_connection_factory=None, region=None, path='/',
+                 security_token=None, validate_certs=True):
         """
         Init method to create a new connection to the AutoScaling service.
 
@@ -105,10 +115,12 @@
                                     is_secure, port, proxy, proxy_port,
                                     proxy_user, proxy_pass,
                                     self.region.endpoint, debug,
-                                    https_connection_factory, path=path)
+                                    https_connection_factory, path=path,
+                                    security_token=security_token,
+                                    validate_certs=validate_certs)
 
     def _required_auth_capability(self):
-        return ['ec2']
+        return ['hmac-v4']
 
     def build_list_params(self, params, items, label):
         """
@@ -128,28 +140,25 @@
             ['us-east-1b',...]
         """
         # different from EC2 list params
-        for i in xrange(1, len(items)+1):
-            if isinstance(items[i-1], dict):
-                for k, v in items[i-1].iteritems():
+        for i in xrange(1, len(items) + 1):
+            if isinstance(items[i - 1], dict):
+                for k, v in items[i - 1].iteritems():
                     if isinstance(v, dict):
                         for kk, vv in v.iteritems():
                             params['%s.member.%d.%s.%s' % (label, i, k, kk)] = vv
                     else:
                         params['%s.member.%d.%s' % (label, i, k)] = v
-            elif isinstance(items[i-1], basestring):
-                params['%s.member.%d' % (label, i)] = items[i-1]
+            elif isinstance(items[i - 1], basestring):
+                params['%s.member.%d' % (label, i)] = items[i - 1]
 
     def _update_group(self, op, as_group):
-        params = {
-                  'AutoScalingGroupName'    : as_group.name,
-                  'LaunchConfigurationName' : as_group.launch_config_name,
-                  'MinSize'                 : as_group.min_size,
-                  'MaxSize'                 : as_group.max_size,
-                  }
+        params = {'AutoScalingGroupName': as_group.name,
+                  'LaunchConfigurationName': as_group.launch_config_name,
+                  'MinSize': as_group.min_size,
+                  'MaxSize': as_group.max_size}
         # get availability zone information (required param)
         zones = as_group.availability_zones
-        self.build_list_params(params, zones,
-                                'AvailabilityZones')
+        self.build_list_params(params, zones, 'AvailabilityZones')
         if as_group.desired_capacity:
             params['DesiredCapacity'] = as_group.desired_capacity
         if as_group.vpc_zone_identifier:
@@ -163,10 +172,14 @@
         if as_group.placement_group:
             params['PlacementGroup'] = as_group.placement_group
         if op.startswith('Create'):
-            # you can only associate load balancers with an autoscale group at creation time
+            # you can only associate load balancers with an autoscale
+            # group at creation time
             if as_group.load_balancers:
                 self.build_list_params(params, as_group.load_balancers,
                                        'LoadBalancerNames')
+            if as_group.tags:
+                for i, tag in enumerate(as_group.tags):
+                    tag.build_params(params, i + 1)
         return self.get_object(op, params, Request)
 
     def create_auto_scaling_group(self, as_group):
@@ -181,9 +194,9 @@
         and no scaling activities in progress.
         """
         if(force_delete):
-            params = {'AutoScalingGroupName' : name, 'ForceDelete' : 'true'}
+            params = {'AutoScalingGroupName': name, 'ForceDelete': 'true'}
         else:
-            params = {'AutoScalingGroupName' : name}
+            params = {'AutoScalingGroupName': name}
         return self.get_object('DeleteAutoScalingGroup', params, Request)
 
     def create_launch_configuration(self, launch_config):
@@ -192,13 +205,10 @@
 
         :type launch_config: :class:`boto.ec2.autoscale.launchconfig.LaunchConfiguration`
         :param launch_config: LaunchConfiguration object.
-
         """
-        params = {
-                  'ImageId'                 : launch_config.image_id,
-                  'LaunchConfigurationName' : launch_config.name,
-                  'InstanceType'            : launch_config.instance_type,
-                 }
+        params = {'ImageId': launch_config.image_id,
+                  'LaunchConfigurationName': launch_config.name,
+                  'InstanceType': launch_config.instance_type}
         if launch_config.key_name:
             params['KeyName'] = launch_config.key_name
         if launch_config.user_data:
@@ -214,9 +224,15 @@
             self.build_list_params(params, launch_config.security_groups,
                                    'SecurityGroups')
         if launch_config.instance_monitoring:
-            params['InstanceMonitoring.member.Enabled'] = 'true'
+            params['InstanceMonitoring.Enabled'] = 'true'
+        else:
+            params['InstanceMonitoring.Enabled'] = 'false'
+        if launch_config.spot_price is not None:
+            params['SpotPrice'] = str(launch_config.spot_price)
+        if launch_config.instance_profile_name is not None:
+            params['IamInstanceProfile'] = launch_config.instance_profile_name
         return self.get_object('CreateLaunchConfiguration', params,
-                                  Request, verb='POST')
+                               Request, verb='POST')
 
     def create_scaling_policy(self, scaling_policy):
         """
@@ -225,11 +241,10 @@
         :type scaling_policy: :class:`boto.ec2.autoscale.policy.ScalingPolicy`
         :param scaling_policy: ScalingPolicy object.
         """
-        params = {'AdjustmentType'      : scaling_policy.adjustment_type,
+        params = {'AdjustmentType': scaling_policy.adjustment_type,
                   'AutoScalingGroupName': scaling_policy.as_name,
-                  'PolicyName'          : scaling_policy.name,
-                  'ScalingAdjustment'   : scaling_policy.scaling_adjustment,}
-
+                  'PolicyName': scaling_policy.name,
+                  'ScalingAdjustment': scaling_policy.scaling_adjustment}
         if scaling_policy.cooldown is not None:
             params['Cooldown'] = scaling_policy.cooldown
 
@@ -243,7 +258,7 @@
         Scaling group. Once this call completes, the launch configuration is no
         longer available for use.
         """
-        params = {'LaunchConfigurationName' : launch_config_name}
+        params = {'LaunchConfigurationName': launch_config_name}
         return self.get_object('DeleteLaunchConfiguration', params, Request)
 
     def get_all_groups(self, names=None, max_records=None, next_token=None):
@@ -264,7 +279,8 @@
         :param max_records: Maximum amount of groups to return.
 
         :rtype: list
-        :returns: List of :class:`boto.ec2.autoscale.group.AutoScalingGroup` instances.
+        :returns: List of :class:`boto.ec2.autoscale.group.AutoScalingGroup`
+            instances.
         """
         params = {}
         if max_records:
@@ -291,11 +307,13 @@
         :param max_records: Maximum amount of configurations to return.
 
         :type next_token: str
-        :param next_token: If you have more results than can be returned at once, pass in this
-                           parameter to page through all results.
+        :param next_token: If you have more results than can be returned
+            at once, pass in this parameter to page through all results.
 
         :rtype: list
-        :returns: List of :class:`boto.ec2.autoscale.launchconfig.LaunchConfiguration` instances.
+        :returns: List of
+            :class:`boto.ec2.autoscale.launchconfig.LaunchConfiguration`
+            instances.
         """
         params = {}
         max_records = kwargs.get('max_records', None)
@@ -310,7 +328,8 @@
         return self.get_list('DescribeLaunchConfigurations', params,
                              [('member', LaunchConfiguration)])
 
-    def get_all_activities(self, autoscale_group, activity_ids=None, max_records=None, next_token=None):
+    def get_all_activities(self, autoscale_group, activity_ids=None,
+                           max_records=None, next_token=None):
         """
         Get all activities for the given autoscaling group.
 
@@ -318,19 +337,21 @@
         pages to retrieve. To get the next page, call this action again with
         the returned token as the NextToken parameter
 
-        :type autoscale_group: str or :class:`boto.ec2.autoscale.group.AutoScalingGroup` object
+        :type autoscale_group: str or
+            :class:`boto.ec2.autoscale.group.AutoScalingGroup` object
         :param autoscale_group: The auto scaling group to get activities on.
 
         :type max_records: int
         :param max_records: Maximum amount of activities to return.
 
         :rtype: list
-        :returns: List of :class:`boto.ec2.autoscale.activity.Activity` instances.
+        :returns: List of
+            :class:`boto.ec2.autoscale.activity.Activity` instances.
         """
         name = autoscale_group
         if isinstance(autoscale_group, AutoScalingGroup):
             name = autoscale_group.name
-        params = {'AutoScalingGroupName' : name}
+        params = {'AutoScalingGroupName': name}
         if max_records:
             params['MaxRecords'] = max_records
         if next_token:
@@ -345,11 +366,14 @@
         """
         Deletes a previously scheduled action.
 
-        :param str scheduled_action_name: The name of the action you want
+        :type scheduled_action_name: str
+        :param scheduled_action_name: The name of the action you want
             to delete.
-        :param str autoscale_group: The name of the autoscale group.
+
+        :type autoscale_group: str
+        :param autoscale_group: The name of the autoscale group.
         """
-        params = {'ScheduledActionName' : scheduled_action_name}
+        params = {'ScheduledActionName': scheduled_action_name}
         if autoscale_group:
             params['AutoScalingGroupName'] = autoscale_group
         return self.get_status('DeleteScheduledAction', params)
@@ -359,11 +383,14 @@
         Terminates the specified instance. The desired group size can
         also be adjusted, if desired.
 
-        :param str instance_id: The ID of the instance to be terminated.
-        :param bool decrement_capacity: Whether to decrement the size of the
+        :type instance_id: str
+        :param instance_id: The ID of the instance to be terminated.
+
+        :type decrement_capacity: bool
+        :param decrement_capacity: Whether to decrement the size of the
             autoscaling group or not.
         """
-        params = {'InstanceId' : instance_id}
+        params = {'InstanceId': instance_id}
         if decrement_capacity:
             params['ShouldDecrementDesiredCapacity'] = 'true'
         else:
@@ -387,7 +414,8 @@
         return self.get_status('DeletePolicy', params)
 
     def get_all_adjustment_types(self):
-        return self.get_list('DescribeAdjustmentTypes', {}, [('member', AdjustmentType)])
+        return self.get_list('DescribeAdjustmentTypes', {},
+                             [('member', AdjustmentType)])
 
     def get_all_autoscaling_instances(self, instance_ids=None,
                                       max_records=None, next_token=None):
@@ -402,13 +430,14 @@
 
         :type instance_ids: list
         :param instance_ids: List of Autoscaling Instance IDs which should be
-                             searched for.
+            searched for.
 
         :type max_records: int
         :param max_records: Maximum number of results to return.
 
         :rtype: list
-        :returns: List of :class:`boto.ec2.autoscale.activity.Activity` instances.
+        :returns: List of
+            :class:`boto.ec2.autoscale.instance.Instance` objects.
         """
         params = {}
         if instance_ids:
@@ -436,11 +465,12 @@
         available. To get the additional records, repeat the request with the
         response token as the NextToken parameter.
 
-        If no group name or list of policy names are provided, all available policies
-        are returned.
+        If no group name or list of policy names are provided, all
+        available policies are returned.
 
         :type as_name: str
-        :param as_name: the name of the :class:`boto.ec2.autoscale.group.AutoScalingGroup` to filter for.
+        :param as_name: The name of the
+            :class:`boto.ec2.autoscale.group.AutoScalingGroup` to filter for.
 
         :type names: list
         :param names: List of policy names which should be searched for.
@@ -461,47 +491,53 @@
                              [('member', ScalingPolicy)])
 
     def get_all_scaling_process_types(self):
-        """ Returns scaling process types for use in the ResumeProcesses and
+        """
+        Returns scaling process types for use in the ResumeProcesses and
         SuspendProcesses actions.
         """
         return self.get_list('DescribeScalingProcessTypes', {},
                              [('member', ProcessType)])
 
     def suspend_processes(self, as_group, scaling_processes=None):
-        """ Suspends Auto Scaling processes for an Auto Scaling group.
+        """
+        Suspends Auto Scaling processes for an Auto Scaling group.
 
         :type as_group: string
         :param as_group: The auto scaling group to suspend processes on.
 
         :type scaling_processes: list
-        :param scaling_processes: Processes you want to suspend. If omitted, all
-                                  processes will be suspended.
+        :param scaling_processes: Processes you want to suspend. If omitted,
+            all processes will be suspended.
         """
-        params = {'AutoScalingGroupName' : as_group}
+        params = {'AutoScalingGroupName': as_group}
         if scaling_processes:
-            self.build_list_params(params, scaling_processes, 'ScalingProcesses')
+            self.build_list_params(params, scaling_processes,
+                                   'ScalingProcesses')
         return self.get_status('SuspendProcesses', params)
 
     def resume_processes(self, as_group, scaling_processes=None):
-        """ Resumes Auto Scaling processes for an Auto Scaling group.
+        """
+        Resumes Auto Scaling processes for an Auto Scaling group.
 
         :type as_group: string
         :param as_group: The auto scaling group to resume processes on.
 
         :type scaling_processes: list
         :param scaling_processes: Processes you want to resume. If omitted, all
-                                  processes will be resumed.
+            processes will be resumed.
         """
-        params = {
-                    'AutoScalingGroupName'      :   as_group
-                 }
+        params = {'AutoScalingGroupName': as_group}
+
         if scaling_processes:
-            self.build_list_params(params, scaling_processes, 'ScalingProcesses')
+            self.build_list_params(params, scaling_processes,
+                                   'ScalingProcesses')
         return self.get_status('ResumeProcesses', params)
 
-    def create_scheduled_group_action(self, as_group, name, time, desired_capacity=None,
+    def create_scheduled_group_action(self, as_group, name, time,
+                                      desired_capacity=None,
                                       min_size=None, max_size=None):
-        """ Creates a scheduled scaling action for a Auto Scaling group. If you
+        """
+        Creates a scheduled scaling action for an Auto Scaling group. If you
         leave a parameter unspecified, the corresponding value remains
         unchanged in the affected Auto Scaling group.
 
@@ -515,8 +551,8 @@
         :param time: The time for this action to start.
 
         :type desired_capacity: int
-        :param desired_capacity: The number of EC2 instances that should be running in
-                                this group.
+        :param desired_capacity: The number of EC2 instances that should
+            be running in this group.
 
         :type min_size: int
         :param min_size: The minimum size for the new auto scaling group.
@@ -524,11 +560,9 @@
         :type max_size: int
         :param max_size: The maximum size for the new auto scaling group.
         """
-        params = {
-                    'AutoScalingGroupName'      :   as_group,
-                    'ScheduledActionName'       :   name,
-                    'Time'                      :   time.isoformat(),
-                 }
+        params = {'AutoScalingGroupName': as_group,
+                  'ScheduledActionName': name,
+                  'Time': time.isoformat()}
         if desired_capacity is not None:
             params['DesiredCapacity'] = desired_capacity
         if min_size is not None:
@@ -537,18 +571,21 @@
             params['MaxSize'] = max_size
         return self.get_status('PutScheduledUpdateGroupAction', params)
 
-    def get_all_scheduled_actions(self, as_group=None, start_time=None, end_time=None, scheduled_actions=None,
+    def get_all_scheduled_actions(self, as_group=None, start_time=None,
+                                  end_time=None, scheduled_actions=None,
                                   max_records=None, next_token=None):
         params = {}
         if as_group:
             params['AutoScalingGroupName'] = as_group
         if scheduled_actions:
-            self.build_list_params(params, scheduled_actions, 'ScheduledActionNames')
+            self.build_list_params(params, scheduled_actions,
+                                   'ScheduledActionNames')
         if max_records:
             params['MaxRecords'] = max_records
         if next_token:
             params['NextToken'] = next_token
-        return self.get_list('DescribeScheduledActions', params, [('member', ScheduledUpdateGroupAction)])
+        return self.get_list('DescribeScheduledActions', params,
+                             [('member', ScheduledUpdateGroupAction)])
 
     def disable_metrics_collection(self, as_group, metrics=None):
         """
@@ -556,9 +593,8 @@
         specified in AutoScalingGroupName. You can specify the list of affected
         metrics with the Metrics parameter.
         """
-        params = {
-                    'AutoScalingGroupName'      :   as_group,
-                 }
+        params = {'AutoScalingGroupName': as_group}
+
         if metrics:
             self.build_list_params(params, metrics, 'Metrics')
         return self.get_status('DisableMetricsCollection', params)
@@ -578,30 +614,54 @@
 
         :type granularity: string
         :param granularity: The granularity to associate with the metrics to
-                            collect. Currently, the only legal granularity is "1Minute".
+            collect. Currently, the only legal granularity is "1Minute".
 
         :type metrics: string list
         :param metrics: The list of metrics to collect. If no metrics are
                         specified, all metrics are enabled.
         """
-        params = {
-                    'AutoScalingGroupName'      :   as_group,
-                    'Granularity'               :   granularity,
-                 }
+        params = {'AutoScalingGroupName': as_group,
+                  'Granularity': granularity}
         if metrics:
             self.build_list_params(params, metrics, 'Metrics')
         return self.get_status('EnableMetricsCollection', params)
 
     def execute_policy(self, policy_name, as_group=None, honor_cooldown=None):
-        params = {
-                    'PolicyName'        :   policy_name,
-                 }
+        params = {'PolicyName': policy_name}
         if as_group:
             params['AutoScalingGroupName'] = as_group
         if honor_cooldown:
             params['HonorCooldown'] = honor_cooldown
         return self.get_status('ExecutePolicy', params)
 
+    def put_notification_configuration(self, autoscale_group, topic, notification_types):
+        """
+        Configures an Auto Scaling group to send notifications when
+        specified events take place.
+
+        :type autoscale_group: str or
+            :class:`boto.ec2.autoscale.group.AutoScalingGroup` object
+        :param autoscale_group: The Auto Scaling group to put notification
+            configuration on.
+
+        :type topic: str
+        :param topic: The Amazon Resource Name (ARN) of the Amazon Simple
+            Notification Service (SNS) topic.
+
+        :type notification_types: list
+        :param notification_types: The type of events that will trigger
+            the notification.
+        """
+
+        name = autoscale_group
+        if isinstance(autoscale_group, AutoScalingGroup):
+            name = autoscale_group.name
+
+        params = {'AutoScalingGroupName': name,
+                  'TopicARN': topic}
+        self.build_list_params(params, notification_types, 'NotificationTypes')
+        return self.get_status('PutNotificationConfiguration', params)
+
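
An illustrative call; the group name, topic ARN, and event names below are placeholders:

    import boto.ec2.autoscale

    conn = boto.ec2.autoscale.connect_to_region('us-east-1')
    conn.put_notification_configuration(
        'my-group',
        'arn:aws:sns:us-east-1:123456789012:my-topic',
        ['autoscaling:EC2_INSTANCE_LAUNCH',
         'autoscaling:EC2_INSTANCE_TERMINATE'])
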
     def set_instance_health(self, instance_id, health_status,
                             should_respect_grace_period=True):
         """
@@ -612,22 +672,72 @@
 
         :type health_status: str
         :param health_status: The health status of the instance.
-                              "Healthy" means that the instance is
-                              healthy and should remain in service.
-                              "Unhealthy" means that the instance is
-                              unhealthy. Auto Scaling should terminate
-                              and replace it.
+            "Healthy" means that the instance is healthy and should remain
+            in service. "Unhealthy" means that the instance is unhealthy.
+            Auto Scaling should terminate and replace it.
 
         :type should_respect_grace_period: bool
         :param should_respect_grace_period: If True, this call should
-                                            respect the grace period
-                                            associated with the group.
+            respect the grace period associated with the group.
         """
-        params = {'InstanceId' : instance_id,
-                  'HealthStatus' : health_status}
+        params = {'InstanceId': instance_id,
+                  'HealthStatus': health_status}
         if should_respect_grace_period:
             params['ShouldRespectGracePeriod'] = 'true'
         else:
             params['ShouldRespectGracePeriod'] = 'false'
         return self.get_status('SetInstanceHealth', params)
 
+    # Tag methods
+
+    def get_all_tags(self, filters=None, max_records=None, next_token=None):
+        """
+        Lists the Auto Scaling group tags.
+
+        This action supports pagination by returning a token if there
+        are more pages to retrieve. To get the next page, call this
+        action again with the returned token as the NextToken
+        parameter.
+
+        :type filters: dict
+        :param filters: The value of the filter type used to identify
+            the tags to be returned.  NOT IMPLEMENTED YET.
+
+        :type max_records: int
+        :param max_records: Maximum number of tags to return.
+
+        :rtype: list
+        :returns: List of :class:`boto.ec2.autoscale.tag.Tag`
+            instances.
+        """
+        params = {}
+        if max_records:
+            params['MaxRecords'] = max_records
+        if next_token:
+            params['NextToken'] = next_token
+        return self.get_list('DescribeTags', params,
+                             [('member', Tag)])
+
+    def create_or_update_tags(self, tags):
+        """
+        Creates new tags or updates existing tags for an Auto Scaling group.
+
+        :type tags: List of :class:`boto.ec2.autoscale.tag.Tag`
+        :param tags: The new or updated tags.
+        """
+        params = {}
+        for i, tag in enumerate(tags):
+            tag.build_params(params, i + 1)
+        return self.get_status('CreateOrUpdateTags', params, verb='POST')
+
+    def delete_tags(self, tags):
+        """
+        Deletes existing tags for an Auto Scaling group.
+
+        :type tags: List of :class:`boto.ec2.autoscale.tag.Tag`
+        :param tags: The tags to delete.
+        """
+        params = {}
+        for i, tag in enumerate(tags):
+            tag.build_params(params, i + 1)
+        return self.get_status('DeleteTags', params, verb='POST')
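
A sketch of the new tag methods; the Tag constructor arguments shown are assumptions based on boto.ec2.autoscale.tag, and the group name, key, and value are placeholders:

    from boto.ec2.autoscale.tag import Tag

    tag = Tag(key='env', value='production',
              propagate_at_launch=True, resource_id='my-group')
    conn.create_or_update_tags([tag])
    conn.delete_tags([tag])
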
diff --git a/boto/ec2/autoscale/group.py b/boto/ec2/autoscale/group.py
index eb65853..eb72f6f 100644
--- a/boto/ec2/autoscale/group.py
+++ b/boto/ec2/autoscale/group.py
@@ -19,12 +19,12 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-
 from boto.ec2.elb.listelement import ListElement
 from boto.resultset import ResultSet
 from boto.ec2.autoscale.launchconfig import LaunchConfiguration
 from boto.ec2.autoscale.request import Request
 from boto.ec2.autoscale.instance import Instance
+from boto.ec2.autoscale.tag import Tag
 
 
 class ProcessType(object):
@@ -81,13 +81,24 @@
             self.metric = value
 
 
+class TerminationPolicies(list):
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'member':
+            self.append(value)
+
+
 class AutoScalingGroup(object):
     def __init__(self, connection=None, name=None,
                  launch_config=None, availability_zones=None,
                  load_balancers=None, default_cooldown=None,
                  health_check_type=None, health_check_period=None,
-                 placement_group=None, vpc_zone_identifier=None, desired_capacity=None,
-                 min_size=None, max_size=None, **kwargs):
+                 placement_group=None, vpc_zone_identifier=None,
+                 desired_capacity=None, min_size=None, max_size=None,
+                 tags=None, **kwargs):
         """
         Creates a new AutoScalingGroup with the specified name.
 
@@ -103,24 +114,23 @@
         :param availability_zones: List of availability zones (required).
 
         :type default_cooldown: int
-        :param default_cooldown: Number of seconds after a Scaling Activity completes
-                                 before any further scaling activities can start.
+        :param default_cooldown: Number of seconds after a Scaling Activity
+            completes before any further scaling activities can start.
 
         :type desired_capacity: int
         :param desired_capacity: The desired capacity for the group.
 
         :type health_check_period: str
-        :param health_check_period: Length of time in seconds after a new EC2 instance
-                                    comes into service that Auto Scaling starts checking its
-                                    health.
+        :param health_check_period: Length of time in seconds after a new
+            EC2 instance comes into service that Auto Scaling starts
+            checking its health.
 
         :type health_check_type: str
         :param health_check_type: The service you want the health status from,
-                                   Amazon EC2 or Elastic Load Balancer.
+            Amazon EC2 or Elastic Load Balancer.
 
-        :type launch_config: str or LaunchConfiguration
-        :param launch_config: Name of launch configuration (required).
-
+        :type launch_config: str or LaunchConfiguration
+        :param launch_config: Name of launch configuration (required).
 
         :type load_balancers: list
         :param load_balancers: List of load balancers.
@@ -133,21 +143,25 @@
 
         :type placement_group: str
         :param placement_group: Physical location of your cluster placement
-                                group created in Amazon EC2.
+            group created in Amazon EC2.
 
         :type vpc_zone_identifier: str
-        :param vpc_zone_identifier: The subnet identifier of the Virtual Private Cloud.
+        :param vpc_zone_identifier: The subnet identifier of the Virtual
+            Private Cloud.
 
         :rtype: :class:`boto.ec2.autoscale.group.AutoScalingGroup`
         :return: An autoscale group.
         """
-        self.name = name or kwargs.get('group_name')   # backwards compatibility
+        self.name = name or kwargs.get('group_name')   # backwards compat
         self.connection = connection
         self.min_size = int(min_size) if min_size is not None else None
         self.max_size = int(max_size) if max_size is not None else None
         self.created_time = None
-        default_cooldown = default_cooldown or kwargs.get('cooldown')  # backwards compatibility
-        self.default_cooldown = int(default_cooldown) if default_cooldown is not None else None
+        # backwards compatibility
+        default_cooldown = default_cooldown or kwargs.get('cooldown')
+        if default_cooldown is not None:
+            default_cooldown = int(default_cooldown)
+        self.default_cooldown = default_cooldown
         self.launch_config_name = launch_config
         if launch_config and isinstance(launch_config, LaunchConfiguration):
             self.launch_config_name = launch_config.name
@@ -162,20 +176,20 @@
         self.autoscaling_group_arn = None
         self.vpc_zone_identifier = vpc_zone_identifier
         self.instances = None
+        self.tags = tags or None
+        self.termination_policies = TerminationPolicies()
 
     # backwards compatible access to 'cooldown' param
     def _get_cooldown(self):
         return self.default_cooldown
+
     def _set_cooldown(self, val):
         self.default_cooldown = val
+
     cooldown = property(_get_cooldown, _set_cooldown)
 
     def __repr__(self):
-        return 'AutoScalingGroup<%s>: created:%s, minsize:%s, maxsize:%s, capacity:%s' % (self.name,
-                                                                                          self.created_time,
-                                                                                          self.min_size,
-                                                                                          self.max_size,
-                                                                                          self.desired_capacity)
+        return 'AutoScaleGroup<%s>' % self.name
 
     def startElement(self, name, attrs, connection):
         if name == 'Instances':
@@ -191,6 +205,11 @@
         elif name == 'SuspendedProcesses':
             self.suspended_processes = ResultSet([('member', SuspendedProcess)])
             return self.suspended_processes
+        elif name == 'Tags':
+            self.tags = ResultSet([('member', Tag)])
+            return self.tags
+        elif name == 'TerminationPolicies':
+            return self.termination_policies
         else:
             return
 
@@ -214,7 +233,10 @@
         elif name == 'PlacementGroup':
             self.placement_group = value
         elif name == 'HealthCheckGracePeriod':
-            self.health_check_period = int(value)
+            try:
+                self.health_check_period = int(value)
+            except ValueError:
+                self.health_check_period = None
         elif name == 'HealthCheckType':
             self.health_check_type = value
         elif name == 'VPCZoneIdentifier':
@@ -223,22 +245,25 @@
             setattr(self, name, value)
 
     def set_capacity(self, capacity):
-        """ Set the desired capacity for the group. """
-        params = {
-                  'AutoScalingGroupName' : self.name,
-                  'DesiredCapacity'      : capacity,
-                 }
+        """
+        Set the desired capacity for the group.
+        """
+        params = {'AutoScalingGroupName': self.name,
+                  'DesiredCapacity': capacity}
         req = self.connection.get_object('SetDesiredCapacity', params,
-                                            Request)
+                                         Request)
         self.connection.last_request = req
         return req
 
     def update(self):
-        """ Sync local changes with AutoScaling group. """
+        """
+        Sync local changes with AutoScaling group.
+        """
         return self.connection._update_group('UpdateAutoScalingGroup', self)
 
     def shutdown_instances(self):
-        """ Convenience method which shuts down all instances associated with
+        """
+        Convenience method which shuts down all instances associated with
         this group.
         """
         self.min_size = 0
@@ -247,23 +272,39 @@
         self.update()
 
     def delete(self, force_delete=False):
-        """ Delete this auto-scaling group if no instances attached or no
+        """
+        Delete this auto-scaling group if no instances attached or no
         scaling activities in progress.
         """
-        return self.connection.delete_auto_scaling_group(self.name, force_delete)
+        return self.connection.delete_auto_scaling_group(self.name,
+                                                         force_delete)
 
     def get_activities(self, activity_ids=None, max_records=50):
         """
         Get all activities for this group.
         """
-        return self.connection.get_all_activities(self, activity_ids, max_records)
+        return self.connection.get_all_activities(self, activity_ids,
+                                                  max_records)
+
+    def put_notification_configuration(self, topic, notification_types):
+        """
+        Configures an Auto Scaling group to send notifications when
+        specified events take place.
+        """
+        return self.connection.put_notification_configuration(self,
+                                                              topic,
+                                                              notification_types)
 
     def suspend_processes(self, scaling_processes=None):
-        """ Suspends Auto Scaling processes for an Auto Scaling group. """
+        """
+        Suspends Auto Scaling processes for an Auto Scaling group.
+        """
         return self.connection.suspend_processes(self.name, scaling_processes)
 
     def resume_processes(self, scaling_processes=None):
-        """ Resumes Auto Scaling processes for an Auto Scaling group. """
+        """
+        Resumes Auto Scaling processes for an Auto Scaling group.
+        """
         return self.connection.resume_processes(self.name, scaling_processes)
 
 
@@ -287,4 +328,3 @@
             self.granularity = value
         else:
             setattr(self, name, value)
-
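A minimal usage sketch for the revised AutoScalingGroup above: the `cooldown` property now aliases `default_cooldown`, and `put_notification_configuration` wires the group to an SNS topic. The region, group name, and topic ARN are placeholders, and the connection setup via the module's connect_to_region/get_all_groups helpers is an assumption, not part of this hunk.

    import boto.ec2.autoscale

    # Assumed connection and group name; substitute real values.
    conn = boto.ec2.autoscale.connect_to_region('us-east-1')
    group = conn.get_all_groups(names=['my-asg'])[0]

    # Backwards-compatible alias: writes through to default_cooldown.
    group.cooldown = 300
    assert group.default_cooldown == 300

    # New in this revision: send scaling events to an SNS topic (placeholder ARN).
    group.put_notification_configuration(
        'arn:aws:sns:us-east-1:123456789012:scaling-events',
        ['autoscaling:EC2_INSTANCE_LAUNCH', 'autoscaling:EC2_INSTANCE_TERMINATE'])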
diff --git a/boto/ec2/autoscale/launchconfig.py b/boto/ec2/autoscale/launchconfig.py
index 2f55b24..e6e38fd 100644
--- a/boto/ec2/autoscale/launchconfig.py
+++ b/boto/ec2/autoscale/launchconfig.py
@@ -1,4 +1,5 @@
 # Copyright (c) 2009 Reza Lotun http://reza.lotun.name/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -19,13 +20,15 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 
-
 from datetime import datetime
-import base64
 from boto.resultset import ResultSet
 from boto.ec2.elb.listelement import ListElement
+import boto.utils
+import base64
 
 # this should use the corresponding object from boto.ec2
+
+
 class Ebs(object):
     def __init__(self, connection=None, snapshot_id=None, volume_size=None):
         self.connection = connection
@@ -70,7 +73,8 @@
         self.ebs = None
 
     def __repr__(self):
-        return 'BlockDeviceMapping(%s, %s)' % (self.device_name, self.virtual_name)
+        return 'BlockDeviceMapping(%s, %s)' % (self.device_name,
+                                               self.virtual_name)
 
     def startElement(self, name, attrs, connection):
         if name == 'Ebs':
@@ -89,7 +93,8 @@
                  key_name=None, security_groups=None, user_data=None,
                  instance_type='m1.small', kernel_id=None,
                  ramdisk_id=None, block_device_mappings=None,
-                 instance_monitoring=False):
+                 instance_monitoring=False, spot_price=None,
+                 instance_profile_name=None):
         """
         A launch configuration.
 
@@ -98,14 +103,14 @@
 
         :type image_id: str
         :param image_id: Unique ID of the Amazon Machine Image (AMI) which was
-                         assigned during registration.
+            assigned during registration.
 
         :type key_name: str
         :param key_name: The name of the EC2 key pair.
 
         :type security_groups: list
         :param security_groups: Names of the security groups with which to
-                                associate the EC2 instances.
+            associate the EC2 instances.
 
         :type user_data: str
         :param user_data: The user data available to launched EC2 instances.
@@ -121,11 +126,20 @@
 
         :type block_device_mappings: list
         :param block_device_mappings: Specifies how block devices are exposed
-                                      for instances
+            for instances
 
         :type instance_monitoring: bool
         :param instance_monitoring: Whether instances in group are launched
-                                    with detailed monitoring.
+            with detailed monitoring.
+
+        :type spot_price: float
+        :param spot_price: The spot price you are bidding.  Only applies
+            if you are building an autoscaling group with spot instances.
+
+        :type instance_profile_name: string
+        :param instance_profile_name: The name or the Amazon Resource
+            Name (ARN) of the instance profile associated with the IAM
+            role for the instance.
         """
         self.connection = connection
         self.name = name
@@ -141,6 +155,8 @@
         self.user_data = user_data
         self.created_time = None
         self.instance_monitoring = instance_monitoring
+        self.spot_price = spot_price
+        self.instance_profile_name = instance_profile_name
         self.launch_configuration_arn = None
 
     def __repr__(self):
@@ -150,7 +166,8 @@
         if name == 'SecurityGroups':
             return self.security_groups
         elif name == 'BlockDeviceMappings':
-            self.block_device_mappings = ResultSet([('member', BlockDeviceMapping)])
+            self.block_device_mappings = ResultSet([('member',
+                                                     BlockDeviceMapping)])
             return self.block_device_mappings
         elif name == 'InstanceMonitoring':
             self.instance_monitoring = InstanceMonitoring(self)
@@ -166,24 +183,27 @@
         elif name == 'ImageId':
             self.image_id = value
         elif name == 'CreatedTime':
-            try:
-                self.created_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ')
-            except ValueError:
-                self.created_time = datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')
+            self.created_time = boto.utils.parse_ts(value)
         elif name == 'KernelId':
             self.kernel_id = value
         elif name == 'RamdiskId':
             self.ramdisk_id = value
         elif name == 'UserData':
-            self.user_data = base64.b64decode(value)
+            try:
+                self.user_data = base64.b64decode(value)
+            except TypeError:
+                self.user_data = value
         elif name == 'LaunchConfigurationARN':
             self.launch_configuration_arn = value
         elif name == 'InstanceMonitoring':
             self.instance_monitoring = value
+        elif name == 'SpotPrice':
+            self.spot_price = float(value)
+        elif name == 'IamInstanceProfile':
+            self.instance_profile_name = value
         else:
             setattr(self, name, value)
 
     def delete(self):
         """ Delete this launch configuration. """
         return self.connection.delete_launch_configuration(self.name)
-
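A short sketch of the new LaunchConfiguration options added above (`spot_price` and `instance_profile_name`), assuming the same AutoScaleConnection `conn` as in the previous sketch and its create_launch_configuration method; the AMI ID, key pair, and IAM profile name are placeholders.

    from boto.ec2.autoscale import LaunchConfiguration

    lc = LaunchConfiguration(
        name='spot-web-lc',
        image_id='ami-12345678',
        key_name='my-keypair',
        security_groups=['webservers'],
        instance_type='m1.small',
        spot_price=0.08,                       # new: bid price for spot instances
        instance_profile_name='my-web-role')   # new: IAM instance profile name or ARN
    conn.create_launch_configuration(lc)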
diff --git a/boto/ec2/autoscale/tag.py b/boto/ec2/autoscale/tag.py
new file mode 100644
index 0000000..ad9641d
--- /dev/null
+++ b/boto/ec2/autoscale/tag.py
@@ -0,0 +1,84 @@
+# Copyright (c) 2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+
+class Tag(object):
+    """
+    A name/value tag on an AutoScalingGroup resource.
+
+    :ivar key: The key of the tag.
+    :ivar value: The value of the tag.
+    :ivar propagate_at_launch: Boolean value which specifies whether the
+        new tag will be applied to instances launched after the tag is created.
+    :ivar resource_id: The name of the autoscaling group.
+    :ivar resource_type: The only supported resource type at this time
+        is "auto-scaling-group".
+    """
+
+    def __init__(self, connection=None, key=None, value=None,
+                 propagate_at_launch=False, resource_id=None,
+                 resource_type='auto-scaling-group'):
+        self.connection = connection
+        self.key = key
+        self.value = value
+        self.propagate_at_launch = propagate_at_launch
+        self.resource_id = resource_id
+        self.resource_type = resource_type
+
+    def __repr__(self):
+        return 'Tag(%s=%s)' % (self.key, self.value)
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'Key':
+            self.key = value
+        elif name == 'Value':
+            self.value = value
+        elif name == 'PropagateAtLaunch':
+            if value.lower() == 'true':
+                self.propagate_at_launch = True
+            else:
+                self.propagate_at_launch = False
+        elif name == 'ResourceId':
+            self.resource_id = value
+        elif name == 'ResourceType':
+            self.resource_type = value
+
+    def build_params(self, params, i):
+        """
+        Populates a dictionary with the name/value pairs necessary
+        to identify this Tag in a request.
+        """
+        prefix = 'Tags.member.%d.' % i
+        params[prefix + 'ResourceId'] = self.resource_id
+        params[prefix + 'ResourceType'] = self.resource_type
+        params[prefix + 'Key'] = self.key
+        params[prefix + 'Value'] = self.value
+        if self.propagate_at_launch:
+            params[prefix + 'PropagateAtLaunch'] = 'true'
+        else:
+            params[prefix + 'PropagateAtLaunch'] = 'false'
+
+    def delete(self):
+        return self.connection.delete_tags([self])
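The new Tag class above carries its own request serialization; a minimal sketch of how a tag is built and flattened into request parameters (the group name is a placeholder):

    from boto.ec2.autoscale.tag import Tag

    tag = Tag(key='env', value='production',
              propagate_at_launch=True,
              resource_id='my-asg')

    params = {}
    tag.build_params(params, 1)
    # params now holds 'Tags.member.1.Key', 'Tags.member.1.Value',
    # 'Tags.member.1.ResourceId', 'Tags.member.1.ResourceType' and
    # 'Tags.member.1.PropagateAtLaunch' = 'true'.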
diff --git a/boto/ec2/blockdevicemapping.py b/boto/ec2/blockdevicemapping.py
index 75be2a4..ca0e937 100644
--- a/boto/ec2/blockdevicemapping.py
+++ b/boto/ec2/blockdevicemapping.py
@@ -1,4 +1,5 @@
-# Copyright (c) 2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2009-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -14,13 +15,17 @@
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
-# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 # IN THE SOFTWARE.
 #
 
+
 class BlockDeviceType(object):
+    """
+    Represents parameters for a block device.
+    """
 
     def __init__(self,
                  connection=None,
@@ -31,7 +36,9 @@
                  status=None,
                  attach_time=None,
                  delete_on_termination=False,
-                 size=None):
+                 size=None,
+                 volume_type=None,
+                 iops=None):
         self.connection = connection
         self.ephemeral_name = ephemeral_name
         self.no_device = no_device
@@ -41,18 +48,20 @@
         self.attach_time = attach_time
         self.delete_on_termination = delete_on_termination
         self.size = size
+        self.volume_type = volume_type
+        self.iops = iops
 
     def startElement(self, name, attrs, connection):
         pass
 
     def endElement(self, name, value, connection):
-        if name =='volumeId':
+        if name == 'volumeId':
             self.volume_id = value
         elif name == 'virtualName':
             self.ephemeral_name = value
-        elif name =='NoDevice':
+        elif name == 'NoDevice':
             self.no_device = (value == 'true')
-        elif name =='snapshotId':
+        elif name == 'snapshotId':
             self.snapshot_id = value
         elif name == 'volumeSize':
             self.size = int(value)
@@ -61,19 +70,35 @@
         elif name == 'attachTime':
             self.attach_time = value
         elif name == 'deleteOnTermination':
-            if value == 'true':
-                self.delete_on_termination = True
-            else:
-                self.delete_on_termination = False
+            self.delete_on_termination = (value == 'true')
+        elif name == 'volumeType':
+            self.volume_type = value
+        elif name == 'iops':
+            self.iops = int(value)
         else:
             setattr(self, name, value)
 
 # for backwards compatibility
 EBSBlockDeviceType = BlockDeviceType
 
+
 class BlockDeviceMapping(dict):
+    """
+    Represents a collection of BlockDeviceTypes when creating EC2 instances.
+
+    Example:
+    dev_sda1 = BlockDeviceType()
+    dev_sda1.size = 100   # change root volume to 100GB instead of default
+    bdm = BlockDeviceMapping()
+    bdm['/dev/sda1'] = dev_sda1
+    reservation = image.run(..., block_device_map=bdm, ...)
+    """
 
     def __init__(self, connection=None):
+        """
+        :type connection: :class:`boto.ec2.EC2Connection`
+        :param connection: Optional connection.
+        """
         dict.__init__(self)
         self.connection = connection
         self.current_name = None
@@ -109,4 +134,8 @@
                     params['%s.Ebs.DeleteOnTermination' % pre] = 'true'
                 else:
                     params['%s.Ebs.DeleteOnTermination' % pre] = 'false'
+                if block_dev.volume_type:
+                    params['%s.Ebs.VolumeType' % pre] = block_dev.volume_type
+                if block_dev.iops is not None:
+                    params['%s.Ebs.Iops' % pre] = block_dev.iops
             i += 1
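A small sketch of the new `volume_type`/`iops` fields on BlockDeviceType, following the example in the class docstring above; the device name and values are illustrative only.

    from boto.ec2.blockdevicemapping import BlockDeviceType, BlockDeviceMapping

    piops = BlockDeviceType(size=100,
                            volume_type='io1',          # new: EBS volume type
                            iops=1000,                  # new: provisioned IOPS
                            delete_on_termination=True)
    bdm = BlockDeviceMapping()
    bdm['/dev/sdf'] = piops
    # As in the docstring above, pass the mapping when launching:
    # reservation = image.run(..., block_device_map=bdm, ...)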
diff --git a/boto/ec2/cloudwatch/__init__.py b/boto/ec2/cloudwatch/__init__.py
index d301167..5b8db5b 100644
--- a/boto/ec2/cloudwatch/__init__.py
+++ b/boto/ec2/cloudwatch/__init__.py
@@ -22,119 +22,6 @@
 """
 This module provides an interface to the Elastic Compute Cloud (EC2)
 CloudWatch service from AWS.
-
-The 5 Minute How-To Guide
--------------------------
-First, make sure you have something to monitor.  You can either create a
-LoadBalancer or enable monitoring on an existing EC2 instance.  To enable
-monitoring, you can either call the monitor_instance method on the
-EC2Connection object or call the monitor method on the Instance object.
-
-It takes a while for the monitoring data to start accumulating but once
-it does, you can do this:
-
->>> import boto
->>> c = boto.connect_cloudwatch()
->>> metrics = c.list_metrics()
->>> metrics
-[Metric:NetworkIn,
- Metric:NetworkOut,
- Metric:NetworkOut(InstanceType,m1.small),
- Metric:NetworkIn(InstanceId,i-e573e68c),
- Metric:CPUUtilization(InstanceId,i-e573e68c),
- Metric:DiskWriteBytes(InstanceType,m1.small),
- Metric:DiskWriteBytes(ImageId,ami-a1ffb63),
- Metric:NetworkOut(ImageId,ami-a1ffb63),
- Metric:DiskWriteOps(InstanceType,m1.small),
- Metric:DiskReadBytes(InstanceType,m1.small),
- Metric:DiskReadOps(ImageId,ami-a1ffb63),
- Metric:CPUUtilization(InstanceType,m1.small),
- Metric:NetworkIn(ImageId,ami-a1ffb63),
- Metric:DiskReadOps(InstanceType,m1.small),
- Metric:DiskReadBytes,
- Metric:CPUUtilization,
- Metric:DiskWriteBytes(InstanceId,i-e573e68c),
- Metric:DiskWriteOps(InstanceId,i-e573e68c),
- Metric:DiskWriteOps,
- Metric:DiskReadOps,
- Metric:CPUUtilization(ImageId,ami-a1ffb63),
- Metric:DiskReadOps(InstanceId,i-e573e68c),
- Metric:NetworkOut(InstanceId,i-e573e68c),
- Metric:DiskReadBytes(ImageId,ami-a1ffb63),
- Metric:DiskReadBytes(InstanceId,i-e573e68c),
- Metric:DiskWriteBytes,
- Metric:NetworkIn(InstanceType,m1.small),
- Metric:DiskWriteOps(ImageId,ami-a1ffb63)]
-
-The list_metrics call will return a list of all of the available metrics
-that you can query against.  Each entry in the list is a Metric object.
-As you can see from the list above, some of the metrics are generic metrics
-and some have Dimensions associated with them (e.g. InstanceType=m1.small).
-The Dimension can be used to refine your query.  So, for example, I could
-query the metric Metric:CPUUtilization which would create the desired statistic
-by aggregating cpu utilization data across all sources of information available
-or I could refine that by querying the metric
-Metric:CPUUtilization(InstanceId,i-e573e68c) which would use only the data
-associated with the instance identified by the instance ID i-e573e68c.
-
-Because for this example, I'm only monitoring a single instance, the set
-of metrics available to me are fairly limited.  If I was monitoring many
-instances, using many different instance types and AMI's and also several
-load balancers, the list of available metrics would grow considerably.
-
-Once you have the list of available metrics, you can actually
-query the CloudWatch system for that metric.  Let's choose the CPU utilization
-metric for our instance.
-
->>> m = metrics[5]
->>> m
-Metric:CPUUtilization(InstanceId,i-e573e68c)
-
-The Metric object has a query method that lets us actually perform
-the query against the collected data in CloudWatch.  To call that,
-we need a start time and end time to control the time span of data
-that we are interested in.  For this example, let's say we want the
-data for the previous hour:
-
->>> import datetime
->>> end = datetime.datetime.now()
->>> start = end - datetime.timedelta(hours=1)
-
-We also need to supply the Statistic that we want reported and
-the Units to use for the results.  The Statistic can be one of these
-values:
-
-['Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount']
-
-And Units must be one of the following:
-
-['Seconds', 'Percent', 'Bytes', 'Bits', 'Count',
-'Bytes/Second', 'Bits/Second', 'Count/Second']
-
-The query method also takes an optional parameter, period.  This
-parameter controls the granularity (in seconds) of the data returned.
-The smallest period is 60 seconds and the value must be a multiple
-of 60 seconds.  So, let's ask for the average as a percent:
-
->>> datapoints = m.query(start, end, 'Average', 'Percent')
->>> len(datapoints)
-60
-
-Our period was 60 seconds and our duration was one hour so
-we should get 60 data points back and we can see that we did.
-Each element in the datapoints list is a DataPoint object
-which is a simple subclass of a Python dict object.  Each
-Datapoint object contains all of the information available
-about that particular data point.
-
->>> d = datapoints[0]
->>> d
-{u'Average': 0.0,
- u'SampleCount': 1.0,
- u'Timestamp': u'2009-05-21T19:55:00Z',
- u'Unit': u'Percent'}
-
-My server obviously isn't very busy right now!
 """
 try:
     import simplejson as json
@@ -143,18 +30,20 @@
 
 from boto.connection import AWSQueryConnection
 from boto.ec2.cloudwatch.metric import Metric
-from boto.ec2.cloudwatch.alarm import MetricAlarm, AlarmHistoryItem
+from boto.ec2.cloudwatch.alarm import MetricAlarm, MetricAlarms, AlarmHistoryItem
 from boto.ec2.cloudwatch.datapoint import Datapoint
 from boto.regioninfo import RegionInfo
 import boto
 
 RegionData = {
-    'us-east-1' : 'monitoring.us-east-1.amazonaws.com',
-    'us-west-1' : 'monitoring.us-west-1.amazonaws.com',
-    'us-west-2' : 'monitoring.us-west-2.amazonaws.com',
-    'eu-west-1' : 'monitoring.eu-west-1.amazonaws.com',
-    'ap-northeast-1' : 'monitoring.ap-northeast-1.amazonaws.com',
-    'ap-southeast-1' : 'monitoring.ap-southeast-1.amazonaws.com'}
+    'us-east-1': 'monitoring.us-east-1.amazonaws.com',
+    'us-west-1': 'monitoring.us-west-1.amazonaws.com',
+    'us-west-2': 'monitoring.us-west-2.amazonaws.com',
+    'sa-east-1': 'monitoring.sa-east-1.amazonaws.com',
+    'eu-west-1': 'monitoring.eu-west-1.amazonaws.com',
+    'ap-northeast-1': 'monitoring.ap-northeast-1.amazonaws.com',
+    'ap-southeast-1': 'monitoring.ap-southeast-1.amazonaws.com'}
+
 
 def regions():
     """
@@ -171,6 +60,7 @@
         regions.append(region)
     return regions
 
+
 def connect_to_region(region_name, **kw_params):
     """
     Given a valid region name, return a
@@ -195,13 +85,13 @@
                                         'us-east-1')
     DefaultRegionEndpoint = boto.config.get('Boto',
                                             'cloudwatch_region_endpoint',
-                                            'monitoring.amazonaws.com')
-
+                                            'monitoring.us-east-1.amazonaws.com')
 
     def __init__(self, aws_access_key_id=None, aws_secret_access_key=None,
                  is_secure=True, port=None, proxy=None, proxy_port=None,
                  proxy_user=None, proxy_pass=None, debug=0,
-                 https_connection_factory=None, region=None, path='/'):
+                 https_connection_factory=None, region=None, path='/',
+                 security_token=None, validate_certs=True):
         """
         Init method to create a new connection to EC2 Monitoring Service.
 
@@ -213,41 +103,55 @@
                                 self.DefaultRegionEndpoint)
         self.region = region
 
+        # Ugly hack to get around both a bug in Python and a
+        # misconfigured SSL cert for the eu-west-1 endpoint
+        if self.region.name == 'eu-west-1':
+            validate_certs = False
+
         AWSQueryConnection.__init__(self, aws_access_key_id,
-                                    aws_secret_access_key, 
+                                    aws_secret_access_key,
                                     is_secure, port, proxy, proxy_port,
                                     proxy_user, proxy_pass,
                                     self.region.endpoint, debug,
-                                    https_connection_factory, path)
+                                    https_connection_factory, path,
+                                    security_token,
+                                    validate_certs=validate_certs)
 
     def _required_auth_capability(self):
         return ['ec2']
 
     def build_dimension_param(self, dimension, params):
+        prefix = 'Dimensions.member'
+        i = 0
         for dim_name in dimension:
             dim_value = dimension[dim_name]
-            if isinstance(dim_value, basestring):
-                dim_value = [dim_value]
-            for i, value in enumerate(dim_value):
-                params['Dimensions.member.%d.Name' % (i+1)] = dim_name
-                params['Dimensions.member.%d.Value' % (i+1)] = value
-    
+            if dim_value:
+                if isinstance(dim_value, basestring):
+                    dim_value = [dim_value]
+                for value in dim_value:
+                    params['%s.%d.Name' % (prefix, i+1)] = dim_name
+                    params['%s.%d.Value' % (prefix, i+1)] = value
+                    i += 1
+            else:
+                params['%s.%d.Name' % (prefix, i+1)] = dim_name
+                i += 1
+
     def build_list_params(self, params, items, label):
         if isinstance(items, basestring):
             items = [items]
         for index, item in enumerate(items):
             i = index + 1
             if isinstance(item, dict):
-                for k,v in item.iteritems():
+                for k, v in item.iteritems():
                     params[label % (i, 'Name')] = k
                     if v is not None:
                         params[label % (i, 'Value')] = v
             else:
                 params[label % i] = item
 
-    def build_put_params(self, params, name, value=None, timestamp=None, 
+    def build_put_params(self, params, name, value=None, timestamp=None,
                         unit=None, dimensions=None, statistics=None):
-        args = (name, value, unit, dimensions, statistics)
+        args = (name, value, unit, dimensions, statistics, timestamp)
         length = max(map(lambda a: len(a) if isinstance(a, list) else 1, args))
 
         def aslist(a):
@@ -257,18 +161,18 @@
                 return a
             return [a] * length
 
-        for index, (n, v, u, d, s) in enumerate(zip(*map(aslist, args))):
+        for index, (n, v, u, d, s, t) in enumerate(zip(*map(aslist, args))):
             metric_data = {'MetricName': n}
 
             if timestamp:
-                metric_data['Timestamp'] = timestamp.isoformat()
-            
+                metric_data['Timestamp'] = t.isoformat()
+
             if unit:
                 metric_data['Unit'] = u
-            
+
             if dimensions:
                 self.build_dimension_param(d, metric_data)
-            
+
             if statistics:
                 metric_data['StatisticValues.Maximum'] = s['maximum']
                 metric_data['StatisticValues.Minimum'] = s['minimum']
@@ -294,20 +198,20 @@
 
         :type period: integer
         :param period: The granularity, in seconds, of the returned datapoints.
-                       Period must be at least 60 seconds and must be a multiple
-                       of 60. The default value is 60.
+            Period must be at least 60 seconds and must be a multiple
+            of 60. The default value is 60.
 
         :type start_time: datetime
-        :param start_time: The time stamp to use for determining the first
-                           datapoint to return. The value specified is
-                           inclusive; results include datapoints with the
-                           time stamp specified.
+        :param start_time: The time stamp to use for determining the
+            first datapoint to return. The value specified is
+            inclusive; results include datapoints with the time stamp
+            specified.
 
         :type end_time: datetime
-        :param end_time: The time stamp to use for determining the last
-                         datapoint to return. The value specified is
-                         exclusive; results will include datapoints up to
-                         the time stamp specified.
+        :param end_time: The time stamp to use for determining the
+            last datapoint to return. The value specified is
+            exclusive; results will include datapoints up to the time
+            stamp specified.
 
         :type metric_name: string
         :param metric_name: The metric name.
@@ -317,7 +221,7 @@
 
         :type statistics: list
         :param statistics: A list of statistic names. Valid values:
-                           Average | Sum | SampleCount | Maximum | Minimum
+            Average | Sum | SampleCount | Maximum | Minimum
 
         :type dimensions: dict
         :param dimensions: A dictionary of dimension key/values where
@@ -325,16 +229,29 @@
                            is either a scalar value or an iterator
                            of values to be associated with that
                            dimension.
+
+        :type unit: string
+        :param unit: The unit for the metric.  Valid values are:
+            Seconds | Microseconds | Milliseconds | Bytes | Kilobytes |
+            Megabytes | Gigabytes | Terabytes | Bits | Kilobits |
+            Megabits | Gigabits | Terabits | Percent | Count |
+            Bytes/Second | Kilobytes/Second | Megabytes/Second |
+            Gigabytes/Second | Terabytes/Second | Bits/Second |
+            Kilobits/Second | Megabits/Second | Gigabits/Second |
+            Terabits/Second | Count/Second | None
+
         :rtype: list
         """
-        params = {'Period' : period,
-                  'MetricName' : metric_name,
-                  'Namespace' : namespace,
-                  'StartTime' : start_time.isoformat(),
-                  'EndTime' : end_time.isoformat()}
+        params = {'Period': period,
+                  'MetricName': metric_name,
+                  'Namespace': namespace,
+                  'StartTime': start_time.isoformat(),
+                  'EndTime': end_time.isoformat()}
         self.build_list_params(params, statistics, 'Statistics.member.%d')
         if dimensions:
             self.build_dimension_param(dimensions, params)
+        if unit:
+            params['Unit'] = unit
         return self.get_list('GetMetricStatistics', params,
                              [('member', Datapoint)])
 
@@ -345,30 +262,28 @@
         data available.
 
         :type next_token: str
-        :param next_token: A maximum of 500 metrics will be returned at one
-                           time.  If more results are available, the
-                           ResultSet returned will contain a non-Null
-                           next_token attribute.  Passing that token as a
-                           parameter to list_metrics will retrieve the
-                           next page of metrics.
+        :param next_token: A maximum of 500 metrics will be returned
+            at one time.  If more results are available, the ResultSet
+            returned will contain a non-Null next_token attribute.
+            Passing that token as a parameter to list_metrics will
+            retrieve the next page of metrics.
 
-        :type dimension: dict
-        :param dimension_filters: A dictionary containing name/value pairs
-                                  that will be used to filter the results.
-                                  The key in the dictionary is the name of
-                                  a Dimension.  The value in the dictionary
-                                  is either a scalar value of that Dimension
-                                  name that you want to filter on, a list
-                                  of values to filter on or None if
-                                  you want all metrics with that Dimension name.
+        :type dimensions: dict
+        :param dimensions: A dictionary containing name/value
+            pairs that will be used to filter the results.  The key in
+            the dictionary is the name of a Dimension.  The value in
+            the dictionary is either a scalar value of that Dimension
+            name that you want to filter on, a list of values to
+            filter on or None if you want all metrics with that
+            Dimension name.
 
         :type metric_name: str
         :param metric_name: The name of the Metric to filter against.  If None,
-                            all Metric names will be returned.
+            all Metric names will be returned.
 
         :type namespace: str
         :param namespace: A Metric namespace to filter against (e.g. AWS/EC2).
-                          If None, Metrics from all namespaces will be returned.
+            If None, Metrics from all namespaces will be returned.
         """
         params = {}
         if next_token:
@@ -379,16 +294,16 @@
             params['MetricName'] = metric_name
         if namespace:
             params['Namespace'] = namespace
-        
+
         return self.get_list('ListMetrics', params, [('member', Metric)])
-    
-    def put_metric_data(self, namespace, name, value=None, timestamp=None, 
+
+    def put_metric_data(self, namespace, name, value=None, timestamp=None,
                         unit=None, dimensions=None, statistics=None):
         """
-        Publishes metric data points to Amazon CloudWatch. Amazon Cloudwatch 
-        associates the data points with the specified metric. If the specified 
-        metric does not exist, Amazon CloudWatch creates the metric. If a list 
-        is specified for some, but not all, of the arguments, the remaining 
+        Publishes metric data points to Amazon CloudWatch. Amazon CloudWatch
+        associates the data points with the specified metric. If the specified
+        metric does not exist, Amazon CloudWatch creates the metric. If a list
+        is specified for some, but not all, of the arguments, the remaining
         arguments are repeated a corresponding number of times.
 
         :type namespace: str
@@ -401,11 +316,11 @@
         :param value: The value for the metric.
 
         :type timestamp: datetime or list
-        :param timestamp: The time stamp used for the metric. If not specified, 
+        :param timestamp: The time stamp used for the metric. If not specified,
             the default value is set to the time the metric data was received.
-        
+
         :type unit: string or list
-        :param unit: The unit of the metric.  Valid Values: Seconds | 
+        :param unit: The unit of the metric.  Valid Values: Seconds |
             Microseconds | Milliseconds | Bytes | Kilobytes |
             Megabytes | Gigabytes | Terabytes | Bits | Kilobits |
             Megabits | Gigabits | Terabits | Percent | Count |
@@ -413,12 +328,12 @@
             Gigabytes/Second | Terabytes/Second | Bits/Second |
             Kilobits/Second | Megabits/Second | Gigabits/Second |
             Terabits/Second | Count/Second | None
-        
+
         :type dimensions: dict
-        :param dimensions: Add extra name value pairs to associate 
+        :param dimensions: Add extra name value pairs to associate
             with the metric, i.e.:
             {'name1': value1, 'name2': (value2, value3)}
-        
+
         :type statistics: dict or list
         :param statistics: Use a statistic set instead of a value, for example::
 
@@ -428,8 +343,7 @@
         self.build_put_params(params, name, value=value, timestamp=timestamp,
             unit=unit, dimensions=dimensions, statistics=statistics)
 
-        return self.get_status('PutMetricData', params)
-
+        return self.get_status('PutMetricData', params, verb="POST")
 
     def describe_alarms(self, action_prefix=None, alarm_name_prefix=None,
                         alarm_names=None, max_records=None, state_value=None,
@@ -445,21 +359,21 @@
 
         :type alarm_name_prefix: string
         :param alarm_name_prefix: The alarm name prefix. AlarmNames cannot
-                                  be specified if this parameter is specified.
+            be specified if this parameter is specified.
 
         :type alarm_names: list
         :param alarm_names: A list of alarm names to retrieve information for.
 
         :type max_records: int
         :param max_records: The maximum number of alarm descriptions
-                            to retrieve.
+            to retrieve.
 
         :type state_value: string
         :param state_value: The state value to be used in matching alarms.
 
         :type next_token: string
         :param next_token: The token returned by a previous call to
-                           indicate that there is more data.
+            indicate that there is more data.
 
         :rtype: list
         """
@@ -477,7 +391,7 @@
         if state_value:
             params['StateValue'] = state_value
         return self.get_list('DescribeAlarms', params,
-                             [('member', MetricAlarm)])
+                             [('MetricAlarms', MetricAlarms)])[0]
 
     def describe_alarm_history(self, alarm_name=None,
                                start_date=None, end_date=None,
@@ -503,15 +417,15 @@
 
         :type history_item_type: string
         :param history_item_type: The type of alarm histories to retrieve
-                                  (ConfigurationUpdate | StateUpdate | Action)
+            (ConfigurationUpdate | StateUpdate | Action)
 
         :type max_records: int
         :param max_records: The maximum number of alarm descriptions
-                            to retrieve.
+            to retrieve.
 
         :type next_token: string
         :param next_token: The token returned by a previous call to indicate
-                           that there is more data.
+            that there is more data.
 
         :rtype: list
         """
@@ -545,26 +459,25 @@
 
         :type period: int
         :param period: The period in seconds over which the statistic
-                       is applied.
+            is applied.
 
         :type statistic: string
         :param statistic: The statistic for the metric.
 
-        :param dimension_filters: A dictionary containing name/value pairs
-                                  that will be used to filter the results.
-                                  The key in the dictionary is the name of
-                                  a Dimension.  The value in the dictionary
-                                  is either a scalar value of that Dimension
-                                  name that you want to filter on, a list
-                                  of values to filter on or None if
-                                  you want all metrics with that Dimension name.
+        :param dimension_filters: A dictionary containing name/value
+            pairs that will be used to filter the results.  The key in
+            the dictionary is the name of a Dimension.  The value in
+            the dictionary is either a scalar value of that Dimension
+            name that you want to filter on, a list of values to
+            filter on or None if you want all metrics with that
+            Dimension name.
 
         :type unit: string
 
         :rtype: list
         """
-        params = {'MetricName' : metric_name,
-                  'Namespace' : namespace}
+        params = {'MetricName': metric_name,
+                  'Namespace': namespace}
         if period:
             params['Period'] = period
         if statistic:
@@ -593,14 +506,14 @@
         :param alarm: MetricAlarm object.
         """
         params = {
-                    'AlarmName'             :       alarm.name,
-                    'MetricName'            :       alarm.metric,
-                    'Namespace'             :       alarm.namespace,
-                    'Statistic'             :       alarm.statistic,
-                    'ComparisonOperator'    :       alarm.comparison,
-                    'Threshold'             :       alarm.threshold,
-                    'EvaluationPeriods'     :       alarm.evaluation_periods,
-                    'Period'                :       alarm.period,
+                    'AlarmName': alarm.name,
+                    'MetricName': alarm.metric,
+                    'Namespace': alarm.namespace,
+                    'Statistic': alarm.statistic,
+                    'ComparisonOperator': alarm.comparison,
+                    'Threshold': alarm.threshold,
+                    'EvaluationPeriods': alarm.evaluation_periods,
+                    'Period': alarm.period,
                  }
         if alarm.actions_enabled is not None:
             params['ActionsEnabled'] = alarm.actions_enabled
@@ -657,9 +570,9 @@
         :type state_reason_data: string
         :param state_reason_data: Reason string (will be jsonified).
         """
-        params = {'AlarmName' : alarm_name,
-                  'StateReason' : state_reason,
-                  'StateValue' : state_value}
+        params = {'AlarmName': alarm_name,
+                  'StateReason': state_reason,
+                  'StateValue': state_value}
         if state_reason_data:
             params['StateReasonData'] = json.dumps(state_reason_data)
 
@@ -686,4 +599,3 @@
         params = {}
         self.build_list_params(params, alarm_names, 'AlarmNames.member.%s')
         return self.get_status('DisableAlarmActions', params)
-
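A brief sketch against the revised CloudWatch connection above, exercising the `unit` argument now forwarded by get_metric_statistics, the POSTed put_metric_data, and describe_alarms returning a MetricAlarms list; the region, instance ID, namespace, and metric values are placeholders, and the connection setup is assumed.

    import datetime
    import boto.ec2.cloudwatch

    cw = boto.ec2.cloudwatch.connect_to_region('us-east-1')

    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=1)
    datapoints = cw.get_metric_statistics(
        300, start, end, 'CPUUtilization', 'AWS/EC2', ['Average'],
        dimensions={'InstanceId': 'i-12345678'},
        unit='Percent')                    # unit is now passed through to the request

    # put_metric_data is unchanged at the call site but is now sent as an HTTP POST.
    cw.put_metric_data('MyApp', 'RequestLatency', value=123.4,
                       unit='Milliseconds', dimensions={'Stage': 'prod'})

    # describe_alarms now returns a MetricAlarms list (pageable via next_token).
    alarms = cw.describe_alarms(max_records=10)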
diff --git a/boto/ec2/cloudwatch/alarm.py b/boto/ec2/cloudwatch/alarm.py
index f81157d..b0b9fd0 100644
--- a/boto/ec2/cloudwatch/alarm.py
+++ b/boto/ec2/cloudwatch/alarm.py
@@ -23,11 +23,32 @@
 from datetime import datetime
 from boto.resultset import ResultSet
 from boto.ec2.cloudwatch.listelement import ListElement
+from boto.ec2.cloudwatch.dimension import Dimension
+
 try:
     import simplejson as json
 except ImportError:
     import json
 
+
+class MetricAlarms(list):
+    def __init__(self, connection=None):
+        """
+        Parses a list of MetricAlarms.
+        """
+        list.__init__(self)
+        self.connection = connection
+
+    def startElement(self, name, attrs, connection):
+        if name == 'member':
+            metric_alarm = MetricAlarm(connection)
+            self.append(metric_alarm)
+            return metric_alarm
+
+    def endElement(self, name, value, connection):
+        pass
+
+
 class MetricAlarm(object):
 
     OK = 'OK'
@@ -35,10 +56,10 @@
     INSUFFICIENT_DATA = 'INSUFFICIENT_DATA'
 
     _cmp_map = {
-                    '>='    :   'GreaterThanOrEqualToThreshold',
-                    '>'     :   'GreaterThanThreshold',
-                    '<'     :   'LessThanThreshold',
-                    '<='    :   'LessThanOrEqualToThreshold',
+                    '>=': 'GreaterThanOrEqualToThreshold',
+                    '>':  'GreaterThanThreshold',
+                    '<':  'LessThanThreshold',
+                    '<=': 'LessThanOrEqualToThreshold',
                }
     _rev_cmp_map = dict((v, k) for (k, v) in _cmp_map.iteritems())
 
@@ -156,6 +177,9 @@
         elif name == 'OKActions':
             self.ok_actions = ListElement()
             return self.ok_actions
+        elif name == 'Dimensions':
+            self.dimensions = Dimension()
+            return self.dimensions
         else:
             pass
 
@@ -266,7 +290,7 @@
         self.ok_actions.append(action_arn)
 
     def delete(self):
-        self.connection.delete_alarms([self])
+        self.connection.delete_alarms([self.name])
 
 class AlarmHistoryItem(object):
     def __init__(self, connection=None):
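A hedged sketch of building a MetricAlarm with the shorthand comparison operators from `_cmp_map` above; the exact constructor keywords are an assumption based on the attribute names used by this module's request builder, and no request is sent here.

    from boto.ec2.cloudwatch.alarm import MetricAlarm

    # Assumed keyword names; '>=' is translated via _cmp_map to
    # 'GreaterThanOrEqualToThreshold' when the alarm is built.
    alarm = MetricAlarm(name='high-cpu',
                        metric='CPUUtilization', namespace='AWS/EC2',
                        statistic='Average', comparison='>=',
                        threshold=80.0, period=300, evaluation_periods=2)

    # Note: alarm.delete() now passes the alarm *name* (not the object)
    # to connection.delete_alarms().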
diff --git a/boto/ec2/cloudwatch/dimension.py b/boto/ec2/cloudwatch/dimension.py
new file mode 100644
index 0000000..42c8a88
--- /dev/null
+++ b/boto/ec2/cloudwatch/dimension.py
@@ -0,0 +1,38 @@
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish, dis-
+# tribute, sublicense, and/or sell copies of the Software, and to permit
+# persons to whom the Software is furnished to do so, subject to the fol-
+# lowing conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
+# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
+# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+#
+
+class Dimension(dict):
+
+    def startElement(self, name, attrs, connection):
+        pass
+
+    def endElement(self, name, value, connection):
+        if name == 'Name':
+            self._name = value
+        elif name == 'Value':
+            if self._name in self:
+                self[self._name].append(value)
+            else:
+                self[self._name] = [value]
+        else:
+            setattr(self, name, value)
+
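The relocated Dimension class is a plain dict that the XML handler fills with one list of values per dimension name; a minimal sketch of that accumulation, calling the handler methods directly with placeholder instance IDs:

    from boto.ec2.cloudwatch.dimension import Dimension

    d = Dimension()
    d.endElement('Name', 'InstanceId', None)
    d.endElement('Value', 'i-12345678', None)
    d.endElement('Name', 'InstanceId', None)
    d.endElement('Value', 'i-87654321', None)
    # d == {'InstanceId': ['i-12345678', 'i-87654321']}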
diff --git a/boto/ec2/cloudwatch/metric.py b/boto/ec2/cloudwatch/metric.py
index cda02d8..9c19b94 100644
--- a/boto/ec2/cloudwatch/metric.py
+++ b/boto/ec2/cloudwatch/metric.py
@@ -1,4 +1,6 @@
-# Copyright (c) 2006-2009 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.
+# All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -21,22 +23,8 @@
 #
 
 from boto.ec2.cloudwatch.alarm import MetricAlarm
+from boto.ec2.cloudwatch.dimension import Dimension
 
-class Dimension(dict):
-
-    def startElement(self, name, attrs, connection):
-        pass
-
-    def endElement(self, name, value, connection):
-        if name == 'Name':
-            self._name = value
-        elif name == 'Value':
-            if self._name in self:
-                self[self._name].append(value)
-            else:
-                self[self._name] = [value]
-        else:
-            setattr(self, name, value)
 
 class Metric(object):
 
@@ -72,6 +60,46 @@
             setattr(self, name, value)
 
     def query(self, start_time, end_time, statistics, unit=None, period=60):
+        """
+        :type start_time: datetime
+        :param start_time: The time stamp to use for determining the
+            first datapoint to return. The value specified is
+            inclusive; results include datapoints with the time stamp
+            specified.
+
+        :type end_time: datetime
+        :param end_time: The time stamp to use for determining the
+            last datapoint to return. The value specified is
+            exclusive; results will include datapoints up to the time
+            stamp specified.
+
+        :type statistics: list
+        :param statistics: A list of statistic names. Valid values:
+            Average | Sum | SampleCount | Maximum | Minimum
+
+        :type dimensions: dict
+        :param dimensions: A dictionary of dimension key/values where
+                           the key is the dimension name and the value
+                           is either a scalar value or an iterator
+                           of values to be associated with that
+                           dimension.
+
+        :type unit: string
+        :param unit: The unit for the metric.  Valid values are:
+            Seconds | Microseconds | Milliseconds | Bytes | Kilobytes |
+            Megabytes | Gigabytes | Terabytes | Bits | Kilobits |
+            Megabits | Gigabits | Terabits | Percent | Count |
+            Bytes/Second | Kilobytes/Second | Megabytes/Second |
+            Gigabytes/Second | Terabytes/Second | Bits/Second |
+            Kilobits/Second | Megabits/Second | Gigabits/Second |
+            Terabits/Second | Count/Second | None
+
+        :type period: integer
+        :param period: The granularity, in seconds, of the returned datapoints.
+            Period must be at least 60 seconds and must be a multiple
+            of 60. The default value is 60.
+
+        """
         if not isinstance(statistics, list):
             statistics = [statistics]
         return self.connection.get_metric_statistics(period,
@@ -88,6 +116,21 @@
                      statistic, enabled=True, description=None,
                      dimensions=None, alarm_actions=None, ok_actions=None,
                      insufficient_data_actions=None, unit=None):
+        """
+        Creates or updates an alarm and associates it with this metric.
+        Optionally, this operation can associate one or more
+        Amazon Simple Notification Service resources with the alarm.
+
+        When this operation creates an alarm, the alarm state is immediately
+        set to INSUFFICIENT_DATA. The alarm is evaluated and its StateValue is
+        set appropriately. Any actions associated with the StateValue are then
+        executed.
+
+        When updating an existing alarm, its StateValue is left unchanged.
+
+        The keyword arguments mirror the attributes of the
+        :class:`boto.ec2.cloudwatch.alarm.MetricAlarm` that is created.
+        """
         if not dimensions:
             dimensions = self.dimensions
         alarm = MetricAlarm(self.connection, name, self.name,
@@ -101,12 +144,32 @@
 
     def describe_alarms(self, period=None, statistic=None,
                         dimensions=None, unit=None):
+        """
+        Retrieves all alarms for this metric. Specify a statistic, period,
+        or unit to filter the set of alarms further.
+
+        :type period: int
+        :param period: The period in seconds over which the statistic
+            is applied.
+
+        :type statistic: string
+        :param statistic: The statistic for the metric.
+
+        :param dimensions: A dictionary containing name/value
+            pairs that will be used to filter the results.  The key in
+            the dictionary is the name of a Dimension.  The value in
+            the dictionary is either a scalar value of that Dimension
+            name that you want to filter on, a list of values to
+            filter on or None if you want all metrics with that
+            Dimension name.
+
+        :type unit: string
+
+        :rtype: list
+        """
         return self.connection.describe_alarms_for_metric(self.name,
                                                           self.namespace,
                                                           period,
                                                           statistic,
                                                           dimensions,
                                                           unit)
-
-
-    
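A sketch tying the documented Metric methods together, assuming the CloudWatchConnection `cw` from the earlier sketch; the instance ID, threshold, and metric choice are placeholders.

    import datetime

    metrics = cw.list_metrics(metric_name='CPUUtilization',
                              dimensions={'InstanceId': 'i-12345678'})
    m = metrics[0]

    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=1)
    datapoints = m.query(start, end, 'Average', unit='Percent', period=300)

    # create_alarm() proxies to the connection using this metric's
    # name, namespace and dimensions.
    m.create_alarm(name='high-cpu', comparison='>', threshold=80.0,
                   period=300, evaluation_periods=2, statistic='Average')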
diff --git a/boto/ec2/connection.py b/boto/ec2/connection.py
index f94f7f2..029c796 100644
--- a/boto/ec2/connection.py
+++ b/boto/ec2/connection.py
@@ -1,5 +1,6 @@
-# Copyright (c) 2006-2010 Mitch Garnaat http://garnaat.org/
+# Copyright (c) 2006-2012 Mitch Garnaat http://garnaat.org/
 # Copyright (c) 2010, Eucalyptus Systems, Inc.
+# Copyright (c) 2012 Amazon.com, Inc. or its affiliates.  All Rights Reserved
 #
 # Permission is hereby granted, free of charge, to any person obtaining a
 # copy of this software and associated documentation files (the
@@ -28,6 +29,7 @@
 import warnings
 from datetime import datetime
 from datetime import timedelta
+
 import boto
 from boto.connection import AWSQueryConnection
 from boto.resultset import ResultSet
@@ -36,7 +38,7 @@
 from boto.ec2.instance import ConsoleOutput, InstanceAttribute
 from boto.ec2.keypair import KeyPair
 from boto.ec2.address import Address
-from boto.ec2.volume import Volume
+from boto.ec2.volume import Volume, VolumeAttribute
 from boto.ec2.snapshot import Snapshot
 from boto.ec2.snapshot import SnapshotAttribute
 from boto.ec2.zone import Zone
@@ -45,22 +47,27 @@
 from boto.ec2.instanceinfo import InstanceInfo
 from boto.ec2.reservedinstance import ReservedInstancesOffering
 from boto.ec2.reservedinstance import ReservedInstance
+from boto.ec2.reservedinstance import ReservedInstanceListing
 from boto.ec2.spotinstancerequest import SpotInstanceRequest
 from boto.ec2.spotpricehistory import SpotPriceHistory
 from boto.